Calculation of the number of bits required for the estimation of the bit error ratio
NASA Astrophysics Data System (ADS)
Almeida, Álvaro J.; Silva, Nuno A.; Muga, Nelson J.; André, Paulo S.; Pinto, Armando N.
2014-08-01
We present a calculation of the number of bits that must be received in a communications system in order to achieve a given confidence level. The calculation assumes a binomial distribution function for the errors. The function is evaluated numerically and the results are compared with those obtained from Poissonian and Gaussian approximations. The performance in terms of the signal-to-noise ratio is also studied. We conclude that for higher numbers of detected errors the use of approximations allows faster and more efficient calculations, without loss of accuracy.
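A minimal sketch of the exact binomial computation described above (the function names and the search strategy are illustrative, not taken from the paper): given a target BER and a confidence level, find the smallest number of received bits that supports the claim.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), evaluated directly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def bits_required(ber, confidence=0.95, max_errors=0):
    """Smallest N such that observing at most `max_errors` errors in N
    bits has probability <= 1 - confidence when the true BER is `ber`;
    seeing that few errors then rules out a BER that high at the given
    confidence level."""
    n = 1
    while binom_cdf(max_errors, n, ber) > 1 - confidence:
        n *= 2                      # coarse doubling search
    lo, hi = n // 2, n              # binary-search the exact threshold
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if binom_cdf(max_errors, mid, ber) > 1 - confidence:
            lo = mid
        else:
            hi = mid
    return hi
```

For zero allowed errors this reduces to N >= ln(1 - C)/ln(1 - p), roughly 3/p at 95% confidence, which is the Poisson-style shortcut the abstract compares against.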
Reading boundless error-free bits using a single photon
NASA Astrophysics Data System (ADS)
Guha, Saikat; Shapiro, Jeffrey H.
2013-06-01
We address the problem of how efficiently information can be encoded into and read out reliably from a passive reflective surface that encodes classical data by modulating the amplitude and phase of incident light. We show that nature imposes no fundamental upper limit to the number of bits that can be read per expended probe photon and demonstrate the quantum-information-theoretic trade-offs between the photon efficiency (bits per photon) and the encoding efficiency (bits per pixel) of optical reading. We show that with a coherent-state (ideal laser) source, an on-off (amplitude-modulation) pixel encoding, and shot-noise-limited direct detection (an overly optimistic model for commercial CD and DVD drives), the highest photon efficiency achievable in principle is about 0.5 bits read per transmitted photon. We then show that a coherent-state probe can read unlimited bits per photon when the receiver is allowed to make joint (inseparable) measurements on the reflected light from a large block of phase-modulated memory pixels. Finally, we show an example of a spatially entangled nonclassical light probe and a receiver design—constructible using a single-photon source, beam splitters, and single-photon detectors—that can in principle read any number of error-free bits of information. The probe is a single photon prepared in a uniform coherent superposition of multiple orthogonal spatial modes, i.e., a W state. The code and joint-detection receiver complexity required by a coherent-state transmitter to achieve comparable photon efficiency performance is shown to be much higher in comparison to that required by the W-state transceiver, although this advantage rapidly disappears with increasing loss in the system.
Bit error rate measurement above and below bit rate tracking threshold
NASA Technical Reports Server (NTRS)
Kobayaski, H. S.; Fowler, J.; Kurple, W. (Inventor)
1978-01-01
Bit error rate is measured by sending a pseudo-random noise (PRN) code test signal simulating digital data through digital equipment to be tested. An incoming signal representing the response of the equipment being tested, together with any added noise, is received and tracked by being compared with a locally generated PRN code. Once the locally generated PRN code matches the incoming signal a tracking lock is obtained. The incoming signal is then integrated and compared bit-by-bit against the locally generated PRN code and differences between bits being compared are counted as bit errors.
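The lock-and-compare procedure can be sketched in miniature (a 7-stage LFSR stands in for the actual PRN generator; the polynomial choice and names are our assumptions):

```python
def prn_bits(n, state=0b1010101):
    """PN sequence from a 7-stage Fibonacci LFSR, x^7 + x^6 + 1
    (primitive, so the sequence period is 2^7 - 1 = 127)."""
    out = []
    for _ in range(n):
        out.append((state >> 6) & 1)             # output the MSB
        fb = ((state >> 6) ^ (state >> 5)) & 1   # taps at stages 7 and 6
        state = ((state << 1) | fb) & 0x7F
    return out

def measure_ber(received, reference):
    """Bit-by-bit comparison against the locally generated PRN code,
    counting mismatches as bit errors (assumes tracking lock)."""
    errors = sum(r != t for r, t in zip(received, reference))
    return errors / len(reference)
```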
Precise accounting of bit errors in floating-point computations
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2009-08-01
Floating-point computation generates errors at the bit level through four processes, namely, overflow, underflow, truncation, and rounding. Overflow and underflow can be detected electronically, and represent systematic errors that are not of interest in this study. Truncation occurs during shifting toward the least-significant bit (herein called right-shifting), and rounding error occurs at the least significant bit. Such errors are not easy to track precisely using published means. Statistical error propagation theory typically yields conservative estimates that are grossly inadequate for deep computational cascades. Forward error analysis theory developed for image and signal processing or matrix operations can yield a more realistic typical case, but the error of the estimate tends to be high in relation to the estimated error. In this paper, we discuss emerging technology for forward error analysis, which allows an algorithm designer to precisely estimate the output error of a given operation within a computational cascade, under a prespecified set of constraints on input error and computational precision. This technique, called bit accounting, precisely tracks the number of rounding and truncation errors in each bit position of interest to the algorithm designer. Because all errors associated with specific bit positions are tracked, and because integer addition only is involved in error estimation, the error of the estimate is zero. The technique of bit accounting is evaluated for its utility in image and signal processing. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm being analyzed, and its error estimation algorithm. Because of the significant overhead involved in error representation, it is shown that bit accounting is less useful for real-time error estimation, but is well suited to analysis in support of algorithm design.
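A toy sketch of the bookkeeping idea (the class and method names are our illustration, not the paper's notation): each tracked value carries integer counters of rounding/truncation events per bit position, and because the counters combine by integer addition alone, the error-count estimate itself is exact.

```python
from collections import Counter

class TrackedValue:
    """A value plus per-bit-position counters of rounding and
    truncation events (the 'bit accounting' bookkeeping)."""
    def __init__(self, value, events=None):
        self.value = value
        self.events = Counter(events or {})

    def add(self, other, lsb_pos):
        """Fixed-point addition: the event counters merge by integer
        addition, and one rounding event is charged at the result's
        least-significant bit position."""
        merged = self.events + other.events
        merged[lsb_pos] += 1
        return TrackedValue(self.value + other.value, merged)
```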
Approximation of Bit Error Rates in Digital Communications
2007-06-01
Defence Science and Technology Organisation, report DSTO-TN-0761. This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase shift keying.
Using Bit Errors To Diagnose Fiber-Optic Links
NASA Technical Reports Server (NTRS)
Bergman, L. A.; Hartmayer, R.; Marelid, S.
1989-01-01
Technique for diagnosis of fiber-optic digital communication link in local-area network of computers based on measurement of bit-error rates. Variable optical attenuator inserted in optical fiber to vary power of received signal. Bit-error rate depends on ratio of peak signal power to root-mean-square noise in receiver. For optimum measurements, one selects bit-error rate between 10^-8 and 10^-4. Greater rates result in low accuracy in determination of signal-to-noise ratios, while lesser rates require impractically long measurement times.
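The time side of that trade-off is easy to quantify: expected measurement time grows inversely with BER. A rule-of-thumb sketch (the 100-error target below is a common convention, not from this record) shows why rates below 10^-8 become impractical:

```python
def measurement_time_s(ber, bit_rate_hz, target_errors=100):
    """Expected seconds to accumulate `target_errors` bit errors,
    a common rule of thumb for a statistically sound BER estimate."""
    return target_errors / (ber * bit_rate_hz)
```

At 1 Mb/s, a BER of 10^-4 needs about a second of data, while 10^-9 needs over a day.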
Theoretical Accuracy for ESTL Bit Error Rate Tests
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin
1998-01-01
"Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.
Approximate Minimum Bit Error Rate Equalization for Fading Channels
NASA Astrophysics Data System (ADS)
Kovacs, Lorant; Levendovszky, Janos; Olah, Andras; Treplan, Gergely
2010-12-01
A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real-time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. From the performance analysis one can infer that the new equalizer is capable of efficient channel equalization and maintaining a relatively low bit error probability in the case of channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.
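The core trick, replacing an exponential-size summation by its few dominant terms, can be sketched generically (term generation and the choice of k here are illustrative; the paper's contribution is selecting the dominant terms in real time):

```python
def dominant_term_sum(terms, k):
    """Approximate sum(terms) by its k largest-magnitude members,
    the idea behind the fast BER-gradient approximation."""
    return sum(sorted(terms, key=abs, reverse=True)[:k])
```

For exponentially decaying terms, the truncation error is on the order of the first neglected term.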
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo-random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
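A sketch of the transfer-function-style union bound on per-position bit error probability (the weight-spectrum values in the test are hypothetical; a real bound uses the code's effective free distances and bit-weight coefficients):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bit_error_bound(bit_weights, rate, ebno_db):
    """Union bound P_b <= sum_d B_d * Q(sqrt(2*d*R*Eb/N0)) over an
    AWGN channel; `bit_weights` maps distance d -> coefficient B_d."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(b * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, b in bit_weights.items())
```

A larger effective free distance for an input position drives its individual bound down fastest, which is the disparity the paper quantifies.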
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel
NASA Astrophysics Data System (ADS)
Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong
2013-12-01
T-APPM combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation) and has broad application prospects in space optical communication. Set partitioning in the standard T-APPM algorithm has optimal performance in a multi-carrier system, but whether it is also optimal in APPM, which is a single-carrier system, is unknown. To address this problem, we first study the atmospheric channel model under weak turbulence; we then propose a modified T-APPM algorithm which, compared to the standard T-APPM algorithm, uses Gray-code mapping instead of set-partitioning mapping; finally, we simulate both algorithms with the Monte Carlo method. Simulation results show that, at a bit error rate of 10^-4, the modified T-APPM algorithm achieves a 0.4 dB gain in SNR, effectively improving the system's error performance.
Bit-Error-Rate Performance of a Gigabit Ethernet O-CDMA Technology Demonstrator (TD)
Hernandez, V J; Mendez, A J; Bennett, C V; Lennon, W J
2004-07-09
An O-CDMA TD based on 2-D (wavelength/time) codes is described, with bit-error-rate (BER) and eye-diagram measurements given for eight users. Simulations indicate that the TD can support 32 asynchronous users.
Threshold-Based Bit Error Rate for Stopping Iterative Turbo Decoding in a Varying SNR Environment
NASA Astrophysics Data System (ADS)
Mohamad, Roslina; Harun, Harlisya; Mokhtar, Makhfudzah; Adnan, Wan Azizun Wan; Dimyati, Kaharudin
2017-01-01
Online bit error rate (BER) estimation (OBE) has been used as a stopping criterion for iterative turbo decoding. However, the stopping criteria only work at high signal-to-noise ratios (SNRs), and fail to terminate early at low SNRs, which adds iterations and increases computational complexity. The failure of the stopping criteria is caused by an unsuitable BER threshold, which is obtained by estimating the expected BER performance at high SNRs; this threshold does not indicate correct termination according to convergence and non-convergence outputs (CNCO). Hence, in this paper, a threshold computation based on the BER of the CNCO is proposed for an OBE stopping criterion (OBEsc). The results show that OBEsc is capable of terminating early in a varying SNR environment. The optimum number of iterations achieved by OBEsc allows large savings in the number of decoding iterations and decreases the delay of iterative turbo decoding.
Bit error rate investigation of spin-transfer-switched magnetic tunnel junctions
NASA Astrophysics Data System (ADS)
Wang, Zihui; Zhou, Yuchen; Zhang, Jing; Huai, Yiming
2012-10-01
A method is developed to enable fast bit error rate (BER) characterization of spin-transfer-torque magnetic random access memory magnetic tunnel junction (MTJ) cells without integration with a complementary metal-oxide-semiconductor circuit. By utilizing the reflected signal from the devices under test, the measurement setup allows fast measurement of bit error rates at more than 10^6 write events per second. It is further shown that this method provides a time-domain capability to examine the MTJ resistance states during a switching event, which can assist write-error analysis in great detail. The BER of a set of spin-transfer-torque MTJ cells has been evaluated using this method, and bit-error-free operation (down to 10^-8) has been demonstrated for optimized in-plane MTJ cells.
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
Calculate bit error rate for digital radio signal transmission
NASA Astrophysics Data System (ADS)
Sandberg, Jorgen
1987-06-01
A method for estimating symbol error rate caused by imperfect transmission channels is proposed. The method relates the symbol error rate to peak-to-peak amplitude and phase ripple, maximum gain slope, and maximum group delay distortion. The performance degradation of QPSK, offset QPSK (OQPSK), and minimum shift keying (MSK) signals transmitted over a wideband channel exhibiting either sinusoidal amplitude or phase ripples is evaluated using the proposed method. The transmission channel model, a single filter whose transfer characteristic models the frequency response of the system, is described. Consideration is given to signal detection and system degradation. The calculations reveal that the QPSK-modulated carrier degrades less than the OQPSK and MSK carriers for peak-to-peak amplitude ripple values less than 6 dB and peak-to-peak phase ripple values less than 45 deg.
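For reference, the undistorted baseline against which such ripple-induced degradation is measured is the ideal coherent QPSK curve (a standard textbook formula, not specific to this record):

```python
import math

def qpsk_ber(ebno_db):
    """Ideal coherent QPSK bit error rate, P_b = Q(sqrt(2*Eb/N0)),
    which simplifies to 0.5*erfc(sqrt(Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))
```

Channel ripple shifts this curve to the right; the abstract's result is that the shift is smallest for QPSK among the three formats considered.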
NASA Astrophysics Data System (ADS)
Kavehrad, Mohsen; Sundberg, Carl-Erik W.
1987-04-01
Average bit error probabilities for M-ary quadrature amplitude modulation (MQAM) systems are evaluated using a truncated union bound to calculate an approximate upper bound on the average bit error probability. Coded BPSK and QPSK are studied in a dual-polarized channel with and without an interference compensator. Trellis-coded MQAM signals are also examined. A new technique, dual-channel polarization hopping, which provides diversity gains when applied to coded cross-coupled channels, is proposed. Average bit error probabilities for convolutionally coded QAM schemes in cross-coupled interference channels are derived. It is concluded that trellis-coded QAM schemes give larger coding gains in cross-coupled interference channels than in Gaussian noise, and that the choice of optimum code for the trellis-coded QAM scheme depends on the expected interference level.
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat; rather, they lie in a chaotic region. However, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data to predict future data under limited weight-quantization constraints. This helps to predict future information, providing better timely estimation for an intelligent control system. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as few as 4 bits of weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more weight-quantization bits are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
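The two quantizers compared above can be sketched as follows (the fixed-point range and bit layout are our assumptions): truncation here floors toward negative infinity, as in a two's-complement right shift, which is why its error is biased while round-off stays roughly symmetric around zero.

```python
def quantize(w, bits, mode="round"):
    """Quantize weight w to `bits`-bit fixed point covering [-1, 1).
    mode='round' rounds to nearest; mode='trunc' floors (biased)."""
    scale = 2 ** (bits - 1)
    x = w * scale
    q = round(x) if mode == "round" else int(x // 1)
    q = max(-scale, min(scale - 1, q))   # clamp to representable range
    return q / scale
```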
Bit error rate testing of a proof-of-concept model baseband processor
NASA Technical Reports Server (NTRS)
Stover, J. B.; Fujikawa, G.
1986-01-01
Bit-error-rate tests were performed on a proof-of-concept baseband processor. The BBP, which operates at an intermediate frequency in the C-Band, demodulates, demultiplexes, routes, remultiplexes, and remodulates digital message segments received from one ground station for retransmission to another. Test methods are discussed and test results are compared with the Contractor's test results.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
Detecting bit-flip errors in a logical qubit using stabilizer measurements
Ristè, D.; Poletto, S.; Huang, M.-Z.; Bruno, A.; Vesterinen, V.; Saira, O.-P.; DiCarlo, L.
2015-01-01
Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318
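The decoding logic behind the two parity (stabilizer) measurements can be sketched classically as follows. This is an analogue only: the actual stabilizers are non-demolition multi-qubit measurements that never read the data qubits directly.

```python
def syndrome(bits):
    """The two parities (b0 xor b1, b1 xor b2) measured by the
    Z1Z2 and Z2Z3 stabilizers of the 3-bit repetition code."""
    b0, b1, b2 = bits
    return (b0 ^ b1, b1 ^ b2)

def correct(bits):
    """Locate and undo a single bit flip from the syndrome alone."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return out
```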
NASA Astrophysics Data System (ADS)
Milić, Dejan N.; Đorđević, Goran T.
2013-01-01
In this paper, we study the effects of imperfect reference signal recovery on the bit error rate (BER) performance of dual-branch switch and stay combining receiver over Nakagami-m fading/gamma shadowing channels with arbitrary parameters. The average BER of quaternary phase shift keying is evaluated under the assumption that the reference carrier signal is extracted from the received modulated signal. We compute numerical results illustrating simultaneous influence of average signal-to-noise ratio per bit, fading severity, shadowing, phase-locked loop bandwidth-bit duration (BLTb) product, and switching threshold on BER performance. The effects of BLTb on receiver performance under different channel conditions are emphasized. Optimal switching threshold is determined which minimizes BER performance under given channel and receiver parameters.
Characterization of multiple-bit errors from single-ion tracks in integrated circuits
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Edmonds, L. D.; Smith, L. S.
1989-01-01
The spread of charge induced by an ion track in an integrated circuit and its subsequent collection at sensitive nodal junctions can cause multiple-bit errors. The authors have experimentally and analytically investigated this phenomenon using a 256-kb dynamic random-access memory (DRAM). The effects of different charge-transport mechanisms are illustrated, and two classes of ion-track multiple-bit error clusters are identified. It is demonstrated that ion tracks that hit a junction can affect the lateral spread of charge, depending on the nature of the pull-up load on the junction being hit. Ion tracks that do not hit a junction allow the nearly uninhibited lateral spread of charge.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic-ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic-ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
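The last software technique listed, encoding state variables to detect single-bit changes, can be sketched with a simple parity bit (the encoding choice is ours; any code with minimum distance 2 detects a single flip):

```python
def parity(v):
    """XOR of all bits of the non-negative integer v."""
    p = 0
    while v:
        p ^= v & 1
        v >>= 1
    return p

def protect(value):
    """Append an even-parity bit to a state variable."""
    return (value << 1) | parity(value)

def check(word):
    """True iff the stored word still has even parity; any single
    bit flip (SEU) makes this False."""
    return parity(word) == 0
```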
Rate-distortion optimal video transport over IP allowing packets with bit errors.
Harmanci, Oztan; Tekalp, A Murat
2007-05-01
We propose new models and methods for rate-distortion (RD) optimal video delivery over IP, when packets with bit errors are also delivered. In particular, we propose RD optimal methods for slicing and unequal error protection (UEP) of packets over IP allowing transmission of packets with bit errors. The proposed framework can be employed in a classical independent-layer transport model for optimal slicing, as well as in a cross-layer transport model for optimal slicing and UEP, where the forward error correction (FEC) coding is performed at the link layer, but the application controls the FEC code rate with the constraint that a given IP packet is subject to constant channel protection. The proposed method uses a novel dynamic programming approach to determine the optimal slicing and UEP configuration for each video frame in a practical manner that is compliant with the AVC/H.264 standard. We also propose new rate and distortion estimation techniques at the encoder side in order to efficiently evaluate the objective function for a slice configuration. The cross-layer formulation option effectively determines which regions of a frame should be protected better; hence, it can be considered as a spatial UEP scheme. We successfully demonstrate, by means of experimental results, that each component of the proposed system provides significant gains, up to 2.0 dB, compared to competitive methods.
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.
1990-01-01
Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.
Intelligibility Performance of Narrowband Linear Predictive Vocoders in the Presence of Bit Errors
1977-11-01
[Abstract unavailable; the indexed text is table-of-contents residue for Appendix C, "Intelligibility Data for Sustention Feature": DRT test words for sustention; sustention intelligibility scores for the LPC and PLPC processors; analysis-of-variance summaries for sustention (total, voiced, unvoiced); and cumulative distributions of DRT scores for LPC-10 at 2400 BPS with bit errors and for PLPC.]
Theoretical Bit Error Rate Performance of the Kalman Filter Excisor for FM Interference
1992-12-01
A digitally implemented, phase-locked Kalman filter is shown to be quasi-optimum for demodulating FM-type interference. Since the interference is presumed to be stronger than the signal or the noise, the Kalman filter locks onto the interference and allows ... By Brian Kominchuk, Defence Research.
Digitally modulated bit error rate measurement system for microwave component evaluation
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.; Budinger, James M.
1989-01-01
The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.
Bit error rate testing of fiber optic data links for MMIC-based phased array antennas
NASA Astrophysics Data System (ADS)
Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.
1990-06-01
The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
Bit error rate testing of fiber optic data links for MMIC-based phased array antennas
NASA Technical Reports Server (NTRS)
Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.
1990-01-01
The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
Bit error rate tester using fast parallel generation of linear recurring sequences
Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.
2003-05-06
A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
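The companion-matrix decimation step can be sketched in software. This is an illustrative example, not the patented circuit: the degree-4 primitive polynomial x^4 + x + 1 and the jump distance are arbitrary example choices.

```python
# Illustrative sketch of the decimation idea (example polynomial, not
# the patent's): advance an LFSR state over GF(2) with a companion
# matrix M, and jump k*n steps at once with the matrix power M^(k*n).

def companion(coeffs):
    """Companion matrix over GF(2) of x^n + c[n-1]x^(n-1) + ... + c[0],
    with coeffs = [c0, ..., c(n-1)]; the last column holds the taps."""
    n = len(coeffs)
    M = [[0] * n for _ in range(n)]
    for i in range(1, n):
        M[i][i - 1] = 1          # subdiagonal: the shift
    for i in range(n):
        M[i][n - 1] = coeffs[i]  # feedback column
    return M

def mat_vec(M, v):
    return [sum(M[i][j] & v[j] for j in range(len(v))) & 1
            for i in range(len(M))]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) & 1
             for j in range(n)] for i in range(n)]

def mat_pow(M, e):
    """Square-and-multiply matrix power over GF(2)."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

# x^4 + x + 1 is primitive, so the state orbit has period 2^4 - 1 = 15
M = companion([1, 1, 0, 0])
D = mat_pow(M, 6)  # e.g. k = 3 generators, n = 2 bits each: jump 6 steps
```

A sparse D translates directly into a shallow XOR network, which is the propagation-delay advantage the patent optimizes for.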
Noise and measurement errors in a practical two-state quantum bit commitment protocol
NASA Astrophysics Data System (ADS)
Loura, Ricardo; Almeida, Álvaro J.; André, Paulo S.; Pinto, Armando N.; Mateus, Paulo; Paunković, Nikola
2014-05-01
We present a two-state practical quantum bit commitment protocol, the security of which is based on the current technological limitations, namely the nonexistence of either stable long-term quantum memories or nondemolition measurements. For an optical realization of the protocol, we model the errors, which occur due to the noise and equipment (source, fibers, and detectors) imperfections, accumulated during emission, transmission, and measurement of photons. The optical part is modeled as a combination of a depolarizing channel (white noise), unitary evolution (e.g., systematic rotation of the polarization axis of photons), and two other basis-dependent channels, namely the phase- and bit-flip channels. We analyze quantitatively the effects of noise using two common information-theoretic measures of probability distribution distinguishability: the fidelity and the relative entropy. In particular, we discuss the optimal cheating strategy and show that it is always advantageous for a cheating agent to add some amount of white noise—the particular effect not being present in standard quantum security protocols. We also analyze the protocol's security when the use of (im)perfect nondemolition measurements and noisy or bounded quantum memories is allowed. Finally, we discuss errors occurring due to a finite detector efficiency, dark counts, and imperfect single-photon sources, and we show that the effects are the same as those of standard quantum cryptography.
NASA Astrophysics Data System (ADS)
Hao, Chen; Liyuan, Liu; Dongmei, Li; Chun, Zhang; Zhihua, Wang
2010-10-01
A 12-bit intrinsic accuracy digital-to-analog converter integrated into standard digital 0.18 μm CMOS technology is proposed. It is based on a current steering segmented 6+6 architecture and requires no calibration. By dividing one most significant bit unary source into 16 elements located in 16 separated regions of the array, the linear gradient errors and quadratic errors can be averaged and eliminated effectively. A novel static performance testing method is proposed. The measured differential nonlinearity and integral nonlinearity are 0.42 and 0.39 least significant bit, respectively. For 12-bit resolution, the converter reaches an update rate of 100 MS/s. The chip operates from a single 1.8 V voltage supply, and the core die area is 0.28 mm².
SITE project. Phase 1: Continuous data bit-error-rate testing
NASA Technical Reports Server (NTRS)
Fujikawa, Gene; Kerczewski, Robert J.
1992-01-01
The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.
NASA Technical Reports Server (NTRS)
Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb
2004-01-01
A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.
NASA Astrophysics Data System (ADS)
Tithi, F. H.; Majumder, S. P.
2017-03-01
Analysis is carried out for a single-span wavelength division multiplexing (WDM) transmission system with distributed Raman amplification to find the effect of amplifier-induced crosstalk on the bit error rate (BER) for different system parameters. The results are evaluated in terms of the crosstalk power induced in a WDM channel due to Raman amplification, the optical signal-to-crosstalk ratio (OSCR), and the BER at any distance, for different pump powers and numbers of WDM channels. The results show that the WDM system suffers a power penalty due to crosstalk, which becomes significant at higher pump power, larger channel separation, and a larger number of WDM channels. At a BER of 10^-9, the power penalty is 8.7 dB and 10.5 dB for a length of 180 km and N = 32 and 64 WDM channels, respectively, when the pump power is 20 mW, and it increases at higher pump power. Analytical results are validated by simulation.
Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications
NASA Technical Reports Server (NTRS)
Shalkhauser, Kurt A.
1987-01-01
Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.
Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications
NASA Technical Reports Server (NTRS)
Shalkhauser, Kurt A.; Fujikawa, Gene
1986-01-01
Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.
Biological Gender Differences in Students' Errors on Mathematics Achievement Tests
ERIC Educational Resources Information Center
Stewart, Christie; Root, Melissa M.; Koriakin, Taylor; Choi, Dowon; Luria, Sarah R.; Bray, Melissa A.; Sassu, Kari; Maykel, Cheryl; O'Rourke, Patricia; Courville, Troy
2017-01-01
This study investigated developmental gender differences in mathematics achievement, using the child and adolescent portion (ages 6-19 years) of the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants were divided into two age categories: 6 to 11 and 12 to 19. Error categories within the Math Concepts & Applications…
2009-03-10
AFIT/GE/ENG/09-01, "Bit-Error-Rate-Minimizing Channel Shortening," thesis, School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training Command, Wright-Patterson Air Force Base, Ohio. The views expressed do not reflect the official policy of the United States Air Force, Department of Defense, or the United States Government.
1983-12-01
Figure 34: Comparison of Regression Lines Estimating Scores for the Sustention Intelligibility Feature vs Bit Error Rate for the DOD LPC-10 Vocoder. In both conditions, the feature "sibilation" obtained the highest scores and the features "graveness" and "sustention" received the poorest scores, but all were under much greater impairment in the noise environment. Details of the variations in scores for sustention are shown in Figure 34.
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
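A minimal numerical sketch of the idea, with synthetic |S21| samples standing in for measured S-parameters. The specific mapping BER = Q(mean/sigma) is an illustrative simplification of "calculated using the normal Gaussian function," not the paper's exact procedure.

```python
# Hedged sketch: estimate a BER from the mean and standard deviation
# of repeated S-parameter measurements, treating the noise as Gaussian.
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

random.seed(1)
# synthetic repeated measurements of an S-parameter magnitude
s21 = [1.0 + random.gauss(0.0, 0.1) for _ in range(10000)]
mean = sum(s21) / len(s21)
std = math.sqrt(sum((v - mean) ** 2 for v in s21) / (len(s21) - 1))
ber = q_function(mean / std)   # Gaussian-noise BER estimate
```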
NASA Astrophysics Data System (ADS)
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND flash memory that requires 8-bit correction with a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still-camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required, and the reliability of the SSD is improved after manufacturing without cost penalty. Compared with a conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation: during most of the SSD's lifetime, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, or 2 KByte is used, realizing 98% lower power consumption, while at the end of the SSD's life a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed: the latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
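The codeword-escalation policy can be sketched as a simple lookup. The raw-BER thresholds below are invented for illustration; the paper keys the transition to measured error counts or write/erase cycles.

```python
# Illustrative sketch (threshold values invented) of the "dynamic
# codeword transition" idea: grow the ECC codeword as the measured raw
# bit error rate crosses thresholds, keeping the parity-to-data ratio
# constant in every codeword size.
CODEWORDS = [512, 1024, 2048, 4096, 8192, 16384, 32768]  # bytes

def select_codeword(raw_ber, thresholds=(1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2)):
    """Return the smallest codeword whose (hypothetical) acceptable
    raw-BER threshold covers the observed raw_ber."""
    for size, limit in zip(CODEWORDS, thresholds):
        if raw_ber <= limit:
            return size
    return CODEWORDS[-1]   # worst case: strongest ECC, 32 KByte
```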
Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate
NASA Astrophysics Data System (ADS)
Chau, H. F.
2002-12-01
A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5-0.1(5)≈27.6%, thereby making it the most error resistant scheme known to date.
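The quoted tolerance is straightforward arithmetic; evaluating 0.5 - 0.1*sqrt(5) confirms the approximately 27.6% figure.

```python
# Numerical check of the error-rate threshold 0.5 - 0.1*sqrt(5)
# below which the scheme can still distill a secure key.
import math

threshold = 0.5 - math.sqrt(5.0) / 10.0
print(round(100 * threshold, 1))  # -> 27.6 (percent)
```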
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.
NASA Astrophysics Data System (ADS)
Liang, Bin; Gunawan, Erry; Law, Choi Look; Teh, Kah Chan
Analytical expressions based on the Gauss-Chebyshev quadrature (GCQ) rule technique are derived to evaluate the bit-error rate (BER) of time-hopping pulse-position modulation (TH-PPM) ultra-wideband (UWB) systems under a Nakagami-m fading channel. The analyses are validated by simulation results and used to assess the accuracy of the commonly used Gaussian approximation (GA) method. The influence of the fading severity on the BER performance of the TH-PPM UWB system is investigated.
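A hedged sketch of the GCQ approach, using BPSK rather than TH-PPM as the test case: BPSK over Nakagami-m fading has a convenient MGF-form BER integral, and a closed-form Rayleigh (m = 1) special case to check the quadrature against.

```python
# Gauss-Chebyshev quadrature (GCQ) evaluation of an average BER over
# Nakagami-m fading.  The BPSK example (not the paper's TH-PPM system)
# uses the MGF form
#   Pb = (1/pi) * Int_0^{pi/2} (m sin^2 t / (m sin^2 t + snr))^m dt,
# which GCQ reduces to a short sum over Chebyshev nodes.
import math

def bpsk_ber_nakagami_gcq(snr_avg, m, N=64):
    """N-node Gauss-Chebyshev approximation of the MGF-form BER."""
    total = 0.0
    for k in range(1, N + 1):
        s = math.sin((2 * k - 1) * math.pi / (2 * N)) ** 2  # sin^2 at node
        total += (m * s / (m * s + snr_avg)) ** m
    return total / (2 * N)

def bpsk_ber_rayleigh_exact(snr_avg):
    """Closed form for the m = 1 (Rayleigh) special case."""
    return 0.5 * (1.0 - math.sqrt(snr_avg / (1.0 + snr_avg)))
```

For m = 1 the rule reproduces the Rayleigh closed form to high accuracy with a few dozen nodes, and larger m (milder fading) gives a lower average BER, as expected.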
Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe
2008-11-15
Based on weak-fluctuation theory and the beam-wander model, the bit-error rate of a ground-to-satellite laser uplink communication system is analyzed and compared with the case in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, the optimum divergence angle and transmitter beam radius for a communication system are studied. Numerical results show that both increase with the total link margin and the transmitted wavelength. This work can benefit the design of ground-to-satellite laser uplink communication systems.
NASA Astrophysics Data System (ADS)
Nazrul Islam, A. K. M.; Majumder, S. P.
2015-06-01
Analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a free-space optical (FSO) link with multiple receivers using equal gain combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BER of SISO and SIMO FSO links is analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of the pointing jitter parameters and the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
NASA Astrophysics Data System (ADS)
Krishnan, Prabu; Sriram Kumar, D.
2014-12-01
Free-space optical (FSO) communication is emerging as an attractive alternative for working around connectivity problems. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase-shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. System models are developed for single-input single-output (SISO-FSO) and single-input multiple-output (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of the Meijer G-function.
Pseudo-random-bit-sequence phase modulation for reduced errors in a fiber optic gyroscope.
Chamoun, Jacob; Digonnet, Michel J F
2016-12-15
Low noise and drift in a laser-driven fiber optic gyroscope (FOG) are demonstrated by interrogating the sensor with a low-coherence laser. The laser coherence was reduced by broadening its optical spectrum using an external electro-optic phase modulator driven by either a sinusoidal or a pseudo-random bit sequence (PRBS) waveform. The noise reduction measured in a FOG driven by a modulated laser agrees with the calculations based on the broadened laser spectrum. Using PRBS modulation, the linewidth of a laser was broadened from 10 MHz to more than 10 GHz, leading to a measured FOG noise of only 0.00073 deg/√h and a drift of 0.023 deg/h. To the best of our knowledge, these are the lowest noise and drift reported in a laser-driven FOG, and this noise is below the requirement for the inertial navigation of aircraft.
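A PRBS of the kind used to drive such a phase modulator can be generated with a short LFSR. This sketch uses the common PRBS-7 polynomial x^7 + x^6 + 1; the Letter does not specify its actual pattern length, so the choice here is an assumption for illustration.

```python
# PRBS-7 generator: 7-bit Fibonacci LFSR with feedback polynomial
# x^7 + x^6 + 1, a standard maximal-length choice.  The output is an
# m-sequence with period 2^7 - 1 = 127.
def prbs7(seed=0x7F, length=254):
    state = seed & 0x7F            # any nonzero 7-bit seed works
    out = []
    for _ in range(length):
        bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at bits 7 and 6
        state = ((state << 1) | bit) & 0x7F       # shift feedback in
        out.append(bit)
    return out
```

One period contains 64 ones and 63 zeros, the balance property that gives the broadened optical spectrum its flat, noise-like character.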
Extending the lifetime of a quantum bit with error correction in superconducting circuits
NASA Astrophysics Data System (ADS)
Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.
2016-08-01
Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.
Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.
2016-01-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.
NASA Technical Reports Server (NTRS)
Cox, Christina B.; Coney, Thom A.
1999-01-01
The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms-Adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
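The availability metric can be made concrete with a toy computation over logged BER samples; the 5E-7 threshold and 99.5% target are the figures stated above, while the sample log is invented.

```python
# Toy illustration of the BER-availability metric: the percentage of
# measurement intervals in which the link meets a BER of 5e-7 or
# better, compared against the 99.5% design specification.
def ber_availability(ber_samples, threshold=5e-7):
    """Percentage of intervals meeting the BER spec."""
    ok = sum(1 for b in ber_samples if b <= threshold)
    return 100.0 * ok / len(ber_samples)

# e.g. 995 clear-sky intervals and 5 deep-fade intervals -> 99.5%
log = [1e-8] * 995 + [1e-3] * 5
meets_spec = ber_availability(log) >= 99.5
```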
NASA Astrophysics Data System (ADS)
Schmidt-Nielsen, Astrid; Kallman, Howard J.
1987-11-01
The comprehension of narrowband digital speech with bit errors was tested by using a sentence verification task. The use of predicates that were either strongly or weakly related to the subjects (e.g., A toad has warts./ A toad has eyes.) varied the difficulty of the verification task. The test conditions included unprocessed and processed speech using a 2.4 kb/s (kilobits per second) linear predictive coding (LPC) voice processing algorithm with random bit error rates of 0 percent, 2 percent, and 5 percent. In general, response accuracy decreased and reaction time increased with LPC processing and with increasing bit error rates. Weakly related true sentences and strongly related false sentences were more difficult than their counterparts. Interactions between sentence type and speech processing conditions are discussed.
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Daugherty, Elaine S.; Kramarchuk, Ihor
1987-01-01
The performance of microwave systems and components for digital data transmission can be characterized by a plot of the bit-error rate as a function of the signal-to-noise ratio (or E_b/N_0). Methods for the efficient automated measurement of bit-error rates and signal-to-noise ratios, developed at NASA Lewis Research Center, are described. Noise measurement considerations and time requirements for measurement accuracy, as well as computer control and data processing methods, are discussed.
NASA Astrophysics Data System (ADS)
Elnoubi, Said M.
1990-06-01
The performance of constant envelope digital partial response continuous phase modulation (PRCPM) with two-bit differential detection and offset receiver diversity is theoretically analyzed in fast Rayleigh fading channels. A simple closed-form expression for the probability of error is derived and evaluated for cases of practical interest to researchers and designers of land mobile radio systems. It is shown that the dynamic bit error rate (BER) performance is considerably improved using the offset diversity scheme. Thus, many PRCPM signals having a compact power spectrum can be used in future digital mobile radio systems.
Blankertz, Benjamin; Dornhege, Guido; Schäfer, Christin; Krepki, Roman; Kohlmorgen, Jens; Müller, Klaus-Robert; Kunzmann, Volker; Losch, Florian; Curio, Gabriel
2003-06-01
Brain-computer interfaces (BCIs) involve two coupled adapting systems--the human subject and the computer. In developing our BCI, our goal was to minimize the need for subject training and to impose the major learning load on the computer. To this end, we use behavioral paradigms that exploit single-trial EEG potentials preceding voluntary finger movements. Here, we report recent results on the basic physiology of such premovement event-related potentials (ERP). 1) We predict the laterality of imminent left- versus right-hand finger movements in a natural keyboard typing condition and demonstrate that a single-trial classification based on the lateralized Bereitschaftspotential (BP) achieves good accuracies even at a pace as fast as 2 taps/s. Results for four out of eight subjects reached a peak information transfer rate of more than 15 b/min; the four other subjects reached 6-10 b/min. 2) We detect cerebral error potentials from single false-response trials in a forced-choice task, reflecting the subject's recognition of an erroneous response. Based on a specifically tailored classification procedure that limits the rate of false positives at, e.g., 2%, the algorithm manages to detect 85% of error trials in seven out of eight subjects. Thus, concatenating a primary single-trial BP-paradigm involving finger classification feedback with such secondary error detection could serve as an efficient online confirmation/correction tool for improvement of bit rates in a future BCI setting. As the present variant of the Berlin BCI is designed to achieve fast classifications in normally behaving subjects, it opens a new perspective for assistance of action control in time-critical behavioral contexts; the potential transfer to paralyzed patients will require further study.
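The b/min figures above are information-transfer rates. The Wolpaw formula is the usual way to convert classification accuracy and trial pace into bits per minute; whether the authors used exactly this formula is an assumption.

```python
# Wolpaw information-transfer rate for a BCI: bits per trial from the
# accuracy over n equiprobable classes, scaled by the trial pace.
import math

def wolpaw_itr(accuracy, n_classes, trials_per_min):
    """Bits per minute for a classifier with `accuracy` over
    `n_classes` equiprobable targets."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * trials_per_min
```

At the paper's fastest pace (2 taps/s, i.e. 120 binary trials per minute), a perfect classifier would yield 120 b/min and a chance-level one 0 b/min, bracketing the 6-15 b/min figures reported.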
ERIC Educational Resources Information Center
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study addresses long-standing questions in the field of writing instruction about the most effective ways to give feedback on errors in students' writing, by comparing the effects of error correction and error detection on the improvement of students' writing ability. To achieve this goal, 60 pre-intermediate English learners…
Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo
2016-01-01
We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which a single checking state is used. Assuming only that the checking state is a superposition of the other N source states, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may also be applied to other quantum information processing tasks. PMID:27452275
Achievement Error Differences of Students with Reading versus Math Disorders
ERIC Educational Resources Information Center
Avitia, Maria; DeBiase, Emily; Pagirsky, Matthew; Root, Melissa M.; Howell, Meiko; Pan, Xingyu; Knupp, Tawnya; Liu, Xiaochen
2017-01-01
The purpose of this study was to understand and compare the types of errors students with a specific learning disability in reading and/or writing (SLD-R/W) and those with a specific learning disability in math (SLD-M) made in the areas of reading, writing, language, and mathematics. Clinical samples were selected from the norming population of…
Yu, Changyuan; Zhang, Shaoliang; Kam, Pooi Yuen; Chen, Jian
2010-06-07
The bit-error rate (BER) expressions of 16-phase-shift keying (16-PSK) and 16-quadrature amplitude modulation (16-QAM) are analytically obtained in the presence of a phase error. By averaging over the statistics of the phase error, the performance penalty can be analytically examined as a function of the phase error variance. The phase error variances leading to a 1-dB signal-to-noise ratio per bit penalty at BER = 10^-4 are found to be 8.7 × 10^-2 rad^2, 1.2 × 10^-2 rad^2, 2.4 × 10^-3 rad^2, 6.0 × 10^-4 rad^2 and 2.3 × 10^-3 rad^2 for binary, quadrature, 8- and 16-PSK, and 16-QAM, respectively. With the knowledge of the allowable phase error variance, the corresponding laser linewidth tolerance can be predicted. We extend the phase error variance analysis of decision-aided maximum-likelihood carrier phase estimation in M-ary PSK to 16-QAM, and successfully predict the laser linewidth tolerance in different modulation formats, which agrees well with Monte Carlo simulations. Finally, approximate BER expressions for different modulation formats are introduced to allow a quick estimation of the BER performance as a function of the phase error variance. Further, the BER approximations give a lower bound on the laser linewidth requirements in M-ary PSK and 16-QAM. It is shown that, as far as laser linewidth tolerance is concerned, 16-QAM outperforms 16-PSK, which has the same spectral efficiency (SE), and has nearly the same performance as 8-PSK, which has lower SE. Thus, 16-QAM is a promising modulation format for high-SE coherent optical communications.
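The averaging step described above can be sketched numerically. The following is a hedged Monte Carlo illustration (assuming BPSK and a zero-mean Gaussian phase error; it does not reproduce the paper's closed-form expressions), averaging the conditional Q-function over the phase-error statistics:

```python
import math
import random

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(snr_per_bit, phase_var, trials=50_000, seed=1):
    """BPSK BER averaged over a zero-mean Gaussian phase error with
    variance phase_var (rad^2): E[ Q( sqrt(2*SNR) * cos(theta) ) ]."""
    rng = random.Random(seed)
    sigma = math.sqrt(phase_var)
    total = 0.0
    for _ in range(trials):
        theta = rng.gauss(0.0, sigma)
        # conditional BER given the phase error theta
        total += qfunc(math.sqrt(2.0 * snr_per_bit) * math.cos(theta))
    return total / trials
```

With zero phase variance this collapses to the textbook Q(sqrt(2*Eb/N0)); a nonzero variance always raises the average BER, which is the penalty the abstract quantifies.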
ERIC Educational Resources Information Center
Prewett, Peter N.; McCaffery, Lucy K.
1993-01-01
Examined relationship between Kaufman Brief Intelligence Test (K-BIT), Stanford-Binet, two-subtest short form, and Kaufman Test of Educational Achievement (K-TEA) with population of 75 academically referred students. K-BIT correlated significantly with Stanford-Binet and K-TEA Math, Reading, and Spelling scores. Results support use of K-BIT as…
ERIC Educational Resources Information Center
Root, Melissa M.; Marchis, Lavinia; White, Erica; Courville, Troy; Choi, Dowon; Bray, Melissa A.; Pan, Xingyu; Wayte, Jessica
2017-01-01
This study investigated the differences in error factor scores on the Kaufman Test of Educational Achievement-Third Edition between individuals with mild intellectual disabilities (Mild IDs), those with low achievement scores but average intelligence, and those with low intelligence but without a Mild ID diagnosis. The two control groups were…
NASA Astrophysics Data System (ADS)
Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang
2015-07-01
Based on space diversity reception, a binary phase-shift keying (BPSK) modulated free-space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For both independent and identically distributed and independent but non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for the maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
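The MRC-versus-EGC comparison can be illustrated with a minimal Monte Carlo sketch. For simplicity this substitutes i.i.d. Rayleigh branches for the Málaga model (an assumption made purely for illustration; all names and parameters below are not the paper's):

```python
import math
import random

def ber_dual_branch(snr, combiner="mrc", trials=200_000, seed=9):
    """BPSK BER with dual-branch diversity over i.i.d. Rayleigh fading.
    MRC weights each branch by its channel gain; EGC sums the co-phased
    branch outputs with equal weights."""
    rng = random.Random(seed)
    sigma = 1.0 / math.sqrt(2.0 * snr)   # real noise std for Eb/N0 = snr
    errors = 0
    for _ in range(trials):
        # Rayleigh amplitudes with unit mean-square gain per branch
        h = [abs(complex(rng.gauss(0.0, math.sqrt(0.5)),
                         rng.gauss(0.0, math.sqrt(0.5)))) for _ in range(2)]
        # transmit +1; branch outputs after carrier-phase removal
        r = [hk * 1.0 + rng.gauss(0.0, sigma) for hk in h]
        if combiner == "mrc":
            stat = sum(hk * rk for hk, rk in zip(h, r))
        else:  # "egc"
            stat = sum(r)
        errors += stat < 0.0
    return errors / trials
```

With a common random seed the two combiners see identical fading and noise, so the small but consistent MRC advantage over EGC is visible directly.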
Kikuchi, Kazuro
2012-02-27
We develop a systematic method for characterizing semiconductor-laser phase noise, using a low-speed offline digital coherent receiver. The field spectrum, the FM-noise spectrum, and the phase-error variance measured with such a receiver can completely describe phase-noise characteristics of lasers under test. The sampling rate of the digital coherent receiver should be much higher than the phase-fluctuation speed. However, 1 GS/s is large enough for most of the single-mode semiconductor lasers. In addition to such phase-noise characterization, interpolating the taken data at 1.25 GS/s to form a data stream at 10 GS/s, we can predict the bit-error rate (BER) performance of multi-level modulated optical signals at 10 Gsymbol/s. The BER degradation due to the phase noise is well explained by the result of the phase-noise measurements.
NASA Technical Reports Server (NTRS)
Safren, H. G.
1987-01-01
The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about 0.1 to 0.2 dB.
NASA Astrophysics Data System (ADS)
Murshid, Syed H.; Chakravarty, Abhijit
2011-06-01
Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric, donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for individual channels of such a system are also presented.
Psychometric Approach to Error Analysis on Response Patterns of Achievement Tests.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; And Others
Implementation of an adaptive achievement test for teaching signed-numbers operations to junior high students is described. A computer program capable of finding 240 basic errors in signed-number computations was written on the PLATO system and used for analyzing a 64-item conventional test, as well as an adaptive test of addition problems. The…
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam propagating through weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
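The final evaluation step, averaging a conditional BER over log-normal intensity statistics with a given scintillation index, can be sketched as follows (an illustrative assumption of OOK with a Q-function conditional BER, not the paper's exact system model):

```python
import math
import random

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_ber_ook(snr, scint_index, trials=100_000, seed=7):
    """OOK BER averaged over log-normal intensity fading with
    scintillation index sigma_I^2 = scint_index; E[I] normalised to 1."""
    rng = random.Random(seed)
    sigma2 = math.log(1.0 + scint_index)   # log-intensity variance
    mu = -0.5 * sigma2                     # keeps the mean intensity at 1
    total = 0.0
    for _ in range(trials):
        i = math.exp(rng.gauss(mu, math.sqrt(sigma2)))
        total += qfunc(i * math.sqrt(snr))  # conditional BER at intensity i
    return total / trials
```

Because the Q-function is convex in the intensity, any nonzero scintillation index raises the average BER above the turbulence-free value, which is why lower-scintillation beams yield lower BER.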
NASA Astrophysics Data System (ADS)
Queiroz, Wamberto J. L.; Lopes, Waslon T. A.; Madeiro, Francisco; Alencar, Marcelo S.
2010-12-01
This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative distribution function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
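The ratio-of-random-variables view can be checked numerically: for BPSK, a bit error occurs exactly when the effective noise n/h (Gaussian over Nakagami-m) falls below minus the transmitted amplitude. A hedged Monte Carlo sketch (parameter names are illustrative, not the paper's):

```python
import math
import random

def nakagami_sample(rng, m, omega=1.0):
    """Nakagami-m amplitude via the Gamma distribution: h^2 ~ Gamma(m, omega/m)."""
    return math.sqrt(rng.gammavariate(m, omega / m))

def bpsk_bep_nakagami(snr, m, trials=200_000, seed=3):
    """BPSK bit error probability over Nakagami-m fading, viewed as an
    additive channel whose noise is the ratio n/h."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        h = nakagami_sample(rng, m)
        n = rng.gauss(0.0, 1.0 / math.sqrt(2.0 * snr))  # N0/2 noise, Eb = 1
        # transmit +1; error iff the effective noise n/h drops below -1
        if n / h < -1.0:
            errors += 1
    return errors / trials
```

For m = 1 the channel reduces to Rayleigh fading, where the exact BEP is (1/2)(1 − sqrt(γ/(1+γ))), so the estimate can be checked against a known closed form.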
Experimental demonstration of topological error correction.
Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei
2012-02-22
Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois-field algebra vs. the Viterbi algorithm, and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes, indicate that LRBC using block codes is a desirable method for high data rate implementations.
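The premise of LRBC, that the modulated bits of an 8PSK symbol are unequally reliable, can be illustrated with a short Monte Carlo estimate. The Gray map below is one common choice and an assumption here; the paper's bit labeling may differ:

```python
import cmath
import math
import random

# A cyclic Gray map for 8PSK: adjacent phases differ in exactly one bit.
GRAY = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]

def per_bit_ber_8psk(snr, trials=100_000, seed=5):
    """Estimate the BER of each of the three 8PSK bit positions,
    showing that the modulated bits are not equally reliable."""
    rng = random.Random(seed)
    amp = math.sqrt(snr)          # symbol energy / unit noise power
    errs = [0, 0, 0]
    for _ in range(trials):
        k = rng.randrange(8)
        tx = amp * cmath.exp(1j * math.pi * k / 4.0)
        rx = tx + complex(rng.gauss(0.0, math.sqrt(0.5)),
                          rng.gauss(0.0, math.sqrt(0.5)))
        khat = round(cmath.phase(rx) / (math.pi / 4.0)) % 8  # nearest phase
        diff = GRAY[k] ^ GRAY[khat]
        for b in range(3):
            errs[b] += (diff >> b) & 1
    return [e / trials for e in errs]
```

With this map the least significant bit flips on four of the eight decision boundaries while the other two bits each flip on only two, so its BER is roughly twice theirs; LRBC exploits exactly this kind of imbalance.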
Allam, F. M.
1985-07-09
A drilling bit comprising a drill body formed from a base portion and a crown portion having a plurality of cutting elements; the base and crown portions are interengaged by a connecting portion. An external opening in the crown portion communicates with a core-receiving section in the connecting portion. A core milling assembly, comprising a pair of rotatable, frustum-shaped rotary members, is supported in the connecting portion. Each rotary member carries a plurality of cutting elements. During drilling, a core is received in the core-receiving section, where it is milled by the rotation of the rotary members.
ERIC Educational Resources Information Center
Keuning, Jos; Hemker, Bas
2014-01-01
The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…
ERIC Educational Resources Information Center
Saigh, Philip A.; Khairallah, Shereen
1983-01-01
The concurrent validity of the Diagnostic Analysis of Reading Errors (DARE) subtests was studied, based on the responses of Lebanese secondary and postsecondary students relative to their achievement in an English course or on a standardized test of English proficiency. The results indicate that the DARE is not a viable predictor of English…
ERIC Educational Resources Information Center
Prueher, Jane
The purpose of this study was to investigate the extent to which written error-correcting feedback on teacher-made criterion-referenced tests results in increased achievement of high school students taking algebra. In addition, student attitudes toward chapter tests and changes that may occur in those attitudes resulting from teacher treatment of…
Predicting First Grade Achievement from Form Errors in Printing at the Start of Pre-Kindergarten.
ERIC Educational Resources Information Center
Simner, Marvin L.
A 3-year longitudinal investigation indicated that the form errors children make in printing can aid in the identification of at-risk or failure-prone pupils as early as the start of prekindergarten. Two samples were selected, one consisting of 104 and the other of 63 prekindergarten children. Mean age of the samples was 52 months. Item analysis…
Gurkin, N V; Konyshev, V A; Novikov, A G; Treshchikov, V N; Ubaydullaev, R R
2015-01-31
We have studied, experimentally and using numerical simulations and a phenomenological analytical model, the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit/s DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the admissible input signal power range for implementation of communication lines with lengths from 30-50 km up to a maximum length of 250 km.
NASA Astrophysics Data System (ADS)
Gurkin, N. V.; Konyshev, V. A.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.
2015-01-01
We have studied, experimentally and using numerical simulations and a phenomenological analytical model, the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit/s DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the admissible input signal power range for implementation of communication lines with lengths from 30-50 km up to a maximum length of 250 km.
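The existence of an optimal launch power can be sketched with a simple phenomenological model (an illustration only, assuming an effective SNR of P/(P_ASE + ηP³) with nonlinear interference growing as the cube of launch power; the constants below are arbitrary, not fitted to the paper's line):

```python
import math

def ber_vs_power(p, p_ase, eta):
    """BER of a coherent link under a toy model: effective SNR is
    P / (P_ase + eta*P^3); BER = Q(sqrt(SNR)) via erfc."""
    snr = p / (p_ase + eta * p ** 3)
    return 0.5 * math.erfc(math.sqrt(snr / 2.0))

def optimal_power(p_ase, eta):
    # d(SNR)/dP = 0  gives  P_ase = 2*eta*P^3,  so  P_opt = (P_ase/(2*eta))^(1/3)
    return (p_ase / (2.0 * eta)) ** (1.0 / 3.0)
```

Below the optimum, ASE noise dominates and raising the power helps; above it, the cubic nonlinear term dominates and raising the power hurts, producing the BER minimum the abstract reports.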
Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers
Mainsah, Boyla O.; Morton, Kenneth D.; Collins, Leslie M.; Sellers, Eric W.; Throckmorton, Chandra S.
2016-01-01
P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, their use as an effective communication tool relies on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by BCI systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35 to 185%), in contrast to an ErrP-based method that flagged, deleted and replaced erroneous characters (−47 to 0%). Providing additional information about the likelihood of characters to a dictionary-based correction further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44 to 416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43 to 433%). PMID:25438320
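A toy analogue of the string-matching/word-frequency dictionary correction can be sketched as follows (function and data names are illustrative, not from the study):

```python
def dictionary_correct(typed, dictionary, word_freq):
    """Pick the same-length dictionary word with minimal Hamming distance
    to the typed string, breaking ties by word frequency (then
    alphabetically, for determinism)."""
    candidates = [w for w in dictionary if len(w) == len(typed)]
    if not candidates:
        return typed  # nothing comparable; leave the output unchanged

    def cost(w):
        dist = sum(a != b for a, b in zip(typed, w))
        return (dist, -word_freq.get(w, 0), w)

    return min(candidates, key=cost)
```

The study's Bayesian variant goes further by weighting candidates with the P300 classifier's per-character confidences instead of treating every mismatch equally.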
Improved Error Thresholds for Measurement-Free Error Correction
NASA Astrophysics Data System (ADS)
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
Improved Error Thresholds for Measurement-Free Error Correction.
Crow, Daniel; Joynt, Robert; Saffman, M
2016-09-23
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}-comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
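The error suppression that makes such thresholds possible can be illustrated classically with the simplest code simulated above: a three-bit repetition (bit-flip) code with majority voting standing in for syndrome extraction and correction, which turns a physical flip rate p into a logical rate of about 3p²:

```python
import random

def logical_error_rate(p, trials=200_000, seed=11):
    """Monte Carlo estimate of the logical error rate of the classical
    three-bit repetition code: encode 0, flip each bit independently
    with probability p, decode by majority vote."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        codeword = [0, 0, 0]                            # encode logical 0
        noisy = [b ^ (rng.random() < p) for b in codeword]
        decoded = int(sum(noisy) >= 2)                  # majority vote
        failures += decoded != 0
    return failures / trials
```

The code fails only when two or more bits flip, giving 3p²(1−p) + p³; for p below 1/2 this is smaller than p, which is the sense in which error rates below a threshold are correctable.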
FMO-based H.264 frame layer rate control for low bit rate video transmission
NASA Astrophysics Data System (ADS)
Cajote, Rhandley D.; Aramvith, Supavadee; Miyanaga, Yoshikazu
2011-12-01
The use of flexible macroblock ordering (FMO) in H.264/AVC improves error resiliency at the expense of reduced coding efficiency with added overhead bits for slice headers and signalling. The trade-off is most severe at low bit rates, where header bits occupy a significant portion of the total bit budget. To better manage the rate and improve coding efficiency, we propose enhancements to the H.264/AVC frame layer rate control, which take into consideration the effects of using FMO for video transmission. In this article, we propose a new header bits model, an enhanced frame complexity measure, a bit allocation and a quantization parameter adjustment scheme. Simulation results show that the proposed improvements achieve better visual quality compared with the JM 9.2 frame layer rate control with FMO enabled using a different number of slice groups. Using FMO as an error resilient tool with better rate management is suitable in applications that have limited bandwidth and in error prone environments such as video transmission for mobile terminals.
Cohen, J.H.; Maurer, W.C.; Westcott, P.A.
1994-12-31
Four 3-in. (76.2-mm) diameter experimental bits utilizing large TSP cutters were manufactured in an attempt to develop improved hard rock drill bits. The bits were tested on a 2 3/8-in. (60.3-mm) downhole motor that operated at speeds up to 2,700 rpm and delivered up to 48 hp (36 kW). The TSP bits drilled Batesville marble at rates up to 550 ft/hr (168 m/hr) compared to 50 to 100 ft/hr (15 to 30 m/hr) for conventional roller cone bit drilling in this type of rock. The high penetration rates were achieved because the large cutters cut deep grooves in the rock and there was good clearance beneath the bits due to the large bit/rock standoff distance. None of the large cutters broke during the tests despite the severe drilling conditions and high power levels delivered to the bits, thus overcoming cutter breakage problems experienced with smaller TSP bits on earlier tests. The large cutter TSP bits were capable of operating at much higher power levels than the 48 hp (36 kW) delivered by the drilling motor, showing the need for improved high-power motors for use with these improved TSP bits.
Hood, M.
1986-02-11
A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face. 4 figs.
Hood, Michael
1986-01-01
A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face.
Robust relativistic bit commitment
NASA Astrophysics Data System (ADS)
Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony
2016-12-01
Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.
Mine roof drill bits that save money
Ford, L.M.
1982-04-01
Sandia National Laboratories, Albuquerque, NM, has developed advanced technology roof bolt drill bits which have demonstrated longer life, higher penetration rates at lower thrust and torque, and lower specific energy than conventional roof bolt drill bits. This is achieved through use of advanced technology cutting materials and novel bit body designs. These bits have received extensive laboratory and mine testing. Their performance has been evaluated and estimates of their value in reducing coal production costs have been made. The work was sponsored by the United States Department of Energy.
N-bits all-optical circular shift register based on semiconductor optical amplifier buffer
NASA Astrophysics Data System (ADS)
Lazzeri, Emma; Berrettini, Gianluca; Meloni, Gianluca; Bogoni, Antonella; Potì, Luca
2011-03-01
In a future all-optical communication network, optical shift registers will play an important role, especially for binary functions such as serial-to-parallel conversion and the cyclic operations involved in error detection and correction techniques such as parity checks or cyclic redundancy checks. Over the last decades, several attempts were made to realize circulating memories or shift registers in the optical domain, with limits in terms of functionality, number of bits stored (under three), scalability or photonic integrability. In this paper, we present a new approach to realizing a circulating optical shift register, consisting of an SOA-based optical buffer (OB) and a bit selecting circuit (BSC). The OB is potentially integrable and is able to store a finite number of bits at a high bit rate. The BSC returns consecutive bits at a lower clock rate, achieving the proper shift-register function. The bit selection is realized by means of four-wave mixing (FWM) in a Kerr medium, and sequence cancellation is enabled to allow a new sequence to be stored. Experimental validation of the scheme for fB = 59 MHz and fB = 236 MHz shows an optical signal-to-noise ratio per bit penalty of 5.6 dB at BER = 10^-9.
Theoretical and subjective bit assignments in transform picture coding
NASA Technical Reports Server (NTRS)
Jones, H. W., Jr.
1977-01-01
It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
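The "well-known rule" referenced above is the Gaussian/mean-square-error bit-allocation formula: each coefficient gets the average bit budget plus half the log-ratio of its variance to the geometric mean of all variances. A minimal sketch:

```python
import math

def bit_assignment(variances, total_bits):
    """Classic transform-coding bit-allocation rule for Gaussian sources
    under mean-square error:
        b_k = B/N + 0.5 * log2( var_k / geometric_mean(variances) ).
    Returns possibly fractional (and even negative) bit counts; practical
    coders round and clip these."""
    n = len(variances)
    mean_log = sum(math.log2(v) for v in variances) / n  # log2 of geo. mean
    return [total_bits / n + 0.5 * (math.log2(v) - mean_log)
            for v in variances]
```

By construction the assignments sum to the total budget, and higher-variance coefficients receive more bits, which is the behavior the subjective assignments in small-block picture coding were found to deviate from.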
24-Hour Relativistic Bit Commitment
NASA Astrophysics Data System (ADS)
Verbanis, Ephanielle; Martin, Anthony; Houlmann, Raphaël; Boso, Gianluca; Bussières, Félix; Zbinden, Hugo
2016-09-01
Bit commitment is a fundamental cryptographic primitive in which a party wishes to commit a secret bit to another party. Perfect security between mistrustful parties is unfortunately impossible to achieve through the asynchronous exchange of classical and quantum messages. Perfect security can nonetheless be achieved if each party splits into two agents exchanging classical information at times and locations satisfying strict relativistic constraints. A relativistic multiround protocol to achieve this was previously proposed and used to implement a 2-millisecond commitment time. Much longer durations were initially thought to be insecure, but recent theoretical progress showed that this is not so. In this Letter, we report on the implementation of a 24-hour bit commitment solely based on timed high-speed optical communication and fast data processing, with all agents located within the city of Geneva. This duration is more than 6 orders of magnitude longer than before, and we argue that it could be extended to one year and allow much more flexibility on the locations of the agents. Our implementation offers a practical and viable solution for use in applications such as digital signatures, secure voting and honesty-preserving auctions.
Broadband Integrated Transmittances (BITS)
NASA Astrophysics Data System (ADS)
Davis, Roger E.; Berrick, Stephen W.
1995-02-01
Broadband Integrated Transmittances (BITS) is an EOSAEL module that calculates transmittance for systems with broad spectral response. Path-integrated concentration data from COMBIC, other EOSAEL modules, or user models are used as input for BITS. The primary function of BITS is to provide rigorous transmittance calculations for broadband systems, replacing the Beer-Lambert law used in most obscuration models. To use BITS, the system detector, filters, optics, and source spectral functions must be defined. The spectral transmittances of the atmosphere and mass extinction coefficient spectral data for the obscurant are also required. The output consists of transmittance as a function of concentration length for Beer's law and band-integrated computation methods. The theory of the model, a description of the module organization, and an operations guide that provides input and output in EOSAEL format are provided in this user's guide. Example uses for BITS are also included.
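The difference between the Beer-Lambert law and a band-integrated computation can be sketched as follows (the spectral bins and response weights below are illustrative, not BITS inputs):

```python
import math

def beer_lambert(alpha, cl):
    """Monochromatic Beer-Lambert transmittance for mass extinction
    coefficient alpha and concentration-length product cl."""
    return math.exp(-alpha * cl)

def band_integrated(alphas, weights, cl):
    """Band-averaged transmittance: each spectral bin's Beer-Lambert term
    is weighted by the system's relative spectral response."""
    total = sum(weights)
    return sum(w * math.exp(-a * cl)
               for a, w in zip(alphas, weights)) / total
```

Because the exponential is convex, the band-integrated transmittance of a broadband system always exceeds the Beer-Lambert value computed from the band-mean extinction coefficient, which is why BITS replaces the Beer-Lambert law in broadband obscuration modeling.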
A cascaded error control coding scheme for space and satellite communication
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
An error control coding scheme for space and satellite communications is presented. The scheme is attained by cascading two codes, an inner code and an outer code. Error performance of the scheme is analyzed. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be achieved even at a high channel bit-error rate. Several example schemes are studied. One of the example schemes is proposed to NASA for satellite or spacecraft downlink error control.
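The benefit of cascading can be seen with a toy example that is much simpler than the codes studied in the paper: cascade two (3,1) repetition codes and compute the residual error probability analytically. The key assumption (which interleaving justifies in practice) is that errors left by the inner decoder look independent to the outer code.

```python
def rep3_fail(p):
    """Failure probability of a (3,1) repetition decoder over a binary
    symmetric channel with crossover p: at least 2 of 3 bits flipped."""
    return 3 * p**2 * (1 - p) + p**3

channel_p = 0.05
inner_residual = rep3_fail(channel_p)          # error rate after inner decoding
cascaded_residual = rep3_fail(inner_residual)  # outer code sees the residual

print(channel_p, inner_residual, cascaded_residual)
```

Here 0.05 drops to about 7.3e-3 after one code and to about 1.6e-4 after the cascade, at the cost of rate 1/9; properly chosen inner/outer pairs (e.g. convolutional plus Reed-Solomon) achieve far better trade-offs.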
ERIC Educational Resources Information Center
Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.
2013-01-01
In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…
Taylor, M.R.; Murdock, A.D.; Evans, S.M.
1996-12-31
PDC drill bit developments have been made to achieve higher penetration rates and longer life, involving a compromise between open, light set, bits for speed and heavy set ones for durability. Developments are described which provided the benefits of both in a revolutionary hydraulic and mechanical design. The hydraulic design causes mud to flow first towards the bit centre and then outwards. It was extensively flow tested using high speed photography to ensure that bit balling was prevented. It includes features to address bit whirl which were demonstrated in full scale laboratory testing to reduce the bit's vibration level. The mechanical design maximizes open face volume, known to benefit penetration rate, by using very high blades. However, the heights attainable can be limited by the bit body's mechanical strength. Steel was chosen to maximize blade strength and was coated with a newly developed hardfacing to improve erosion resistance. A program of fatigue testing assured adequate strength.
Taylor, M.R.; Murdock, A.D.; Evans, S.M.
1999-03-01
PDC drill bit developments have been made to achieve higher penetration rates and longer life, involving a compromise between open, light set, bits for speed, and heavy set ones for durability. Developments are described which provided the benefits of both in a revolutionary hydraulic and mechanical design. The hydraulic design causes mud to flow first towards the bit center and then outwards. It was extensively flow tested using high-speed photography to ensure that bit balling was prevented. It includes features to address bit whirl which were demonstrated in full scale laboratory testing to reduce the bit's vibration level. The mechanical design maximizes open-face volume, known to benefit penetration rate, by using very high blades. However, the heights attainable can be limited by the bit body's mechanical strength. Steel was chosen to maximize blade strength and was coated with a newly developed hardfacing to improve erosion resistance. A program of fatigue testing assured adequate strength.
Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.
1994-01-01
A double-acting bit holder that permits bits held in it to be resharpened during cutting action, increasing energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to a cutter head of an excavation machine and having an integral extension therefrom with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base having a pin shaft integrally extending therefrom that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate a shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during cutting action of a bit fixed in a turret, allowing front, mid, and back positions of the bit during cutting to lessen the creation of small chips and resharpen the bit during excavation use.
Bit-serial neuroprocessor architecture
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
2001-01-01
A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.
ERIC Educational Resources Information Center
Pagirsky, Matthew S.; Koriakin, Taylor A.; Avitia, Maria; Costa, Michael; Marchis, Lavinia; Maykel, Cheryl; Sassu, Kari; Bray, Melissa A.; Pan, Xingyu
2017-01-01
A large body of research has documented the relationship between attention-deficit hyperactivity disorder (ADHD) and reading difficulties in children; however, there have been no studies to date that have examined errors made by students with ADHD and reading difficulties. The present study sought to determine whether the kinds of achievement…
Recent developments in polycrystalline diamond-drill-bit design
Huff, C.F.; Varnado, S.G.
1980-05-01
Development of design criteria for polycrystalline diamond compact (PDC) drill bits for use in severe environments (hard or fractured formations, hot and/or deep wells) is continuing. This effort consists of both analytical and experimental analyses. The experimental program includes single point tests of cutters, laboratory tests of full scale bits, and field tests of these designs. The results of laboratory tests at simulated downhole conditions utilizing new and worn bits are presented. Drilling at simulated downhole pressures was conducted in Mancos Shale and Carthage Marble. Comparisons are made between PDC bits and roller cone bits in drilling with borehole pressures up to 5000 psi (34.5 MPa) with oil and water based muds. The PDC bits drilled at rates up to 5 times as fast as roller bits in the shale. In the first field test, drilling rates approximately twice those achieved with conventional bits were achieved with a PDC bit. A second test demonstrated the value of these bits in correcting deviation and reaming.
Spatio-Temporal Waveform Design for Multiuser Massive MIMO Downlink With 1-bit Receivers
NASA Astrophysics Data System (ADS)
Gokceoglu, Ahmet; Bjornson, Emil; Larsson, Erik G.; Valkama, Mikko
2017-03-01
The Internet of Things (IoT) refers to a high-density network of low-cost, low-bit-rate terminals and sensors in which low energy consumption is also a central feature. As the power budget of classical receiver chains is dominated by the high-resolution analog-to-digital converters (ADCs), there is growing interest in deploying receiver architectures with reduced-bit or even 1-bit ADCs. In this paper, we study waveform design, optimization, and detection aspects of the multi-user massive MIMO downlink where user terminals adopt very simple 1-bit ADCs with oversampling. In order to achieve spectral efficiency higher than 1 bit/s/Hz per real dimension, we propose a two-stage precoding scheme, namely a novel quantization precoder followed by a maximum-ratio transmission (MRT) or zero-forcing (ZF) type spatial channel precoder, which jointly form the multi-user multi-antenna transmit waveform. The quantization precoder outputs are optimized, under appropriate transmitter and receiver filter bandwidth constraints, to provide controlled inter-symbol interference (ISI) enabling the input symbols to be uniquely detected from 1-bit quantized observations with a low-complexity symbol detector in the absence of noise. An additional optimization constraint is also imposed in the quantization precoder design to increase robustness against noise and residual inter-user interference (IUI). The purpose of the spatial channel precoder, in turn, is to suppress the IUI and provide high beamforming gains such that good symbol error rates (SERs) can be achieved in the presence of noise and interference. Extensive numerical evaluations illustrate that the proposed spatio-temporal precoder based multi-antenna waveform design can facilitate good multi-user link performance, despite the extremely simple 1-bit ADCs in the receivers, hence being one possible enabling technology for future low-complexity IoT networks.
1991-12-01
Smart BIT/TSMD provides Rome Laboratory with a laboratory testbed to evaluate and assess the individual characteristics as well as the integration...returning its MIL-STD-1553B data tables and BIT status to normal (no fault) data. When the scenario requires sensory-caused faults, the UUT computer sets...uncorrelated faults. Information Enhanced BIT is a technique that uses additional sensory data to complement the standard BIT information. Sensory information
Classical teleportation of a quantum Bit
Cerf; Gisin; Massar
2000-03-13
Classical teleportation is defined as a scenario where the sender is given the classical description of an arbitrary quantum state while the receiver simulates any measurement on it. This scenario is shown to be achievable by transmitting only a few classical bits if the sender and receiver initially share local hidden variables. Specifically, a communication of 2.19 bits is sufficient on average for the classical teleportation of a qubit, when restricted to von Neumann measurements. The generalization to positive-operator-valued measurements is also discussed.
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
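To illustrate the general idea of bit-wise adaptive entropy coding (not the specific Kiely-Klimesh technique), here is a toy arithmetic coder in which the probability estimate for each bit adapts to previously encoded bits via Laplace-smoothed counts. Exact rational arithmetic keeps the sketch free of rounding issues; a practical coder would use fixed-precision registers.

```python
from fractions import Fraction

def encode(bits):
    """Adaptively narrow [0,1) and return a point in the final interval."""
    low, high = Fraction(0), Fraction(1)
    c0, c1 = 1, 1                      # Laplace-smoothed symbol counts
    for b in bits:
        p0 = Fraction(c0, c0 + c1)     # adaptive estimate of P(bit = 0)
        mid = low + (high - low) * p0
        if b == 0:
            high = mid; c0 += 1
        else:
            low = mid; c1 += 1
    return (low + high) / 2

def decode(x, n):
    """Replay the same adaptive model to recover n bits from the point x."""
    low, high = Fraction(0), Fraction(1)
    c0, c1 = 1, 1
    out = []
    for _ in range(n):
        p0 = Fraction(c0, c0 + c1)
        mid = low + (high - low) * p0
        if x < mid:
            out.append(0); high = mid; c0 += 1
        else:
            out.append(1); low = mid; c1 += 1
    return out

msg = [0, 0, 1, 0, 0, 0, 1, 0]
assert decode(encode(msg), len(msg)) == msg
```

Because the decoder replays the exact same probability updates as the encoder, the model can depend arbitrarily on previously coded bits, which is the adaptability property the abstract describes.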
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
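For contrast with the patented modular-error method, here is the plain bit-replacement baseline it improves upon: overwriting the low-order bit of each host value with one payload bit. This sketch is the naive scheme the abstract compares against, not the invention itself.

```python
def embed_lsb(host, payload_bits):
    """Replace the least-significant bit of each host value with a payload bit
    (simple LSB replacement; the patented method instead embeds modularly to
    reduce the introduced error and double the capacity)."""
    assert len(payload_bits) <= len(host)
    out = list(host)
    for i, b in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(stego, n):
    """Recover n payload bits by reading back the low-order bits."""
    return [v & 1 for v in stego[:n]]

host = [200, 201, 77, 78, 15, 16]     # e.g. pixel values
bits = [1, 0, 1, 1, 0, 0]
stego = embed_lsb(host, bits)
assert extract_lsb(stego, 6) == bits
assert all(abs(a - b) <= 1 for a, b in zip(host, stego))  # per-sample error <= 1
```

Each host value changes by at most one level, which is why LSB-style embedding preserves human perception of the host data; the modular method further reduces this error and raises capacity toward 50% of the host data.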
Entanglement and Quantum Error Correction with Superconducting Qubits
NASA Astrophysics Data System (ADS)
Reed, Matthew
2015-03-01
Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost, however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These "transmon" qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
NASA Astrophysics Data System (ADS)
Richard, Ryan M.; Herbert, John M.
2013-06-01
Previous electronic structure studies that have relied on fragmentation have been primarily interested in those methods' abilities to replicate the supersystem energy (or a related energy difference) without recourse to the ability of those supersystem results to replicate experiment or high-accuracy benchmarks. Here we focus on replicating accurate ab initio benchmarks that are suitable for comparison to experimental data. In doing so it becomes imperative that we correct our methods for basis-set superposition errors (BSSE) in a computationally feasible way. This criterion leads us to develop a new method for BSSE correction, which we term the many-body counterpoise correction, or MBn for short. MBn is truncated at order n, in much the same manner as a normal many-body expansion, leading to a decrease in computational time. Furthermore, its formulation in terms of fragments makes it especially suitable for use with pre-existing fragment codes. A secondary focus of this study is directed at assessing fragment methods' abilities to extrapolate to the complete basis set (CBS) limit as well as compute approximate triples corrections. Ultimately, by analysis of (H_2O)_6 and (H_2O)_{10}F^- systems, it is concluded that with large enough basis sets (triple- or quadruple-zeta) fragment-based methods can replicate high-level benchmarks in a fraction of the time.
Optimizing journal bearing bit performance
Moerbe, O.E.; Evans, W.
1986-10-01
This article explains that continuous progress in the field of rock bit technology has produced many new designs and improved features in the tri-cone rock bits used today. Much of the research and advancements have centered around journal bearing systems, seals and lubricants leading to greatly extended bearing life. These improved bearing systems, incorporated into both tooth and insert-type bits, have not only increased the effective life of a rock bit, but have also allowed greater energy levels to be applied. This, in turn, has allowed for higher rates of penetration and lower costs per foot of hole drilled. Continuous improvements in journal bearing bits allowing them to run longer and harder have required similar advancements to be made in cutting structures. In tooth bit designs, these improvements have been basically limited to the areas of gauge protection and to application of hardfacing materials.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
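A 16-bit CRC for error detection can be sketched in a few lines. The CCSDS recommendation uses the CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021); the parameters below follow the common CRC-16/CCITT-FALSE convention (initial value 0xFFFF, no bit reflection), which is an assumption of this sketch rather than a statement of the exact CCSDS framing.

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise (MSB-first) CRC-16 with generator polynomial 0x1021."""
    crc = init
    for byte in data:
        crc ^= byte << 8                    # fold the next byte into the register
        for _ in range(8):
            if crc & 0x8000:                # top bit set: shift and XOR polynomial
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:                           # otherwise just shift
                crc = (crc << 1) & 0xFFFF
    return crc

msg = b"123456789"
print(hex(crc16_ccitt(msg)))  # standard check value for CRC-16/CCITT-FALSE: 0x29b1

# A single flipped bit changes the checksum, which is how errors are detected
corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]
assert crc16_ccitt(msg) != crc16_ccitt(corrupted)
```

In a simulation like the one described, the receiver recomputes the CRC over the decoded frame and compares it with the transmitted CRC to flag residual errors that the forward error correction failed to remove.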
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro
2017-04-01
A soft/write-error-resilient nonvolatile flip-flop (NVFF) using three-terminal magnetic tunnel junctions (MTJs) is presented. The proposed NVFF exploits a redundant structure with a majority bit implicitly stored, which is tolerant to soft errors including both single-event transients (SETs) and single-event upsets (SEUs). For write-error resilience, all the bits of the redundant MTJs are written using the majority bit with a shared write-current path, exhibiting 1-bit soft-error correction and 1-bit write-error masking. In addition, the shared writing scheme reduces the number of write-current paths to one-third of that with a redundant NVFF with 1-bit soft/write-error masking. Using 65 nm CMOS/MTJ technologies, the proposed NVFF achieves a few orders-of-magnitude reduction in the failure in time (FIT), a 31% reduction in the transistor count, and a 65% reduction in the write energy in comparison with the redundant NVFF.
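The redundancy-with-majority idea behind the NVFF can be shown with a generic triple-modular-redundancy sketch. This is only the logical principle (one stored bit, three copies, majority readout), not the MTJ circuit or its shared write-current path.

```python
def majority(bits):
    """Majority vote over an odd number of redundant copies."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def read_with_upset(stored, flipped_index=None):
    """Read out a logical bit from redundant cells, optionally after a
    single-event upset flips one copy."""
    copies = list(stored)
    if flipped_index is not None:
        copies[flipped_index] ^= 1          # one cell corrupted
    return majority(copies)

stored = [1, 1, 1]   # one logical bit held in three redundant cells
# Any single flipped copy is masked by the majority vote
assert all(read_with_upset(stored, i) == 1 for i in range(3))
assert read_with_upset(stored) == 1
```

The same majority bit is what the proposed NVFF writes back into all redundant MTJs through one shared path, which is why a single write error is also masked.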
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR), and multispectral imagery to achieve the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors, including those associated with LiDAR footprint sampling over regional-to-global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical data and with either SAR or passive optical data alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited
Drill bit assembly for releasably retaining a drill bit cutter
Glowka, David A.; Raymond, David W.
2002-01-01
A drill bit assembly is provided for releasably retaining a polycrystalline diamond compact drill bit cutter. Two adjacent cavities formed in a drill bit body house, respectively, the disc-shaped drill bit cutter and a wedge-shaped cutter lock element with a removable fastener. The cutter lock element engages one flat surface of the cutter to retain the cutter in its cavity. The drill bit assembly thus enables the cutter to be locked against axial and/or rotational movement while still providing for easy removal of a worn or damaged cutter. The ability to adjust and replace cutters in the field reduces the effect of wear, helps maintain performance and improves drilling efficiency.
Nanomagnetic Bit Cells for MRAM Applications
NASA Astrophysics Data System (ADS)
Engel, Brad
2007-03-01
Magnetoresistive Random Access Memory (MRAM) combines magnetic tunnel junction devices with standard silicon-based microelectronics to obtain the combined attributes of non-volatility, high-speed operation, and unlimited read/write endurance not found in any other existing memory technology. The first MRAM product to market, Freescale's 4Mb MR2A16A, is built on 180 nm CMOS technology with magnetic bit cells of 300 nm minimum dimensions integrated in the upper layers of metal. At these dimensions, both the magnetic switching and magnetoresistive property distributions are governed by a combination of material and patterning variations. One of the keys to controlling these distributions and insuring manufacturability was the invention of the Toggle Write mode. This mode uses a balanced synthetic antiferromagnetic free layer combined with a phased write pulse sequence to achieve robust magnetic switching margin by eliminating the half-select disturb issue found in conventional approaches. Another crucial solution was the ability to deposit and pattern high-quality, high-TMR magnetic tunnel junctions with narrow bit-to-bit resistance variation, low defect density and long-term reliability. In this talk, I will present details of each of the above technology elements, the performance and bit cell reliability, and the scaling behavior to the reduced dimensions of advanced technology nodes.
Simultaneous message framing and error detection
NASA Technical Reports Server (NTRS)
Frey, A. H., Jr.
1968-01-01
Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.
Experimental unconditionally secure bit commitment
NASA Astrophysics Data System (ADS)
Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adan; Pan, Jian-Wei
2014-03-01
Quantum physics allows unconditionally secure communication between parties that trust each other. However, when they do not trust each other, such as in bit commitment, quantum physics alone is not enough to guarantee security. Only when combined with relativistic causality constraints does unconditionally secure bit commitment become feasible. Here we experimentally implement a quantum bit commitment with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. Bits are successfully committed with less than 5.68×10^-2 cheating probability. This provides an experimental proof of unconditionally secure bit commitment and demonstrates the feasibility of relativistic quantum communication.
Autonomously stabilized entanglement between two superconducting quantum bits.
Shankar, S; Hatridge, M; Leghtas, Z; Sliwa, K M; Narla, A; Vool, U; Girvin, S M; Frunzio, L; Mirrahimi, M; Devoret, M H
2013-12-19
Quantum error correction codes are designed to protect an arbitrary state of a multi-qubit register from decoherence-induced errors, but their implementation is an outstanding challenge in the development of large-scale quantum computers. The first step is to stabilize a non-equilibrium state of a simple quantum system, such as a quantum bit (qubit) or a cavity mode, in the presence of decoherence. This has recently been accomplished using measurement-based feedback schemes. The next step is to prepare and stabilize a state of a composite system. Here we demonstrate the stabilization of an entangled Bell state of a quantum register of two superconducting qubits for an arbitrary time. Our result is achieved using an autonomous feedback scheme that combines continuous drives along with a specifically engineered coupling between the two-qubit register and a dissipative reservoir. Similar autonomous feedback techniques have been used for qubit reset, single-qubit state stabilization, and the creation and stabilization of states of multipartite quantum systems. Unlike conventional, measurement-based schemes, the autonomous approach uses engineered dissipation to counteract decoherence, obviating the need for a complicated external feedback loop to correct errors. Instead, the feedback loop is built into the Hamiltonian such that the steady state of the system in the presence of drives and dissipation is a Bell state, an essential building block for quantum information processing. Such autonomous schemes, which are broadly applicable to a variety of physical systems, as demonstrated by the accompanying paper on trapped ion qubits, will be an essential tool for the implementation of quantum error correction.
NASA Astrophysics Data System (ADS)
2010-08-01
Can excitons be used to achieve scalable control of quantum light? Steffen Michaelis de Vasconcellos explained to Nature Photonics that the optoelectrical control of exciton qubits in quantum dots offers great promise.
Simplified 2-bit photonic digital-to-analog conversion unit based on polarization multiplexing
NASA Astrophysics Data System (ADS)
Zhang, Fangzheng; Gao, Bindong; Ge, Xiaozhong; Pan, Shilong
2016-03-01
A 2-bit photonic digital-to-analog conversion unit is proposed and demonstrated based on polarization multiplexing. The proposed 2-bit digital-to-analog converter (DAC) unit is realized by optical intensity weighting and summing, and its complexity is greatly reduced compared with the traditional 2-bit photonic DACs. Performance of the proposed 2-bit DAC unit is experimentally investigated. The established 2-bit DAC unit achieves a good linear transfer function, and the effective number of bits is calculated to be 1.3. Based on the proposed 2-bit DAC unit, two DAC structures with higher (>2) bit resolutions are proposed and discussed, and the system complexity is expected to be reduced by half by using the proposed technique.
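The weight-and-sum principle behind the 2-bit DAC unit can be modeled in a few lines: the MSB carries twice the intensity weight of the LSB, and the summed intensity gives one of four equally spaced analog levels. This is an idealized, noiseless model, not the photonic implementation.

```python
def dac_2bit(b1, b0, full_scale=3.0):
    """Ideal 2-bit weight-and-sum DAC: output = full_scale * (2*b1 + b0) / 3.
    b1 is the MSB (weight 2), b0 the LSB (weight 1)."""
    code = 2 * b1 + b0
    return full_scale * code / 3   # 4 codes mapped linearly onto [0, full_scale]

levels = [dac_2bit(b1, b0) for b1 in (0, 1) for b0 in (0, 1)]
print(levels)  # [0.0, 1.0, 2.0, 3.0]
```

A perfectly linear transfer function like this corresponds to an effective number of bits of 2.0; the measured 1.3 ENOB in the experiment reflects the noise and weighting imperfections that this sketch omits.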
Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation
NASA Technical Reports Server (NTRS)
Swift, G.
2002-01-01
JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128 Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.
Experimental Unconditionally Secure Bit Commitment
NASA Astrophysics Data System (ADS)
Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adán; Pan, Jian-Wei
2014-01-01
Quantum physics allows for unconditionally secure communication between parties that trust each other. However, when the parties do not trust each other, such as in the bit commitment scenario, quantum physics is not enough to guarantee security unless extra assumptions are made. Unconditionally secure bit commitment only becomes feasible when quantum physics is combined with relativistic causality constraints. Here we experimentally implement a quantum bit commitment protocol with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. The security of the protocol relies on the properties of quantum information and relativity theory. In each run of the experiment, a bit is successfully committed with less than 5.68×10^-2 cheating probability. This demonstrates the experimental feasibility of quantum communication with relativistic constraints.
New group demodulator for bandlimited and bit asynchronous FDMA signals
NASA Astrophysics Data System (ADS)
Kobayashi, K.; Kumagai, T.; Kato, S.
1994-05-01
The authors propose a group demodulator that employs the multisymbol chirp Fourier transform to demodulate pulse-shaped and time-asynchronous signals. Computer simulation results show that the bit error rate degradation of the proposed group demodulator at BER = 10^-3 is less than 0.3 dB with a 7-symbol chirp Fourier transform.
Optimal encryption of quantum bits
Boykin, P. Oscar; Roychowdhury, Vwani
2003-04-01
We show that 2n random classical bits are both necessary and sufficient for encrypting any unknown state of n quantum bits in an informationally secure manner. We also characterize the complete set of optimal protocols in terms of a set of unitary operations that comprise an orthonormal basis in a canonical inner product space. Moreover, a connection is made between quantum encryption and quantum teleportation that allows for a different proof of optimality of teleportation.
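The sufficiency direction can be illustrated with the quantum one-time pad on a single qubit: two random classical key bits (a, b) select the Pauli operation X^a Z^b. The sketch below works directly on state vectors; it shows why 2n bits suffice (the paper's contribution also includes proving they are necessary and characterizing all optimal protocols).

```python
def encrypt(state, a, b):
    """Apply X^a Z^b to a single-qubit state (alpha, beta): Z first, then X."""
    alpha, beta = state
    if b:                       # Z flips the sign of the |1> amplitude
        beta = -beta
    if a:                       # X swaps the |0> and |1> amplitudes
        alpha, beta = beta, alpha
    return (alpha, beta)

def decrypt(state, a, b):
    """(X^a Z^b)^-1 = Z^b X^a, since X and Z are self-inverse."""
    alpha, beta = state
    if a:
        alpha, beta = beta, alpha
    if b:
        beta = -beta
    return (alpha, beta)

psi = (0.6, 0.8)                # arbitrary (real-amplitude) qubit state
for a in (0, 1):
    for b in (0, 1):
        assert decrypt(encrypt(psi, a, b), a, b) == psi
```

Averaging the encrypted state over all four equally likely keys yields the maximally mixed state for any input, which is what makes the scheme informationally secure.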
NASA Astrophysics Data System (ADS)
Natsui, Masanori; Tamakoshi, Akira; Endoh, Tetsuo; Ohno, Hideo; Hanyu, Takahiro
2017-04-01
A magnetic-tunnel-junction (MTJ)-based video coding hardware with an MTJ-write-error-rate relaxation scheme as well as a nonvolatile storage capacity reduction technique is designed and fabricated in a 90 nm MOS and 75 nm perpendicular MTJ process. The proposed MTJ-oriented dynamic error masking scheme suppresses the effect of write operation errors on the operation result of LSI, which results in the increase in an acceptable MTJ write error rate up to 7.8 times with less than 6% area overhead, while achieving 79% power reduction compared with that of the static-random-access-memory-based one.
Rock drill bit lubrication system
Johansson, C.
1980-07-08
A drill bit is described that includes a body part, a first chamber in said body part for containing a fluid lubricant under pressure higher than atmospheric during operation of the drill bit, at least one bit segment extending from said body part, a generally conical cutting element mounted on said bit segment and freely rotatable thereon thus forming a cutting element assembly, the improvement in combination therewith, wherein: said bit segment includes an annular part having inner and outer circumferential bearing surfaces, said conical cutting element has corresponding bearing surfaces adjacent those of said annular part thereby forming two pairs of bearing surfaces defining first and second raceways, the second raceway being radially outward of the first raceway, said second raceway further includes a plurality of ball bearing elements distributed therein, this second raceway and ball bearing elements forming a locking bearing for retaining said conical cutting element coupled to said annular part of said bit segment, said cutting element assembly further comprising a plurality of rolling bearing elements distributed in said first raceway forming an inner bearing, and lubrication means for lubricating said raceways and the bearing elements therein.
String bit models for superstring
Bergman, O.; Thorn, C.B.
1995-12-31
The authors extend the model of string as a polymer of string bits to the case of superstring. They mainly concentrate on type II-B superstring, with some discussion of the obstacles presented by non-II-B superstring, together with possible strategies for surmounting them. As with previous work on bosonic string, they work within the light-cone gauge. The bit model possesses a good deal less symmetry than the continuous string theory. For one thing, the bit model is formulated as a Galilei-invariant theory in (D − 2) + 1 dimensional space-time. This means that Poincare invariance is reduced to the Galilei subgroup in D − 2 space dimensions. Naturally the supersymmetry present in the bit model is likewise dramatically reduced. Continuous string can arise in the bit models with the formation of infinitely long polymers of string bits. Under the right circumstances (at the critical dimension) these polymers can behave as string moving in D-dimensional space-time enjoying the full N = 2 Poincare supersymmetric dynamics of type II-B superstring.
SEMICONDUCTOR INTEGRATED CIRCUITS: An undersampling 14-bit cyclic ADC with over 100-dB SFDR
NASA Astrophysics Data System (ADS)
Weitao, Li; Fule, Li; Dandan, Guo; Chun, Zhang; Zhihua, Wang
2010-02-01
A high-linearity, undersampling 14-bit 357 kSps cyclic analog-to-digital converter (ADC) is designed for a radio frequency identification transceiver system. The passive capacitor error-averaging (PCEA) technique is adopted for high accuracy. An improved PCEA sampling network, capable of eliminating the crosstalk path of two pipelined stages, is employed. Opamp sharing and the removal of the front-end sample-and-hold amplifier are utilized for low power dissipation and small chip area. An additional digital calibration block is added to compensate for the error due to defective layout design. The presented ADC is fabricated in a 180 nm CMOS process, occupying 0.65 × 1.6 mm². The input of the undersampling ADC achieves 15.5 MHz with more than 90 dB spurious-free dynamic range (SFDR), and the peak SFDR is as high as 106.4 dB with a 2.431 MHz input.
An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution
NASA Astrophysics Data System (ADS)
Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray
2013-09-01
In order to improve data hiding in all types of multimedia data formats, such as image and audio, and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, both the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are used. All the results prove the method to be time-efficient and effective. The algorithm is also tested for various numbers of bits; for those values, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n <= 4, because of the better PSNR achieved by this technique. The final results obtained after the steganography process do not reveal the presence of any hidden message, thus satisfying the criterion of an imperceptible message.
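The LSB substitution step and the PSNR quality check the paper reports can be sketched in a few lines of pure Python. This is a minimal 1-bit-per-pixel illustration on an invented toy "image"; the paper's full method (redundant-noise key, DCT/DWT for audio) is not shown.

```python
import math

def embed_lsb(pixels, bits):
    """Write each message bit into the least significant bit of a pixel."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b
    return stego

def extract_lsb(pixels, n_bits):
    """Read the message back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

def psnr(cover, stego, peak=255):
    """Peak signal-to-noise ratio between cover and stego images."""
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

cover = [120, 121, 119, 200, 55, 56, 57, 58]   # toy 8-pixel "image"
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
```

Since only the LSB changes, each pixel moves by at most 1 gray level, which is why the PSNR stays high and the embedding is visually imperceptible.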
Bit storage and bit flip operations in an electromechanical oscillator.
Mahboob, I; Yamaguchi, H
2008-05-01
The Parametron was first proposed as a logic-processing system almost 50 years ago. In this approach the two stable phases of an excited harmonic oscillator provide the basis for logic operations. Computer architectures based on LC oscillators were developed for this approach, but high power consumption and difficulties with integration meant that the Parametron was rendered obsolete by the transistor. Here we propose an approach to mechanical logic based on nanoelectromechanical systems that is a variation on the Parametron architecture and, as a first step towards a possible nanomechanical computer, we demonstrate both bit storage and bit flip operations.
Bit by bit: the Darwinian basis of life.
Joyce, Gerald F
2012-01-01
All known examples of life belong to the same biology, but there is increasing enthusiasm among astronomers, astrobiologists, and synthetic biologists that other forms of life may soon be discovered or synthesized. This enthusiasm should be tempered by the fact that the probability for life to originate is not known. As a guiding principle in parsing potential examples of alternative life, one should ask: How many heritable "bits" of information are involved, and where did they come from? A genetic system that contains more bits than the number that were required to initiate its operation might reasonably be considered a new form of life.
A bit serial sequential circuit
NASA Technical Reports Server (NTRS)
Hu, S.; Whitaker, S.
1990-01-01
Normally a sequential circuit with n state variables consists of n unique hardware realizations, one for each state variable. All variables are processed in parallel. This paper introduces a new sequential circuit architecture that allows the state variables to be realized in a serial manner using only one next state logic circuit. The action of processing the state variables in a serial manner has never been addressed before. This paper presents a general design procedure for circuit construction and initialization. Utilizing pass transistors to form the combinational next state forming logic in synchronous sequential machines, a bit serial state machine can be realized with a single NMOS pass transistor network connected to shift registers. The bit serial state machine occupies less area than other realizations which perform parallel operations. Moreover, the logical circuit of the bit serial state machine can be modified by simply changing the circuit input matrix to develop an adaptive state machine.
NASA Astrophysics Data System (ADS)
Smarandache, Florentin; Christianto, V.
2011-03-01
Mu-bit is defined here as a `multi-space bit'. It is different from the standard meaning of bit in conventional computation, because in Smarandache's multispace theory (also spelt multi-space) the bit is created simultaneously in many subspaces (that together form a multi-space). This new `bit' term is different from the multi-valued bit already known in computer technology, for example as MVLong. This new concept is also different from the qu-bit of quantum computation terminology. We know that using quantum mechanics logic we could introduce a new way of computation with the `qubit' (quantum bit), but the logic remains von Neumann. Now, from the viewpoint of m-valued multi-space logic, we introduce a new term: `mu-bit' (from `multi-space bit').
A long lifetime, low error rate RRAM design with self-repair module
NASA Astrophysics Data System (ADS)
Zhiqiang, You; Fei, Hu; Liming, Huang; Peng, Liu; Jishun, Kuang; Shiying, Li
2016-11-01
Resistive random access memory (RRAM) is one of the promising candidates for future universal memory. However, it suffers from serious error rate and endurance problems. Therefore, a technical solution is greatly demanded to enhance endurance and reduce the error rate. In this paper, we propose a reliable RRAM architecture that includes two reliability modules: an error correction code (ECC) module and a self-repair module. The ECC module is used to detect errors and decrease the error rate. The self-repair module, which is proposed for the first time for RRAM, can obtain the information of error bits and repair worn-out cells by a repair voltage. Simulation results show that the proposed architecture achieves the lowest error rate and the longest lifetime compared to previous reliable designs. Project supported by the New Century Excellent Talents in University (No. NCET-12-0165) and the National Natural Science Foundation of China (Nos. 61472123, 61272396).
Hondo, Toshinobu; Kawai, Yousuke; Toyoda, Michisato
2015-01-01
Rapid acquisition of time-of-flight (TOF) spectra from fewer acquisitions on average was investigated using the newly introduced 12-bit digitizer, Keysight model U5303A. This is expected to achieve spectrum acquisition 32 times faster than the commonly used 8-bit digitizer for an equal signal-to-noise (S/N) ratio. Averaging fewer pulses improves the detection speed and chromatographic separation performance. However, increasing the analog-to-digital converter bit resolution for a high-frequency signal, such as a TOF spectrum, increases the system noise and requires the timing jitter (aperture error) to be minimized. We studied the relationship between the S/N ratio and the average number of acquisitions using the U5303A and compared this with an 8-bit digitizer. The results show that the noise, measured as root-mean-square, decreases in proportion to the inverse square root of the average number of acquisitions without background subtraction, which means that almost no systematic noise existed in our signal bandwidth of interest (a few hundred megahertz). In comparison, 8-bit digitizers that are commonly used in the market require 32 times more pulses with background subtraction.
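The 1/√N noise scaling that the study measures can be checked numerically. The following toy sketch uses synthetic Gaussian noise rather than real digitizer data, so the numbers are illustrative only:

```python
import math
import random

random.seed(0)

def rms_after_averaging(n_acq, n_samples=20000, sigma=1.0):
    """Residual RMS noise after averaging n_acq noisy acquisitions
    of an all-zero signal with Gaussian noise of std `sigma`."""
    acc = [0.0] * n_samples
    for _ in range(n_acq):
        for i in range(n_samples):
            acc[i] += random.gauss(0.0, sigma)
    return math.sqrt(sum((a / n_acq) ** 2 for a in acc) / n_samples)

# Averaging 16 acquisitions should cut RMS noise by about sqrt(16) = 4
ratio = rms_after_averaging(1) / rms_after_averaging(16)
```

If systematic (non-random) noise were present, the measured ratio would fall short of √N, which is exactly the diagnostic the authors apply.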
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
Extending the Error Correction Capability of Linear Codes,
be made to tolerate and correct up to (k-1) bit failures. Thus if the classical error correction bounds are assumed, a linear transmission code used...in digital circuitry is under-utilized. For example, the single-error-correction, double-error-detection Hamming code could be used to correct up to...two bit failures with some additional error correction circuitry. A simple algorithm for correcting these extra errors in linear codes is presented. (Author)
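For reference, the single-error-correction baseline that the report extends is the classical Hamming(7,4) code, where a 3-bit syndrome directly indexes the flipped bit. A toy implementation (1-based bit positions, parity at positions 1, 2, 4; this is the textbook code, not the report's extended algorithm):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-based position of the bad bit
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
```

Every single-bit failure in the 7-bit word is corrected; the report's point is that with extra circuitry the same linear-code machinery can be pushed beyond this classical bound.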
Study of bit error rate (BER) for multicarrier OFDM
NASA Astrophysics Data System (ADS)
Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad
2012-10-01
Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is being used more and more in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient spectral usage, and its high data rate. Therefore, it has been used in many wired and wireless communication systems such as DSL, wireless networks, and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rates of the individual carriers result in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying key parameters of the system such as the IFFT size, the number of carriers, and the SNR. The results of the simulation give a visualization of what BER to expect when the signal goes through those channels.
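On a flat AWGN channel, each BPSK-modulated subcarrier behaves like a single-carrier BPSK link, so the per-carrier BER can be checked against the closed form 0.5·erfc(√(Eb/N0)). A minimal Monte Carlo sketch with illustrative parameters (not the paper's full OFDM chain):

```python
import math
import random

random.seed(1)

def bpsk_ber(ebn0_db, n_bits=200000):
    """Monte Carlo BER of BPSK (+/-1 symbols) over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))   # noise std for unit symbol energy
    errors = 0
    for _ in range(n_bits):
        bit = random.getrandbits(1)
        tx = 1.0 if bit else -1.0
        rx = tx + random.gauss(0.0, sigma)
        if (rx > 0) != bool(bit):       # hard decision at threshold 0
            errors += 1
    return errors / n_bits

ebn0_db = 4.0
measured = bpsk_ber(ebn0_db)
theory = 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))
```

Fading channels with Doppler and delay spread, as studied in the paper, would multiply the transmitted symbol by a random channel gain before adding noise, which raises the BER floor relative to this AWGN baseline.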
Cheat sensitive quantum bit commitment.
Hardy, Lucien; Kent, Adrian
2004-04-16
We define cheat sensitive cryptographic protocols between mistrustful parties as protocols which guarantee that, if either cheats, the other has some nonzero probability of detecting the cheating. We describe an unconditionally secure cheat sensitive nonrelativistic bit commitment protocol which uses quantum information to implement a task which is classically impossible; we also describe a simple relativistic protocol.
24-bit color image quantization for 8-bits color display based on Y-Cr-Cb
NASA Astrophysics Data System (ADS)
Chang, Long-Wen; Liu, Tsann-Shyong
1993-10-01
A new fast algorithm that can display true 24-bit color images of JPEG and MPEG on an 8-bit color display is described. Instead of conventionally generating a colormap in the R-G-B color space, we perform the analysis of color images in the Y-Cr-Cb color space. Using the Bayes decision rule, the representative values for the Y component are selected based on its histogram. Then, the representative values for the Cr and Cb components are determined from their conditional histograms given Y. Finally, a fast lookup table that can generate R-G-B outputs for Y-Cr-Cb inputs without a matrix transformation is addressed. The experimental results show that good-quality quantized color images can be achieved by the proposed algorithm.
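For reference, the standard full-range BT.601 conversion used by JPEG is the usual starting point for this kind of Y-Cr-Cb analysis (a sketch; the abstract does not state the paper's exact constants):

```python
def rgb_to_ycrcb(r, g, b):
    """Full-range BT.601 RGB -> (Y, Cr, Cb), all values in 0..255."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    return y, cr, cb

# Neutral grays carry no chroma: Cr = Cb = 128
y, cr, cb = rgb_to_ycrcb(100, 100, 100)
```

Separating luma (Y) from chroma (Cr, Cb) is what makes the paper's strategy work: most perceptual detail sits in Y, so Y gets its representative values first and chroma is quantized conditionally on it.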
Classification system adopted for fixed cutter bits
Winters, W.J.; Doiron, H.H.
1988-01-01
The drilling industry has begun adopting the 1987 International Association of Drilling Contractors' (IADC) method for classifying fixed cutter drill bits. By studying the classification codes on bit records and properly applying the new IADC fixed cutter dull grading system to recently run bits, the end-user should be able to improve the selection and usage of fixed cutter bits. Several users are developing databases for fixed cutter bits in an effort to relate field performance to some of the more prominent bit design characteristics.
Stability of single skyrmionic bits
Hagemeister, J.; Romming, N.; von Bergmann, K.; Vedmedenko, E. Y.; Wiesendanger, R.
2015-01-01
The switching between topologically distinct skyrmionic and ferromagnetic states has been proposed as a bit operation for information storage. While long lifetimes of the bits are required for data storage devices, the lifetimes of skyrmions have not been addressed so far. Here we show by means of atomistic Monte Carlo simulations that the field-dependent mean lifetimes of the skyrmionic and ferromagnetic states have a high asymmetry with respect to the critical magnetic field, at which these lifetimes are identical. According to our calculations, the main reason for the enhanced stability of skyrmions is a different field dependence of skyrmionic and ferromagnetic activation energies and a lower attempt frequency of skyrmions rather than the height of energy barriers. We use this knowledge to propose a procedure for the determination of effective material parameters and the quantification of the Monte Carlo timescale from the comparison of theoretical and experimental data. PMID:26465211
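The lifetime argument above rests on the Arrhenius law τ = f₀⁻¹·exp(ΔE/k_BT): both the activation energy ΔE and the attempt frequency f₀ set the mean lifetime. A toy numeric check with invented values (the paper's point is precisely that skyrmions gain stability from a lower f₀, not only from a higher barrier):

```python
import math

def mean_lifetime(barrier_in_kbt, attempt_frequency_hz):
    """Arrhenius estimate of mean lifetime: tau = exp(dE / kB*T) / f0."""
    return math.exp(barrier_in_kbt) / attempt_frequency_hz

# Same barrier (40 kB*T, invented), 1000x lower attempt frequency
tau_low_f0 = mean_lifetime(40.0, 1.0e9)
tau_high_f0 = mean_lifetime(40.0, 1.0e12)
```

At a fixed barrier, a state with a 1000× lower attempt frequency lives 1000× longer, which is the mechanism the simulations identify for skyrmion stability.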
Demonstration of low-power bit-interleaving TDM PON.
Van Praet, Christophe; Chow, Hungkei; Suvakovic, Dusan; Van Veen, Doutje; Dupas, Arnaud; Boislaigue, Roger; Farah, Robert; Lau, Man Fai; Galaro, Joseph; Qua, Gin; Anthapadmanabhan, N Prasanth; Torfs, Guy; Yin, Xin; Vetter, Peter
2012-12-10
A functional demonstration of a bit-interleaving TDM downstream protocol for passive optical networks (Bi-PON) is reported. The proposed protocol offers a significant reduction in dynamic power consumption in the customer premises equipment compared with the conventional TDM protocol. It allows the relevant bits of all aggregated incoming data to be selected immediately after clock and data recovery (CDR) and, hence, allows subsequent hardware to run at a much lower user rate. Comparison of experimental results of FPGA-based implementations of Bi-PON and XG-PON shows that more than 30× energy savings in protocol processing is achievable.
Development of PDC Bits for Downhole Motors
Karasawa, H.; Ohno, T.
1995-01-01
To develop polycrystalline diamond compact (PDC) bits of the full-face type which can be applied to downhole motor drilling, drilling tests on granite and two types of andesite were conducted using bits of 98.43 and 142.88 mm diameter. The bits successfully drilled these types of rock at rotary speeds from 300 to 400 rpm.
14-bit pipeline-SAR ADC for image sensor readout circuits
NASA Astrophysics Data System (ADS)
Wang, Gengyun; Peng, Can; Liu, Tianzhao; Ma, Cheng; Ding, Ning; Chang, Yuchun
2015-03-01
A two-stage 14-bit pipeline-SAR analog-to-digital converter for image sensor readout circuits, comprising a 5.5-bit zero-crossing MDAC and a 9-bit asynchronous SAR ADC, is built in a 0.18 μm CMOS process and described with low power dissipation as well as small chip area. In this design, we employ comparators instead of a high-gain, high-bandwidth amplifier, consuming as little as 20 mW of power to achieve a sampling rate of 40 MSps at 14-bit resolution.
Quantifying the Impact of Single Bit Flips on Floating Point Arithmetic
Elliott, James J; Mueller, Frank; Stoyanov, Miroslav K; Webster, Clayton G
2013-08-01
In high-end computing, the collective surface area, smaller fabrication sizes, and increasing density of components have led to an increase in the number of observed bit flips. If mechanisms are not in place to detect them, such flips produce silent errors, i.e., the code returns a result that deviates from the desired solution by more than the allowed tolerance and the discrepancy cannot be distinguished from the standard numerical error associated with the algorithm. These phenomena are believed to occur more frequently in DRAM, but logic gates, arithmetic units, and other circuits are also susceptible to bit flips. Previous work has focused on algorithmic techniques for detecting and correcting bit flips in specific data structures; however, these techniques suffer from a lack of generality and often cannot be implemented in heterogeneous computing environments. Our work takes a novel approach to this problem. We focus on quantifying the impact of a single bit flip on specific floating-point operations. We analyze the error induced by flipping specific bits in the most widely used IEEE floating-point representation in an architecture-agnostic manner, i.e., without requiring proprietary information such as bit flip rates and vendor-specific circuit designs. We initially study dot products of vectors and demonstrate that not all bit flips create a large error and, more importantly, that the expected value of the relative magnitude of the error is very sensitive to the bit pattern of the binary representation of the exponent, which strongly depends on scaling. Our results are derived analytically and then verified experimentally with Monte Carlo sampling of random vectors. Furthermore, we consider the natural resilience properties of solvers based on fixed-point iteration, and we demonstrate how the resilience of the Jacobi method for linear equations can be significantly improved by rescaling the associated matrix.
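The exponent-sensitivity result is easy to reproduce at small scale: flipping a low mantissa bit of an IEEE-754 double is harmless, while flipping an exponent bit can be catastrophic. A sketch using Python's struct module (my illustration, not the paper's experimental setup):

```python
import math
import struct

def flip_bit(x, bit):
    """Return x with bit `bit` of its IEEE-754 double encoding flipped.
    Bit 0 is the lowest mantissa bit; bits 52-62 form the exponent field."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

x = 1.0
mantissa_hit = flip_bit(x, 0)    # relative error ~2^-52: invisible
halved = flip_bit(x, 52)         # lowest exponent bit: value halves
exponent_hit = flip_bit(x, 62)   # top exponent bit of 1.0: infinity
```

The same flipped bit position thus produces anywhere from a round-off-sized perturbation to a non-finite value, depending entirely on which field of the representation it lands in and on the operand's scaling.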
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
Moon, Inkyu; Yi, Faliu; Lee, Yeon H; Javidi, Bahram
2014-05-01
In this work, we evaluate the avalanche effect and bit independence properties of the double random phase encoding (DRPE) algorithm in the Fourier and Fresnel domains. Experimental results show that DRPE has excellent bit independence characteristics in both the Fourier and Fresnel domains. However, DRPE achieves better avalanche effect results in the Fresnel domain than in the Fourier domain. DRPE gives especially poor avalanche effect results in the Fourier domain when only one bit is changed in the plaintext or in the encryption key. Despite this, DRPE shows satisfactory avalanche effect results in the Fresnel domain when any other number of bits changes in the plaintext or in the encryption key. To the best of our knowledge, this is the first report on the avalanche effect and bit independence behaviors of optical encryption approaches for bit units.
Characterization of L10-FePt/Fe based exchange coupled composite bit pattern media
NASA Astrophysics Data System (ADS)
Wang, Hao; Li, Weimin; Rahman, M. Tofizur; Zhao, Haibao; Ding, Jun; Chen, Yunjie; Wang, Jian-Ping
2012-04-01
L10-FePt exchange coupled composite (ECC) bit patterned media has been considered as a potential candidate to achieve high thermal stability and writability for future high-density magnetic recording. In this paper, FePt-based ECC bit patterned structures with 31 nm bit size and 37 nm pitch size were fabricated using di-block copolymer lithography on a 3 inch wafer. Remanent states were tracked using magnetic force microscopy (MFM). DC demagnetization (DCD) curves were plotted by counting the reversed bits in the MFM images. Magnetic domains in which the magnetizations of neighboring bits were aligned in the same direction were observed in the MFM patterns. Thermal decay measurements were performed for the samples to obtain the thermal stability and gain factor. The thermal barrier was found to be around 210 kBT with a gain factor up to 1.57 for the bit patterned structure FePt(4 nm)/Fe(4 nm).
BIT BY BIT: A Game Simulating Natural Language Processing in Computers
ERIC Educational Resources Information Center
Kato, Taichi; Arakawa, Chuichi
2008-01-01
BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
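A toy version of the game's encode/exchange step can be written in a few lines (my construction for illustration; the abstract does not give the game's exact rules): characters become 8-bit strings, which are then XORed with a repeating binary key before being exchanged.

```python
def encode(word, key):
    """Turn a word into binary, then XOR each bit with a repeating key."""
    bits = "".join(format(ord(ch), "08b") for ch in word)
    return "".join(str(int(b) ^ key[i % len(key)]) for i, b in enumerate(bits))

def decode(bitstring, key):
    """XOR with the same key to undo the encryption, then rebuild chars."""
    bits = "".join(str(int(b) ^ key[i % len(key)]) for i, b in enumerate(bitstring))
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(c, 2)) for c in chunks)

key = [1, 0, 1, 1]
ciphertext = encode("bit", key)
```

Because XOR is its own inverse, applying the key twice recovers the plaintext, which is the property that lets participants exchange and decode each other's messages.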
13 bits per incident photon optical communications demonstration
NASA Astrophysics Data System (ADS)
Farr, William H.; Choi, John M.; Moision, Bruce
2013-03-01
Minimizing the mass and power burden of a laser transceiver on a spacecraft for interplanetary optical communications links requires operation in a "photon starved" regime. The relevant performance metric in the photon-starved regime is Photon Information Efficiency (PIE), with units of bits per photon. Measuring this performance at the detector plane of an optical communications receiver, prior art has achieved performance levels around one bit per incident photon using pulse position modulation (PPM). By combining a PPM modulator with greater than 75 dB extinction ratio with a tungsten silicide (WSi) superconducting nanowire detector with greater than 83% detection efficiency, we have demonstrated an optical communications link at 13 bits per incident photon.
Bit-string methods for selective compound acquisition
Rhodes; Willett; Dunbar; Humblet
2000-03-01
Selective compound acquisition programs need to ensure that the compounds that are chosen do not contain undesirable functionality. This is easy to achieve if a supplier is prepared to provide unambiguous structure representations for the compounds that they have available: this paper discusses selection techniques that can be used when a supplier is prepared to make available only fragment bit-string representations for the compounds in their catalog. Experiments with three databases and three types of bit-string show that a simple k-nearest-neighbor searching method provides a surprisingly effective, although far from perfect, way of selecting compounds when only bit-string representations are available. A second approach, based on the use of a fragment weighting scheme analogous to those used in substructural analysis studies, proved to be noticeably less effective in operation.
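The nearest-neighbour selection method the paper evaluates can be sketched directly on 0/1 fingerprints using the Tanimoto coefficient, the usual similarity measure for fragment bit-strings. Compound names and fingerprints below are invented for illustration:

```python
def tanimoto(a, b):
    """Tanimoto coefficient of two equal-length 0/1 fingerprints."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 0.0

def nearest_neighbour_score(candidate, collection):
    """Similarity of a candidate to its closest in-house compound."""
    return max(tanimoto(candidate, fp) for fp in collection)

in_house = [[1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 0, 1]]
catalog = {"cmpd_A": [1, 1, 0, 0, 1, 1], "cmpd_B": [0, 0, 0, 1, 1, 0]}

# For diversity-oriented acquisition, prefer the catalog compound that is
# least similar to anything already owned
pick = min(catalog, key=lambda n: nearest_neighbour_score(catalog[n], in_house))
```

Here cmpd_B is chosen because its best match against the in-house set scores lower than cmpd_A's; the paper's finding is that even this simple k-nearest-neighbour scoring on bare bit-strings is surprisingly effective.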
Low-bit-rate subband image coding with matching pursuits
NASA Astrophysics Data System (ADS)
Rabiee, Hamid; Safavian, S. R.; Gardos, Thomas R.; Mirani, A. J.
1998-01-01
In this paper, a novel multiresolution algorithm for low bit-rate image compression is presented. High-quality low bit-rate image compression is achieved by first decomposing the image into approximation and detail subimages with a shift-orthogonal multiresolution analysis. Then, at the coarsest resolution level, the coefficients of the transformation are encoded by an orthogonal matching pursuit algorithm with a wavelet packet dictionary. Our dictionary consists of convolutional splines of up to order two for the detail and approximation subbands. The intercorrelation between the various resolutions is then exploited by using the same bases from the dictionary to encode the coefficients of the finer resolution bands at the corresponding spatial locations. To further exploit the spatial correlation of the coefficients, the embedded zerotree wavelet (EZW) algorithm is used to identify potential zero trees. The coefficients of the representation are then quantized, arithmetic encoded at each resolution, and packed into a scalable bit-stream structure. Our new algorithm is highly bit-rate scalable, and performs better than the segmentation-based matching pursuit and EZW encoders at lower bit rates, based on subjective image quality and peak signal-to-noise ratio.
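The greedy pursuit step at the heart of such coders can be sketched in a few lines. This toy uses plain matching pursuit on an orthonormal dictionary (where it coincides with the orthogonal variant); the paper's actual dictionary is convolutional splines, and the orthogonal variant additionally re-fits coefficients at each step:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iter):
    """Greedy MP: repeatedly subtract the residual's projection onto the
    best-correlated atom. Atoms are assumed unit-norm."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        j = max(range(len(atoms)), key=lambda k: abs(dot(residual, atoms[k])))
        c = dot(residual, atoms[j])
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # toy basis
signal = [3.0, 1.0, 0.0, 2.0]
coeffs, residual = matching_pursuit(signal, atoms, 3)
```

The coder transmits only the chosen atom indices and quantized coefficients, which is what makes the representation both sparse and bit-rate scalable.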
Potter, Beth K; Chakraborty, Pranesh; Kronick, Jonathan B; Wilson, Kumanan; Coyle, Doug; Feigenbaum, Annette; Geraghty, Michael T; Karaceper, Maria D; Little, Julian; Mhanni, Aizeddin; Mitchell, John J; Siriwardena, Komudi; Wilson, Brenda J; Syrowatka, Ania
2013-06-01
Across all areas of health care, decision makers are in pursuit of what Berwick and colleagues have called the "triple aim": improving patient experiences with care, improving health outcomes, and managing health system impacts. This is challenging in a rare disease context, as exemplified by inborn errors of metabolism. There is a need for evaluative outcomes research to support effective and appropriate care for inborn errors of metabolism. We suggest that such research should consider interventions at both the level of the health system (e.g., early detection through newborn screening, programs to provide access to treatments) and the level of individual patient care (e.g., orphan drugs, medical foods). We have developed a practice-based evidence framework to guide outcomes research for inborn errors of metabolism. Focusing on outcomes across the triple aim, this framework integrates three priority themes: tailoring care in the context of clinical heterogeneity; a shift from "urgent care" to "opportunity for improvement"; and the need to evaluate the comparative effectiveness of emerging and established therapies. Guided by the framework, a new Canadian research network has been established to generate knowledge that will inform the design and delivery of health services for patients with inborn errors of metabolism and other rare diseases.
An 18-bit high performance audio σ-Δ D/A converter
NASA Astrophysics Data System (ADS)
Hao, Zhang; Xiaowei, Huang; Yan, Han; Cheung, Ray C.; Xiaoxia, Han; Hao, Wang; Guo, Liang
2010-07-01
A multi-bit quantized high performance sigma-delta (σ-Δ) audio DAC is presented. Compared to its single-bit counterpart, the multi-bit quantization offers many advantages, such as a simpler σ-Δ modulator circuit, lower clock frequency, and smaller spurious tones. With the data weighted average (DWA) mismatch shaping algorithm, element mismatch errors induced by multi-bit quantization can be pushed out of the signal band, hence the noise floor inside the signal band is greatly lowered. To cope with the crosstalk between digital and analog circuits, every analog component is surrounded by a guard ring, which is an innovative attempt. The 18-bit DAC with the above techniques, which is implemented in a 0.18 μm mixed-signal CMOS process, occupies a core area of 1.86 mm². The measured dynamic range (DR) and peak SNDR are 96 dB and 88 dB, respectively.
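The DWA idea is simple to state in code: at each sample, the unit elements are enabled starting from a rotating pointer, so every element is used equally often on average and the mismatch error is first-order noise-shaped. A schematic sketch with invented input codes (element selection logic only, no analog modeling):

```python
def dwa_select(code, n_elements, pointer):
    """Enable `code` unit elements starting at `pointer`, wrapping around;
    return the chosen element indices and the advanced pointer."""
    chosen = [(pointer + i) % n_elements for i in range(code)]
    return chosen, (pointer + code) % n_elements

usage = [0] * 8          # how often each of 8 unit elements gets used
ptr = 0
for code in [3, 5, 2, 7]:    # invented digital input codes
    chosen, ptr = dwa_select(code, 8, ptr)
    for e in chosen:
        usage[e] += 1
```

After the rotation, element usage counts differ by at most one, which is why a fixed per-element mismatch averages out instead of folding into the audio band.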
Arbitrarily Long Relativistic Bit Commitment
NASA Astrophysics Data System (ADS)
Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony
2015-12-01
We consider the recent relativistic bit commitment protocol introduced by Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015)] and present a new security analysis against classical attacks. In particular, while the initial complexity of the protocol scales double exponentially with the commitment time, our analysis shows that the correct dependence is only linear. This has dramatic implications in terms of implementation: in particular, the commitment time can easily be made arbitrarily long, by only requiring both parties to communicate classically and perform efficient classical computation.
NASA Astrophysics Data System (ADS)
Tur, Moshe; Carmeli, Ran
1993-08-01
A Lyot depolarizer is used in a bit-rate limiter of the Mach-Zehnder type in order to reduce the phase-induced intensity noise, which otherwise sets a floor on the bit error ratio. Results show an improvement of 3 dB, at the expense of longer arms and a more complicated design.
Switching field distribution of exchange coupled ferri-/ferromagnetic composite bit patterned media
NASA Astrophysics Data System (ADS)
Oezelt, Harald; Kovacs, Alexander; Fischbacher, Johann; Matthes, Patrick; Kirk, Eugenie; Wohlhüter, Phillip; Heyderman, Laura Jane; Albrecht, Manfred; Schrefl, Thomas
2016-09-01
We investigate the switching field distribution and the resulting bit error rate of exchange coupled ferri-/ferromagnetic bilayer island arrays by micromagnetic simulations. Using islands with varying microstructure and anisotropic properties, the intrinsic switching field distribution is computed. The dipolar contribution to the switching field distribution is obtained separately by using a model of a triangular patterned island array resembling 1.4 Tb/in^2 bit patterned media. Both contributions are computed for different thicknesses of the soft exchange coupled ferrimagnet and also for ferromagnetic single phase FePt islands. A bit patterned medium with a bilayer structure of FeGd(5 nm)/FePt(5 nm) shows a bit error rate of 10^-4 with a write field of 1.16 T.
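For illustration, a bit error of this kind is simply the tail probability that an island's switching field exceeds the available write field. Assuming a Gaussian switching field distribution (with made-up mean and width, not values computed in the paper), it can be sketched as:

```python
import math

def bit_error_rate(write_field, mean_sf, sigma_sf):
    """Probability that an island's switching field exceeds the write
    field, assuming a Gaussian switching field distribution
    (illustrative model, not the paper's micromagnetic result)."""
    z = (write_field - mean_sf) / (sigma_sf * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# Hypothetical numbers: mean switching field 0.9 T, SFD width 0.07 T,
# write field 1.16 T as quoted above.
print(bit_error_rate(1.16, 0.9, 0.07))
```

With these invented parameters the tail probability comes out near the 10^-4 scale quoted above, showing how tightly the BER depends on the ratio of write-field margin to SFD width.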
Bit Threads and Holographic Entanglement
NASA Astrophysics Data System (ADS)
Freedman, Michael; Headrick, Matthew
2017-05-01
The Ryu-Takayanagi (RT) formula relates the entanglement entropy of a region in a holographic theory to the area of a corresponding bulk minimal surface. Using the max flow-min cut principle, a theorem from network theory, we rewrite the RT formula in a way that does not make reference to the minimal surface. Instead, we invoke the notion of a "flow", defined as a divergenceless norm-bounded vector field, or equivalently a set of Planck-thickness "bit threads". The entanglement entropy of a boundary region is given by the maximum flux out of it of any flow, or equivalently the maximum number of bit threads that can emanate from it. The threads thus represent entanglement between points on the boundary, and naturally implement the holographic principle. As we explain, this new picture clarifies several conceptual puzzles surrounding the RT formula. We give flow-based proofs of strong subadditivity and related properties; unlike the ones based on minimal surfaces, these proofs correspond in a transparent manner to the properties' information-theoretic meanings. We also briefly discuss certain technical advantages that the flows offer over minimal surfaces. In a mathematical appendix, we review the max flow-min cut theorem on networks and on Riemannian manifolds, and prove in the network case that the set of max flows varies Lipschitz continuously in the network parameters.
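The max flow-min cut principle invoked here is easy to check on a toy network. The sketch below is a standard BFS-based augmenting-path (Edmonds-Karp) computation with invented capacities, showing that the maximum flux out of the source equals the capacity of the minimum cut:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on a small directed network.
    capacity: dict of dicts, capacity[u][v] = edge capacity."""
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    flow = {u: {v: 0 for v in nodes} for u in nodes}
    res = lambda u, v: capacity.get(u, {}).get(v, 0) - flow[u][v]

    def augmenting_path():
        parent, queue = {source: None}, deque([source])
        while queue:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and res(u, v) > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None                      # no residual path: flow is maximal

    total = 0
    while True:
        parent = augmenting_path()
        if parent is None:
            return total
        # bottleneck residual capacity along the augmenting path
        push, v = float("inf"), sink
        while parent[v] is not None:
            push = min(push, res(parent[v], v))
            v = parent[v]
        # push flow along the path (and record it on reverse edges)
        v = sink
        while parent[v] is not None:
            u = parent[v]
            flow[u][v] += push
            flow[v][u] -= push
            v = u
        total += push

# Toy network: edges s->a (3), s->b (2), a->t (2), b->t (2).
# The minimum cut separates {t} from the rest with capacity 2 + 2 = 4,
# and no flow can push more than 4 units from s to t.
cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 2}}
print(max_flow(cap, "s", "t"))  # 4
```

In the bit-thread picture, each unit of flow here plays the role of one thread leaving the boundary region, and the min cut plays the role of the RT minimal surface.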
Stability of single skyrmionic bits
NASA Astrophysics Data System (ADS)
Vedmedenko, Olena; Hagemeister, Julian; Romming, Niklas; von Bergmann, Kirsten; Wiesendanger, Roland
The switching between topologically distinct skyrmionic and ferromagnetic states has been proposed as a bit operation for information storage. While long lifetimes of the bits are required for data storage devices, the lifetimes of skyrmions have not been addressed so far. Here we show by means of atomistic Monte Carlo simulations that the field-dependent mean lifetimes of the skyrmionic and ferromagnetic states have a high asymmetry with respect to the critical magnetic field, at which these lifetimes are identical. According to our calculations, the main reason for the enhanced stability of skyrmions is a different field dependence of skyrmionic and ferromagnetic activation energies and a lower attempt frequency of skyrmions rather than the height of energy barriers. We use this knowledge to propose a procedure for the determination of effective material parameters and the quantification of the Monte Carlo timescale from the comparison of theoretical and experimental data. Financial support from the DFG in the framework of the SFB668 is acknowledged.
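The lifetime argument above rests on the Arrhenius law, tau = f0^-1 exp(dE / kT). The toy numbers below (chosen purely for illustration, not taken from the simulations) show how a lower attempt frequency alone can make skyrmionic bits far longer-lived even at identical barrier height:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def lifetime(attempt_freq_hz, barrier_ev, temp_k):
    """Arrhenius mean lifetime: tau = exp(dE / kT) / f0."""
    return math.exp(barrier_ev / (K_B * temp_k)) / attempt_freq_hz

# Hypothetical values: equal 0.10 eV barriers at 30 K, but the
# skyrmion's attempt frequency is two orders of magnitude lower
# than the ferromagnet's.
tau_sk = lifetime(1e9, 0.10, 30.0)
tau_fm = lifetime(1e11, 0.10, 30.0)
print(tau_sk / tau_fm)  # 100x longer-lived at equal barrier height
```

This is exactly the effect the abstract emphasizes: stability is governed not only by the barrier but also by the prefactor.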
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascaded coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Some example schemes are evaluated; they appear quite suitable for satellite down-link error control.
Performance analyses of subcarrier BPSK modulation over M turbulence channels with pointing errors
NASA Astrophysics Data System (ADS)
Ma, Shuang; Li, Ya-tian; Wu, Jia-bin; Geng, Tian-wen; Wu, Zhiyong
2016-05-01
An aggregated channel model is obtained by fitting a Weibull distribution, which includes the effects of atmospheric attenuation, M-distributed atmospheric turbulence, and nonzero-boresight pointing errors. With this approximate channel model, the bit error rate (BER) and the ergodic capacity of free-space optical (FSO) communication systems using subcarrier binary phase-shift keying (BPSK) modulation are analyzed. A closed-form expression for the BER is derived by using the generalized Gauss-Laguerre quadrature rule, and bounds on the ergodic capacity are discussed. Monte Carlo simulation is provided to confirm the validity of the BER expressions and the bounds on the ergodic capacity.
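The quadrature approach can be sketched for a simpler case: averaging the instantaneous BPSK bit error rate, 0.5 * erfc(sqrt(SNR)), over a gamma-distributed SNR using Gauss-Laguerre nodes. This is a stand-in for the paper's generalized rule over the aggregated M/Weibull channel, and the shape and scale values are invented:

```python
import math
from numpy.polynomial.laguerre import laggauss

def bpsk_ber(snr):
    """Instantaneous BPSK bit error rate, 0.5 * erfc(sqrt(SNR))."""
    return 0.5 * math.erfc(math.sqrt(snr))

def avg_ber_gamma(k, theta, n=30):
    """Average the BPSK BER over a gamma(k, theta) SNR distribution.
    Substituting snr = theta*x turns the average into a Gauss-Laguerre
    integral: (1/Gamma(k)) * sum_i w_i * x_i^(k-1) * BER(theta*x_i)."""
    x, w = laggauss(n)               # nodes/weights for weight e^(-x)
    total = sum(wi * xi ** (k - 1) * bpsk_ber(theta * xi)
                for xi, wi in zip(x, w))
    return total / math.gamma(k)

# Invented channel parameters: shape k = 2, mean SNR = k*theta = 10
print(avg_ber_gamma(2.0, 5.0))
```

The same substitution trick is what turns the closed-form channel PDF into a finite weighted sum of instantaneous BER evaluations.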
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
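The word-aligned idea — runs of identical words collapsed into a single counter entry, mixed words kept literal — can be sketched as follows. This is a simplified 8-bit-word illustration of the scheme's flavor, not FastBit's actual WAH word format:

```python
def wah_compress(bits, w=8):
    """Compress a bit list into word-aligned runs: a ('fill', bit, n)
    entry stands for n consecutive all-0 or all-1 words; any mixed
    word is stored as a ('literal', word) entry. Simplified 8-bit words."""
    bits = bits + [0] * (-len(bits) % w)          # pad to a word boundary
    words = [tuple(bits[i:i + w]) for i in range(0, len(bits), w)]
    out = []
    for word in words:
        if all(b == word[0] for b in word):        # all-0 or all-1 word
            if out and out[-1][0] == "fill" and out[-1][1] == word[0]:
                out[-1] = ("fill", word[0], out[-1][2] + 1)
            else:
                out.append(("fill", word[0], 1))
        else:
            out.append(("literal", word))
    return out

def wah_decompress(code, w=8):
    bits = []
    for entry in code:
        if entry[0] == "fill":
            bits.extend([entry[1]] * (w * entry[2]))
        else:
            bits.extend(entry[1])
    return bits

# A sparse bitmap: 64 zeros, one mixed word, then 24 ones compresses
# to three entries: an 8-word zero fill, one literal, a 3-word one fill.
sparse = [0] * 64 + [1, 0, 1, 1, 0, 0, 0, 0] + [1] * 24
code = wah_compress(sparse)
print(code)
```

Because fills stay word-aligned, a bitwise AND or OR of two compressed bitmaps can skip over entire fill runs without decompressing, which is the source of WAH's query speed.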
Quantum Error Correction with Biased Noise
NASA Astrophysics Data System (ADS)
Brooks, Peter
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled
A 14-bit 100-MS/s CMOS pipelined ADC featuring 83.5-dB SFDR
NASA Astrophysics Data System (ADS)
Nan, Zhao; Qi, Wei; Huazhong, Yang; Hui, Wang
2014-09-01
This paper demonstrates a 14-bit 100 MS/s CMOS pipelined analog-to-digital converter (ADC). The nonlinearity model for bootstrapped switches is established to optimize the design parameters of bootstrapped switches, and the calculations based on this model agree well with the measurement results. In order to achieve high linearity, a gradient-mismatch cancelling technique is proposed, which eliminates the first-order gradient error of the sampling capacitors by combining the arrangement of reference control signals and the capacitor layout. Fabricated in a 0.18-μm CMOS technology, this ADC occupies a 10.16-mm2 area. With statistics-based background calibration of the finite opamp gain in the first stage, the ADC achieves an 83.5-dB spurious-free dynamic range and a 63.7-dB signal-to-noise-and-distortion ratio, and consumes 393 mW from a 2-V supply.
Error-thresholds for qudit-based topological quantum memories
NASA Astrophysics Data System (ADS)
Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.
2014-03-01
Extending the quantum computing paradigm from qubits to higher-dimensional quantum systems allows for increased channel capacity and a more efficient implementation of quantum gates. However, to perform reliable computations an efficient error-correction scheme adapted for these multi-level quantum systems is needed. A promising approach is via topological quantum error correction, where stability to external noise is achieved by encoding quantum information in non-local degrees of freedom. A key figure of merit is the error threshold which quantifies the fraction of physical qudits that can be damaged before logical information is lost. Here we analyze the resilience of generalized topological memories built from d-level quantum systems (qudits) to bit-flip errors. The error threshold is determined by mapping the quantum setup to a classical Potts-like model with bond disorder, which is then investigated numerically using large-scale Monte Carlo simulations. Our results show that topological error correction with qutrits exhibits an improved error threshold in comparison to qubit-based systems.
Stinger Enhanced Drill Bits For EGS
Durrand, Christopher J.; Skeem, Marcus R.; Crockett, Ron B.; Hall, David R.
2013-04-29
The project objectives were to design, engineer, test, and commercialize a drill bit suitable for drilling in hard rock and high temperature environments (10,000 meters) likely to be encountered in drilling enhanced geothermal wells. The goal is to provide a drill bit that offers a threefold increase in penetration rate over conventional drilling. Novatek has sought to leverage its polycrystalline diamond technology and a new conical cutter shape, known as the Stinger®, for this purpose. Novatek has developed a fixed bladed bit, known as the JackBit®, populated with both shear cutters and Stingers, that is currently being tested by major drilling companies for geothermal and oil and gas applications. The JackBit concept comprises a fixed bladed bit with a center indenter, referred to as the Jack. The JackBit has been extensively tested in the lab and in the field. The JackBit has been transferred to a major bit manufacturer and oil service company. Except for the attached published reports all other information is confidential.
NASA Astrophysics Data System (ADS)
Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero
2016-10-01
In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios of nonspectral flatness between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at both levels: the physical level for any digital and analog pulse shaping, and the bit-data level and its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit-error-rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, a back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.
Unequal error protection for H.263 video over indoor DECT channel
NASA Astrophysics Data System (ADS)
Abrardo, Andrea; Barni, Mauro; Garzelli, Andrea
1999-12-01
Several techniques have been proposed to limit the effect of error propagation in video sequences coded at very low bit rates. The best performance is achieved by combined FEC and ARQ coding strategies. However, retransmission of corrupted data frames introduces additional delay, which may be critical for real-time bidirectional communications or when the round-trip delay of data frames is high. In such cases, only a FEC strategy is feasible. Fully reliable protection of the H.263 stream would produce a significant increase in the overall transmission bit rate. In this paper, an unequal error protection (UEP) FEC coding strategy is proposed. The proposed technique operates by protecting only the most important bits of an H.263 coded video with periodically INTRA-refreshed GOBs. ARQ techniques are not considered, to avoid delays and simplify the receiver structure. Experimental tests are carried out by simulating video transmission over a DECT channel in an indoor environment. The results, in terms of PSNR and overall bit rate, prove the effectiveness of the proposed UEP FEC coding.
Photon-number-resolving detector with 10 bits of resolution
Jiang, Leaf A.; Dauler, Eric A.; Chang, Joshua T
2007-06-15
A photon-number-resolving detector with single-photon resolution is described and demonstrated. It has 10 bits of resolution, does not require cryogenic cooling, and is sensitive to near-IR wavelengths. This performance is achieved by flood-illuminating a 32x32-element InxGa1-xAsP Geiger-mode avalanche photodiode array that has an integrated counter and digital readout circuit behind each pixel.
Steganography forensics method for detecting least significant bit replacement attack
NASA Astrophysics Data System (ADS)
Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao
2015-01-01
We present an image forensics method to detect the least significant bit replacement steganography attack. The proposed method provides fine-grained forensics features by using a hierarchical structure that combines pixel correlation and bit-plane correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit-plane and each of the others. The generated forensics features capture a susceptibility (changeability) that is drastically altered when the cover image is embedded with data to form a stego image. We developed a statistical model based on the forensics features and used a least squares support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust to content-preserving manipulations such as JPEG compression, added noise, and filtering. (4) The proposed method provides satisfactory generalization capability.
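The attack itself is simple, which is why detecting it matters: LSB replacement overwrites the least significant bit-plane with message bits, disturbing exactly the pixel and bit-plane correlations the forensics features measure. A minimal sketch (illustrative only, not the paper's embedding model):

```python
def lsb_embed(pixels, message_bits):
    """Replace each pixel's least significant bit with a message bit."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def lsb_extract(pixels, n):
    """Read back the first n least significant bits."""
    return [p & 1 for p in pixels[:n]]

cover = [130, 41, 200, 77, 9, 254]       # made-up 8-bit gray values
msg = [1, 0, 1, 1]
stego = lsb_embed(cover, msg)
print(lsb_extract(stego, 4))             # recovers the message
# each pixel changes by at most one gray level, which is why the
# attack is visually invisible yet statistically detectable
print([abs(a - b) for a, b in zip(cover, stego)])
```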
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
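A modified Hamming code of this kind places check bits at power-of-two positions and adds an overall parity bit, giving single-error correction and double-error detection (SECDED). The sketch below scales the idea down to one 8-bit data word (the flight EDAC handled 32-bit words):

```python
def hamming_encode(data):
    """Hamming SECDED encode: check bits at power-of-two positions
    (1-indexed), plus an overall parity bit stored at index 0."""
    r = 0
    while (1 << r) < len(data) + r + 1:
        r += 1
    n = len(data) + r
    code = [0] * (n + 1)
    bits = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):              # not a power of two: data position
            code[pos] = next(bits)
    for i in range(r):                   # check bit p covers every position
        p = 1 << i                       # whose index has bit p set
        code[p] = sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2
    code[0] = sum(code) % 2              # overall parity for double detection
    return code

def hamming_decode(code):
    """Return (data, status); corrects single errors, flags double errors."""
    n = len(code) - 1
    syndrome = 0
    for pos in range(1, n + 1):
        if code[pos]:
            syndrome ^= pos
    overall_ok = sum(code) % 2 == 0
    if syndrome and not overall_ok:      # single error: syndrome = position
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                       # parity consistent, syndrome set
        status = "double-error"
    elif not overall_ok:                 # only the overall parity bit flipped
        status = "corrected"
    else:
        status = "ok"
    data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
    return data, status

word = hamming_encode([1, 0, 1, 1, 0, 0, 1, 0])
word[5] ^= 1                             # single bit-flip in transit
print(hamming_decode(word))              # original data, "corrected"
```

Exhaustively flipping every single bit and every pair of bits is precisely the kind of test-case sweep the breadboard fault simulator described above would exercise.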
Hey! A Brown Recluse Spider Bit Me!
Quantum error correction via robust probe modes
Yamaguchi, Fumiko; Nemoto, Kae; Munro, William J.
2006-06-15
We propose a scheme for quantum error correction using robust continuous variable probe modes, rather than fragile ancilla qubits, to detect errors without destroying data qubits. The use of such probe modes reduces the required number of expensive qubits in error correction and allows efficient encoding, error detection, and error correction. Moreover, the elimination of the need for direct qubit interactions significantly simplifies the construction of quantum circuits. We will illustrate how the approach implements three existing quantum error correcting codes: the three-qubit bit-flip (phase-flip) code, the Shor code, and an erasure code.
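The three-qubit bit-flip code mentioned above locates a single flip from two parity checks without reading the data value; the probe-mode scheme performs these checks non-destructively in the quantum setting. A classical simulation of the error-location logic:

```python
def encode(bit):
    """Three-qubit bit-flip code: 0 -> 000, 1 -> 111 (classical view)."""
    return [bit, bit, bit]

def syndrome(q):
    """Parity checks Z1Z2 and Z2Z3: read only parities, never the value."""
    return (q[0] ^ q[1], q[1] ^ q[2])

def correct(q):
    """Map the syndrome to the flipped position and undo the flip."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))
    if flip is not None:
        q[flip] ^= 1
    return q

# any single bit-flip is located and reversed
for victim in range(3):
    block = encode(1)
    block[victim] ^= 1
    print(correct(block))  # [1, 1, 1] each time
```

In the proposed scheme, a continuous-variable probe mode would accumulate these parities in its phase, so errors are detected without collapsing the encoded data.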
Drag blade bit with diamond cutting elements
Radtke, R. P.; Morris, W. V.
1985-02-19
A drag blade bit for connection on a drill string has a hollow body on which are welded a plurality of cutting or drilling blades. The blades extend longitudinally and radially of the bit body and terminate in relatively flat, radially extending cutting edges. A plurality of cutters are positioned in and spaced along the cutting edges and consist of cylindrical sintered carbide inserts with polycrystalline diamond cutting elements mounted thereon. Hardfacing is provided on the cutting edges between the cutters and on the other surfaces of the blades and the bit body subject to abrasive wear. One or more nozzles are positioned in passages from the interior of the bit body for directing flow of drilling fluid for flushing cuttings from the well bore and for cooling the bit.
Rotary drill bit with rotary cutters
Brandenstein, M.; Ernst, H.M.; Kunkel, H.; Olschewski, A.; Walter, L.
1981-03-31
A rotary drill bit is described that has a drill bit body and at least one trunnion projecting from the drill bit body and a rotary cutter supported on at least one pair of radial rolling bearings on the trunnion. The rolling elements of at least one bearing are guided on at least one axial end facing the drill bit body in an outer bearing race groove incorporated in the bore of the rotary cutter. The inner bearing race groove is formed on the trunnion for the rolling elements of the radial roller bearing. A filling opening is provided for assembly of the rolling elements comprising a channel which extends through the drill bit body and trunnion and is essentially axially oriented, having one terminal end adjacent the inner bearing race groove, and at least one filler piece for sealing the opening. The filling opening is arranged to provide a common filling means for each radial bearing.
Rotary drill bit with rotary cutters
Lachonius, L.
1981-04-28
A rotary drill bit is described having a drill bit body and at least one trunnion projecting from the drill bit body and a rotary cutter supported on at least one radial roller bearing on the trunnion. The rolling elements of the bearing are guided on at least one axial end facing the drill bit body in an outer bearing race groove incorporated in the bore of the rotary cutter. The inner bearing race groove is formed on the trunnion for the rolling elements of the radial roller bearing. At least one filling opening is provided which extends through the drill bit body and trunnion and is essentially axially oriented, having one terminal end adjacent the inner bearing race groove, and at least one pair of filler pieces for sealing the opening. One of the filler pieces is made of an elastically compressible material.
Rotary drill bit with rotary cutter
Brandenstein, M.; Kunkel, H.; Olschewski, A.; Walter, L.
1981-03-17
A rotary drill bit having a drill bit body and at least one trunnion projecting from the drill bit body and a rotary cutter supported on at least one radial roller bearing on the trunnion. The rolling elements of the bearing are guided on at least one axial end facing the drill bit body in an outer bearing race groove incorporated in the bore of the rotary cutter. The inner bearing race groove is formed on the trunnion for the rolling elements of the radial roller bearing. At least one filling opening is provided which extends through the drill bit body and trunnion and is essentially axially oriented having one terminal end adjacent the inner bearing race groove and at least one filler piece for sealing the opening.
NASA Technical Reports Server (NTRS)
Fujiwara, Toru; Kasami, Tadao; Lin, Shu
1989-01-01
The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
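Given a code's weight distribution A_i, the probability of undetected error on a binary symmetric channel with bit-error rate p follows directly: P_ud = sum_i A_i p^i (1-p)^(n-i). A sketch for a tiny code (the (3,2) single-parity-check code, used here only as an example — not the 802.3 polynomial itself):

```python
def p_undetected(weights, n, p):
    """P_ud = sum_i A_i * p^i * (1-p)^(n-i),
    weights = {i: A_i} over nonzero codeword weights i >= 1."""
    return sum(a * p**i * (1 - p)**(n - i) for i, a in weights.items())

# (3,2) single-parity-check code: codewords 000, 011, 101, 110.
# Every undetected error pattern is itself a nonzero codeword,
# so the weight distribution is just A_2 = 3.
A = {2: 3}
for p in (1e-5, 1e-3, 0.5):
    print(p, p_undetected(A, 3, p))
```

At p = 1/2 this gives (2^k - 1)/2^n = 3/8, the standard worst-case value; the study above evaluates the same sum with the much larger weight distributions of the shortened Hamming codes.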
Masking of errors in transmission of VAPC-coded speech
NASA Technical Reports Server (NTRS)
Cox, Neil B.; Froese, Edwin L.
1990-01-01
A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.
High performance 14-bit pipelined redundant signed digit ADC
NASA Astrophysics Data System (ADS)
Narula, Swina; Pandey, Sujata
2016-03-01
A novel architecture of a pipelined redundant-signed-digit analog-to-digital converter (RSD-ADC) is presented, featuring a high signal-to-noise ratio (SNR), spurious-free dynamic range (SFDR), and signal-to-noise-plus-distortion ratio (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high-speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC with 1.5 bits per stage. This prototype ADC architecture accounts for capacitor mismatch, comparator offset, and finite op-amp gain error in the MDAC (residue amplification) stages. With the proposed architecture, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB, and the SFDR is 102.8 dB at a sample rate of 100 MHz. The digital correction logic is transparent to the overall system, which is demonstrated on the 14-bit pipelined ADC. After a latency of 14 clocks, the digital output is available at every clock pulse. VHDL and MATLAB programs are used to describe the circuit behavior of the ADC. The proposed architecture is also capable of reducing the digital hardware complexity and, hence, the silicon area of the design.
NASA Astrophysics Data System (ADS)
Fu, Hui-hua; Wang, Ping; Wang, Ran-ran; Liu, Xiao-xia; Guo, Li-xin; Yang, Yin-tang
2016-07-01
The average bit error rate (ABER) performance of a decode-and-forward (DF) based relay-assisted free-space optical (FSO) communication system over gamma-gamma distributed channels with pointing errors is studied. With the help of Meijer's G-function, the probability density function (PDF) and cumulative distribution function (CDF) of the aggregated channel model are derived on the basis of the best-path selection scheme. An analytical ABER expression is obtained, and the system performance is then investigated under the influence of pointing errors, turbulence strength, and structure parameters. Monte Carlo (MC) simulation is also provided to confirm the analytical ABER expression.
Noyes, H.P.
1990-01-29
We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc^2 in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free-particle wave functions taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc^2 our wave functions can be approximated by solutions of the free-particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound- and resonant-state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G_piN^2)^2 = (2m_N/m_pi)^2 - 1. 21 refs., 1 fig.
Changes realized from extended bit-depth and metal artifact reduction in CT
Glide-Hurst, C.; Chen, D.; Zhong, H.; Chetty, I. J.
2013-06-15
Purpose: High-Z material in computed tomography (CT) yields metal artifacts that degrade image quality and may cause substantial errors in dose calculation. This study couples a metal artifact reduction (MAR) algorithm with enhanced 16-bit depth (vs standard 12-bit) to quantify potential gains in image quality and dosimetry. Methods: Extended CT to electron density (CT-ED) curves were derived from a tissue characterization phantom with titanium and stainless steel inserts scanned at 90-140 kVp for 12- and 16-bit reconstructions. MAR was applied to sinogram data (Brilliance BigBore CT scanner, Philips Healthcare, v.3.5). Monte Carlo simulation (MC-SIM) was performed on a simulated double hip prostheses case (Cerrobend rods embedded in a pelvic phantom) using BEAMnrc/Dosxyz (4 000 000 000 histories, 6X, 10 × 10 cm^2 beam traversing the Cerrobend rod). A phantom study was also conducted using a stainless steel rod embedded in solid water, and dosimetric verification was performed with Gafchromic film analysis (absolute difference and gamma analysis, 2% dose and 2 mm distance to agreement) for plans calculated with the Anisotropic Analytic Algorithm (AAA, Eclipse v11.0) to elucidate changes between 12- and 16-bit data. Three patients (bony metastases to the femur and humerus, and a prostate cancer case) with metal implants were reconstructed using both bit depths, with dose calculated using AAA and the derived CT-ED curves. Planar dose distributions were assessed via matrix analyses and using gamma criteria of 2%/2 mm. Results: For 12-bit images, CT numbers for titanium and stainless steel saturated at 3071 Hounsfield units (HU), whereas for 16-bit depth, mean CT numbers were much larger (e.g., titanium and stainless steel yielded HU of 8066.5 ± 56.6 and 13588.5 ± 198.8 for 16-bit uncorrected scans at 120 kVp, respectively). MC-SIM was well matched between 12- and 16-bit images except downstream of the Cerrobend rod, where the 16-bit dose was ~6
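The 3071 HU saturation reported above is a direct consequence of the 12-bit pipeline: with the conventional -1024 HU offset, 2^12 levels span -1024 to 3071 HU, while 16-bit data leave metal CT numbers unclipped. A quick illustration (the -1024 offset convention is assumed here, not stated in the abstract):

```python
def clamp_hu(hu, bits, offset=-1024):
    """Clamp a CT number to the representable range of the given bit
    depth, assuming the conventional -1024 HU offset."""
    hu_max = offset + 2**bits - 1      # 12 bits -> 3071; 16 bits -> 64511
    return max(offset, min(hu, hu_max))

titanium_hu = 8066                     # mean titanium HU from the 16-bit scan
print(clamp_hu(titanium_hu, 12))       # 3071: saturated, wrong electron density
print(clamp_hu(titanium_hu, 16))       # 8066: preserved
```

Because the CT-ED curve is evaluated at the stored HU, this clipping alone propagates into the dose calculation errors the study quantifies.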
Cheat sensitive quantum bit commitment via pre- and post-selected quantum states
NASA Astrophysics Data System (ADS)
Li, Yan-Bing; Wen, Qiao-Yan; Li, Zi-Chen; Qin, Su-Juan; Yang, Ya-Tao
2014-01-01
Cheat-sensitive quantum bit commitment is an important and realizable quantum bit commitment (QBC) protocol. By taking advantage of quantum mechanics, it can achieve higher security than classical bit commitment. In this paper, we propose a QBC scheme based on pre- and post-selected quantum states. The analysis indicates that either participant's cheating strategy will be detected with non-zero probability. The protocol can be implemented with today's technology, as no long-term quantum memory is needed.
Room temperature single-photon detectors for high bit rate quantum key distribution
Comandar, L. C.; Patel, K. A.; Fröhlich, B.; Lucamarini, M.; Sharpe, A. W.; Dynes, J. F.; Yuan, Z. L.; Shields, A. J.; Penty, R. V.
2014-01-13
We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.
Measurement Error. For Good Measure....
ERIC Educational Resources Information Center
Johnson, Stephen; Dulaney, Chuck; Banks, Karen
No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…
Add 16-bit processing to any computer
Fry, W.
1983-01-01
A zoom computer is a simple, fast, and friendly computer in a very small package. Zoom architecture provides an easy migration path from existing 8-bit computers to today's 16-bit and tomorrow's 32-bit designs. With zoom, the benefits of the VLSI technological explosion can be attained with your present peripherals: there is no need to purchase new peripherals, because all your old applications run unhindered on zoom. And in addition to all your old applications, zoom offers a whole new world of processing power at your fingertips.
Finger vein recognition based on a personalized best bit map.
Yang, Gongping; Xi, Xiaoming; Yin, Yilong
2012-01-01
Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern based method and then inclined to use the best bits only for matching. We first present the concept of PBBM and the generating algorithm. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PBBM achieves not only better performance, but also high robustness and reliability. In addition, PBBM can be used as a general framework for binary pattern based recognition.
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip
NASA Technical Reports Server (NTRS)
Timoc, C.; Tran, T.; Wongso, J.
1992-01-01
This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.
A practical quantum bit commitment protocol
NASA Astrophysics Data System (ADS)
Arash Sheikholeslam, S.; Aaron Gulliver, T.
2012-01-01
In this paper, we introduce a new quantum bit commitment protocol which is secure against entanglement attacks. A general cheating strategy is examined and shown to be practically ineffective against the proposed approach.
FastBit: Interactively Searching Massive Data
Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming
2009-06-23
As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
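The core bitmap-indexing idea summarized above (binning, bitmaps, bitwise query evaluation) can be sketched minimally. This is an illustrative toy, not FastBit's actual API or compression scheme; the function names and binning are assumptions.

```python
# Toy bitmap index: each bin gets one bitmap (stored as a Python int),
# and a range query is answered with bitwise ORs over the covering bins.

def build_bitmap_index(values, bins):
    """Map each value to its bin; return one row-bitmap per bin."""
    bitmaps = [0] * len(bins)
    for row, v in enumerate(values):
        for b, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                bitmaps[b] |= 1 << row   # set this row's bit in bin b
                break
    return bitmaps

def query_range(bitmaps, bins, lo, hi):
    """Rows whose value lies in [lo, hi): OR the bitmaps of covered bins."""
    result = 0
    for b, (blo, bhi) in enumerate(bins):
        if blo >= lo and bhi <= hi:
            result |= bitmaps[b]
    return result

energies = [1.2, 3.7, 2.9, 0.4, 3.1]
bins = [(0, 1), (1, 2), (2, 3), (3, 4)]
idx = build_bitmap_index(energies, bins)
hits = query_range(idx, bins, 2, 4)          # rows with 2 <= e < 4
rows = [r for r in range(len(energies)) if hits >> r & 1]
print(rows)  # [1, 2, 4]
```

Conjunctions across attributes would then be a single bitwise AND of per-attribute results, which is where the speedup over row scans comes from.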
An optical ultrafast random bit generator
NASA Astrophysics Data System (ADS)
Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael
2010-01-01
The generation of random bit sequences based on non-deterministic physical mechanisms is of paramount importance for cryptography and secure communications. High data rates also require extremely fast generation rates and robustness to external perturbations. Physical generators based on stochastic noise sources have been limited in bandwidth to ~100 Mbit s⁻¹ generation rates. We present a physical random bit generator, based on a chaotic semiconductor laser with time-delayed self-feedback, which operates reliably at rates up to 300 Gbit s⁻¹. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the need for external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
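The postprocessing chain described in this abstract (digitize, take a high-order derivative, keep least significant bits) can be sketched as follows. The logistic map stands in for the chaotic laser intensity; it, the ADC depth, and the parameter values are assumptions for illustration only.

```python
# Sketch of derivative-plus-LSB postprocessing for a chaotic waveform.

def lsb_random_bits(samples, order=4, keep=3, adc_bits=8):
    # Quantize to an 'adc_bits'-bit integer, mimicking an ADC.
    q = [int(s * (2**adc_bits - 1)) for s in samples]
    # An n-th order finite difference amplifies the fastest fluctuations.
    for _ in range(order):
        q = [b - a for a, b in zip(q, q[1:])]
    # Retain only the 'keep' least significant bits of each value.
    out = []
    for v in q:
        for i in range(keep):
            out.append((v >> i) & 1)
    return out

x, samples = 0.4, []
for _ in range(64):
    x = 3.99 * x * (1 - x)      # chaotic logistic map as a stand-in source
    samples.append(x)

bits = lsb_random_bits(samples)
print(len(bits))  # (64 - 4) samples * 3 bits = 180
```

In the actual system the randomness of such sequences must still be validated with statistical test suites, as the abstract notes.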
Hey! A Brown Recluse Spider Bit Me!
28-Bit serial word simulator/monitor
NASA Technical Reports Server (NTRS)
Durbin, J. W.
1979-01-01
Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.
A Study of a Standard BIT Circuit.
1977-02-01
availability through improved fault detection and isolation techniques. The particular approach taken in this study involves the use of built-in-test (BIT) circuits at the replaceable unit level to facilitate fault detection and isolation.
Diffusion bonding of Stratapax for drill bits
Middleton, J.N.; Finger, J.T.
1983-01-01
A process has been developed for the diffusion bonding of General Electric's Stratapax drill blanks to support studs for cutter assemblies in drill bits. The diffusion bonding process is described and bond strength test data are provided for a variety of materials. The extensive process details, provided in the Appendices, should be sufficient to enable others to successfully build diffusion-bonded drill bit cutter assemblies.
Neural network implementation using bit streams.
Patel, Nitish D; Nguang, Sing Kiong; Coghill, George G
2007-09-01
A new method for the parallel hardware implementation of artificial neural networks (ANNs) using digital techniques is presented. Signals are represented using uniformly weighted single-bit streams. Techniques for generating bit streams from analog or multibit inputs are also presented. This single-bit representation offers significant advantages over multibit representations since they mitigate the fan-in and fan-out issues which are typical to distributed systems. To process these bit streams using ANNs concepts, functional elements which perform summing, scaling, and squashing have been implemented. These elements are modular and have been designed such that they can be easily interconnected. Two new architectures which act as monotonically increasing differentiable nonlinear squashing functions have also been presented. Using these functional elements, a multilayer perceptron (MLP) can be easily constructed. Two examples successfully demonstrate the use of bit streams in the implementation of ANNs. Since every functional element is individually instantiated, the implementation is genuinely parallel. The results clearly show that this bit-stream technique is viable for the hardware implementation of a variety of distributed systems and for ANNs in particular.
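The single-bit-stream representation above is in the spirit of stochastic computing: a value in [0, 1] is encoded as the density of 1s in a bit stream, and multiplication reduces to a bitwise AND of independent streams. The sketch below shows only this representation, not the paper's summing and squashing elements; names and parameters are illustrative.

```python
# Minimal stochastic-computing sketch: Bernoulli bit streams, where
# AND-ing independent streams multiplies the encoded probabilities.
import random

def encode(p, n, rng):
    """Encode value p in [0, 1] as an n-bit Bernoulli stream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(stream):
    """Decode a stream back to a value: the fraction of 1s."""
    return sum(stream) / len(stream)

rng = random.Random(42)
n = 100_000
a, b = encode(0.6, n, rng), encode(0.5, n, rng)
prod = [x & y for x, y in zip(a, b)]   # AND multiplies probabilities
print(round(decode(prod), 2))          # close to 0.6 * 0.5 = 0.3
```

The fan-in advantage the abstract mentions follows from each signal being a single wire regardless of precision; precision is traded for stream length instead.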
Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun
2015-09-01
A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
Proper bit design improves penetration rate in abrasive horizontal wells
Gentges, R.J.
1993-08-09
Overall drilling penetration rates nearly tripled, and drill bit life nearly doubled compared to conventional bits when specially designed natural diamond and polycrystalline diamond compact (PDC) bits were used during a seven-well horizontal drilling program. The improvement in drilling performance from better-designed bits lowered drilling costs at ANR Pipeline Co.'s Reed City gas storage field in Michigan. Laboratory tests with scaled down bits used on abrasive cores helped determine the optimum design for drilling the gas storage wells. The laboratory test results and actual field data were used to develop a matrix-body natural diamond bit, which was later modified to become a matrix-body, blade-type polycrystalline diamond compact bit. This bit had excellent penetration rates and abrasion resistance. The paper describes the background to the project, bit selection, natural diamond bits, field results, new bit designs, and field results from the new design.
Multi-bit biomemory consisting of recombinant protein variants, azurin.
Yagati, Ajay Kumar; Kim, Sang-Uk; Min, Junhong; Choi, Jeong-Woo
2009-01-01
In this study, a protein-based multi-bit biomemory device consisting of recombinant azurin, with its cysteine residue modified by site-directed mutagenesis, has been developed. The recombinant azurin was directly immobilized on four different gold (Au) electrodes patterned on a single silicon substrate. The memory function of the fabricated biodevice was validated using cyclic voltammetry (CV), chronoamperometry (CA), and open circuit potential amperometry (OCPA). The charge transfer that occurs between the protein molecules and the Au electrode enables a bistable electrical conductivity, allowing the system to be used as a digital memory device. Data storage is achieved by applying redox potentials, which lie within a range of 200 mV. Oxidation and open circuit potentials with current sensing were used for the writing and reading operations, respectively. By applying oxidation potentials in different combinations to each Au electrode, multi-bit information was stored in the azurin molecules. Finally, the switching robustness and reliability of the proposed device were examined. The results suggest that the proposed device functions as a memory and can be used for the construction of a nanoscale multi-bit information storage device.
Application of morphological bit planes in retinal blood vessel extraction.
Fraz, M M; Basit, A; Barman, S A
2013-04-01
The appearance of the retinal blood vessels is an important diagnostic indicator of various clinical disorders of the eye and the body. Retinal blood vessels have been shown to provide evidence, in terms of change in diameter, branching angles, or tortuosity, of ophthalmic disease. This paper reports the development of an automated method for segmentation of blood vessels in retinal images. A unique combination of methods for retinal blood vessel skeleton detection and multidirectional morphological bit plane slicing is presented to extract the blood vessels from color retinal images. The skeleton of the main vessels is extracted by the application of directional differential operators followed by evaluation of the combination of derivative signs and average derivative values. Mathematical morphology has emerged as a proficient technique for quantifying the retinal vasculature in ocular fundus images. A multidirectional top-hat operator with rotating structuring elements is used to emphasize the vessels in a particular direction, and information is extracted using bit plane slicing. An iterative region growing method is applied to integrate the main skeleton and the images resulting from bit plane slicing of vessel direction-dependent morphological filters. The approach is tested on two publicly available databases, DRIVE and STARE. The average accuracy achieved by the proposed method is 0.9423 for both databases, with high values of sensitivity and specificity; the algorithm also outperforms the second human observer in terms of precision of the segmented vessel tree.
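The bit plane slicing step referenced above decomposes an 8-bit grayscale image into eight binary images, one per bit position. A minimal sketch on a toy 2×2 "image" (this is the generic operation, not the paper's full pipeline):

```python
# Bit plane slicing: plane k holds bit k of every pixel.

def bit_planes(image, depth=8):
    """Return a list of binary images; plane k is bit k of each pixel."""
    return [[[(px >> k) & 1 for px in row] for row in image]
            for k in range(depth)]

img = [[200, 13],
       [255, 96]]
planes = bit_planes(img)

# Sanity check: summing planes weighted by 2^k reconstructs the image.
recon = [[sum(planes[k][r][c] << k for k in range(8)) for c in range(2)]
         for r in range(2)]
print(recon == img)  # True
```

The higher planes carry the coarse structure, which is why direction-dependent morphological filters are applied to them before region growing merges the results.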
Spin glasses and error-correcting codes
NASA Technical Reports Server (NTRS)
Belongie, M. L.
1994-01-01
In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.
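The distinction drawn above between minimizing codeword error rate and minimizing bit error rate can be made concrete in miniature: for a tiny parity code on a binary symmetric channel, the per-bit ("thermal average") decision picks each bit from its posterior marginal and can even emit a non-codeword. The code and channel parameters below are toy choices for illustration, not the article's spin-glass construction.

```python
# Bitwise MAP decoding of a 3-bit even-parity code over a BSC.

CODE = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # even-parity code

def posterior(received, p):
    """Unnormalized P(codeword | received) for a BSC with flip prob p."""
    w = {}
    for c in CODE:
        d = sum(a != b for a, b in zip(c, received))
        w[c] = p**d * (1 - p)**(len(c) - d)
    return w

def bitwise_map(received, p):
    """Decide each bit from its posterior marginal (bit-error-optimal)."""
    w = posterior(received, p)
    bits = []
    for i in range(3):
        p1 = sum(v for c, v in w.items() if c[i] == 1)
        p0 = sum(v for c, v in w.items() if c[i] == 0)
        bits.append(1 if p1 > p0 else 0)
    return tuple(bits)

r = (1, 1, 1)                 # three codewords sit at distance 1 from r
est = bitwise_map(r, 0.1)
print(est, est in CODE)       # (1, 1, 1) False -- not a codeword
```

Each marginal favors 1 because two of the three nearby codewords agree on each position, so the bitwise-optimal output need not be a valid codeword; this corresponds to decoding at a finite temperature rather than at temperature zero.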
NASA Technical Reports Server (NTRS)
Chen, C. C.; Gardner, C. S.
1986-01-01
The performance of an optical PPM intersatellite link in the presence of spatial and temporal tracking errors is investigated. It is shown that for a given rms spatial tracking error, an optimal transmitter beamwidth exists which minimizes the probability of bit error. The power penalty associated with the spatial tracking error when the transmitter beamwidth is adjusted to achieve optimal performance is shown to be large (greater than 9 dB) when the rms pointing jitter becomes a significant fraction (greater than 30 percent) of the diffraction limited beamwidth. The power penalty due to temporal tracking error, on the other hand, is relatively small (less than 0.1 dB) when the tracking loop bandwidth is less than 0.1 percent of the slot frequency. By properly allocating losses to spatial and temporal tracking errors, it is seen that a 10⁻⁹ error rate can be achieved for a realistic link design with an approximately 3 dB signal power margin.
Single-chip pulse programmer for magnetic resonance imaging using a 32-bit microcontroller.
Handa, Shinya; Domalain, Thierry; Kose, Katsumi
2007-08-01
A magnetic resonance imaging (MRI) pulse programmer has been developed using a single-chip microcontroller (ADmicroC7026). The microcontroller includes all the components required for the MRI pulse programmer: a 32-bit RISC CPU core, 62 kbytes of flash memory, 8 kbytes of SRAM, two 32-bit timers, four 12-bit DA converters, and 40 bits of general purpose I/O. An evaluation board for the microcontroller was connected to a host personal computer (PC), an MRI transceiver, and a gradient driver using interface circuitry. Target (embedded) and host PC programs were developed to enable MRI pulse sequence generation by the microcontroller. The pulse programmer achieved a (nominal) time resolution of approximately 100 ns and a minimum time delay between successive events of approximately 9 μs. Imaging experiments using the pulse programmer demonstrated the effectiveness of our approach.
The best bits in an iris code.
Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J
2009-06-01
Iris biometric systems apply filters to iris images to extract information about iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all the bits in an iris code are equally useful. Our research is the first to present experiments documenting that some bits are more consistent than others. Different regions of the iris are compared to evaluate their relative consistency, and contrary to some previous research, we find that the middle bands of the iris are more consistent than the inner bands. The inconsistent-bit phenomenon is evident across genders and different filter types. Possible causes of inconsistencies, such as segmentation, alignment issues, and different filters are investigated. The inconsistencies are largely due to the coarse quantization of the phase response. Masking iris code bits corresponding to complex filter responses near the axes of the complex plane improves the separation between the match and nonmatch Hamming distance distributions.
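The comparison step described above, fractional Hamming distance with consistency masks, can be sketched directly: bits flagged unreliable in either code's mask are excluded, and the distance is the fraction of disagreeing bits among those that remain. The 8-bit codes below are toy values (real iris codes run to thousands of bits).

```python
# Masked fractional Hamming distance between two binary iris codes.

def fractional_hd(code1, code2, mask1, mask2):
    """Fraction of disagreeing bits among bits valid in both masks."""
    usable = mask1 & mask2                 # bits trusted in both codes
    n = bin(usable).count("1")
    if n == 0:
        return None                        # nothing comparable
    disagree = (code1 ^ code2) & usable    # XOR marks differing bits
    return bin(disagree).count("1") / n

a  = 0b10110100
b  = 0b10010110
m1 = 0b11111100                            # two low bits masked out
m2 = 0b11111111
print(fractional_hd(a, b, m1, m2))         # 1 of 6 usable bits differ
```

Masking the bits that come from filter responses near the complex-plane axes, as the abstract suggests, amounts to clearing those positions in the mask before this computation.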
Nugget hardfacing toughens roller cone bits
1996-11-25
A new hardfacing material made of pure sintered tungsten carbide nuggets has improved roller cone rock bit performance in extremely hard lithologies, increasing penetration rates and extending bit life through multiple formations. In a recent test run in the Shushufindi 95 wells in Ecuador, a Security DBS 9 7/8-in. MPSF IADC 117M (International Association of Drilling Contractors bit code) bit with this new hardfacing drilled out the float equipment, cement, and shoe and then 3,309 ft of hard formations. The bit drilled through the Orteguaza claystone/shale/sand and chert formations and then to total depth at 6,309 ft in the Tiyuyacu shale/sand. The 3,309-ft interval was drilled at an average penetration rate (ROP) of 52.5 ft/hr. The proprietary nugget material was tested according to the American Society for Testing and Materials (ASTM) G65 wear test method, a standard industry method of measuring wear resistance. The nugget material had ASTM wear test resistance more than twice that of standard hardfacing made from conventional tungsten carbide.
Evaluations of bit sleeve and twisted-body bit designs for controlling roof bolter dust
Beck, T.W.
2015-01-01
Drilling into coal mine roof strata to install roof bolts has the potential to release substantial quantities of respirable dust. Due to the proximity of drill holes to the breathing zone of roof bolting personnel, dust escaping the holes and avoiding capture by the dust collection system poses a potential respiratory health risk. Controls are available to complement the typical dry vacuum collection system and minimize harmful exposures during the initial phase of drilling. This paper examines the use of a bit sleeve in combination with a dust-hog-type bit to improve dust extraction during the critical initial phase of drilling. A twisted-body drill bit is also evaluated to determine the quantity of dust liberated in comparison with the dust-hog-type bit. Based on the results of our laboratory tests, the bit sleeve may reduce dust emissions by one-half during the initial phase of drilling, before the drill bit is fully enclosed by the drill hole. Because collaring is responsible for the largest dust liberation, overall dust emission can also be substantially reduced. The use of a twisted-body bit offers minimal improvement in dust capture compared with the commonly used dust-hog-type bit. PMID:26257435
Managing the Number of Tag Bits Transmitted in a Bit-Tracking RFID Collision Resolution Protocol
Landaluce, Hugo; Perallos, Asier; Angulo, Ignacio
2014-01-01
Radio Frequency Identification (RFID) technology faces the problem of message collisions. The coexistence of tags sharing the communication channel degrades bandwidth and increases the number of bits transmitted. The window methodology, which controls the number of bits transmitted by the tags, is applied to the collision tree (CT) protocol to solve the tag collision problem. The combination of this methodology with the bit-tracking technology used in CT improves the performance of the window and produces a new protocol which decreases the number of bits transmitted. The aim of this paper is to show how the CT bit-tracking protocol is influenced by the proposed window, and how the performance of the novel protocol improves under different conditions of the scenario. Therefore, we have performed a fair comparison of the CT protocol, which uses bit-tracking to identify the first collided bit, and the new proposed protocol with the window methodology. Simulation results show that the proposed window positively decreases the total number of bits transmitted by the tags, and outperforms the CT protocol latency in slow tag data rate scenarios. PMID:24406861
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Technical Reports Server (NTRS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
1989-01-01
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10⁻⁵ bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
Friction of drill bits under Martian pressure
NASA Astrophysics Data System (ADS)
Zacny, K. A.; Cooper, G. A.
2007-03-01
Frictional behavior was investigated for two materials that are good candidates for Mars drill bits: Diamond Impregnated Segments and Polycrystalline Diamond Compacts (PDC). The bits were sliding against dry sandstone and basalt rocks under both Earth and Mars atmospheric pressures and also at temperatures ranging from subzero to over 400 °C. It was found that the friction coefficient dropped from approximately 0.16 to 0.1 as the pressure was lowered from the Earth's pressure to Mars' pressure, at room temperature. This is thought to be a result of the loss of weakly bound water on the sliding surfaces. Holding the pressure at 5 torr and increasing the temperature to approximately 200 °C caused a sudden increase in the friction coefficient by approximately 50%. This is attributed to the loss of surface oxides. If no indication of the bit temperature is available, an increase in drilling torque could be misinterpreted as being caused by an increase in auger torque (due to accumulation of cuttings) rather than being the result of a loss of oxide layers due to elevated bit temperatures. An increase in rotational speed (to allow for clearing of cuttings) would then cause greater frictional heating and would increase the drilling torque further. Therefore it would be advisable to monitor the bit temperature or, if that is not possible, to include pauses in drilling to allow the heat to dissipate. Higher friction would also accelerate the wear of the drill bit and in turn reduce the depth of the hole.
Just noticeable disparity error-based depth coding for three-dimensional video
NASA Astrophysics Data System (ADS)
Luo, Lei; Tian, Xiang; Chen, Yaowu
2014-07-01
A just noticeable disparity error (JNDE) measurement to describe the maximum tolerated error of depth maps is proposed. Any error of depth value inside the JNDE range would not cause a noticeable distortion observed by human eyes. The JNDE values are used to preprocess the original depth map in the prediction process during the depth coding and to adjust the prediction residues for further improvement of the coding quality. The proposed scheme can be incorporated in any standardized video coding algorithm based on prediction and transform. The experimental results show that the proposed method can achieve a 34% bit rate saving for depth video coding. Moreover, the perceptual quality of the synthesized view is also improved by the proposed method.
Quantum bit commitment under Gaussian constraints
NASA Astrophysics Data System (ADS)
Mandilara, Aikaterini; Cerf, Nicolas J.
2012-06-01
Quantum bit commitment has long been known to be impossible. Nevertheless, just as in the classical case, imposing certain constraints on the power of the parties may enable the construction of asymptotically secure protocols. Here, we introduce a quantum bit commitment protocol and prove that it is asymptotically secure if cheating is restricted to Gaussian operations. This protocol exploits continuous-variable quantum optical carriers, for which such a Gaussian constraint is experimentally relevant as the high optical nonlinearity needed to effect deterministic non-Gaussian cheating is inaccessible.
Secure quantum bit commitment against empty promises
He Guangping
2006-08-15
The existence of unconditionally secure quantum bit commitment (QBC) is excluded by the Mayers-Lo-Chau no-go theorem. Here we look for the second best: a QBC protocol that can defeat certain quantum attacks. By breaking the knowledge symmetry between the participants with a quantum algorithm, a QBC protocol is proposed and proven to be secure against a major kind of coherent attack, the dummy attack, in which a participant makes an empty promise instead of committing to a specific bit. It therefore surpasses previous QBC protocols, which are secure against individual attacks only.
A 16K-bit static IIL RAM with 25-ns access time
NASA Astrophysics Data System (ADS)
Inabe, Y.; Hayashi, T.; Kawarada, K.; Miwa, H.; Ogiue, K.
1982-04-01
A 16,384 x 1-bit RAM with 25-ns access time, 600-mW power dissipation, and 33 sq mm chip size has been developed. Excellent speed-power performance with high packing density has been achieved by an oxide isolation technology in conjunction with novel ECL circuit techniques and IIL flip-flop memory cells of 980 sq microns (35 x 28 microns) in cell size. Development results have shown that the IIL flip-flop memory cell is a trump card for assuring achievement of a high-performance, large-capacity bipolar RAM in the range above 16K bits/chip.
Protected Polycrystalline Diamond Compact Bits For Hard Rock Drilling
Robert Lee Cardenas
2000-10-31
Two bits were designed. One bit was fabricated and tested at Terra-Tek's Drilling Research Laboratory. Fabrication of the second bit was not completed due to complications in fabrication and in meeting scheduled test dates at the test facility. A conical bit was tested in Carthage Marble (compressive strength 14,500 psi) and Sierra White Granite (compressive strength 28,200 psi). During the testing, hydraulic horsepower, bit weight, and rotation rate were varied for the conical bit, a Varel tricone bit, and a Varel PDC bit. The conical bit did cut rock at a reasonable rate in both rocks. Beneficial effects from the near- and through-cutter water nozzles were not evident in the marble and were not conclusive in the granite, in both cases due to test conditions. In atmospheric-pressure drilling, the conical bit's penetration rate was as good as that of the standard PDC bit and better than that of the tricone bit. Torque requirements for the conical bit were higher than those of the standard bits. Spudding the conical bit into the rock required some care to avoid overloading the nose cutters. The nose design should be evaluated to improve the bit's spudding characteristics.
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from Earth-observation satellites spans different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high-resolution image can produce a better quality of service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote-sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover a transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit-recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
Multiple bit differential detection of offset QPSK
NASA Technical Reports Server (NTRS)
Simon, M.
2003-01-01
Analogous to multiple symbol differential detection of quadrature phase-shift-keying, a multiple bit differential detection scheme is described for offset QPSK that also exhibits continuous improvement in performance with increasing observation interval. Being derived from maximum-likelihood (ML) considerations, the proposed scheme is purported to be the most power efficient scheme for such a modulation and detection method.
1/N perturbations in superstring bit models
NASA Astrophysics Data System (ADS)
Thorn, Charles B.
2016-03-01
We develop the 1/N expansion for stable string bit models, focusing on a model with bit creation operators carrying only transverse spinor indices a = 1, …, s. At leading order (N = ∞), this model produces a (discretized) light-cone string with a "transverse space" of s Grassmann worldsheet fields. Higher orders in the 1/N expansion are shown to be determined by the overlap of a single large closed chain (discretized string) with two smaller closed chains. In the models studied here, the overlap is not accompanied by operator insertions at the break/join point. Then, the requirement that the discretized overlap have a smooth continuum limit leads to the critical Grassmann "dimension" of s = 24. This "protostring," a Grassmann analog of the bosonic string, is unusual because it has no large transverse dimensions. It is a string moving in one space dimension, and there are neither tachyons nor massless particles. The protostring, derived from our pure spinor string bit model, has 24 Grassmann dimensions, 16 of which could be bosonized to form 8 compactified bosonic dimensions, leaving 8 Grassmann dimensions, the worldsheet content of the superstring. If the transverse space of the protostring could be "decompactified," string bit models might provide an appealing and solid foundation for superstring theory.
NASA Astrophysics Data System (ADS)
Wang, Chao; Mou, Xuanqin; Hong, Wei; Zhang, Lei
2013-02-01
In lossy image/video encoding, there is a compromise between the number of bits (rate) and the extent of distortion. Bits need to be properly allocated to different sources, such as frames and macroblocks (MBs). Since the human eye is more sensitive to differences than to the absolute value of signals, the MINMAX criterion suggests minimizing the maximum distortion of the sources to limit quality fluctuation. Many works aim at such constant-quality encoding; however, almost all of them focus on frame-layer bit allocation and use PSNR as the quality index. We suggest that the bit allocation for MBs should also be constrained to constant quality and, furthermore, that perceptual quality indices should be used instead of PSNR. Based on this idea, we propose a multi-pass block-layer bit allocation scheme for quality-constrained encoding. The experimental results show that the proposed method can achieve much better encoding performance. Keywords: bit allocation, block-layer, perceptual quality, constant quality, quality constrained
Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding
NASA Astrophysics Data System (ADS)
Zhang, Yun; Jiang, Gangyi; Yu, Mei; Chen, Ken; Dai, Qionghai
2010-12-01
We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli, including depth, motion, intensity, color, and orientation contrast, are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial and/or temporal visual attention cues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over [InlineEquation not available: see fulltext.]% bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by [InlineEquation not available: see fulltext.] dB at the cost of imperceptible image quality degradation of the background.
NASA Astrophysics Data System (ADS)
Tumok, N. Nur
1989-12-01
A variable bit width delta modulator (VBWDM) and demodulator were designed, built, and tested to achieve voice and music communication over a bandlimited channel. Only baseband modulation is applied to the input signal. Since no clock is used during the digitizing process at the modulator, no bit synchronization is required for signal recovery in the receiver. The modulator is a hybrid design using 7 linear and 3 digital integrated circuits (ICs), and the demodulator uses 2 linear ICs. A lowpass filter (LPF) is used to simulate the channel. The average number of bits sent over the channel is measured with a frequency counter at the output of the modulator. The minimum bandwidth required for the LPF is determined according to the intelligibility of the recovered message. Measurements indicate that the average bit rate required for intelligible voice transmission is in the range of 2 to 4 kilobits per second (kbps), and between 2 and 5 kbps for music. The required channel 3-dB bandwidth is determined to be 1.5 kilohertz. Besides its hardware simplicity, VBWDM provides an option for intelligible digitized voice transmission at very low bit rates without requiring synchronization. Another important feature of the modulator design is that no bits are sent when no signal is present at the input, which saves transmitter power (important for mobile stations) and reduces the probability of intercept and jamming in military applications.
Quantum error correction in a solid-state hybrid spin register.
Waldherr, G; Wang, Y; Zaiser, S; Jamali, M; Schulte-Herbrüggen, T; Abe, H; Ohshima, T; Isoya, J; Du, J F; Neumann, P; Wrachtrup, J
2014-02-13
Error correction is important in classical and quantum computation. Decoherence caused by the inevitable interaction of quantum bits with their environment leads to dephasing or even relaxation. Correction of the concomitant errors is therefore a fundamental requirement for scalable quantum computation. Although algorithms for error correction have been known for some time, experimental realizations are scarce. Here we show quantum error correction in a heterogeneous, solid-state spin system. We demonstrate that joint initialization, projective readout and fast local and non-local gate operations can all be achieved in diamond spin systems, even under ambient conditions. High-fidelity initialization of a whole spin register (99 per cent) and single-shot readout of multiple individual nuclear spins are achieved by using the ancillary electron spin of a nitrogen-vacancy defect. Implementation of a novel non-local gate generic to our electron-nuclear quantum register allows the preparation of entangled states of three nuclear spins, with fidelities exceeding 85 per cent. With these techniques, we demonstrate three-qubit phase-flip error correction. Using optimal control, all of the above operations achieve fidelities approaching those needed for fault-tolerant quantum operation, thus paving the way to large-scale quantum computation. Besides their use with diamond spin systems, our techniques can be used to improve scaling of quantum networks relying on phosphorus in silicon, quantum dots, silicon carbide or rare-earth ions in solids.
Frictional ignition with coal-mining bits. Information Circular/1990
Courtney, W.G.
1990-01-01
The publication reviews recent U.S. Bureau of Mines studies of frictional ignition of a methane-air environment by coal mining bits cutting into sandstone and the effectiveness of remedial techniques to reduce the likelihood of frictional ignition. Frictional ignition with a mining bit always involves a worn bit having a wear flat on the tip of the bit. The worn bit forms hot spots on the surface of the sandstone because of frictional abrasion. The hot spots then can ignite the methane-air environment. A small wear flat forms a small hot spot, which does not give ignition, while a large wear flat forms a large hot spot, which gives ignition. The likelihood of frictional ignition can be somewhat reduced by using a mushroom-shaped tungsten-carbide bit tip on the mining bit and by increasing the bit clearance angle; it can be significantly reduced by using a water spray nozzle in back of each bit.
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
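A minimal numerical sketch of the rounding-error accumulation the chapter describes (not from the chapter itself); compensated (Kahan) summation is shown as one standard mitigation:

```python
def naive_sum(values):
    """Accumulate in floating point; each addition may round."""
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    """Compensated summation: carry the lost low-order bits forward."""
    total = 0.0
    c = 0.0  # running compensation for rounding error
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

# 0.1 is not exactly representable, so plain accumulation drifts:
values = [0.1] * 1000
assert naive_sum(values) != 100.0
assert abs(kahan_sum(values) - 100.0) < abs(naive_sum(values) - 100.0)
```

The compensation variable `c` recovers the part of each addend that the finite mantissa discards, so the bias of many rounded additions no longer accumulates.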
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
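The optimal-allocation idea can be illustrated with a toy model (this sketch is not the paper's mixed-model method): for an exponential distortion-rate model per slice, the equal-slope Lagrangian condition has a closed form, and busier slices receive more bits.

```python
from math import log

def distortion(rate, sigma2):
    """Toy high-rate model D(R) = sigma^2 * 2^(-2R) for one slice."""
    return sigma2 * 2.0 ** (-2.0 * rate)

def allocate(sigmas2, total_rate):
    """Equal-slope (reverse water-filling) allocation for the toy model,
    without the nonnegativity clamp a full solution would need."""
    n = len(sigmas2)
    mean_log = sum(log(s, 2) for s in sigmas2) / n
    return [total_rate / n + 0.5 * (log(s, 2) - mean_log) for s in sigmas2]

sigmas2 = [4.0, 1.0, 0.25]          # assumed per-slice variances
rates = allocate(sigmas2, total_rate=6.0)
assert abs(sum(rates) - 6.0) < 1e-9
assert rates[0] > rates[1] > rates[2]   # higher-variance slices get more bits
```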
Single-event upset (SEU) in a DRAM with on-chip error correction
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Schwartz, H. R.; Watson, R. K.; Hasnain, Z.; Nevile, L. R.
1987-01-01
Results are given of SEU measurements on 256K dynamic RAMs with on-chip error correction; these are claimed to be the first such measurements reported. A (12,8) Hamming error-correcting code was incorporated in the layout. Physical separation of the bits in each code word was used to guard against multiple bits being disrupted in any given word. A significant reduction in observed errors is reported.
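A (12,8) Hamming single-error-correcting code of the kind described can be sketched as follows; the bit-position convention here is the textbook one (parity at positions 1, 2, 4, 8) and may differ from the chip's actual layout:

```python
def hamming_encode(data_bits):
    """8 data bits -> 12-bit codeword, parity at positions 1, 2, 4, 8."""
    code = [0] * 13  # index 0 unused; positions are 1-indexed
    data_pos = [p for p in range(1, 13) if p not in (1, 2, 4, 8)]
    for p, b in zip(data_pos, data_bits):
        code[p] = b
    for parity in (1, 2, 4, 8):
        code[parity] = sum(code[p] for p in range(1, 13) if p & parity) % 2
    return code[1:]

def hamming_correct(word):
    """Recompute parities; a nonzero syndrome is the flipped position."""
    code = [0] + list(word)
    syndrome = 0
    for parity in (1, 2, 4, 8):
        if sum(code[p] for p in range(1, 13) if p & parity) % 2:
            syndrome |= parity
    if 1 <= syndrome <= 12:
        code[syndrome] ^= 1
    return code[1:]

data = [1, 0, 1, 1, 0, 0, 1, 0]
cw = hamming_encode(data)
upset = list(cw)
upset[5] ^= 1                       # a single-event upset flips one bit
assert hamming_correct(upset) == cw  # the single error is corrected
```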
Error-resilient compression and transmission of scalable video
NASA Astrophysics Data System (ADS)
Cho, Sungdae; Pearlman, William A.
2000-12-01
Compressed video bitstreams require protection from channel errors in a wireless channel and protection from packet loss in a wired ATM channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single ARQ (automatic- repeat-request) proved to be an effective means for protecting the bitstream. There were two problems with this scheme: the noiseless reverse channel ARQ may not be feasible in practice; and, in the absence of channel coding and ARQ, the decoded sequence was hopelessly corrupted even for relatively clean channels. In this paper, we first show how to make the 3-D SPIHT bitstream more robust to channel errors by breaking the wavelet transform into a number of spatio-temporal tree blocks which can be encoded and decoded independently. This procedure brings the added benefit of parallelization of the compression and decompression algorithms. Then we demonstrate the packetization of the bit stream and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness. Then we encode each packet with a channel code. Not only does this protect the integrity of the packets in most cases, but it also allows detection of packet decoding failures, so that only the cleanly recovered packets are reconstructed. This procedure obviates ARQ, because the performance is only about 1 dB worse than normal 3-D SPIHT with FEC and ARQ. Furthermore, the parallelization makes possible real-time implementation in hardware and software.
Errors and Their Mitigation at the Kirchhoff-Law-Johnson-Noise Secure Key Exchange
Saez, Yessica; Kish, Laszlo B.
2013-01-01
A method to quantify the error probability at the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The most interesting finding is that the error probability decays exponentially with the duration of the time window of a single bit exchange. The results indicate that it is feasible to make the error probabilities of the exchanged bits so small that error-correction algorithms are not required. The results are demonstrated with practical considerations. PMID:24303033
Reducing Truncation Error In Integer Processing
NASA Technical Reports Server (NTRS)
Thomas, J. Brooks; Berner, Jeffrey B.; Graham, J. Scott
1995-01-01
Improved method of rounding off (truncation of least-significant bits) in integer processing of data devised. Provides for reduction, to extremely low value, of numerical bias otherwise generated by accumulation of truncation errors from many arithmetic operations. Devised for use in integer signal processing, in which rescaling and truncation usually performed to reduce number of bits, which typically builds up in sequence of operations. Essence of method to alternate direction of roundoff (plus, then minus) on alternate occurrences of truncated values contributing to bias.
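The alternating-roundoff idea can be sketched numerically; the exact toggling rule in the JPL method may differ, so the scheme below (round halves up on even occurrences, down on odd) is illustrative only:

```python
import random

def truncate(x, k):
    """Plain truncation: drop the k least-significant bits (biased low)."""
    return x >> k

def round_truncate(x, k, round_up):
    """Truncate k LSBs with rounding; the toggle alternates the direction
    in which exact halves are rounded, so the bias averages toward zero."""
    half = 1 << (k - 1)
    return (x + (half if round_up else half - 1)) >> k

random.seed(0)
samples = [random.randrange(1 << 16) for _ in range(10000)]
k = 4
plain_bias = sum((truncate(x, k) << k) - x for x in samples) / len(samples)
alt_bias = sum((round_truncate(x, k, i % 2 == 0) << k) - x
               for i, x in enumerate(samples)) / len(samples)
# Plain truncation is biased by about -(2**k - 1)/2; alternating the
# roundoff direction leaves only a tiny residual bias.
assert abs(alt_bias) < abs(plain_bias)
```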
Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.
Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu
2017-03-15
The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively promote the efficiency of the system and reduce the potential security risk of switching between transmission and calibration modes. Here, we propose a single-photon-level, continuously working PBTS using only sifted key bits revealed during the error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system in a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% and a standard deviation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.
Entanglement enhanced bit rate over multiple uses of a lossy bosonic channel with memory
NASA Astrophysics Data System (ADS)
Lupo, C.; Mancini, S.
2010-03-01
We present a study of the achievable rates for classical information transmission via a lossy bosonic channel with memory, using homodyne detection. A comparison with the memoryless case shows that the presence of memory enhances the bit rate if information is encoded in collective states, i.e., states which are entangled over different uses of the channel.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Kasami, Tadao; Fujiwara, Toru; Takata, Toyoo; Lin, Shu
1988-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. Its error performance is analyzed for a binary symmetric channel with bit-error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, high reliability can be attained even for a high channel bit-error rate. Specific examples with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities evaluated. They all provide high reliability even for high bit-error rates, say 0.1-0.01. Several example schemes are being considered for satellite and spacecraft downlink error control.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit-error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit-error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
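As a hedged illustration of why cascading helps (the code parameters below are invented, not the paper's): the inner decoder fails only when more than t of its n bits flip, so the outer code sees a far smaller error rate than the raw channel.

```python
from math import comb

def block_error_prob(n, t, eps):
    """P(inner decoder fails) on a BSC(eps): more than t of n bits flip."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(t + 1, n + 1))

eps = 0.01                               # raw channel bit-error rate
p_inner = block_error_prob(63, 7, eps)   # e.g. an inner code correcting 7 of 63
# The symbol error rate handed to the outer code is orders of magnitude
# below the raw channel rate, which the outer code then cleans up further.
assert p_inner < eps
```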
Blind One-Bit Compressive Sampling
2013-01-17
notation and recalling some background from convex analysis. For the d-dimensional Euclidean space Rd, the class of all lower semicontinuous convex... compressed sensing, Applied and Computational Harmonic Analysis, 27 (2009), pp. 265-274. [3] P. T. Boufounos and R. G. Baraniuk, 1-bit compressive sensing... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0
Acquisition and Retaining Granular Samples via a Rotating Coring Bit
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart
2013-01-01
This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated: a granular sample entered into the spinning bit adheres to, and compacts itself against, the bit's internal wall. The bit can be specially designed to increase the effectiveness of regolith capture while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during rotation. The bit can be designed with an internal flute that directs the regolith upward inside the bit; the teeth and the flute can be combined in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth. For that implementation, the bit can be designed such that when turning one way, the teeth guide the regolith outward of the bit, and when turning in the opposite direction, the teeth guide the regolith inward into the bit's internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining a granular sample; acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into soil contained in a bucket. While the bit was spinning (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit. To prove this point, the drill was turned horizontally, and the acquired soil was still inside the bit. The basic theory behind the process of retaining unconsolidated mass that can be acquired by the centrifugal forces of the bit is determined by noting that in order to stay inside the interior of the bit, the
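As a rough back-of-envelope sketch of the retention principle (not from the article): a particle pressed against the inner wall stays put when wall friction supports its weight, mu * m * omega^2 * r >= m * g, giving a minimum spin rate omega >= sqrt(g / (mu * r)). The friction coefficient and bit radius below are assumed values.

```python
from math import sqrt, pi

g = 9.81    # m/s^2, gravitational acceleration
mu = 0.5    # assumed soil-to-wall friction coefficient
r = 0.02    # assumed 2 cm inner bit radius, in metres

# Friction from the centripetal normal force must balance gravity:
omega_min = sqrt(g / (mu * r))       # minimum angular speed, rad/s
rpm_min = omega_min * 60 / (2 * pi)  # same threshold in RPM
print(rpm_min)  # a few hundred RPM for these assumed values
```

With these assumptions the threshold lands comfortably below the 600-700 RPM used in the demonstration, consistent with the soil staying in the bit.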
NSC 800, 8-bit CMOS microprocessor
NASA Technical Reports Server (NTRS)
Suszko, S. F.
1984-01-01
The NSC 800 is an 8-bit CMOS microprocessor manufactured by National Semiconductor Corp., Santa Clara, California. The 8-bit microprocessor chip with 40-pad pin-terminals has eight address buffers (A8-A15), eight data/address I/O buffers (AD0-AD7), six interrupt controls and sixteen timing controls with a chip clock generator and an 8-bit dynamic RAM refresh circuit. The 22 internal registers have the capability of addressing 64K bytes of memory and 256 I/O devices. The chip is fabricated on N-type (100) silicon using self-aligned polysilicon gates and local oxidation process technology. The chip interconnect consists of four levels: aluminum, Polysi 2, Polysi 1, and P+ and N+ diffusions. The four levels, except for contact interface, are isolated by interlevel oxide. The chip is packaged in a 40-pin dual-in-line (DIP), side-brazed, hermetically sealed ceramic package with a metal lid. The operating voltage for the device is 5 V. It is available in three operating temperature ranges: 0 to +70 C, -40 to +85 C, and -55 to +125 C. Two devices were submitted for product evaluation by F. Stott, MTS, JPL Microprocessor Specialist. The devices were pencil-marked and photographed for identification.
Lathe tool bit and holder for machining fiberglass materials
NASA Technical Reports Server (NTRS)
Winn, L. E. (Inventor)
1972-01-01
A lathe tool and holder combination for machining resin impregnated fiberglass cloth laminates is described. The tool holder and tool bit combination is designed to accommodate a conventional carbide-tipped, round shank router bit as the cutting medium, and provides an infinite number of cutting angles in order to produce a true and smooth surface in the fiberglass material workpiece with every pass of the tool bit. The technique utilizes damaged router bits which ordinarily would be discarded.
Method to manufacture bit patterned magnetic recording media
Raeymaekers, Bart; Sinha, Dipen N
2014-05-13
A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
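The codeword interleaving described above can be sketched with a simple block interleaver (depth and length here are illustrative): writing codewords into rows and transmitting columns spreads a channel burst across many codewords, so each FEC block sees at most a few errors.

```python
def interleave(symbols, depth, length):
    """Write row-by-row (depth rows of `length`), read column-by-column."""
    rows = [symbols[i * length:(i + 1) * length] for i in range(depth)]
    return [rows[r][c] for c in range(length) for r in range(depth)]

def deinterleave(symbols, depth, length):
    """Inverse: regroup the columns and read the rows back out."""
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(length)]
    return [cols[c][r] for r in range(depth) for c in range(length)]

depth, length = 4, 8
data = list(range(depth * length))   # stand-in for four 8-symbol codewords
tx = interleave(data, depth, length)
tx[10:14] = ['X'] * 4                # a 4-symbol burst hits the channel
rx = deinterleave(tx, depth, length)

# After deinterleaving, no codeword (row) holds more than one bad symbol,
# which a single-error-correcting FEC code could then repair.
rows = [rx[i * length:(i + 1) * length] for i in range(depth)]
assert all(row.count('X') <= 1 for row in rows)
```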
Performing repetitive error detection in a superconducting quantum circuit
NASA Astrophysics Data System (ADS)
Kelly, J.; Barends, R.; Fowler, A.; Megrant, A.; Jeffrey, E.; White, T.; Sank, D.; Mutus, J.; Campbell, B.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Hoi, I.-C.; Neill, C.; O'Malley, P. J. J.; Roushan, P.; Quintana, C.; Vainsencher, A.; Wenner, J.; Cleland, A. N.; Martinis, J. M.
2015-03-01
Recently, there has been a large interest in the surface code error correction scheme, as gate and measurement fidelities are near the threshold. If error rates are sufficiently low, increased system size leads to suppression of logical error. We have combined high-fidelity gates and measurement in a single nine-qubit device, and use it to perform up to eight rounds of repetitive bit-error detection. We demonstrate suppression of environmentally induced error as compared to a single physical qubit, as well as reduced logical error rates with increasing system size.
Laboratory and field testing of improved geothermal rock bits
Hendrickson, R.R.; Jones, A.H.; Winzenried, R.W.; Maish, A.B.
1980-07-01
The development and testing of 222-mm (8-3/4-inch) unsealed, insert-type, medium-hard-formation, high-temperature bits are described. The new bits were fabricated by substituting improved materials in critical bit components. These materials were selected on the basis of their high-temperature properties, machinability, and heat-treatment response. Program objectives required that both machining and heat treating could be accomplished with existing rock-bit production equipment. Two types of experimental bits were subjected to laboratory air-drilling tests at 250°C (482°F) in cast iron. These tests indicated that field testing could be conducted without danger to the hole, and that bearing wear would be substantially reduced. Six additional experimental bits and eight conventional bits were then subjected to air drilling at 240°C (464°F) in Franciscan Graywacke at The Geysers, CA. The materials selected improved roller wear by 200%, friction-pin wear by 150%, and lug wear by 150%. Geysers drilling performances compared directly to conventional bits indicate that in-gage drilling life was increased by 70%. All bits at The Geysers are subjected to reaming out-of-gage hole prior to drilling. Under these conditions the experimental bits showed a 30% increase in usable hole over the conventional bits. These tests demonstrated a potential well-cost reduction of 4 to 8%. Savings of 12% are considered possible with drilling procedures optimized for the experimental bits.
Evaluation of Error-Correcting Codes for Radiation-Tolerant Memory
NASA Astrophysics Data System (ADS)
Jeon, S.; Vijaya Kumar, B. V. K.; Hwang, E.; Cheng, M. K.
2010-05-01
In space, radiation particles can introduce temporary or permanent errors in memory systems. To protect against potential memory faults, either thick shielding or error-correcting codes (ECC) are used by memory modules. Thick shielding translates into increased mass, and conventional ECCs designed for memories are typically capable of correcting only a single error and detecting a double error. Decoding is usually performed through hard decisions where bits are treated as either correct or flipped in polarity. We demonstrate that low-density parity-check (LDPC) codes that are already prevalent in many communication applications can also be used to protect memories in space. Because the achievable code rate monotonically decreases with time due to the accumulation of permanent errors, the achievable rate serves as a useful metric in designing an appropriate ECC. We describe how to compute soft symbol reliabilities on our channel and compare the performance of soft-decision decoding LDPC codes against conventional hard-decision decoding of Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes for a specific memory structure.
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
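The direct-from-syndrome idea can be illustrated with a much simpler single-error-correcting code. The sketch below is a Hamming(7,4) analogue (an assumption for illustration, not the extended RS construction of the paper): because each parity-check column equals its position index in binary, the syndrome value is literally the error location, so correction needs no iterative error-locator step.

```python
# Single-error correction by direct syndrome lookup, Hamming(7,4) style.
# Column j of the parity-check matrix is the binary expansion of j+1,
# so a single-bit error at position j yields syndrome value j+1.
H = [[int(b) for b in f"{j + 1:03b}"] for j in range(7)]  # column-wise storage

def syndrome(r):
    """Return the syndrome as an integer 0..7 (0 means no detected error)."""
    s = 0
    for bit_pos in range(3):
        parity = sum(r[j] & H[j][bit_pos] for j in range(7)) % 2
        s = (s << 1) | parity
    return s

def correct(r):
    """Correct at most one flipped bit: the syndrome IS the error position."""
    s = syndrome(r)
    r = list(r)
    if s:
        r[s - 1] ^= 1  # flip the erroneous bit directly, no iteration needed
    return r
```

The same "read the answer off the syndrome" principle is what the paper's high-speed SBEC/DBED decoders exploit, but over byte symbols in a Galois field.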
Reversible n-Bit to n-Bit Integer Haar-Like Transforms
Senecal, J; Duchaineau, M; Joy, K I
2003-11-03
We introduce a wavelet-like transform similar to the Haar transform, but with the properties that it packs the results into the same number of bits as the original data and is reversible. Our method, called TLHaar, uses table lookups to replace the averaging, differencing, and bit shifting performed in a Haar Integer Wavelet Transform (IWT). TLHaar maintains the same coefficient magnitude relationships for the low- and high-pass coefficients as true Haar, but reorders them to fit into the same number of bits as the input signal, thus eliminating the sign bit that is added to the Haar IWT output coefficients. Eliminating the sign bit avoids using extra memory and speeds the transform process. We tested TLHaar on a variety of image types; compared to the Haar IWT, TLHaar is significantly faster. For image data with lines or hard edges, TLHaar coefficients compress better than those of the Haar IWT. Due to its speed, TLHaar is suitable for streaming hardware implementations with fixed data sizes, such as DVI channels.
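For reference, the averaging/differencing step that TLHaar replaces with table lookups is the standard reversible integer Haar (S-transform) lifting step. A minimal sketch of that baseline (not the authors' TLHaar tables) shows why a sign bit appears: the difference coefficient d can be negative even when the inputs are unsigned.

```python
def haar_iwt(pairs):
    """One level of the integer Haar transform via lifting:
    s = floor((a + b) / 2), d = a - b. Exactly reversible over integers."""
    return [((a + b) >> 1, a - b) for a, b in pairs]

def haar_iwt_inverse(coeffs):
    """Invert the lifting step: a = s + floor((d + 1) / 2), b = a - d."""
    out = []
    for s, d in coeffs:
        a = s + ((d + 1) >> 1)  # arithmetic shift floors for negative d too
        out.append((a, a - d))
    return out
```

Python's `>>` floors toward negative infinity, which is exactly the rounding the lifting scheme requires, so the round trip is lossless for any integer inputs.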
Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors
Chan, Stanley H.; Elgendy, Omar A.; Wang, Xiran
2016-01-01
A quanta image sensor (QIS) is a class of single-photon imaging devices that measure light intensity using oversampled binary observations. Because of the stochastic nature of the photon arrivals, data acquired by QIS is a massive stream of random binary bits. The goal of image reconstruction is to recover the underlying image from these bits. In this paper, we present a non-iterative image reconstruction algorithm for QIS. Unlike existing reconstruction methods that formulate the problem from an optimization perspective, the new algorithm directly recovers the images through a pair of nonlinear transformations and an off-the-shelf image denoising algorithm. By skipping the usual optimization procedure, we achieve orders of magnitude improvement in speed and even better image reconstruction quality. We validate the new algorithm on synthetic datasets, as well as real videos collected by one-bit single-photon avalanche diode (SPAD) cameras. PMID:27879687
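The one-bit measurement model behind QIS can be sketched concretely: each binary observation reads 1 when at least one photon arrives, so for Poisson arrivals with rate λ the probability of a 1 is p = 1 - exp(-λ), and the light level can be recovered by inverting the truncation, λ̂ = -ln(1 - p̂). The stdlib simulation below is an illustrative per-pixel estimator, not the paper's transform-plus-denoising pipeline.

```python
import math
import random

def simulate_qis_bits(lam, n, rng):
    """n oversampled binary observations of one pixel: bit = 1 iff at
    least one photon arrives, so P(bit = 1) = 1 - exp(-lam)."""
    p_one = 1.0 - math.exp(-lam)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def estimate_intensity(bits):
    """Maximum-likelihood inversion of the one-bit truncation."""
    p_hat = sum(bits) / len(bits)
    p_hat = min(p_hat, 1.0 - 1e-12)  # guard against an all-ones block
    return -math.log(1.0 - p_hat)
```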
A 16-bit cascaded sigma-delta pipeline A/D converter
NASA Astrophysics Data System (ADS)
Liang, Li; Ruzhang, Li; Zhou, Yu; Jiabin, Zhang; Jun'an, Zhang
2009-05-01
A low-noise cascaded multi-bit sigma-delta pipeline analog-to-digital converter (ADC) with a low over-sampling rate is presented. The architecture is composed of a 2-order 5-bit sigma-delta modulator and a cascaded 4-stage 12-bit pipelined ADC, and operates at a low 8X oversampling rate. The static and dynamic performances of the whole ADC can be improved by using dynamic element matching technique. The ADC operates at a 4 MHz clock rate and dissipates 300 mW at a 5 V/3 V analog/digital power supply. It is developed in a 0.35 μm CMOS process and achieves an SNR of 82 dB.
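The core sigma-delta principle can be sketched with a first-order, 1-bit error-feedback modulator (a deliberately simpler stand-in for the paper's 2-order 5-bit design): the quantization error is carried forward in an accumulator, so the running average of the output bit-stream tracks the input.

```python
def sigma_delta_1st_order(x, n):
    """First-order 1-bit sigma-delta modulation of a constant input
    x in [0, 1), in error-feedback form: the accumulator holds the
    running quantization error, which stays bounded in [0, 1)."""
    acc, bits = 0.0, []
    for _ in range(n):
        acc += x
        if acc >= 1.0:
            bits.append(1)
            acc -= 1.0  # subtract the emitted value; error is retained
        else:
            bits.append(0)
    return bits
```

Decimating (averaging) the bit-stream recovers the input, which is why a low oversampling ratio like the paper's 8X must be compensated by a higher-order, multi-bit modulator plus a pipeline stage.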
Dynamical beam manipulation based on 2-bit digitally-controlled coding metasurface
Huang, Cheng; Sun, Bo; Pan, Wenbo; Cui, Jianhua; Wu, Xiaoyu; Luo, Xiangang
2017-01-01
Recently, a concept of digital metamaterials has been proposed to manipulate field distribution through proper spatial mixtures of digital metamaterial bits. Here, we present a design of 2-bit digitally-controlled coding metasurface that can effectively modulate the scattered electromagnetic wave and realize different far-field beams. Each meta-atom of this metasurface integrates two pin diodes, and by tuning their operating states, the metasurface has four phase responses of 0, π/2, π, and 3π/2, corresponding to four basic digital elements “00”, “01”, “10”, and “11”, respectively. By designing the coding sequence of the above digital element array, the reflected beam can be arbitrarily controlled. The proposed 2-bit digital metasurface has been demonstrated to possess capability of achieving beam deflection, multi-beam and beam diffusion, and the dynamical switching of these different scattering patterns is completed by a programmable electric source. PMID:28176870
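The beam-steering effect of a 2-bit coding sequence can be sketched with a one-dimensional array-factor calculation. In this toy model (element count and half-wavelength spacing are illustrative assumptions, not the paper's geometry), a progressive phase gradient of π/2 per element, which the four states {0, π/2, π, 3π/2} can represent exactly, steers the reflected peak to sin θ = Δφ·λ/(2π·d) = 0.5, i.e. 30°.

```python
import cmath
import math

def array_factor(phases, d_over_lambda, theta_deg):
    """|AF| of a uniform linear array whose elements reflect with the
    given per-element phases, observed at angle theta from broadside."""
    k_d = 2.0 * math.pi * d_over_lambda
    u = math.sin(math.radians(theta_deg))
    return abs(sum(cmath.exp(1j * (n * k_d * u + p))
                   for n, p in enumerate(phases)))

# 2-bit coding: digital states 0..3 map to phases 0, pi/2, pi, 3*pi/2.
codes = [n % 4 for n in range(16)]            # coding sequence "0123 0123 ..."
phases = [-c * math.pi / 2.0 for c in codes]  # progressive -pi/2 per element

# Scan the far field for the reflected beam direction.
peak = max(range(-90, 91), key=lambda t: array_factor(phases, 0.5, t))
```

Changing the coding sequence (e.g. the period of the gradient, or mixing blocks of different codes) redirects or splits the beam, which is what the programmable pin-diode states enable dynamically.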
Spin-glass models as error-correcting codes
NASA Astrophysics Data System (ADS)
Sourlas, Nicolas
1989-06-01
During the transmission of information, errors may occur because of the presence of noise, such as thermal noise in electronic signals or interference with other sources of radiation. One wants to recover the information with the minimum error possible. In theory this is possible by increasing the power of the emitter source. But as the cost is proportional to the energy fed into the channel, it costs less to code the message before sending it, thus including redundant 'coding' bits, and to decode at the end. Coding theory provides rigorous bounds on the cost-effectiveness of any code. The explicit codes proposed so far for practical applications do not saturate these bounds; that is, they do not achieve optimal cost-efficiency. Here we show that theoretical models of magnetically disordered materials (spin glasses) provide a new class of error-correction codes. Their cost performance can be calculated using the methods of statistical mechanics, and is found to be excellent. These models can, under certain circumstances, constitute the first known codes to saturate Shannon's well-known cost-performance bounds.
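Sourlas's construction can be sketched concretely: encode a message of Ising spins ξ ∈ {±1}^N as the pairwise products J_ij = ξ_i·ξ_j (the redundant 'coding' bits), transmit the J's through a noisy channel, and decode by finding the spin configuration that minimizes the spin-glass energy H(s) = -Σ J_ij s_i s_j. The toy decoder below uses exhaustive search, feasible only for tiny N; real decoding uses statistical-mechanics methods.

```python
from itertools import combinations, product

def encode(xi):
    """All pairwise couplings J_ij = xi_i * xi_j of the message spins."""
    return {(i, j): xi[i] * xi[j] for i, j in combinations(range(len(xi)), 2)}

def decode(J, n):
    """Ground state of H(s) = -sum J_ij s_i s_j by exhaustive search.
    Pinning s_0 = +1 removes the global spin-flip symmetry."""
    def energy(s):
        return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return min((s for s in product((-1, 1), repeat=n) if s[0] == 1),
               key=energy)
```

With a few couplings flipped by channel noise, the clean message remains the ground state because every coupling votes on a pair of spins, exactly the redundancy the abstract describes.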
NASA Astrophysics Data System (ADS)
Gao, Bindong; Zhang, Fangzheng; Pan, Shilong
2017-01-01
Arbitrary waveform generation by a serial photonic digital-to-analog converter (PDAC) is demonstrated in this paper. To construct the PDAC, an intensity weighted, time and wavelength interleaved optical pulse train is first generated by phase modulation and fiber dispersion. Then, on-off keying modulation of the optical pulses is implemented according to the input serial digital bits. After proper dispersion compensation, a combined optical pulse is obtained with its total power proportional to the weighted sum of the input digital bits, and digital-to-analog conversion is achieved after optical-to-electronic conversion. By properly designing the input bits and using a low pass filter for signal smoothing, arbitrary waveforms can be generated. Performance of the PDAC is experimentally investigated by establishing a 2.5 GSa/s 4-bit PDAC. The established PDAC is found to have a good linear transfer function and the effective number of bits (ENOB) reaches as high as 3.49. Based on the constructed PDAC, generation of multiple waveforms including triangular, parabolic, square and sawtooth pulses are implemented with the generated waveforms very close to the ideal waveforms.
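The serial-to-weighted-sum operation at the heart of the PDAC can be sketched numerically: each serial input bit gates an optical pulse whose intensity carries a binary weight, and the combined pulse power is the weighted sum of the bits. In the toy model below the optics are replaced by arithmetic and the power levels are normalized (both illustrative assumptions).

```python
def pdac_level(bits):
    """Combined pulse power for one serial word (MSB first): each bit
    gates a pulse of weight 2^k, and the gated pulse powers add."""
    weights = [2 ** k for k in range(len(bits) - 1, -1, -1)]
    return sum(b * w for b, w in zip(bits, weights))

def waveform(samples, n_bits=4):
    """Map a sequence of n-bit serial words to analog-like levels,
    as the PDAC does before low-pass smoothing."""
    return [pdac_level(word) for word in samples]
```

Designing the input word sequence shapes the staircase that the low-pass filter then smooths into triangular, parabolic, square, or sawtooth waveforms.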
Multi-Bit Embedding in Asymmetric Digital Watermarking without Exposing Secret Information
NASA Astrophysics Data System (ADS)
Okada, Mitsuo; Kikuchi, Hiroaki; Okabe, Yasuo
A new method of multi-bit embedding based on a protocol of secure asymmetric digital watermarking detection is proposed. Secure watermark detection allows a watermark verifier to detect a message without any secret information being exposed in the extraction process. Our methodology is based on an asymmetric property of a watermark algorithm which hybridizes a statistical watermark algorithm and a public-key algorithm. In 2004, Furukawa proposed a secure watermark detection scheme using patchwork watermarking and Paillier encryption, but its feasibility had not been tested in his work. We have examined it and have shown that it has a drawback of heavy overhead in processing time. We overcome the issue by replacing the cryptosystem with the modified El Gamal encryption and improve the processing-time performance. We have developed software implementations of both methods and have measured their effective performance. The obtained results show that the performance of our method is better than Furukawa's method under most practical conditions. In our method, multiple bits can be embedded by assigning a distinct generator to each bit, while the embedding algorithm of Furukawa's method assumes a single-bit message. This strongly enhances the capability of multi-bit information embedding, and also reduces communication and computation costs.
A novel bit-quad-based Euler number computing algorithm.
Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao
2015-01-01
The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
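The bit-quad idea these algorithms build on can be sketched with Gray's classic counting formula: slide a 2x2 window over the zero-padded binary image, count windows with exactly one foreground pixel (n1), exactly three (n3), and the two diagonal patterns (nd); then E = (n1 - n3 + 2·nd)/4 under 4-connectivity. The straightforward baseline below counts all the patterns; the paper's contribution, counting only two patterns with fewer pixel checks, is not reproduced here.

```python
def euler_number(img):
    """Euler number (4-connectivity) of a binary image via bit-quads:
    E = (n1 - n3 + 2 * nd) / 4, counted over every 2x2 window of the
    zero-padded image (img is a list of equal-length 0/1 row lists)."""
    h, w = len(img), len(img[0])
    pad = [[0] * (w + 2)] + [[0] + row + [0] for row in img] + [[0] * (w + 2)]
    n1 = n3 = nd = 0
    for i in range(h + 1):
        for j in range(w + 1):
            q = (pad[i][j], pad[i][j + 1], pad[i + 1][j], pad[i + 1][j + 1])
            s = sum(q)
            if s == 1:
                n1 += 1
            elif s == 3:
                n3 += 1
            elif q in ((1, 0, 0, 1), (0, 1, 1, 0)):  # the two diagonal quads
                nd += 1
    return (n1 - n3 + 2 * nd) // 4
```

For example, a 3x3 ring has one component and one hole, so its Euler number is 0, while a diagonal pixel pair counts as two components under 4-connectivity.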
Detachable shoe plates for large diameter drill bits
Bardwell, A.E.
1984-08-21
Shoe members and drill shank members for large diameter cable drilling bits are provided with a tongue on one of the members that projects axially relative to the drill shank member and with an arcuate lip and projecting stop on the other of the members to trap the tongue and prevent radial movement of the shoe member in response to radially directed forces caused by the spinning of the bit in drilling operations. Such forces would impose shear stresses on the fastening members that extend through the shoe member and axially into the drill shank. Four embodiments are disclosed: a spudding bit, two star bits and a scow bit.
NASA Astrophysics Data System (ADS)
Aarthi, G.; Prabu, K.; Reddy, G. Ramachandra
2017-02-01
The average spectral efficiency (ASE) is investigated for free space optical (FSO) communications employing On-Off keying (OOK), Polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems, with and without pointing errors, over Gamma-Gamma (GG) channels. Additionally, the impact of aperture averaging on the ASE is explored. The influence of different turbulence conditions along with varying receiver aperture has been studied and analyzed. For the considered system, exact average channel capacity (ACC) expressions are derived using the Meijer G function. Results reveal that when pointing errors are introduced, there is a significant reduction in the ASE performance. The ASE can be enhanced by increasing the receiver aperture across the various turbulence regimes and by reducing the beam radius in the presence of pointing errors, but the rate of ASE improvement decreases for larger diameters and finally saturates. Under strong turbulence, at an average transmitted optical power of 5 dBm and an aperture diameter of 10 cm, the coherent OWC system provides the best ASE performance: 49 bits/s/Hz without pointing errors and 34 bits/s/Hz with pointing errors.
Suboptimal greedy power allocation schemes for discrete bit loading.
Al-Hanafy, Waleed; Weiss, Stephan
2013-01-01
We consider low cost discrete bit loading based on greedy power allocation (GPA) under the constraints of total transmit power budget, target BER, and maximum permissible QAM modulation order. Compared to the standard GPA, which is optimal in terms of maximising the data throughput, three suboptimal schemes are proposed, which perform GPA on subsets of subchannels only. These subsets are created by considering the minimum SNR boundaries of QAM levels for a given target BER. We demonstrate how these schemes can significantly reduce the computational complexity required for power allocation, particularly in the case of a large number of subchannels. Two of the proposed algorithms can achieve near optimal performance including a transfer of residual power between subsets at the expense of a very small extra cost. By simulations, we show that the two near optimal schemes, while greatly reducing complexity, perform best in two separate and distinct SNR regions.
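The standard GPA baseline the paper starts from can be sketched directly: repeatedly grant the next bit to whichever subchannel needs the least incremental power to carry it, until the power budget or the maximum modulation order is reached. The sketch below assumes the usual SNR-gap power model p(b) = gap·(2^b - 1)/gain (an illustrative choice); the paper's subset-based complexity reductions are not shown.

```python
import heapq

def greedy_bit_loading(gains, budget, max_bits, gap=1.0):
    """Greedy power allocation: the power to carry b bits on subchannel i
    is gap * (2**b - 1) / gains[i]; always fund the cheapest next bit."""
    bits = [0] * len(gains)
    spent = 0.0
    # Heap of (incremental power for this subchannel's next bit, index).
    heap = [(gap * 1.0 / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    while heap:
        dp, i = heapq.heappop(heap)
        if spent + dp > budget:
            break  # the globally cheapest bit is unaffordable, so all are
        spent += dp
        bits[i] += 1
        if bits[i] < max_bits:
            nxt = gap * (2 ** (bits[i] + 1) - 2 ** bits[i]) / gains[i]
            heapq.heappush(heap, (nxt, i))
    return bits, spent
```

Because incremental costs per subchannel are increasing, stopping when the heap minimum exceeds the remaining budget is safe, which is what makes the simple loop throughput-optimal for this cost model.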
New Mechanisms of rock-bit wear in geothermal wells
Macini, Paolo
1996-01-24
This paper presents recent results of an investigation into the failure mode and wear of rock-bits used to drill geothermal wells located in the area of Larderello (Italy). A new wear mechanism, conceived from drilling records and dull-bit evaluation analysis, has been identified, and a particular configuration of rock-bit has been developed and tested in order to reduce drilling costs. The role of high Bottom Hole Temperature (BHT) on rock-bit performance seems not yet well understood: so far, only drillability and formation abrasiveness are generally considered to account for poor drilling performance. In this paper, the detrimental effects of high BHT on the sealing and reservoir system of Friction Bearing Rock-bits (FBR) have been investigated, and a new bearing wear pattern for FBRs run in high-BHT holes has been identified and further verified via laboratory inspections of dull bits. A novel interpretation of flat worn cutting structure has been derived from the above wear pattern, suggesting the design of a particular bit configuration. Test bits, designed in the light of the above criteria, have been prepared and field tested successfully. The paper reports the results of these tests, which yielded a new rock-bit application, today considered a standard practice in Italian geothermal fields. This application suggests that the correct evaluation of rock-bit wear can help to improve overall drilling performance and to minimize drilling problems through a better interpretation of the relationships amongst rock-bits, formation properties, and downhole temperature.
Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem
NASA Astrophysics Data System (ADS)
Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad
2013-12-01
In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where the source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are the flipped version of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. Joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, as long as the correlation ρ of the complex fading variation satisfies |ρ|<1, the 2nd-order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity
System Measures Errors Between Time-Code Signals
NASA Technical Reports Server (NTRS)
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
A Tunable, Software-based DRAM Error Detection and Correction Library for HPC
Fiala, David J; Ferreira, Kurt Brian; Mueller, Frank; Engelmann, Christian
2012-01-01
Proposed exascale systems will present a number of considerable resiliency challenges. In particular, DRAM soft-errors, or bit-flips, are expected to greatly increase due to the increased memory density of these systems. Current hardware-based fault-tolerance methods will be unsuitable for addressing the expected soft error frequency rate. As a result, additional software will be needed to address this challenge. In this paper we introduce LIBSDC, a tunable, transparent silent data corruption detection and correction library for HPC applications. LIBSDC provides comprehensive SDC protection for program memory by implementing on-demand page integrity verification. Experimental benchmarks with Mantevo HPCCG show that once tuned, LIBSDC is able to achieve SDC protection with 50% overhead of resources, less than the 100% needed for double modular redundancy.
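The on-demand page-integrity idea can be sketched with checksums: keep one digest per memory "page" and re-verify a page when it is touched; a mismatch flags a suspected silent bit-flip. The toy stdlib model below (pages as slices of a bytearray; page size and hash choice are illustrative assumptions) is far simpler than LIBSDC's actual mechanics.

```python
import hashlib

PAGE = 4096  # bytes per protected "page" (illustrative)

def page_hashes(buf):
    """One digest per page of the buffer."""
    return [hashlib.sha256(buf[off:off + PAGE]).digest()
            for off in range(0, len(buf), PAGE)]

def find_corrupt_pages(buf, hashes):
    """On-demand verification: indices of pages whose current contents no
    longer match their stored digest (suspected silent data corruption)."""
    return [i for i, h in enumerate(page_hashes(buf)) if h != hashes[i]]
```

The tunability in such a scheme comes from how often verification runs and how large the protected pages are, trading detection latency against overhead.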
NASA Technical Reports Server (NTRS)
Folkner, W. M.; Finger, M. H.
1990-01-01
Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.
New EEPROM concept for single bit operation
NASA Astrophysics Data System (ADS)
Raguet, J. R.; Laffont, R.; Bouchakour, R.; Bidal, V.; Regnier, A.; Mirabel, J. M.
2008-10-01
A new 0.56 μm² dual-gate EEPROM transistor is presented in this paper. To optimize the cell layout, a new model based on previous work has been developed. This concept allows single-bit memory operations with high density; new cell programming conditions have been defined to optimize electrical behavior. The concept has been validated in a standard EEPROM technology from STMicroelectronics and allows a cell area reduction of more than 50%. With appropriate potentials, the cell produces a programming window of 4 V. Moreover, in static mode this dual-gate transistor becomes an adjustable-threshold-voltage transistor which can be used in logic circuits or RFID applications.
Modeste Nguimdo, Romain; Tchitnga, Robert; Woafo, Paul
2013-12-15
We numerically investigate the possibility of using a coupling to increase the complexity in the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that the complex behaviors generated in such coupled systems, together with post-processing, are suitable for generating bit-streams which pass all the NIST tests for randomness. The electronic circuit is built up by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such a coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) on such chaotic signals, each point being simultaneously converted to 16 bits (or 8 bits), we find that the binary sequence constructed by including the 10 (or 2) least significant bits passes statistical tests of randomness, meaning that bit-streams with random properties can be achieved with an overall bit rate up to 10 × 100 Mb/s = 1 Gbit/s (or 2 × 100 Mb/s = 200 Mbit/s). Moreover, by varying the bias voltages, we also investigate the parameter range for which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap compared to optical and electro-optical systems.
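The extraction step can be sketched with a generic chaotic map standing in for the electronic circuit (an illustrative substitution, not the authors' model): sample the signal, quantize each sample to 16 bits, and keep only the least significant bits, which are the ones most sensitive to the chaotic dynamics and hence the most random-looking.

```python
def chaotic_bitstream(n_samples, keep_lsbs=8, x0=0.123456, r=3.99):
    """Quantize logistic-map samples to 16 bits and keep only the LSBs."""
    x, bits = x0, []
    for _ in range(n_samples):
        x = r * x * (1.0 - x)           # chaotic iteration (stand-in signal)
        q = int(x * 65535) & 0xFFFF     # 16-bit quantization of the sample
        for k in range(keep_lsbs - 1, -1, -1):
            bits.append((q >> k) & 1)   # discard the coarse high bits
    return bits
```

A quick sanity check is the monobit balance of the stream; real validation would run the full NIST suite as the paper does.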
A two-dimensional coding design for staggered islands bit-patterned media recording
NASA Astrophysics Data System (ADS)
Arrayangkool, A.; Warisarn, C.
2015-05-01
This paper proposes a two-dimensional (2D) staggered recorded-bit patterning (SRBP) coding scheme for a staggered-array bit-patterned media recording channel to alleviate severe 2D interference; the scheme requires no redundant bits, at the expense of additional memory. Specifically, a data sequence is first split into three tracks. Then, each data track is circularly shifted to find the best data pattern based on a look-up table before recording, such that the shifted data tracks cause the lowest 2D interference in the readback signal. Simulation results indicate that the system with our proposed SRBP scheme outperforms the system without any 2D coding, especially when the areal density (AD) is high and/or the position jitter is large. Specifically, for the system without position jitter at a bit-error rate of 10^-4, the proposed scheme provides about 1.8 and 2.3 dB gains at ADs of 2.5 and 3.0 Tb/in², respectively.
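The shift-search step can be sketched as follows: given three parallel tracks and a cost function scoring cross-track interactions, try circular shifts of the outer tracks and keep the combination with the lowest cost. The cost below simply counts cross-track neighbour disagreements, an invented stand-in for the paper's look-up table of harmful 2D interference patterns; only the shifts, not extra bits, carry the side information.

```python
def rotate(track, s):
    """Circular shift of a track by s positions."""
    return track[s:] + track[:s]

def interference(tracks):
    """Toy 2D-interference proxy: cross-track neighbour disagreements.
    (The paper scores patterns via a look-up table instead.)"""
    t0, t1, t2 = tracks
    return (sum(a != b for a, b in zip(t0, t1)) +
            sum(a != b for a, b in zip(t1, t2)))

def srbp_encode(t0, t1, t2):
    """Pick circular shifts of the outer tracks minimizing the proxy cost."""
    best = min(((interference((rotate(t0, s0), t1, rotate(t2, s2))), s0, s2)
                for s0 in range(len(t0)) for s2 in range(len(t2))),
               key=lambda x: x[0])
    _, s0, s2 = best
    return rotate(t0, s0), t1, rotate(t2, s2), (s0, s2)
```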
Object tracking based on bit-planes
NASA Astrophysics Data System (ADS)
Li, Na; Zhao, Xiangmo; Liu, Ying; Li, Daxiang; Wu, Shiqian; Zhao, Feng
2016-01-01
Visual object tracking is one of the most important components in computer vision. The main challenge for robust tracking is to handle illumination change, appearance modification, occlusion, motion blur, and pose variation. But in surveillance videos, factors such as low resolution, high levels of noise, and uneven illumination further increase the difficulty of tracking. To tackle this problem, an object tracking algorithm based on bit-planes is proposed. First, intensity and local binary pattern features represented by bit-planes are used to build two appearance models, respectively. Second, in the neighborhood of the estimated object location, a region that is most similar to the models is detected as the tracked object in the current frame. In the last step, the appearance models are updated with new tracking results in order to deal with environmental and object changes. Experimental results on several challenging video sequences demonstrate the superior performance of our tracker compared with six state-of-the-art tracking algorithms. Additionally, our tracker is more robust to low resolution, uneven illumination, and noisy video sequences.
Progress in the Advanced Synthetic-Diamond Drill Bit Program
Glowka, D.A.; Dennis, T.; Le, Phi; Cohen, J.; Chow, J.
1995-11-01
Cooperative research is currently underway among five drill bit companies and Sandia National Laboratories to improve synthetic-diamond drill bits for hard-rock applications. This work, sponsored by the US Department of Energy and individual bit companies, is aimed at improving performance and bit life in harder rock than has previously been possible to drill effectively with synthetic-diamond drill bits. The goal is to extend to harder rocks the economic advantages seen in using synthetic-diamond drill bits in soft and medium rock formations. Four projects are being conducted under this research program. Each project is investigating a different area of synthetic diamond bit technology that builds on the current technology base and market interests of the individual companies involved. These projects include: optimization of the PDC claw cutter; optimization of the Track-Set PDC bit; advanced TSP bit development; and optimization of impregnated-diamond drill bits. This paper describes the progress made in each of these projects to date.
A fast rise-rate, adjustable-mass-bit gas puff valve for energetic pulsed plasma experiments
Loebner, Keith T. K.; Underwood, Thomas C.; Cappelli, Mark A.
2015-06-15
A fast rise-rate, variable mass-bit gas puff valve based on the diamagnetic repulsion principle was designed, built, and experimentally characterized. The ability to hold the pressure rise-rate nearly constant while varying the total overall mass bit was achieved via a movable mechanical restrictor that is accessible while the valve is assembled and pressurized. The rise-rates and mass-bits were measured via piezoelectric pressure transducers for plenum pressures between 10 and 40 psig and restrictor positions of 0.02-1.33 cm from the bottom of the linear restrictor travel. The mass-bits were found to vary linearly with the restrictor position at a given plenum pressure, while rise-rates varied linearly with plenum pressure but exhibited low variation over the range of possible restrictor positions. The ability to change the operating regime of a pulsed coaxial plasma deflagration accelerator by means of altering the valve parameters is demonstrated.
Image Steganography using Karhunen-Loève Transform and Least Bit Substitution
NASA Astrophysics Data System (ADS)
Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray
2013-10-01
As communication channels increase in number, the reliability of faithful communication is decreasing. Hacking and tampering with data are two major issues against which the channel should provide security. This raises the importance of steganography. In this paper, a novel method to encode message information inside a carrier image is described. It uses the Karhunen-Loève Transform for compression of data and Least Bit Substitution for data encryption. Compression removes redundancy and thus also provides a level of encoding, which is taken further by means of Least Bit Substitution. The algorithm used for this purpose operates on the pixel matrix, which serves as a convenient representation to work on. Three different sets of images were used, with three different numbers of bits substituted by message information. The experimental results show that the algorithm is time efficient and provides high data capacity. Further, it can decrypt the original data effectively. Parameters such as carrier error and message error were calculated for each set and compared for performance analysis.
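The Least Bit Substitution step can be sketched independently of the KLT compression stage: write each message bit into the least significant bit of successive pixel values, and read them back by masking. This is the single-LSB variant; the paper also varies the number of substituted bits per pixel.

```python
def embed_lsb(pixels, message_bits):
    """Overwrite the LSB of each carrier pixel with one message bit;
    each pixel value changes by at most 1, keeping distortion low."""
    assert len(message_bits) <= len(pixels), "carrier too small"
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it
    return out

def extract_lsb(pixels, n_bits):
    """Read the message back from the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]
```

Substituting k low bits per pixel multiplies capacity by k at the cost of a per-pixel error of up to 2^k - 1, which is the capacity/carrier-error trade-off the paper measures.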
van der Palen, Job; Thomas, Mike; Chrystyn, Henry; Sharma, Raj K; van der Valk, Paul DLPM; Goosens, Martijn; Wilkinson, Tom; Stonham, Carol; Chauhan, Anoop J; Imber, Varsha; Zhu, Chang-Qing; Svedsater, Henrik; Barnes, Neil C
2016-01-01
Errors in the use of different inhalers were investigated in patients naive to the devices under investigation in a multicentre, single-visit, randomised, open-label, cross-over study. Patients with chronic obstructive pulmonary disease (COPD) or asthma were assigned to ELLIPTA vs DISKUS (Accuhaler), metered-dose inhaler (MDI) or Turbuhaler. Patients with COPD were also assigned to ELLIPTA vs Handihaler or Breezhaler. Patients demonstrated inhaler use after reading the patient information leaflet (PIL). A trained investigator assessed critical errors (i.e., those likely to result in the inhalation of significantly reduced, minimal or no medication). If the patient made errors, the investigator demonstrated the correct use of the inhaler, and the patient demonstrated inhaler use again. Fewer COPD patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS, 9/171 (5%) vs 75/171 (44%); MDI, 10/80 (13%) vs 48/80 (60%); Turbuhaler, 8/100 (8%) vs 44/100 (44%); Handihaler, 17/118 (14%) vs 57/118 (48%); Breezhaler, 13/98 (13%) vs 45/98 (46%; all P<0.001). Most patients (57–70%) made no errors using ELLIPTA and did not require investigator instruction. Instruction was required for DISKUS (65%), MDI (85%), Turbuhaler (71%), Handihaler (62%) and Breezhaler (56%). Fewer asthma patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS (3/70 (4%) vs 9/70 (13%), P=0.221); MDI (2/32 (6%) vs 8/32 (25%), P=0.074) and significantly fewer vs Turbuhaler (3/60 (5%) vs 20/60 (33%), P<0.001). More asthma and COPD patients preferred ELLIPTA over the other devices (all P⩽0.002). Significantly, fewer COPD patients using ELLIPTA made critical errors after reading the PIL vs other inhalers. More asthma and COPD patients preferred ELLIPTA over comparator inhalers. PMID:27883002
Computer Series, 17: Bits and Pieces, 5.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1981-01-01
Contains short descriptions of computer programs or hardware that simulate laboratory instruments or results of kinetics experiments, including ones that include experiment error, numerical simulation, first-order kinetic mechanisms, a game for decisionmaking, and simulated mass spectrophotometers. (CS)
Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.
Huang, Shih-Chia; Chen, Bo-Hao
2013-12-01
Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffer from either network congestion or unstable bandwidth; evidence of these problems abounds in publications on wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which the bit-rate is allocated to match the available network bandwidth is necessitated. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach based on the cerebellar model articulation controller (CMAC) artificial neural network to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To accommodate the properties of variable bit-rate video streams, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach performs with higher efficacy than other state-of-the-art approaches in variable bit-rate video streams over real-world limited-bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76
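The paper's PBG module is a learned CMAC-based model; as a hedged stand-in, the sketch below uses a simple exponential running-average background with threshold-based detection to illustrate the background-generation / object-detection split. The function names and the alpha and threshold values are illustrative assumptions, not the paper's method:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model: a simple illustrative
    stand-in for the paper's unsupervised PBG module (not the CMAC network)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def detect_moving(bg, frame, threshold=20):
    """Mark pixels whose deviation from the background exceeds a threshold."""
    return [abs(f - b) > threshold for f, b in zip(frame, bg)]

bg = [100.0] * 5
frame = [100, 101, 180, 99, 100]   # one pixel jumps: a "moving object"
mask = detect_moving(bg, frame)
assert mask == [False, False, True, False, False]
bg = update_background(bg, frame)  # background slowly absorbs the new frame
```

A learned probabilistic model replaces the fixed threshold in the actual approach, which is what lets it tolerate the quality swings of variable bit-rate streams.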
An error control system with multiple-stage forward error corrections
NASA Technical Reports Server (NTRS)
Takata, Toyoo; Fujiwara, Toru; Kasami, Tadao; Lin, Shu
1990-01-01
A robust error-control coding system is presented. This system is a cascaded FEC (forward error control) scheme supported by parity retransmissions for further error correction in the erroneous data words. The error performance and throughput efficiency of the system are analyzed. Two specific examples of the error-control system are studied. The first example does not use an inner code, and the outer code, which is not interleaved, is a shortened code of the NASA standard RS code over GF(2^8). The second example, as proposed for NASA, uses the same shortened RS code as the base outer code C2, except that it is interleaved to a depth of 2. It is shown that both examples provide high reliability and throughput efficiency even for high channel bit-error rates in the range of 0.01.
Proposed first-generation WSQ bit allocation procedure
Bradley, J.N.; Brislawn, C.M.
1993-09-08
The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. In previous work a specific formula was provided for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands; an explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival-quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.
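As a rough illustration of the role of the proportionality constant q, the sketch below allocates bin widths inversely proportional to each subband's standard deviation. The actual first-generation WSQ formula and its r control parameter differ, so the scaling rule and names here are assumptions for illustration only:

```python
import math

def bin_widths(subband_variances, q):
    """Illustrative scalar-quantizer bin-width allocation: each width is
    inversely proportional to the subband standard deviation, scaled by a
    global constant q. (The actual WSQ allocation formula differs; this
    only shows how one constant sets all absolute bin widths at once.)"""
    return [q / math.sqrt(v) for v in subband_variances]

# Higher-variance subbands get finer quantization (smaller bins).
variances = [400.0, 100.0, 25.0]
widths = bin_widths(variances, q=2.0)
assert widths == [0.1, 0.2, 0.4]
```

Sweeping q (or, in the report's procedure, the parameter r) trades distortion against rate: larger values widen every bin simultaneously, coarsening all subbands in proportion.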
Rearrangement and Grouping of Data Bits for Efficient Lossless Encoding
NASA Astrophysics Data System (ADS)
B, Ajitha Shenoy K.; Ajith, Meghana; Mantoor, Vinayak M.
2017-01-01
This paper describes the efficacy of rearranging and grouping data bits. Lossless encoding techniques such as Huffman coding and arithmetic coding work well on data that contains redundant information: the idea behind these techniques is to encode more frequently occurring symbols with fewer bits and rarer symbols with more bits. Most of these methods fail on non-redundant data. We propose a method to rearrange and group data bits, thereby making the data redundant, so that different lossless encoding techniques can then be applied. In this paper we propose three different methods to rearrange the data bits and an efficient way of grouping them; this is the first such attempt. We also justify the need for rearranging and grouping data bits for efficient lossless encoding.
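One way rearranging bits can manufacture redundancy is bit-plane grouping: collecting the same bit position of every byte produces long runs whenever some positions are biased. This is an illustrative rearrangement, not necessarily one of the paper's three methods:

```python
from collections import Counter

def group_by_bit_position(data, width=8):
    """Rearrange bits: emit bit position width-1 of every byte, then the
    next position, and so on. Biased positions (e.g. the high bits of
    uniformly small values) become long runs, giving an entropy coder
    redundancy to exploit."""
    bits = []
    for pos in range(width - 1, -1, -1):
        for byte in data:
            bits.append((byte >> pos) & 1)
    return bits

data = [3, 1, 2, 3]                 # small values: bits 7..2 are always 0
bits = group_by_bit_position(data)
counts = Counter(bits[:24])         # the six high bit-planes, grouped together
assert counts[1] == 0               # a run of 24 zeros, trivially compressible
```

After this rearrangement a run-length, Huffman, or arithmetic coder sees highly skewed symbol statistics even though the original byte stream looked nearly incompressible.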
Temperature-compensated 8-bit column driver for AMLCD
NASA Astrophysics Data System (ADS)
Dingwall, Andrew G. F.; Lin, Mark L.
1995-06-01
An all-digital, 5 V input, 50 MHz bandwidth, 10-bit resolution, 128-column AMLCD column driver IC has been designed and tested. The 10-bit design can enhance display definition over 6-bit and 8-bit column drivers. Precision is realized with on-chip switched-capacitor DACs plus transparently auto-offset-calibrated opamp outputs. The increased resolution permits multiple 10-bit digital gamma remappings in EPROMs over temperature. Driver IC features include an externally programmable number of output columns, bi-directional digital data shifting, user-defined row/column/pixel/frame inversion, power management, timing control for daisy-chained column drivers, and digital bit inversion. The architecture uses fewer reference power supplies.
Performance of 1D quantum cellular automata in the presence of error
NASA Astrophysics Data System (ADS)
McNally, Douglas M.; Clemens, James P.
2016-09-01
This work expands a previous block-partitioned quantum cellular automata (BQCA) model proposed by Brennen and Williams [Phys. Rev. A. 68, 042311 (2003)] to incorporate physically realistic error models. These include timing errors in the form of over- and under-rotations of quantum states during computational gate sequences, stochastic phase and bit flip errors, as well as undesired two-bit interactions occurring during single-bit gate portions of an update sequence. A compensation method to counteract the undesired pairwise interactions is proposed and investigated. Each of these error models is implemented using Monte Carlo simulations for stochastic errors and modifications to the prescribed gate sequences to account for coherent over-rotations. The impact of these various errors on the function of a QCA gate sequence is evaluated using the fidelity of the final state calculated for four quantum information processing protocols of interest: state transfer, state swap, GHZ state generation, and entangled pair generation.
NASA Astrophysics Data System (ADS)
Schmanske, Brian M.; Loew, Murray H.
2003-05-01
A technique for assessing the impact of lossy wavelet-based image compression on signal detection tasks is presented. A medical image's value is based on its ability to support clinical decisions such as detecting and diagnosing abnormalities. Image quality of compressed images is, however, often stated in terms of mathematical metrics such as mean square error. The presented technique provides a more suitable measure of image degradation by building on the channelized Hotelling observer model, which has been shown to predict human performance of signal detection tasks in noise-limited images. The technique first decomposes an image into its constituent wavelet subband coefficient bit-planes. Channel responses for the individual subband bit-planes are computed, combined, and processed with a Hotelling observer model to provide a measure of signal detectability versus compression ratio. This allows a user to determine how much compression can be tolerated before signal detectability drops below a certain threshold.
Duong, T A; Stubberud, A R
2000-06-01
In this paper, we present a mathematical foundation, including a convergence analysis, for the cascade architecture neural network. Our analysis shows that convergence of the cascade architecture neural network is assured because it satisfies Liapunov criteria in an added-hidden-unit domain rather than in the time domain. From this analysis, a mathematical foundation for the cascade correlation learning algorithm can be found, and it becomes apparent that the cascade correlation scheme is a special case of the analysis, from which an efficient hardware learning algorithm called Cascade Error Projection (CEP) is proposed. CEP provides efficient learning in hardware and is faster to train, because some of the weights are obtained deterministically, and the learning of the remaining weights from the inputs to the hidden unit is performed as single-layer perceptron learning with the previously determined weights kept frozen. In addition, one can start with zero weight values (rather than random finite weight values) when the learning of each layer commences. Further, unlike the cascade correlation algorithm (where a pool of candidate hidden units is added), only a single hidden unit is added at a time; simplicity in hardware implementation is thereby also achieved. Finally, 5- to 8-bit parity and chaotic time-series prediction problems are investigated; the simulation results demonstrate that 4-bit or greater weight quantization is sufficient for learning a neural network using CEP. It is also demonstrated that this technique can compensate for lower bit weight resolution by incorporating additional hidden units, although generalization results may suffer somewhat with lower-bit weight quantization.
Second quantization in bit-string physics
NASA Technical Reports Server (NTRS)
Noyes, H. Pierre
1993-01-01
Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one particle Dirac equation as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c are derived. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. How these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.
Quantum Bit Commitment with a Composite Evidence
NASA Astrophysics Data System (ADS)
Srikanth, R.
2004-01-01
Entanglement-based attacks, which are subtle and powerful, are usually believed to render quantum bit commitment insecure. We point out that the no-go argument leading to this view implicitly assumes the evidence-of-commitment to be a monolithic quantum system. We argue that more general evidence structures, allowing for a composite, hybrid (classical-quantum) evidence, conduce to improved security. In particular, we present and prove the security of the following protocol: Bob sends Alice an anonymous state. She inscribes her commitment b by measuring part of it in the + (for b = 0) or × (for b = 1) basis. She then communicates to him the (classical) measurement outcome Rx and the part-measured anonymous state interpolated into other, randomly prepared qubits as her evidence-of-commitment.
A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures
NASA Astrophysics Data System (ADS)
Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran
2007-02-01
H.264's extensive use of context-based adaptive binary arithmetic or variable-length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed-length resynchronisation point; in the worst case, an entire slice is lost. In cases where retransmission and forward error concealment are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock, which, if aborted, frees otherwise unused processor cycles; these can instead be used to conceal errors at minimal cost, even as part of a real-time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared to the commonly used weighted pixel value averaging. Furthermore, it is computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.
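The neighbourhood-analysis step can be sketched with the Sobel kernels themselves. This toy gradient probe is illustrative only, not the paper's full concealment algorithm; in the actual scheme, gradients measured around a lost macroblock weight the directions used for interpolation:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_at(img, r, c):
    """Apply the horizontal and vertical Sobel kernels at pixel (r, c)
    to estimate the local edge gradient of the surrounding neighbourhood."""
    gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    return gx, gy

# A vertical edge: intensity jumps between columns 1 and 2.
img = [[10, 10, 90, 90],
       [10, 10, 90, 90],
       [10, 10, 90, 90]]
gx, gy = sobel_at(img, 1, 1)
assert gx > 0 and gy == 0   # strong horizontal gradient, no vertical component
```

Interpolating a concealed block along the detected edge direction, rather than averaging all neighbours equally, is what preserves edge continuity through the lost region.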
Development and testing of a Mudjet-augmented PDC bit.
Black, Alan; Chahine, Georges; Raymond, David Wayne; Matthews, Oliver; Grossman, James W.; Bertagnolli, Ken (US Synthetic); Vail, Michael
2006-01-01
This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.
Quantum bit commitment with cheat sensitive binding and approximate sealing
NASA Astrophysics Data System (ADS)
Li, Yan-Bing; Xu, Sheng-Wei; Huang, Wei; Wan, Zong-Jie
2015-04-01
This paper proposes a cheat-sensitive quantum bit commitment scheme based on single photons, in which Alice commits a bit to Bob. Here, Bob's probability of successfully cheating by obtaining the committed bit before the opening phase approaches 1/2 (no better than a random guess) as the number of single photons used is increased. And if Alice alters her committed bit after the commitment phase, her cheating will be detected with a probability that approaches 1 as the number of single photons used is increased. The scheme is easy to realize with present-day technology.
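The cheat-sensitivity claim follows from elementary probability: if each of the n photons independently reveals Alice's alteration with some per-photon probability p, detection approaches certainty as n grows. The value of p below is a generic placeholder, not a figure from the paper:

```python
def detection_probability(p_single, n):
    """Probability that Alice's altered commitment is caught when each of
    n single photons independently reveals the change with probability
    p_single. The scheme's binding rests on this tending to 1 with n."""
    return 1 - (1 - p_single) ** n

assert detection_probability(0.25, 1) == 0.25
assert detection_probability(0.25, 50) > 0.999   # near-certain detection
```

The same exponential behaviour, in the other direction, drives Bob's guessing advantage toward zero as photons are added.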
Improved seal for geothermal drill bit. Final technical report
Evans, R.F.
1984-07-06
Each of the two field test bits showed some promise though their performances were less than commercially acceptable. The Ohio test bit ran just over 3000 feet where about 4000 is considered a good run but it was noted that a Varel bit of the same type having a standard O ring seal was completely worn out after 8-1/2 hours (1750 feet drilled). The Texas test bit had good seal-bearing life but was the wrong cutting structure type for the formation being drilled and the penetration rate was low.
PDC (polycrystalline diamond compact) bit research at Sandia National Laboratories
Finger, J.T.; Glowka, D.A.
1989-06-01
From the beginning of the geothermal development program, Sandia has performed and supported research into polycrystalline diamond compact (PDC) bits. These bits are attractive because they are intrinsically efficient in their cutting action (shearing, rather than crushing) and they have no moving parts (eliminating the problems of high-temperature lubricants, bearings, and seals.) This report is a summary description of the analytical and experimental work done by Sandia and our contractors. It describes analysis and laboratory tests of individual cutters and complete bits, as well as full-scale field tests of prototype and commercial bits. The report includes a bibliography of documents giving more detailed information on these topics. 26 refs.
BitPredator: A Discovery Algorithm for BitTorrent Initial Seeders and Peers
Borges, Raymond; Patton, Robert M; Kettani, Houssain; Masalmah, Yahya
2011-01-01
There is a large amount of illegal content being replicated through peer-to-peer (P2P) networks where BitTorrent is dominant; therefore, a framework to profile and police it is needed. The goal of this work is to explore the behavior of initial seeds and highly active peers to develop techniques to correctly identify them. We intend to establish a new methodology and software framework for profiling BitTorrent peers. This involves three steps: crawling torrent indexers for keywords in recently added torrents using Really Simple Syndication protocol (RSS), querying torrent trackers for peer list data and verifying Internet Protocol (IP) addresses from peer lists. We verify IPs using active monitoring methods. Peer behavior is evaluated and modeled using bitfield message responses. We also design a tool to profile worldwide file distribution by mapping IP-to-geolocation and linking to WHOIS server information in Google Earth.
Experimental test of Landauer’s principle in single-bit operations on nanomagnetic memory bits
Hong, Jeongmin; Lambson, Brian; Dhuey, Scott; Bokor, Jeffrey
2016-01-01
Minimizing energy dissipation has emerged as the key challenge in continuing to scale the performance of digital computers. The question of whether there exists a fundamental lower limit to the energy required for digital operations is therefore of great interest. A well-known theoretical result put forward by Landauer states that any irreversible single-bit operation on a physical memory element in contact with a heat bath at a temperature T requires at least kBT ln(2) of heat be dissipated from the memory into the environment, where kB is the Boltzmann constant. We report an experimental investigation of the intrinsic energy loss of an adiabatic single-bit reset operation using nanoscale magnetic memory bits, by far the most ubiquitous digital storage technology in use today. Through sensitive, high-precision magnetometry measurements, we observed that the amount of dissipated energy in this process is consistent (within 2 SDs of experimental uncertainty) with the Landauer limit. This result reinforces the connection between “information thermodynamics” and physical systems and also provides a foundation for the development of practical information processing technologies that approach the fundamental limit of energy dissipation. The significance of the result includes insightful direction for future development of information technology. PMID:26998519
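The kBT ln(2) bound itself is straightforward to evaluate; a minimal sketch at room temperature (the 300 K figure is a conventional choice, not a value from the experiment):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature):
    """Minimum heat dissipated by an irreversible single-bit erasure at
    the given absolute temperature: k_B * T * ln(2)."""
    return K_B * temperature * math.log(2)

# At room temperature the bound is a few zeptojoules per bit.
e = landauer_limit(300.0)
assert 2.8e-21 < e < 2.9e-21
```

The tiny scale of this number, roughly seven orders of magnitude below the switching energy of today's logic, is why approaching the Landauer limit matters for the long-term scaling discussed in the abstract.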
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations needed for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
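The quoted one-over-square-root-of-N noise scaling can be checked against the measured figures. The sampling count inferred below is an assumption derived from the abstract's numbers, not a value the abstract states:

```python
import math

def expected_noise(single_sample_noise, n_samples):
    """Averaging n uncorrelated samples reduces random noise by 1/sqrt(n)."""
    return single_sample_noise / math.sqrt(n_samples)

# Measured figures from the abstract: 848.3 uV single-conversion noise,
# 270.4 uV after multiple sampling. Back out the implied sampling count.
reduction = 848.3 / 270.4
n = round(reduction ** 2)      # implied number of samplings (an inference)
assert n == 10
assert abs(expected_noise(848.3, n) - 270.4) < 10   # consistent with 1/sqrt(n)
```

The fast 4-bit re-conversions are what make taking this many samples affordable within the shortened 0.45 us conversion window.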
Trellis coded modulation for 4800-9600 bits/s transmission over a fading mobile satellite channel
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1987-01-01
The combination of trellis coding and multiple phase-shift-keyed (MPSK) signaling with the addition of asymmetry to the signal set is discussed with regard to its suitability as a modulation/coding scheme for the fading mobile satellite channel. For MPSK, introducing nonuniformity (asymmetry) into the spacing between signal points in the constellation buys a further improvement in performance over that achievable with trellis coded symmetric MPSK, all this without increasing average or peak power, or changing the bandwidth constraints imposed on the system. Whereas previous contributions have considered the performance of trellis coded modulation transmitted over an additive white Gaussian noise (AWGN) channel, the emphasis in the paper is on the performance of trellis coded MPSK in the fading environment. The results will be obtained by using a combination of analysis and simulation. It will be assumed that the effect of the fading on the phase of the received signal is fully compensated for either by tracking it with some form of phase-locked loop or with pilot tone calibration techniques. Thus, results will reflect only the degradation due to the effect of the fading on the amplitude of the received signal. Also, we shall consider only the case where interleaving/deinterleaving is employed to further combat the fading. This allows for considerable simplification of the analysis and is of great practical interest. Finally, the impact of the availability of channel state information on average bit error probability performance is assessed.
A CMOS switch-capacitor 14-bit 100 Msps pipeline ADC with over 90 dB SFDR
NASA Astrophysics Data System (ADS)
Cai, Hua; Li, Ping
2013-01-01
This article presents a design of 14-bit 100 Msamples/s pipelined analog-to-digital converter (ADC) implemented in 0.18 µm CMOS. A charge-sharing correction (CSC) is proposed to remove the input-dependent charge-injection, along with a floating-well bulk-driven technique, a fast-settling reference generator and a low-jitter clock circuit, guaranteeing the high dynamic performance of the ADC. A scheme of background calibration minimises the error due to the capacitor mismatch and opamp non-ideality, ensuring the overall linearity. The measured results show that the prototype ADC achieves spurious-free dynamic range (SFDR) of 91 dB, signal-to-noise-and-distortion ratio (SNDR) of 73.1 dB, differential nonlinearity (DNL) of +0.61/-0.57 LSB and integrated nonlinearity (INL) of +1.1/-1.0 LSB at 30 MHz input and maintains over 78 dB SFDR and 65 dB SNDR up to 425 MHz, consuming 223 mW totally.
Design and implementation of low power clock gated 64-bit ALU on ultra scale FPGA
NASA Astrophysics Data System (ADS)
Gupta, Ashutosh; Murgai, Shruti; Gulati, Anmol; Kumar, Pradeep
2016-03-01
A 64-bit energy-efficient Arithmetic and Logic Unit using a negative-latch-based clock gating technique is designed in this paper. The 64-bit ALU is designed using a multiplexer-based full adder cell. We have used a negative-latch-based circuit to generate a gated clock, which controls the multiplexer-based 64-bit ALU. The circuit has been synthesized on a Kintex FPGA through Xilinx ISE Design Suite 14.7 using 28 nm technology in Verilog HDL and simulated on ModelSim 10.3c; the design is verified using SystemVerilog on QuestaSim in a UVM environment. We have achieved 74.07%, 92.93% and 95.53% reduction in total clock power, 89.73%, 91.35% and 92.85% reduction in I/O power, 67.14%, 62.84% and 74.34% reduction in dynamic power and 25.47%, 29.05% and 46.13% reduction in total supply power at 20 MHz, 200 MHz and 2 GHz, respectively. The power has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite 14.3.
10-bit segmented current steering DAC in 90nm CMOS technology
NASA Astrophysics Data System (ADS)
Bringas, R., Jr.; Dy, F.; Gerasta, O. J.
2015-06-01
This special project presents a 10-bit, 1 GS/s, 1.2 V/3.3 V digital-to-analog converter using a 1-poly 9-metal SAED 90 nm CMOS technology, intended for mixed-signal and power IC applications. To achieve maximum performance with minimum area, the DAC has been implemented with 6+4 segmentation. The simulation results show a static performance of ±0.56 LSB INL and ±0.79 LSB DNL with a total layout chip area of 0.683 mm². The segmented architecture is implemented using two sub-DACs, an LSB section and an MSB section, each handling a certain number of bits: a 4-bit binary-weighted DAC for the LSB section and a 6-bit thermometer-coded DAC for the MSB section. The thermometer-coded architecture provides the most optimized results in terms of linearity by reducing the clock feed-through effect, especially during hot switching between multiple transistors. The binary-weighted architecture gives better linearity at higher frequencies, with better saturation in the current sources.
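The 6+4 segmentation can be sketched as a digital encoding step: the upper six bits select thermometer-coded unit current sources while the lower four drive the binary-weighted sub-DAC. The function name and bit packing below are illustrative, not taken from the paper:

```python
def segment_code(value):
    """Split a 10-bit input into a 6-bit thermometer-coded MSB section and
    a 4-bit binary LSB section, as in a 6+4 segmented DAC."""
    assert 0 <= value < 1024
    msb = value >> 4               # upper 6 bits: how many unit sources to enable
    lsb = value & 0xF              # lower 4 bits: binary-weighted section
    thermometer = (1 << msb) - 1   # msb ones, e.g. 3 -> 0b111
    return thermometer, lsb

therm, lsb = segment_code(0b1000110101)
assert bin(therm).count("1") == 0b100011   # 35 unit current sources on
assert lsb == 0b0101
```

Because each increment of the MSB section turns on exactly one more unit source, the thermometer code is inherently monotonic, which is the linearity advantage the abstract cites.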
A Contourlet-Based Embedded Image Coding Scheme on Low Bit-Rate
NASA Astrophysics Data System (ADS)
Song, Haohao; Yu, Songyu
Contourlet transform (CT) is a new image representation method which can efficiently represent contours and textures in images. However, CT is an overcomplete transform with a redundancy factor of 4/3; if it is applied to image compression straightforwardly, the encoding bit-rate may increase to meet a given distortion. This fact has hindered the development of CT-based image compression techniques with satisfactory performance. In this paper, we analyze the distribution of significant contourlet coefficients in different subbands and propose a new contourlet-based embedded image coding (CEIC) scheme for low bit-rates. Well-known wavelet-based embedded image coding (WEIC) algorithms such as EZW, SPIHT and SPECK can be easily integrated into the proposed scheme by constructing a virtual low-frequency subband, modifying the coding framework of the WEIC algorithms according to the structure of contourlet coefficients, and adopting a high-efficiency significant-coefficient scanning scheme. The proposed CEIC scheme provides an embedded bit-stream, which is desirable in heterogeneous networks. Our experiments demonstrate that the proposed scheme achieves better compression performance at low bit-rates. Furthermore, thanks to the contourlet transform adopted in the proposed scheme, more contours and textures in the coded images are preserved, ensuring superior subjective quality.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-detecting and error-correcting circuit, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
Errors in thermochromic liquid crystal thermometry
NASA Astrophysics Data System (ADS)
Wiberg, Roland; Lior, Noam
2004-09-01
This article experimentally investigates and assesses the errors that may be incurred in the hue-based thermochromic liquid crystal (TLC) method, and their causes. The errors include response time, hysteresis, aging, surrounding illumination disturbance, direct illumination and viewing angle, amount of light into the camera, TLC thickness, digital resolution of the image conversion system, and measurement noise. Some of the main conclusions are that: (1) the 3×8-bit digital representation of the red, green, and blue TLC color values produces a temperature measurement error of typically 1% of the TLC effective temperature range, (2) an eight-fold variation of the light intensity into the camera produced variations that were not discernible from the digital resolution error, (3) the measured temperature depends on the TLC film thickness, and (4) thicker films are less susceptible to aging and thickness nonuniformities.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit-error and frame-error performance. The outer code decoder helps the inner turbo code decoder terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. If the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
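The decoder interaction described above is essentially an early-stopping loop. A control-flow sketch, with hypothetical callables standing in for the real decoders (`inner_iterate` for one turbo iteration producing soft outputs, `outer_decode` for the Reed-Solomon attempt; neither name comes from the paper):

```python
def concatenated_decode(received, inner_iterate, outer_decode, max_iters: int):
    """Run inner turbo iterations, hand soft outputs to the outer decoder
    after each one, and stop as soon as the outer decode succeeds."""
    soft = None
    for it in range(1, max_iters + 1):
        soft = inner_iterate(received, soft)   # one turbo iteration
        ok, data = outer_decode(soft)          # reliability-based RS decode
        if ok:
            return data, it                    # early termination
    return None, max_iters                     # gave up at the preset cap
```

The early return is what reduces decoding delay: the inner decoder never runs more iterations than the outer code needs.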
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1995-01-01
This report focuses on the results obtained during the PI's recent sabbatical leave at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, from January 1, 1995 through June 30, 1995. Two projects investigated various properties of TURBO codes, a new form of concatenated coding that achieves near-channel-capacity performance at moderate bit error rates. The performance of TURBO codes is explained in terms of the code's distance spectrum. These results explain both the near-capacity performance of TURBO codes and the observed 'error floor' at moderate and high signal-to-noise ratios (SNRs). A semester project, entitled 'The Realization of the Turbo-Coding System,' involved a thorough simulation study of the performance of TURBO codes and verified the results claimed by previous authors. A copy of the final report for this project is included as Appendix A. A diploma project, entitled 'On the Free Distance of Turbo Codes and Related Product Codes,' includes an analysis of TURBO codes and an explanation for their remarkable performance. A copy of the final report for this project is included as Appendix B.
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
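The "how long must an error-free simulation run" question raised above has a closed-form answer in the zero-error binomial case; a sketch of that textbook bound (the function name is mine, and this is only the zero-error special case, not the paper's full confidence-interval machinery):

```python
import math

def error_free_run_length(cwer_target: float, confidence: float) -> int:
    """Number of consecutively error-free codewords needed to certify
    CWER <= cwer_target with the given confidence. With zero observed
    errors the binomial bound reduces to solving
    (1 - p)^N <= 1 - confidence for N."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - cwer_target))

n = error_free_run_length(1e-6, 0.95)
# close to the "rule of three": N is roughly 3/p for 95% confidence
```

For small target CWER the result approaches -ln(1 - confidence)/p, i.e. about 3/p at 95% confidence.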
CAMAC based 4-channel 12-bit digitizer
NASA Astrophysics Data System (ADS)
Srivastava, Amit K.; Sharma, Atish; Raval, Tushar; Reddy, D. Chenna
2010-02-01
With developments in fusion research, a large number of diagnostics are being used to understand the complex behaviour of plasma. During a discharge, several diagnostics demand a high sampling rate and high bit resolution to capture rapid changes in plasma parameters. For such fast diagnostics, a 4-channel simultaneous-sampling, high-speed, 12-bit CAMAC digitizer has been designed and developed, with several important features for CAMAC-based nuclear instrumentation. The module has an independent ADC per channel for simultaneous sampling and digitization, and 512 Ksamples of RAM per channel for on-board storage. The digitizer has been designed for event-based acquisition, and the acquisition window provides post-trigger as well as pre-trigger (software-selectable) data useful for analysis. It is a transient digitizer and can be operated either in pre/post-trigger mode or in burst mode. The record mode and the active memory size are selected through software commands to suit the current application. The module can acquire data at a high sampling rate for short-duration discharges, e.g. 512 ms at 1 MSPS, or for long-duration discharges at a low sampling rate, e.g. 512 s at 1 kSPS. This paper describes the design of the digitizer module, the development of VHDL code for the hardware logic, the graphical user interface (GUI), and important features of the module from an application point of view. The digitizer has CPLD-based hardware logic, which provides flexibility in configuring the module for different sampling rates and different pre/post-trigger samples through the GUI. It can be operated with either an internal (testing/acquisition) or external (synchronized acquisition) clock and trigger. The digitizer has differential inputs with a bipolar input range of ±5 V; it is being used at a sampling rate of 1 MSample per second (MSPS) per channel but supports sampling rates up to 3 MSPS per channel.
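The record-length figures quoted above follow directly from the per-channel memory depth divided by the sampling rate; a quick check (assuming, as the quoted 512 ms implies, that "Ksample" here means 1000 samples):

```python
def acquisition_seconds(depth_samples: int, sample_rate_hz: float) -> float:
    """Record length = on-board memory depth / sampling rate."""
    return depth_samples / sample_rate_hz

DEPTH = 512_000  # 512 Ksamples of RAM per channel
assert acquisition_seconds(DEPTH, 1_000_000) == 0.512  # 512 ms at 1 MSPS
assert acquisition_seconds(DEPTH, 1_000) == 512.0      # 512 s at 1 kSPS
```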
TriBITS (Tribal Build, Integrate, and Test System)
2013-05-16
TriBITS is a configuration, build, test, and reporting system that uses the Kitware open-source CMake/CTest/CDash system. TriBITS contains a number of custom CMake/CTest scripts and python scripts that extend the functionality of the out-of-the-box CMake/CTest/CDash system.
8-, 16-, and 32-Bit Processors: Characteristics and Appropriate Applications.
ERIC Educational Resources Information Center
Williams, James G.
1984-01-01
Defines and describes the components and functions that constitute a microcomputer--bits, bytes, address register, cycle time, data path, and bus. Characteristics of 8-, 16-, and 32-bit machines are explained in detail, and microprocessor evolution, architecture, and implementation are discussed. Application characteristics or types for each bit…
Report on ignitability testing of ''no-flow'' push bit
Witwer, K.S.
1997-04-23
Testing was done to determine whether an ignition occurs during a sixty-foot drop of a Universal Sampler onto a push-mode bit in a flammable gas environment. Ten drops each of the sampler, using both push-mode and rotary-mode inserts, onto a push-mode bit were completed. No ignition occurred during any of the drops.
NASA Astrophysics Data System (ADS)
Krupinski, Elizabeth A.; Siddiqui, Khan; Siegel, Eliot; Shrestha, Rasu; Grant, Edward; Roehrig, Hans; Fan, Jiahua
2007-03-01
Monochrome monitors typically display 8 bits of data (256 shades of gray) at one time. This study determined whether monitors that can display a wider range of grayscale information (11-bit) improve observer performance and decrease the use of window/level in detecting pulmonary nodules. Three sites participated, using 8-bit and 11-bit displays from three manufacturers. At each site, six radiologists reviewed 100 DR chest images on both displays. There was no significant difference in ROC Az (F = 0.0374, p = 0.8491) as a function of 8 vs 11 bit-depth: the average Az across all observers was 0.8284 with 8 bits and 0.8253 with 11 bits. There was a significant difference in overall viewing time (F = 10.209, p = 0.0014) favoring the 11-bit displays. Window/level use did not differ significantly between the two types of displays. Eye-position recording on a subset of images at one site showed that cumulative dwell times for each decision category were lower with the 11-bit than with the 8-bit display. T-tests for paired observations showed that the differences for TP (t = 1.452, p = 0.1507), FN (t = 0.050, p = 0.9609), and FP (t = 0.042, p = 0.9676) decisions were not statistically significant; the difference for TN decisions was (t = 1.926, p = 0.05). 8-bit displays will not negatively impact diagnostic accuracy, but 11-bit displays may improve workflow efficiency.
Bounds on achievable accuracy in analog optical linear-algebra processors
NASA Astrophysics Data System (ADS)
Batsell, Stephen G.; Walkup, John F.; Krile, Thomas F.
1990-07-01
Upper and lower bounds on the number of bits of accuracy achievable are determined by applying a second-order statistical model to the linear algebra processor. The use of bounds was found necessary due to the strong signal dependence of the noise at the output of the optical linear algebra processor (OLAP). 1. ACCURACY BOUNDS. One of the limiting factors in applying OLAPs to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective, which would include noise generated in the multiplication and addition operations, spatial variations across arrays, and crosstalk. We have previously examined these noise sources and determined a general model for the output noise mean and variance. The model demonstrates a strong signal dependency in the noise at the output of the processor, which has been confirmed by our experiments. We define accuracy similarly to its definition for an analog signal input to an analog-to-digital (A/D) converter. The number of bits of accuracy achievable is related to the log (base 2) of the number of separable levels at the A/D converter output. The number of separable levels is found by dividing the dynamic range by m times the standard deviation of the signal σ. Here m determines the error rate in the A/D conversion. The dynamic range can be expressed as the
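The accuracy definition in the abstract reduces to a one-line formula: bits of accuracy = log2(dynamic range / (m·σ)). A sketch (the numbers in the example are illustrative, not taken from the paper):

```python
import math

def bits_of_accuracy(dynamic_range: float, sigma: float, m: float) -> float:
    """Achievable bits ~ log2(number of separable output levels), where the
    level count is the dynamic range divided by m standard deviations of
    the output noise."""
    return math.log2(dynamic_range / (m * sigma))

# e.g. a 1000:1 dynamic range with noise sigma = 1 level and m = 4
# gives 250 separable levels, i.e. just under 8 bits
bits_of_accuracy(1000.0, 1.0, 4.0)
```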
Multiple-Particle Interference and Quantum Error Correction
NASA Astrophysics Data System (ADS)
Steane, Andrew
1996-11-01
The concept of multiple-particle interference is discussed, using insights provided by the classical theory of error-correcting codes. This leads to a discussion of error correction in a quantum communication channel or a quantum computer. Methods of error correction in the quantum regime are presented, and their limitations assessed. A quantum channel can recover from arbitrary decoherence of x qubits if K bits of quantum information are encoded using n quantum bits, where K/n can be greater than 1 - 2H(2x/n), but must be less than 1 - 2H(x/n). This implies exponential reduction of decoherence with only a polynomial increase in the computing resources required. Therefore quantum computation can be made free of errors in the presence of physically realistic levels of decoherence. The methods also allow isolation of quantum communication from noise and eavesdropping (quantum privacy amplification).
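The K/n bounds quoted above need only the binary entropy function H; a small numerical sketch (the example values n = 100, x = 1 are mine, chosen for illustration):

```python
import math

def h2(p: float) -> float:
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_bounds(x: int, n: int) -> tuple[float, float]:
    """Achievable and necessary bounds on K/n for recovering from
    decoherence of x qubits out of n: K/n can exceed 1 - 2*H(2x/n)
    but must stay below 1 - 2*H(x/n)."""
    return 1 - 2 * h2(2 * x / n), 1 - 2 * h2(x / n)

lower, upper = rate_bounds(1, 100)  # correcting 1 qubit in 100
```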
Single Abrikosov vortices as quantized information bits
Golod, T.; Iovan, A.; Krasnov, V. M.
2015-01-01
Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logic. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for the creation of high-density digital cryoelectronics. In this work we provide a proof of concept for an Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out, using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between the 0 and 1 states, a short access time, scalability to nm sizes, and an extremely low write energy. Non-volatility and perfect reproducibility are inherent to such a device due to the quantized nature of the vortex. PMID:26456592
Continuous chain bit with downhole cycling capability
Ritter, Don F.; St. Clair, Jack A.; Togami, Henry K.
1983-01-01
A continuous chain bit for hard rock drilling is capable of downhole cycling. A drill head assembly moves axially relative to a support body while the chain on the head assembly is held in position so that the bodily movement of the chain cycles the chain to present new composite links for drilling. A pair of spring fingers on opposite sides of the chain hold the chain against movement. The chain is held in tension by a spring-biased tensioning bar. A head at the working end of the chain supports the working links. The chain is centered by a reversing pawl and piston actuated by the pressure of the drilling mud. Detent pins lock the head assembly with respect to the support body and are also operated by the drilling mud pressure. A restricted nozzle with a divergent outlet sprays drilling mud into the cavity to remove debris. Indication of the centered position of the chain is provided by noting a low pressure reading indicating proper alignment of drilling mud slots on the links with the corresponding feed branches.
Multiple-Bit Differential Detection of OQPSK
NASA Technical Reports Server (NTRS)
Simon, Marvin
2005-01-01
A multiple-bit differential-detection method has been proposed for the reception of radio signals modulated with offset quadrature phase-shift keying (offset QPSK or OQPSK). The method is also applicable to other spectrally efficient offset quadrature modulations. This method is based partly on the same principles as those of a multiple-symbol differential-detection method for M-ary QPSK, which includes QPSK (that is, non-offset QPSK) as a special case. That method was introduced more than a decade ago by the author of the present method as a means of improving performance relative to a traditional (two-symbol observation) differential-detection scheme. Instead of symbol-by-symbol detection, both that method and the present one are based on a concept of maximum-likelihood sequence estimation (MLSE). As applied to the modulations in question, MLSE involves consideration of (1) all possible binary data sequences that could have been received during an observation time of some number, N, of symbol periods and (2) selection of the sequence that yields the best match to the noise-corrupted signal received during that time. The performance of the prior method was shown to range from that of traditional differential detection for short observation times (small N) to that of ideal coherent detection (with differential encoding) for long observation times (large N).
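The MLSE idea in the abstract, searching all candidate sequences over an N-symbol window and keeping the best match to the noisy observation, can be sketched for plain BPSK. This is a deliberately simplified stand-in for illustration only: real OQPSK multiple-bit differential detection must additionally account for the quadrature offset and differential encoding, which this toy omits:

```python
from itertools import product

def mlse_bpsk(received: list[float]) -> tuple[int, ...]:
    """Brute-force MLSE sketch: over all 2^N binary sequences, pick the
    one whose ideal +/-1 waveform is closest (least squares) to the
    received samples."""
    n = len(received)
    best, best_cost = None, float("inf")
    for bits in product((0, 1), repeat=n):
        cost = sum((r - (2 * b - 1)) ** 2 for r, b in zip(received, bits))
        if cost < best_cost:
            best, best_cost = bits, cost
    return best

mlse_bpsk([0.9, -1.2, 0.3, -0.1])  # -> (1, 0, 1, 0)
```

With no channel memory the search degenerates to per-sample sign decisions; the benefit of sequence estimation appears once symbols are coupled, as in differential encoding.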
NASA Astrophysics Data System (ADS)
Li, Hui; Moser, Philip; Wolf, Philip; Larisch, Gunter; Frasunkiewicz, Leszek; Dems, Maciej; Czyszanowski, Tomasz; Lott, James A.; Bimberg, Dieter
2014-02-01
Via experimental results supported by numerical modeling, we report the energy efficiency, bit rate, and modal properties of GaAs-based 980 nm vertical-cavity surface-emitting lasers (VCSELs). Using our newly established principles for the design and operation of energy-efficient VCSELs, as reported in the invited paper by Moser et al. (SPIE 9001-02) [1], along with our high-bit-rate 980 nm VCSEL epitaxial designs, which include a relatively large etalon-to-quantum-well gain-peak wavelength detuning of about 15 nm, we demonstrate record error-free (bit error ratio below 10^-12) data transmission at 38, 40, and 42 Gbit/s at 85, 75, and 25 °C, respectively. At 38 Gbit/s in a back-to-back test configuration from 45 to 85 °C we demonstrate a record-low and highly stable dissipated energy of only ~179 to 177 fJ per transmitted bit. We conclude that our 980 nm VCSELs are especially well suited for very-short-reach and ultra-short-reach optical interconnects, where the data transmission distances are about 1 m or less and about 10 mm or less, respectively.
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead parameter estimation method for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method incurs negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
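EVM, the figure of merit used above for the coarse parameter search, is simply the RMS error vector normalized to the RMS power of the ideal constellation. A sketch with a hypothetical QPSK example (the sample points are invented for illustration):

```python
import math

def evm_percent(received, ideal) -> float:
    """RMS error vector magnitude, normalized to the RMS power of the
    ideal constellation points, in percent."""
    err = sum(abs(r - i) ** 2 for r, i in zip(received, ideal))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 100.0 * math.sqrt(err / ref)

ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]            # QPSK corners
rx = [1.1 + 0.9j, -0.95 + 1j, -1 - 1.05j, 0.9 - 1j]   # distorted samples
evm_percent(rx, ideal)  # a few percent for this mild distortion
```

Minimizing this scalar over candidate DFBP parameters is what makes it usable as a low-overhead optimization target.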
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. The algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to user application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, and then determines whether, as desired, the SUT detects the errors and recovers.
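The first algorithm, intercepting device reads and ANDing the data with a device-specific mask, can be sketched as follows (the wrapper pattern, names, and toy register file are mine, not from the NTRS record):

```python
def read_with_injected_error(device_read, error_mask: int):
    """Wrap a hardware-read function so the BIT routine sees data ANDed
    with a device-specific mask, simulating stuck-at-zero bits without
    touching the hardware."""
    def wrapped(address: int) -> int:
        return device_read(address) & error_mask
    return wrapped

# toy register file; the mask clears bit 7 to mimic a stuck data line
regs = {0x10: 0xFF, 0x11: 0x80}
faulty_read = read_with_injected_error(lambda a: regs[a], 0x7F)
assert faulty_read(0x10) == 0x7F   # bit 7 forced low
assert faulty_read(0x11) == 0x00   # only bit 7 was set, now reads zero
```

A BIT routine exercised through `faulty_read` should flag these unexpected values, which is exactly what the test harness verifies.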
Zender, Charles S.
2016-09-19
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic
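The alternating shave/set operation can be sketched directly on the 23-bit significand of IEEE single-precision values. This is a minimal illustration of the bit manipulation only: the `keep_mantissa_bits` parameter is my own framing, whereas the paper specifies the precision to retain in decimal digits and maps that to mantissa bits internally:

```python
import struct

def bit_groom(values, keep_mantissa_bits: int):
    """Alternately zero ("shave") and one ("set") the trailing mantissa
    bits of consecutive float32 values, leaving keep_mantissa_bits of
    the 23-bit significand intact."""
    drop = 23 - keep_mantissa_bits
    shave_mask = ~((1 << drop) - 1) & 0xFFFFFFFF
    set_mask = (1 << drop) - 1
    out = []
    for i, v in enumerate(values):
        bits = struct.unpack("<I", struct.pack("<f", v))[0]
        bits = bits & shave_mask if i % 2 == 0 else bits | set_mask
        out.append(struct.unpack("<f", struct.pack("<I", bits))[0])
    return out

groomed = bit_groom([3.14159265, 2.71828182, 1.41421356], keep_mantissa_bits=8)
# trailing mantissa bits are now all-zero / all-one, so a DEFLATE-style
# lossless coder compresses the array far better than the original
```

Alternating the two directions is what removes the systematic low bias that pure Bit Shaving introduces into array means.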
Microdensitometer errors: Their effect on photometric data reduction
NASA Technical Reports Server (NTRS)
Bozyan, E. P.; Opal, C. B.
1984-01-01
The performance of densitometers used for photometric data reduction of high-dynamic-range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog-to-digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic, algorithm-dependent errors in the results. Strategies to improve densitometer performance are suggested.
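Inoperative (stuck) converter bits of the kind these histograms reveal can also be detected by simple OR/AND-style aggregation over a long scan; a sketch (the function name and 12-bit example are mine):

```python
def stuck_bits(samples, width: int = 12):
    """Find inoperative converter bits from a long run of digitized codes:
    a bit position that is never 1 is stuck low; never 0, stuck high."""
    full = (1 << width) - 1
    ones = zeros = 0
    for s in samples:
        ones |= s                 # accumulate positions ever seen as 1
        zeros |= (~s) & full      # accumulate positions ever seen as 0
    return {"stuck_low": full & ~ones, "stuck_high": full & ~zeros}

# bit 3 forced low in every code -> reported in the stuck_low mask
codes = [c & ~0x8 for c in range(4096)]
stuck_bits(codes)["stuck_low"]  # -> 8 (i.e. mask 0b1000)
```

A stuck bit shows up in the density histogram as periodic empty (or doubled) codes; the mask test above is the direct bit-level counterpart.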
Suboptimal quantum-error-correcting procedure based on semidefinite programming
Yamamoto, Naoki; Hara, Shinji; Tsumura, Koji
2005-02-01
In this paper, we consider a simplified error-correcting problem: for a fixed encoding process, find a cascade-connected quantum channel such that the worst-case fidelity between the input and the output is maximized. Using a one-to-one parametrization of quantum channels, we propose a procedure that finds a suboptimal error-correcting channel via semidefinite programming. The effectiveness of our method is verified with an example of bit-flip channel decoding.
A 14-bit 40-MHz analog front end for CCD application
NASA Astrophysics Data System (ADS)
Jingyu, Wang; Zhangming, Zhu; Shubin, Liu
2016-06-01
A 14-bit, 40-MHz analog front end (AFE) for CCD scanners is analyzed and designed. The proposed system incorporates a digitally controlled wideband variable-gain amplifier (VGA) with nearly 42 dB of gain range, a correlated double sampler (CDS) with programmable gain, a 14-bit analog-to-digital converter (ADC), and a programmable timing core. To achieve the maximum dynamic range, the VGA can linearly amplify the input signal over a gain range from -1.08 to 41.06 dB in 6.02 dB steps with a constant bandwidth. A novel CDS extracts the image information from the noise and further amplifies the signal accurately over a gain range from 0 to 18 dB in 0.035 dB steps. A 14-bit ADC quantizes the analog signal, with optimizations for power and linearity. An internal timing core provides flexible timing for the CCD arrays, the CDS, and the ADC. The proposed AFE was fabricated in the SMIC 0.18 μm CMOS process. The whole circuit occupies an active area of 2.8 × 4.8 mm2 and consumes 360 mW. With a 6.069 MHz input signal and a 40 MHz sampling frequency, the signal-to-noise-and-distortion ratio (SNDR) is 70.3 dB and the effective number of bits is 11.39. Project supported by the National Natural Science Foundation of China (Nos. 61234002, 61322405, 61306044, 61376033), the National High-Tech Program of China (No. 2013AA014103), and the Opening Project of Science and Technology on Reliability Physics and Application Technology of Electronic Component Laboratory (No. ZHD201302).
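The reported 11.39 effective bits follows from the standard ENOB formula applied to the measured SNDR:

```python
def enob(sndr_db: float) -> float:
    """Effective number of bits from measured SNDR:
    ENOB = (SNDR - 1.76 dB) / 6.02 dB per bit."""
    return (sndr_db - 1.76) / 6.02

enob(70.3)  # -> ~11.39, matching the reported figure
```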
NASA Astrophysics Data System (ADS)
Garai, Sisir Kumar
2011-02-01
An optical data comparator is an essential part of the arithmetic and logic unit of any optical data processor; it serves as a building block in larger optical circuits and as an optical switch in all-optical header processing and optical-packet-switching-based all-optical telecommunication systems. In this article the author proposes a method of developing an all-optical single-bit comparator unit, and subsequently extends the proposal to an n-bit comparator, exploiting the nonlinear rotation of the state of polarization of the probe beam in a semiconductor optical amplifier (SOA). The data to be compared are frequency encoded/decoded throughout the communication. The major advantage of frequency encoding over conventional techniques is that, because frequency is a fundamental property of a signal, it preserves its identity throughout optical-signal communication and minimizes the probability of bit errors. For frequency routing, an optical add/drop multiplexer (ADM) is used, which not only routes the pump beams properly but also amplifies them efficiently. The switching speeds of the MZI-SOA switch and other SOA-based switches are very fast, with good on-off contrast ratios; as a result, very fast optical data comparator operation is possible.
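The logic a single-bit magnitude comparator realizes (A>B = A AND NOT B, A<B = NOT A AND B, A=B = XNOR), cascaded MSB-first into an n-bit unit, can be sketched in software. This is only the Boolean skeleton the article implements optically, not a model of the SOA polarization-rotation mechanism:

```python
def compare_bit(a: int, b: int) -> str:
    """Single-bit magnitude comparator: A>B = A AND NOT B,
    A<B = NOT A AND B, A=B = XNOR(A, B)."""
    if a & ~b & 1:
        return ">"
    if ~a & b & 1:
        return "<"
    return "="

def compare_nbit(a: str, b: str) -> str:
    """Cascade the 1-bit cell from MSB to LSB: the first unequal
    bit position decides the comparison."""
    for x, y in zip(a, b):
        r = compare_bit(int(x), int(y))
        if r != "=":
            return r
    return "="

compare_nbit("1011", "1010")  # -> ">"
```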
Error analysis of real time and post processed orbit determination of GFO using GPS tracking
NASA Technical Reports Server (NTRS)
Schreiner, William S.
1991-01-01
The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real time (operational) and post processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.
Reducing Bits in Electrodeposition Process of Commercial Vehicle - A Case Study
NASA Astrophysics Data System (ADS)
Rahim, Nabiilah Ab; Hamedon, Zamzuri; Mohd Turan, Faiz; Iskandar, Ismed
2016-02-01
The painting process is critical in commercial vehicle manufacturing, for both protection and decoration. Good quality of the painted body is important to reduce repair cost and achieve customer satisfaction. To achieve good quality, it is important to reduce defects at the first step of the painting process, the electrodeposition process. The Pareto graph and the cause-and-effect diagram from the seven QC tools are utilized to reduce electrodeposition defects. The main defects in the electrodeposition process in this case study are bits, 55% of which are iron filings. The iron filings, which come from the metal assembly process at the body shop, are minimised by controlling the spot-welding parameters, defect control, and a standard body-cleaning process. However, some iron filings remain on the body and are carried over to the paint shop, where they settle inside the dipping tank and are removed by a filtration system and magnetic separation. The implementation of the filtration system and magnetic separation reduced bits by 27% and sanding man-hours by 42%, with a total saving of RM38.00 per unit.
Seismic Investigations of the Zagros-Bitlis Thrust Zone
NASA Astrophysics Data System (ADS)
Gritto, R.; Sibol, M.; Caron, P.; Quigley, K.; Ghalib, H.; Chen, Y.
2009-05-01
We present results of crustal studies obtained with seismic data from the Northern Iraq Seismic Network (NISN). NISN has operated 10 broadband stations in north-eastern Iraq since late 2005. At present, over 800 GB of seismic waveform data have been analyzed. The aim of the present study is to derive models of the local and regional crustal structure of north and north-eastern Iraq, including the northern extension of the Zagros collision zone. This goal is, in part, achieved by estimating local and regional seismic velocity models using receiver-function and surface-wave dispersion analyses and by using these velocity models to obtain accurate hypocenter locations and event focal mechanisms. Our analysis of hypocenter locations produces a clear picture of the seismicity associated with the tectonics of the region. The largest seismicity rate is confined to the active northern section of the Zagros thrust zone, while it decreases towards the southern end, before the intensity increases again in the Bandar Abbas region. Additionally, the rift zones in the Red Sea and the Gulf of Aden are clearly demarcated by high seismicity rates. Our analysis of waveform data indicates clear propagation paths from the west or south-west across the Arabian shield as well as from the north and east into NISN. Phases including Pn, Pg, Sn, Lg, as well as LR are clearly observed on these seismograms. In contrast, blockage or attenuation of Pg and Sg-wave energy is observed for propagation paths across the Zagros-Bitlis zone from the south, while Pn and Sn phases are not affected. These findings support earlier tectonic models that suggested the existence of multiple parallel listric faults splitting off the main Zagros fault zone in an east-west direction. These faults appear to attenuate the crustal phases while the refracted phases, propagating across the mantle lid, remain unaffected. We will present surface wave analysis in support of these findings, indicating multi
Development of a near-bit MWD system. Quarterly report, October--December, 1994
McDonald, W.J.; Pittard, G.T.
1995-05-01
As horizontal drilling and completion technology has improved through evolution, the length of the horizontal sections has grown longer and the need for more accurate directional placement has become more critical. The reliance on examining formation conditions and borehole directional data some 50 to 80 feet above the bit becomes less acceptable as turning radii decrease and target sands become thinner. The project objective is to develop a measurements-while-drilling module that can reliably provide real-time reports of drilling conditions at the bit. The module is to support multiple types of sensors and to sample and encode their outputs in digital form under microprocessor control. The assembled message will then be electronically transmitted along the drill string back to a standard mud-pulse or EM-MWD tool for data integration and relay to the surface. The development effort will consist of reconfiguring the AccuNav® EM-MWD Directional System manufactured by Guided Boring Systems, Inc. of Houston, Texas for near-bit operation, followed by the inclusion of additional sensor types (e.g., natural gamma ray, formation resistivity, etc.) in Phase 2. The near-bit MWD prototype fabrication was completed and the system assembled and calibrated. The unit was then subjected to vibration and shock testing for a period in excess of 200 hours. In addition, the unit was completely disassembled and inspected at the conclusion of the reliability tests to assess damage or wear. No fall-off in performance or damage to the electronics or battery pack was found. The performance of the telemetry link was also assessed. The tests demonstrated the ability to transmit and receive error-free data over a transmitter-to-receiver separation distance of 100 feet for both liquid-filled and dry boreholes.
Overcoming erasure errors with multilevel systems
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Wen, Jianming; Jiang, Liang
2017-01-01
We investigate the use of highly efficient error-correcting codes for multilevel systems to protect encoded quantum information from erasure errors, and their implementation to repetitively correct these errors. Our scheme makes use of quantum polynomial codes to encode quantum information and generalizes teleportation-based error correction for multilevel systems to correct photon losses and operation errors in a fault-tolerant manner. We discuss the application of quantum polynomial codes to one-way quantum repeaters. For various types of operation errors, we identify different parameter regions where quantum polynomial codes can achieve a superior performance compared to qubit-based quantum parity codes.
Optimized entanglement-assisted quantum error correction
Taghavi, Soraya; Brun, Todd A.; Lidar, Daniel A.
2010-10-15
Using convex optimization, we propose entanglement-assisted quantum error-correction procedures that are optimized for given noise channels. We demonstrate through numerical examples that such an optimized error-correction method achieves higher channel fidelities than existing methods. This improved performance, which leads to perfect error correction for a larger class of error channels, can be interpreted in at least some cases in terms of quantum teleportation, but for general channels this interpretation does not hold.
Unconditionally secure bit commitment by transmitting measurement outcomes.
Kent, Adrian
2012-09-28
We propose a new unconditionally secure bit commitment scheme based on Minkowski causality and the properties of quantum information. The receiving party sends a number of randomly chosen Bennett-Brassard 1984 (BB84) qubits to the committer at a given point in space-time. The committer carries out measurements in one of the two BB84 bases, depending on the committed bit value, and transmits the outcomes securely at (or near) light speed in opposite directions to remote agents. These agents unveil the bit by returning the outcomes to adjacent agents of the receiver. The protocol's security relies only on simple properties of quantum information and the impossibility of superluminal signalling.
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
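The exact distribution the abstract describes can be sketched for the easy case it names, Onemax, where fitness is simply the number of ones. The decomposition below (lost ones vs. gained ones under independent flips) is a standard counting argument, not the Krawtchouk-polynomial machinery of the paper; the parameter values are illustrative only.

```python
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n independent Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def onemax_fitness_dist(n, k, p):
    """Exact fitness distribution for Onemax after uniform bit-flip mutation.

    Start from a length-n string with fitness k (k ones). Each bit flips
    independently with probability p. If i of the k ones flip (losing i) and
    j of the n-k zeros flip (gaining j), the new fitness is k - i + j.
    Returns a dict {fitness: probability}; each probability is a polynomial
    in p, consistent with the paper's general claim.
    """
    dist = {}
    for i in range(k + 1):
        pi = binom_pmf(k, i, p)
        for j in range(n - k + 1):
            f = k - i + j
            dist[f] = dist.get(f, 0.0) + pi * binom_pmf(n - k, j, p)
    return dist

dist = onemax_fitness_dist(n=10, k=7, p=0.1)
mean = sum(f * q for f, q in dist.items())
# Expected new fitness is k(1-p) + (n-k)p = 7*0.9 + 3*0.1 = 6.6
```

For MAX-SAT the same polynomial-in-p structure holds, but the coefficients require the landscape-theoretic tools developed in the paper.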
Performance of a phase-conjugate-engine implementing a finite-bit phase correction
Baker, K; Stappaerts, E; Wilks, S; Young, P; Gavel, D; Tucker, J; Silva, D; Olivier, S
2003-10-23
This article examines the achievable Strehl ratio when a finite-bit correction to an aberrated wave-front is implemented. The phase-conjugate-engine (PCE) used to measure the aberrated wavefront consists of a quadrature interferometric wave-front sensor, a liquid-crystal spatial-light-modulator and computer hardware/software to calculate and apply the correction. A finite-bit approximation to the conjugate phase is calculated and applied to the spatial light modulator to remove the aberrations from the optical beam. The experimentally determined Strehl ratio of the corrected beam is compared with analytical expressions for the expected Strehl ratio and shown to be in good agreement with those predictions.
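A minimal sketch of the analytical side of this comparison, assuming the residual phase error after an N-bit correction is uniformly distributed over one quantization step. This is a common idealization, not the paper's exact experimental model of the PCE.

```python
from math import pi, exp, sin

def strehl_quantized_phase(n_bits):
    """Strehl ratio after applying a phase conjugate quantized to n_bits.

    Assumes the residual phase error is uniform over one quantization step
    of width 2*pi / 2**n_bits. Returns both the extended Marechal
    approximation exp(-var) and the exact |E[exp(i*phi)]|**2 for a uniform
    residual, which agree closely once n_bits >= 2.
    """
    step = 2 * pi / 2**n_bits             # phase quantization step (rad)
    var = step**2 / 12                    # variance of uniform residual
    marechal = exp(-var)                  # extended Marechal approximation
    exact = (sin(step / 2) / (step / 2))**2
    return marechal, exact
```

Under this model even a 4-bit correction already yields a Strehl ratio near 0.99, which illustrates why a finite-bit spatial-light-modulator correction can perform close to ideal conjugation.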
An 8-Bit 600-MSps Flash ADC Using Interpolating and Background Self-Calibrating Techniques
NASA Astrophysics Data System (ADS)
Paik, Daehwa; Asada, Yusuke; Miyahara, Masaya; Matsuzawa, Akira
This paper describes a flash ADC using interpolation (IP) and cyclic background self-calibrating techniques. The proposed IP technique, a cascade of capacitor IP and gate IP with a dynamic double-tail latched comparator, reduces non-linearity, power consumption, and occupied area. The cyclic background self-calibrating technique periodically suppresses offset mismatch voltages caused by static fluctuation and by dynamic fluctuation due to temperature and supply-voltage changes. The ADC has been fabricated in 90-nm 1P10M CMOS technology. Experimental results show that the ADC achieves an ENOB of 6.07 bits without calibration and 6.74 bits with calibration for input signals up to 500 MHz at a sampling rate of 600 MSps. It dissipates 98.5 mW from a 1.2-V supply. The FoM is 1.54 pJ/conv.
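The quoted figure of merit can be reproduced from the abstract's own numbers using the conventional Walden FoM, energy per conversion step:

```python
def adc_fom(power_w, enob_bits, fs_hz):
    """Walden figure of merit: FoM = P / (2**ENOB * fs), in J per
    conversion step. The inputs below are the values from the abstract."""
    return power_w / (2**enob_bits * fs_hz)

fom = adc_fom(power_w=98.5e-3, enob_bits=6.74, fs_hz=600e6)
# Close to the reported 1.54 pJ/conv
```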
Iterative rate-distortion optimization of H.264 with constant bit rate constraint.
An, Cheolhong; Nguyen, Truong Q
2008-09-01
In this paper, we apply the primal-dual decomposition and subgradient projection methods to solve the rate-distortion optimization problem with a constant bit rate constraint. The primal decomposition method enables spatial or temporal prediction dependency within a group of pictures (GOP) to be processed in the master primal problem. As a result, we can apply the dual decomposition to independently minimize the Lagrangian cost of all the macroblocks (MBs) using the reference software model of H.264. Furthermore, the optimal Lagrange multiplier lambda* is iteratively derived from the solution of the dual problem. As an example, we derive the optimal bit allocation condition with consideration of the temporal prediction dependency among pictures. Experimental results show that the proposed method achieves better performance than the reference software model of H.264 with rate control.
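The per-macroblock step of such schemes reduces to picking the mode that minimizes the Lagrangian cost J = D + lambda * R. A toy sketch, with made-up mode names, distortions and rates (the paper derives lambda iteratively from the dual problem; here it is just a fixed number):

```python
def best_mode(candidates, lam):
    """Return the coding mode minimizing the Lagrangian cost J = D + lam*R.

    candidates: list of (mode_name, distortion, rate_bits) tuples.
    lam: the Lagrange multiplier trading distortion against rate.
    """
    return min(candidates, key=lambda m: m[1] + lam * m[2])

# Hypothetical per-macroblock candidates: (name, distortion, rate in bits)
modes = [("intra", 40.0, 120), ("inter", 55.0, 40), ("skip", 90.0, 2)]
# Small lambda favors low distortion; large lambda favors low rate.
```

Sweeping lambda moves the choice from "intra" through "inter" to "skip", which is exactly the knob the dual problem tunes to hit the constant bit rate constraint.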
A perceptual-based approach to bit allocation for H.264 encoder
NASA Astrophysics Data System (ADS)
Ou, Tao-Sheng; Huang, Yi-Hsin; Chen, Homer H.
2010-07-01
Since the ultimate receivers of encoded video are human eyes, the characteristics of human visual system should be taken into consideration in the design of bit allocation to improve the perceptual video quality. In this paper, we incorporate the structural similarity index as a distortion metric and propose a novel rate-distortion model to characterize the relationship between rate and the structural similarity index. Based on the model, we develop an optimum bit allocation and rate control scheme for H.264 encoders. Experimental results show that up to 25% bitrate reduction over the JM reference software can be achieved. Subjective evaluation further confirms that the proposed scheme preserves more structural information and improves the perceptual quality of the encoded video.
Two bit optical analog-to-digital converter based on photonic crystals.
Miao, Binglin; Chen, Caihua; Sharkway, Ahmed; Shi, Shouyuan; Prather, Dennis W
2006-08-21
In this paper, we demonstrate a 2-bit optical analog-to-digital (A/D) converter. This converter consists of three cascaded splitters constructed in a self-guiding photonic crystal through the perturbation of the uniform lattice. The A/D conversion is achieved by adjusting splitting ratios of the splitters through changing the degree of perturbation. In this way, output ports reach a state of '1' at different input power levels to generate unique states desired for an A/D converter. To validate this design concept, we first experimentally characterize the relation between the splitting ratio and the degree of lattice perturbation. Based on this understanding, we then fabricate the 2-bit A/D converter and successfully observe four unique states corresponding to different power levels of input analog signal.
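Functionally, the three cascaded splitters act as three comparators whose outputs turn on at increasing input power, i.e. a thermometer code with four distinguishable states. A sketch of that behavior, with illustrative threshold levels rather than the fabricated device's actual splitting ratios:

```python
def two_bit_adc(power_in, thresholds=(0.25, 0.5, 0.75)):
    """Thermometer-style 2-bit quantizer: each 'output port' switches to '1'
    once the input power passes its threshold; the count of ports that are
    on identifies one of four unique states, 0 through 3. Thresholds are
    normalized, hypothetical values."""
    return sum(1 for t in thresholds if power_in >= t)

states = [two_bit_adc(x) for x in (0.1, 0.3, 0.6, 0.9)]  # -> [0, 1, 2, 3]
```

Adjusting the splitting ratios in the device corresponds to moving these thresholds, which is how the four unique states are placed at the desired input power levels.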
A 1.8 V low-power 14-bit 20 Msps ADC with 11.2 ENOB
NASA Astrophysics Data System (ADS)
Hua, Cai
2012-11-01
This paper describes the design of a 14-bit 20 Msps analog-to-digital converter (ADC), implemented in 0.18 μm CMOS technology, achieving 11.2 effective number of bits at Nyquist rate. An improved SHA-less structure and op-amp sharing technique is adopted to significantly reduce the power. The proposed ADC consumes only 166 mW under 1.8 V supply. A fast background calibration is utilized to ensure the overall ADC linearity.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
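The interleaving idea is standard: write codewords into a matrix row by row, transmit column by column, so a burst of consecutive channel errors is spread across many codewords after de-interleaving. A minimal block interleaver (generic, not the paper's specific code parameters):

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row-by-row, read column-by-column.
    After de-interleaving, a burst of consecutive channel errors lands
    in different rows (codewords), so each FEC codeword sees only a few
    errors instead of the whole burst."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse of interleave: recover the original row-major order."""
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

msg = list(range(12))            # stand-in for 3 codewords of 4 symbols
tx = interleave(msg, 3, 4)
# A burst hitting tx[0:3] corrupts original positions 0, 4 and 8:
# one symbol per codeword, which a short FEC code can correct.
```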
Twenty questions about student errors
NASA Astrophysics Data System (ADS)
Fisher, Kathleen M.; Lipson, Joseph Isaac
Errors in science learning (errors in expression of organized, purposeful thought within the domain of science) provide a window through which glimpses of mental functioning can be obtained. Errors are valuable and normal occurrences in the process of learning science. A student can use his/her errors to develop a deeper understanding of a concept as long as the error can be recognized and appropriate, informative feedback can be obtained. A safe, non-threatening, and nonpunitive environment which encourages dialogue helps students to express their conceptions and to risk making errors. Pedagogical methods that systematically address common student errors produce significant gains in student learning. Just as the nature-nurture interaction is integral to the development of living things, so the individual-environment interaction is basic to thought processes. At a minimum, four systems interact: (1) the individual problem solver (who has a worldview, relatively stable cognitive characteristics, relatively malleable mental states and conditions, and aims or intentions), (2) the task to be performed (including the relative importance and nature of the task), (3) the knowledge domain in which the task is contained, and (4) the environment (including orienting conditions and the social and physical context). Several basic assumptions underlie research on errors and alternative conceptions. Among these are: knowledge and thought involve active, constructive processes; there are many ways to acquire, organize, store, retrieve, and think about a given concept or event; and understanding is achieved by successive approximations. Application of these ideas will require a fundamental change in how science is taught.
Preliminary design for a standard 10^7 bit Solid State Memory (SSM)
NASA Technical Reports Server (NTRS)
Hayes, P. J.; Howle, W. M., Jr.; Stermer, R. L., Jr.
1978-01-01
A modular concept with three separate modules, roughly separating bubble domain technology, control logic technology, and power supply technology, was employed. These modules were, respectively, the standard memory module (SMM), the data control unit (DCU), and the power supply module (PSM). The storage medium was provided by bubble domain chips organized into memory cells. These cells and the circuitry for parallel data access to the cells make up the SMM. The DCU provides a flexible serial data interface to the SMM. The PSM provides adequate power to enable one DCU and one SMM to operate simultaneously at the maximum data rate. The SSM was designed to handle asynchronous data rates from dc to 1.024 Mb/s with a bit error rate of less than 1 error in 10^8 bits. Two versions of the SSM, a serial data memory and a dual parallel data memory, were specified using the standard modules. The SSM specification includes requirements for radiation hardness, temperature and mechanical environments, dc magnetic field emission and susceptibility, electromagnetic compatibility, and reliability.
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
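The core of an LRT fusion rule that accounts for channel-induced bit errors can be sketched as follows. Each local one-bit decision is modeled as crossing a binary symmetric channel with crossover probability equal to the average BER, which shifts the detection and false-alarm probabilities seen at the fusion center toward 0.5. This is an illustration of the idea in the abstract, not the paper's exact modulation and reception model; all numbers are hypothetical.

```python
from math import log

def fused_llr(received, pd, pf, ber):
    """Log-likelihood-ratio fusion of noisy one-bit sensor decisions.

    received: bits observed at the fusion center (0/1 per sensor).
    pd, pf:   per-sensor local detection / false-alarm probabilities.
    ber:      crossover probability of the binary symmetric channel.
    Decide H1 when the returned LLR exceeds a threshold (e.g. 0).
    """
    llr = 0.0
    for v, d, f in zip(received, pd, pf):
        p1 = d * (1 - ber) + (1 - d) * ber   # P(bit=1 | H1) after the channel
        p0 = f * (1 - ber) + (1 - f) * ber   # P(bit=1 | H0) after the channel
        llr += log(p1 / p0) if v else log((1 - p1) / (1 - p0))
    return llr

pd = [0.9] * 4
pf = [0.1] * 4
votes = [1, 1, 0, 1]   # three of four sensors vote "target present"
```

Note that as ber approaches 0.5 the two conditional probabilities coincide and each sensor's LLR contribution vanishes, which is the robustness-to-channel-uncertainty behavior the paper analyzes.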
World Oil's 1995 drill bit classifier
1995-09-01
World Oil offers this comprehensive listing of major manufacturers' drilling bits to aid drilling supervisors and engineers in field selection. While this listing has been published annually for several years, changes have been made in this year's tables to reflect modern industry nomenclature. The tables are divided into six formation categories. Within these are listed most available drilling/coring bits by type and manufacturer. To use the listings, identify the formation to be drilled, decide which bit type is appropriate, i.e., roller, fixed cutter, steel tooth, insert, diamond, etc., and choose the manufacturer. Companies were asked to list bit data by: (1) new IADC code, (2) readily available sizes (special sizes are often available on request), (3) recommended WOB in lb/in. diameter, and (4) codes for special features and usage, a combination of new IADC and World Oil special codes, see Nomenclature.
Experimental bit commitment based on quantum communication and special relativity.
Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H
2013-11-01
Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrarily large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented.
Adaptive bit truncation and compensation method for EZW image coding
NASA Astrophysics Data System (ADS)
Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao
2003-09-01
The embedded zero-tree wavelet (EZW) algorithm is widely adopted to compress the wavelet coefficients of images, with the property that the bit stream can be truncated anywhere. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, and can therefore be truncated and left unencoded. Based on experiments, a generalized function, which provides a guide for the EZW encoder to intelligently decide the number of low bit planes to truncate, is deduced in this paper. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; it surprisingly enhances the quality of the reconstructed image while incurring scarcely any additional cost.
Eight-Bit-Slice GaAs General Processor Circuit
NASA Technical Reports Server (NTRS)
Weissman, John; Gauthier, Robert V.
1989-01-01
Novel GaAs 8-bit slice enables quick and efficient implementation of variety of fast GaAs digital systems ranging from central processing units of computers to special-purpose processors for communications and signal-processing applications. With GaAs 8-bit slice, designers quickly configure and test hearts of many digital systems that demand fast complex arithmetic, fast and sufficient register storage, efficient multiplexing and routing of data words, and ease of control.
8-Bit Gray Scale Images of Fingerprint Image Groups
National Institute of Standards and Technology Data Gateway
NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (PC database for purchase) The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.
Advanced DFM application for automated bit-line pattern dummy
NASA Astrophysics Data System (ADS)
Shin, Tae Hyun; Kim, Cheolkyun; Yang, Hyunjo; Bahr, Mohamed
2016-03-01
This paper presents an automated DFM solution to generate Bit Line Pattern Dummy (BLPD) for memory devices. Dummy shapes are aligned with the memory's functional bit lines to ensure a uniform and reliable memory device. This paper will present a smarter approach that uses an analysis-based technique for adding dummy shapes of different types according to the space available. Experimental results are based on the layout of a mobile dynamic random access memory (DRAM).
Automatic DFM methodology for bit line pattern dummy
NASA Astrophysics Data System (ADS)
Bahr, Mohamed
2015-03-01
This paper presents an automated DFM solution to generate Bit Line Pattern Dummy (BLPD) for memory chips. Dummy shapes are aligned with the memory's functional bit lines to ensure a uniform and reliable memory device. This paper will present a smarter approach that uses an analysis-based technique for adding dummy fill shapes of different types according to the space available. Experimental results are based on the layout of a memory test chip.
Proper bit selection improves ROP in coiled tubing drilling
King, W.W.
1994-04-18
Using the correct type of bit can improve the rate of penetration (ROP) and therefore the economics of coiled tubing drilling operations. Key findings, based on studies of the coiled tubing jobs to date, are that the drilling system must be analyzed as a whole and that both the drill bit type and the formation compressive strength are critical components in this analysis. Once a candidate job has been qualified technically for drilling with coiled tubing, the job will have to be justified economically compared to conventional drilling. A key part of the economic analysis is predicting the ROP in each formation to be drilled to establish a drilling time curve. This prediction should be based on the key components of the system, including the following: hydraulics, motor capabilities, weight on bit (WOB), rock compressive strength, and bit type. Expected ROPs should not be based on offset wells drilled with conventional rigs and equipment. Furthermore, a small-diameter bit should not be selected simply by using the International Association of Drilling Contractors (IADC) codes of large-diameter bits used in offset wells. Coiled tubing drilling is described, then key factors in bit selection are discussed.
Performance of multi level error correction in binary holographic memory
NASA Technical Reports Server (NTRS)
Hanan, Jay C.; Chao, Tien-Hsin; Reyes, George F.
2004-01-01
At the Optical Computing Lab of the Jet Propulsion Laboratory (JPL), a binary holographic data storage system was designed and tested with methods of recording and retrieving the binary information. Levels of error correction were introduced to the system, including pixel averaging, thresholding, and parity checks. Errors were artificially introduced into the binary holographic data storage system and were monitored as a function of the defect area fraction, which showed a strong influence on data integrity. Average area fractions exceeding one quarter of the bit area caused unrecoverable errors. Efficient use of the available data density was discussed.
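Of the error-correction levels named in the abstract, the parity check is the simplest to illustrate: an even-parity bit appended per row of the binary data page flags any row with an odd number of flipped bits. A generic sketch, not JPL's actual page format:

```python
def add_row_parity(page):
    """Append an even-parity bit to each row of a binary data page, so that
    every stored row has an even number of ones."""
    return [row + [sum(row) % 2] for row in page]

def rows_with_errors(page_with_parity):
    """Indices of rows whose parity no longer checks, i.e. rows where an
    odd number of bits flipped during holographic readout."""
    return [i for i, row in enumerate(page_with_parity) if sum(row) % 2]

page = add_row_parity([[1, 0, 1], [0, 0, 1]])
page[1][0] ^= 1          # inject a single-bit readout error into row 1
# rows_with_errors(page) -> [1]
```

Parity detects but does not locate the flipped bit within the row, which is why the system combines it with pixel averaging and thresholding before decoding.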
Louvel, Guillaume; Der Sarkissian, Clio; Hanghøj, Kristian; Orlando, Ludovic
2016-11-01
Micro-organisms account for most of the Earth's biodiversity and yet remain largely unknown. The complexity and diversity of microbial communities present in clinical and environmental samples can now be robustly investigated, at record speed and cost, thanks to recent advances in high-throughput DNA sequencing (HTS). Here, we develop metaBIT, an open-source computational pipeline automating routine microbial profiling of shotgun HTS data. Customizable by the user at different stringency levels, it performs robust taxonomy-based assignment and relative-abundance calculation of microbial taxa, as well as cross-sample statistical analyses of microbial diversity distributions. We demonstrate the versatility of metaBIT within a range of published HTS data sets sampled from the environment (soil and seawater) and the human body (skin and gut), but also from archaeological specimens. We present the diversity of outputs provided by the pipeline for the visualization of microbial profiles (barplots, heatmaps) and for their characterization and comparison (diversity indices, hierarchical clustering and principal coordinates analyses). We show that metaBIT allows an automatic, fast and user-friendly profiling of the microbial DNA present in HTS shotgun data sets. The applications of metaBIT are vast, from the monitoring of laboratory errors and contaminations, to the reconstruction of past and present microbiota, and the detection of candidate species, including pathogens.
Robust characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph
2016-04-01
Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
Supporting 64-bit global indices in Epetra and other Trilinos packages :
Jhurani, Chetan; Austin, Travis M.; Heroux, Michael Allen; Willenbring, James Michael
2013-06-01
The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries within an object-oriented framework. It is intended for large-scale, complex multiphysics engineering and scientific applications [2, 4, 3]. Epetra is one of its basic packages. It provides serial and parallel linear algebra capabilities. Before Trilinos version 11.0, released in 2012, Epetra used the C++ int data-type for storing global and local indices for degrees of freedom (DOFs). Since int is typically 32-bit, this limited the largest problem size to approximately two billion DOFs, even if a distributed memory machine could handle larger problems. We have added optional support for the C++ long long data-type, which is at least 64 bits wide, for global indices. To save memory, maintain the speed of memory-bound operations, and reduce further changes to the code, the local indices are still 32-bit. We document the changes required to achieve this feature and how the new functionality can be used. We also report on the lessons learned in modifying a mature and popular package from various perspectives: design goals, backward compatibility, engineering decisions, C++ language features, effects on existing users and other packages, and build integration.
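The underlying arithmetic is easy to demonstrate: a global DOF number above 2^31 - 1 no longer fits in a 32-bit signed integer, while a local index (an offset within one process's block) still does. A sketch of the idea, using ctypes to emulate C++ int wraparound; the offset scheme below is an illustration of the report's global-64/local-32 split, not Epetra's actual API:

```python
import ctypes

def to_int32(x):
    """What a 32-bit C++ 'int' would store: two's-complement wraparound."""
    return ctypes.c_int32(x).value

global_dof = 3_000_000_000       # 3 billion DOFs: fine as a 64-bit global ID
wrapped = to_int32(global_dof)   # wraps to a negative, unusable index

# Global IDs stay 64-bit; local indices are small 32-bit offsets within
# the block of IDs owned by one process (hypothetical ownership range):
offset = 2_999_000_000           # first global ID owned by this process
local = global_dof - offset      # 1_000_000 fits comfortably in 32 bits
```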
Patterned media towards Nano-bit magnetic recording: fabrication and challenges.
Sbiaa, Rachid; Piramanayagam, Seidikkurippu N
2007-01-01
During the past decade, the magnetic recording density of HDDs has doubled almost every 18 months. To keep increasing the recording density, there is a need to make the small bits thermally stable. The most recent method, perpendicular magnetic recording (PMR) media, will reach its limits in a few years' time, and alternatives are sought. Patterned media, where the bits are magnetically separated from each other, offer the possibility of solving many issues encountered by PMR technology. However, implementation of patterned media would involve developing processing methods which offer high resolution (small bits), regular patterns, and high density. All these need to be achieved without sacrificing high throughput and low cost. In this article, we review some of the ideas that have been proposed on this subject. The focus of the paper, however, is on nano-imprint lithography (NIL), as it fulfills most of the needs of HDDs compared to conventional lithography using electron beams, EUV or X-rays. The latest developments in NIL and related technologies and their future prospects for patterned media are also discussed.
Medical image compression using cubic spline interpolation with bit-plane compensation
NASA Astrophysics Data System (ADS)
Truong, Trieu-Kien; Chen, Shi-Huang; Lin, Tsung-Ching
2007-03-01
In this paper, a modified medical image compression algorithm using cubic spline interpolation (CSI) is presented for telemedicine applications. The CSI is developed in order to subsample image data with minimal distortion and to achieve compression. It has been shown in the literature that the CSI can be combined with the JPEG algorithms to develop a modified JPEG codec, which obtains a higher compression ratio and a better reconstructed-image quality than standard JPEG. However, this modified JPEG codec loses some high-frequency components of medical images during compression. To minimize the drawback arising from the loss of these high-frequency components, this paper further applies bit-plane compensation to the modified JPEG codec. The bit-plane compensation algorithm used here is modified from the JBIG2 standard. Experimental results show that the proposed scheme can increase the compression ratio of the original JPEG medical data compression system by 20-30% with similar visual quality. This system can reduce the load on telecommunication networks and is quite suitable for low bit-rate telemedicine applications.
Errors of measurement by laser goniometer
NASA Astrophysics Data System (ADS)
Agapov, Mikhail Y.; Bournashev, Milhail N.
2000-11-01
The report is dedicated to research on systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating for them algorithmically. The OE was of the absolute photoelectric angle encoder type with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a method of cross-calibration with mutual turns of the OE in relation to the DLG base and of the CU in relation to the OE rotor. A Fourier analysis of the observed data was then made. The research on dynamic errors of angle measurement used the dependence, on the angular rate of rotation, of the measured angle between a reference direction assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by means of the OE. The obtained results allow algorithmic compensation of the systematic error and, in total, a considerable reduction of the overall measurement error.
NASA Astrophysics Data System (ADS)
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
Bias and spread in extreme value theory measurements of probability of error
NASA Technical Reports Server (NTRS)
Smith, J. G.
1972-01-01
Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.
Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph; Badescu, Mircea; Iskenderian, Theodore; Sherrit, Stewart; Bao, Xiaoqi; Linderman, Randel
2010-01-01
Long tool bits are undergoing development that can be stowed compactly until used as rock- or ground-penetrating probes actuated by ultrasonic/sonic mechanisms. These bits are designed to be folded or rolled into compact form for transport to exploration sites, where they are to be connected to their ultrasonic/sonic actuation mechanisms and unfolded or unrolled to their full lengths for penetrating ground or rock to relatively large depths. These bits can be designed to acquire rock or soil samples and/or to be equipped with sensors for measuring properties of rock or soil in situ. These bits can also be designed to be withdrawn from the ground, restowed, and transported for reuse at different exploration sites. Apparatuses based on the concept of a probe actuated by an ultrasonic/sonic mechanism have been described in numerous prior NASA Tech Briefs articles, the most recent and relevant being "Ultrasonic/Sonic Impacting Penetrators" (NPO-41666) NASA Tech Briefs, Vol. 32, No. 4 (April 2008), page 58. All of those apparatuses are variations on the basic theme of the earliest ones, denoted ultrasonic/sonic drill corers (USDCs). To recapitulate: An apparatus of this type includes a lightweight, low-power, piezoelectrically driven actuator in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that the size of the axial force needed to make the tool bit advance into soil, rock, or another material of interest is much smaller than in ordinary twist drilling, ordinary hammering, or ordinary steady pushing. Examples of properties that could be measured by use of an instrumented tool bit include electrical conductivity, permittivity, magnetic
A 100 MS/s 9 bit 0.43 mW SAR ADC with custom capacitor array
NASA Astrophysics Data System (ADS)
Jingjing, Wang; Zemin, Feng; Rongjin, Xu; Chixiao, Chen; Fan, Ye; Jun, Xu; Junyan, Ren
2016-05-01
A low power 9 bit 100 MS/s successive approximation register analog-to-digital converter (SAR ADC) with a custom capacitor array is presented. A brand-new 3-D MOM unit capacitor is used as the basic capacitor cell of this capacitor array. The unit capacitor has a capacitance of 1 fF. In addition, the advanced capacitor array structure and switching mode considerably reduce the power consumption. To verify the effectiveness of this low power design, the 9 bit 100 MS/s SAR ADC is implemented in TSMC 1P9M 65 nm LP CMOS technology. The measurement results demonstrate that this design achieves an effective number of bits (ENOB) of 7.4 bit, a signal-to-noise plus distortion ratio (SNDR) of 46.40 dB and a spurious-free dynamic range (SFDR) of 62.31 dB at 100 MS/s with a 1 MHz input. The SAR ADC core occupies an area of 0.030 mm² and consumes 0.43 mW under a supply voltage of 1.2 V. The figure of merit (FOM) of the SAR ADC is 23.75 fJ/conv. Project supported by the National High-Tech Research and Development Program of China (No. 2013AA014101).
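The reported SNDR and FOM are linked by the standard conversions ENOB = (SNDR − 1.76)/6.02 and FOM = P/(2^ENOB · f_s). A quick sanity check of the paper's numbers (a sketch; the small gap to the reported 23.75 fJ/conv. presumably comes from rounding in the reported ENOB):

```python
import math

# Reported measurements from the abstract
sndr_db = 46.40      # dB
power_w = 0.43e-3    # 0.43 mW
fs_hz = 100e6        # 100 MS/s

# Standard conversions for a Nyquist-rate ADC
enob = (sndr_db - 1.76) / 6.02
fom_j = power_w / (2 ** enob * fs_hz)   # Walden figure of merit, J/conversion-step

print(f"ENOB = {enob:.2f} bit, FOM = {fom_j * 1e15:.1f} fJ/conv.")
```

Plugging in the abstract's values gives an ENOB of about 7.4 bit and a FOM in the mid-20 fJ/conv. range, consistent with the reported figures.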
Soft-decision forward error correction for 100 Gb/s digital coherent systems
NASA Astrophysics Data System (ADS)
Onohara, Kiyoshi; Sugihara, Takashi; Miyata, Yoshikuni; Sugihara, Kenya; Kubo, Kazuo; Yoshida, Hideo; Koguchi, Kazuumi; Mizuochi, Takashi
2011-10-01
Soft-decision forward error correction (SD-FEC) and its practical implementation for 100 Gb/s digital coherent systems are discussed. In applying SD-FEC to a digital coherent transponder, the configuration of the frame structure of the FEC becomes a key issue. We present a triple-concatenated FEC, with a pair of concatenated hard-decision FEC (HD-FEC) codes further concatenated with an SD-based low-density parity-check (LDPC) code for 20.5% redundancy. In order to evaluate the error-correcting performance of the SD-based LDPC code, we implemented the entire 100 Gb/s throughput of the LDPC code on a field-programmable gate array (FPGA) based hardware emulator. The proposed triple-concatenated FEC achieves a Q-limit of 6.4 dB, and a net coding gain (NCG) of 10.8 dB at a post-FEC bit error ratio (BER) of 10^-15 is expected. In addition, we raise an important question about the definition of NCG in digital coherent systems with and without differential quadrature phase-shift keying (QPSK) coding, which is generally used to avoid phase slips caused by the practical limitations in processing the phase recovery algorithms.
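The quoted Q-limit, redundancy, and NCG are mutually consistent under the usual definition NCG = Q_ref(BER_out) − Q_limit + 10·log10(R), where Q_ref is the Q-factor an uncoded system would need for the target post-FEC BER and R is the code rate. A sketch checking this, inverting BER = 0.5·erfc(Q/√2) numerically and treating the 20.5% figure as overhead, i.e. R = 1/1.205:

```python
import math

def q_from_ber(ber):
    """Invert BER = 0.5 * erfc(Q / sqrt(2)) for Q by bisection."""
    lo, hi = 0.0, 40.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid      # BER still too high -> need larger Q
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_ref_db = 20 * math.log10(q_from_ber(1e-15))  # uncoded Q for BER = 1e-15
rate = 1 / 1.205                               # 20.5% redundancy
ncg_db = q_ref_db - 6.4 + 10 * math.log10(rate)
print(f"NCG = {ncg_db:.1f} dB")
```

The result lands at approximately 10.8 dB, matching the abstract's NCG claim.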
An adaptive error modeling scheme for the lossless compression of EEG signals.
Sriraam, N; Eswaran, C
2008-09-01
Lossless compression of EEG signals is of great importance for neurological diagnosis, as specialists consider the exact reconstruction of the signal a primary requirement. This paper discusses a lossless compression scheme for EEG signals that involves a predictor and an adaptive error modeling technique. The prediction residues are arranged based on the error count through a histogram computation. Two optimal regions are identified in the histogram plot through a heuristic search such that the bit requirement for encoding the two regions is minimal. Further improvement in the compression is achieved by removing the statistical redundancy present in the residue signal by using a context-based bias cancellation scheme. Three neural network predictors, namely, single-layer perceptron, multilayer perceptron, and Elman network, and two linear predictors, namely, autoregressive model and finite impulse response filter, are considered. Experiments are conducted using EEG signals recorded under different physiological conditions, and the performances of the proposed methods are evaluated in terms of the compression ratio. It is shown that the proposed adaptive error modeling schemes yield better compression results compared with other known compression methods.
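The core idea, a predictor whose residues are split into a cheap-to-code region and an escape region chosen from the residue histogram, can be sketched as follows. This is a toy illustration with a first-order predictor and a hypothetical fixed two-region cost model, not the paper's heuristic search:

```python
import numpy as np

def two_region_cost(residues, bits_small=4, bits_escape=16):
    """Bit cost if residues inside a small region get a short fixed-length
    code and the rest get an escape marker plus a long code (toy model)."""
    residues = np.asarray(residues)
    limit = 1 << (bits_small - 1)             # region 1: |r| < 2^(k-1)
    in_small = np.abs(residues) < limit
    return int(in_small.sum() * bits_small
               + (~in_small).sum() * (bits_small + bits_escape))

# Synthetic "EEG": a random walk; first-order predictor x_hat[n] = x[n-1]
rng = np.random.default_rng(1)
signal = np.cumsum(rng.integers(-3, 4, size=256))
residues = np.diff(signal)                    # prediction residues
cost_bits = two_region_cost(residues)
```

Because the residues of a slowly varying signal concentrate near zero, almost all of them fall in the short-code region, which is exactly why residual coding beats coding the raw samples.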
Bit-array alignment effect of perpendicular SOMA media
NASA Astrophysics Data System (ADS)
Xiao, Peiying; Yuan, Zhimin; Kuan Lee, Hwee; Guo, Guoxiao
2006-08-01
One effective way to overcome the superparamagnetic limit of a magnetic recording system is to reduce the grain number per bit at a given signal-to-noise ratio (SNR) level by using uniform media grains. The self-organized magnetic array (SOMA) is designed to have uniform grains with a perfect grain array structure, so that a sufficiently high SNR is believed achievable with a small number of grains per bit. But in engineering applications, the recorded bit on SOMA media may align with the regular array at different locations and angles due to non-grain-synchronized writing, skew angle, and the circular track. This induces the bit-array alignment effect and degrades the system performance of SOMA media. In this paper, micromagnetic simulation results show that the bit-array alignment effect causes large SNR fluctuations on the same media. SOMA media are therefore not preferred for the conventional recording configuration; they are only suitable for the patterned-media configuration.
A 16-bit sigma-delta ADC applied in micro-machined inertial sensor
NASA Astrophysics Data System (ADS)
Qiang, Li; Xiaowei, Liu
2015-04-01
This paper presents a low-distortion sigma-delta (Σ-Δ) ADC for micro-machined inertial sensors. The design adopts a single-loop, fourth-order low-pass single-bit modulator with feedforward paths, which ensures lossless signal transfer and reduces nonlinearity and power consumption. The chip is manufactured in a standard 0.5 µm CMOS process, and the area is 2.2 mm². The ADC achieves a 108 dB signal-to-noise ratio (SNR) and a 110 dB dynamic range (DR). Total power consumption is less than 15 mW with a 5 V supply.
Single-Bit All Digital Frequency Synthesis Using Homodyne Sigma-Delta Modulation.
Sotiriadis, Paul
2016-10-05
All-digital frequency synthesis using band-pass sigma-delta modulation to achieve a spectrally clean single-bit output is presented and mathematically analyzed, resulting in a complete model that predicts stability and the output spectrum. The quadrature homodyne filter architecture is introduced, enabling efficient implementations of carrier-frequency-centred bandpass filters for the modulator. A multiplier-less version of the quadrature homodyne filter architecture is also introduced to reduce complexity while maintaining a clean in-band spectrum. MATLAB and SIMULINK simulation results demonstrate the potential capabilities of the synthesizer architectures and validate the accuracy of the developed theoretical framework.
Image steganography based on 2^k correction and coherent bit length
NASA Astrophysics Data System (ADS)
Sun, Shuliang; Guo, Yongning
2014-10-01
In this paper, a novel algorithm is proposed. Firstly, the edges of the cover image are detected with the Canny operator, and secret data is embedded in edge pixels. A sorting method is used to randomize the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2^k correction is applied to achieve better imperceptibility in the stego image. The experiments show that the proposed method is better than LSB-3 and Jae-Gil Yu's in PSNR and capacity.
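The 2^k correction step exploits the fact that adding or subtracting 2^k leaves a pixel's k embedded LSBs unchanged, so the stego pixel can be nudged back toward the original value. A minimal sketch (the function name and fixed k are illustrative, not from the paper):

```python
def embed_with_2k_correction(pixel, secret_bits, k):
    """Embed `secret_bits` (an integer < 2**k) into the k LSBs of `pixel`,
    then apply 2^k correction: of the three candidates stego, stego + 2^k,
    stego - 2^k (which all carry the same k LSBs), keep the one closest
    to the original pixel value."""
    stego = (pixel & ~((1 << k) - 1)) | secret_bits
    candidates = [c for c in (stego, stego + (1 << k), stego - (1 << k))
                  if 0 <= c <= 255]
    return min(candidates, key=lambda c: abs(c - pixel))
```

For example, embedding the bits 000 with k = 3 into pixel 103 gives 96 without correction (distortion 7) but 104 with correction (distortion 1), and the embedded LSBs are preserved since 104 mod 8 = 0.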
Causes of wear of PDC bits and ways of improving their wear resistance
NASA Astrophysics Data System (ADS)
Timonin, VV; Smolentsev, AS; Shakhtorin, I. O.; Polushin, NI; Laptev, AI; Kushkhabiev, AS
2017-02-01
The scope of the paper encompasses the basic factors that influence PDC bit efficiency, and feasible ways of eliminating the negative ones are illustrated. The wash fluid flow in a standard bit is modeled, the resultant pattern of bit washing is analyzed, and recommendations are made on modification of the PDC bit design.
Zhang, Ruimao; Lin, Liang; Zhang, Rui; Zuo, Wangmeng; Zhang, Lei
2015-12-01
Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce adjacency consistency, i.e., images of similar appearances should have similar codes. A deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-art methods on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes preserve their discriminative power well at shorter code lengths.
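The triplet objective, pushing the matched pair closer than the mismatched pair in Hamming space by a margin, can be sketched for {-1, +1} codes, where the Hamming distance between codes a and b of length L is (L − a·b)/2. This is a simplified illustration of the loss term only, not the paper's full deep model:

```python
import numpy as np

def triplet_hamming_loss(h_a, h_p, h_n, margin=8.0):
    """Hinge loss margin + d(a, p) - d(a, n) on Hamming distances of
    {-1, +1} codes of length L, where d(x, y) = (L - x.y) / 2."""
    L = len(h_a)
    d_pos = (L - np.dot(h_a, h_p)) / 2.0
    d_neg = (L - np.dot(h_a, h_n)) / 2.0
    return max(0.0, margin + d_pos - d_neg)

anchor = np.ones(16)
positive = anchor.copy()   # identical code: distance 0
negative = -anchor         # opposite code: distance 16
loss = triplet_hamming_loss(anchor, positive, negative)  # satisfied triplet
```

A satisfied triplet (negative far beyond the margin) incurs zero loss; swapping the roles of positive and negative makes the hinge active, which is what drives the codes apart during training.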
Phase-shifting error and its elimination in phase-shifting digital holography.
Guo, Cheng-Shan; Zhang, Li; Wang, Hui-Tian; Liao, Jun; Zhu, Y Y
2002-10-01
We investigate the influence of phase-shifting error on the quality of the reconstructed image in digital holography and propose a method of error elimination for a perfect image. In this method the summation of the intensity bit errors of the reconstructed image is taken as an evaluation function for an iterative algorithm to find the exact phase-shifting value. The feasibility of this method is demonstrated by computer simulation.
Can relativistic bit commitment lead to secure quantum oblivious transfer?
NASA Astrophysics Data System (ADS)
He, Guang Ping
2015-05-01
While unconditionally secure bit commitment (BC) is considered impossible within the quantum framework, it can be obtained under relativistic or experimental constraints. Here we study whether such BC can lead to secure quantum oblivious transfer (QOT). The answer is not completely negative. On the one hand, we provide a detailed cheating strategy, showing that the "honest-but-curious adversaries" in some of the existing no-go proofs on QOT still apply even if secure BC is used, enabling the receiver to increase the average reliability of the decoded value of the transferred bit. On the other hand, it is also found that some other no-go proofs, claiming that a dishonest receiver can always decode all transferred bits simultaneously with 100% reliability, become invalid in this scenario, because their models of cryptographic protocols are too ideal to cover such a BC-based QOT.
Security bound of cheat sensitive quantum bit commitment.
He, Guang Ping
2015-03-23
Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.
Decision-fusion-based automated drill bit toolmark correlator
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Press, Michael J.; Guerci, Joseph R.
1999-02-01
This paper describes a recent study conducted to investigate the reproducibility of toolmarks left by drill bits. This paper focuses on the automated analysis aspect of the study, and particularly the advantages of using decision fusion methods in the comparisons. To enable the study to encompass a large number of samples, existing technology was adapted to the task of automatically comparing the test impressions. Advanced forensic pattern recognition algorithms that had been developed for the comparison of ballistic evidence in the DRUGFIRE™ system were modified for use in this test. The results of the decision fusion architecture closely matched those obtained by expert visual examination. The study, aided by the improved pattern recognition algorithm, showed that drill bit impressions do contain reproducible marks. In a blind test, the DRUGFIRE pattern recognition algorithm, enhanced with the decision fusion architecture, consistently identified the correct bit as the source of the test impressions.
BitCube: A Bottom-Up Cubing Engineering
NASA Astrophysics Data System (ADS)
Ferro, Alfredo; Giugno, Rosalba; Puglisi, Piera Laura; Pulvirenti, Alfredo
Enhancing on-line analytical processing through efficient cube computation plays a key role in data warehouse management. Hashing, grouping and mining techniques are commonly used to improve cube pre-computation. BitCube, a fast cubing method which uses bitmaps as inverted indexes for grouping, is presented. It horizontally partitions data according to the values of one dimension, and for each resulting fragment it performs grouping following bottom-up criteria. BitCube also allows partial materialization based on iceberg conditions to treat large datasets for which a full cube pre-computation is too expensive. The space requirement of the bitmaps is optimized by applying an adaptation of the WAH compression technique. Experimental analysis, on both synthetic and real datasets, shows that BitCube outperforms previous algorithms for full cube computation and is comparable for iceberg cubing.
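The use of bitmaps as inverted indexes for grouping can be illustrated with plain Python integers as bitmasks. This is a toy sketch of the idea only; BitCube itself operates on WAH-compressed bitmaps:

```python
def build_bitmaps(column):
    """Inverted index for one dimension: value -> bitmask with bit i set
    iff row i holds that value."""
    bitmaps = {}
    for row, value in enumerate(column):
        bitmaps[value] = bitmaps.get(value, 0) | (1 << row)
    return bitmaps

region  = build_bitmaps(["EU", "US", "EU", "EU"])
product = build_bitmaps(["A",  "A",  "B",  "A"])

# Grouping on (region, product) = bitwise AND of the per-value bitmaps;
# the group size is the population count of the result.
group_eu_a = region["EU"] & product["A"]
count = bin(group_eu_a).count("1")   # rows 0 and 3
```

Each finer group-by in the bottom-up traversal is just another AND over already-built bitmaps, which is why bitmap grouping is cheap compared with hashing or sorting rows.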
Use of single-cutter data in the analysis of PDC bit designs
Glowka, D.A.
1986-10-10
A method is developed for predicting cutter forces, temperatures, and wear on PDC bits as well as integrated bit performance parameters such as weight-on-bit (WOB), drilling torque, and bit imbalance. A computer code called PDCWEAR has been developed to make this method available as a tool for general bit design. The method uses single-cutter data to provide a measure of rock drillability and employs theoretical considerations to account for interaction among closely spaced cutters on the bit. Experimental data are presented to establish the effects of cutter size and wearflat area on the forces that develop during rock cutting. Waterjet assistance is shown to significantly reduce cutting forces, thereby extending bit life and reducing WOB and torque requirements in hard rock. The effects of bit profile, cutter placement density, bit rotary speed, and wear mode on bit life and drilling performance are investigated. 21 refs., 34 figs., 4 tabs.
Multiple-bit-rate clock recovery circuit: theory
NASA Astrophysics Data System (ADS)
Kaplunenko, V.
1999-11-01
The multiple-bit-rate clock recovery circuit has recently been proposed as a part of a communications packet switch. All packets must be the same length and be preceded by a frequency header, which is a number of consecutive ones (return-to-zero mode). The header is compared with the internal clock, and the result is used to set the output clock frequency. The clock rate is defined by the number of fluxons propagating in a ring oscillator, which is a closed circular Josephson transmission line. The theory gives the bit rate bandwidth as a function of internal clock frequency, header length and silence time (the maximum number of consecutive zeros in the packet).
Micro-electromechanical memory bit based on magnetic repulsion
NASA Astrophysics Data System (ADS)
López-Suárez, Miquel; Neri, Igor
2016-09-01
A bistable micro-mechanical system based on magnetic repulsion is presented, exploring its applicability as a memory unit where the state of the bit is encoded in the rest position of a deflected cantilever. The non-linearity induced on the cantilever can be tuned through the magnetic interaction intensity between the cantilever magnet and the counter magnet in terms of geometrical parameters. A simple model provides a sound prediction of the behavior of the system. Finally, we measured the energy required to store a bit of information on the system, which, for the considered protocols, is bounded by the energy barrier separating the two stable states.
Hanford coring bit temperature monitor development testing results report
Rey, D.
1995-05-01
Instrumentation which directly monitors the temperature of a coring bit used to retrieve core samples of high level nuclear waste stored in tanks at Hanford was developed at Sandia National Laboratories. Monitoring the temperature of the coring bit is desired to enhance the safety of the coring operations. A unique application of mature technologies was used to accomplish the measurement. This report documents the results of development testing performed at Sandia to assure the instrumentation will withstand the severe environments present in the waste tanks.
Development of a jet-assisted polycrystalline diamond drill bit
Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.
1997-12-31
A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face, and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated down-hole by a mud-actuated hammer. Initial laboratory studies show that rate-of-penetration increases on the order of a factor of two over unaugmented rotary and/or percussive drilling are possible with jet assistance.
Entanglement-assisted zero-error codes
NASA Astrophysics Data System (ADS)
Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan
2011-03-01
Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of non-contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield non-contextuality proofs which are particularly `strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.
Optimization of Trade-offs in Error-free Image Transmission
NASA Astrophysics Data System (ADS)
Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.
1989-05-01
The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves an error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted with a coding rate of between 0.25 and 0.75 b/p while error-free versions can be transmitted with an overall coding rate between 4.5 and 6.5 b/p.
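The two-stage scheme, sending a compact lossy approximation first and then the residual "error image" whose addition restores the original exactly, can be sketched with a coarse quantizer standing in for the truncated-precision DCT stage. This illustrates the error-free principle only, not the N800 coding details:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.int32)

# Stage 1: fast approximate delivery (coarse quantization stands in for
# a low-precision DCT encoding)
approx = (image // 16) * 16

# Stage 2: residual "error image"; transmitting it makes the result lossless
residual = image - approx

restored = approx + residual
assert np.array_equal(restored, image)   # exact, error-free reconstruction
```

The trade-off the study measures is visible even here: a coarser stage 1 shrinks the first transmission but enlarges the residual that must follow, so total coding rate has a minimum at some intermediate precision.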
Serialized quantum error correction protocol for high-bandwidth quantum repeaters
NASA Astrophysics Data System (ADS)
Glaudell, A. N.; Waks, E.; Taylor, J. M.
2016-09-01
Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB km^-1, logical gate failure probabilities of 10^-5, photon creation and measurement error rates of 10^-5, and a gate speed of 80 ps, we find the maximum single repeater chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the {[[3,1,2
Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems
Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio
2014-01-01
Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge. PMID:24678274
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2012-04-01
Simplified multi-track detection schemes using a priori information for bit patterned magnetic recording (BPMR) are proposed in this paper. The proposed detection schemes adopt the simplified trellis diagram, use a priori information, and detect the main-track data in the along- and cross-track directions. The simplified trellis diagram, which has 4 states and 8 branches, can be obtained by setting the corner entries of the generalized partial response (GPR) target to zero and replacing the four parallel branches with a single branch. However, these simplified techniques seriously suffer from performance degradation in high density BPMR channels. To overcome the performance degradation, a priori information is used to give higher reliability to the branch metric. In addition, to fully use the characteristics of channel detection with a two-dimensional (2D) GPR target, the proposed schemes estimate a priori information and detect the main-track data in the along- and cross-track directions by using a 2D equalizer with a 2D GPR target. The bit error rate performances of the proposed schemes are compared with previous detection schemes when the areal density is 3 Tb/in². Simulation results show that the proposed schemes with simpler structures have more than 2 dB gains compared with the other detection schemes.
Security of two-state and four-state practical quantum bit-commitment protocols
NASA Astrophysics Data System (ADS)
Loura, Ricardo; Arsenović, Dušan; Paunković, Nikola; Popović, Duška B.; Prvanović, Slobodan
2016-12-01
We study cheating strategies against a practical four-state quantum bit-commitment protocol [A. Danan and L. Vaidman, Quant. Info. Proc. 11, 769 (2012)], 10.1007/s11128-011-0284-4 and its two-state variant [R. Loura et al., Phys. Rev. A 89, 052336 (2014)], 10.1103/PhysRevA.89.052336 when the underlying quantum channels are noisy and the cheating party is constrained to using single-qubit measurements only. We show that simply inferring the transmitted photons' states by using the Breidbart basis, optimal for ambiguous (minimum-error) state discrimination, does not directly produce an optimal cheating strategy for this bit-commitment protocol. We introduce a strategy, based on certain postmeasurement processes and show it to have better chances at cheating than the direct approach. We also study to what extent sending forged geographical coordinates helps a dishonest party in breaking the binding security requirement. Finally, we investigate the impact of imperfect single-photon sources in the protocols. Our study shows that, in terms of the resources used, the four-state protocol is advantageous over the two-state version. The analysis performed can be straightforwardly generalized to any finite-qubit measurement, with the same qualitative results.
Error Analysis: Past, Present, and Future
ERIC Educational Resources Information Center
McCloskey, George
2017-01-01
This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…
An 11 μW Sub-pJ/bit Reconfigurable Transceiver for mm-Sized Wireless Implants.
Yakovlev, Anatoly; Jang, Ji Hoon; Pivonka, Daniel
2016-02-01
A wirelessly powered 11 μW transceiver for implantable devices has been designed and demonstrated through 35 mm of porcine heart tissue. The prototype was implemented in 65 nm CMOS occupying 1 mm × 1 mm with a 2 mm × 2 mm off-chip antenna. The IC consists of a rectifier, regulator, demodulator, modulator, controller, and sensor interface. The forward link transfers power and data on a 1.32 GHz carrier using low-depth ASK modulation that minimizes impact on power delivery and achieves from 4 to 20 Mbps with 0.3 pJ/bit at 4 Mbps. The backscattering link modulates the antenna impedance with a configurable load for operation in diverse biological environments and achieves up to 2 Mbps at 0.7 pJ/bit. The device supports TDMA, allowing for operation of multiple devices from a single external transceiver.
Compiler-Assisted Detection of Transient Memory Errors
Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-06-09
The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and, second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
Reducing Soft-error Vulnerability of Caches using Data Compression
Mittal, Sparsh; Vetter, Jeffrey S
2016-01-01
With ongoing chip miniaturization and voltage scaling, particle strike-induced soft errors present an increasingly severe threat to the reliability of on-chip caches. In this paper, we present a technique to reduce the vulnerability of caches to soft errors. Our technique uses data compression to reduce the number of vulnerable data bits in the cache and performs selective duplication of the more critical data bits to provide extra protection for them. Microarchitectural simulations show that our technique is effective in reducing the architectural vulnerability factor (AVF) of the cache and outperforms a competing technique. For single- and dual-core system configurations, the average reduction in AVF is 5.59X and 8.44X, respectively. Also, the implementation and performance overheads of our technique are minimal, and it is useful for a broad range of workloads.
NASA Astrophysics Data System (ADS)
Macuda, Jan
2012-11-01
In Poland, all lignite mines are dewatered with large-diameter wells. Drilling such wells is inefficient owing to the presence of loose Quaternary and Tertiary material and considerable dewatering of the rock mass within the open-pit area. Difficult geological conditions significantly lengthen the time needed to drill large-diameter dewatering wells, and various drilling complications and breakdowns related to caving may occur. Higher drilling rates in large-diameter wells can be achieved only when new cutter-bit designs are worked out and rock drillability tests are performed to find optimum mechanical parameters of the drilling technology. Those tests were performed with a ø 1.16 m bit in separated, macroscopically homogeneous layers of similar drillability. Depending on the designed thickness of the drilled layer, measurement sections from 0.2 to 1.0 m long were established, and each section was drilled at constant rotary speed and weight on bit. Prior to the drillability tests, accounting for the technical characteristics of the rig and the strength of the string and the cutter bit, limits were established for the mechanical parameters of the drilling technology: P ∈ (Pmin; Pmax), n ∈ (nmin; nmax), where Pmin and Pmax are the lowest and highest values of weight on bit, and nmin and nmax are the lowest and highest values of the rotary speed of the bit. To find the dependence of the rate of penetration on weight on bit and rotary speed, various regression models were analyzed. The most satisfactory results were obtained for the exponential model describing the influence of weight on bit and rotary speed on drilling rate. The regression coefficients and statistical parameters prove the good fit of the model to the measurement data, presented in tables 4-6. The average drilling rate for a cutter bit with profiled wings is described by: Vśr = Z·P^a·n^b, where Vśr is the average drilling rate, Z is the drillability coefficient, P
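The exponential model above, Vśr = Z·P^a·n^b, becomes linear after taking logarithms, so its coefficients can be recovered by ordinary least squares. A minimal sketch, using hypothetical parameter values rather than the paper's fitted coefficients:

```python
import numpy as np

def fit_drilling_model(P, n, V):
    """Fit V = Z * P**a * n**b by linear least squares in log space."""
    A = np.column_stack([np.ones_like(P), np.log(P), np.log(n)])
    coef, *_ = np.linalg.lstsq(A, np.log(V), rcond=None)
    logZ, a, b = coef
    return np.exp(logZ), a, b

# Synthetic measurements generated from assumed (hypothetical) parameters
rng = np.random.default_rng(0)
P = rng.uniform(50, 200, 30)   # weight on bit
n = rng.uniform(20, 60, 30)    # rotary speed of bit
V = 0.01 * P**0.8 * n**0.5     # noise-free drilling rates

Z, a, b = fit_drilling_model(P, n, V)
print(round(Z, 4), round(a, 3), round(b, 3))  # → 0.01 0.8 0.5
```

On noise-free synthetic data the fit recovers Z, a, and b exactly; with real measurements the residuals would supply the statistical fit parameters the paper reports.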
Error control coding for satellite and space communications
NASA Technical Reports Server (NTRS)
Georghiades, Costas N.; Shu, Lin
1987-01-01
The optical direct detection channel is discussed. It is shown how simple trellis-coded modulation can be used to improve performance or increase throughput (in bits per second) without bandwidth expansion or performance loss. In fact, a modest performance gain can be achieved. The concentration is on signals derived from the pulse-position modulation format by allowing overlap.
A Planar Approximation for the Least Reliable Bit Log-likelihood Ratio of 8-PSK Modulation
NASA Technical Reports Server (NTRS)
Thesling, William H.; Vanderaar, Mark J.
1994-01-01
The optimum decoding of component codes in block coded modulation (BCM) schemes requires the use of the log-likelihood ratio (LLR) as the signal metric. An approximation to the LLR for the least reliable bit (LRB) in an 8-PSK modulation, based on planar equations with fixed-point arithmetic, is developed that is both accurate and easily realizable for practical BCM schemes. Through an error power analysis and an example simulation it is shown that the approximation results in only 0.06 dB of degradation relative to the exact expression at an Es/N0 of 10 dB. It is also shown that the approximation can be realized in combinatorial logic using roughly 7300 transistors. This compares favorably to a look-up-table approach in typical systems.
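For reference, the exact LLR being approximated can be computed directly from the 8-PSK constellation, together with the common max-log simplification. This is a generic sketch with an assumed Gray mapping, not the paper's planar fixed-point approximation:

```python
import cmath
import math

# Gray-mapped 8-PSK: symbol index i -> unit-circle point, label = Gray code of i
GRAY = [0, 1, 3, 2, 6, 7, 5, 4]
CONST = [(cmath.exp(2j * math.pi * i / 8), GRAY[i]) for i in range(8)]

def exact_llr(r, bit, n0):
    """Exact LLR log(P[bit=0|r] / P[bit=1|r]) for AWGN with noise density n0."""
    num = sum(math.exp(-abs(r - s) ** 2 / n0) for s, b in CONST if not (b >> bit) & 1)
    den = sum(math.exp(-abs(r - s) ** 2 / n0) for s, b in CONST if (b >> bit) & 1)
    return math.log(num / den)

def maxlog_llr(r, bit, n0):
    """Max-log approximation: keep only the nearest symbol under each hypothesis."""
    d0 = min(abs(r - s) ** 2 for s, b in CONST if not (b >> bit) & 1)
    d1 = min(abs(r - s) ** 2 for s, b in CONST if (b >> bit) & 1)
    return (d1 - d0) / n0
```

At an Es/N0 around 10 dB the two agree closely except near decision boundaries, which is exactly where a careful approximation of the least reliable bit matters.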
Radiation-hardened 16K-bit MNOS EAROM
Knoll, M.G.; Dellin, T.A.; Jones, R.V.
1983-01-01
A radiation-hardened silicon-gate CMOS/NMNOS 16K-bit EAROM has been designed, fabricated, and evaluated. This memory has been designed to be used as a ROM replacement in radiation-hardened microprocessor-based systems.
A radiation-hardened 16/32-bit microprocessor
Hass, K.J.; Treece, R.K.; Giddings, A.E.
1989-01-01
A radiation-hardened 16/32-bit microprocessor has been fabricated and tested. Our initial evaluation has demonstrated that it is functional after a total gamma dose of 5Mrad(Si) and is immune to SEU from Krypton ions. 3 refs., 2 figs.
Characterization of a 16-Bit Digitizer for Lidar Data Acquisition
NASA Technical Reports Server (NTRS)
Williamson, Cynthia K.; DeYoung, Russell J.
2000-01-01
A 6-MHz 16-bit waveform digitizer was evaluated for use in atmospheric differential absorption lidar (DIAL) measurements of ozone. The digitizer noise characteristics were evaluated, and actual ozone DIAL atmospheric returns were digitized. This digitizer could replace computer-automated measurement and control (CAMAC)-based commercial digitizers and improve voltage accuracy.
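For context, the ideal quantization limit of a 16-bit converter bounds the voltage accuracy achievable in such a lidar digitizer. A quick back-of-the-envelope check using the standard formulas (not figures from this evaluation; the 2 V full-scale range is an assumed example):

```python
def ideal_snr_db(bits):
    """Ideal quantization SNR of an N-bit converter for a full-scale sine wave."""
    return 6.02 * bits + 1.76

def lsb_volts(full_scale, bits):
    """Voltage resolution (one LSB) of an N-bit digitizer."""
    return full_scale / 2 ** bits

print(round(ideal_snr_db(16), 2))          # → 98.08
print(f"{lsb_volts(2.0, 16) * 1e6:.1f}")   # LSB in microvolts → 30.5
```

Real digitizer noise, as evaluated in the paper, is always worse than this ideal bound.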
Rock bit requires no flushing medium to maintain drilling speed
NASA Technical Reports Server (NTRS)
1965-01-01
A steel drill bit having terraced teeth intersected by spiral grooves permits the boring of small holes through rock with low power. The cuttings are stored in a chamber behind the cutting head. It could be used as a sampling device.
Critical Investigation of Wear Behaviour of WC Drill Bit Buttons
NASA Astrophysics Data System (ADS)
Gupta, Anurag; Chattopadhyaya, Somnath; Hloch, Sergej
2013-01-01
Mining and petroleum drill bits are subjected to highly abrasive rock and high-velocity fluids that cause severe wear and erosion in service. To augment the rate of penetration and minimize the cost per foot, such drill bits are subjected to increasing rotary speeds and weight. A rotary/percussive drill typically hits the rock 50 times per second with hydraulic impact pressure of about 170-200 bar and feed pressure of about 90-100 bar, while rotating at 75-200 rpm. The drill rig delivers a high-velocity flow of drilling fluid onto the rock surface to dislodge cuttings and cool the bit. The impingement of high-velocity drilling fluid with entrained cuttings accelerates the erosion rate of the bit. Also, high service temperature contributes to softening of the rock for increased penetration. Hence, there is a need to optimize the drilling process and balance the wear rate and penetration rate simultaneously. This paper presents an experimental scanning electron microscopy (SEM) study of electroplated (nickel-bonded) diamond drills for different wear modes.
A 10-bit 50-MS/s subsampling pipelined ADC based on SMDAC and opamp sharing
NASA Astrophysics Data System (ADS)
Lijie, Chen; Yumei, Zhou; Baoyue, Wei
2010-11-01
This paper describes a 10-bit, 50-MS/s pipelined A/D converter (ADC) with a proposed area- and power-efficient architecture. The conventional dedicated sample-and-hold amplifier (SHA) is eliminated, and the matching requirement between the first multiplying digital-to-analog converter (MDAC) and the sub-ADC is avoided by merging the SHA with the first MDAC (SMDAC), an architecture that features low power consumption and stable operation. Further reduction of power and area is achieved by sharing an opamp between two successive pipelined stages, which also reduces the effect of opamp offset and crosstalk between stages. The 10-bit pipelined ADC is thus realized using just four opamps. The ADC demonstrates a peak signal-to-noise-and-distortion ratio (SNDR) of 52.67 dB and a spurious-free dynamic range (SFDR) of 59.44 dB with a Nyquist input at the full sampling rate. Constant dynamic performance is maintained for input frequencies up to 49.7 MHz, twice the Nyquist frequency, at 50 MS/s. The ADC prototype occupies an active area of 1.81 mm² in a 0.35 μm CMOS process and consumes 133 mW when sampling at 50 MHz from a 3.3-V power supply.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Cheat-sensitive commitment of a classical bit coded in a block of m × n round-trip qubits
NASA Astrophysics Data System (ADS)
Shimizu, Kaoru; Fukasaka, Hiroyuki; Tamaki, Kiyoshi; Imoto, Nobuyuki
2011-08-01
This paper proposes a quantum protocol for a cheat-sensitive commitment of a classical bit. Alice, the receiver of the bit, can examine dishonest Bob, who changes or postpones his choice. Bob, the sender of the bit, can examine dishonest Alice, who violates concealment. For each round-trip case, Alice sends one of two spin states |S±⟩ by choosing basis S at random from two conjugate bases X and Y. Bob chooses basis C ∈ {X,Y} to perform a measurement and returns a resultant state |C±⟩. Alice then performs a measurement with the other basis R (≠S) and obtains an outcome |R±⟩. In the opening phase, she can discover dishonest Bob, who unveils a wrong basis with a faked spin state, or Bob can discover dishonest Alice, who infers basis C but destroys |C±⟩ by setting R to be identical to S in the commitment phase. If a classical bit is coded in a block of m × n qubit particles, impartial examinations and probabilistic security criteria can be achieved.
Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii
2015-01-01
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
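The effect of limited weight precision can be reproduced with a simple uniform quantizer; the sketch below shows how mean-squared weight error grows as precision drops toward two bits. This is illustrative only, not the paper's spiking-network experiments, and the Gaussian weight distribution is an assumption:

```python
import numpy as np

def quantize_weights(w, bits):
    """Round weights to a symmetric signed fixed-point grid with `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # positive levels; 2 bits -> {-s, 0, +s}
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, 1000)            # assumed Gaussian weight distribution
errs = {b: float(np.mean((w - quantize_weights(w, b)) ** 2)) for b in (8, 4, 2)}
print({b: f"{e:.1e}" for b, e in errs.items()})
```

Training that is aware of the target grid (as the adapted mechanism in the paper does) can place weights so that this quantization error costs far less accuracy than post-hoc rounding.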
A 2-bit/Cell Gate-All-Around Flash Memory of Self-Assembled Silicon Nanocrystals
NASA Astrophysics Data System (ADS)
Chen, Hung-Bin; Chang, Chun-Yen; Hung, Min-Feng; Tang, Zih-Yun; Cheng, Ya-Chi; Wu, Yung-Chun
2013-02-01
This work presents a gate-all-around (GAA) polycrystalline silicon (poly-Si) nanowire (NW) channel poly-Si/SiO2/Si3N4/SiO2/poly-Si (SONOS) nonvolatile memory (NVM) with a self-assembled Si nanocrystal (Si-NC) embedded charge-trapping (CT) layer. Fabrication of the Si-NCs is simple and compatible with the current flash process. The 2-bit operations, based on channel hot-electron injection for programming and channel hot-hole injection for erasing, are clearly achieved by the localized discrete traps. Studies of the programming and erasing characteristics show that the GAA structure can effectively reduce the operation voltage and shorten the pulse time. Programming or erasing one bit does not affect the other bit. High-temperature retention studies show that the cell embedded with Si-NCs exhibits excellent electron confinement, both vertically and laterally. With respect to endurance, the memory window does not close after 10⁴ program/erase (P/E) cycles of stress. The 2-bit operation of the GAA Si-NC NVM provides scalability, reliability, and flexibility for three-dimensional (3D) high-density flash memory applications.
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
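The structure described, a fixed number K of "on" bits in a block of length N, can be illustrated with a toy fixed-weight code. The small N and K here are assumptions for illustration; the program's codes and its sparseness parameter M are more elaborate:

```python
import itertools
import math
import random

N, K = 8, 2  # hypothetical small block length and number of "on" bits
codebook = [set(c) for c in itertools.combinations(range(N), K)]
assert len(codebook) == math.comb(N, K)  # 28 codewords -> log2(28) bits per block

def to_bits(word):
    return [1 if i in word else 0 for i in range(N)]

# Every codeword has weight exactly K, so any single bit flip is detectable:
word = to_bits(random.choice(codebook))
corrupted = word[:]
corrupted[3] ^= 1
print(sum(word) == K, sum(corrupted) == K)  # → True False
```

The weight constraint is what gives such codes their error-detection strength at high code rates, which a Monte Carlo harness like the one described can then quantify.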
The effects of long delay and transmission errors on the performance of TP-4 implementations
NASA Technical Reports Server (NTRS)
Durst, Robert C.; Evans, Eric L.; Mitchell, Randy C.
1991-01-01
A set of tools that allows us to measure and examine the effects of transmission delay and errors on the performance of TP-4 implementations has been developed. The tools give insight into both the large- and small-scale behaviors of an implementation. These tools have been systematically applied to a commercial implementation of TP-4. Measurements show, among other things, that a 2-second one-way transmission delay and an effective bit-error rate of 1 error per 100,000 bits can result in a 95 percent reduction in TP-4 throughput. The detailed statistics give insight into why transmission delay and errors affect this implementation so significantly and support a number of 'lessons learned' that could be applied to TP-4 implementations that operate more robustly across networks with long transmission delays and transmission errors.
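Much of the throughput collapse at that error rate follows from per-packet loss probability alone. A quick sketch of how a 10⁻⁵ bit-error rate translates into packet loss under an independent-error assumption (the 1 KiB packet size is an assumption, not the study's configuration):

```python
def packet_loss_prob(ber, bits_per_packet):
    """Probability that at least one bit in a packet is corrupted
    (independent bit errors assumed)."""
    return 1.0 - (1.0 - ber) ** bits_per_packet

ber = 1e-5          # 1 error per 100,000 bits, as in the study
pkt = 1024 * 8      # assumed 1 KiB packet
p = packet_loss_prob(ber, pkt)
print(round(p, 3))  # → 0.079
```

Nearly 8% of packets are lost, so a retransmission scheme tuned for clean, low-delay LANs spends much of its time in timeout and recovery, which is consistent with the measured 95 percent throughput reduction.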
Estimating Hardness from the USDC Tool-Bit Temperature Rise
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph; Sherrit, Stewart
2008-01-01
A method of real-time quantification of the hardness of a rock or similar material involves measurement of the temperature, as a function of time, of the tool bit of an ultrasonic/sonic driller/corer (USDC) that is being used to drill into the material. The method is based on the idea that, other things being about equal, the rate of rise of temperature and the maximum temperature reached during drilling increase with the hardness of the drilled material. In this method, the temperature is measured by means of a thermocouple embedded in the USDC tool bit near the drilling tip. The hardness of the drilled material can then be determined through correlation of the temperature-rise-versus-time data with time-dependent temperature rises determined in finite-element simulations of, and/or experiments on, drilling at various known rates of advance or known power levels through materials of known hardness. The figure presents an example of empirical temperature-versus-time data for a particular 3.6-mm USDC bit, driven at an average power somewhat below 40 W, drilling through materials of various hardness levels. The temperature readings from within a USDC tool bit can also be used for purposes other than estimating the hardness of the drilled material. For example, they can be especially useful as feedback to control the driving power to prevent thermal damage to the drilled material, the drill bit, or both. In the case of drilling through ice, the temperature readings could be used as a guide to maintaining sufficient drive power to prevent jamming of the drill by preventing refreezing of melted ice in contact with the drill.
Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei
2016-11-10
Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of gamma-gamma (GG) distributed turbulence and misalignment fading. Based on the best-path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. The performance of this optical mesh network is then analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the validity of the results for the average bit error rate (ABER) and outage probability. The numerical results show that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when the multi-hop parallel network is used in FSO links. Furthermore, using more hops and cooperative paths further improves the quality of the communication.
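As a baseline for ABER results of this kind, BPSK can be Monte Carlo simulated over a plain AWGN channel and checked against the closed form; this sketch deliberately omits the paper's gamma-gamma turbulence, pointing errors, and best-path selection:

```python
import math
import random

def bpsk_ber_mc(snr_db, n=200_000, seed=7):
    """Monte Carlo BER of BPSK over AWGN (unit-energy bits, always send +1)."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)            # Eb/N0, linear
    sigma = math.sqrt(1 / (2 * snr))     # noise standard deviation
    errors = sum(1 for _ in range(n) if 1 + rng.gauss(0, sigma) < 0)
    return errors / n

def bpsk_ber_theory(snr_db):
    """Closed-form BPSK BER over AWGN: 0.5 * erfc(sqrt(Eb/N0))."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(snr))

mc, th = bpsk_ber_mc(6), bpsk_ber_theory(6)
print(f"{mc:.2e} {th:.2e}")
```

Fading multiplies the received amplitude by a random channel gain inside the loop; the paper's contribution is deriving the resulting ABER analytically for the GG-plus-misalignment model with path selection.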
Gain and noise characteristics of high-bit-rate silicon parametric amplifiers.
Sang, Xinzhu; Boyraz, Ozdal
2008-08-18
We report a numerical investigation of parametric amplification of high-bit-rate signals and the related noise figure inside silicon waveguides in the presence of two-photon absorption (TPA), TPA-induced free-carrier absorption, free-carrier-induced dispersion, and linear loss. Different pump parameters are considered to achieve net gain and a low noise figure. We show that net gain can be achieved only in the anomalous dispersion regime at high repetition rates if short pulses are used. An evaluation of the noise properties of parametric amplification in silicon waveguides is presented. By choosing a pulsed pump in suitably designed silicon waveguides, parametric amplification can be a chip-scale solution for high-speed optical communication and optical signal processing systems.
Elliott, C.J.; McVey, B. ); Quimby, D.C. )
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of {plus minus}25{mu}m, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015: chap 205. Rezvani I, Rezvani G. An ...
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
2015-01-01
Traditional encryption techniques require packet overhead, introduce processing delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues considerably reduce the effective transmission data rate (throughput) in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error performance in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but instead goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average bit-error probability and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared with those of the conventional Advanced Encryption Standard (AES).
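A Hamming code of the kind the authors apply to the small encrypted portion corrects any single-bit error per codeword. A minimal Hamming(7,4) sketch (the generic textbook construction, not necessarily the paper's exact code parameters):

```python
def ham74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword (p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def ham74_decode(c):
    """Correct up to one flipped bit, then strip the parity bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3       # syndrome = 1-based position of the error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = ham74_encode(data)
cw[4] ^= 1                           # inject a single-bit channel error
print(ham74_decode(cw) == data)      # → True
```

Because only the small encrypted portion is coded, the 7/4 rate expansion applies to a fraction of the frame, which is why the overall bit-rate increase stays small.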
Balancing the Lifetime and Storage Overhead on Error Correction for Phase Change Memory.
An, Ning; Wang, Rui; Gao, Yuan; Yang, Hailong; Qian, Depei
2015-01-01
As DRAM faces scaling difficulties in terms of energy cost and reliability, several nonvolatile storage materials have been proposed as substitutes for, or supplements to, main memory. Phase Change Memory (PCM) is one of the most promising nonvolatile memories that could be put into use in the near future. However, before becoming a qualified main memory technology, PCM must be designed reliably so that it can ensure stable system operation even when errors occur. The typical wear-out errors in PCM have been well studied, but transient errors, caused by high-energy particles striking the complementary metal-oxide semiconductor (CMOS) circuits of PCM chips or by resistance drift in multi-level cell PCM, have attracted little attention. In this paper, we propose an innovative mechanism, Local-ECC-Global-ECPs (LEGE), which addresses both soft errors and hard errors (wear-out errors) in PCM memory systems. Our idea is to deploy a local error correction code (ECC) section on every data line, which can detect and correct one-bit errors immediately, and a global error correction pointers (ECPs) buffer for the whole memory chip, which can be reloaded to correct additional hard-error bits. The local ECC is used to detect and correct unknown one-bit errors, and the global ECPs buffer is used to store the corrected values of hard errors. In comparison to ECP-6, our method provides almost identical lifetimes but reduces storage overhead by approximately 50%. Moreover, our structure reduces access-latency overhead by approximately 3.55% at the cost of a 1.61% increase in storage overhead compared to PAYG, a hard-error-only solution.
ERIC Educational Resources Information Center
Kearsley, Greg P.
This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…
Family of image compression algorithms which are robust to transmission errors
NASA Astrophysics Data System (ADS)
Creusere, Charles D.
1996-10-01
In this work, we present a new family of image compression algorithms derived from Shapiro's embedded zerotree wavelet (EZW) coder. These new algorithms introduce robustness to transmission errors into the bit stream while still preserving its embedded structure. This is done by partitioning the wavelet coefficients into groups, coding each group independently, and interleaving the bit streams for transmission; thus, if one bit is corrupted, only one of these bit streams is truncated in the decoder. If each group of wavelet coefficients uniformly spans the entire image, then the objective and subjective qualities of the reconstructed image are very good. To illustrate the advantages of this new family, we compare it to the conventional EZW coder. For example, one variation has a peak signal-to-noise ratio (PSNR) slightly lower than that of the conventional algorithm when no errors occur, but when a single error occurs at bit 1000, the PSNR of the new coder is well over 5 dB higher for both test images. Finally, we note that the new algorithms do not increase the complexity of the overall system and, in fact, they are far more easily parallelized than the conventional EZW coder.
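The interleaving idea, where a corrupted bit damages only the one independently coded stream it belongs to, can be sketched with equal-length streams (a simplification; the actual EZW bit streams are variable-length):

```python
def interleave(streams):
    """Bit-interleave equal-length streams: output order s0[0], s1[0], s2[0], ..."""
    return [b for group in zip(*streams) for b in group]

def deinterleave(bits, n):
    """Undo interleave: stream i gets every n-th bit starting at offset i."""
    return [bits[i::n] for i in range(n)]

# Four independently coded (hypothetical) bit streams of 6 bits each:
streams = [[(i + j) % 2 for j in range(6)] for i in range(4)]
tx = interleave(streams)
tx[5] ^= 1                      # corrupt one channel bit
rx = deinterleave(tx, 4)
damaged = [i for i in range(4) if rx[i] != streams[i]]
print(damaged)  # → [1]  (channel bit 5 belongs to stream 5 % 4 = 1)
```

Since each group spans the whole image, losing the tail of one stream degrades the reconstruction uniformly rather than wiping out a spatial region.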
Fully Distrustful Quantum Bit Commitment and Coin Flipping
NASA Astrophysics Data System (ADS)
Silman, J.; Chailloux, A.; Aharon, N.; Kerenidis, I.; Pironio, S.; Massar, S.
2011-06-01
In the distrustful quantum cryptography model the parties have conflicting interests and do not trust one another. Nevertheless, they trust the quantum devices in their labs. The aim of the device-independent approach to cryptography is to do away with the latter assumption, and, consequently, significantly increase security. It is an open question whether the scope of this approach also extends to protocols in the distrustful cryptography model, thereby rendering them “fully” distrustful. In this Letter, we show that for bit commitment, one of the most basic primitives within the model, the answer is positive. We present a device-independent (imperfect) bit-commitment protocol, where Alice's and Bob's cheating probabilities are ≃0.854 and 3/4, which we then use to construct a device-independent coin flipping protocol with bias ≲0.336.
A 128K-bit CCD buffer memory system
NASA Technical Reports Server (NTRS)
Siemens, K. H.; Wallace, R. W.; Robinson, C. R.
1976-01-01
A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage and particularly to the problem of interfacing widely varying data rates. 8K-bit CCD shift register memories were used to construct a feasibility-model 128K-bit buffer memory system. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic synchronization of data input with the recirculating CCD memory block start address. Descriptions are provided of both the buffer memory system and a custom tester that was used to exercise the memory. The testing procedures and results are discussed. Suggestions are provided for further development regarding the use of advanced CCD memory devices in both simplified and expanded memory system applications.
Efficient biased random bit generation for parallel processing
Slone, Dale M.
1994-09-28
A lattice gas automaton was implemented on a massively parallel machine (the BBN TC2000) and a vector supercomputer (the CRAY C90). The automaton models the Burgers equation ρ_t + ρρ_x = νρ_{xx} in one dimension. The lattice gas evolves by advecting and colliding pseudo-particles on a one-dimensional, periodic grid. The specific rules for colliding particles are stochastic in nature and require the generation of many billions of random numbers to create the random bits necessary for the lattice gas. The goal of the thesis was to speed up the process of generating the random bits and thereby lessen the computational bottleneck of the automaton.
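A minimal sketch of biased random bit generation of the kind the stochastic collision rules need. The bias probability and generator below are illustrative choices; the thesis targets vectorized and massively parallel implementations.

```python
# Generate bits that are 1 with a chosen probability p (here a hypothetical
# collision probability), as a simple serial stand-in for the thesis's
# vectorized/parallel generators.
import random

def biased_bits(n, p, seed=12345):
    """Return n random bits, each equal to 1 with probability p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

bits = biased_bits(100000, 0.25)
mean = sum(bits) / len(bits)   # empirical bias, close to 0.25
```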
b.i.t. Bremerhaven: Thin Clients Relieve Schools
NASA Astrophysics Data System (ADS)
The Bremerhaven school authority (Schulamt Bremerhaven) is centralizing its administrative IT, creating room for pedagogical and organizational challenges. Maintenance and support of the new infrastructure are handled by the service provider b.i.t. Bremerhaven (Betrieb für Informationstechnologie); the thin clients come from the Bremen-based manufacturer IGEL Technology. All-day schools, the 12-year Abitur, PISA, the abolition of the Orientierungsstufe: German schools currently have to cope with numerous organizational and pedagogical challenges. Implementing the new structures requires additional resources. Together with the service provider b.i.t. Bremerhaven, the Schulamt Bremerhaven has found an intelligent solution for creating the financial leeway needed.
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1973-01-01
The results are reported of research into the effects of signal quantization on the operation of a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed and implemented in a digital computer program based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
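The kind of experiment described can be sketched by running a first-order digital filter with the mantissa rounded to b bits after each operation and comparing against full precision. The filter, coefficient, and word length below are arbitrary choices for illustration, not the paper's program.

```python
# Measure maximum and rms quantization errors of a simple recursive filter
# y[k] = a*y[k-1] + x[k] when every result is rounded to a b-bit mantissa.
import math

def quantize(x, b):
    """Round x to a floating-point value with roughly a b-bit mantissa."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (b - e)
    return round(x * scale) / scale

def filter_response(inputs, a, b_bits=None):
    """Run the filter, optionally quantizing each intermediate result."""
    y, out = 0.0, []
    for x in inputs:
        y = a * y + x
        if b_bits is not None:
            y = quantize(y, b_bits)
        out.append(y)
    return out

xs = [1.0] + [0.0] * 99                    # impulse input
exact = filter_response(xs, 0.9)           # full double precision
coarse = filter_response(xs, 0.9, b_bits=8)
errors = [abs(e - c) for e, c in zip(exact, coarse)]
max_err = max(errors)
rms_err = math.sqrt(sum(err * err for err in errors) / len(errors))
```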
An improved pi/4-QPSK with nonredundant error correction for satellite mobile broadcasting
NASA Technical Reports Server (NTRS)
Feher, Kamilo; Yang, Jiashi
1991-01-01
An improved pi/4-quadrature phase-shift keying (QPSK) receiver that incorporates a simple nonredundant error correction (NEC) structure is proposed for satellite and land-mobile digital broadcasting. The bit-error-rate (BER) performance of pi/4-QPSK with NEC is analyzed and evaluated in a fast Rician fading and additive white Gaussian noise (AWGN) environment using computer simulation. It is demonstrated that, with simple electronics, the performance of a noncoherently detected pi/4-QPSK signal in both AWGN and fast Rician fading can be improved. When the K-factor (the ratio of direct-path power to average multipath power) of the Rician channel decreases, the improvement increases. An improvement of 1.2 dB could be obtained at a BER of 0.0001 in the AWGN channel. This performance gain is achieved without requiring any signal redundancy or additional bandwidth. Three noncoherent detection schemes for pi/4-QPSK with the NEC structure are discussed: IF-band differential detection, baseband differential detection, and FM discriminator detection. It is concluded that pi/4-QPSK with NEC is an attractive scheme for power-limited satellite land-mobile broadcasting systems.
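A generic Monte-Carlo sketch of differentially detected pi/4-QPSK in AWGN, the baseline receiver that the NEC structure improves on. This is a textbook simulation, not the NEC receiver itself; the Gray mapping, symbol count, and SNR are arbitrary choices.

```python
# pi/4-QPSK: each symbol advances the carrier phase by one of
# {pi/4, 3pi/4, 5pi/4, 7pi/4}; differential detection recovers the increment
# from the phase of r[k] * conj(r[k-1]) without carrier recovery.
import cmath, math, random

GRAY = {(0, 0): math.pi / 4, (0, 1): 3 * math.pi / 4,
        (1, 1): 5 * math.pi / 4, (1, 0): 7 * math.pi / 4}

def wrapped_distance(a, b):
    """Absolute phase difference wrapped into [0, pi]."""
    return abs(cmath.phase(cmath.exp(1j * (a - b))))

def simulate_dqpsk_ber(n_symbols, es_n0_db, seed=7):
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2 * 10 ** (es_n0_db / 10)))  # noise std per dim
    phase = 0.0
    prev_rx = 1.0 + 0j            # noiseless reference symbol, for simplicity
    bit_errors = 0
    for _ in range(n_symbols):
        bits = (rng.randint(0, 1), rng.randint(0, 1))
        phase += GRAY[bits]                        # differential encoding
        tx = cmath.exp(1j * phase)
        rx = tx + complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
        delta = cmath.phase(rx * prev_rx.conjugate())
        decided = min(GRAY, key=lambda k: wrapped_distance(delta, GRAY[k]))
        bit_errors += (decided[0] != bits[0]) + (decided[1] != bits[1])
        prev_rx = rx
    return bit_errors / (2 * n_symbols)

ber = simulate_dqpsk_ber(20000, 15.0)   # very few errors expected at 15 dB
```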
Spin Quantum Bit with Ferromagnetic Contacts for Circuit QED
Cottet, Audrey; Kontos, Takis
2010-10-15
We theoretically propose a scheme for a spin quantum bit based on a double quantum dot contacted to ferromagnetic elements. Interface exchange effects enable an all electric manipulation of the spin and a switchable strong coupling to a superconducting coplanar waveguide cavity. Our setup does not rely on any specific band structure and can in principle be realized with many different types of nanoconductors. This allows us to envision on-chip single spin manipulation and readout using cavity QED techniques.
Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A
2010-05-01
Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.
Development of a near-bit MWD system. Quarterly report, April 1994--June 1994
McDonald, W.J.; Pittard, G.T.
1994-11-01
Horizontal drilling in oil and gas fields requires accurate directional placement and knowledge of drilling conditions at the bit. The preliminary design of a drill bit with an attached measuring instrument/telemetry system is briefly described.
Omiya, Tatsunori; Yoshida, Masato; Nakazawa, Masataka
2013-02-11
We demonstrate 400 Gbit/s frequency-division-multiplexed and polarization-division-multiplexed 256 QAM-OFDM transmission over 720 km with a spectral efficiency of 14 bit/s/Hz by using high-resolution frequency domain equalization (FDE) and digital back-propagation (DBP) methods. A detailed analytical evaluation of the 256 QAM-OFDM transmission is also provided, which clarifies the influence of quantization error in the digital coherent receiver on the waveform distortion compensation with DBP.
Color encoding for gamut extension and bit-depth extension
NASA Astrophysics Data System (ADS)
Zeng, Huanzhao
2005-02-01
Monitor-oriented RGB color spaces (e.g., sRGB) are widely used for digital image representation because of the simplicity of displaying images on monitors. However, the physical gamut limits their ability to accurately encode colors in images that are not limited to the display RGB gamut. To extend the encoding gamut, non-physical RGB primaries may be used to define the color space, or the RGB tone ranges may be extended beyond the physical range. An out-of-gamut color has at least one of the R, G, and B channels below 0 or above 100%. Instead of using wide-gamut RGB primaries for gamut expansion, we may extend the tone ranges to expand the encoding gamut, allowing negative tone values and tone values over 100%. Methods to efficiently and accurately encode out-of-gamut colors are discussed in this paper. Interpretation bits are added to interpret the range of color values or to encode color values with a higher bit depth. The interpretation bits of the R, G, and B primaries can be packed and stored in an alpha channel in some image formats (e.g., TIFF) or stored in a data tag (e.g., in JPEG format). If a color image does not have colors that are outside a regular RGB gamut, a regular program (e.g., Photoshop) is able to manipulate the data correctly.
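The interpretation-bit idea can be sketched as follows. The tag values and the 2-bits-per-channel layout are assumptions for illustration, not the paper's exact encoding.

```python
# Each 8-bit channel code gets a 2-bit tag saying how to interpret it
# (in range, negative, or above 100%); the three tags are packed into an
# alpha-like fourth channel. Layout is hypothetical.
IN_RANGE, NEGATIVE, OVER = 0, 1, 2

def encode_channel(value):
    """Map an extended-range value (about -1.0 .. 2.0) to (8-bit code, tag)."""
    if value < 0.0:
        return round(-value * 255), NEGATIVE
    if value > 1.0:
        return round((value - 1.0) * 255), OVER
    return round(value * 255), IN_RANGE

def decode_channel(code, tag):
    if tag == NEGATIVE:
        return -code / 255
    if tag == OVER:
        return 1.0 + code / 255
    return code / 255

def pack_tags(tags):
    """Pack three 2-bit interpretation tags into one byte (alpha channel)."""
    return tags[0] | (tags[1] << 2) | (tags[2] << 4)

rgb = (-0.2, 0.5, 1.4)                       # one out-of-gamut color
codes, tags = zip(*(encode_channel(v) for v in rgb))
alpha = pack_tags(tags)
decoded = tuple(decode_channel(c, t) for c, t in zip(codes, tags))
```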
On the undetected error probability of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Deng, H.; Costello, D. J., Jr.
1984-01-01
Consider a concatenated coding scheme for error control on a binary symmetric channel, called the inner channel. The bit error rate (BER) of the channel is correspondingly called the inner BER and is denoted by ε_i. Two linear block codes, C_f and C_b, are used. The inner code C_f, called the frame code, is an (n, k) systematic binary block code with minimum distance d_f. The frame code is designed to correct λ or fewer errors and simultaneously detect γ (γ ≥ λ) or fewer errors, where λ + γ + 1 ≤ d_f. The outer code C_b is either an (n_b, k_b) binary block code with n_b = mk, or an (n_b, k_b) maximum distance separable (MDS) code with symbols from GF(q), where q = 2^b and the code length n_b satisfies n_b = mk. The integer m is the number of frames. The outer code is designed for error detection only.
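The frame code's correct/detect trade-off follows the standard bound for a code of minimum distance d_f: it can simultaneously correct up to λ errors and detect up to γ ≥ λ errors whenever λ + γ + 1 ≤ d_f. A small helper enumerating the allowed pairs:

```python
# Enumerate every (correctable, detectable) error pair permitted by a
# block code's minimum distance, per the bound lam + gam + 1 <= d_min.

def correct_detect_options(d_min):
    """All (lam, gam) pairs with gam >= lam and lam + gam + 1 <= d_min."""
    return [(lam, gam)
            for lam in range(d_min)
            for gam in range(lam, d_min)
            if lam + gam + 1 <= d_min]

# For d_min = 5: pure detection (0, 4), mixed (1, 3), or pure correction (2, 2).
options = correct_detect_options(5)
```

Moving along this trade-off is exactly the design freedom the frame code exploits: more detection for the outer code to act on, or more immediate correction.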
Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.
Maani, Ehsan; Katsaggelos, Aggelos K
2009-09-01
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
PEALL4: a 4-channel, 12-bit, 40-MSPS, Power Efficient and Low Latency SAR ADC
NASA Astrophysics Data System (ADS)
Rarbi, F.; Dzahini, D.; Gallin-Martel, L.; Bouvier, J.; Zeloufi, M.; Trocme, B.; Gabaldon Ruiz, C.
2015-01-01
The PEALL4 chip is a Power Efficient And Low Latency 4-channel, 12-bit, 40-MSPS successive approximation register (SAR) ADC. It was designed for a very short latency time in the context of the ATLAS Liquid Argon Calorimeter Phase I upgrade; the design could also be a good option for ATLAS Phase II and other High Energy Physics (HEP) projects. The full functionality of the converter is achieved with an embedded high-speed conversion clock generated by the ADC itself. The design and test results of the PEALL4 chip, implemented in a commercial 130 nm CMOS process, are presented. The size of this 4-channel ADC, with embedded voltage references and an sLVS output serializer, is 2.8 × 3.4 mm². The chip presents a short latency time of less than 25 ns, defined from the very beginning of sampling to the last conversion bit being made available. A total power consumption below 27 mW per channel is measured, including the reference buffer and the sLVS serializer.
Optical communications research to demonstrate 2.5 bits/detected photon
NASA Technical Reports Server (NTRS)
Lesh, J. R.
1982-01-01
The transmission of information by optical signals over a space channel with a power efficiency of 2.5 bits/detected photon markedly increases the amount of information that can be transmitted to satellites. An account is given of the research program at the Jet Propulsion Laboratory that is attempting to demonstrate that optical signals can be used to transmit information over a space channel with this power efficiency. It is noted, however, that the ability to attain 2.5 bits/detected photon (or higher) depends heavily on the validity of the mathematical models used in the performance analysis. Therefore, verification of the channel dark-current noise models is a crucial first step. Another prerequisite is a high-brightness, single-spatial-mode laser emitter. It is believed that single-spatial-mode devices with power outputs of about 1 W can be achieved by coherently combining a number of GaAs lasers in what effectively amounts to a phased array.
Performance comparison of HEVC reference SW, x265 and VPX on 8-bit 1080p content
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu
2016-09-01
This paper presents a study comparing the coding-efficiency performance of three software codecs: (a) the HEVC Main Profile reference software; (b) the x265 codec; and (c) VP10. Note that we specifically test only 8-bit performance. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance (PSNR) and (2) informal subjective assessment. Two approaches to coding were used: (i) constant quality and (ii) fixed bit rate. Constant-quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools, whereas target-bitrate coding is performed to study the compression efficiency achieved with rate control, which can and does have a significant impact. Our general conclusion is that under constant-quality coding the HEVC reference software appears to be superior to the other two, whereas with rate control and fixed-rate coding these codecs are more on an equal footing. We remark that this latter result may be partly or mainly due to the varying maturity of the rate control mechanisms in these codecs.
Repeated quantum error correction on a continuously encoded qubit by real-time feedback.
Cramer, J; Kalb, N; Rol, M A; Hensen, B; Blok, M S; Markham, M; Twitchen, D J; Hanson, R; Taminiau, T H
2016-05-05
Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.
Computer-Aided Design for Built-In-Test (CADBIT) - BIT Library. Volume 2
1989-10-01
Volume II contains a description of the CADBIT BIT Library, including an introduction, the automated procedure, the database, menus, CAD and BIT surveys, and recommendations. CAD applications were also surveyed to determine the standards required for CAD-BIT module implementation and to establish and define requirements. Topics covered include menus, the CAD-BIT feasibility demonstration, BIT and CAD workstation surveys, standards recommendations, and SMART-BIT applications.
Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou
2013-05-20
We propose an efficient protocol for optimizing the physical implementation of three-qubit quantum error correction with spatially separated quantum dot spins via a virtual-photon-induced process. In the protocol, each quantum dot is trapped in an individual cavity, and each pair of cavities is connected by an optical fiber. We propose optimal quantum circuits and describe the physical implementation for correcting both bit-flip and phase-flip errors by applying a series of one-bit unitary rotation gates and two-bit quantum iSWAP gates, which are produced by the long-range interaction between two distributed quantum dot spins mediated by the vacuum fields of the fiber and cavity. The protocol opens promising perspectives for long-distance quantum communication and distributed quantum computation networks.
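The underlying three-qubit bit-flip code can be sketched classically with an 8-entry state vector; none of the cavity/fiber physics is modeled here, only the encode / flip / syndrome / correct cycle.

```python
# Three-qubit bit-flip code on a plain state vector: encode a|000> + b|111>,
# flip one qubit, read the Z0Z1 / Z1Z2 parities, and apply the correction.

def apply_x(state, qubit):
    """Bit-flip (Pauli-X) on one qubit of a 3-qubit state vector."""
    out = [0j] * 8
    for basis, amp in enumerate(state):
        out[basis ^ (1 << qubit)] = amp
    return out

def parity(basis, q1, q2):
    return ((basis >> q1) ^ (basis >> q2)) & 1

def syndrome(state):
    """(Z0Z1, Z1Z2) parities; deterministic for a single bit-flip error."""
    for basis, amp in enumerate(state):
        if abs(amp) > 1e-12:
            return parity(basis, 0, 1), parity(basis, 1, 2)

def correct(state):
    lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    bad = lookup[syndrome(state)]
    return apply_x(state, bad) if bad is not None else state

a, b = 0.6, 0.8
logical = [0j] * 8
logical[0b000], logical[0b111] = a, b       # logical state a|000> + b|111>
noisy = apply_x(logical, 1)                 # bit flip on the middle qubit
recovered = correct(noisy)                  # syndrome locates and undoes it
```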
Aircraft system modeling error and control error
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)
2012-01-01
A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
NASA Astrophysics Data System (ADS)
El-Shafai, Walid
2015-09-01
3D multi-view video (MVV) consists of multiple video streams shot simultaneously by several cameras around a single scene. It is therefore an urgent task to achieve high 3D MVV compression to meet future bandwidth constraints while maintaining high reception quality. 3D MVV coded bit streams transmitted over wireless networks can suffer from error propagation in the space, time, and view domains. Error concealment (EC) algorithms have the advantage of improving the received 3D video quality without any modification of the transmission rate or of the encoder hardware or software. To improve the quality of reconstructed 3D MVV, we propose an efficient adaptive EC algorithm with multi-hypothesis modes to conceal the erroneous macroblocks (MBs) of intra-coded and inter-coded frames by exploiting the spatial, temporal, and inter-view correlations between frames and views. Our proposed algorithm adapts to 3D MVV motion features and to the error locations. The lost MBs are optimally recovered by utilizing motion and disparity matching between frames and views on a pixel-by-pixel basis. Our simulation results show that the proposed adaptive multi-hypothesis EC algorithm can significantly improve objective and subjective 3D MVV quality.
NASA Astrophysics Data System (ADS)
Kumar, Santosh; Bisht, Ashish; Singh, Gurdeep; Choudhary, Kuldeep; Raina, K. K.; Amphawan, Angela
2015-12-01
Mach-Zehnder interferometer (MZI) structures collectively show a powerful capability for switching an input optical signal to a desired output port from a collection of output ports. Hence, it is possible to construct complex optical combinational digital circuits using the electro-optic effect in an MZI structure as a basic building block. Optical switches have been designed for 1-bit and 2-bit magnitude comparators based on the electro-optic effect using Mach-Zehnder interferometers. The paper presents a mathematical description of the proposed device, followed by simulation using MATLAB. Factors influencing the performance of the proposed device are analyzed. The study is verified using the beam propagation method.
Quality Improvement Method using Double Error Correction in Burst Transmission Systems
NASA Astrophysics Data System (ADS)
Tsuchiya, Naosuke; Tomiyama, Shigenori; Tanaka, Kimio
Recently, there has been a tendency to reduce error correction and flow control in order to realize high-speed transmission in burst transmission systems such as ATM networks, IP (Internet Protocol) networks, frame relay, and so on. As a result, degradations of network quality occur, namely information loss caused by buffer overflow and an increased average bit error rate; especially for high-speed information such as high-definition television signals, it is necessary to mitigate these degradations. This paper proposes a typical reconstruction method for lost information together with an improvement of the average bit error rate. To analyze the degradation phenomena, the Gilbert model is introduced for burst errors and the fluid-flow model for buffer overflow. The method is applied to an ATM network that mainly transmits video signals, and it is made clear that the proposed method is useful for high-speed transmission.
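The Gilbert model is a two-state Markov chain in which bit errors occur only in the "bad" (burst) state. A sketch with illustrative parameters (not the paper's fitted values):

```python
# Gilbert burst-error model: Good <-> Bad Markov chain; a bit is in error
# only while in the Bad state, with probability p_err_bad.
import random

def gilbert_errors(n_bits, p_gb, p_bg, p_err_bad, seed=1):
    """Return one error indicator (0/1) per transmitted bit."""
    rng = random.Random(seed)
    bad = False
    errors = []
    for _ in range(n_bits):
        # From Good: enter Bad w.p. p_gb.  From Bad: leave w.p. p_bg.
        bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
        errors.append(1 if bad and rng.random() < p_err_bad else 0)
    return errors

errs = gilbert_errors(200000, p_gb=0.001, p_bg=0.1, p_err_bad=0.5)
mean_ber = sum(errs) / len(errs)
# Stationary P(Bad) = p_gb / (p_gb + p_bg) ~ 0.0099, so BER ~ 0.005,
# concentrated in bursts of mean length 1 / p_bg = 10 bits.
```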
Error-Correcting 6/8 Modulation Code for Reducing Two-Dimensional Intersymbol Interference
NASA Astrophysics Data System (ADS)
Kim, Jinyoung; Lee, Jaejin
2011-09-01
We introduce error-correcting 6/8 modulation codes for reducing two-dimensional intersymbol interference in holographic data storage. The proposed modulation codes have a trellis-like structure in which the data are encoded, enabling their error-correcting capability; they are more complex than previous error-correcting 4/6 modulation codes because of the increased number of symbols and states relative to conventional 6/8 modulation codes. The bit error rate (BER) performance of the proposed modulation codes is improved compared with that of the 4/6 modulation codes; moreover, the proposed codes are 12.5% more efficient.
NASA Astrophysics Data System (ADS)
Kim, Jinyoung; Lee, Jaejin
2012-08-01
In this paper, we investigate a simplified decoding method for trellis-based error-correcting modulation codes using the M-algorithm for holographic data storage. The M-algorithm, which sacrifices some bit error rate performance, can reduce the Viterbi algorithm's complexity. When the M-algorithm is used in trellis-based error-correcting modulation codes, common delay and complexity problems can be reduced.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.
Soft Error Vulnerability of Iterative Linear Algebra Methods
Bronevetsky, G; de Supinski, B
2007-12-15
Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use such small features at such low voltages that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
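The silent-data-corruption scenario can be sketched by flipping one bit of an IEEE-754 value during a Jacobi iteration: a conventional residual test can still pass even though the answer differs from the clean run. The 2x2 system, injection point, and bit position below are arbitrary choices, not the paper's fault-injection setup.

```python
# Inject a single bit-flip into one solution entry of a Jacobi solve and
# show the usual residual check does not notice the corruption.
import struct

def flip_bit(x, bit):
    """Flip one bit of a float64's IEEE-754 representation."""
    (as_int,) = struct.unpack('<Q', struct.pack('<d', x))
    (out,) = struct.unpack('<d', struct.pack('<Q', as_int ^ (1 << bit)))
    return out

def jacobi(A, b, n_iters, inject_at=None, bit=40):
    """Jacobi iteration for Ax = b, optionally flipping one bit of x[0]."""
    n = len(b)
    x = [0.0] * n
    for it in range(n_iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        if it == inject_at:
            x[0] = flip_bit(x[0], bit)   # transient upset in one vector entry
    return x

A = [[4.0, 1.0], [1.0, 3.0]]             # diagonally dominant, so Jacobi converges
rhs = [1.0, 2.0]
clean = jacobi(A, rhs, 60)
hit = jacobi(A, rhs, 60, inject_at=59)   # low mantissa bit flipped at the end
residual = max(abs(rhs[i] - sum(A[i][j] * hit[j] for j in range(2)))
               for i in range(2))
# residual stays far below a typical tolerance, yet hit != clean: silent error.
```

Note that a flip injected early in Jacobi would be healed by later iterations; the dangerous cases, as the abstract argues, depend on the method's detailed convergence behavior.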
Strategy for hierarchical error correction in smart memories
Murphy, S.A.
1986-01-01
Yield, reliability, current, and power considerations impose severe constraints on the architecture of systems implemented with Wafer-Scale Integration (WSI). Not all systems are amenable to WSI, but some are, and where that is the case the benefits are overwhelming. Memories for raster-scan displays are excellent examples of integrable systems; they are highly regular, inherently fault-tolerant, and the duty cycle can be short. The feasibility question hinges on yield and reliability. To increase yield and reliability, hierarchical redundancy is used: static redundancy for yield and dynamic redundancy for reliability. In memories, a few spare elements go far. In processors, however, expensive triple redundancy is best. Therefore, a WSI smart memory comprises a large memory with a small processor. The smart memory concept is adopted on an anti-Rent strategy. For efficiency in error correction, words are long (64 bits). The placement of bits in the memory is dictated by alpha-particle considerations; a single alpha particle should precipitate no more than one error per word. Both lasers and EPROMs are used for repair; lasers eliminate blocks that have shorting faults, and EPROMs fix logic faults. This thesis presents the architecture and selected circuit details of a 64-bit processor and 512 Kbytes of static RAM.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower-level assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable, high-performance space systems.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments that have data of varying importance include a number of speech coding algorithms, packet-switched networks, multi-user systems, embedded coding systems, and high-definition television. Encoders that provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector d is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_{k-1}), where d_j, the j-th effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j-th position. It is shown that, although the free distance of a code is unique to the code and independent of the encoder realization, the effective free distance vector depends on the encoder realization.
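The scalar free distance that the effective free distance vector generalizes can be estimated by brute force for a small encoder. Here we use the standard rate-1/2, memory-2 convolutional code with generators (5, 7) octal, whose free distance is known to be 5; for a k > 1 UEP encoder the same search would be restricted per input position to yield each d_j.

```python
# Brute-force free-distance search: minimum output Hamming weight over all
# short nonzero inputs, each padded with zeros to flush the encoder memory.
from itertools import product

def encode_57(bits):
    """Rate-1/2 convolutional encoder, generator polynomials 5 and 7 (octal)."""
    s1 = s2 = 0
    out = []
    for u in bits:
        out.append(u ^ s2)            # generator 1 + D^2
        out.append(u ^ s1 ^ s2)       # generator 1 + D + D^2
        s1, s2 = u, s1
    return out

def free_distance_estimate(max_len=8):
    best = None
    for length in range(1, max_len + 1):
        for bits in product([0, 1], repeat=length):
            if 1 not in bits:
                continue              # free distance is over nonzero inputs
            weight = sum(encode_57(list(bits) + [0, 0]))   # flush memory
            best = weight if best is None else min(best, weight)
    return best

d_free = free_distance_estimate()     # 5 for the (5, 7) encoder
```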
A 0.23 pJ 11.05-bit ENOB 125-MS/s pipelined ADC in a 0.18 μm CMOS process
NASA Astrophysics Data System (ADS)
Yong, Wang; Jianyun, Zhang; Rui, Yin; Yuhang, Zhao; Wei, Zhang
2015-05-01
This paper describes a 12-bit 125-MS/s pipelined analog-to-digital converter (ADC) implemented in a 0.18 μm CMOS process. A gate-bootstrapped switch is used as the bottom-sampling switch in the first stage to enhance sampling linearity. The measured differential and integral nonlinearities of the prototype are less than 0.79 least significant bit (LSB) and 0.86 LSB, respectively, at the full sampling rate. The ADC exhibits an effective number of bits (ENOB) of more than 11.05 bits at an input frequency of 10.5 MHz, and achieves a 10.5-bit ENOB with a Nyquist input frequency at the full sample rate. In addition, the ADC consumes 62 mW from a 1.9 V power supply and occupies 1.17 mm², including an on-chip reference buffer. The figure of merit of this ADC is 0.23 pJ/step. Project supported by the Foundation of Shanghai Municipal Commission of Economy and Informatization (No. 130311).
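The quoted figure of merit can be reproduced from the reported numbers using the common Walden definition FOM = P / (2^ENOB · f_s):

```python
# Recompute the ADC figure of merit from the abstract's reported values:
# 62 mW, 11.05-bit ENOB, 125 MS/s.

def adc_fom_pj(power_w, enob_bits, sample_rate_hz):
    """Conversion energy per effective level, in picojoules per step."""
    return power_w / (2 ** enob_bits * sample_rate_hz) * 1e12

fom = adc_fom_pj(62e-3, 11.05, 125e6)   # close to the stated 0.23 pJ/step
```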
Universality and clustering in 1+1 dimensional superstring-bit models
Bergman, O.; Thorn, C.B.
1996-03-01
We construct a 1+1 dimensional superstring-bit model for D=3 type IIB superstring. This low dimension model escapes the problems encountered in higher dimension models: (1) It possesses full Galilean supersymmetry. (2) For noninteracting polymers of bits, the exactly soluble linear superpotential describing bit interactions is in a large universality class of superpotentials which includes ones bounded at spatial infinity. (3) The latter are used to construct a superstring-bit model with the clustering properties needed to define an S matrix for closed polymers of superstring bits. © 1996 The American Physical Society.
NASA Astrophysics Data System (ADS)
Souto, A.; Mateus, P.; Adão, P.; Paunković, N.
2015-10-01
In the Comment, the author states that the all-or-nothing oblivious transfer (OT) protocol proposed in our paper is insecure against a dishonest Alice and, as a corollary, derives an attack on Crépeau's construction of 1-out-of-2 OT. The security criterion used in the Comment is indeed stronger than the one used in our paper. However, we argue that the criterion used in our paper is in the spirit of the original idea of the OT protocol proposed by Rabin. Moreover, a protocol that satisfies the criterion in our paper can be used to construct useful multiparty protocols. Finally, the protocol in our paper can be used, together with a secure bit commitment scheme, to construct a 1-out-of-2 OT secure against a malicious Alice, achieving the security requirement considered in the Comment.
High-temperature seals and lubricants for geothermal rock bits. Final report
Hendrickson, R.R.; Winzenried, R.W.; Jones, A.H.
1981-04-01
High temperature seals (elastomeric and mechanical) and lubricants were developed specifically for journal-type rock bits to be used in geothermal well drilling. Results at simulated downhole conditions indicate that five selected elastomeric seals (L'Garde No. 267, Utex Nos. 227, 231 and HTCR, and Sandia Glow Discharge Coated Viton) are capable of 288 °C (500 °F) service. Two prototype mechanical seals did not achieve the life determined for the elastomeric seals. Six lubricants (Pacer PLX-024 oil, PLX-043 oil, PLX-045 oil, Geobond Oil, and Geobond Grease) demonstrated 316 °C (600 °F) capability. Recommendation is made for full-scale simulated geothermal drilling tests utilizing the improved elastomeric seals and lubricants.
Storage-efficient 16-Bit Hybrid IP traceback with Single Packet.
Yang, Ming Hour
2014-01-01
Since adversaries may spoof their source IPs in attacks, traceback schemes have been proposed to identify the attack source. However, the storage requirements of some of these schemes increase with the number of packets, and some even produce false positives because they use the IP header's fragment offset for marking. We therefore propose a 16-bit single-packet hybrid IP traceback scheme that combines packet marking and packet logging with high accuracy and low storage requirements. The size of our log tables can be bounded by the number of routes. We also set a threshold to determine whether an upstream interface number is stored in a log table or in a marking field, so as to balance the logging frequency and the computational load. Because we store upstream interface information on small-degree routers, our scheme has the lowest storage requirements among current single-packet traceback schemes. Moreover, our traceback achieves zero false positive/negative rates and guarantees reassembly of fragmented packets at the destination.
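The mark-or-log threshold decision can be sketched per router as below. This is only an illustrative model under assumed parameters (3-bit interface encoding, threshold 8); the paper's actual packet format, threshold, and table layout are not specified in the abstract:

```python
MARK_BITS = 16
DEGREE_THRESHOLD = 8   # assumed: interface numbers 0-7 are marked (3 bits each)

def forward(mark, iface, log_table, router_id):
    """One router's decision in a hybrid marking/logging traceback sketch.
    Small interface numbers are shifted into the 16-bit mark field; larger
    ones, or a mark that would overflow, are logged locally instead, so the
    log size is bounded by the routes through the router. Illustrative only."""
    if iface >= DEGREE_THRESHOLD:
        log_table.setdefault(router_id, set()).add((mark, iface))
        return 0                        # start a fresh mark downstream
    extended = (mark << 3) | iface
    if extended >> MARK_BITS:           # mark field would overflow: log it
        log_table.setdefault(router_id, set()).add((mark, iface))
        return iface
    return extended
```

A high-degree router thus pays with local storage, while low-degree routers keep the path information inside the packet, which is the trade-off the abstract describes.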
Design of CNTFET-based 2-bit ternary ALU for nanoelectronics
NASA Astrophysics Data System (ADS)
Lata Murotiya, Sneh; Gupta, Anu
2014-09-01
This article presents a hardware-efficient design of a 2-bit ternary arithmetic logic unit (ALU) using carbon nanotube field-effect transistors (CNTFETs) for nanoelectronics. The proposed structure introduces a ternary adder-subtractor functional module to optimise the ALU architecture. The full adder-subtractor (FAS) cell uses nearly 72% fewer transistors than the conventional architecture, which contains separate ternary cells for addition and subtraction. The presented ALU also minimises ternary function expressions with the utilisation of binary gates for optimisation at the circuit level, thus attaining a simple design. HSPICE simulation results demonstrate that the ternary ALU circuits achieve a great improvement in power delay product with respect to their CMOS counterparts at 32 nm.
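The digit-level arithmetic behind a shared ternary adder-subtractor cell can be sketched as follows. This models only the radix-3 arithmetic (subtraction via 3's-complement addition, the standard trick a combined cell exploits), not the CNTFET circuit itself:

```python
def t_full_add(a, b, cin=0):
    """Ternary (radix-3, digits 0-2) full adder: returns (sum digit, carry)."""
    s = a + b + cin
    return s % 3, s // 3

def t_add2(x, y):
    """Add two 2-trit numbers given as (msb, lsb) digit tuples."""
    s0, c0 = t_full_add(x[1], y[1])
    s1, c1 = t_full_add(x[0], y[0], c0)
    return c1, s1, s0                      # (carry-out, msb, lsb)

def t_sub2(x, y):
    """Subtract y from x (for x >= y) by 3's-complement addition, reusing
    the same adder digits; illustrative, not the paper's FAS circuit."""
    comp = (2 - y[0], 2 - y[1])            # diminished-radix complement of y
    _, s1, s0 = t_add2(x, comp)
    s0, c0 = t_full_add(s0, 1)             # +1 completes the 3's complement
    s1, _ = t_full_add(s1, 0, c0)
    return s1, s0                          # result modulo 9
```

Because subtraction reduces to complement-then-add, one adder datapath serves both operations, which is how a combined cell can save transistors relative to separate add and subtract cells.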
NASA Astrophysics Data System (ADS)
Truong, Trieu-Kien; Chen, Shi-Huang
2006-03-01
In this paper, a new medical image compression algorithm using cubic spline interpolation (CSI) is presented for telemedicine applications. The CSI is developed in order to subsample image data with minimal distortion and to achieve image compression. It has been shown in the literature that the CSI can be combined with the JPEG or JPEG2000 algorithm to develop a modified JPEG or JPEG2000 codec, which obtains a higher compression ratio and a better quality of reconstructed image than the standard JPEG and JPEG2000 codecs. This paper further applies the modified JPEG codec to medical image compression. Experimental results show that the proposed scheme can increase the compression ratio of the original JPEG medical data compression system by 25-30% with similar visual quality. This system can reduce the load on telecommunication networks and is quite suitable for low bit-rate telemedicine applications.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Lazopoulos, Achilleas
2006-07-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
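The error contrast the abstract discusses is easy to see on a toy integral: the same equal-weight estimator is fed either pseudorandom points or a low-discrepancy sequence. The base-2 van der Corput sequence and the test integrand below are illustrative choices, not the authors':

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the base-2 van der Corput low-discrepancy sequence
    (the radical-inverse of n)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def estimate(f, points):
    """Plain equal-weight quadrature estimator used by both MC and QMC."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x                       # exact integral over [0, 1] is 1/3
N = 1024
qmc = estimate(f, [van_der_corput(i + 1) for i in range(N)])
random.seed(0)
mc = estimate(f, [random.random() for _ in range(N)])
```

The QMC estimate lands within a few 1e-4 of 1/3, better than the ~1/sqrt(N) scale of plain MC; yet, as the abstract notes, the usual variance-based error bar is meaningless for the deterministic point set, which motivates the stochastic estimator the authors propose.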
Chaotic laser based physical random bit streaming system with a computer application interface
NASA Astrophysics Data System (ADS)
Shinohara, Susumu; Arai, Kenichi; Davis, Peter; Sunada, Satoshi; Harayama, Takahisa
2017-03-01
We demonstrate a random bit streaming system that uses a chaotic laser as its physical entropy source. By performing real-time bit manipulation for bias reduction, we were able to provide the memory of a personal computer with a constant supply of ready-to-use physical random bits at a throughput of up to 4 Gbps. We pay special attention to the end-to-end entropy source model describing how the entropy from physical sources is converted into bit entropy. We confirmed the statistical quality of the generated random bits by revealing the pass rate of the NIST SP800-22 test suite to be 65% to 75%, which is commonly considered acceptable for a reliable random bit generator. We also confirmed the stable operation of our random bit streaming system with long-term bias monitoring.
Multi-bit quantum random number generation by measuring positions of arrival photons
Yan, Qiurong; Zhao, Baosheng; Liao, Qinghong; Zhou, Nanrun
2014-10-15
We report the realization of a novel multi-bit optical quantum random number generator that continuously measures the arrival positions of photons emitted from an LED using an MCP-based WSA photon counting imaging detector. A spatial encoding method is proposed to extract multi-bit random numbers from the position coordinates of each detected photon. The randomness of the bit sequence relies on the intrinsic randomness of the quantum physical processes of photon emission and subsequent photoelectric conversion. A prototype has been built; its random bit generation rate can reach 8 Mbit/s, with a random bit generation efficiency of 16 bits per detected photon. An FPGA implementation of Huffman coding is proposed to reduce the bias of the raw extracted random bits. The random numbers passed all tests for physical random number generators.
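The 16-bits-per-photon figure is consistent with simply concatenating two 8-bit position coordinates. A minimal sketch of such a spatial encoding, assuming a 256 × 256 pixel grid (the actual detector resolution and bit layout are not given in the abstract):

```python
def encode_position(x, y, bits_per_axis=8):
    """Map one photon's arrival position on an assumed 2^8 x 2^8 pixel grid
    to a 16-bit random word by concatenating the coordinates."""
    assert 0 <= x < 2 ** bits_per_axis and 0 <= y < 2 ** bits_per_axis
    return (x << bits_per_axis) | y

word = encode_position(173, 42)   # 16 raw bits from a single detected photon
```

The raw words inherit any spatial non-uniformity of the source, which is why the abstract follows the extraction step with a debiasing stage.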
Conditional Standard Errors of Measurement for Composite Scores Using IRT
ERIC Educational Resources Information Center
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Implications of Error Analysis Studies for Academic Interventions
ERIC Educational Resources Information Center
Mather, Nancy; Wendling, Barbara J.
2017-01-01
We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…
Twenty Questions about Student Errors.
ERIC Educational Resources Information Center
Fisher, Kathleen M.; Lipson, Joseph Isaac
1986-01-01
Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Simplified quantum bit commitment using single photon nonlocality
NASA Astrophysics Data System (ADS)
He, Guang Ping
2014-10-01
We simplified our previously proposed quantum bit commitment (QBC) protocol based on the Mach-Zehnder interferometer, by replacing symmetric beam splitters with asymmetric ones. It eliminates the need for random sending time of the photons; thus, the feasibility and efficiency are both improved. The protocol is immune to the cheating strategy in the Mayers-Lo-Chau no-go theorem of unconditionally secure QBC, because the density matrices of the committed states do not satisfy a crucial condition on which the no-go theorem holds.
Design of high-bit-rate coherent communication links
NASA Astrophysics Data System (ADS)
Konyshev, V. A.; Leonov, A. V.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.
2016-12-01
We report an analysis of the problems encountered in the design of modern high-bit-rate coherent communication links. A phenomenological communication link model is described, which is suitable for solving applied tasks of the network design with nonlinear effects taken into account. We propose an engineering approach to the design that is based on the use of fundamental nonlinearity coefficients calculated in advance for the experimental configurations of communication links. An experimental method is presented for calculating the nonlinearity coefficient of communication links. It is shown that the proposed approach allows one to successfully meet the challenges in designing communication networks.
Cryptographic Properties of the Hidden Weighted Bit Function
2013-12-23
The hidden weighted bit function (HWBF), introduced by R. Bryant in IEEE Trans. Comput. 40 and revisited by D. Knuth in Vol. 4 of The Art of Computer Programming, seems to be the simplest function with an exponential binary decision diagram (BDD) size, yet it has a VLSI implementation with low area-time complexity [2]. In [19], Knuth reproved Bryant's theorem stating that the HWBF has a large BDD size.
Floating-point function generation routines for 16-bit microcomputers
NASA Technical Reports Server (NTRS)
Mackin, M. A.; Soeder, J. F.
1984-01-01
Several computer subroutines have been developed that interpolate three types of nonanalytic functions: univariate, bivariate, and map. The routines use data in floating-point form. However, because they are written for use on a 16-bit Intel 8086 system with an 8087 mathematical coprocessor, they execute as fast as routines using data in scaled integer form. Although all of the routines are written in assembly language, they have been implemented in a modular fashion so as to facilitate their use with high-level languages.
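The originals are 8086/8087 assembly routines; a Python sketch of the same univariate and bivariate (map) table interpolation conveys the logic (linear interpolation with clamping at the table ends is assumed here, since the abstract does not state the interpolation order):

```python
import bisect

def interp1(xs, ys, x):
    """Univariate table lookup with linear interpolation, clamped at the
    ends; xs must be sorted ascending."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def interp2(xs, ys, table, x, y):
    """Bivariate (map) lookup: interpolate along each row in y, then
    interpolate the row results in x."""
    rows = [interp1(ys, row, y) for row in table]
    return interp1(xs, rows, x)
```

The bivariate routine reduces to two passes of the univariate one, which keeps a modular structure much like the report describes for the assembly versions.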
All-optical pseudorandom bit sequences generator based on TOADs
NASA Astrophysics Data System (ADS)
Sun, Zhenchao; Wang, Zhi; Wu, Chongqing; Wang, Fu; Li, Qiang
2016-03-01
A scheme for an all-optical pseudorandom bit sequence (PRBS) generator is demonstrated with an optical 'XNOR' logic gate and an all-optical wavelength converter based on cascaded Tera-Hertz Optical Asymmetric Demultiplexers (TOADs). Its feasibility is verified by the generation of a return-to-zero on-off keying (RZ-OOK) 2^63-1 PRBS at a speed of 1 Gb/s with 10% duty ratio. The high randomness of the ultra-long-cycle PRBS is validated by successfully passing the standard benchmark test.
Fast computational scheme of image compression for 32-bit microprocessors
NASA Technical Reports Server (NTRS)
Kasperovich, Leonid
1994-01-01
This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG International Standards. The algorithm for the 2-d DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
BIT/External Test Figures of Merit and Demonstration Techniques
1979-12-01
The review indicates that all the FOMs fall into seventeen generic groupings. The specific FOMs within each group vary in numerical value and exact... group of tests, or all tests. TB can be represented as a sum over the active running times of the BIT/ETE test routines... FOMs may be needed. Categorization of the FOMs was also used in Section 6.0 for determining which FOMs are interrelated and for determining...
Dandona, R.; Dandona, L.
2001-01-01
Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669
ERIC Educational Resources Information Center
Richmond, Kent C.
Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…
Phasing piston error in segmented telescopes.
Jiang, Junlun; Zhao, Weirui
2016-08-22
To achieve diffraction-limited imaging, the piston errors between the segments of a segmented primary mirror telescope should be reduced to λ/40 RMS. We propose a method to detect the piston error by analyzing the intensity distribution on the image plane according to Fourier optics principles; the method can capture piston errors as large as the coherence length of the input light and reduce them to 0.026λ RMS (λ = 633 nm). This method is adaptable to any segmented and deployable primary mirror telescope. Experiments have been carried out to validate the feasibility of the method.
Studies of Error Sources in Geodetic VLBI
NASA Technical Reports Server (NTRS)
Rogers, A. E. E.; Niell, A. E.; Corey, B. E.
1996-01-01
Achieving the goal of millimeter uncertainty in three dimensional geodetic positioning on a global scale requires significant improvement in the precision and accuracy of both random and systematic error sources. For this investigation we proposed to study errors due to instrumentation in Very Long Baseline Interferometry (VLBI) and due to the atmosphere. After the inception of this work we expanded the scope to include assessment of error sources in GPS measurements, especially as they affect the vertical component of site position and the measurement of water vapor in the atmosphere. The atmosphere correction improvements described below are of benefit to both GPS and VLBI.
Two-step single slope/SAR ADC with error correction for CMOS image sensor.
Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin
2014-01-01
Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single slope ADC generates 3 bits of data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip area efficiency is 84 kμm² · cycles/sample.
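Why a redundant bit relaxes the coarse-stage noise requirement can be seen in a behavioral model: as long as the fine stage spans more than one coarse LSB, a wrong coarse decision is absorbed when the two codes are combined digitally. The stage resolutions follow the abstract, but the arithmetic model below is an illustrative assumption, not the chip's circuit:

```python
def two_step_adc(v, coarse_error=0.0):
    """Behavioral two-step conversion of v in [0, 1): a 3-bit coarse
    decision (deliberately perturbable via coarse_error, in coarse LSBs)
    plus a fine stage with extra range; the digital combination cancels
    the coarse error. Illustrative sketch only."""
    coarse = int(v * 8 + coarse_error)     # 3-bit single-slope decision
    residue = v - coarse / 8.0             # residue handed to the SAR stage
    fine = round(residue * 2048)           # 8-bit span plus redundancy
    return coarse * 256 + fine             # digital error correction

# The same 11-bit code comes out whether or not the coarse stage erred:
a = two_step_adc(0.3)
b = two_step_adc(0.3, coarse_error=0.7)
```

Algebraically, coarse*256 + round(2048*v - 256*coarse) = round(2048*v) regardless of the coarse value, which is the error-correction property the redundancy buys.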
Gillespy, T; Rowberg, A H
1994-02-01
Most digital radiologic images have an extended contrast range of 9 to 13 bits, and are stored in memory and disk as 16-bit integers. Consequently, it is difficult to view such images on computers with 8-bit red-green-blue (RGB) graphic systems. Two approaches have traditionally been used: (1) perform a one-time conversion of the 16-bit image data to 8-bit gray-scale data, and then adjust the brightness and contrast of the image by manipulating the color palette (palette animation); and (2) use a software lookup table to interactively convert the 16-bit image data to 8-bit gray-scale values with different window width and window level parameters. The first method can adjust image appearance in real time, but some image features may not be visible because of the lack of access to the full contrast range of the image and any region of interest measurements may be inaccurate. The second method allows "windowing" and "leveling" through the full contrast range of the image, but there is a delay after each adjustment that some users may find objectionable. We describe a method that combines palette animation and the software lookup table conversion method that optimizes the changes in image contrast and brightness on computers with standard 8-bit RGB graphic hardware--the dual lookup table algorithm. This algorithm links changes in the window/level control to changes in image contrast and brightness via palette animation.(ABSTRACT TRUNCATED AT 250 WORDS)
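The software lookup-table conversion described above is the standard window/level ramp with clamping; rebuilding the 16-bit-to-8-bit table for one control setting can be sketched as follows (the parameter values are chosen only for illustration):

```python
def window_level_lut(width, level, in_bits=16, out_max=255):
    """Build the lookup table mapping in_bits-deep pixel values to 8-bit
    gray levels for one window width/level setting: a linear ramp centered
    on `level`, spanning `width`, clamped to [0, out_max]."""
    lo = level - width / 2.0
    lut = []
    for p in range(1 << in_bits):
        g = (p - lo) / width * out_max
        lut.append(min(out_max, max(0, int(g))))
    return lut

lut = window_level_lut(width=400, level=1000)   # one example setting
```

Recomputing this table on every slider change is the delay the article's dual-lookup-table scheme hides behind palette animation.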
Hiwasa, Takeshi; Morishita, Junji; Hatanaka, Shiro; Ohki, Masafumi; Toyofuku, Fukai; Higashida, Yoshiharu
2009-01-01
Our purpose in this study was to examine the potential usefulness of liquid-crystal display (LCD) monitors having the capability of rendering more than 8 bits in display-bit depth. An LCD monitor capable of rendering 8, 10, and 12 bits was used. It was calibrated to the grayscale standard display function with a maximum luminance of 450 cd/m² and a minimum of 0.75 cd/m². For examining the grayscale resolution reported by ten observers, various simple test patterns having two different combinations of luminance in 8, 10, and 12 bits were randomly displayed on the LCD monitor. These patterns were placed on different uniform background luminance levels, such as 0, 50, and 100% of maximum luminance. All observers participating in this study distinguished a smaller difference in luminance than one gray level in 8 bits, irrespective of background luminance levels. As a result of the adaptation processes of the human visual system, observers distinguished a smaller difference in luminance as the luminance level of the test pattern was closer to the background. The smallest difference in luminance that observers distinguished was four gray levels in 12 bits, i.e., one gray level in 10 bits. Considering the results obtained with simple test patterns, medical images should ideally be displayed on LCD monitors having 10 bits or greater so that low-contrast objects with small differences in luminance can be detected, and for providing a smooth gradation of grayscale.
Electron-Beam Detection of Bits Reversibly Recorded on Epitaxial InSe/GaSe/Si Phase-Change Diodes
NASA Astrophysics Data System (ADS)
Chaiken, Alison; Gibson, Gary A.; Chen, John; Yeh, Bao S.; Jasinski, J. B.; Liliental‑Weber, Z.; Nauka, K.; Yang, C. C.; Lindig, D. D.; Subramanian, S.
2006-04-01
We demonstrate a data read-back scheme based on electron-beam induced current in a data storage device that utilizes thermal recording onto a phase-change medium. The phase-change medium is part of a heterojunction diode whose local charge-collection efficiency depends on the crystalline or amorphous state of a bit. Current gains up to 65 at 2 keV electron beam energy have been demonstrated using InSe/GaSe/Si epitaxial diodes. Fifteen write-erase cycles are obtained without loss of signal contrast by using a protective cap layer and short write pulses. 100 write-erase cycles have been achieved with some loss of contrast. Erasure times for the bits are longer than in similar polycrystalline In-Se media films. Possible reasons for the long erasure times are discussed in terms of a nucleation- or growth-dominated recrystallization. Prospects for extension to smaller bit sizes using electron-beam writing are considered.
Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P. K. A.
2014-01-01
All-optical analog-to-digital converters based on third-order nonlinear effects in silicon waveguides are a promising candidate to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon-nanocrystal. A nonlinear coefficient as high as 8708 W⁻¹m⁻¹ is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon-nanocrystal, which provides the enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power level of less than 0.4 W can be achieved, along with an effective number of bits of 1.98 and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems. PMID:25417847
Design and simulation of a 12-bit, 40 MSPS asynchronous SAR ADC for the readout of PMT signals
NASA Astrophysics Data System (ADS)
Liu, Jian-Feng; Zhao, Lei; Qin, Jia-Jun; Yang, Yun-Fan; Yu, Li; Liang, Yu; Liu, Shu-Bin; An, Qi
2016-11-01
High precision and large dynamic range measurement are required in the readout systems for the Water Cherenkov Detector Array (WCDA) in the Large High Altitude Air Shower Observatory (LHAASO). This paper presents a prototype of a 12-bit 40 MSPS Analog-to-Digital Converter (ADC) Application Specific Integrated Circuit (ASIC) designed for the readout of the LHAASO WCDA. Combining this ADC and the front-end ASIC finished in our previous work, high precision charge measurement can be achieved based on the digital peak detection method. This ADC is implemented based on a power-efficient Successive Approximation Register (SAR) architecture, which incorporates key parts such as a Capacitive Digital-to-Analog Converter (CDAC), dynamic comparator and asynchronous SAR control logic. The simulation results indicate that the Effective Number Of Bits (ENOB) with a sampling rate of 40 MSPS is better than 10 bits in an input frequency range below 20 MHz, while its core power consumption is 6.6 mW per channel. The above results are good enough for the readout requirements of the WCDA. Supported by Knowledge Innovation Program of the Chinese Academy of Sciences (KJCX2-YW-N27), CAS Center for Excellence in Particle Physics (CCEPP)
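The ENOB figure quoted from simulation relates to the signal-to-noise-and-distortion ratio (SINAD) through the standard conversion, which is easy to keep at hand when reading such specifications:

```python
def enob(sinad_db):
    """Effective number of bits from SINAD (dB): ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

def required_sinad(enob_bits):
    """Inverse relation: the SINAD (dB) needed for a target ENOB."""
    return enob_bits * 6.02 + 1.76

# An ENOB better than 10 bits, as in the abstract, implies SINAD > ~62 dB.
sinad_for_10_bits = required_sinad(10)
```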
NASA Astrophysics Data System (ADS)
Dey, Sukomal; Koul, Shiban K.
2014-09-01
A radio frequency micro-electro-mechanical system (RF-MEMS) 5 bit phase shifter based on a distributed MEMS transmission line concept with excellent phase accuracy and good repeatability is presented in this paper. The phase shifter is built with three fixed-fixed beams; one is switchable with electrostatic actuation and the other two are fixed for a metal-air-metal (MAM) capacitor. The design is based on a coplanar waveguide (CPW) configuration using an alumina substrate. Gold-based surface micromachining is used to develop the individual primary phase bits (11.25°/22.5°/45°/90°/180°), which are the fundamental building blocks of the complete 5 bit phase shifter. All of the primary phase bits are cascaded together to build the complete phase shifter. A detailed design methodology and performance analysis of the unit cell phase shifter has been carried out with structural and parametric optimization using an in-line bridge and MAM capacitors. The mechanical, electrical, transient, intermodulation distortion (IMD), temperature distribution, power handling and loss performances of the MEMS bridge have been experimentally obtained and validated using simulations to a reasonable extent. A single unit cell is able to provide 31 dB return loss, a maximum insertion loss of 0.085 dB and a differential phase shift of 5.95° (at 10 GHz) over the band of interest. Furthermore, all primary phase bits are individually tested to ensure overall optimum phase shifter performance. The complete 5 bit phase shifter demonstrates an average insertion loss of 4.72 dB with return loss of better than 12 dB within 8-12 GHz using periodic placement of 62 unit cells, and a maximum phase error of ±3.2° has been obtained at 10 GHz. Finally, the X-band 5 bit phase shifter is compared with the present state-of-the-art. The performance of the 5 bit phase shifter when mounted inside a test jig has been experimentally investigated and the results are presented. The total area of
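The state mapping implied by cascading the five binary-weighted primary bits can be sketched directly; each bit of the 5-bit control word switches in one section, giving 32 phase states in 11.25° steps (ideal values, ignoring the measured ±3.2° error):

```python
PRIMARY_BITS = [11.25, 22.5, 45.0, 90.0, 180.0]   # degrees, from the abstract

def phase_state(code):
    """Ideal differential phase for a 5-bit control word: each set bit
    switches in the corresponding cascaded primary section."""
    return sum(p for i, p in enumerate(PRIMARY_BITS) if (code >> i) & 1)

states = [phase_state(c) for c in range(32)]   # 0° to 348.75° in 11.25° steps
```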
Reexamination of quantum bit commitment: The possible and the impossible
D'Ariano, Giacomo Mauro; Kretschmann, Dennis; Schlingemann, Dirk; Werner, Reinhard F.
2007-09-15
Bit commitment protocols whose security is based on the laws of quantum mechanics alone are generally held to be impossible. We give a strengthened and explicit proof of this result. We extend its scope to a much larger variety of protocols, which may have an arbitrary number of rounds, in which both classical and quantum information is exchanged, and which may include aborts and resets. Moreover, we do not consider the receiver to be bound to a fixed 'honest' strategy, so that 'anonymous state protocols', which were recently suggested as a possible way to beat the known no-go results, are also covered. We show that any concealing protocol allows the sender to find a cheating strategy, which is universal in the sense that it works against any strategy of the receiver. Moreover, if the concealing property holds only approximately, the cheat goes undetected with a high probability, which we explicitly estimate. The proof uses an explicit formalization of general two-party protocols, which is applicable to more general situations, and an estimate about the continuity of the Stinespring dilation of a general quantum channel. The result also provides a natural characterization of protocols that fall outside the standard setting of unlimited available technology and thus may allow secure bit commitment. We present such a protocol whose security, perhaps surprisingly, relies on decoherence in the receiver's laboratory.