Science.gov

Sample records for achievable bit error

  1. Instantaneous bit-error-rate meter

    NASA Astrophysics Data System (ADS)

    Slack, Robert A.

    1995-06-01

    An instantaneous bit error rate meter provides an instantaneous, real time reading of bit error rate for digital communications data. Bit error pulses are input into the meter and are first filtered in a buffer stage to provide input impedance matching and desensitization to pulse variations in amplitude, rise time and pulse width. The bit error pulses are transformed into trigger signals for a timing pulse generator. The timing pulse generator generates timing pulses for each transformed bit error pulse, and is calibrated to generate timing pulses having a preselected pulse width corresponding to the baud rate of the communications data. An integrator generates a voltage from the timing pulses that is representative of the bit error rate as a function of the data transmission rate. The integrated voltage is then displayed on a meter to indicate the bit error rate.
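
    The measurement principle described above can be sketched numerically. This is an assumed model of the idea (each error pulse stretched to a timing pulse one bit period wide, then averaged by an integrator), not the patented circuit:

```python
# Sketch (assumed model, not the patented circuit): each bit error triggers a
# timing pulse whose width equals one bit period; an integrator converts the
# duty cycle of the pulse train into a voltage proportional to the BER.
import random

def meter_voltage(n_bits, ber, baud=1.0, v_ref=1.0):
    """Average the timing-pulse train over n_bits; returns ~v_ref * ber."""
    bit_period = 1.0 / baud
    # Each errored bit contributes one timing pulse of width = bit_period,
    # so the duty cycle of the pulse train equals the bit error rate.
    errors = sum(random.random() < ber for _ in range(n_bits))
    duty_cycle = errors * bit_period / (n_bits * bit_period)
    return v_ref * duty_cycle

random.seed(0)
v = meter_voltage(100_000, ber=0.01)
print(round(v, 3))  # close to the true BER of 0.01
```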

  2. Reading boundless error-free bits using a single photon

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Shapiro, Jeffrey H.

    2013-06-01

    We address the problem of how efficiently information can be encoded into and read out reliably from a passive reflective surface that encodes classical data by modulating the amplitude and phase of incident light. We show that nature imposes no fundamental upper limit to the number of bits that can be read per expended probe photon and demonstrate the quantum-information-theoretic trade-offs between the photon efficiency (bits per photon) and the encoding efficiency (bits per pixel) of optical reading. We show that with a coherent-state (ideal laser) source, an on-off (amplitude-modulation) pixel encoding, and shot-noise-limited direct detection (an overly optimistic model for commercial CD and DVD drives), the highest photon efficiency achievable in principle is about 0.5 bits read per transmitted photon. We then show that a coherent-state probe can read unlimited bits per photon when the receiver is allowed to make joint (inseparable) measurements on the reflected light from a large block of phase-modulated memory pixels. Finally, we show an example of a spatially entangled nonclassical light probe and a receiver design—constructible using a single-photon source, beam splitters, and single-photon detectors—that can in principle read any number of error-free bits of information. The probe is a single photon prepared in a uniform coherent superposition of multiple orthogonal spatial modes, i.e., a W state. The code and joint-detection receiver complexity required by a coherent-state transmitter to achieve comparable photon efficiency performance is shown to be much higher in comparison to that required by the W-state transceiver, although this advantage rapidly disappears with increasing loss in the system.
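
    The "about 0.5 bits per transmitted photon" figure for the coherent-state on-off case can be checked numerically under a simplified model that is my assumption, not the paper's derivation: an ideal Z-channel in which an "on" pixel (prior probability p, mean photon number N) produces a detector click with probability 1 - e^{-N}, an "off" pixel never clicks, and loss and dark counts are ignored.

```python
# Numerical check (sketch) of the ~0.5 bit/photon limit for on-off encoding
# with a coherent-state probe and shot-noise-limited direct detection,
# modeled as an ideal Z-channel (assumption; loss and dark counts ignored).
import math

def h2(x):  # binary entropy in bits
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bits_per_photon(p, N):
    """Mutual information of the Z-channel per transmitted photon."""
    q = 1 - math.exp(-N)          # click probability for an "on" pixel
    info = h2(p * q) - p * h2(q)  # I(X;Y) = H(Y) - H(Y|X)
    return info / N               # N photons are transmitted per pixel

# Scan the on-pixel prior p and probe strength N for the best efficiency.
best = max(bits_per_photon(p / 1000, N)
           for p in range(1, 1000) for N in (0.001, 0.01, 0.1, 0.5, 1.0))
print(round(best, 2))  # peaks near 0.53, i.e. "about 0.5 bits per photon"
```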

  3. Multiple-Bit Errors Caused By Single Ions

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Edmonds, Larry D.; Smith, Laurence S.

    1991-01-01

    Report describes an experimental and computer-simulation study of multiple-bit errors caused by the impingement of single energetic ions on a 256-kb dynamic random-access memory (DRAM) integrated circuit. The studies illustrate the effects of different mechanisms for the transport of charge from ion tracks to various elements of integrated circuits, and show that multiple-bit errors occur in two different types of clusters around the ion tracks that cause them.

  4. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    SciTech Connect

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate for the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is fed forward to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and the introduction of any ancillary state. We also discuss a possible generalization of the homodyne detection scheme.

  5. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contains a recommended circuit approach, specifies the pseudo-random (PN) sequence to be used, and details the properties of the sequence. Calculations showing the probability of failing to meet the required transition density are included. A computer simulation of the data stream and PN cover sequence is provided. All worst-case situations were simulated, and the bit transition density exceeded the requirement. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated successfully. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.

  6. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  7. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements, made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center, of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  8. Optical refractive synchronization: bit error rate analysis and measurement

    NASA Astrophysics Data System (ADS)

    Palmer, James R.

    1999-11-01

    This paper describes the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, it outlines how SilkRoad, Inc., transports a multiplicity of SONET signals across more than 100 km of fiber without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to show how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for the transmission of multiple OC-12 and OC-48 SONET signals sent over a fiber optic cable longer than 100 km. The recovery and transformation modules for the modification and transportation of these SONET signals are described.

  9. The effect of bandlimiting of a PCM/NRZ signal on the bit-error probability.

    NASA Technical Reports Server (NTRS)

    Tu, K.; Shehadeh, N. M.

    1971-01-01

    The explicit expressions for the intersymbol interference as a function of the bandwidth-bit duration product and bit positions are determined for PCM/NRZ systems operating in the presence of Gaussian noise in a bandlimited channel. Two types of linear bit detectors are considered: integrate-and-dump, and bandlimit-and-sample. Restriction of bandwidth results in a performance degradation. The degradation of signal-to-noise ratio is presented as a function of the bandwidth-bit duration product and bit patterns. The average probability of bit error is computed for various bandwidths. Calculations of the upper and lower bounds on the error probability are also presented.
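
    A hedged sketch of how such intersymbol interference degrades the error probability, under assumptions that simplify the paper's analysis: a single-pole filter with 3 dB bandwidth B, and interference only from the immediately preceding bit.

```python
# Hedged sketch (not the paper's exact analysis): average bit-error probability
# for bandlimited NRZ when only the immediately preceding bit contributes ISI.
# A single-pole filter with 3 dB bandwidth B leaves a residual fraction
# d = exp(-2*pi*B*T) of the previous bit at the sampling instant.
import math

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bandlimited(ebno_db, bt_product):
    gamma = math.sqrt(2 * 10 ** (ebno_db / 10))  # ideal matched-filter argument
    d = math.exp(-2 * math.pi * bt_product)      # residual ISI amplitude
    # The previous bit is equally likely to help or hurt the eye opening.
    return 0.5 * (q_func((1 - d) * gamma) + q_func((1 + d) * gamma))

ideal = q_func(math.sqrt(2 * 10 ** 0.7))         # Eb/N0 = 7 dB, no ISI
for bt in (1.0, 0.7, 0.5):
    print(bt, ber_bandlimited(7.0, bt))          # degrades as B*T shrinks
```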

  10. Multi-bit upset aware hybrid error-correction for cache in embedded processors

    NASA Astrophysics Data System (ADS)

    Jiaqi, Dong; Keni, Qiu; Weigong, Zhang; Jing, Wang; Zhenzhen, Wang; Lihua, Ding

    2015-11-01

    For a processor working in the radiation environment of space, cosmic rays and high-energy particle radiation tend to cause single-event effects in circuits and system failures, so the reliability of the processor has become an increasingly serious issue. BCH-based error correction codes can correct multi-bit errors, but they introduce large latency overhead. This paper proposes a hybrid error correction approach that combines BCH and EDAC to correct both multi-bit and single-bit errors in caches at low cost. The proposed technique can correct errors of up to four bits, and corrects single-bit errors in one cycle. Evaluation results show that the proposed hybrid error-correction scheme can improve the performance of cache accesses by up to 20% compared to the pure BCH scheme.
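
    A minimal sketch of the hybrid idea, assuming a SEC-DED Hamming(8,4) code for the fast single-bit path (the paper's actual EDAC and BCH codes are not specified here; the multi-bit BCH path is stubbed):

```python
# Illustrative sketch of the hybrid scheme (assumed codes, not the paper's):
# a fast SEC-DED Hamming(8,4) path corrects single-bit errors immediately,
# while detected double-bit errors are routed to a slower multi-bit corrector
# (BCH in the paper; stubbed here).

def encode(data4):
    d = data4  # [d0, d1, d2, d3]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    code7 = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # Hamming positions 1..7
    overall = 0
    for b in code7:
        overall ^= b                              # overall parity bit
    return code7 + [overall]

def decode(code8):
    c, overall = code8[:7], code8[7]
    syndrome = 0
    for i, b in enumerate(c, start=1):
        if b:
            syndrome ^= i       # XOR of positions of set bits = error position
    parity_ok = (sum(c) + overall) % 2 == 0
    if syndrome == 0 and parity_ok:
        return "ok", c
    if not parity_ok:                 # odd number of flips -> assume single
        if syndrome:                  # error among positions 1..7
            c[syndrome - 1] ^= 1
        return "corrected-1bit", c    # fast path: fixed in one pass
    return "multi-bit->BCH", None     # even flips, syndrome != 0: slow path

word = encode([1, 0, 1, 1])
bad1 = word[:]; bad1[4] ^= 1                 # single-bit error
bad2 = word[:]; bad2[1] ^= 1; bad2[5] ^= 1   # double-bit error
print(decode(bad1)[0], decode(bad2)[0])
```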

  11. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation) and has broad application prospects in space optical communication. Set partitioning, used in the standard T-APPM algorithm, has optimal performance in multi-carrier systems, but whether it is optimal in APPM, a single-carrier system, is unknown. To address this question, we first study the atmospheric channel model with weak turbulence; we then propose a modified T-APPM algorithm that uses Gray-code mapping instead of set-partitioning mapping; finally, we simulate both algorithms with the Monte Carlo method. Simulation results show that, at a bit error rate of 10^-4, the modified T-APPM algorithm achieves a 0.4 dB gain in SNR, effectively improving the system's error performance.
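
    The Gray-code mapping substituted for set partitioning can be illustrated with the standard construction, in which the labels of adjacent symbols differ in exactly one bit (so the most likely symbol errors cause only single bit errors):

```python
# Standard Gray-code labeling (illustrative; the paper's mapper details may
# differ): adjacent symbol labels differ in exactly one bit.
def gray(n):
    return n ^ (n >> 1)

labels = [gray(n) for n in range(8)]      # e.g. 8-ary symbol labels
print([format(g, '03b') for g in labels])
# Adjacent labels differ in exactly one bit:
for a, b in zip(labels, labels[1:]):
    assert bin(a ^ b).count('1') == 1
```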

  12. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  13. Reducing Measurement Error in Student Achievement Estimation

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2008-01-01

    The achievement level is a variable measured with error that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level, but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…

  14. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
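
    The bit-scrambler approach can be sketched as follows; the x^7 + x + 1 polynomial and all-ones seed are illustrative assumptions, not the PN sequence the report specifies:

```python
# Sketch of the bit-scrambler idea: XOR the data with a PN cover sequence from
# an LFSR so long runs of identical bits still produce transitions on the link;
# XORing with the same sequence at the receiver restores the data.
def pn_sequence(length, taps=(7, 1), seed=0b1111111):
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)
        fb = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        state = (state >> 1) | (fb << (taps[0] - 1))
    return out

data = [0] * 32                            # worst case: no transitions at all
cover = pn_sequence(len(data))
scrambled = [d ^ c for d, c in zip(data, cover)]
transitions = sum(a != b for a, b in zip(scrambled, scrambled[1:]))
descrambled = [s ^ c for s, c in zip(scrambled, cover)]
print(transitions, descrambled == data)
```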

  15. Reducing bit-error rate with optical phase regeneration in multilevel modulation formats.

    PubMed

    Hesketh, Graham; Horak, Peter

    2013-12-15

    We investigate theoretically the benefits of using all-optical phase regeneration in a long-haul fiber optic link. We also introduce a design for a device capable of phase regeneration without phase-to-amplitude noise conversion. We simulate numerically the bit-error rate of a wavelength division multiplexed optical communication system over many fiber spans with periodic reamplification and compare the results obtained with and without phase regeneration at half the transmission distance when using the new design or an existing design. Depending on the modulation format, our results suggest that all-optical phase regeneration can reduce the bit-error rate by up to two orders of magnitude and that the amplitude preserving design offers a 50% reduction in bit-error rate relative to existing technology.

  16. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is that the data sequences never repeat; rather, they lie in a chaotic region. However, past, present, and future data in these sequences are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data and predict future data under limited weight-quantization constraints, which helps provide better timely estimation for intelligent control systems. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as few as 4 bits of weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization become available, and that error surfaces with the round-off technique are more symmetric around zero than those with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
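
    The two 4-bit quantization schemes compared in the paper can be illustrated in isolation. This shows only the mechanics of round-off versus truncation (symmetric versus one-sided quantization error), not the CEP algorithm itself; the weight range and example values are assumptions:

```python
# Illustration of the two weight-quantization schemes: round-off maps a weight
# to the nearest quantization level (error symmetric about zero), truncation
# always maps toward zero (error biased to one side).
def quantize(w, bits=4, w_max=1.0, mode="round"):
    step = 2 * w_max / (2 ** bits)               # uniform levels over [-w_max, w_max]
    n = w / step
    q = round(n) if mode == "round" else int(n)  # int() truncates toward zero
    return q * step

weights = [0.83, -0.41, 0.07, -0.66]
for w in weights:
    print(w, quantize(w, mode="round"), quantize(w, mode="truncate"))
```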

  17. Bit error rate testing of a proof-of-concept model baseband processor

    NASA Technical Reports Server (NTRS)

    Stover, J. B.; Fujikawa, G.

    1986-01-01

    Bit-error-rate tests were performed on a proof-of-concept baseband processor. The BBP, which operates at an intermediate frequency in the C-Band, demodulates, demultiplexes, routes, remultiplexes, and remodulates digital message segments received from one ground station for retransmission to another. Test methods are discussed and test results are compared with the Contractor's test results.

  18. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    The various methods of high bit transition density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties, and system constraints. A computer simulation of the system using the specific PN code recommended is included.

  19. Concatenated block codes for unequal error protection of embedded bit streams.

    PubMed

    Arslan, Suayb S; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    A state-of-the-art progressive source encoder is combined with a concatenated block coding mechanism to produce a robust source transmission system for embedded bit streams. The proposed scheme efficiently trades off the available total bit budget between information bits and parity bits through efficient information block size adjustment, concatenated block coding, and random block interleavers. The objective is to create embedded codewords such that, for a particular information block, the necessary protection is obtained via multiple channel encodings, contrary to the conventional methods that use a single code rate per information block. This way, a more flexible protection scheme is obtained. The information block size and concatenated coding rates are judiciously chosen to maximize system performance, subject to a total bit budget. The set of codes is usually created by puncturing a low-rate mother code so that a single encoder-decoder pair is used. The proposed scheme is shown to effectively enlarge this code set by providing more protection levels than is possible using the code rate set directly. At the expense of complexity, average system performance is shown to be significantly better than that of several known comparison systems, particularly at higher channel bit error rates.
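
    The idea of building a code set by puncturing a single low-rate mother code can be sketched as follows. The K=3 (7,5) convolutional encoder and the keep/delete pattern are illustrative assumptions, not the paper's codes:

```python
# Sketch of rate adaptation by puncturing one rate-1/2 mother code: a single
# encoder serves several code rates by deleting output bits according to a
# puncturing pattern (assumed example codes, not the paper's).
def conv_encode_half(bits, g1=0b111, g2=0b101):   # K=3 (7,5) mother code
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111        # last three input bits
        out += [bin(state & g1).count('1') & 1,   # parity w.r.t. generator g1
                bin(state & g2).count('1') & 1]   # parity w.r.t. generator g2
    return out

def puncture(coded, pattern):
    # pattern is a flat keep/delete mask applied cyclically to the output.
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
mother = conv_encode_half(msg)                    # rate 1/2: 16 coded bits
rate23 = puncture(mother, [1, 1, 1, 0])           # keep 3 of 4 -> rate 2/3
print(len(mother), len(rate23))
```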

  20. Research and implementation of the burst-mode optical signal bit-error test

    NASA Astrophysics Data System (ADS)

    Huang, Qiu-yuan; Ma, Chao; Shi, Wei; Chen, Wei

    2009-08-01

    Based on the characteristics of the TDMA uplink optical signal in a PON system, this article puts forward an FPGA-based method for high-speed burst-mode optical bit-error-rate testing. It proposes a new method of generating the burst signal pattern, including user-defined and pseudo-random patterns; realizes slip synchronization and self-synchronization of error detection using a data decomposition technique together with traditional code synchronization technology; completes high-speed burst-signal clock synchronization using a rapid phase-locked loop delay synchronization technique in the external circuit; and finishes the bit-error-rate test of the high-speed burst optical signal.

  1. Detecting bit-flip errors in a logical qubit using stabilizer measurements.

    PubMed

    Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L

    2015-04-29

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
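
    A classical sketch of what the two parity measurements reveal for the three-qubit repetition code: the Z1Z2 and Z2Z3 parities locate any single bit flip without reading out the encoded value itself (the experiment measures these parities non-destructively on a superconducting processor, which this toy model does not capture):

```python
# Classical analogue of the two stabilizer (parity) checks of the three-qubit
# repetition code: the syndrome locates a single bit flip and is independent
# of the encoded logical value.
def syndrome(q):                     # q = [q1, q2, q3]
    return (q[0] ^ q[1], q[1] ^ q[2])

LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}  # flipped qubit index

for logical in (0, 1):
    for flip in (None, 0, 1, 2):
        q = [logical] * 3
        if flip is not None:
            q[flip] ^= 1
        assert LOOKUP[syndrome(q)] == flip  # independent of 'logical'
print("all single bit-flip errors located")
```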

  3. Bit Error Rate Performance of Partially Coherent Dual-Branch SSC Receiver over Composite Fading Channels

    NASA Astrophysics Data System (ADS)

    Milić, Dejan N.; Đorđević, Goran T.

    2013-01-01

    In this paper, we study the effects of imperfect reference signal recovery on the bit error rate (BER) performance of dual-branch switch and stay combining receiver over Nakagami-m fading/gamma shadowing channels with arbitrary parameters. The average BER of quaternary phase shift keying is evaluated under the assumption that the reference carrier signal is extracted from the received modulated signal. We compute numerical results illustrating simultaneous influence of average signal-to-noise ratio per bit, fading severity, shadowing, phase-locked loop bandwidth-bit duration (BLTb) product, and switching threshold on BER performance. The effects of BLTb on receiver performance under different channel conditions are emphasized. Optimal switching threshold is determined which minimizes BER performance under given channel and receiver parameters.

  4. Reproduced waveform and bit error rate analysis of a patterned perpendicular medium R/W channel

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.; Saito, H.; Aoi, H.; Muraoka, H.; Nakamura, Y.

    2005-05-01

    Patterned media were investigated as candidates for 1 Tb/in² recording. In the case of recording with a patterned medium, the noise due to the irregularity of the pattern has to be taken into account instead of the medium noise due to grains. The bit error rate was studied for both continuous and patterned media to evaluate the advantages of patterning. The bit aspect ratio (BPI/TPI) was set to two for the patterned media and four for the continuous medium. The bit error rate (BER), calculated with a PR(1,1) channel simulator, indicated that for both double-layered and single-layered patterned media an improvement of the BER over conventional continuous media is expected when the patterning jitter is controlled to within 8%. When the system noise is large, the BER of single-layered patterned media deteriorates more rapidly than that of double-layered media, due to the higher boost in the PR(1,1) channel. It was found that making the land-length to bit-length ratio large was quite effective at improving the BER.

  5. Characterization of multiple-bit errors from single-ion tracks in integrated circuits

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Edmonds, L. D.; Smith, L. S.

    1989-01-01

    The spread of charge induced by an ion track in an integrated circuit and its subsequent collection at sensitive nodal junctions can cause multiple-bit errors. The authors have experimentally and analytically investigated this phenomenon using a 256-kb dynamic random-access memory (DRAM). The effects of different charge-transport mechanisms are illustrated, and two classes of ion-track multiple-bit error clusters are identified. It is demonstrated that ion tracks that hit a junction can affect the lateral spread of charge, depending on the nature of the pull-up load on the junction being hit. Ion tracks that do not hit a junction allow the nearly uninhibited lateral spread of charge.

  6. Error tolerance of topological codes with independent bit-flip and measurement errors

    NASA Astrophysics Data System (ADS)

    Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.

    2016-07-01

    Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.

  7. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTR's) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BER's) accompany the transfers. The facility processes about 10^11 bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTR's. When acquired, the 16 HDTR's offered state-of-the-art performance of 1 × 10^-6 BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 × 10^-7 and (2) further modifying to include a bit error correction capability to attain a BER of 2 × 10^-9. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTR's.

  8. Achieving high bit rate logical stochastic resonance in a bistable system by adjusting parameters

    NASA Astrophysics Data System (ADS)

    Yang, Ding-Xin; Gu, Feng-Shou; Feng, Guo-Jin; Yang, Yong-Min; Ball, Andrew

    2015-11-01

    The phenomenon of logical stochastic resonance (LSR) in a nonlinear bistable system has been demonstrated by numerical simulations and experiments. However, the bit rates of the logical signals are relatively low and not suitable for practical applications. First, we examine the responses of the bistable system with fixed parameters to logic input signals of different bit rates, showing that arbitrarily high bit rate LSR cannot be achieved in such a system. Then, a normalized transform of the LSR bistable system is introduced through a kind of variable substitution. Based on this transform, it is found that LSR for logic signals of arbitrarily high bit rate can be achieved in a bistable system by adjusting the parameters of the system, setting the bias value, and properly amplifying the amplitudes of the logic input signals and noise. Finally, the desired OR and AND logic outputs for high bit rate logic inputs in a bistable system are obtained by numerical simulations. The study might improve the feasibility of LSR in practical engineering applications. Project supported by the National Natural Science Foundation of China (Grant No. 51379526).
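
    A minimal deterministic sketch of logic gates in a bistable system dx/dt = x - x^3 + I1 + I2 + b. Noise, which LSR proper uses to make weaker inputs work, is omitted here so the result is reproducible, and the input amplitudes and bias values are illustrative choices:

```python
# Deterministic sketch of bistable logic: with suprathreshold inputs, the well
# the state settles into encodes the gate output (1 = positive well). LSR
# proper adds noise so weaker inputs also work; it is omitted here.
def gate_output(i1, i2, bias, dt=0.01, steps=5000):
    amp = 0.45                      # logic 1 -> +0.45, logic 0 -> -0.45
    drive = (amp if i1 else -amp) + (amp if i2 else -amp) + bias
    x = 0.0
    for _ in range(steps):          # Euler integration of dx/dt = x - x^3 + c
        x += dt * (x - x ** 3 + drive)
    return 1 if x > 0 else 0

OR  = [gate_output(a, b, bias=+0.45) for a, b in ((0,0), (0,1), (1,0), (1,1))]
AND = [gate_output(a, b, bias=-0.45) for a, b in ((0,0), (0,1), (1,0), (1,1))]
print(OR, AND)   # flipping the sign of the bias switches OR <-> AND
```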

  9. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  11. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
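
    The decimation idea can be sketched in software. Here the companion matrix of the small illustrative polynomial x^4 + x + 1 is raised to the k-th power (n = 1 bit per generator per step, for brevity; the patent uses the (n*k)-th power), and the interleaved parallel outputs are checked against serial generation:

```python
# Sketch of LFSR decimation: k generators seeded with successive states, each
# stepping by D = M^k, jointly emit every bit of the serial sequence.
# Polynomial x^4 + x + 1 -> recurrence b[t+4] = b[t+1] ^ b[t] (illustrative).
M = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 1, 0, 0]]                   # companion matrix over GF(2)

def mat_vec(m, v):
    return [sum(m[i][j] & v[j] for j in range(4)) & 1 for i in range(4)]

def mat_mul(a, b):
    return [[sum(a[i][t] & b[t][j] for t in range(4)) & 1 for j in range(4)]
            for i in range(4)]

k = 3
D = M
for _ in range(k - 1):
    D = mat_mul(D, M)                # decimation matrix D = M^k

s0 = [1, 0, 0, 0]
serial, s = [], s0                   # reference: serial generation of 30 bits
for _ in range(30):
    serial.append(s[0]); s = mat_vec(M, s)

seeds, s = [], s0                    # generator j starts at state M^j s0
for _ in range(k):
    seeds.append(s); s = mat_vec(M, s)
parallel = [[] for _ in range(k)]
for j in range(k):
    s = seeds[j]
    for _ in range(10):              # each generator steps by D = M^k
        parallel[j].append(s[0]); s = mat_vec(D, s)

interleaved = [parallel[t % k][t // k] for t in range(30)]
print(interleaved == serial)
```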

  12. Noise and measurement errors in a practical two-state quantum bit commitment protocol

    NASA Astrophysics Data System (ADS)

    Loura, Ricardo; Almeida, Álvaro J.; André, Paulo S.; Pinto, Armando N.; Mateus, Paulo; Paunković, Nikola

    2014-05-01

We present a two-state practical quantum bit commitment protocol, the security of which is based on current technological limitations, namely the nonexistence of either stable long-term quantum memories or nondemolition measurements. For an optical realization of the protocol, we model the errors, which occur due to the noise and equipment (source, fibers, and detectors) imperfections, accumulated during emission, transmission, and measurement of photons. The optical part is modeled as a combination of a depolarizing channel (white noise), unitary evolution (e.g., systematic rotation of the polarization axis of photons), and two other basis-dependent channels, namely the phase- and bit-flip channels. We analyze quantitatively the effects of noise using two common information-theoretic measures of probability distribution distinguishability: the fidelity and the relative entropy. In particular, we discuss the optimal cheating strategy and show that it is always advantageous for a cheating agent to add some amount of white noise, an effect not present in standard quantum security protocols. We also analyze the protocol's security when the use of (im)perfect nondemolition measurements and noisy or bounded quantum memories is allowed. Finally, we discuss errors occurring due to a finite detector efficiency, dark counts, and imperfect single-photon sources, and we show that the effects are the same as those of standard quantum cryptography.

  13. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  14. Effect of audio bandwidth and bit error rate on PCM, ADPCM and LPC speech coding algorithm intelligibility

    NASA Astrophysics Data System (ADS)

    McKinley, Richard L.; Moore, Thomas J.

    1987-02-01

The effects of audio bandwidth and bit error rate on the speech intelligibility of voice coders in noise are described and quantified. Three speech coding techniques were investigated: pulse code modulation (PCM), adaptive differential pulse code modulation (ADPCM), and linear predictive coding (LPC). Speech intelligibility was measured in realistic acoustic noise environments by a panel of 10 subjects performing the Modified Rhyme Test. Summary data are presented along with planned future research on optimizing the audio bandwidth versus bit error rate tradeoff for best speech intelligibility.

  15. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid

    1987-08-01

The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, intelligibility can be expected to fall off rapidly; at DRT scores below 50 percent, the recognition of a distinctive vocabulary should also fall below 50 percent.

  16. GaAlAs laser temperature effects on the BER performance of a gigabit PCM fiber system. [Bit Error Rate

    NASA Technical Reports Server (NTRS)

    Eng, S. T.; Bergman, L. A.

    1982-01-01

The performance of a gigabit pulse-code modulation fiber system has been investigated as a function of laser temperature. The bit error rate shows an improvement for temperatures in the range of -15 C to -35 C. A tradeoff seems possible between relaxation oscillation, rise time, and signal-to-noise ratio.

  17. Scintillation index and bit error rate of hollow Gaussian beams in atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Qiao, Na; Zhang, Bin; Pan, Pingping; Dan, Youquan

    2011-06-01

Based on the Huygens-Fresnel principle and the Rytov method, the on-axis scintillation index is derived for hollow Gaussian beams (HGBs) in weak turbulence. The relationship between bit error rate (BER) and scintillation index is found, considering only the effect of atmospheric turbulence, from the probability distribution of the intensity fluctuation, and an expression for the BER is obtained. Furthermore, the scintillation and BER properties of HGBs in turbulence are discussed in detail. The results show that the scintillation index and BER of HGBs depend on the propagation length, the structure constant of the refractive-index fluctuations of the turbulence, the wavelength, the beam order, and the waist width of the fundamental Gaussian beam. The scintillation index increases with propagation length in turbulence, and it increases more slowly for HGBs of higher beam order. The BER of HGBs increases rapidly with propagation length in turbulence. For the same propagation distance, the BER of the fundamental Gaussian beam is the greatest, and that of higher-order HGBs is smaller.
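The link between scintillation index and BER described above can be illustrated numerically. Under weak-turbulence (lognormal) irradiance statistics, the average BER is the conditional BER averaged over intensity fluctuations; the Monte Carlo sketch below assumes a simple on-off-keying conditional BER of Q(SNR·I/⟨I⟩), which is an illustrative stand-in, not the paper's exact expression, and all parameter values are made up.

```python
import math
import random

def q(x):
    """Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_lognormal(snr, scint_index, n=200_000, seed=1):
    """Average the conditional BER Q(snr * I) over lognormal irradiance I
    with unit mean and the given scintillation index sigma_I^2."""
    sigma2 = math.log(1.0 + scint_index)   # log-irradiance variance
    mu = -sigma2 / 2.0                     # keeps E[I] = 1
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        i = math.exp(rng.gauss(mu, math.sqrt(sigma2)))
        total += q(snr * i)
    return total / n

weak = ber_lognormal(4.0, 0.05)    # mild scintillation
strong = ber_lognormal(4.0, 0.5)   # stronger scintillation
assert strong > weak > q(4.0)      # fading always degrades the average BER
```

The final assertion is Jensen's inequality in action: Q is convex on the positive axis, so any intensity fluctuation raises the average BER above the no-turbulence value Q(SNR), and the penalty grows with the scintillation index, consistent with the abstract's trend.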

  18. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    NASA Astrophysics Data System (ADS)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10^-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10^-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10^-3 BER.

  19. General closed-form bit-error rate expressions for coded M-distributed atmospheric optical communications.

    PubMed

    Balsells, José M Garrido; López-González, Francisco J; Jurado-Navas, Antonio; Castillo-Vázquez, Miguel; Notario, Antonio Puerta

    2015-07-01

    In this Letter, general closed-form expressions for the average bit error rate in atmospheric optical links employing rate-adaptive channel coding are derived. To characterize the irradiance fluctuations caused by atmospheric turbulence, the Málaga or M distribution is employed. The proposed expressions allow us to evaluate the performance of atmospheric optical links employing channel coding schemes such as OOK-GSc, OOK-GScc, HHH(1,13), or vw-MPPM with different coding rates and under all regimes of turbulence strength. A hyper-exponential fitting technique applied to the conditional bit error rate is used in all cases. The proposed closed-form expressions are validated by Monte-Carlo simulations.

  20. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  1. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  2. General closed-form bit-error rate expressions for coded M-distributed atmospheric optical communications.

    PubMed

    Balsells, José M Garrido; López-González, Francisco J; Jurado-Navas, Antonio; Castillo-Vázquez, Miguel; Notario, Antonio Puerta

    2015-07-01

    In this Letter, general closed-form expressions for the average bit error rate in atmospheric optical links employing rate-adaptive channel coding are derived. To characterize the irradiance fluctuations caused by atmospheric turbulence, the Málaga or M distribution is employed. The proposed expressions allow us to evaluate the performance of atmospheric optical links employing channel coding schemes such as OOK-GSc, OOK-GScc, HHH(1,13), or vw-MPPM with different coding rates and under all regimes of turbulence strength. A hyper-exponential fitting technique applied to the conditional bit error rate is used in all cases. The proposed closed-form expressions are validated by Monte-Carlo simulations. PMID:26125336

  3. Indirect measurement of a laser communications bit-error-rate reduction with low-order adaptive optics.

    PubMed

    Tyson, Robert K; Canning, Douglas E

    2003-07-20

    In experimental measurements of the bit-error rate for a laser communication system, we show improved performance with the implementation of low-order (tip/tilt) adaptive optics in a free-space link. With simulated atmospheric tilt injected by a conventional piezoelectric tilt mirror, an adaptive optics system with a Xinetics tilt mirror was used in a closed loop. The laboratory experiment replicated a monostatic propagation with a cooperative wave front beacon at the receiver. Owing to constraints in the speed of the processing hardware, the data is scaled to represent an actual propagation of a few kilometers under moderate scintillation conditions. We compare the experimental data and indirect measurement of the bit-error rate before correction and after correction, with a theoretical prediction.

  4. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
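A minimal sketch of the idea above, with made-up |S21| samples standing in for measured data: estimate the mean and noise standard deviation of the S-parameter, then treat the binary decision as Gaussian so that BER ≈ Q(mean/σ). The 0.9 mean and 0.15 noise level are hypothetical; the paper's actual measurement procedure is more involved.

```python
import math
import random
import statistics

def q(x):
    """Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Hypothetical noisy |S21| measurements (linear magnitude) of a passive
# component; in practice these would come from a network analyzer sweep.
rng = random.Random(0)
s21 = [0.9 + rng.gauss(0.0, 0.15) for _ in range(1000)]

mean = statistics.fmean(s21)
std = statistics.stdev(s21)

# With Gaussian noise, the probability of the sampled level crossing the
# decision threshold is the normal tail beyond mean/std deviations.
ber = q(mean / std)
```

Here mean/std is about 6, so the predicted BER lands near 10^-9; doubling the noise standard deviation would raise it by many orders of magnitude, which is why the noise statistics of the S-parameters are the quantity of interest.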

  5. Corrected RMS Error and Effective Number of Bits for Sinewave ADC Tests

    SciTech Connect

    Jerome J. Blair

    2002-03-01

A new definition is proposed for the effective number of bits of an ADC. This definition removes the variation in the calculated effective bits when the amplitude and offset of the sinewave test signal are slightly varied. This variation is most pronounced when test signals with amplitudes of a small number of code bin widths are applied to very low-noise ADCs. The effectiveness of the proposed definition is compared with that of other proposed definitions over a range of signal amplitudes and noise levels.
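For context, the uncorrected textbook relation between measured RMS error and effective bits is easy to state. Blair's corrected definition additionally stabilizes the result against small amplitude and offset changes of the test sinewave, which this sketch does not attempt to reproduce.

```python
import math

def enob_from_rms(n_bits, full_scale, measured_rms):
    """Effective number of bits from a sinewave test:
    ENOB = N - log2(measured_rms / ideal_rms), where the ideal
    quantization noise of an N-bit ADC is one LSB / sqrt(12)."""
    lsb = full_scale / 2 ** n_bits
    ideal_rms = lsb / math.sqrt(12)
    return n_bits - math.log2(measured_rms / ideal_rms)

# A 12-bit, 2 V full-scale ADC whose measured RMS error equals the ideal
# quantization noise achieves all 12 bits...
ideal = (2.0 / 2 ** 12) / math.sqrt(12)
assert abs(enob_from_rms(12, 2.0, ideal) - 12.0) < 1e-9
# ...and doubling the RMS error costs exactly one effective bit.
assert abs(enob_from_rms(12, 2.0, 2 * ideal) - 11.0) < 1e-9
```

The instability the abstract targets enters through measured_rms: for near-ideal ADCs tested with signals only a few code bins wide, measured_rms itself swings with tiny amplitude/offset changes, so the corrected definition matters most exactly there.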

  6. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.

  7. Alpha Dithering to Correct Low-Opacity 8 Bit Compositing Errors

    SciTech Connect

    Williams, P L; Frank, R J; LaMar, E C

    2003-03-31

This paper describes and analyzes a dithering technique for accurately specifying small values of opacity (α) that would normally not be possible because of the limited number of bits available in the alpha channel of graphics hardware. This dithering technique addresses problems related to compositing numerous low-opacity semitransparent polygons to create volumetric effects with graphics hardware. The paper also describes the causes of, and a possible solution to, artifacts that arise from parallel or distributed volume rendering using bricking on multiple GPUs.

  8. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  9. Influence of beam wander on bit-error rate in a ground-to-satellite laser uplink communication system.

    PubMed

    Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe

    2008-11-15

Based on weak fluctuation theory and the beam-wander model, the bit error rate of a ground-to-satellite laser uplink communication system is analyzed and compared with the case in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, the optimum divergence angle and transmitter beam radius for the communication system are investigated. Numerical results show that both increase with increasing total link margin and transmitted wavelength. This work can benefit the design of ground-to-satellite laser uplink communication systems.

  10. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

Analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a free-space optical (FSO) link with multiple receivers using equal gain combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BERs of SISO and SIMO FSO links are analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance is evaluated for several values of the pointing jitter parameters and the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also noted that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
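The averaging idea behind the analysis above can be illustrated with a small Monte Carlo sketch rather than the paper's analytical pdf approach: each IM/DD branch sees unit-mean lognormal fading plus a common pointing loss from Rayleigh-distributed radial jitter, the branches are equal-gain combined, and the conditional BER Q(SNR·h) is averaged over the draws. The fading model, jitter model, and every parameter value here are illustrative assumptions.

```python
import math
import random

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def avg_ber(snr, n_rx, scint=0.3, jitter=0.3, beam_w=1.0,
            n=100_000, seed=7):
    """Monte Carlo average BER for an EGC receive-diversity FSO link with
    lognormal scintillation and Rayleigh pointing jitter (toy model)."""
    sigma2 = math.log(1.0 + scint)   # log-irradiance variance
    mu = -sigma2 / 2.0               # keeps each branch's mean irradiance at 1
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Common pointing loss: Gaussian beam, Rayleigh radial displacement
        r = jitter * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        h_p = math.exp(-2.0 * r * r / beam_w ** 2)
        # Equal gain combining across independently fading branches
        h_egc = sum(math.exp(rng.gauss(mu, math.sqrt(sigma2)))
                    for _ in range(n_rx)) / n_rx
        total += q(snr * h_p * h_egc)
    return total / n

ber_siso = avg_ber(6.0, 1)
ber_simo = avg_ber(6.0, 4)
assert ber_simo < ber_siso   # diversity averages out the scintillation
```

Combining branches leaves the mean combined irradiance unchanged but shrinks its variance, and since Q is convex, the average BER drops, which is the qualitative mechanism behind the abstract's 4 dB and 9 dB sensitivity improvements.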

  11. Error thresholds for Abelian quantum double models: Increasing the bit-flip stability of topological quantum memory

    NASA Astrophysics Data System (ADS)

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.

    2015-04-01

    Current approaches for building quantum computing devices focus on two-level quantum systems which nicely mimic the concept of a classical bit, albeit enhanced with additional quantum properties. However, rather than artificially limiting the number of states to two, the use of d -level quantum systems (qudits) could provide advantages for quantum information processing. Among other merits, it has recently been shown that multilevel quantum systems can offer increased stability to external disturbances. In this study we demonstrate that topological quantum memories built from qudits, also known as Abelian quantum double models, exhibit a substantially increased resilience to noise. That is, even when taking into account the multitude of errors possible for multilevel quantum systems, topological quantum error-correction codes employing qudits can sustain a larger error rate than their two-level counterparts. In particular, we find strong numerical evidence that the thresholds of these error-correction codes are given by the hashing bound. Considering the significantly increased error thresholds attained, this might well outweigh the added complexity of engineering and controlling higher-dimensional quantum systems.

  12. Bit error rate analysis of free-space optical system with spatial diversity over strong atmospheric turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Krishnan, Prabu; Sriram Kumar, D.

    2014-12-01

Free-space optical (FSO) communication is emerging as an attractive alternative for overcoming connectivity bottlenecks. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. Here, system models are developed for single-input single-output FSO (SISO-FSO) and single-input multiple-output FSO (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of Meijer's G-function.

  13. Extending the lifetime of a quantum bit with error correction in superconducting circuits.

    PubMed

    Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S M; Jiang, L; Mirrahimi, Mazyar; Devoret, M H; Schoelkopf, R J

    2016-08-25

Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The 'break-even' point of QEC--at which the lifetime of a qubit exceeds the lifetime of the constituents of the system--has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0〉f and |1〉f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system. PMID:27437573

  14. Extending the lifetime of a quantum bit with error correction in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.

    2016-08-01

Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.

  15. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  16. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  17. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    NASA Astrophysics Data System (ADS)

    Li, Zexian; Latva-aho, Matti

    2004-12-01

Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.

  18. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index terms: adaptive coding, attenuation, propagation, rain, satellite communication, satellites.

  19. Evaluating the performance of the LPC (Linear Predictive Coding) 2.4 kbps (kilobits per second) processor with bit errors using a sentence verification task

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid; Kallman, Howard J.

    1987-11-01

    The comprehension of narrowband digital speech with bit errors was tested by using a sentence verification task. The use of predicates that were either strongly or weakly related to the subjects (e.g., A toad has warts./ A toad has eyes.) varied the difficulty of the verification task. The test conditions included unprocessed and processed speech using a 2.4 kb/s (kilobits per second) linear predictive coding (LPC) voice processing algorithm with random bit error rates of 0 percent, 2 percent, and 5 percent. In general, response accuracy decreased and reaction time increased with LPC processing and with increasing bit error rates. Weakly related true sentences and strongly related false sentences were more difficult than their counterparts. Interactions between sentence type and speech processing conditions are discussed.

  20. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    PubMed

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-07-25

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing.

  2. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    Electronic memory system of 32-bit words corrects bit errors caused by some common types of failures - even failure of entire 4-bit-wide random-access-memory (RAM) chip. Detects failure of two such chips, so user warned that output of memory may contain errors. Includes eight 4-bit-wide DRAMs configured so each bit of each DRAM assigned to different one of four parallel 8-bit words. Each DRAM contributes only 1 bit to each 8-bit word.
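    The fault tolerance comes from the bit-to-chip assignment: because each chip contributes exactly one bit position to each word, losing a whole 4-bit chip corrupts at most one bit per word, which per-word single-error correction can repair. A sketch of that layout (my reconstruction of the scheme as described, with an arbitrary mapping):

```python
# Eight 4-bit-wide chips hold four parallel 8-bit words.
# chips[c][w] = bit position c of word w, so chip c contributes
# exactly one bit to every word.
CHIPS, WORDS = 8, 4

def store(words):
    return [[(words[w] >> c) & 1 for w in range(WORDS)] for c in range(CHIPS)]

def read(chips):
    return [sum(chips[c][w] << c for c in range(CHIPS)) for w in range(WORDS)]

words = [0b10110100, 0b01011010, 0b11110000, 0b00001111]
chips = store(words)
chips[3] = [1 - b for b in chips[3]]   # entire chip 3 fails: all its bits flip
damaged = read(chips)

# Each word now differs from the original in exactly one bit (position 3),
# so a single-error-correcting code applied per word recovers everything.
for orig, bad in zip(words, damaged):
    assert bin(orig ^ bad).count("1") == 1
```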

  3. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    PubMed Central

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing. PMID:27452275

  5. A Comparison of the Kaufman Brief Intelligence Test (K-BIT) with the Stanford-Binet, a Two-Subtest Short Form, and the Kaufman Test of Educational Achievement (K-TEA) Brief Form.

    ERIC Educational Resources Information Center

    Prewett, Peter N.; McCaffery, Lucy K.

    1993-01-01

    Examined relationship between Kaufman Brief Intelligence Test (K-BIT), Stanford-Binet, two-subtests short form, and Kaufman Test of Educational Achievement (K-TEA) with population of 75 academically referred students. K-BIT correlated significantly with Stanford-Binet and K-TEA Math, Reading, and Spelling scores. Results support use of K-BIT as…

  6. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, the binary phase-shift keying (BPSK) modulated free-space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For both independently and identically distributed and independently but non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for the maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
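    The qualitative MRC-versus-EGC comparison can be reproduced with a quick Monte Carlo sketch; note that the Rayleigh fading used below is my simplified stand-in for the paper's Málaga (M) turbulence model:

```python
import math, random

def mc_aber(combiner, snr0=10.0, n=50_000, seed=7):
    """Monte Carlo average BER of BPSK with dual-branch diversity over
    Rayleigh fading (unit mean-square amplitude per branch)."""
    rng = random.Random(seed)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian Q-function
    total = 0.0
    for _ in range(n):
        h1 = math.sqrt(rng.expovariate(1.0))   # Rayleigh amplitude, E[h^2] = 1
        h2 = math.sqrt(rng.expovariate(1.0))
        if combiner == "mrc":
            snr = snr0 * (h1 ** 2 + h2 ** 2)        # maximal ratio combining
        else:
            snr = snr0 * (h1 + h2) ** 2 / 2         # equal gain combining
        total += q(math.sqrt(2 * snr))              # conditional BPSK BER
    return total / n

# MRC is the optimal combiner, so its average BER is the lower of the two.
assert mc_aber("mrc") < mc_aber("egc")
```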

  7. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.

  8. Insight into error hiding: exploration of nursing students' achievement goal orientations.

    PubMed

    Dunn, Karee E

    2014-02-01

    An estimated 50% of medication errors go unreported, and error hiding is costly to hospitals and patients. This study explored one issue that may facilitate error hiding. Descriptive statistics were used to examine nursing students' achievement goal orientations in a high-fidelity simulation course. Results indicated that although this sample of nursing students held high mastery goal orientations, they also held moderate levels of performance-approach and performance-avoidance goal orientations. These goal orientations indicate that this sample is at high risk for error hiding, which places the benefits that are typically gleaned from a strong mastery orientation at risk. Understanding variables, such as goal orientation, that can be addressed in nursing education to reduce error hiding is an area of research that needs to be further explored. This article discusses the study results and evidence-based instructional practices for this sample's achievement goal orientation profile. PMID:24444007

  9. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse through the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut shaped independent channels can be separated either with the help of bulk optics or integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for individual channels of such a system are also presented.

  10. Bit-Error-Rate-Based Evaluation of Energy-Gap-Induced Super-Resolution Read-Only-Memory Disc in Blu-ray Disc Optics

    NASA Astrophysics Data System (ADS)

    Tajima, Hideharu; Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Mori, Go; Akiyama, Jun; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira

    2008-07-01

    Bit error rate (bER) of an energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with a zinc oxide (ZnO) film was measured in Blu-ray Disc (BD) optics by the partial response maximum likelihood (PRML) detection method. The experimental capacity was 40 GB in a single-layered 120 mm disc, about 1.6 times that of the commercially available BD with 25 GB capacity. A bER near 1×10-5 was obtained in an EG-SR ROM disc with a tantalum (Ta) reflective film. Practically available characteristics, including readout power margin, readout cyclability, environmental resistance, tilt margins, and focus offset margin, were also confirmed in the EG-SR ROM disc with 40 GB capacity.

  11. Bit-Error-Rate Evaluation of Energy-Gap-Induced Super-Resolution Read-Only-Memory Disc with Dual-Layer Structure

    NASA Astrophysics Data System (ADS)

    Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Tajima, Hideharu; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira

    2009-03-01

    Practically available readout characteristics were obtained in a dual-layer energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with an 80 gigabytes (GB) capacity. One of the dual layers consisted of zinc oxide and titanium films and the other layer consisted of zinc oxide and tantalum films. Bit error rates better than 3.0×10-4 were obtained with a minimum readout power of approximately 1.6 mW in both layers using a Blu-ray Disc tester by a partial response maximum likelihood (PRML) detection method. The dual-layer disc showed good tolerances in disc tilts and focus offset and also showed good readout cyclability in both layers.

  12. Evaluation by Monte Carlo simulations of the power limits and bit-error rate degradation in wavelength-division multiplexing networks caused by four-wave mixing.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2004-09-10

    Fiber nonlinearities can degrade the performance of a wavelength-division multiplexing optical network. For high input power, a low chromatic dispersion coefficient, or low channel spacing, the most severe penalties are due to four-wave mixing (FWM). To compute the bit-error rate that is due to FWM noise, one must evaluate accurately the probability-density functions (pdf) of both the space and the mark states. An accurate evaluation of the pdf of the FWM noise in the space state is given, for the first time to the authors' knowledge, by use of Monte Carlo simulations. Additionally, it is shown that the pdf in the mark state is not symmetric as had been assumed in previous studies. Diagrams are presented that permit estimation of the pdf, given the number of channels in the system. The accuracy of the previous models is also investigated, and finally the results of this study are used to estimate the power limits of a wavelength-division multiplexing system. PMID:15468703

  13. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak-to-moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
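    The final averaging step described above (weighting the conditional error probability by a log-normal intensity pdf) can be sketched with a Monte Carlo estimate; the intensity-modulation model and parameter values below are my illustration, not the paper's PCFT-beam expressions:

```python
import math, random

def avg_ber_lognormal(snr0, scint_index, n=100_000, seed=1):
    """Average BER of an intensity-modulated link over log-normal fading.
    ln(I) ~ Normal(-sigma^2/2, sigma^2) so that E[I] = 1, with
    sigma^2 = ln(1 + scintillation index); conditional BER = Q(I*sqrt(snr0))."""
    rng = random.Random(seed)
    sigma2 = math.log(1.0 + scint_index)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
    total = 0.0
    for _ in range(n):
        i = math.exp(rng.gauss(-sigma2 / 2, math.sqrt(sigma2)))
        total += q(i * math.sqrt(snr0))
    return total / n

# Stronger scintillation degrades the average BER at the same mean SNR.
assert avg_ber_lognormal(16.0, 0.5) > avg_ber_lognormal(16.0, 0.1)
```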

  14. Bit-Error-Rate Evaluation of Super-Resolution Near-Field Structure Read-Only Memory Discs with Semiconductive Material InSb

    NASA Astrophysics Data System (ADS)

    Nakai, Kenya; Ohmaki, Masayuki; Takeshita, Nobuo; Hyot, Bérangère; André, Bernard; Poupinet, Ludovic

    2010-08-01

    Bit-error-rate (bER) evaluation using a hardware (H/W) evaluation system is described for super-resolution near-field structure (super-RENS) read-only-memory (ROM) discs fabricated with a semiconductor material, InSb, as the super-resolution active layer. A bER on the order of 10-5, below a criterion of 3.0×10-4, is obtained with super-RENS ROM discs having random-pattern data including a minimum pit length of 80 nm, using partial response maximum likelihood detection of the (1,2,2,1) type. The disc tilt, focus offset, and read power offset margins based on the bER of readout signals are measured for the super-RENS ROM discs and are almost acceptable for practical use. Significant improvement of read stability, up to 40,000 cycles, realized by introducing a ZrO2 interface layer is confirmed using the H/W evaluation system.

  17. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low-complexity encoders supported by high-complexity decoders. A typical real-world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames: the key frames are used to represent the source and to produce an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.

  18. Design and Demonstration of a 4×4 SFQ Network Switch Prototype System and 10-Gbps Bit-Error-Rate Measurement

    NASA Astrophysics Data System (ADS)

    Kameda, Yoshio; Hashimoto, Yoshihito; Yorozu, Shinichi

    We developed a 4×4 SFQ network switch prototype system and demonstrated its operation at 10 Gbps. The system's core is composed of two SFQ chips: a 4×4 switch and a 6-channel voltage driver. The 4×4 switch chip contained both a switch fabric (i.e., a data path) and a switch scheduler (i.e., a controller). Both chips were attached to a multichip-module (MCM) carrier, which was then installed in a cryocooled system with 32 10-Gbps ports. Each chip contained about 2100 Josephson junctions on a 5-mm×5-mm die. An NEC standard 2.5-kA/cm2 fabrication process was used for the switch chip. We increased the critical current density to 10 kA/cm2 for the driver chip to improve speed while maintaining wide bias margins. MCM implementation enabled us to use a hybrid critical-current-density technology. Voltage pulses were transferred between the two chips through passive transmission lines on the MCM carrier. The cryocooled system was cooled down to about 4 K using a two-stage 1-W cryocooler. We correctly operated the whole system at 10 Gbps. The switch scheduler, which is driven by an on-chip clock generator, operated at 40 GHz. The speed gap between SFQ and room-temperature devices was filled by on-chip SFQ FIFO buffers or shift registers. We measured the bit error rate at 10 Gbps and found that it was on the order of 10-13 for the 4×4 SFQ switch fabric. In addition, using semiconductor interface circuitry, we built a four-port SFQ Ethernet switch. All the components except for a compressor were installed in a standard 19-inch rack, filling a space 21 U (933.5 mm or 36.75 inches) in height. After four personal computers (PCs) were connected to the switch, we successfully transferred video data between them.
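    Quoting a BER on the order of 10-13 implies a long counting experiment: at 10 Gbps the expected waiting time for a statistically meaningful number of errors is hours. A back-of-envelope check (a standard estimate, not the authors' procedure):

```python
def seconds_for_errors(ber, rate_bps, errors_needed=10):
    """Expected measurement time to accumulate a target number of bit errors."""
    return errors_needed / (ber * rate_bps)

# At BER = 1e-13 and 10 Gbps, ten errors take about 1e4 s (roughly 2.8 hours).
t = seconds_for_errors(1e-13, 10e9)
assert abs(t - 1e4) < 1e-3
```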

  19. Displacement damage in bit error ratio performance of on-off keying, pulse position modulation, differential phase shift keying, and homodyne binary phase-shift keying-based optical intersatellite communication system.

    PubMed

    Liu, Yun; Zhao, Shanghong; Gong, Zizheng; Zhao, Jing; Dong, Chen; Li, Xuan

    2016-04-10

    Displacement damage (DD) effect induced bit error ratio (BER) performance degradations in on-off keying (OOK), pulse position modulation (PPM), differential phase-shift keying (DPSK), and homodyne binary phase-shift keying (BPSK) based systems were simulated and discussed under 1 MeV neutron irradiation to a total fluence of 1×1012 n/cm2 in this paper. Degradation of the main optoelectronic devices included in the communication systems was analyzed on the basis of existing experimental data. The system BER degradation was subsequently simulated, and the variations of BER with different neutron irradiation locations were also obtained. The result shows that DD in an Er-doped fiber amplifier (EDFA) is the dominant cause of system degradation, and a BPSK-based system performs better than the other three systems against DD. To improve the radiation hardness of communication systems against DD, protection and enhancement of the EDFA are required, and the use of a homodyne BPSK modulation scheme is a choice worth considering.

  20. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high-data-rate implementations.
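    The quoted spectral efficiency is just the modulation order and ensemble code rate multiplied together; a one-line arithmetic check (illustrative only):

```python
import math

def spectral_efficiency(mod_order, code_rate):
    """Information bits per second per hertz: log2(M) * code rate."""
    return math.log2(mod_order) * code_rate

# 8PSK at ensemble rate 8/9: 3 * 8/9 = 8/3, about 2.67 information bps/Hz.
assert abs(spectral_efficiency(8, 8 / 9) - 8 / 3) < 1e-12
```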

  1. Enhancement of LED indoor communications using OPPM-PWM modulation and grouped bit-flipping decoding.

    PubMed

    Yang, Aiying; Li, Xiangming; Jiang, Tao

    2012-04-23

    Combination of overlapping pulse position modulation and pulse width modulation at the transmitter, and a grouped bit-flipping algorithm for low-density parity-check decoding at the receiver, are proposed for a visible Light Emitting Diode (LED) indoor communication system in this paper. The results demonstrate that, with the same photodetector, the bit rate can be increased and the performance of the communication system can be improved by the proposed scheme. Compared with the standard bit-flipping algorithm, the grouped bit-flipping algorithm can achieve more than 2.0 dB coding gain at a bit error rate of 10-5. By optimizing the encoding of the overlapping pulse position modulation and pulse width modulation symbols, the performance can be further improved. It is reasonably expected that the bit rate can be upgraded to 400 Mbit/s with a single available LED, and thus a transmission rate beyond 1 Gbit/s is foreseen with RGB LEDs.
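    The baseline the paper improves on, standard hard-decision bit flipping, can be sketched on a toy code; the (7,4) Hamming parity-check matrix below is my small-scale stand-in for the paper's LDPC code, and no grouping is applied:

```python
H = [  # parity-check matrix of the (7,4) Hamming code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bit_flip_decode(word, max_iters=10):
    """Standard bit flipping: while any parity check fails, flip the bit
    that participates in the most unsatisfied checks."""
    w = list(word)
    for _ in range(max_iters):
        unsat = [row for row in H if sum(r * b for r, b in zip(row, w)) % 2]
        if not unsat:
            break
        counts = [sum(row[i] for row in unsat) for i in range(len(w))]
        w[counts.index(max(counts))] ^= 1
    return w

codeword = [0, 0, 0, 0, 0, 0, 0]             # the all-zero word is a codeword
received = list(codeword)
received[4] ^= 1                              # channel flips one bit
assert bit_flip_decode(received) == codeword  # decoder repairs it
```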

  2. Achieving minimum-error discrimination of an arbitrary set of laser-light pulses

    NASA Astrophysics Data System (ADS)

    da Silva, Marcus P.; Guha, Saikat; Dutton, Zachary

    2013-05-01

    Laser light is widely used for communication and sensing applications, so the optimal discrimination of coherent states—the quantum states of light emitted by an ideal laser—has immense practical importance. Due to fundamental limits imposed by quantum mechanics, such discrimination has a finite minimum probability of error. While concrete optical circuits for the optimal discrimination between two coherent states are well known, the generalization to larger sets of coherent states has been challenging. In this paper, we show how to achieve optimal discrimination of any set of coherent states using a resource-efficient quantum computer. Our construction leverages a recent result on discriminating multicopy quantum hypotheses [Blume-Kohout, Croke, and Zwolak, arXiv:1201.6625]. As illustrative examples, we analyze the performance of discriminating a ternary alphabet and show how the quantum circuit of a receiver designed to discriminate a binary alphabet can be reused in discriminating multimode hypotheses. Finally, we show that our result can be used to achieve the quantum limit on the rate of classical information transmission on a lossy optical channel, which is known to exceed the Shannon rate of all conventional optical receivers.

  3. FIASCO II failure to achieve a satisfactory cardiac outcome study: the elimination of system errors

    PubMed Central

    Farid, Shakil; Page, Aravinda; Jenkins, David; Jones, Mark T.; Freed, Darren; Nashef, Samer A.M.

    2013-01-01

    OBJECTIVES Death in low-risk cardiac surgical patients provides a simple and accessible method by which modifiable causes of death can be identified. In the first FIASCO study published in 2009, local potentially modifiable causes of preventable death in low-risk patients with a logistic EuroSCORE of 0–2 undergoing cardiac surgery were inadequate myocardial protection and lack of clarity in the chain of responsibility. As a result, myocardial protection was improved, and a formalized system introduced to ensure clarity of the chain of responsibility in the care of all cardiac surgical patients. The purpose of the current study was to re-audit outcomes in low-risk patients to see if improvements have been achieved. METHODS Patients with a logistic EuroSCORE of 0–2 who had cardiac surgery from January 2006 to August 2012 were included. Data were prospectively collected and retrospectively analysed. The case notes of patients who died in hospital were subject to internal and external review and classified according to preventability. RESULTS Two thousand five hundred and forty-nine patients with a logistic EuroSCORE of 0–2 underwent cardiac surgery during the study period. Seven deaths occurred in truly low-risk patients, giving a mortality of 0.27%. Of the seven, three were considered preventable and four non-preventable. Mortality was marginally lower than in our previous study (0.37%), and no death occurred as a result of inadequate myocardial protection or communication failures. CONCLUSION We postulate that the regular study of such events in all institutions may unmask systemic errors that can be remedied to prevent or reduce future occurrences. We encourage all units to use this methodology to detect any similarly modifiable factors in their practice. PMID:23592726

  4. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    ERIC Educational Resources Information Center

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  5. A brief review on quantum bit commitment

    NASA Astrophysics Data System (ADS)

    Almeida, Álvaro J.; Loura, Ricardo; Paunković, Nikola; Silva, Nuno A.; Muga, Nelson J.; Mateus, Paulo; André, Paulo S.; Pinto, Armando N.

    2014-08-01

    In classical cryptography, the bit commitment scheme is one of the most important primitives. We review the state of the art of bit commitment protocols, emphasizing their main achievements and applications. Next, we present a practical quantum bit commitment scheme whose security relies on current technological limitations, such as the lack of long-term stable quantum memories. We demonstrate the feasibility of our practical quantum bit commitment protocol and show that it can be securely implemented with present-day technology.

  6. Capped bit patterned media for high density magnetic recording

    NASA Astrophysics Data System (ADS)

    Li, Shaojing; Livshitz, Boris; Bertram, H. Neal; Inomata, Akihiro; Fullerton, Eric E.; Lomakin, Vitaliy

    2009-04-01

    A capped composite patterned medium design is described which comprises an array of hard elements exchange coupled to a continuous cap layer. The role of the cap layer is to lower the write field of the individual hard element and introduce ferromagnetic exchange interactions between hard elements to compensate the magnetostatic interactions. Modeling results show significant reduction in the reversal field distributions caused by the magnetization states in the array which is important to prevent bit errors and increase achievable recording densities.

  7. Bit error rate analysis of Gaussian, annular Gaussian, cos Gaussian, and cosh Gaussian beams with the help of random phase screens.

    PubMed

    Eyyuboğlu, Halil T

    2014-06-10

    Using the random phase screen approach, we carry out a simulation analysis of the probability of error performance of Gaussian, annular Gaussian, cos Gaussian, and cosh Gaussian beams. In our scenario, these beams are intensity-modulated by the randomly generated binary symbols of an electrical message signal and then launched from the transmitter plane in equal powers. They propagate through a turbulent atmosphere modeled by a series of random phase screens. Upon arriving at the receiver plane, detection is performed in a circuitry consisting of a pin photodiode and a matched filter. The symbols detected are compared with the transmitted ones, errors are counted, and from there the probability of error is evaluated numerically. Within the range of source and propagation parameters tested, the lowest probability of error is obtained for the annular Gaussian beam. Our investigation reveals that there is hardly any difference between the aperture-averaged scintillations of the beams used, and the distinctive advantage of the annular Gaussian beam lies in the fact that the receiver aperture captures the maximum amount of power when this particular beam is launched from the transmitter plane.

  8. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
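
    The threshold behavior behind such error-correction results can be illustrated with a purely classical toy model of the bit-flip (three-qubit repetition) code, in which majority voting corrects any single flip. This sketch is not the measurement-free protocol of the paper; it only shows why encoding suppresses errors when the physical error rate is below threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, n_trials=200_000):
    """Classical toy model of the 3-qubit bit-flip code: each of three
    copies flips independently with probability p; majority vote corrects
    single flips, so a logical error requires two or more flips."""
    flips = rng.random((n_trials, 3)) < p
    return np.mean(flips.sum(axis=1) >= 2)

# Analytically the logical rate is 3p^2 - 2p^3, which is below p for p < 0.5.
print(logical_error_rate(0.01))   # roughly 3e-4 for p = 0.01
```

    Below the toy threshold (p = 0.5 here), encoding helps; realistic thresholds such as the 10^-3 to 10^-4 figures quoted above come from far more detailed circuit-level simulations.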

  9. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s^-1) with polarisation division multiplexing

    SciTech Connect

    Gurkin, N V; Konyshev, V A; Novikov, A G; Treshchikov, V N; Ubaydullaev, R R

    2015-01-31

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s^-1 DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30-50 km up to a maximum length of 250 km. (optical transmission of information)

  10. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s^-1) with polarisation division multiplexing

    NASA Astrophysics Data System (ADS)

    Gurkin, N. V.; Konyshev, V. A.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.

    2015-01-01

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s^-1 DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30-50 km up to a maximum length of 250 km.

  11. Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers

    PubMed Central

    Mainsah, Boyla O.; Morton, Kenneth D.; Collins, Leslie M.; Sellers, Eric W.; Throckmorton, Chandra S.

    2016-01-01

    P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, their use as an effective communication tool relies on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by BCI systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35–185%), in contrast to an ErrP-based method that flagged, deleted, and replaced erroneous characters (−47 to 0%). Providing a dictionary-based correction with additional information about the likelihood of characters further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44–416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43–433%). PMID:25438320
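
    A minimal sketch of the kind of confidence-weighted dictionary correction the study evaluates, assuming hypothetical per-character classifier confidences and word priors (all names and numbers below are illustrative, not from the paper):

```python
import math

def correct_word(char_probs, dictionary):
    """Pick the dictionary word maximizing (word prior) x (product of
    per-character classifier confidences), scored in log space.

    char_probs: list (one per typed position) of {letter: probability}.
    dictionary: {word: prior frequency}.
    """
    best, best_score = None, -math.inf
    for word, prior in dictionary.items():
        if len(word) != len(char_probs):
            continue  # only compare words of the typed length
        score = math.log(prior)
        for pos, ch in enumerate(word):
            score += math.log(char_probs[pos].get(ch, 1e-6))
        if score > best_score:
            best, best_score = word, score
    return best

# The raw speller output "cst" is corrected using classifier confidences.
probs = [{"c": 0.9, "b": 0.1},
         {"s": 0.6, "a": 0.4},   # ambiguous second character
         {"t": 0.95, "r": 0.05}]
print(correct_word(probs, {"cat": 0.7, "cst": 0.001, "bat": 0.3}))  # prints: cat
```

    The low prior of the non-word "cst" outweighs its slightly higher character confidence, so the dictionary word wins, which is the intuition behind the Bayesian correction described above.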

  12. FMO-based H.264 frame layer rate control for low bit rate video transmission

    NASA Astrophysics Data System (ADS)

    Cajote, Rhandley D.; Aramvith, Supavadee; Miyanaga, Yoshikazu

    2011-12-01

    The use of flexible macroblock ordering (FMO) in H.264/AVC improves error resiliency at the expense of reduced coding efficiency, with added overhead bits for slice headers and signalling. The trade-off is most severe at low bit rates, where header bits occupy a significant portion of the total bit budget. To better manage the rate and improve coding efficiency, we propose enhancements to the H.264/AVC frame layer rate control that take into consideration the effects of using FMO for video transmission. In this article, we propose a new header bits model, an enhanced frame complexity measure, a bit allocation scheme, and a quantization parameter adjustment scheme. Simulation results show that the proposed improvements achieve better visual quality compared with the JM 9.2 frame layer rate control with FMO enabled, using different numbers of slice groups. Using FMO as an error-resilience tool with better rate management is suitable for applications with limited bandwidth and for error-prone environments such as video transmission to mobile terminals.

  13. Positional information, in bits

    PubMed Central

    Dubuis, Julien O.; Tkačik, Gašper; Wieschaus, Eric F.; Gregor, Thomas; Bialek, William

    2013-01-01

    Cells in a developing embryo have no direct way of “measuring” their physical position. Through a variety of processes, however, the expression levels of multiple genes come to be correlated with position, and these expression levels thus form a code for “positional information.” We show how to measure this information, in bits, using the gap genes in the Drosophila embryo as an example. Individual genes carry nearly two bits of information, twice as much as would be expected if the expression patterns consisted only of on/off domains separated by sharp boundaries. Taken together, four gap genes carry enough information to define a cell’s location with a small error bar along the anterior/posterior axis of the embryo. This precision is nearly enough for each cell to have a unique identity, which is the maximum information the system can use, and is nearly constant along the length of the embryo. We argue that this constancy is a signature of optimality in the transmission of information from primary morphogen inputs to the output of the gap gene network. PMID:24089448
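
    The quantity measured here, mutual information in bits between position and expression level, can be estimated with a simple histogram (plug-in) method. The sigmoidal profile and noise level below are illustrative assumptions standing in for a gap-gene readout, not the Drosophila data:

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_information(n=200_000, noise=0.05, bins=32):
    """Plug-in estimate (in bits) of the mutual information between a
    cell's position and a noisy sigmoidal expression profile."""
    x = rng.random(n)                                # position in [0, 1]
    g = 1 / (1 + np.exp(-(x - 0.5) / 0.05))          # sharp-ish boundary
    g = np.clip(g + rng.normal(0, noise, n), 0, 1)   # expression noise
    joint, _, _ = np.histogram2d(x, g, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)                # marginal over position
    pg = p.sum(axis=0, keepdims=True)                # marginal over expression
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ pg)[nz])))

print(positional_information())
```

    A pure on/off domain with a sharp boundary would carry one bit; graded profiles carry more, which is the effect the abstract quantifies.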

  14. Drag bit construction

    DOEpatents

    Hood, M.

    1986-02-11

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face. 4 figs.

  15. Drag bit construction

    DOEpatents

    Hood, Michael

    1986-01-01

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face.

  16. Bit rate transparent interferometric noise mitigation utilizing the nonlinear modulation curve of electro-absorption modulator.

    PubMed

    Feng, Hanlin; Xiao, Shilin; Fok, Mable P

    2015-08-24

    We propose a bit-rate transparent interferometric noise mitigation scheme utilizing the nonlinear modulation curve of an electro-absorption modulator (EAM). Both the zero-slope region and the linear modulation region of the nonlinear modulation curve are utilized to suppress interferometric noise and enlarge the noise margin of degraded eye diagrams. Using the amplitude-suppression effect of the zero-slope region, interferometric noise in the low-frequency range is suppressed successfully. Under different signal-to-noise ratios (SNRs), we measured the power penalties at a bit error rate (BER) of 10^-9 with and without EAM interferometric noise suppression. Using our proposed scheme, a power-penalty improvement of 8.5 dB is achieved for a signal with an SNR of 12.5 dB. BER results at various bit rates are analyzed: the error floors of the BER curves are removed, receiver sensitivity is significantly improved, and widely opened eye diagrams result.
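
    The role of the zero-slope region can be illustrated with a toy transfer curve: noise riding on signal rails biased into the flat regions of the curve is compressed, while the linear region passes the signal. The piecewise-linear curve and bias levels below are illustrative stand-ins, not the measured EAM characteristic:

```python
import numpy as np

rng = np.random.default_rng(7)

def eam_curve(v):
    """Toy modulation curve: a zero-slope (fully absorbing) region below
    v = 0, a linear region for 0 <= v <= 1, and saturation above."""
    return np.clip(v, 0.0, 1.0)

# Interferometric noise rides on both rails of an OOK signal; biasing the
# rails into the flat regions of the curve suppresses it.
bits = rng.integers(0, 2, 20_000)
rails = np.where(bits == 1, 1.2, -0.2)         # bias levels into flat regions
noisy = rails + rng.normal(0, 0.15, bits.size)
cleaned = eam_curve(noisy)

print(cleaned[bits == 0].std())                # far below the input noise std
```

    The residual spread on the zero rail is much smaller after the curve, which is the amplitude-suppression effect exploited above to reopen degraded eye diagrams.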

  17. Power-gated 32 bit microprocessor with a power controller circuit activated by deep-sleep-mode instruction achieving ultra-low power operation

    NASA Astrophysics Data System (ADS)

    Koike, Hiroki; Ohsawa, Takashi; Miura, Sadahiko; Honjo, Hiroaki; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo

    2015-04-01

    A spintronic-based power-gated micro-processing unit (MPU) is proposed. It includes a power-control circuit activated by a newly supported power-off instruction for the deep-sleep mode, which enables the power-off procedure for the MPU to be executed appropriately. A test chip was designed and fabricated using a 90 nm CMOS and an additional 100 nm MTJ process, and it operated successfully. A guideline for the energy-reduction effect of this MPU is presented, based on estimates from the test-chip measurement results. The result shows that a large reduction of operating energy, to 1/28, can be achieved when the operation duty is 10%, provided there is a sufficient number of idle clock cycles.

  18. Deterministic relativistic quantum bit commitment

    NASA Astrophysics Data System (ADS)

    Adlam, Emily; Kent, Adrian

    2015-06-01

    We describe new unconditionally secure bit commitment schemes whose security is based on Minkowski causality and the monogamy of quantum entanglement. We first describe an ideal scheme that is purely deterministic, in the sense that neither party needs to generate any secret randomness at any stage. We also describe a variant that allows the committer to proceed deterministically, requires only local randomness generation from the receiver, and allows the commitment to be verified in the neighborhood of the unveiling point. We show that these schemes still offer near-perfect security in the presence of losses and errors, which can be made perfect if the committer uses an extra single random secret bit. We discuss scenarios where these advantages are significant.

  19. 24-Hour Relativistic Bit Commitment

    NASA Astrophysics Data System (ADS)

    Verbanis, Ephanielle; Martin, Anthony; Houlmann, Raphaël; Boso, Gianluca; Bussières, Félix; Zbinden, Hugo

    2016-09-01

    Bit commitment is a fundamental cryptographic primitive in which a party wishes to commit a secret bit to another party. Perfect security between mistrustful parties is unfortunately impossible to achieve through the asynchronous exchange of classical and quantum messages. Perfect security can nonetheless be achieved if each party splits into two agents exchanging classical information at times and locations satisfying strict relativistic constraints. A relativistic multiround protocol to achieve this was previously proposed and used to implement a 2-millisecond commitment time. Much longer durations were initially thought to be insecure, but recent theoretical progress showed that this is not so. In this Letter, we report on the implementation of a 24-hour bit commitment solely based on timed high-speed optical communication and fast data processing, with all agents located within the city of Geneva. This duration is more than 6 orders of magnitude longer than before, and we argue that it could be extended to one year and allow much more flexibility on the locations of the agents. Our implementation offers a practical and viable solution for use in applications such as digital signatures, secure voting and honesty-preserving auctions.

  20. Remote drill bit loader

    SciTech Connect

    Dokos, James A.

    1997-01-01

    A drill bit loader for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned.

  1. Remote drill bit loader

    DOEpatents

    Dokos, J.A.

    1997-12-30

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. 5 figs.

  2. Pattern recognition of electronic bit-sequences using a semiconductor mode-locked laser and spatial light modulators

    NASA Astrophysics Data System (ADS)

    Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.

    2010-04-01

    A novel scheme for recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates, of around 10^-18. This technique has potential for high-speed, high accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
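
    The electronic side of the scheme, mapping sequences to orthogonal Walsh-Hadamard codes and distinguishing matched from mismatched pairs by correlation, can be sketched as follows; the optical system performs this correlation interferometrically rather than numerically:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix
    (n must be a power of two); rows are mutually orthogonal codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def match(code_a, code_b):
    """Normalized correlation: 1 for identical codes, 0 for orthogonal
    ones, mimicking the balanced-photodetection comparison."""
    return abs(int(code_a @ code_b)) / len(code_a)

H = hadamard(8)                    # eight 8-bit (+1/-1) codes
print(match(H[3], H[3]))           # matched pair -> 1.0
print(match(H[3], H[5]))           # mismatched pair -> 0.0
```

    Because distinct Hadamard rows are exactly orthogonal, matched and mismatched pairs are separated by the full correlation swing, which is what makes the very low error rates quoted above plausible.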

  3. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    ERIC Educational Resources Information Center

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  4. Numerical optimization of writer and media for bit patterned magnetic recording

    NASA Astrophysics Data System (ADS)

    Kovacs, A.; Oezelt, H.; Schabes, M. E.; Schrefl, T.

    2016-07-01

    In this work, we present a micromagnetic study of the performance potential of bit-patterned (BP) magnetic recording media via joint optimization of the design of the media and of the magnetic write heads. Because the design space is large and complex, we developed a novel computational framework suitable for parallel implementation on compute clusters. Our technique combines advanced global optimization algorithms and finite-element micromagnetic solvers. Targeting data bit densities of 4 Tb/in^2, we optimize designs for centered, staggered, and shingled BP writing. The magnetization dynamics of the switching of the exchange-coupled composite BP islands of the media is treated micromagnetically. Our simulation framework takes into account not only the dynamics of on-track errors but also the thermally induced adjacent-track erasure. With co-optimized write heads, the results show superior performance of shingled BP magnetic recording, where we identify two particular designs achieving write bit error rates of 1.5 × 10^-8 and 8.4 × 10^-8, respectively. A detailed description of the key design features of these designs is provided and contrasted with centered and staggered BP designs, which yielded write bit error rates of only 2.8 × 10^-3 (centered design) and 1.7 × 10^-2 (staggered design) even under optimized conditions.

  5. Heat-assisted magnetic recording of bit-patterned media beyond 10 Tb/in^2

    NASA Astrophysics Data System (ADS)

    Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk

    2016-03-01

    The limits of areal storage density that is achievable with heat-assisted magnetic recording are unknown. We addressed this central question and investigated the areal density of bit-patterned media. We analyzed the detailed switching behavior of a recording bit under various external conditions, allowing us to compute the bit error rate of a write process (shingled and conventional) for various grain spacings, write head positions, and write temperatures. Hence, we were able to optimize the areal density, yielding values beyond 10 Tb/in^2. Our model is based on the Landau-Lifshitz-Bloch equation and uses hard magnetic recording grains with a 5-nm diameter and 10-nm height. It assumes a realistic distribution of the Curie temperature of the underlying material, grain size, as well as grain and head position.

  6. Double acting bit holder

    DOEpatents

    Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.

    1994-01-01

    A double acting bit holder that permits bits held in it to be resharpened during cutting action, to increase energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to a cutter head of an excavation machine and having an integral extension therefrom with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base having a pin shaft integrally extending therefrom that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate a shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during cutting action of a bit fixed in a turret, to allow front, mid and back positions of the bit during cutting, to lessen creation of small chips, and to resharpen the bit during excavation use.

  7. Error resilient image transmission based on virtual SPIHT

    NASA Astrophysics Data System (ADS)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote-sensing images. However, it is highly susceptible to channel errors: a single bit error can potentially lead to decoder derailment. In this paper, we integrate new error-resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC, and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across subbands is preserved, high source-coding efficiency can be achieved. The scheme is essentially tree-based coding, so error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve compression efficiency; otherwise, we increase the number of trees to guarantee error resilience. EREC is also adopted to enhance the error-resilience capability of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of set-partitioning trees is introduced: decoding of any subtree halts when a violation of the self-constraint relationship occurs in the tree, so the bits impacted by error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme offers substantial benefits in error resilience.

  8. Practical Relativistic Bit Commitment.

    PubMed

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Wehner, S; Zbinden, H

    2015-07-17

    Bit commitment is a fundamental cryptographic primitive in which Alice wishes to commit a secret bit to Bob. Perfectly secure bit commitment between two mistrustful parties is impossible through an asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob each split into several agents exchanging classical information at times and locations suitably chosen to satisfy specific relativistic constraints. In this Letter we first revisit a previously proposed scheme [C. Crépeau et al., Lect. Notes Comput. Sci. 7073, 407 (2011)] that realizes bit commitment using only classical communication. We prove that the protocol is secure against quantum adversaries for a duration limited by the light-speed communication time between the locations of the agents. We then propose a novel multiround scheme based on finite-field arithmetic that extends the commitment time beyond this limit, and we prove its security against classical attacks. Finally, we present an implementation of these protocols using dedicated hardware and we demonstrate a 2 ms-long bit commitment over a distance of 131 km. By positioning the agents on antipodal points on the surface of Earth, the commitment time could possibly be extended to 212 ms.

  9. Bit-serial neuroprocessor architecture

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    2001-01-01

    A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.

  10. A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems

    NASA Astrophysics Data System (ADS)

    Zhu, Li-Ping; Yao, Yan; Zhou, Shi-Dong; Dong, Shi-Wei

    2007-12-01

    A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
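
    For reference, the conventional greedy bit-loading baseline that the proposed heuristic accelerates can be sketched as below. The SNR-gap power formula and the gain values are illustrative assumptions, not parameters from the paper:

```python
def greedy_bit_loading(gains, target_bits, gamma=9.8):
    """Conventional greedy bit loading for DMT: repeatedly add one bit to
    the subchannel whose incremental power cost is smallest.

    gains: SNR-normalized subchannel gains (linear).
    gamma: SNR gap (linear). Power for b bits on gain g: gamma*(2**b - 1)/g.
    """
    bits = [0] * len(gains)

    def incremental_power(i):
        b = bits[i]
        return gamma * (2 ** (b + 1) - 2 ** b) / gains[i]

    for _ in range(target_bits):
        i = min(range(len(gains)), key=incremental_power)
        bits[i] += 1
    return bits

alloc = greedy_bit_loading([8.0, 4.0, 2.0, 1.0], target_bits=10)
print(alloc, sum(alloc))   # -> [4, 3, 2, 1] 10
```

    Each greedy step is O(N) in the number of subchannels, which is what motivates the multistaged and parallel loading procedures described above.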

  11. Flexible bit: A new anti-vibration PDC bit concept

    SciTech Connect

    Defourny, P.; Abbassian, F.

    1995-12-31

    This paper introduces the novel concept of a "flexible" polycrystalline diamond compact (PDC) bit, and its capability to reduce detrimental vibration associated with drag bits. The tilt flexibility, introduced at the bit, decouples the dynamic motion of the bottom hole assembly (BHA) from that of the bit, thus providing a dynamically more stable bit. The paper describes the details of a prototype 8-1/2 inch flexible bit design together with laboratory experiments and field tests which verify the concept.

  12. Recent developments in polycrystalline diamond-drill-bit design

    SciTech Connect

    Huff, C.F.; Varnado, S.G.

    1980-05-01

    Development of design criteria for polycrystalline diamond compact (PDC) drill bits for use in severe environments (hard or fractured formations, hot and/or deep wells) is continuing. This effort consists of both analytical and experimental analyses. The experimental program includes single point tests of cutters, laboratory tests of full scale bits, and field tests of these designs. The results of laboratory tests at simulated downhole conditions utilizing new and worn bits are presented. Drilling at simulated downhole pressures was conducted in Mancos Shale and Carthage Marble. Comparisons are made between PDC bits and roller cone bits in drilling with borehole pressures up to 5000 psi (34.5 MPa) with oil and water based muds. The PDC bits drilled at rates up to 5 times as fast as roller bits in the shale. In the first field test, drilling rates approximately twice those achieved with conventional bits were achieved with a PDC bit. A second test demonstrated the value of these bits in correcting deviation and reaming.

  13. 20 Gb/s WDM-OFDM-PON over 20-km single fiber uplink transmission using optical millimeter-wave signal seeding with rate adaptive bit-power loading

    NASA Astrophysics Data System (ADS)

    Kartiwa, Iwa; Jung, Sang-Min; Hong, Moon-Ki; Han, Sang-Kook

    2013-06-01

    We experimentally demonstrate the use of millimeter-wave signal generation by the optical carrier suppression (OCS) method, using a single-drive Mach-Zehnder modulator, as a seed light source for a 20 Gb/s WDM-OFDM-PON with 20-km single-fiber loopback transmission based on cost-effective RSOA modulation. A practical discrete rate-adaptive bit-loading algorithm was employed in this colorless ONU system to maximize the achievable bit rate for an average bit error rate (BER) below 2 × 10^-3.

  14. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images, and reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
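
    A hedged sketch of one ingredient of such a scheme: keyed low-order-bit embedding with a permuted processing order, where the key seeds the order in which host samples receive payload bits. The patent's modular-arithmetic error-reduction step itself is not reproduced here; this shows only the permutation-plus-LSB idea.

```python
import random

def embed(host, payload_bits, key):
    """Write one payload bit into the LSB of each selected host sample,
    visiting samples in a key-determined pseudo-random order."""
    out = list(host)
    order = list(range(len(host)))
    random.Random(key).shuffle(order)      # keyed permutation
    for bit, idx in zip(payload_bits, order):
        out[idx] = (out[idx] & ~1) | bit   # replace the low-order bit
    return out

def extract(stego, n_bits, key):
    """Reverse process: regenerate the same order from the key and read
    the LSBs back."""
    order = list(range(len(stego)))
    random.Random(key).shuffle(order)
    return [stego[idx] & 1 for idx in order[:n_bits]]

host = [200, 13, 97, 54, 180, 33, 76, 149]
bits = [1, 0, 1, 1]
stego = embed(host, bits, key=42)
print(extract(stego, 4, key=42))   # -> [1, 0, 1, 1]
```

    Each host value changes by at most one least-significant unit, which is why such embedding is imperceptible in noisy host data.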

  15. Diamond-Cutter Drill Bits

    SciTech Connect

    1995-11-01

    Geothermal Energy Program, Office of Geothermal and Wind Technologies. Diamond-cutter drill bits cut through tough rock quicker, reducing the cost of drilling for energy resources. The U.S. Department of Energy (DOE) contributed markedly to the geothermal, oil, and gas industries through the development of the advanced polycrystalline diamond compact (PDC) drill bit. Introduced in the 1970s by General Electric Company (GE), the PDC bit uses thin, diamond layers bonded to t

  16. Entanglement and Quantum Error Correction with Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Reed, Matthew

    2015-03-01

    Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These ``transmon'' qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it, before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.

  17. Classical teleportation of a quantum Bit

    PubMed

    Cerf; Gisin; Massar

    2000-03-13

    Classical teleportation is defined as a scenario where the sender is given the classical description of an arbitrary quantum state while the receiver simulates any measurement on it. This scenario is shown to be achievable by transmitting only a few classical bits if the sender and receiver initially share local hidden variables. Specifically, a communication of 2.19 bits is sufficient on average for the classical teleportation of a qubit, when restricted to von Neumann measurements. The generalization to positive-operator-valued measurements is also discussed.

  18. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
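
The setting described, coding each source bit against a probability estimate that depends on the bits coded so far, can be illustrated with an idealized infinite-precision arithmetic coder. This is a generic reference sketch of the problem setting, not the authors' technique (which the abstract positions as an alternative to arithmetic coding); the Laplace estimator used as the adaptive model is an assumption for illustration.

```python
from fractions import Fraction

def encode(bits, p1_of):
    """Shrink [0,1) once per source bit, using p1_of(prefix) as P(bit=1)."""
    lo, hi = Fraction(0), Fraction(1)
    for i, b in enumerate(bits):
        mid = lo + (hi - lo) * (1 - p1_of(bits[:i]))
        lo, hi = (mid, hi) if b else (lo, mid)
    # Emit a dyadic (binary) interval contained in [lo, hi).
    a, c, out = Fraction(0), Fraction(1), []
    while not (lo <= a and c <= hi):
        m = (a + c) / 2
        if m <= lo:
            out.append(1); a = m
        elif m >= hi:
            out.append(0); c = m
        else:
            out.append(1); a = m      # m lies inside the target: go right
    return out

def decode(code, n, p1_of):
    """Replay the adaptive model against a point of the encoded interval."""
    x, w = Fraction(0), Fraction(1)
    for bit in code:
        w /= 2
        x += bit * w                  # left end of the emitted dyadic interval
    lo, hi, out = Fraction(0), Fraction(1), []
    for _ in range(n):
        mid = lo + (hi - lo) * (1 - p1_of(out))
        if x >= mid:
            out.append(1); lo = mid
        else:
            out.append(0); hi = mid
    return out

def laplace(prev):                    # adaptive estimate of P(next bit = 1)
    return Fraction(sum(prev) + 1, len(prev) + 2)

data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
assert decode(encode(data, laplace), len(data), laplace) == data
```

Because each step consumes exactly -log2(p) bits of interval width, the redundancy of such a coder can be made arbitrarily small; the practical question the abstract addresses is doing this with a simpler and faster decoder.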

  19. Achieving the Complete-Basis Limit in Large Molecular Clusters: Computationally Efficient Procedures to Eliminate Basis-Set Superposition Error

    NASA Astrophysics Data System (ADS)

    Richard, Ryan M.; Herbert, John M.

    2013-06-01

Previous electronic structure studies that have relied on fragmentation have been primarily interested in those methods' abilities to replicate the supersystem energy (or a related energy difference) without recourse to the ability of those supersystem results to replicate experiment or high accuracy benchmarks. Here we focus on replicating accurate ab initio benchmarks that are suitable for comparison to experimental data. In doing this it becomes imperative that we correct our methods for basis-set superposition errors (BSSE) in a computationally feasible way. This criterion leads us to develop a new method for BSSE correction, which we term the many-body counterpoise correction, or MBn for short. MBn is truncated at order n, in much the same manner as a normal many-body expansion, leading to a decrease in computational time. Furthermore, its formulation in terms of fragments makes it especially suitable for use with pre-existing fragment codes. A secondary focus of this study is directed at assessing fragment methods' abilities to extrapolate to the complete basis set (CBS) limit as well as compute approximate triples corrections. Ultimately, by analysis of (H_2O)_6 and (H_2O)_{10}F^- systems, it is concluded that with large enough basis sets (triple- or quadruple-zeta) fragment-based methods can replicate high-level benchmarks in a fraction of the time.
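
The truncated many-body expansion that MBn mirrors can be sketched generically. The fragment labels and energy function below are hypothetical placeholders, not the authors' MBn code; the toy model has strictly pairwise interactions, so the two-body truncation reproduces the full energy exactly.

```python
from itertools import combinations

def mbe2_energy(fragments, energy):
    """Two-body truncation of the many-body expansion:
    E ~ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)."""
    total = sum(energy((f,)) for f in fragments)
    for f, g in combinations(fragments, 2):
        total += energy((f, g)) - energy((f,)) - energy((g,))
    return total

# Toy model: assumed one-body energies plus pairwise couplings only.
base = {'A': -1.0, 'B': -2.0, 'C': -1.5}
pair = {frozenset('AB'): -0.1, frozenset('AC'): -0.05, frozenset('BC'): -0.2}

def toy_energy(frags):
    e = sum(base[f] for f in frags)
    e += sum(v for k, v in pair.items() if k <= set(frags))
    return e

print(mbe2_energy('ABC', toy_energy))   # matches toy_energy('ABC') here
```

Real fragment energies contain three-body and higher terms, which is why the truncation order n (and, for MBn, the corresponding counterpoise treatment) controls the accuracy/cost trade-off.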

  20. Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-time Minimal-byte-error Probability Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Vo, Q. D.

    1984-01-01

A program which was written to simulate Real Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.
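
The BER-versus-SNR curves this kind of simulation produces can be reproduced generically with a Monte Carlo sketch. The code below simulates plain uncoded BPSK on an AWGN channel, not the RTMBEP decoder or the FUM/Reed-Solomon concatenation; the bit count and SNR grid are arbitrary illustrative choices.

```python
import numpy as np

def bpsk_ber(ebno_db, n_bits=200_000, seed=0):
    """Monte Carlo bit error rate of uncoded BPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    ebno = 10 ** (ebno_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                       # 0 -> +1, 1 -> -1
    noise = rng.normal(0.0, np.sqrt(1 / (2 * ebno)), n_bits)
    decisions = ((symbols + noise) < 0).astype(int)
    return float(np.mean(decisions != bits))

for snr_db in (0, 2, 4, 6):
    print(snr_db, "dB:", bpsk_ber(snr_db))       # BER falls steeply with SNR
```

Reaching a BER of 10^-6 directly by Monte Carlo requires on the order of 10^8 bits per point, which is why studies like this one lean on coded-system analysis rather than brute-force simulation at the target error rate.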

  1. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional-global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  2. Drilling bits optimized for the Paris basin

    SciTech Connect

Vennin, H.C. (Pouyastruc)

    1989-07-31

    Paris basin wells have been successfully drilled using steel-body bits with stud-type cutters. These bits offer the possibility of optimizing the bit-face based on the strata to be drilled, as well as allowing replacement of worn cutters. This article discusses: bit manufacturing; bit repair; optimizing bits; hydraulics.

  3. Adaptive downsampling to improve image compression at low bit rates.

    PubMed

    Lin, Weisi; Dong, Li

    2006-09-01

At low bit rates, better coding quality can be achieved by downsampling the image prior to compression and estimating the missing portion after decompression. This paper presents a new algorithm in such a paradigm, based on the adaptive decision of appropriate downsampling directions/ratios and quantization steps, in order to achieve higher coding quality at low bit rates while taking local visual significance into consideration. The full-resolution image can be restored from the DCT coefficients of the downsampled pixels so that the spatial interpolation required otherwise is avoided. The proposed algorithm significantly raises the critical bit rate to approximately 1.2 bpp, from 0.15-0.41 bpp in the existing downsample-prior-to-JPEG schemes and, therefore, outperforms the standard JPEG method in a much wider bit-rate scope. The experiments have demonstrated better PSNR improvement over the existing techniques before the critical bit rate. In addition, the adaptive mode decision not only makes the critical bit rate less image-independent, but also automates the switching of coders in variable bit-rate applications, since the algorithm turns to the standard JPEG method whenever it is necessary at higher bit rates.

  4. Drill bit assembly for releasably retaining a drill bit cutter

    DOEpatents

    Glowka, David A.; Raymond, David W.

    2002-01-01

A drill bit assembly is provided for releasably retaining a polycrystalline diamond compact drill bit cutter. Two adjacent cavities formed in a drill bit body house, respectively, the disc-shaped drill bit cutter and a wedge-shaped cutter lock element with a removable fastener. The cutter lock element engages one flat surface of the cutter to retain the cutter in its cavity. The drill bit assembly thus enables the cutter to be locked against axial and/or rotational movement while still providing for easy removal of a worn or damaged cutter. The ability to adjust and replace cutters in the field reduces the effect of wear, helps maintain performance and improves drilling efficiency.

  5. Experimental unconditionally secure bit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adan; Pan, Jian-Wei

    2014-03-01

Quantum physics allows unconditionally secure communication between parties that trust each other. However, when they do not trust each other, such as in bit commitment, quantum physics alone is not enough to guarantee security. Only when relativistic causality constraints are combined with quantum physics does unconditionally secure bit commitment become feasible. Here we experimentally implement a quantum bit commitment with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. Bits are successfully committed with less than 5.68×10⁻² cheating probability. This provides an experimental proof of unconditionally secure bit commitment and demonstrates the feasibility of relativistic quantum communication.

  6. Positional Information, in bits

    NASA Astrophysics Data System (ADS)

    Dubuis, Julien; Bialek, William; Wieschaus, Eric; Gregor, Thomas

    2010-03-01

Pattern formation in early embryonic development provides an important testing ground for ideas about the structure and dynamics of genetic regulatory networks. Spatial variations in the concentration of particular transcription factors act as ``morphogens,'' driving more complex patterns of gene expression that in turn define cell fates, which must be appropriate to the physical location of the cells in the embryo. Thus, in these networks, the regulation of gene expression serves to transmit and process ``positional information.'' Here, using the early Drosophila embryo as a model system, we measure the amount of positional information carried by a group of four genes (the gap genes Hunchback, Krüppel, Giant and Knirps) that respond directly to the primary maternal morphogen gradients. We find that the information carried by individual gap genes is much larger than one bit, so that their spatial patterns provide much more than the location of an ``expression boundary.'' Preliminary data indicate that, taken together, these genes provide enough information to specify the location of every row of cells along the embryo's anterior-posterior axis.
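
"Positional information" here is a mutual information, measured in bits, between a cell's position and a gene's expression level. A minimal histogram-based estimator on synthetic data is sketched below; the sigmoidal expression profile, noise level, and bin count are assumptions for illustration, not the Drosophila measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "gap gene" profile: sigmoidal expression vs. position, plus noise.
positions = rng.uniform(0.0, 1.0, 50_000)
expression = 1 / (1 + np.exp(-(positions - 0.5) / 0.05))
expression += rng.normal(0.0, 0.05, positions.size)

def mutual_information_bits(x, y, bins=30):
    """Histogram estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal over y
    py = pxy.sum(axis=0, keepdims=True)          # marginal over x
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

print(mutual_information_bits(positions, expression))
```

A sharp, low-noise profile like this carries more than one bit: it tells a cell which side of the boundary it is on, plus graded detail within the transition region. Histogram estimators are biased upward for finite samples; the paper's analysis requires more careful estimation than this sketch.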

  7. Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation

    NASA Technical Reports Server (NTRS)

    Swift, G.

    2002-01-01

JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.

  8. Addressing medical coding and billing part II: a strategy for achieving compliance. A risk management approach for reducing coding and billing errors.

    PubMed Central

    Adams, Diane L.; Norman, Helen; Burroughs, Valentine J.

    2002-01-01

Medical practice today, more than ever before, places greater demands on physicians to see more patients, provide more complex medical services and adhere to stricter regulatory rules, leaving little time for coding and billing. Yet, the need to adequately document medical records, appropriately apply billing codes and accurately charge insurers for medical services is essential to the medical practice's financial condition. Many physicians rely on office staff and billing companies to process their medical bills without ever reviewing the bills before they are submitted for payment. Some physicians may not be receiving the payment they deserve when they do not sufficiently oversee the medical practice's coding and billing patterns. This article emphasizes the importance of monitoring and auditing medical record documentation and coding application as a strategy for achieving compliance and reducing billing errors. When medical bills are submitted with missing and incorrect information, they may result in unpaid claims and loss of revenue to physicians. Addressing Medical Audits, Part I--A Strategy for Achieving Compliance--CMS, JCAHO, NCQA, published January 2002 in the Journal of the National Medical Association, stressed the importance of preparing the medical practice for audits. The article highlighted steps the medical practice can take to prepare for audits and presented examples of guidelines used by regulatory agencies to conduct both medical and financial audits. The Medicare Integrity Program was cited as an example of guidelines used by regulators to identify coding errors during an audit and deny payment to providers when improper billing occurs. For each denied claim, payments owed to the medical practice are also denied. Health care is, no doubt, a costly endeavor for health care providers, consumers and insurers. The potential risk to physicians for improper billing may include loss of revenue, fraud investigations, financial sanction

  9. Experimental unconditionally secure bit commitment.

    PubMed

    Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adán; Pan, Jian-Wei

    2014-01-10

Quantum physics allows for unconditionally secure communication between parties that trust each other. However, when the parties do not trust each other such as in the bit commitment scenario, quantum physics is not enough to guarantee security unless extra assumptions are made. Unconditionally secure bit commitment only becomes feasible when quantum physics is combined with relativistic causality constraints. Here we experimentally implement a quantum bit commitment protocol with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. The security of the protocol relies on the properties of quantum information and relativity theory. In each run of the experiment, a bit is successfully committed with less than 5.68×10⁻² cheating probability. This demonstrates the experimental feasibility of quantum communication with relativistic constraints.

  10. Experimental Unconditionally Secure Bit Commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adán; Pan, Jian-Wei

    2014-01-01

Quantum physics allows for unconditionally secure communication between parties that trust each other. However, when the parties do not trust each other such as in the bit commitment scenario, quantum physics is not enough to guarantee security unless extra assumptions are made. Unconditionally secure bit commitment only becomes feasible when quantum physics is combined with relativistic causality constraints. Here we experimentally implement a quantum bit commitment protocol with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems in which the results are transmitted via free-space optical communication to two agents separated by more than 20 km. The security of the protocol relies on the properties of quantum information and relativity theory. In each run of the experiment, a bit is successfully committed with less than 5.68×10⁻² cheating probability. This demonstrates the experimental feasibility of quantum communication with relativistic constraints.

  12. Modeling and analysis of stick-slip and bit bounce in oil well drillstrings equipped with drag bits

    NASA Astrophysics Data System (ADS)

    Kamel, Jasem M.; Yigit, Ahmet S.

    2014-12-01

Rotary drilling systems equipped with drag bits or fixed cutter bits (also called PDC), used for drilling deep boreholes for the production and the exploration of oil and natural gas, often suffer from severe vibrations. These vibrations are detrimental to the bit and the drillstring, causing different failures of equipment (e.g., twist-off, abrasive wear of tubulars, bit damage) and inefficiencies in the drilling operation (reduction of the rate of penetration (ROP)). Despite extensive research conducted in the last several decades, there is still a need to develop a consistent model that adequately captures all phenomena related to drillstring vibrations such as nonlinear cutting and friction forces at the bit/rock formation interface, drive system characteristics and coupling between various motions. In this work, a physically consistent nonlinear model for the axial and torsional motions of a rotating drillstring equipped with a drag bit is proposed. A more realistic cutting and contact model is used to represent bit/rock formation interaction at the bit. The dynamics of both drive systems for rotary and translational motions of the drillstring, including the hoisting system, are also considered. In this model, the rotational and translational motions of the bit are obtained as a result of the overall dynamic behavior rather than prescribed functions or constants. The dynamic behavior predicted by the proposed model agrees qualitatively with field observations and published theoretical results. The effects of various operational parameters on the dynamic behavior are investigated with the objective of achieving a smooth and efficient drilling. The results show that with proper choice of operational parameters, it may be possible to minimize the effects of stick-slip and bit-bounce and increase the ROP. Therefore, it is expected that the results will help reduce the time spent in drilling process and costs incurred due to severe vibrations and consequent

  13. Precious bits: frame synchronization in Jet Propulsion Laboratory's Advanced Multi-Mission Operations System (AMMOS)

    NASA Technical Reports Server (NTRS)

    Wilson, E.

    2001-01-01

    The Jet Propulsion Laboratory's (JPL) Advanced Multi-Mission Operations System (AMMOS) system processes data received from deep-space spacecraft, where error rates are high, bit rates are low, and every bit is precious. Frame synchronization and data extraction as performed by AMMOS enhanced data acquisition and reliability for maximum data return and validity.

  14. String bit models for superstring

    SciTech Connect

    Bergman, O.; Thorn, C.B.

    1995-12-31

The authors extend the model of string as a polymer of string bits to the case of superstring. They mainly concentrate on type II-B superstring, with some discussion of the obstacles presented by non-II-B superstrings, together with possible strategies for surmounting them. As with previous work on bosonic string, they work within the light-cone gauge. The bit model possesses a good deal less symmetry than the continuous string theory. For one thing, the bit model is formulated as a Galilei invariant theory in (D − 2) + 1 dimensional space-time. This means that Poincare invariance is reduced to the Galilei subgroup in D − 2 space dimensions. Naturally the supersymmetry present in the bit model is likewise dramatically reduced. Continuous string can arise in the bit models with the formation of infinitely long polymers of string bits. Under the right circumstances (at the critical dimension) these polymers can behave as string moving in D dimensional space-time enjoying the full N = 2 Poincare supersymmetric dynamics of type II-B superstring.

  15. Improved error resilient H.264 coding scheme using SP/SI macroblocks

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaosong; Kung, Wei-Ying; Kuo, C.-C. Jay

    2005-03-01

An error resilient H.264 coding scheme using SP/SI macroblocks is presented in this work. It is able to generate alternative SP macroblocks utilizing multiple reference frames and the concealed versions of the corrupted frames. These alternative macroblocks are used to replace the original ones in the output video stream to protect them from being affected by previous errors detected at the decoder side. The introduced bit rate is further reduced by adjusting quantization levels adaptively. Specifically, different versions of alternative SP macroblocks can be coded using different quantization levels, which are associated with different levels of error resilience performance and different bit-rate consumption. A proper alternative version is encoded according to the importance of the macroblock. The importance of the macroblock is measured by its influence on subsequent frames if the macroblock is not correctly reconstructed. Accordingly, fewer bits are used to replace those macroblocks with less importance. Simulation results demonstrate that the proposed approach achieves an excellent error resilient capability and an improvement in reducing the bit rate overhead.

  16. Bit-level systolic arrays

    SciTech Connect

    De Groot, A.J.

    1989-01-01

In this dissertation the author considered the design of bit-level systolic arrays where the basic computational unit consists of a simple one-bit logic unit, so that the systolic process is carried out at the level of individual bits. In order to pursue the foregoing research, several areas have been studied. First, the concept of systolic processing has been investigated. Several important algorithms were investigated and put into systolic form using graph-theoretic methods. The bit-level, word-level and block-level systolic arrays which have been designed for these algorithms exhibit linear speedup with respect to the number of processors and exhibit efficiency close to 100%, even with low interprocessor communication bandwidth. Block-level systolic arrays deal with blocks of data with block-level operations and communications. Block-level systolic arrays improve cell efficiency and are more efficient than their word-level counterparts. A comparison of bit-level, word-level and block-level systolic arrays was performed. In order to verify the foregoing theory and analysis, a systolic processor called the SPRINT was developed to provide an environment where bit-level, word-level and block-level systolic algorithms could be confirmed by direct implementation rather than by computer simulation. The SPRINT is a supercomputer-class, 64-element multiprocessor with a reconfigurable interconnection network. The theory has been confirmed by the execution on the SPRINT of the bit-level, word-level, and block-level systolic algorithms presented in the dissertation.

  17. Drill bit method and apparatus

    SciTech Connect

    Davis, K.

    1986-08-19

    This patent describes a drill bit having a lower cutting face which includes a plurality of stud assemblies radially spaced from a longitudinal axial centerline of the bit, each stud assembly being mounted within a stud receiving socket which is formed in the bit cutting face. The method of removing the stud assemblies from the sockets of the bit face consists of: forming a socket passageway along the longitudinal axial centerline of the stud receiving socket and extending the passageway rearwardly of the socket; forming a blind passageway which extends from the bit cutting face into the bit body, and into intersecting relationship respective to the socket passageway; while arranging the socket passageway and the blind passageway laterally respective to one another; forming a wedge face on one side of a tool, forming a support post which has one side inclined to receive the wedge face of the tool thereagainst; forcing a ball to move from the cutting face of the bit, into the blind passageway, onto the support post, then into the socket passageway, and into abutting engagement with a rear end portion of the stud assembly; placing the wedge face against the side of the ball which is opposed to the stud assembly; forcing the tool to move into the blind passageway while part of the tool engages the blind passageway and the wedge face engages the ball and thereby forces the ball to move in a direction away from the blind passageway; applying sufficient force to the tool to cause the ball to engage the stud assembly with sufficient force to be moved outwardly in a direction away from the socket, thereby releasing the stud assembly from the socket.

  18. Drill bit and method of renewing drill bit cutting face

    SciTech Connect

    Davis, K.

    1987-04-07

    This patent describes a drill bit having a lower formation engaging face which includes sockets formed therein, a stud assembly mounted in each socket. The method is described of removing the stud assemblies from the bit face comprises: placing a seal means about each stud assembly so that a stud assembly can sealingly reciprocate within a socket with a piston-like action; forming a reduced diameter passageway which extends rearwardly from communication with each socket to the exterior of the bit; flowing fluid into the passageway, thereby exerting fluid pressure against the rear end of the stud assembly; applying sufficient pressure to the fluid within the passageway to produce a pressure differential across the stud assembly to force the stud assembly to move outwardly in a direction away from the socket, thereby releasing the stud assembly from the socket.

  19. Bit timing with pulse distortion and intersymbol interference

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1977-01-01

Pulse distortion and intersymbol interference due to insufficient filtering in PCM and PSK channels cause performance degradation in terms of both bit error probabilities and timing errors. This paper reports the results of a study analyzing these effects on bit timing subsystems. Consideration is given to both the filter-rectifier and transition tracking types of timing subsystem. Although both systems perform similarly with high SNR and ideal pulse models, pulse distortion and intersymbol interference affect each differently. The primary effect in both systems is an irreducible mean-squared timing error due to intersymbol interference, which limits the ultimate performance. Design procedures to minimize the anomalies of both systems are presented, and indicate modifications of the standard timing subsystems. It is found that specific design directions depend on whether intersymbol interference or receiver noise tends to dominate.

  20. Bit by bit: the Darwinian basis of life.

    PubMed

    Joyce, Gerald F

    2012-01-01

    All known examples of life belong to the same biology, but there is increasing enthusiasm among astronomers, astrobiologists, and synthetic biologists that other forms of life may soon be discovered or synthesized. This enthusiasm should be tempered by the fact that the probability for life to originate is not known. As a guiding principle in parsing potential examples of alternative life, one should ask: How many heritable "bits" of information are involved, and where did they come from? A genetic system that contains more bits than the number that were required to initiate its operation might reasonably be considered a new form of life.

  2. Technical note: signal-to-noise performance evaluation of a new 12-bit digitizer on time-of-flight mass spectrometer.

    PubMed

    Hondo, Toshinobu; Kawai, Yousuke; Toyoda, Michisato

    2015-01-01

Rapid acquisition of time-of-flight (TOF) spectra from fewer acquisitions on average was investigated using the newly introduced 12-bit digitizer, Keysight model U5303A. This is expected to achieve spectrum acquisition 32 times faster than the commonly used 8-bit digitizer for an equal signal-to-noise (S/N) ratio. Averaging fewer pulses improves the detection speed and chromatographic separation performance. However, increasing the analog-to-digital converter bit resolution for a high-frequency signal, such as a TOF spectrum, increases the system noise and requires the timing jitter (aperture error) to be minimized. We studied the relationship between the S/N ratio and the average number of acquisitions using the U5303A and compared this with an 8-bit digitizer. The results show that the noise, measured as root-mean-square, decreases in inverse proportion to the square root of the number of averaged acquisitions without background subtraction, which means that almost no systematic noise existed in our signal bandwidth of interest (a few hundred megahertz). In comparison, 8-bit digitizers that are commonly used in the market require 32 times more pulses with background subtraction.
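
The central scaling here, RMS noise falling as 1/sqrt(N) for N averaged acquisitions, is easy to check on synthetic white noise. No real digitizer data is assumed; the record length and seed below are arbitrary.

```python
import numpy as np

def rms_after_averaging(n_avg, n_samples=4096, seed=7):
    """RMS of the mean of n_avg independent unit-variance noise records."""
    rng = np.random.default_rng(seed)
    records = rng.normal(0.0, 1.0, size=(n_avg, n_samples))
    return float(records.mean(axis=0).std())

for n in (1, 4, 16, 64):
    print(n, rms_after_averaging(n))    # falls roughly as 1/sqrt(n)
```

The 1/sqrt(N) law holds only while the noise between acquisitions is uncorrelated; systematic noise (e.g., periodic interference within the signal bandwidth) does not average away, which is why the paper checks for it explicitly.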

  3. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    SciTech Connect

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine whether data-compression codes could be used to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of the codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average number of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through comma-free code word assignments based on conditional probabilities of character occurrence.
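The error-propagation effect described above can be demonstrated with any variable-length prefix code. The sketch below (a toy five-symbol alphabet of my choosing, not the paper's 58-character set) builds a Huffman code, then shows that a single flipped bit can desynchronize the decoder and corrupt several subsequent characters:

```python
import heapq
import itertools

def huffman_code(freqs):
    """Build a prefix-free (Huffman) code from a {symbol: weight} map."""
    tie = itertools.count()                  # tie-breaker so dicts are never compared
    heap = [(w, next(tie), {s: ""}) for s, w in sorted(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

def decode(bits, code):
    """Greedy prefix decoding; unparsed trailing bits are silently dropped."""
    rev = {b: s for s, b in code.items()}
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)

freqs = {"e": 12, "t": 9, "a": 8, "o": 7, "_": 5}   # illustrative weights
code = huffman_code(freqs)
text = "tea_to_a_tote"
bits = "".join(code[ch] for ch in text)
corrupted = bits[:3] + ("1" if bits[3] == "0" else "0") + bits[4:]
# One bit error in a Huffman stream can garble a run of characters.
print(decode(bits, code), "->", decode(corrupted, code))
```

A comma-free code limits this damage because the decoder resynchronizes at the next code-word boundary, which is the property the study exploits.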

  4. Introduction to the Mu-bit

    NASA Astrophysics Data System (ADS)

    Smarandache, Florentin; Christianto, V.

    2011-03-01

    Mu-bit is defined here as a `multi-space bit'. It differs from the standard meaning of a bit in conventional computation because, in Smarandache's multispace theory (also spelt multi-space), the bit is created simultaneously in many subspaces (which together form a multi-space). This new `bit' term is different from the multi-valued bit already known in computer technology, for example as MVLong. The concept is also different from the qubit of quantum computation terminology. We know that using quantum mechanical logic we could introduce a new way of computation with the `qubit' (quantum bit), but the logic remains von Neumann. Now, from the viewpoint of m-valued multi-space logic, we introduce a new term: the `mu-bit' (from `multi-space bit').

  5. Study of bit error rate (BER) for multicarrier OFDM

    NASA Astrophysics Data System (ADS)

    Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad

    2012-10-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is increasingly used in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient spectral usage and its high data rate. It has therefore been used in many wired and wireless communication systems such as DSL, wireless networks and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rates of the individual carriers result in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying some of the key parameters of the system, such as the IFFT size, number of carriers and SNR. The simulation results give a visualization of the BER to expect when the signal goes through those channels.
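The baseline case in such simulations is BPSK over AWGN, which is also the per-subcarrier behavior of OFDM on a flat channel. The sketch below is a minimal single-carrier Monte Carlo estimate (not the paper's full OFDM chain) compared against the closed-form BER, 0.5*erfc(sqrt(Eb/N0)):

```python
import math
import random

def bpsk_ber(ebn0_db, n_bits=200_000, seed=7):
    """Monte Carlo BER of BPSK over AWGN at the given Eb/N0 in dB."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))        # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0.0, sigma)
        if (received > 0) != bool(bit):      # hard-decision threshold at zero
            errors += 1
    return errors / n_bits

def bpsk_ber_theory(ebn0_db):
    """Closed-form AWGN BER for BPSK: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))

sim, theory = bpsk_ber(4.0), bpsk_ber_theory(4.0)
print(sim, theory)
```

Fading channels shift this curve to the right; the Doppler- and delay-spread channels in the paper require a channel model on top of this baseline.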

  6. A bit serial sequential circuit

    NASA Technical Reports Server (NTRS)

    Hu, S.; Whitaker, S.

    1990-01-01

    Normally a sequential circuit with n state variables consists of n unique hardware realizations, one for each state variable. All variables are processed in parallel. This paper introduces a new sequential circuit architecture that allows the state variables to be realized in a serial manner using only one next state logic circuit. The action of processing the state variables in a serial manner has never been addressed before. This paper presents a general design procedure for circuit construction and initialization. Utilizing pass transistors to form the combinational next state forming logic in synchronous sequential machines, a bit serial state machine can be realized with a single NMOS pass transistor network connected to shift registers. The bit serial state machine occupies less area than other realizations which perform parallel operations. Moreover, the logical circuit of the bit serial state machine can be modified by simply changing the circuit input matrix to develop an adaptive state machine.

  7. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
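The innermost (n, n-16) cyclic code mentioned above appends 16 parity bits computed by a CRC. The sketch below implements a bitwise CRC-16 with the CCITT polynomial; I am assuming the common init value 0xFFFF here, which matches the widely used CRC-16/CCITT-FALSE variant rather than asserting it is exactly the CCSDS parameterization:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1."""
    crc = init
    for byte in data:
        crc ^= byte << 8                     # bring the next byte into the register
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
check = crc16_ccitt(frame)
# Appending these 16 parity bits forms an (n, n-16) codeword; any single bit
# error changes the recomputed CRC, so the receiver detects the corruption.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(hex(check), hex(crc16_ccitt(corrupted)))
```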

  8. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  9. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio, a large amount of auxiliary embedded location information, and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the length of the location map, and enhances capacity control.
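The reversibility at the heart of such schemes is easiest to see in the classic difference-expansion step (Tian's method), a simpler ancestor of the prediction error expansion used in this paper. The sketch below hides one payload bit in an integer sample pair and recovers both the bit and the original pair exactly:

```python
def embed_bit(x: int, y: int, bit: int):
    """Difference expansion: hide one bit in an integer sample pair."""
    avg, diff = (x + y) // 2, x - y
    diff2 = 2 * diff + bit                   # expand the difference, append the bit
    return avg + (diff2 + 1) // 2, avg - diff2 // 2

def extract_bit(x2: int, y2: int):
    """Recover the bit and restore the original pair exactly (reversibility)."""
    diff2 = x2 - y2
    bit, diff = diff2 & 1, diff2 >> 1        # arithmetic shift floors like embedding
    avg = (x2 + y2) // 2                     # the floored average is invariant
    return (avg + (diff + 1) // 2, avg - diff // 2), bit

stego = embed_bit(1040, 1030, 1)
restored, bit = extract_bit(*stego)
print(stego, restored, bit)
```

Real schemes must additionally handle overflow (samples pushed outside the valid range), which is exactly what the location map and histogram shifting in the abstract are for.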

  10. High Capacity Reversible Watermarking for Audio by Histogram Shifting and Predicted Error Expansion

    PubMed Central

    Wang, Fei; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio, a large amount of auxiliary embedded location information, and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the length of the location map, and enhances capacity control. PMID:25097883

  11. Precision goniometer equipped with a 22-bit absolute rotary encoder.

    PubMed

    Xiaowei, Z; Ando, M; Jidong, W

    1998-05-01

    The calibration of a compact precision goniometer equipped with a 22-bit absolute rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten times larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated by an incremental rotary encoder. It became evident that the accuracy of the absolute encoder is approximately 18 bit due to systematic errors.

  12. Resolution upgrade toward 6-bit optical quantization using power-to-wavelength conversion for photonic analog-to-digital conversion.

    PubMed

    Takahashi, Koji; Matsui, Hideki; Nagashima, Tomotaka; Konishi, Tsuyoshi

    2013-11-15

    We demonstrate a resolution upgrade toward 6-bit optical quantization using power-to-wavelength conversion without an increase in system parallelism. Expansion of the full-scale input range is employed in conjunction with a reduction of the quantization step size, while keeping the sampling-rate-transparent characteristic at several hundred GS/s. The effective number of bits is estimated to be 5.74, and the integral nonlinearity error and differential nonlinearity error are both estimated to be less than 1 least significant bit.

  13. Rapid programmable/code-length-variable, time-domain bit-by-bit code shifting for high-speed secure optical communication.

    PubMed

    Gao, Zhensen; Dai, Bo; Wang, Xu; Kataoka, Nobuyuki; Wada, Naoya

    2011-05-01

    We propose and experimentally demonstrate a time-domain bit-by-bit code-shifting scheme that can rapidly program ultralong, code-length variable optical code by using only a dispersive element and a high-speed phase modulator for improving information security. The proposed scheme operates in the bit overlap regime and could eliminate the vulnerability of extracting the code by analyzing the fine structure of the time-domain spectral phase encoded signal. It is also intrinsically immune to eavesdropping via conventional power detection and differential-phase-shift-keying (DPSK) demodulation attacks. With this scheme, 10 Gbits/s of return-to-zero-DPSK data secured by bit-by-bit code shifting using up to 1024 chip optical code patterns have been transmitted over 49 km error free. The proposed scheme exhibits the potential for high-data-rate secure optical communication and to realize even one time pad.

  14. Hey! A Tarantula Bit Me!

    MedlinePlus

    Reviewed by: Elana Pearl Ben-Joseph, MD. Date reviewed: April 2013.

  15. Hey! A Mosquito Bit Me!

    MedlinePlus

    What's a Mosquito? A mosquito (say: mus-KEE-toe) is an ...

  16. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  17. A bit-serial first-level calorimeter trigger for LHC detectors

    SciTech Connect

    Bohm, C.; Zhao, X.; Appelquist, G.; Engstroem, M.; Hellman, S.; Holmgren, S.O.; Johansson, E.; Yamdagni, N.

    1994-12-31

    A first-level calorimeter trigger design, implemented as a farm of local bit-serial systolic arrays, is presented. The massively bit-serial operation can achieve higher processing throughput and more compact designs than a conventional bit-parallel data representation. The construction is based on high-speed optical fiber data transmission, application-specific integrated circuits (ASICs) and multi-chip module (MCM) packaging technologies.

  18. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
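The core mechanism described above is per-coefficient quantization of the DCT block. The sketch below is my own minimal illustration, using a naive DCT and a uniform matrix of 8s as a stand-in for the patented perceptual matrix: each coefficient is divided by its matrix entry and rounded, so larger entries give coarser steps, lower bit rate, and more quantization error.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an n x n block (orthonormal scaling)."""
    n = len(block)
    def a(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    return [[a(u) * a(v) * sum(block[x][y]
                               * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                               * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                               for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]

def quantize(coeffs, q):
    """Divide each DCT coefficient by its quantization-matrix entry and round."""
    return [[round(c / qe) for c, qe in zip(row, qrow)]
            for row, qrow in zip(coeffs, q)]

flat = [[10] * 4 for _ in range(4)]          # a constant block: energy only in DC
quantized = quantize(dct2(flat), [[8] * 4 for _ in range(4)])
print(quantized)
```

In the patented method the matrix entries are not uniform: they are derived from visual masking models so that the error each entry introduces stays below the visibility threshold.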

  19. Stability of single skyrmionic bits

    NASA Astrophysics Data System (ADS)

    Hagemeister, J.; Romming, N.; von Bergmann, K.; Vedmedenko, E. Y.; Wiesendanger, R.

    2015-10-01

    The switching between topologically distinct skyrmionic and ferromagnetic states has been proposed as a bit operation for information storage. While long lifetimes of the bits are required for data storage devices, the lifetimes of skyrmions have not been addressed so far. Here we show by means of atomistic Monte Carlo simulations that the field-dependent mean lifetimes of the skyrmionic and ferromagnetic states have a high asymmetry with respect to the critical magnetic field, at which these lifetimes are identical. According to our calculations, the main reason for the enhanced stability of skyrmions is a different field dependence of skyrmionic and ferromagnetic activation energies and a lower attempt frequency of skyrmions rather than the height of energy barriers. We use this knowledge to propose a procedure for the determination of effective material parameters and the quantification of the Monte Carlo timescale from the comparison of theoretical and experimental data.

  20. Demonstration of low-power bit-interleaving TDM PON.

    PubMed

    Van Praet, Christophe; Chow, Hungkei; Suvakovic, Dusan; Van Veen, Doutje; Dupas, Arnaud; Boislaigue, Roger; Farah, Robert; Lau, Man Fai; Galaro, Joseph; Qua, Gin; Anthapadmanabhan, N Prasanth; Torfs, Guy; Yin, Xin; Vetter, Peter

    2012-12-10

    A functional demonstration of a bit-interleaving TDM downstream protocol for passive optical networks (Bi-PON) is reported. The proposed protocol offers a significant reduction in dynamic power consumption in the customer premises equipment compared with the conventional TDM protocol. It allows the relevant bits of all the aggregated incoming data to be selected immediately after clock and data recovery (CDR) and hence allows subsequent hardware to run at the much lower user rate. Comparison of experimental results of FPGA-based implementations of Bi-PON and XG-PON shows that more than 30x energy savings in protocol processing is achievable. PMID:23262914
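The bit-interleaving idea can be sketched in a few lines. This is a deliberately simplified model (fixed round-robin interleaving, no framing or dynamic bitmaps, which the real Bi-PON protocol does have): the receiver keeps only every n-th bit right after CDR, so everything downstream runs at the user rate instead of the aggregate line rate.

```python
def interleave(user_streams):
    """Bit-interleave equal-length user bit lists: one bit per user per cycle."""
    return [b for bits in zip(*user_streams) for b in bits]

def deinterleave(stream, user_index, n_users):
    """What a Bi-PON receiver does after CDR: keep every n_users-th bit."""
    return stream[user_index::n_users]

users = [[1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 0, 1]]
line = interleave(users)                     # the aggregate downstream bit stream
print(line)
print(deinterleave(line, 2, 3))              # user 2 recovers its own bits
```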

  1. Quantifying the Impact of Single Bit Flips on Floating Point Arithmetic

    SciTech Connect

    Elliott, James J; Mueller, Frank; Stoyanov, Miroslav K; Webster, Clayton G

    2013-08-01

    In high-end computing, the collective surface area, smaller fabrication sizes, and increasing density of components have led to an increase in the number of observed bit flips. If mechanisms are not in place to detect them, such flips produce silent errors, i.e. the code returns a result that deviates from the desired solution by more than the allowed tolerance and the discrepancy cannot be distinguished from the standard numerical error associated with the algorithm. These phenomena are believed to occur more frequently in DRAM, but logic gates, arithmetic units, and other circuits are also susceptible to bit flips. Previous work has focused on algorithmic techniques for detecting and correcting bit flips in specific data structures; however, these techniques suffer from a lack of generality and often cannot be implemented in a heterogeneous computing environment. Our work takes a novel approach to this problem. We focus on quantifying the impact of a single bit flip on specific floating-point operations. We analyze the error induced by flipping specific bits in the most widely used IEEE floating-point representation in an architecture-agnostic manner, i.e., without requiring proprietary information such as bit flip rates and vendor-specific circuit designs. We initially study dot products of vectors and demonstrate that not all bit flips create a large error and, more importantly, that the expected value of the relative magnitude of the error is very sensitive to the bit pattern of the binary representation of the exponent, which strongly depends on scaling. Our results are derived analytically and then verified experimentally with Monte Carlo sampling of random vectors. Furthermore, we consider the natural resilience properties of solvers based on fixed point iteration and demonstrate how the resilience of the Jacobi method for linear equations can be significantly improved by rescaling the associated matrix.
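The central observation, that the damage depends on which bit flips, can be reproduced directly on the IEEE 754 double format. The sketch below (my own illustration of the phenomenon, not the paper's analysis) flips a chosen bit of a double via its 64-bit integer representation: a mantissa LSB flip perturbs 1.0 by only 2^-52, while an exponent-bit flip rescales the value catastrophically.

```python
import struct

def flip_bit(x: float, i: int) -> float:
    """Flip bit i of an IEEE 754 double (0 = mantissa LSB, 52-62 = exponent, 63 = sign)."""
    (u,) = struct.unpack("<Q", struct.pack("<d", x))     # reinterpret as uint64
    (y,) = struct.unpack("<d", struct.pack("<Q", u ^ (1 << i)))
    return y

tiny = abs(flip_bit(1.0, 0) - 1.0)    # mantissa LSB: error of 2**-52
huge = abs(flip_bit(1.0, 61) - 1.0)   # exponent bit: 1.0 collapses to 2**-512
print(tiny, huge)
```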

  2. Panel focuses on diamond shear bit care

    SciTech Connect

    Park, A.

    1982-10-04

    This article examines drilling parameters and marketability of Stratapax bits. Finds that core bits drill from 2 to 3 times faster than conventional diamond bits, thereby reducing filtrate invasion. Predicts that high speed drilling, downhole motors, deeper wells and slim hole drilling will mean greater Stratapax use.

  3. Development of PDC Bits for Downhole Motors

    SciTech Connect

    Karasawa, H.; Ohno, T.

    1995-01-01

    To develop polycrystalline diamond compact (PDC) bits of the full-face type that can be applied to downhole motor drilling, drilling tests on granite and two types of andesite were conducted using bits with 98.43 and 142.88 mm diameters. The bits successfully drilled these types of rock at rotary speeds from 300 to 400 rpm.

  4. Avalanche and bit independence characteristics of double random phase encoding in the Fourier and Fresnel domains.

    PubMed

    Moon, Inkyu; Yi, Faliu; Lee, Yeon H; Javidi, Bahram

    2014-05-01

    In this work, we evaluate the avalanche effect and bit independence properties of the double random phase encoding (DRPE) algorithm in the Fourier and Fresnel domains. Experimental results show that DRPE has excellent bit independence characteristics in both the Fourier and Fresnel domains. However, DRPE achieves better avalanche effect results in the Fresnel domain than in the Fourier domain. DRPE gives especially poor avalanche effect results in the Fourier domain when only one bit is changed in the plaintext or in the encryption key. Despite this, DRPE shows satisfactory avalanche effect results in the Fresnel domain when any other number of bits changes in the plaintext or in the encryption key. To the best of our knowledge, this is the first report on the avalanche effect and bit independence behaviors of optical encryption approaches for bit units.

  5. BIT BY BIT: A Game Simulating Natural Language Processing in Computers

    ERIC Educational Resources Information Center

    Kato, Taichi; Arakawa, Chuichi

    2008-01-01

    BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…

  6. Bit by Bit: The Darwinian Basis of Life

    PubMed Central

    Joyce, Gerald F.

    2012-01-01

    All known examples of life belong to the same biology, but there is increasing enthusiasm among astronomers, astrobiologists, and synthetic biologists that other forms of life may soon be discovered or synthesized. This enthusiasm should be tempered by the fact that the probability for life to originate is not known. As a guiding principle in parsing potential examples of alternative life, one should ask: How many heritable “bits” of information are involved, and where did they come from? A genetic system that contains more bits than the number that were required to initiate its operation might reasonably be considered a new form of life. PMID:22589698

  7. An improved EZBC algorithm based on block bit length

    NASA Astrophysics Data System (ADS)

    Wang, Renlong; Ruan, Shuangchen; Liu, Chengxiang; Wang, Wenda; Zhang, Li

    2011-12-01

    The Embedded ZeroBlock Coding and context modeling (EZBC) algorithm has high compression performance. However, it consumes a large amount of memory because an amplitude quadtree of wavelet coefficients and two other linked lists are built during the encoding process. This is one of the big challenges for EZBC in real-time or hardware applications. An improved EZBC algorithm based on the bit length of coefficients is put forward in this article. It uses a Bit Length Quadtree to complete the coding process and output the context for the arithmetic coder. It achieves the same compression performance as EZBC while saving more than 75% of the memory required in the encoding process. As the Bit Length Quadtree can quickly locate wavelet coefficients and judge their significance, the improved algorithm can dramatically accelerate the encoding speed. These improvements are also beneficial for hardware implementation. PACS: 42.30.Va, 42.30.Wb
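The data structure at the core of the improvement can be sketched as follows. This is a minimal interpretation of a bit-length quadtree (the actual EZBC coding passes are not reproduced): each node stores the maximum bit length of the absolute coefficients in its quadrant, so a node with a small value lets the coder skip the whole block at higher bit planes.

```python
def bit_length_quadtree(block):
    """Recursively record the max bit length of |coefficient| in each quadrant."""
    n = len(block)
    max_len = max(abs(v).bit_length() for row in block for v in row)
    if n == 1:
        return {"len": max_len}
    h = n // 2
    quads = [[row[c:c + h] for row in block[r:r + h]]   # TL, TR, BL, BR
             for r in (0, h) for c in (0, h)]
    return {"len": max_len, "children": [bit_length_quadtree(q) for q in quads]}

coeffs = [[0, 3, -1, 0],
          [2, 17, 0, 1],
          [0, 0, 0, 0],
          [1, 0, -2, 0]]
tree = bit_length_quadtree(coeffs)
print(tree["len"], [c["len"] for c in tree["children"]])
```

Storing one small integer per node instead of an amplitude quadtree plus linked lists is where the claimed memory saving comes from.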

  8. Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly

    SciTech Connect

    Garcia-Gavito, D.; Azar, J.J.

    1994-09-01

    During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROP's) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.

  9. Nanostructures applied to bit-cell devices

    NASA Astrophysics Data System (ADS)

    Kołodziej, Andrzej; Łukasiak, Lidia; Kołodziej, Michał

    2013-07-01

    In this work, split-gate charge-trap FLASH memory with a storage layer containing 3D nano-crystals is proposed and compared with existing sub-90 nm solutions. We estimate electrical properties, cell operations and reliability issues. Analytical predictions show that for nano-crystals with diameters below 3 nm, metals could be the preferred material. The presented 3D layers were fabricated in a CMOS-compatible process. We also show what kinds of nano-crystal geometries and distributions can be achieved. The study shows that the proposed memory cells have very good program/erase/read characteristics, approaching those of SONOS cells, but better retention time than standard discrete charge storage cells. A dense nano-crystal structure should also allow 2 bits of information to be stored.

  10. Switching field distribution of exchange coupled ferri-/ferromagnetic composite bit patterned media

    NASA Astrophysics Data System (ADS)

    Oezelt, Harald; Kovacs, Alexander; Fischbacher, Johann; Matthes, Patrick; Kirk, Eugenie; Wohlhüter, Phillip; Heyderman, Laura Jane; Albrecht, Manfred; Schrefl, Thomas

    2016-09-01

    We investigate the switching field distribution and the resulting bit error rate of exchange coupled ferri-/ferromagnetic bilayer island arrays by micromagnetic simulations. Using islands with varying microstructure and anisotropic properties, the intrinsic switching field distribution is computed. The dipolar contribution to the switching field distribution is obtained separately by using a model of a triangular patterned island array resembling 1.4 Tb/in² bit patterned media. Both contributions are computed for different thicknesses of the soft exchange coupled ferrimagnet and also for ferromagnetic single-phase FePt islands. A bit patterned medium with a bilayer structure of FeGd(5 nm)/FePt(5 nm) shows a bit error rate of 10⁻⁴ with a write field of 1.16 T.

  11. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  12. Performance analyses of subcarrier BPSK modulation over M turbulence channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Ma, Shuang; Li, Ya-tian; Wu, Jia-bin; Geng, Tian-wen; Wu, Zhiyong

    2016-05-01

    An aggregated channel model is obtained by fitting the Weibull distribution, which includes the effects of atmospheric attenuation, M-distributed atmospheric turbulence and nonzero boresight pointing errors. With this approximate channel model, the bit error rate (BER) and the ergodic capacity of free-space optical (FSO) communication systems utilizing subcarrier binary phase-shift keying (BPSK) modulation are analyzed. A closed-form expression for the BER is derived by using the generalized Gauss-Laguerre quadrature rule, and the bounds of the ergodic capacity are discussed. Monte Carlo simulation is provided to confirm the validity of the BER expressions and the bounds of the ergodic capacity.

  13. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. 
Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled
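As a toy illustration of why tailoring a code to the dominant error type pays off, consider the classical core of the argument: a repetition code with majority voting suppresses the error it is aimed at quadratically, so under biased noise it is best aimed at the common (dephasing) errors. The Monte Carlo sketch below is our own illustration, not the thesis's actual codes or parameters.

```python
import random

def logical_error_rate(p, n=3, trials=200_000, seed=1):
    """Monte Carlo logical error rate of an n-bit repetition code with
    iid flip probability p and majority-vote decoding."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n))
        if flips > n // 2:          # majority corrupted -> logical error
            fails += 1
    return fails / trials

# Under biased noise (dephasing rate pz >> bit-flip rate px), aiming the
# repetition code at Z errors leaves roughly 3*pz**2 + px, versus pz + px
# unencoded -- a net win whenever pz dominates.
pz, px = 0.05, 0.001
protected = logical_error_rate(pz) + px     # Z suppressed, X left unprotected
unencoded = pz + px
```

For n = 3 the Monte Carlo estimate should track the analytic value 3p²(1−p) + p³.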

  14. Field trial for the mixed bit rate at 100G and beyond

    NASA Astrophysics Data System (ADS)

    Yu, Jianjun; Jia, Zhensheng; Dong, Ze; Chien, Hung-Chang

    2013-01-01

    Successful joint experiments with Deutsche Telekom (DT) on long-haul transmission at 100G and beyond are demonstrated over standard single mode fiber (SSMF) and inline EDFA-only amplification. The transmission link consists of 8 nodes and 950 km of installed SSMF in DT's optical infrastructure, with the addition of lab SSMF for extended optical reach. The first field transmission of 8×216.4-Gb/s Nyquist-WDM signals is reported over a 1750-km distance with 21.6-dB average loss per span. Each channel, modulated by a 54.2-Gbaud PDM-CSRZ-QPSK signal, is on a 50-GHz grid, achieving a net spectral efficiency (SE) of 4 bit/s/Hz. We also demonstrate mixed data-rate transmission with coexisting 1T, 400G, and 100G channels. The 400G channel uses four independent subcarriers modulated by 28-Gbaud PDM-QPSK signals, yielding a net SE of 4 bit/s/Hz, while 13 optically generated subcarriers from a single optical source are employed in the 1T channel with 25-Gbaud PDM-QPSK modulation. The 100G signal uses a real-time coherent PDM-QPSK transponder with 15% overhead of soft-decision forward-error correction (SD-FEC). A digital post filter and 1-bit maximum likelihood sequence estimation (MLSE) are introduced in the receiver DSP to suppress noise, linear crosstalk and filtering effects. Our results show that future 400G and 1T channels utilizing the Nyquist WDM technique can be transmitted over long-haul distances with higher SE using the same QPSK format.

  15. Error-thresholds for qudit-based topological quantum memories

    NASA Astrophysics Data System (ADS)

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.

    2014-03-01

    Extending the quantum computing paradigm from qubits to higher-dimensional quantum systems allows for increased channel capacity and a more efficient implementation of quantum gates. However, to perform reliable computations an efficient error-correction scheme adapted for these multi-level quantum systems is needed. A promising approach is via topological quantum error correction, where stability to external noise is achieved by encoding quantum information in non-local degrees of freedom. A key figure of merit is the error threshold which quantifies the fraction of physical qudits that can be damaged before logical information is lost. Here we analyze the resilience of generalized topological memories built from d-level quantum systems (qudits) to bit-flip errors. The error threshold is determined by mapping the quantum setup to a classical Potts-like model with bond disorder, which is then investigated numerically using large-scale Monte Carlo simulations. Our results show that topological error correction with qutrits exhibits an improved error threshold in comparison to qubit-based systems.

  16. Stability of single skyrmionic bits

    NASA Astrophysics Data System (ADS)

    Vedmedenko, Olena; Hagemeister, Julian; Romming, Niklas; von Bergmann, Kirsten; Wiesendanger, Roland

    The switching between topologically distinct skyrmionic and ferromagnetic states has been proposed as a bit operation for information storage. While long lifetimes of the bits are required for data storage devices, the lifetimes of skyrmions have not been addressed so far. Here we show by means of atomistic Monte Carlo simulations that the field-dependent mean lifetimes of the skyrmionic and ferromagnetic states have a high asymmetry with respect to the critical magnetic field, at which these lifetimes are identical. According to our calculations, the main reason for the enhanced stability of skyrmions is a different field dependence of skyrmionic and ferromagnetic activation energies and a lower attempt frequency of skyrmions rather than the height of energy barriers. We use this knowledge to propose a procedure for the determination of effective material parameters and the quantification of the Monte Carlo timescale from the comparison of theoretical and experimental data. Financial support from the DFG in the framework of the SFB668 is acknowledged.
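The role of the attempt frequency highlighted in this record can be seen directly from the Arrhenius law τ = ν₀⁻¹ exp(ΔE/kT): two states with identical barriers still have very different lifetimes if their attempt frequencies differ. The parameters below are purely illustrative, not fitted values from the paper.

```python
import math

def lifetime(delta_e, attempt_freq, kt):
    """Arrhenius mean lifetime: tau = nu0**-1 * exp(dE / kT)."""
    return math.exp(delta_e / kt) / attempt_freq

# Illustrative (not fitted) numbers: equal activation energies, but a lower
# attempt frequency for the skyrmion state lengthens its lifetime.
kt = 1.0
tau_sk = lifetime(10.0, attempt_freq=1e9, kt=kt)    # skyrmion-like state
tau_fm = lifetime(10.0, attempt_freq=1e12, kt=kt)   # ferromagnet-like state
```

With these numbers the lifetime ratio is set entirely by the attempt frequencies (a factor of 1000), with no barrier asymmetry at all.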

  17. Error detection and correction unit with built-in self-test capability for spacecraft applications

    NASA Technical Reports Server (NTRS)

    Timoc, Constantin

    1990-01-01

    The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
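The SECDED construction behind such an EDAC unit can be illustrated at byte width (the flight unit operates on 32-bit words; the 8-bit Hamming(12,8)-plus-overall-parity sketch below shows the same mechanism, with our own helper names):

```python
PARITY = (1, 2, 4, 8)                    # parity-bit positions (1-indexed)
DATA = (3, 5, 6, 7, 9, 10, 11, 12)       # data-bit positions

def encode(byte):
    """Hamming(12,8) plus overall parity -> 13-bit SECDED codeword."""
    bits = [0] * 13                      # bits[1..12]; bits[0] unused
    for i, pos in enumerate(DATA):
        bits[pos] = (byte >> i) & 1
    for p in PARITY:                     # each parity bit covers positions
        for i in range(1, 13):           # whose index has bit p set
            if i != p and (i & p):
                bits[p] ^= bits[i]
    overall = 0
    for i in range(1, 13):
        overall ^= bits[i]
    return bits[1:] + [overall]          # 13 bits

def decode(cw):
    """Correct any single-bit error, detect any double-bit error."""
    bits = [0] + list(cw[:12])
    syndrome = 0
    for i in range(1, 13):               # syndrome = XOR of set positions;
        if bits[i]:                      # equals the flipped position, if any
            syndrome ^= i
    parity = cw[12]
    for i in range(1, 13):
        parity ^= bits[i]                # 0 for an even number of flips
    if syndrome == 0 and parity == 0:
        status = "ok"
    elif parity == 1:                    # odd flip count: single, correctable
        status = "corrected"
        if syndrome:                     # syndrome 0 -> the overall bit flipped
            bits[syndrome] ^= 1
    else:                                # even flips, nonzero syndrome
        status = "double"
    byte = 0
    for i, pos in enumerate(DATA):
        byte |= bits[pos] << i
    return byte, status
```

Every single-bit flip (including the overall parity bit) is corrected, and every double flip is flagged rather than miscorrected.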

  18. Quantum error correction via robust probe modes

    SciTech Connect

    Yamaguchi, Fumiko; Nemoto, Kae; Munro, William J.

    2006-06-15

    We propose a scheme for quantum error correction using robust continuous variable probe modes, rather than fragile ancilla qubits, to detect errors without destroying data qubits. The use of such probe modes reduces the required number of expensive qubits in error correction and allows efficient encoding, error detection, and error correction. Moreover, the elimination of the need for direct qubit interactions significantly simplifies the construction of quantum circuits. We will illustrate how the approach implements three existing quantum error correcting codes: the three-qubit bit-flip (phase-flip) code, the Shor code, and an erasure code.

  19. Holographic disk data storage at a high areal density of 33.7 bits/μm²

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Tao, Shiquan; Wang, Dayong; Yuan, Wei; Liu, Guoqing; Ding, Xiaohong; Jiang, Zhuqing; Liu, Changjiang

    2003-09-01

    Ten thousand data pages, each containing 768×768 pixels, have been stored in a single section of a disk-shaped, iron-doped LiNbO3 crystal using spatioangular multiplexing with a convergent spherical reference beam, leading to an areal density of 33.7 bits/μm² and a volumetric density of 6.7 Gbits/cm³. The system design considerations for the achievement of these goals ensured the success of the experiment. Custom-designed Fourier transform and imaging optics with short focal length provide a tightly confined object beam at the crystal and good image quality at the detector array. An optimized reflection configuration prevents detrimental scattering from the crystal surface from entering the detector array. The images were reconstructed with good fidelity. The signal-to-noise ratio (SNR) was measured to be 3.6 for the worst case in the sampled retrieved images, from which a raw bit error rate of 1.6×10⁻⁴ before error correction could be estimated.

  20. Masking of errors in transmission of VAPC-coded speech

    NASA Technical Reports Server (NTRS)

    Cox, Neil B.; Froese, Edwin L.

    1990-01-01

    A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.
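The cross-frame hold-over strategy amounts to a one-frame memory in the decoder: parameters from the last good frame replace those of a corrupted one. A minimal sketch, in which the frame representation and field names are our own assumptions rather than the VAPC bit-stream layout:

```python
def mask_frames(frames):
    """Cross-frame hold-over: when a frame is flagged as corrupted, reuse the
    pitch and gain of the last good frame (per the record, this masking helps
    pitch/gain but not the short-term spectral information).
    Each frame is a dict: {'pitch': ..., 'gain': ..., 'bad': bool}."""
    held = None
    out = []
    for f in frames:
        if f['bad']:
            if held is not None:         # substitute last good pitch/gain
                f = dict(f, pitch=held['pitch'], gain=held['gain'])
        else:
            held = f                     # remember the last good frame
        out.append(f)
    return out
```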

  1. FastBit Reference Manual

    SciTech Connect

    Wu, Kesheng

    2007-08-02

    An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. The most commonly used indexes are variants of B-trees, such as the B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations at the cost of the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme it uses, called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with well-known compression methods such as LZ77 and the Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than other compression schemes. Theoretical analyses showed that WAH-compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes, such as the B+-tree and B*-tree, share this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
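The WAH idea can be sketched compactly: each 32-bit word holds either 31 literal bitmap bits, or a run count of identical all-zero/all-one 31-bit groups. This is a simplified illustration of the scheme, not FastBit's implementation (in particular, real WAH caps fill counts at 2³⁰−1, which this sketch ignores):

```python
def wah_compress(bits):
    """Simplified Word-Aligned Hybrid encoding into 32-bit words.
    Literal word: top bit 0, 31 raw bits.  Fill word: top bit 1, next bit is
    the fill value, low 30 bits count identical 31-bit groups."""
    words = []
    for i in range(0, len(bits), 31):
        group = bits[i:i + 31]
        group += [0] * (31 - len(group))          # pad the tail group
        value = 0
        for b in group:
            value = (value << 1) | b
        if value in (0, (1 << 31) - 1):           # all-zero or all-one group
            fill = 1 if value else 0
            if words and words[-1] >> 31 and ((words[-1] >> 30) & 1) == fill:
                words[-1] += 1                    # extend the previous fill run
            else:
                words.append((1 << 31) | (fill << 30) | 1)
        else:
            words.append(value)                   # literal word
    return words

def wah_decompress(words, nbits):
    bits = []
    for w in words:
        if w >> 31:                               # fill word
            fill = (w >> 30) & 1
            count = w & ((1 << 30) - 1)
            bits.extend([fill] * (31 * count))
        else:                                     # literal word, MSB first
            bits.extend((w >> (30 - j)) & 1 for j in range(31))
    return bits[:nbits]
```

Because fills and literals are whole machine words, bitwise AND/OR of two compressed bitmaps can proceed word-at-a-time, which is the source of WAH's query speed.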

  2. Stinger Enhanced Drill Bits For EGS

    SciTech Connect

    Durrand, Christopher J.; Skeem, Marcus R.; Crockett, Ron B.; Hall, David R.

    2013-04-29

    The project objectives were to design, engineer, test, and commercialize a drill bit suitable for drilling in hard rock and high temperature environments (10,000 meters) likely to be encountered in drilling enhanced geothermal wells. The goal is to provide a drill bit that can aid in increasing the penetration rate by three times over conventional drilling. Novatek has sought to leverage its polycrystalline diamond technology and a new conical cutter shape, known as the Stinger®, for this purpose. Novatek has developed a fixed-blade bit, known as the JackBit®, populated with both shear cutters and Stingers, that is currently being tested by major drilling companies for geothermal and oil and gas applications. The JackBit concept comprises a fixed-blade bit with a center indenter, referred to as the Jack. The JackBit has been extensively tested in the lab and in the field, and has been transferred to a major bit manufacturer and oil service company. Except for the attached published reports, all other information is confidential.

  3. A localized orbital analysis of the thermochemical errors in hybrid density functional theory: achieving chemical accuracy via a simple empirical correction scheme.

    PubMed

    Friesner, Richard A; Knoll, Eric H; Cao, Yixiang

    2006-09-28

    This paper describes an empirical localized orbital correction model which improves the accuracy of density functional theory (DFT) methods for the prediction of thermochemical properties for molecules of first and second row elements. The B3LYP localized orbital correction version of the model improves B3LYP DFT atomization energy calculations on the G3 data set of 222 molecules from a mean absolute deviation (MAD) from experiment of 4.8 to 0.8 kcal/mol. The almost complete elimination of large outliers and the substantial reduction in MAD yield overall results comparable to the G3 wave-function-based method; furthermore, the new model has zero additional computational cost beyond standard DFT calculations. The following four classes of correction parameters are applied to a molecule based on standard valence bond assignments: corrections to atoms, corrections to individual bonds, corrections for neighboring bonds of a given bond, and radical environmental corrections. Although the model is heuristic and is based on a 22 parameter multiple linear regression to experimental errors, each of the parameters is justified on physical grounds, and each provides insight into the fundamental limitations of DFT, most importantly the failure of current DFT methods to accurately account for nondynamical electron correlation.
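At bottom, the correction scheme is a multiple linear regression of DFT errors on counts of valence-bond features. The toy sketch below uses entirely synthetic data (fabricated for illustration; the paper fits 22 physically motivated parameters to the G3 set) just to show the shape of the fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each "molecule" is a vector of feature counts (atom types,
# bond types, ...), and its DFT atomization-energy error is modeled as a
# linear function of those counts plus noise.  All numbers are synthetic.
n_mol, n_feat = 200, 6
X = rng.integers(0, 5, size=(n_mol, n_feat)).astype(float)   # feature counts
true_params = rng.normal(0.0, 0.5, size=n_feat)              # kcal/mol each
errors = X @ true_params + rng.normal(0.0, 0.1, size=n_mol)  # "DFT errors"

# Fit correction parameters by least squares, then apply the correction.
params, *_ = np.linalg.lstsq(X, errors, rcond=None)
residual = errors - X @ params

mad_before = np.mean(np.abs(errors))     # MAD of uncorrected errors
mad_after = np.mean(np.abs(residual))    # MAD after the linear correction
```

As in the paper, the correction costs nothing beyond the underlying calculation: it is a dot product of fitted parameters with feature counts.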

  4. Performance analysis of relay-aided free-space optical communication system over gamma-gamma fading channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Fu, Hui-hua; Wang, Ping; Wang, Ran-ran; Liu, Xiao-xia; Guo, Li-xin; Yang, Yin-tang

    2016-07-01

    The average bit error rate (ABER) performance of a decode-and-forward (DF) based relay-assisted free-space optical (FSO) communication system over gamma-gamma distributed channels in the presence of pointing errors is studied. With the help of Meijer's G-function, the probability density function (PDF) and cumulative distribution function (CDF) of the aggregated channel model are derived on the basis of the best-path selection scheme. An analytical ABER expression is obtained and the system performance is then investigated under the influence of pointing errors, turbulence strengths and structure parameters. Monte Carlo (MC) simulation is also provided to confirm the analytical ABER expression.

  5. Steganography forensics method for detecting least significant bit replacement attack

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao

    2015-01-01

    We present an image forensics method to detect least significant bit replacement steganography attack. The proposed method provides fine-grained forensics features by using the hierarchical structure that combines pixels correlation and bit-planes correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit-plane and each one of the others. Generated forensics features provide the susceptibility (changeability) that will be drastically altered when the cover image is embedded with data to form a stego image. We developed a statistical model based on the forensics features and used least square support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust for content-preserving manipulations, such as JPEG compression, adding noise, filtering, etc. (4) The proposed method provides satisfactory generalization capability.
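LSB replacement itself, and the kind of inter-bit-plane statistic it disturbs, can be sketched briefly. The agreement feature below is a simplified stand-in for the paper's hierarchical difference-matrix features, and the cover is synthetic; everything here is illustrative, not the authors' pipeline.

```python
import numpy as np

def embed_lsb(pixels, bits):
    """LSB replacement: overwrite the least significant bit of each pixel."""
    out = pixels.copy()
    flat = out.ravel()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return out

def extract_lsb(pixels, n):
    return pixels.ravel()[:n] & 1

def lsb_agreement(pixels):
    """Fraction of pixels whose LSB agrees with bit-plane 1: one simple
    inter-plane statistic of the kind LSB replacement perturbs."""
    return float(np.mean((pixels & 1) == ((pixels >> 1) & 1)))

rng = np.random.default_rng(0)
cover = (np.arange(1000) * 4 % 256).astype(np.uint8)   # synthetic cover image
message = rng.integers(0, 2, size=1000, dtype=np.uint8)
stego = embed_lsb(cover, message)
```

On this cover the two lowest planes agree everywhere; after embedding a random message the agreement collapses toward 1/2, which is exactly the sort of susceptibility a classifier can pick up.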

  6. Photon-number-resolving detector with 10 bits of resolution

    SciTech Connect

    Jiang, Leaf A.; Dauler, Eric A.; Chang, Joshua T

    2007-06-15

    A photon-number-resolving detector with single-photon resolution is described and demonstrated. It has 10 bits of resolution, does not require cryogenic cooling, and is sensitive to near-IR wavelengths. This performance is achieved by flood-illuminating a 32×32 element InₓGa₁₋ₓAsP Geiger-mode avalanche photodiode array that has an integrated counter and digital readout circuit behind each pixel.

  7. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  8. A hyperspectral images compression algorithm based on 3D bit plane transform

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue

    2010-10-01

    According to analyses of hyper-spectral images, a new compression algorithm based on 3-D bit plane transform is proposed; the spectral correlation of such images is higher than the spatial correlation. The algorithm is designed to overcome the shortcoming of the 1-D bit plane transform, which can only reduce the correlation when neighboring pixels have similar values. The algorithm calculates the horizontal, vertical and spectral bit plane transforms sequentially. As for the spectral bit plane transform, it can be easily realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit are independent, the algorithm can be realized with a parallel computing model, which improves the calculation efficiency and greatly reduces the processing time. The experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyper-spectral image compression system by efficiently reducing the cost of computation and memory usage.

  9. Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC

    NASA Astrophysics Data System (ADS)

    Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo

    We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of spatial parameters. Experiments show that it achieves a significant lossless bit reduction of 9.93% to 12.14% for the spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared to the original scheme. The proposed scheme, which is not currently included in USAC, can be used to improve the coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.

  10. Tb/s physical random bit generation with bandwidth-enhanced chaos in three-cascaded semiconductor lasers.

    PubMed

    Sakuraba, Ryohsuke; Iwakawa, Kento; Kanno, Kazutaka; Uchida, Atsushi

    2015-01-26

    We experimentally demonstrate fast physical random bit generation from bandwidth-enhanced chaos by using three cascaded semiconductor lasers. The bandwidth-enhanced chaos is obtained with a standard bandwidth of 35.2 GHz, an effective bandwidth of 26.0 GHz and a flatness of 5.6 dB, and its waveform is used for random bit generation. Two schemes, single-bit and multi-bit extraction, are carried out to evaluate the entropy rate and the maximum random bit generation rate. For single-bit generation, a generation rate of 20 Gb/s is obtained for physical random bit sequences. For multi-bit generation, a maximum generation rate of 1.2 Tb/s (= 100 GS/s × 6 bits × 2 data) is equivalently achieved for physical random bit sequences whose randomness is verified by using both NIST Special Publication 800-22 and TestU01.
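The multi-bit extraction step (digitize each chaotic sample, keep only the least significant bits) is easy to sketch. The logistic map below stands in for the laser chaos purely for illustration; in the experiment the entropy source is the bandwidth-enhanced optical waveform, not any deterministic map.

```python
def chaos_bits(n_samples, keep=6, x=0.123456789):
    """Multi-bit extraction sketch: quantize each chaotic sample to 8 bits
    and retain only the `keep` least significant bits, mirroring the
    6-bits-per-sample scheme in the record.  A logistic map is used as a
    stand-in signal source (an assumption made for this sketch)."""
    bits = []
    for _ in range(n_samples):
        x = 4.0 * x * (1.0 - x)            # logistic map, chaotic regime
        sample = min(int(x * 256), 255)    # idealized 8-bit "ADC"
        bits.extend((sample >> k) & 1 for k in range(keep))
    return bits
```

Keeping only low-order bits discards the slowly varying envelope (which is predictable) and retains the fast, noise-amplified fluctuations; real generators then validate the output with batteries such as NIST SP 800-22.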

  11. REVERSIBLE N-BIT TO N-BIT INTEGER HAAR-LIKE TRANSFORMS

    SciTech Connect

    Duchaineau, M; Joy, K I; Senecal, J

    2004-02-14

    We introduce TLHaar, an n-bit to n-bit reversible transform similar to the Haar Integer Wavelet Transform (IWT). TLHaar uses lookup tables that approximate the Haar IWT but reorder the coefficients so they fit into n bits. TLHaar is suited for lossless compression in fixed-width channels, such as digital video channels and graphics hardware frame buffers.
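For reference, the plain reversible integer Haar step (the S-transform) that TLHaar approximates looks like this; note that the difference channel needs n+1 bits, which is precisely the overflow that TLHaar's table-based reordering avoids. A minimal sketch:

```python
def haar_forward(a, b):
    """Reversible integer Haar (S-transform): truncated mean and difference.
    For n-bit inputs, s fits in n bits but d needs n+1 bits -- the overflow
    that n-bit to n-bit transforms like TLHaar are designed to eliminate."""
    s = (a + b) >> 1
    d = a - b
    return s, d

def haar_inverse(s, d):
    """Exact inverse: the truncated bit of the mean is recovered from d."""
    a = s + ((d + 1) >> 1)
    b = a - d
    return a, b
```

Reversibility holds for every integer pair, which is what makes the transform usable for lossless coding.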

  12. Hey! A Brown Recluse Spider Bit Me!

    MedlinePlus

    ... putting them on. Reviewed by: Elana Pearl Ben-Joseph, MD. Date reviewed: April 2013.

  13. Drill bit with suction jet means

    SciTech Connect

    Castel, Y.; Cholet, H.

    1980-12-16

    This drill bit comprises a plurality of rollers provided with cutting teeth or inserts. At least one upwardly directed eduction jet is created and the bit comprises at least one nozzle located between two adjacent rollers and creating at least two fluid jets respectively directed towards these two adjacent rollers.

  14. MWD tools open window at bit

    SciTech Connect

    Not Available

    1993-05-24

    A new measurement-while-drilling (MWD) system takes resistivity and directional measurements directly at the bit, allowing drillers and geologists to 'see' the true direction and inclination of the bit with respect to the formation drilled. With real-time resistivity measurements at the bit (RAB), the formation is logged before fluid invasion occurs and the driller can steer directional wells more accurately than with conventional MWD tools. The MWD tools comprise an instrumented steerable motor and an instrumented near-bit stabilizer for rotary drilling. The tools have sensors for resistivity, gamma ray, and inclination located in a sub just behind the bit. The integrated steerable system was successfully tested in the Barbara 79 D well offshore Italy and in the Cortemaggiore 134 D well in northern Italy in November, 1992. This paper describes the system and its advantages over conventional MWD tools.

  15. An Improved N-Bit to N-Bit Reversible Haar-Like Transform

    SciTech Connect

    Senecal, J G; Lindstrom, P; Duchaineau, M A; Joy, K I

    2004-07-26

    We introduce the Piecewise-Linear Haar (PLHaar) transform, a reversible n-bit to n-bit transform that is based on the Haar wavelet transform. PLHaar is continuous, while all current n-bit to n-bit methods are not, and is therefore uniquely usable with both lossy and lossless methods (e.g. image compression). PLHaar has both integer and continuous (i.e. non-discrete) forms. By keeping the coefficients to n bits PLHaar is particularly suited for use in hardware environments where channel width is limited, such as digital video channels and graphics hardware.

  16. High performance 14-bit pipelined redundant signed digit ADC

    NASA Astrophysics Data System (ADS)

    Narula, Swina; Pandey, Sujata

    2016-03-01

    A novel architecture for a pipelined redundant-signed-digit analog to digital converter (RSD-ADC) is presented, featuring a high signal to noise ratio (SNR), spurious-free dynamic range (SFDR) and signal to noise plus distortion ratio (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC at 1.5 bits/stage. The prototype accounts for capacitor mismatch, comparator offset and finite op-amp gain error in the MDAC (residue amplification) stages. With the proposed architecture, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB and the SFDR is 102.8 dB at a sample rate of 100 MHz. The digital correction logic is transparent to the overall system, which is demonstrated using the 14-bit pipelined ADC. After a latency of 14 clocks, a digital output is available at every clock pulse. VHDL and MATLAB programs are used to describe the circuit behavior of the ADC. The proposed architecture is also capable of reducing the digital hardware, and hence the silicon area and complexity of the design.
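The 1.5-bit/stage pipeline and its digital recombination can be modeled behaviorally. The sketch below is an idealized illustration (no capacitor mismatch, offset, or finite gain, and our own function names); it shows why the ±Vref/4 decision thresholds leave margin for comparator error.

```python
def rsd_stage(v, vref=1.0):
    """One 1.5-bit pipeline stage: choose a redundant signed digit in
    {-1, 0, 1} and amplify the residue by 2.  The +-vref/4 thresholds keep
    the residue within +-vref, leaving +-vref/4 of margin so comparator
    offsets are absorbed by the redundancy (hence 'digitally correctable')."""
    if v > vref / 4:
        d = 1
    elif v < -vref / 4:
        d = -1
    else:
        d = 0
    return d, 2 * v - d * vref

def rsd_adc(v, stages=14, vref=1.0):
    """Idealized 14-stage conversion followed by digital recombination."""
    digits = []
    for _ in range(stages):
        d, v = rsd_stage(v, vref)
        digits.append(d)
    # The digital correction logic reduces to this weighted recombination
    # of the redundant digits (overlap-and-add in binary hardware).
    return vref * sum(d / 2 ** (i + 1) for i, d in enumerate(digits))
```

With ideal stages the reconstruction error is bounded by vref/2¹⁴, i.e., the residue left in the final stage.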

  17. Changes realized from extended bit-depth and metal artifact reduction in CT

    SciTech Connect

    Glide-Hurst, C.; Chen, D.; Zhong, H.; Chetty, I. J.

    2013-06-15

    Purpose: High-Z material in computed tomography (CT) yields metal artifacts that degrade image quality and may cause substantial errors in dose calculation. This study couples a metal artifact reduction (MAR) algorithm with enhanced 16-bit depth (vs standard 12-bit) to quantify potential gains in image quality and dosimetry. Methods: Extended CT to electron density (CT-ED) curves were derived from a tissue characterization phantom with titanium and stainless steel inserts scanned at 90-140 kVp for 12- and 16-bit reconstructions. MAR was applied to sinogram data (Brilliance BigBore CT scanner, Philips Healthcare, v.3.5). Monte Carlo simulation (MC-SIM) was performed on a simulated double hip prostheses case (Cerrobend rods embedded in a pelvic phantom) using BEAMnrc/Dosxyz (4 000 000 000 histories, 6X, 10 × 10 cm² beam traversing Cerrobend rod). A phantom study was also conducted using a stainless steel rod embedded in solid water, and dosimetric verification was performed with Gafchromic film analysis (absolute difference and gamma analysis, 2% dose and 2 mm distance to agreement) for plans calculated with the Anisotropic Analytic Algorithm (AAA, Eclipse v11.0) to elucidate changes between 12- and 16-bit data. Three patients (bony metastases to the femur and humerus, and a prostate cancer case) with metal implants were reconstructed using both bit depths, with dose calculated using AAA and derived CT-ED curves. Planar dose distributions were assessed via matrix analyses and using gamma criteria of 2%/2 mm. Results: For 12-bit images, CT numbers for titanium and stainless steel saturated at 3071 Hounsfield units (HU), whereas for 16-bit depth, mean CT numbers were much larger (e.g., titanium and stainless steel yielded HU of 8066.5 ± 56.6 and 13 588.5 ± 198.8 for 16-bit uncorrected scans at 120 kVp, respectively). MC-SIM was well-matched between 12- and 16-bit images except downstream of the Cerrobend rod, where 16-bit dose was ~6

  18. Room temperature single-photon detectors for high bit rate quantum key distribution

    SciTech Connect

    Comandar, L. C.; Patel, K. A.; Fröhlich, B.; Lucamarini, M.; Sharpe, A. W.; Dynes, J. F.; Yuan, Z. L.; Shields, A. J.; Penty, R. V.

    2014-01-13

    We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.

  19. Fast nondeterministic random-bit generation using on-chip chaos lasers

    SciTech Connect

    Harayama, Takahisa; Sunada, Satoshi; Yoshimura, Kazuyuki; Davis, Peter; Tsuzuki, Ken; Uchida, Atsushi

    2011-03-15

    It is shown that broadband chaos suitable for fast nondeterministic random-bit generation in small devices can be achieved in a semiconductor laser with a short external cavity. The design of the device is based on a theoretical model for nondeterministic random-bit generation by amplification of microscopic noise. Moreover, it is demonstrated that bit sequences passing common tests of statistical randomness at rates up to 2.08 Gbits/s can be generated using on-chip lasers with a monolithically integrated external cavity, amplifiers, and a photodetector.

  20. Cheat sensitive quantum bit commitment via pre- and post-selected quantum states

    NASA Astrophysics Data System (ADS)

    Li, Yan-Bing; Wen, Qiao-Yan; Li, Zi-Chen; Qin, Su-Juan; Yang, Ya-Tao

    2014-01-01

    Cheat-sensitive quantum bit commitment is one of the most important and realizable quantum bit commitment (QBC) protocols. By taking advantage of quantum mechanics, it can achieve higher security than classical bit commitment. In this paper, we propose a QBC scheme based on pre- and post-selected quantum states. The analysis indicates that both participants' cheating strategies will be detected with non-zero probability, and the protocol can be implemented with today's technology, as a long-term quantum memory is not needed.

  1. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  2. Bit-string scattering theory

    SciTech Connect

    Noyes, H.P.

    1990-01-29

    We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc² in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free particle wave functions taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc² our wave functions can be approximated by solutions of the free particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound and resonant state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G²πN)² = (2m_N/m_π)² − 1. 21 refs., 1 fig.

  3. Finger Vein Recognition Based on a Personalized Best Bit Map

    PubMed Central

    Yang, Gongping; Xi, Xiaoming; Yin, Yilong

    2012-01-01

    Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method builds on a local binary pattern based method and uses only the best bits for matching. We first present the concept of the PBBM and its generating algorithm. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PBBM achieves not only better performance, but also high robustness and reliability. In addition, PBBM can be used as a general framework for binary pattern based recognition. PMID:22438735

  4. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.
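    The decoding distinction drawn above can be made concrete on a toy example. The sketch below (an illustration of the principle, not the article's spin-glass codes; the code choice and function names are our own) compares zero-temperature decoding, which returns the single most likely codeword, with bitwise MAP decoding, which decides each bit from its marginal posterior, for a small single-parity-check code over a binary symmetric channel:

```python
def posterior(codewords, received, p):
    # P(c | r) over a binary symmetric channel with crossover probability p
    n = len(received)
    w = []
    for c in codewords:
        d = sum(ci != ri for ci, ri in zip(c, received))  # Hamming distance
        w.append((p ** d) * ((1 - p) ** (n - d)))
    z = sum(w)
    return [x / z for x in w]

def codeword_map(codewords, received, p):
    # "temperature zero": return the single most likely codeword
    probs = posterior(codewords, received, p)
    return max(zip(probs, codewords))[1]

def bitwise_map(codewords, received, p):
    # "thermal average": decide each bit from its marginal posterior
    probs = posterior(codewords, received, p)
    return [1 if sum(pr for pr, c in zip(probs, codewords) if c[i] == 1) > 0.5
            else 0
            for i in range(len(received))]
```

    For the even-weight code {000, 011, 101, 110} and received word 111, the bitwise decision is 111, which is not a codeword: minimizing the bit error rate can step outside the codebook, which is precisely the difference from minimizing the codeword error rate.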

  5. Dynamics of a semiconductor laser with polarization-rotated feedback and its utilization for random bit generation.

    PubMed

    Oliver, Neus; Soriano, Miguel C; Sukow, David W; Fischer, Ingo

    2011-12-01

    Chaotic semiconductor lasers have been proven attractive for fast random bit generation. To follow this strategy, simple robust systems and a systematic approach determining the required dynamical properties and most suitable conditions for this application are needed. We show that the dynamics of a single-mode laser with polarization-rotated feedback are optimal for random bit generation when characterized simultaneously by a broad power spectrum and low autocorrelation. We observe that successful random bit generation is also sensitive to digitization and postprocessing procedures. Applying the identified criteria, we achieve fast random bit generation rates (up to 4 Gbit/s) with minimal postprocessing.

  6. 10-25 GHz frequency reconfigurable MEMS 5-bit phase shifter using push-pull actuator based toggle mechanism

    NASA Astrophysics Data System (ADS)

    Dey, Sukomal; Koul, Shiban K.

    2015-06-01

    This paper presents a frequency tunable 5-bit true-time-delay digital phase shifter using radio frequency microelectromechanical system (RF MEMS) technology. The phase shifter is based on the distributed MEMS transmission line (DMTL) concept utilizing a MEMS varactor. The main source of frequency tuning in this work is a bridge actuation mechanism followed by capacitance variation. Two stages of actuation mechanisms (push and pull) are used to achieve a 2:1 tuning ratio. Accurate control of the actuation voltage between the pull and push stages produces a differential phase shift over the band of interest. The functional behavior of the push-pull actuation over the phase shifter application is theoretically established, experimentally investigated and validated with simulation. The phase shifter is fabricated monolithically using a gold based surface micromachining process on an alumina substrate. The individual primary phase-bits (11.25°/22.5°/45°/90°/180°) that are the fundamental building blocks of the complete 5-bit phase shifter are designed, fabricated and experimentally characterized from 10-25 GHz for specific applications. Finally, the complete 5-bit phase shifter demonstrates an average phase error of 4.32°, 2.8°, 1° and 1.58°, an average insertion loss of 3.76, 4.1, 4.2 and 4.84 dB and an average return loss of 11.7, 12, 14 and 11.8 dB at 10, 12, 17.2 and 25 GHz, respectively. To the best of the authors' knowledge, this is the first reported band-tunable stand-alone 5-bit phase shifter in the literature which can work over a large spectrum for different applications. The total area of the 5-bit phase shifter is 15.6 mm2. Furthermore, the cold-switched reliability of the unit cell and the complete 5-bit MEMS phase shifter are extensively investigated and presented.

  7. A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip

    NASA Technical Reports Server (NTRS)

    Timoc, C.; Tran, T.; Wongso, J.

    1992-01-01

    This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.

  8. 1064 nm, 565 Mbit/s PSK transmission experiment with homodyne receiver using synchronisation bits

    NASA Astrophysics Data System (ADS)

    Wandernoth, B.

    1991-09-01

    An optical 565 Mbit/s transmission system at 1064 nm with phase shift keying and homodyne detection using a new carrier recovery technique is presented. The phase error signal in the receiver is obtained by means of synchronization bits. This method combines the advantages of the Costas loop with the simplicity of the pilot carrier technique.

  9. Loss factors associated with spatial and temporal tracking errors in intersatellite PPM communication links

    NASA Technical Reports Server (NTRS)

    Chen, C. C.; Gardner, C. S.

    1986-01-01

    The performance of an optical PPM intersatellite link in the presence of spatial and temporal tracking errors is investigated. It is shown that for a given rms spatial tracking error, an optimal transmitter beamwidth exists which minimizes the probability of bit error. The power penalty associated with the spatial tracking error when the transmitter beamwidth is adjusted to achieve optimal performance is shown to be large (greater than 9 dB) when the rms pointing jitter becomes a significant fraction (greater than 30 percent) of the diffraction limited beamwidth. The power penalty due to temporal tracking error, on the other hand, is relatively small (less than 0.1 dB) when the tracking loop bandwidth is less than 0.1 percent of the slot frequency. By properly allocating losses to spatial and temporal tracking errors, it is seen that a 10^-9 error rate can be achieved for a realistic link design with an approximately 3 dB signal power margin.

  10. PDC bits find applications in Oklahoma drilling

    SciTech Connect

    Offenbacher, L.A.; McDermaid, J.D.; Patterson, C.R.

    1983-02-01

    Drilling in Oklahoma is difficult by any standards. Polycrystalline diamond cutter (PDC) bits, with proven success drilling soft, homogeneous formations common in the North Sea and U.S. Gulf Coast regions, have found some significant "spot" applications in Oklahoma. Applications qualified by bit design and application development over the past two (2) years include slim hole drilling in the deep Anadarko Basin, deviation control in Southern Oklahoma, drilling on mud motors, drilling in oil base mud, drilling cement, sidetracking, coring and some rotary drilling in larger hole sizes. PDC bits are formation sensitive, and care must be taken in selecting where to run them in Oklahoma. Most of the successful runs have been in water base mud drilling hard shales and soft, unconsolidated sands and lime, although bit life is often extended in oil-base muds.

  11. A practical quantum bit commitment protocol

    NASA Astrophysics Data System (ADS)

    Arash Sheikholeslam, S.; Aaron Gulliver, T.

    2012-01-01

    In this paper, we introduce a new quantum bit commitment protocol which is secure against entanglement attacks. A general cheating strategy is examined and shown to be practically ineffective against the proposed approach.

  12. 28-Bit serial word simulator/monitor

    NASA Technical Reports Server (NTRS)

    Durbin, J. W.

    1979-01-01

    Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.

  13. FastBit: Interactively Searching Massive Data

    SciTech Connect

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
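    As a rough illustration of the bitmap indexing and binning that FastBit builds on (a minimal sketch, not FastBit's actual compressed format or API; the function names are our own), one can keep one bit vector per value bin and answer a range query with bitwise ORs:

```python
from collections import defaultdict

def build_index(values, bin_edges):
    # one bitmap per bin; a Python int serves as an uncompressed bit vector
    bitmaps = defaultdict(int)
    for row, v in enumerate(values):
        b = sum(v >= e for e in bin_edges)      # bin index for this value
        bitmaps[b] |= 1 << row                  # set this row's bit
    return bitmaps

def range_query(bitmaps, bin_lo, bin_hi):
    # answer "value falls in bins [bin_lo..bin_hi]" by ORing bin bitmaps
    result = 0
    for b in range(bin_lo, bin_hi + 1):
        result |= bitmaps.get(b, 0)
    return [r for r in range(result.bit_length()) if (result >> r) & 1]
```

    FastBit additionally compresses the bitmaps (word-aligned hybrid encoding) and performs a candidate check for query bounds that fall inside a bin; both are omitted here.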

  14. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
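    The postprocessing described above, self-differencing followed by selecting least-significant bits, can be sketched as follows; the 8-bit sample width, delay of one sample, and function name are assumptions for illustration:

```python
import numpy as np

def random_bits(samples, n_lsb=5, delay=1):
    # self-differencing: subtract a delayed copy of the digitized samples
    # (modulo 256 for 8-bit samples) to flatten the amplitude distribution
    diff = (samples[delay:].astype(np.int64) - samples[:-delay]) & 0xFF
    # keep only the n least-significant bits of each difference
    kept = diff & ((1 << n_lsb) - 1)
    # unpack into a bit stream, n_lsb bits per sample, MSB first
    return ((kept[:, None] >> np.arange(n_lsb - 1, -1, -1)) & 1).ravel()
```

    With 5 bits kept per sample, a 20 GS/s-equivalent sampling rate would yield the 100 Gbps figure quoted above; randomness of the output must still be validated, e.g. with the NIST test suite.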

  15. Application of morphological bit planes in retinal blood vessel extraction.

    PubMed

    Fraz, M M; Basit, A; Barman, S A

    2013-04-01

    The appearance of the retinal blood vessels is an important diagnostic indicator of various clinical disorders of the eye and the body. Retinal blood vessels have been shown to provide evidence in terms of change in diameter, branching angles, or tortuosity, as a result of ophthalmic disease. This paper reports the development of an automated method for segmentation of blood vessels in retinal images. A unique combination of methods for retinal blood vessel skeleton detection and multidirectional morphological bit plane slicing is presented to extract the blood vessels from the color retinal images. The skeleton of the main vessels is extracted by applying directional differential operators and then evaluating the combination of derivative signs and average derivative values. Mathematical morphology has emerged as a proficient technique for quantifying the retinal vasculature in ocular fundus images. A multidirectional top-hat operator with rotating structuring elements is used to emphasize the vessels in a particular direction, and information is extracted using bit plane slicing. An iterative region growing method is applied to integrate the main skeleton and the images resulting from bit plane slicing of vessel direction-dependent morphological filters. The approach is tested on two publicly available databases, DRIVE and STARE. The average accuracy achieved by the proposed method is 0.9423 for both databases, with good sensitivity and specificity values; the algorithm also outperforms the second human observer in terms of precision of the segmented vessel tree.
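    Bit plane slicing itself is simple to state in code. A minimal NumPy sketch (our own illustration of the slicing step only, not the paper's full pipeline of directional morphological filters and region growing):

```python
import numpy as np

def bit_planes(image):
    # split an 8-bit grayscale image into 8 binary planes;
    # plane 7 holds the most significant bit, plane 0 the least
    return [((image >> b) & 1).astype(np.uint8) for b in range(8)]
```

    The high-order planes carry most of the structural content of the image, which is why slicing the output of a directionally filtered image isolates vessel responses effectively.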

  16. A Ku band 5 bit MEMS phase shifter for active electronically steerable phased array applications

    NASA Astrophysics Data System (ADS)

    Sharma, Anesh K.; Gautam, Ashu K.; Farinelli, Paola; Dutta, Asudeb; Singh, S. G.

    2015-03-01

    The design, fabrication and measurement of a 5 bit Ku band MEMS phase shifter in different configurations, i.e. a coplanar waveguide and microstrip, are presented in this work. The development architecture is based on the hybrid approach of switched and loaded line topologies. All the switches are monolithically manufactured on a 200 µm high resistivity silicon substrate using 4 inch diameter wafers. The first three bits (180°, 90° and 45°) are realized using switched microstrip lines and series ohmic MEMS switches whereas the fourth and fifth bits (22.5° and 11.25°) consist of microstrip line sections loaded by shunt ohmic MEMS devices. Individual bits are fabricated and evaluated for performance and the monolithic device is a 5 bit Ku band (16-18 GHz) phase shifter with very low average insertion loss of the order of 3.3 dB and a return loss better than 15 dB over the 32 states with a chip area of 44 mm2. A total phase shift of 348.75° with phase accuracy within 3° is achieved over all of the states. The performance of individual bits has been optimized in order to achieve an integrated performance so that they can be implemented into active electronically steerable antennas for phased array applications.

  17. Designing an efficient LT-code with unequal error protection for image transmission

    NASA Astrophysics Data System (ADS)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

    The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In a remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and it was shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
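    The LT-code idea referred to above, random-degree XOR combinations of source blocks decoded by peeling on an erasure channel, can be sketched as follows. The degree distribution here is a toy stand-in for the robust soliton distribution, and the symbol format and function names are our own:

```python
import random

def lt_encode(blocks, n_out, seed=0):
    # each output symbol XORs a randomly chosen subset of source blocks
    rng = random.Random(seed)
    k = len(blocks)
    out = []
    for _ in range(n_out):
        d = rng.choice([1, 2, 2, 3, 3, 4])     # toy degree distribution
        idx = rng.sample(range(k), min(d, k))
        val = 0
        for i in idx:
            val ^= blocks[i]
        out.append((frozenset(idx), val))
    return out

def lt_decode(symbols, k):
    # peeling decoder: repeatedly "release" symbols whose degree drops to 1
    syms = [[set(idx), val] for idx, val in symbols]
    known = {}
    progress = True
    while progress:
        progress = False
        for sym in syms:
            idx = sym[0]
            for i in [j for j in idx if j in known]:
                idx.discard(i)                 # subtract recovered blocks
                sym[1] ^= known[i]
            if len(idx) == 1:
                i = idx.pop()
                if i not in known:
                    known[i] = sym[1]
                    progress = True
    return [known.get(i) for i in range(k)]    # None marks unrecovered blocks
```

    Unequal-error-protection variants of this scheme bias the random block selection toward the more important early bits of the compressed stream, which is the direction taken by Arslan et al.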

  18. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
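    A concrete instance of the accumulation described above: repeatedly adding 0.1, which is not exactly representable in binary, accumulates rounding error that a compensated summation avoids. A small Python demonstration:

```python
import math

n = 1000
naive = 0.0
for _ in range(n):
    naive += 0.1            # each addition rounds to the nearest double

product = n * 0.1           # a single correctly rounded operation
print(naive - product)      # small but nonzero accumulated error

# math.fsum tracks exact partial sums and rounds only once at the end
accurate = math.fsum([0.1] * n)
```

    The gap between the naive loop and the single rounded product grows with n, which is exactly the accumulation of individual-step errors the chapter studies.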

  19. Using magnetic permeability bits to store information

    NASA Astrophysics Data System (ADS)

    Timmerwilke, John; Petrie, J. R.; Wieland, K. A.; Mencia, Raymond; Liou, Sy-Hwang; Cress, C. D.; Newburgh, G. A.; Edelstein, A. S.

    2015-10-01

    Steps are described in the development of a new magnetic memory technology, based on states with different magnetic permeability, with the capability to reliably store large amounts of information in a high-density form for decades. The advantages of using the permeability to store information include an insensitivity to accidental exposure to magnetic fields or temperature changes, both of which are known to corrupt memory approaches that rely on remanent magnetization. The high permeability media investigated consists of either films of Metglas 2826 MB (Fe40Ni38Mo4B18) or bilayers of permalloy (Ni78Fe22)/Cu. Regions of films of the high permeability media were converted thermally to low permeability regions by laser or ohmic heating. The permeability of the bits was read by detecting changes of an external 32 Oe probe field using a magnetic tunnel junction 10 μm away from the media. Metglas bits were written with 100 μs laser pulses and arrays of 300 nm diameter bits were read. The high and low permeability bits written using bilayers of permalloy/Cu are not affected by 10 Mrad(Si) of gamma radiation from a 60Co source. An economical route for writing and reading bits as small as 20 nm using a variation of heat assisted magnetic recording is discussed.

  20. Managing the number of tag bits transmitted in a bit-tracking RFID collision resolution protocol.

    PubMed

    Landaluce, Hugo; Perallos, Asier; Angulo, Ignacio

    2014-01-08

    Radio Frequency Identification (RFID) technology faces the problem of message collisions. The coexistence of tags sharing the communication channel degrades bandwidth, and increases the number of bits transmitted. The window methodology, which controls the number of bits transmitted by the tags, is applied to the collision tree (CT) protocol to solve the tag collision problem. The combination of this methodology with the bit-tracking technology, used in CT, improves the performance of the window and produces a new protocol which decreases the number of bits transmitted. The aim of this paper is to show how the CT bit-tracking protocol is influenced by the proposed window, and how the performance of the novel protocol improves under different conditions of the scenario. Therefore, we have performed a fair comparison of the CT protocol, which uses bit-tracking to identify the first collided bit, and the new proposed protocol with the window methodology. Simulation results show that the proposed window positively decreases the total number of bits that are transmitted by the tags, and outperforms the CT protocol latency in slow tag data rate scenarios.
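    The bit-tracking idea in CT, where Manchester coding lets the reader locate the first collided bit and split the tag population there, can be sketched as a recursion over ID prefixes. This is a simplified model of our own; the proposed window, which caps how many ID bits a tag transmits per reply, is not modeled here:

```python
def identify(tags, prefix=""):
    # tags: a set of equal-length binary ID strings
    responders = [t for t in tags if t.startswith(prefix)]
    if not responders:
        return []
    if len(responders) == 1:
        return responders              # a single reply: tag identified
    # bit-tracking (Manchester coding) exposes the first collided position
    tails = [t[len(prefix):] for t in responders]
    first = next(i for i in range(len(tails[0]))
                 if len({t[i] for t in tails}) > 1)
    # bits before the collision are common to all responders
    stem = prefix + tails[0][:first]
    return identify(tags, stem + "0") + identify(tags, stem + "1")
```

    Each recursive split costs one query; the window methodology reduces the bits per reply, at the price of extra queries, which is the trade-off the paper evaluates.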

  1. Tid-bits from Geneva.

    PubMed

    1998-09-01

    Global access to HIV/AIDS treatment was the universal theme at the 12th World AIDS Conference. However, 90 percent of people with AIDS do not have access to available therapy. Special attention is needed in dealing with HIV infected persons who are incarcerated and do not have access to the same level of care as the rest of the population. Although perinatal AIDS is now regarded as a preventable disease in the United States, many pregnant women also do not have access to prevention or treatment information. In many parts of the world, women do not have the ability to negotiate safer sexual practices and therefore, remain vulnerable to HIV infection. Partnerships are needed between the fields of prevention, treatment, biomedical research, and behavioral science in order to possibly achieve global access to treatment. PMID:11367485

  2. Friction of drill bits under Martian pressure

    NASA Astrophysics Data System (ADS)

    Zacny, K. A.; Cooper, G. A.

    2007-03-01

    Frictional behavior was investigated for two materials that are good candidates for Mars drill bits: Diamond Impregnated Segments and Polycrystalline Diamond Compacts (PDC). The bits were sliding against dry sandstone and basalt rocks under both Earth and Mars atmospheric pressures and also at temperatures ranging from subzero to over 400 °C. It was found that the friction coefficient dropped from approximately 0.16 to 0.1 as the pressure was lowered from the Earth's pressure to Mars' pressure, at room temperature. This is thought to be a result of the loss of weakly bound water on the sliding surfaces. Holding the pressure at 5 torr and increasing the temperature to approximately 200°C caused a sudden increase in the friction coefficient by approximately 50%. This is attributed to the loss of surface oxides. If no indication of the bit temperature is available, an increase in drilling torque could be misinterpreted as being caused by an increase in auger torque (due to accumulation of cuttings) rather than being the result of a loss of oxide layers due to elevated bit temperatures. An increase in rotational speed (to allow for clearing of cuttings) would then cause greater frictional heating and would increase the drilling torque further. Therefore it would be advisable to monitor the bit temperature or, if that is not possible, to include pauses in drilling to allow the heat to dissipate. Higher friction would also accelerate the wear of the drill bit and in turn reduce the depth of the hole.

  3. Quantum bit commitment under Gaussian constraints

    NASA Astrophysics Data System (ADS)

    Mandilara, Aikaterini; Cerf, Nicolas J.

    2012-06-01

    Quantum bit commitment has long been known to be impossible. Nevertheless, just as in the classical case, imposing certain constraints on the power of the parties may enable the construction of asymptotically secure protocols. Here, we introduce a quantum bit commitment protocol and prove that it is asymptotically secure if cheating is restricted to Gaussian operations. This protocol exploits continuous-variable quantum optical carriers, for which such a Gaussian constraint is experimentally relevant as the high optical nonlinearity needed to effect deterministic non-Gaussian cheating is inaccessible.

  4. A 16K-bit static IIL RAM with 25-ns access time

    NASA Astrophysics Data System (ADS)

    Inabe, Y.; Hayashi, T.; Kawarada, K.; Miwa, H.; Ogiue, K.

    1982-04-01

    A 16,384 x 1-bit RAM with 25-ns access time, 600-mW power dissipation, and 33 sq mm chip size has been developed. Excellent speed-power performance with high packing density has been achieved by an oxide isolation technology in conjunction with novel ECL circuit techniques and IIL flip-flop memory cells, 980 sq microns (35 x 28 microns) in cell size. Development results show that the IIL flip-flop memory cell is a key enabler for high-performance, large-capacity bipolar RAMs at densities of 16K bits per chip and above.

  5. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  6. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  7. Experimental implementation of bit commitment in the noisy-storage model.

    PubMed

    Ng, Nelly Huei Ying; Joshi, Siddarth K; Ming, Chia Chen; Kurtsiefer, Christian; Wehner, Stephanie

    2012-01-01

    Fundamental primitives such as bit commitment and oblivious transfer serve as building blocks for many other two-party protocols. Hence, the secure implementation of such primitives is important in modern cryptography. Here we present a bit commitment protocol that is secure as long as the attacker's quantum memory device is imperfect. The latter assumption is known as the noisy-storage model. We experimentally executed this protocol by performing measurements on polarization-entangled photon pairs. Our work includes a full security analysis, accounting for all experimental error rates and finite size effects. This demonstrates the feasibility of two-party protocols in this model using real-world quantum devices. Finally, we provide a general analysis of our bit commitment protocol for a range of experimental parameters. PMID:23271659

  8. Protected Polycrystalline Diamond Compact Bits For Hard Rock Drilling

    SciTech Connect

    Robert Lee Cardenas

    2000-10-31

    Two bits were designed. One bit was fabricated and tested at Terra-Tek's Drilling Research Laboratory. Fabrication of the second bit was not completed due to complications in fabrication and meeting scheduled test dates at the test facility. A conical bit was tested in Carthage Marble (compressive strength 14,500 psi) and Sierra White Granite (compressive strength 28,200 psi). During the testing, hydraulic horsepower, bit weight, and rotation rate were varied for the Conical Bit, a Varel Tricone Bit and a Varel PDC bit. The Conical Bit did cut rock at a reasonable rate in both rocks. Beneficial effects from the near and through cutter water nozzles were not evident in the marble due to test conditions and were not conclusive in the granite due to test conditions. In atmospheric drilling, the Conical Bit's penetration rate was as good as the standard PDC bit and better than the Tricone Bit. Torque requirements for the Conical Bit were higher than those required for the standard bits. Spudding the conical bit into the rock required some care to avoid overloading the nose cutters. The nose design should be evaluated to improve the bit's spudding characteristics.

  9. Reducing Truncation Error In Integer Processing

    NASA Technical Reports Server (NTRS)

    Thomas, J. Brooks; Berner, Jeffrey B.; Graham, J. Scott

    1995-01-01

    Improved method of rounding off (truncation of least-significant bits) in integer processing of data devised. Provides for reduction, to extremely low value, of numerical bias otherwise generated by accumulation of truncation errors from many arithmetic operations. Devised for use in integer signal processing, in which rescaling and truncation usually performed to reduce number of bits, which typically builds up in sequence of operations. Essence of method to alternate direction of roundoff (plus, then minus) on alternate occurrences of truncated values contributing to bias.
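    The alternating-roundoff idea can be sketched as follows. This is our own minimal embodiment, flipping the rounding direction on each occurrence of a nonzero truncated remainder; the brief's exact rule may differ:

```python
def make_truncator(k):
    # drop the k least-significant bits, alternating the roundoff
    # direction on successive inexact truncations so biases cancel
    state = {"round_up": False}
    def trunc(x):
        q, r = divmod(x, 1 << k)
        if r:                                  # inexact: alternate direction
            state["round_up"] = not state["round_up"]
            if state["round_up"]:
                q += 1                         # round up this time
        return q << k                          # exact values pass through
    return trunc
```

    Over many operations the accumulated bias of this truncator stays near zero, whereas always truncating toward minus infinity drifts by roughly half a quantum per operation, which is the bias buildup the brief addresses.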

  10. Errors and Their Mitigation at the Kirchhoff-Law-Johnson-Noise Secure Key Exchange

    PubMed Central

    Saez, Yessica; Kish, Laszlo B.

    2013-01-01

    A method to quantify the error probability at the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The most interesting finding is that the error probability decays exponentially with the duration of the time window of single bit exchange. The results indicate that it is feasible to have so small error probabilities of the exchanged bits that error correction algorithms are not required. The results are demonstrated with practical considerations. PMID:24303033

  11. Power of one bit of quantum information in quantum metrology

    NASA Astrophysics Data System (ADS)

    Cable, Hugo; Gu, Mile; Modi, Kavan

    2016-04-01

    We present a model of quantum metrology inspired by the computational model known as deterministic quantum computation with one quantum bit (DQC1). Using only one pure qubit together with l fully mixed qubits we obtain measurement precision (defined as root-mean-square error for the parameter being estimated) at the standard quantum limit, which is typically obtained using the same number of uncorrelated qubits in fully pure states. In principle, the standard quantum limit can be exceeded using an additional qubit which adds only a small amount of purity. We show that the discord in the final state vanishes only in the limit of attaining infinite precision for the parameter being estimated.

  12. An experimental study of voice communication over a bandlimited channel using variable bit width delta modulation

    NASA Astrophysics Data System (ADS)

    Tumok, N. Nur

    1989-12-01

    A variable bit width delta modulator (VBWDM) demodulator was designed, built and tested to achieve voice and music communication using a bandlimited channel. Only baseband modulation is applied to the input signal. Since there is no clock used during the digitizing process at the modulator, no bit synchronization is required for signal recovery in the receiver. The modulator is a hybrid design using 7 linear and 3 digital integrated circuits (IC), and the demodulator uses 2 linear ICs. A lowpass filter (LPF) is used to simulate the channel. The average number of bits sent over the channel is measured with a frequency counter at the output of the modulator. The minimum bandwidth required for the LPF is determined according to the intelligibility of the recovered message. Measurements indicate the average bit rate required for intelligible voice transmission is in the range of 2 to 4 kilobits per second (kbps) and between 2 and 5 kbps for music. The channel 3 dB bandwidth required is determined to be 1.5 kilohertz. Besides the hardware simplicity, VBWDM provides an option for intelligible digitized voice transmission at very low bit rates without requiring synchronization. Another important feature of the modulator design is that no bits are sent when no signal is present at the input, which saves transmitter power (important for mobile stations) and reduces the probability of intercept and jamming in military applications.
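    For contrast with the clockless, variable-bit-width scheme described above, a conventional fixed-step, clocked delta modulator fits in a few lines (a baseline sketch of our own, not the VBWDM design):

```python
def dm_encode(signal, step):
    # 1-bit delta modulation: transmit whether the tracked estimate
    # must step up (1) or down (0) to follow the input sample
    bits, est = [], 0.0
    for s in signal:
        bit = 1 if s > est else 0
        est += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step):
    # integrate the bit stream to reconstruct the staircase estimate
    out, est = [], 0.0
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return out
```

    The VBWDM design departs from this baseline in two ways: the pulse width itself carries amplitude information, and no bits at all are emitted when the input is idle.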

  13. Guaranteed energy-efficient bit reset in finite time.

    PubMed

    Browne, Cormac; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2014-09-01

    Landauer's principle states that it costs at least k_B T ln 2 of work to reset one bit in the presence of a heat bath at temperature T. The bound of k_B T ln 2 is achieved in the unphysical infinite-time limit. Here we ask what is possible if one is restricted to finite-time protocols. We prove analytically that it is possible to reset a bit with a work cost close to k_B T ln 2 in a finite time. We construct an explicit protocol that achieves this, which involves thermalizing and changing the system's Hamiltonian so as to avoid quantum coherences. Using concepts and techniques pertaining to single-shot statistical mechanics, we furthermore prove that the heat dissipated is exponentially close to the minimal amount possible not just on average, but guaranteed with high confidence in every run. Moreover, we exploit the protocol to design a quantum heat engine that works near the Carnot efficiency in finite time.
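
    As a quick numeric illustration of the bound discussed in this record (not part of the original abstract), the k_B T ln 2 limit at room temperature works out to a few zeptojoules per bit. The constant and formula are standard; the choice of 300 K is just an example:

```python
import math

def landauer_bound_joules(temperature_kelvin: float) -> float:
    """Minimum work (J) to erase one bit at temperature T (Landauer's principle)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
    return k_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is roughly 2.87e-21 J.
print(f"{landauer_bound_joules(300.0):.3e} J")
```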

  14. Jet bit with onboard deviation means

    SciTech Connect

    Cherrington, M.D.

    1990-02-13

    This patent describes a directional drill bit utilizing pressurized fluid as a means for eroding earth in a forward path of said bit. It comprises: an elongate hollow body having a first proximal end and a first distal end, and having at least a rigid first section and at least a rigid second section. The first section and said second section being connected one to the other by a flexible joint positioned intermediately of said first section and said second section, with the combination of said first section, said flexible joint and said second section providing a conduit having leak-free annular sidewalls. The said combination thereby defining said elongate hollow body; a connecting means formed by said first proximal end for joining said elongated hollow body with an appropriate fluid conveyance means used to transport said pressurized fluid; a nozzle means borne by said first distal end. The nozzle means comprising a nozzle plate having at least one jet nozzle attached to and carried by said nozzle plate; and an articulation means. The articulation means being responsive to changes in fluid pressure and permitting a forward portion of said bit bearing said nozzle structure to change angular position with respect to an aft portion of the bit.

  15. Composite grease for rock bit bearings

    SciTech Connect

    Newcomb, A.L.

    1982-11-09

    A rock bit for drilling subterranean formations is lubricated with a grease with the following composition: molybdenum disulfide particles in the range of from 6 to 14% by weight; copper particles in the range of from 3 to 9% by weight; a metal soap thickener in the range of from 4 to 10% by weight; and a balance of primarily hydrocarbon oil.

  16. Multiple bit differential detection of offset QPSK

    NASA Technical Reports Server (NTRS)

    Simon, M.

    2003-01-01

    Analogous to multiple symbol differential detection of quadrature phase-shift-keying, a multiple bit differential detection scheme is described for offset QPSK that also exhibits continuous improvement in performance with increasing observation interval. Being derived from maximum-likelihood (ML) considerations, the proposed scheme is purported to be the most power efficient scheme for such a modulation and detection method.

  17. REVERSIBLE N-BIT TO N-BIT INTEGER HAAR-LIKE TRANSFORMS

    SciTech Connect

    Senecal, J G; Duchaineau, M A; Joy, K I

    2004-07-26

    We introduce TLHaar, an n-bit to n-bit reversible transform similar to the S-transform. TLHaar uses lookup tables that approximate the S-transform, but reorder the coefficients so they fit into n bits. TLHaar is suited for lossless compression in fixed-width channels, such as digital video channels and graphics hardware frame buffers. Tests indicate that when the incoming image data has lines or hard edges TLHaar coefficients compress better than S-transform coefficients. For other types of image data TLHaar coefficients compress up to 2.5% worse than those of the S-transform, depending on the data and the compression method used.
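
    The S-transform that TLHaar approximates can be sketched as follows; this is the classical integer Haar (S) transform with its exact inverse, not the TLHaar lookup tables themselves. Note that the difference coefficient needs n+1 bits in general (range -255 to 255 for 8-bit inputs), which is precisely the overflow that TLHaar's reordering is designed to avoid:

```python
def s_transform(a: int, b: int) -> tuple[int, int]:
    """Forward S-transform: integer average (low-pass) and difference (high-pass)."""
    low = (a + b) // 2   # floor of the average
    high = a - b         # exact difference; may need one extra bit
    return low, high

def s_inverse(low: int, high: int) -> tuple[int, int]:
    """Exact inverse of the integer S-transform."""
    a = low + (high + 1) // 2
    b = a - high
    return a, b

# Exhaustive reversibility check over all pairs of 8-bit samples.
assert all(
    s_inverse(*s_transform(a, b)) == (a, b)
    for a in range(256) for b in range(256)
)
```

The floor in the forward average loses one bit of information, and the inverse recovers it from the parity of the difference, which is what makes the transform lossless.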

  18. Frictional ignition with coal-mining bits. Information Circular/1990

    SciTech Connect

    Courtney, W.G.

    1990-01-01

    The publication reviews recent U.S. Bureau of Mines studies of frictional ignition of a methane-air environment by coal mining bits cutting into sandstone and the effectiveness of remedial techniques to reduce the likelihood of frictional ignition. Frictional ignition with a mining bit always involves a worn bit having a wear flat on the tip of the bit. The worn bit forms hot spots on the surface of the sandstone because of frictional abrasion. The hot spots then can ignite the methane-air environment. A small wear flat forms a small hot spot, which does not give ignition, while a large wear flat forms a large hot spot, which gives ignition. The likelihood of frictional ignition can be somewhat reduced by using a mushroom-shaped tungsten-carbide bit tip on the mining bit and by increasing the bit clearance angle; it can be significantly reduced by using a water spray nozzle in back of each bit.

  19. Source-optimized irregular repeat accumulate codes with inherent unequal error protection capabilities and their application to scalable image transmission.

    PubMed

    Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R

    2006-07-01

    The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases. PMID:16830898
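
    The capacity figure cited in this record (operating 9.38% away from capacity at p = 0.03) is relative to the binary symmetric channel capacity C = 1 - H(p); a minimal sketch of that computation:

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) in bits: entropy of a Bernoulli(p) source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity (bits per channel use) of a BSC with crossover probability p."""
    return 1.0 - binary_entropy(p)

# For p = 0.03 the capacity is about 0.8056 bits per channel use.
print(f"C(0.03) = {bsc_capacity(0.03):.4f} bits/use")
```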

  20. Bit-by-bit autophagic removal of parkin-labelled mitochondria.

    PubMed

    Yang, Jin-Yi; Yang, Wei Yuan

    2013-01-01

    Eukaryotic cells maintain mitochondrial integrity through mitophagy, an autophagic process by which dysfunctional mitochondria are selectively sequestered into double-layered membrane structures, termed phagophores, and delivered to lysosomes for degradation. Here we show that small fragments of parkin-labelled mitochondria at omegasome-marked sites are engulfed by autophagic membranes one at a time. Using a light-activation scheme to impair long mitochondrial tubules, we demonstrate that sites undergoing bit-by-bit mitophagy display preferential ubiquitination, and are situated where parkin-labelled mitochondrial tubules and endoplasmic reticulum intersect. Our observations suggest contact regions between the endoplasmic reticulum and impaired mitochondria are initiation sites for local LC3 recruitment and mitochondrial remodelling that support bit-by-bit, parkin-mediated mitophagy. These results help in understanding how cells manage to fit large and morphologically heterogeneous mitochondria into micron-sized autophagic membranes during mitophagy.

  1. Bit-by-bit optical code scrambling technique for secure optical communication.

    PubMed

    Wang, Xu; Gao, Zhensen; Wang, Xuhua; Kataoka, Nobuyuki; Wada, Naoya

    2011-02-14

    We propose and demonstrate a novel bit-by-bit code scrambling technique based on time domain spectral phase encoding/decoding (SPE/SPD) scheme using only a single phase modulator to simultaneously generate and decode the code hopping sequence and DPSK data for secure optical communication application. In the experiment, 2.5-Gb/s DPSK data has been generated, decoded and securely transmitted over 34 km by scrambling five 8-chip, 20-Gchip/s Gold codes with prime-hop patterns. The proposed scheme can rapidly reconfigure the optical code hopping sequence bit-by-bit with the DPSK data, and thus it is very robust to conventional data-rate energy detection and DPSK demodulation attack, exhibiting the potential to provide unconditional transmission security and even realize a one-time pad.

  2. A constructive inter-track interference coding scheme for bit-patterned media recording system

    NASA Astrophysics Data System (ADS)

    Arrayangkool, A.; Warisarn, C.; Kovintavewat, P.

    2014-05-01

    The inter-track interference (ITI) can severely degrade the system performance of bit-patterned media recording (BPMR). One way to alleviate the ITI effect is to encode an input data sequence before recording to avoid some data patterns that easily cause an error at the data detection process. This paper proposes a constructive ITI (CITI) coding scheme for a multi-track multi-head BPMR system to eliminate the data patterns that lead to severe ITI. Numerical results indicate that the system with CITI coding outperforms that without CITI coding, especially when an areal density (AD) is high and/or the position jitter is large. Specifically, for the system without position jitter at a bit-error rate of 10^-4, the proposed scheme can provide about 3 dB gain at the AD of 2.5 Tb/in^2 over the system without CITI coding.

  3. Bit-1 is an essential regulator of myogenic differentiation.

    PubMed

    Griffiths, Genevieve S; Doe, Jinger; Jijiwa, Mayumi; Van Ry, Pam; Cruz, Vivian; de la Vega, Michelle; Ramos, Joe W; Burkin, Dean J; Matter, Michelle L

    2015-05-01

    Muscle differentiation requires a complex signaling cascade that leads to the production of multinucleated myofibers. Genes regulating the intrinsic mitochondrial apoptotic pathway also function in controlling cell differentiation. How such signaling pathways are regulated during differentiation is not fully understood. Bit-1 (also known as PTRH2) mutations in humans cause infantile-onset multisystem disease with muscle weakness. We demonstrate here that Bit-1 controls skeletal myogenesis through a caspase-mediated signaling pathway. Bit-1-null mice exhibit a myopathy with hypotrophic myofibers. Bit-1-null myoblasts prematurely express muscle-specific proteins. Similarly, knockdown of Bit-1 expression in C2C12 myoblasts promotes early differentiation, whereas overexpression delays differentiation. In wild-type mice, Bit-1 levels increase during differentiation. Bit-1-null myoblasts exhibited increased levels of caspase 9 and caspase 3 without increased apoptosis. Bit-1 re-expression partially rescued differentiation. In Bit-1-null muscle, Bcl-2 levels are reduced, suggesting that Bcl-2-mediated inhibition of caspase 9 and caspase 3 is decreased. Bcl-2 re-expression rescued Bit-1-mediated early differentiation in Bit-1-null myoblasts and C2C12 cells with knockdown of Bit-1 expression. These results support an unanticipated yet essential role for Bit-1 in controlling myogenesis through regulation of Bcl-2.

  4. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  5. Acquisition and Retaining Granular Samples via a Rotating Coring Bit

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart

    2013-01-01

    This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated, and a granular sample is entered into the bit while it is spinning, making it adhere to the internal wall of the bit, where it compacts itself into the wall of the bit. The bit can be specially designed to increase the effectiveness of regolith capturing while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during the rotation of the bit. The bit can be designed with an internal flute that directs the regolith upward inside the bit. The use of both the teeth and flute can be implemented in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth. For that implementation, the bit can be designed such that when turning one way, the teeth guide the regolith outward of the bit and when turning in the opposite direction, the teeth will guide the regolith inward into the bit internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining granular sample, and the acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into the soil that was contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit. To prove this point, the drill was turned horizontally, and the acquired soil was still inside the bit. 
The basic theory behind the process of retaining unconsolidated mass that can be acquired by the centrifugal forces of the bit is determined by noting that in order to stay inside the interior of the bit, the

  6. Drill bit with improved cutter sizing pattern

    SciTech Connect

    Keith, C.W.; Clayton, R.I.

    1993-08-24

    A fixed cutter drill bit is described having a body with a nose portion thereof containing a plurality of angularly spaced generally radial wings, a first of said wings including a first row of cutting elements mounted thereon upon progressing radially outward from a center of said nose portion toward a periphery of the body of the bit, said first row of cutting elements having alternately larger and smaller area cutting faces at spaced radial positions along said first wing relative to the center of said nose, a second of said wings having a second similar row of cutting elements of larger and smaller area cutting faces thereon in substantially the same but reversed radial positions with respect to the relative radial placement of the larger and smaller diameter cutting faces of said elements in said first wing.

  7. Earth boring bit with eccentric seal boss

    SciTech Connect

    Helmick, J.E.

    1981-07-21

    A rolling cone cutter earth boring bit is provided with a sealing system that results in the seal being squeezed uniformly around the seal circumference during drilling. The bearing pin seal surface is machined eccentrically to the bearing pin by an amount equal to the radial clearance of the bearing. The bearing pin seal surface is machined about an axis that is offset from the central axis of the bearing pin in the direction of the unloaded side of the bearing pin. When the bit is drilling and the bearing pin is loaded the seal will run on an axis concentric with the axis of the seal surfaces of the bearing pin and the rolling cutter and will see uniform squeeze around its circumference.

  8. Coherent WDM, toward > 1 bit/s/Hz information spectral density

    NASA Astrophysics Data System (ADS)

    Ellis, Andrew D.; Gunning, Fatima C.

    2005-06-01

    Many approaches to achieving high information spectral density (ISD) have been reported recently. The standard non-return-to-zero (NRZ) format, which offers a baseline performance around 0.4 bit/s/Hz, may be enhanced using a variety of techniques, including: pre-filtering within the transmitter, multi-level modulation formats and polarisation interleaving or multiplexing. These techniques either increase the information per channel (multi-level formats and polarization multiplexing) or minimise interferometric cross talk (pre-filtering and polarisation interleaving) and result in ISDs around 0.8 bit/s/Hz. Combinations of these techniques have been used to provide ISDs of up to 1.6 bit/s/Hz. In this paper we propose a new technique, which we call Coherent WDM (CoWDM), to increase the ISD of NRZ binary coded signals in a single polarisation from 0.4 to 1 bit/s/Hz whilst simultaneously eliminating the need for pre-filters within the transmitter. Phase control within the transmitter is used to achieve precise control of interferometric cross talk. This allows the use of stronger demultiplexing filters at the receiver, and provides optimum performance when the bit rate equals the channel spacing, giving an ISD of 1 bit/s/Hz. This interference control may be achieved by controlling the phase of each laser individually with optical phase locked loops, or by replacing the typical bank of lasers with one or more coherent comb sources, and encoding data using an array of modulators that preserves this relative optical phase. Since optical filtering is not required in the transmitter, stronger optical filters may be used to demultiplex the individual WDM channels at the receiver, further reducing cross talk.

  9. Cosmic Ray Induced Bit-Flipping Experiment

    NASA Astrophysics Data System (ADS)

    Pu, Ge; Callaghan, Ed; Parsons, Matthew; Cribflex Team

    2015-04-01

    CRIBFLEX is a novel approach to mid-altitude observational particle physics intended to correlate the phenomena of semiconductor bit-flipping with cosmic ray activity. Here a weather balloon carries a Geiger counter and DRAM memory to various altitudes; the data collected will contribute to the development of memory device protection. We present current progress toward initial flight and data acquisition. This work is supported by the Society of Physics Students with funding from a Chapter Research Award.

  10. A Tunable, Software-based DRAM Error Detection and Correction Library for HPC

    SciTech Connect

    Fiala, David J; Ferreira, Kurt Brian; Mueller, Frank; Engelmann, Christian

    2012-01-01

    Proposed exascale systems will present a number of considerable resiliency challenges. In particular, DRAM soft-errors, or bit-flips, are expected to greatly increase due to the increased memory density of these systems. Current hardware-based fault-tolerance methods will be unsuitable for addressing the expected soft-error rate. As a result, additional software will be needed to address this challenge. In this paper we introduce LIBSDC, a tunable, transparent silent data corruption detection and correction library for HPC applications. LIBSDC provides comprehensive SDC protection for program memory by implementing on-demand page integrity verification. Experimental benchmarks with Mantevo HPCCG show that once tuned, LIBSDC is able to achieve SDC protection with 50% resource overhead, less than the 100% needed for double modular redundancy.

  11. Lathe tool bit and holder for machining fiberglass materials

    NASA Technical Reports Server (NTRS)

    Winn, L. E. (Inventor)

    1972-01-01

    A lathe tool and holder combination for machining resin impregnated fiberglass cloth laminates is described. The tool holder and tool bit combination is designed to accommodate a conventional carbide-tipped, round shank router bit as the cutting medium, and provides an infinite number of cutting angles in order to produce a true and smooth surface in the fiberglass material workpiece with every pass of the tool bit. The technique utilizes damaged router bits which ordinarily would be discarded.

  12. Method to manufacture bit patterned magnetic recording media

    DOEpatents

    Raeymaekers, Bart; Sinha, Dipen N

    2014-05-13

    A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.

  13. NSC 800, 8-bit CMOS microprocessor

    NASA Technical Reports Server (NTRS)

    Suszko, S. F.

    1984-01-01

    The NSC 800 is an 8-bit CMOS microprocessor manufactured by National Semiconductor Corp., Santa Clara, California. The 8-bit microprocessor chip with 40-pad pin-terminals has eight address buffers (A8-A15), eight data address -- I/O buffers (AD(sub 0)-AD(sub 7)), six interrupt controls and sixteen timing controls with a chip clock generator and an 8-bit dynamic RAM refresh circuit. The 22 internal registers have the capability of addressing 64K bytes of memory and 256 I/O devices. The chip is fabricated on N-type (100) silicon using self-aligned polysilicon gates and local oxidation process technology. The chip interconnect consists of four levels: Aluminum, Polysi 2, Polysi 1, and P(+) and N(+) diffusions. The four levels, except for contact interface, are isolated by interlevel oxide. The chip is packaged in a 40-pin dual-in-line (DIP), side brazed, hermetically sealed, ceramic package with a metal lid. The operating voltage for the device is 5 V. It is available in three operating temperature ranges: 0 to +70 C, -40 to +85 C, and -55 to +125 C. Two devices were submitted for product evaluation by F. Stott, MTS, JPL Microprocessor Specialist. The devices were pencil-marked and photographed for identification.

  14. An Optical Bit-Counting Algorithm

    NASA Technical Reports Server (NTRS)

    Mack, Marilyn; Lapir, Gennadi M.; Berkovich, Simon

    2000-01-01

    This paper addresses the omnipresent problem of counting bits - an operation discussed since the very early days of computer science. The need for a quick bit-counting method acquires a special significance with the proliferation of search engines on the Internet. It arises in several other computer applications. This is especially true in information retrieval, in which an array of binary vectors is used to represent a characteristic function (CF) of a set of qualified documents. The number of "1"s in the CF equals the cardinality of the set. The process of repeated evaluations of this cardinality is a pivotal point in choosing a rational strategy for deciding whether to constrain or broaden the search criteria to ensure selection of the desired items. Another need for bit-counting occurs when trying to determine the differences between given files (images or text) in terms of the Hamming distance. An Exclusive OR operation applied to a pair of files results in a binary vector array of mismatches that must be counted.
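
    The XOR-then-count procedure described in this record can be sketched as follows; `popcount` and `hamming_distance` are illustrative names, and Kernighan's clear-the-lowest-set-bit trick stands in for whatever optimized counting method an implementation would actually use:

```python
def popcount(x: int) -> int:
    """Count set bits via Kernighan's trick: each step clears the lowest 1-bit."""
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings (XOR + count)."""
    assert len(a) == len(b)
    return sum(popcount(x ^ y) for x, y in zip(a, b))

print(hamming_distance(b"cat", b"bat"))  # → 1 ('c' and 'b' differ in one bit)
```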

  15. Device-independent bit commitment based on the CHSH inequality

    NASA Astrophysics Data System (ADS)

    Aharon, N.; Massar, S.; Pironio, S.; Silman, J.

    2016-02-01

    Bit commitment and coin flipping occupy a unique place in the device-independent landscape, as the only device-independent protocols thus far suggested for these tasks are reliant on tripartite GHZ correlations. Indeed, we know of no other bipartite tasks, which admit a device-independent formulation, but which are not known to be implementable using only bipartite nonlocality. Another interesting feature of these protocols is that the pseudo-telepathic nature of GHZ correlations—in contrast to the generally statistical character of nonlocal correlations, such as those arising in the violation of the CHSH inequality—is essential to their formulation and analysis. In this work, we present a device-independent bit commitment protocol based on CHSH testing, which achieves the same security as the optimal GHZ-based protocol, albeit at the price of fixing the time at which Alice reveals her commitment. The protocol is analyzed in the most general settings, where the devices are used repeatedly and may have long-term quantum memory. We also recast the protocol in a post-quantum setting where both honest and dishonest parties are restricted only by the impossibility of signaling, and find that overall the supra-quantum structure allows for greater security.

  16. Statistical mechanics approach to 1-bit compressed sensing

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Kabashima, Yoshiyuki

    2013-02-01

    Compressed sensing is a framework that makes it possible to recover an N-dimensional sparse vector x ∈ R^N from its linear transformation y ∈ R^M of lower dimensionality M < N. A scheme further reducing the data size of the compressed expression by using only the sign of each entry of y to recover x was recently proposed. This is often termed 1-bit compressed sensing. Here, we analyze the typical performance of an l1-norm-based signal recovery scheme for 1-bit compressed sensing using statistical mechanics methods. We show that the signal recovery performance predicted by the replica method under the replica symmetric ansatz, which turns out to be locally unstable for modes breaking the replica symmetry, is in good consistency with experimental results of an approximate recovery algorithm developed earlier. This suggests that the l1-based recovery problem typically has many local optima of a similar recovery accuracy, which can be achieved by the approximate algorithm. We also develop another approximate recovery algorithm inspired by the cavity method. Numerical experiments show that when the density of nonzero entries in the original signal is relatively large the new algorithm offers better performance than the abovementioned scheme and does so with a lower computational cost.
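
    The 1-bit measurement model y = sign(Ax) is easy to simulate. The back-projection estimate below is only a crude stand-in for the l1-based and cavity-inspired recovery schemes analyzed in the record, chosen to keep the sketch short; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 128, 512, 4          # signal dimension, measurements, nonzero entries

# K-sparse unit-norm signal (1-bit measurements discard amplitude, so only
# the direction of x is recoverable).
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x /= np.linalg.norm(x)

A = rng.standard_normal((M, N))
y = np.sign(A @ x)             # the 1-bit compressed measurements

# Crude direction estimate by back-projection A^T y (NOT the paper's l1 or
# cavity schemes -- just the measurement model in action).
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)
print("correlation with truth:", float(x_hat @ x))
```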

  17. PDC bits stand up to high speed, soft formation drilling

    SciTech Connect

    Hover, E.R.; Middleton, J.N.

    1982-08-01

    Six experimental, polycrystalline diamond compact (PDC) bit designs were tested in the lab at both high and low speeds in three different types of rock. Testing procedures, bit performance and wear characteristics are discussed. These experimental results are correlated with specific design options such as rake angle and bit profile.

  18. PCM bit detection with correction for intersymbol interference

    NASA Technical Reports Server (NTRS)

    Thumim, A. I.

    1969-01-01

    For pulse code modulation bits, received signals are filtered by an integrate-and-dump filter, from which samples are taken at the end of each PCM bit. A threshold decision circuit determines the level of the sample voltage. The effect of interference from a known past bit can be corrected by raising or lowering the threshold voltage value.
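
    The threshold-adjustment idea can be sketched as follows, assuming antipodal (+1/-1) signaling and a known ISI coefficient; the function name, coefficient, and test pattern are all illustrative:

```python
def detect_bits(samples, isi_coeff=0.25, threshold=0.0):
    """Threshold detection where the threshold is shifted by the known ISI
    contribution of the previously detected bit (+1/-1 levels assumed)."""
    bits = []
    prev_level = 0.0                              # no bit precedes the first sample
    for s in samples:
        th = threshold + isi_coeff * prev_level   # raise or lower the threshold
        bit = 1 if s > th else 0
        bits.append(bit)
        prev_level = 1.0 if bit else -1.0
    return bits

# Each received sample = current level + isi_coeff * previous level.
tx = [1, 0, 0, 1, 1]
levels = [1.0 if b else -1.0 for b in tx]
rx = [levels[i] + (0.25 * levels[i - 1] if i else 0.0) for i in range(len(levels))]
print(detect_bits(rx))  # → [1, 0, 0, 1, 1], the transmitted pattern
```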

  19. Laboratory and field testing of improved geothermal rock bits

    SciTech Connect

    Hendrickson, R.R.; Jones, A.H.; Winzenried, R.W.; Maish, A.B.

    1980-07-01

    The development and testing of 222 mm (8-3/4 inch) unsealed, insert type, medium hard formation, high-temperature bits are described. The new bits were fabricated by substituting improved materials in critical bit components. These materials were selected on bases of their high temperature properties, machinability, and heat treatment response. Program objectives required that both machining and heat treating could be accomplished with existing rock bit production equipment. Two types of experimental bits were subjected to laboratory air drilling tests at 250°C (482°F) in cast iron. These tests indicated field testing could be conducted without danger to the hole, and that bearing wear would be substantially reduced. Six additional experimental bits, and eight conventional bits were then subjected to air drilling at 240°C (464°F) in Franciscan Graywacke at The Geysers, CA. The materials selected improved roller wear by 200%, friction-pin wear by 150%, and lug wear by 150%. Geysers drilling performances compared directly to conventional bits indicate that in-gage drilling life was increased by 70%. All bits at The Geysers are subjected to reaming out-of-gage hole prior to drilling. Under these conditions the experimental bits showed a 30% increase in usable hole over the conventional bits. These tests demonstrated a potential well cost reduction of 4 to 8%. Savings of 12% are considered possible with drilling procedures optimized for the experimental bits.

  20. An 8-bit 100-MS/s digital-to-skew converter embedded switch with a 200-ps range for time-interleaved sampling

    NASA Astrophysics Data System (ADS)

    Xiaoshi, Zhu; Chixiao, Chen; Jialiang, Xu; Fan, Ye; Junyan, Ren

    2013-03-01

    A sampling switch with an embedded digital-to-skew converter (DSC) is presented. The proposed switch eliminates time-interleaved ADCs' skews by adjusting the boosted voltage. A similar bridged capacitors' charge sharing structure is used to minimize the area. The circuit is fabricated in a 0.18 μm CMOS process and achieves sub-1 ps resolution and 200 ps timing range at a rate of 100 MS/s. The power consumption is 430 μW at maximum. The measurement result also includes a 2-channel 14-bit 100 MS/s time-interleaved ADCs (TI-ADCs) with the proposed DSC switch's demonstration. This scheme is widely applicable for the clock skew and aperture error calibration demanded in TI-ADCs and SHA-less ADCs.

  1. Inertial and Magnetic Sensor Data Compression Considering the Estimation Error

    PubMed Central

    Suh, Young Soo

    2009-01-01

    This paper presents a compression method for inertial and magnetic sensor data, where the compressed data are used to estimate some states. When sensor data are bounded, the proposed compression method guarantees that the compression error is smaller than a prescribed bound. The manner in which this error bound affects the bit rate and the estimation error is investigated. Through the simulation, it is shown that the estimation error is improved by 18.81% over a test set of 12 cases compared with a filter that does not use the compression error bound. PMID:22454564
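
    A uniform quantizer illustrates how a prescribed error bound fixes both the compression error and the bit rate, which is the trade-off this record investigates; the signal range and bound below are illustrative, not the paper's sensor model:

```python
import math

def quantize(value, bound, lo=-10.0, hi=10.0):
    """Uniform quantizer over [lo, hi] whose reconstruction error is guaranteed
    to be at most `bound` (step = 2*bound). Returns the reconstructed value and
    the bits per sample needed to index the levels."""
    step = 2.0 * bound
    n_levels = math.ceil((hi - lo) / step) + 1
    bits_per_sample = math.ceil(math.log2(n_levels))
    index = round((value - lo) / step)            # rounding error <= step/2 = bound
    return lo + index * step, bits_per_sample

x = 3.14159
x_hat, bits = quantize(x, bound=0.01)
assert abs(x - x_hat) <= 0.01                     # the guaranteed bound holds
print(bits, x_hat)
```

Tightening the bound shrinks the step and raises the bit rate (roughly one extra bit per halving of the bound), which is the knob the paper tunes against the estimation error.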

  2. Multi-Bit Nano-Electromechanical Nonvolatile Memory Cells (Zigzag T Cells) for the Suppression of Bit-to-Bit Interference.

    PubMed

    Choi, Woo Young; Han, Jae Hwan; Cha, Tae Min

    2016-05-01

    Multi-bit nano-electromechanical (NEM) nonvolatile memory cells such as T cells were proposed for higher memory density. However, they suffered from bit-to-bit interference (BI). In order to suppress BI without sacrificing cell size, this paper proposes zigzag T cell structures. The BI suppression of the proposed zigzag T cell is verified by finite-element modeling (FEM). Based on the FEM results, the design of zigzag T cells is optimized.

  3. A novel bit-quad-based Euler number computing algorithm.

    PubMed

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms. PMID:26636023
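For context, the classical bit-quad method that this work optimizes can be sketched directly. The snippet below implements Gray's original bit-quad counting (counting quads with one or three foreground pixels plus diagonal pairs), not the paper's faster two-pattern scan; the function name and padding convention are illustrative.

```python
import numpy as np

def euler_number(img, connectivity=8):
    """Euler number of a binary image via classical bit-quad counting."""
    p = np.pad(np.asarray(img, dtype=int), 1)  # zero-pad so border quads exist
    q1 = q3 = qd = 0
    for i in range(p.shape[0] - 1):
        for j in range(p.shape[1] - 1):
            quad = p[i:i + 2, j:j + 2]
            s = quad.sum()
            if s == 1:
                q1 += 1                       # exactly one foreground pixel
            elif s == 3:
                q3 += 1                       # exactly three foreground pixels
            elif s == 2 and quad[0, 0] == quad[1, 1]:
                qd += 1                       # two foreground pixels on a diagonal
    if connectivity == 8:
        return (q1 - q3 - 2 * qd) // 4
    return (q1 - q3 + 2 * qd) // 4            # 4-connectivity variant

solid = np.ones((2, 2), dtype=int)            # one component, no holes
ring = np.ones((3, 3), dtype=int)
ring[1, 1] = 0                                # one component, one hole
print(euler_number(solid), euler_number(ring))  # -> 1 0
```

The Euler number here is (components − holes): 1 for the solid square, 0 for the ring.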

  5. Implementation of digital filters for minimum quantization errors

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1974-01-01

    In this paper a technique is developed for choosing programming forms and bit configurations for digital filters that minimize the quantization errors. The technique applies to digital filters operating in fixed-point arithmetic in either open-loop or closed-loop systems, and is implemented by a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum quantization errors, the total bit configuration required in the filter, and the location of the binary point at each quantizer within the filter.
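A toy version of the trade-off the program automates: run the same first-order IIR filter y[k] = a·y[k−1] + b·x[k] in double precision and in fixed point with a given number of fractional bits (rounding after every multiply), and measure how the quantization error shrinks as the word length grows. This is only an illustration of the effect; the paper's program additionally selects among programming forms.

```python
import numpy as np

def fixed_point_filter(x, a, b, frac):
    """First-order IIR filter with products rounded to an LSB of 2**-frac."""
    q = lambda v: np.round(v * 2.0**frac) / 2.0**frac
    aq, bq = q(a), q(b)                  # quantized coefficients
    y, out = 0.0, []
    for xk in x:
        y = q(aq * y) + q(bq * xk)       # quantize each product
        out.append(y)
    return np.array(out)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
a, b = 0.9, 0.1

# double-precision reference output
y_ref, y = [], 0.0
for xk in x:
    y = a * y + b * xk
    y_ref.append(y)
y_ref = np.array(y_ref)

for frac in (6, 10, 14):
    err = fixed_point_filter(x, a, b, frac) - y_ref
    print(f"{frac:2d} fractional bits: RMS error = {np.sqrt(np.mean(err**2)):.2e}")
```

Each additional fractional bit roughly halves the quantization step, so the RMS error falls accordingly.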

  6. Carrier Synchronization for 3- and 4-bit-per-Symbol Optical Transmission

    NASA Astrophysics Data System (ADS)

    Ip, Ezra; Kahn, Joseph M.

    2005-12-01

    We investigate carrier synchronization for coherent detection of optical signals encoding 3 and 4 bits/symbol. We consider the effects of laser phase noise and of additive white Gaussian noise (AWGN), which can arise from local oscillator (LO) shot noise or LO-spontaneous beat noise. We identify 8- and 16-ary quadrature amplitude modulation (QAM) schemes that perform well when the receiver phase-locked loop (PLL) tracks the instantaneous signal phase with moderate phase error. We propose implementations of 8- and 16-QAM transmitters using Mach-Zehnder (MZ) modulators. We outline a numerical method for computing the bit error rate (BER) of 8- and 16-QAM in the presence of AWGN and phase error. It is found that these schemes can tolerate phase-error standard deviations of 2.48° and 1.24°, respectively, for a power penalty of 0.5 dB at a BER of 10⁻⁹. We propose a suitable PLL design and analyze its performance, taking account of laser phase noise, AWGN, and propagation delay within the PLL. Our analysis shows that the phase error depends on the constellation penalty, which is the mean power of the constellation symbols times the mean inverse power. We establish a procedure for finding the optimal PLL natural frequency, and determine tolerable laser linewidths and PLL propagation delays. For zero propagation delay, 8- and 16-QAM can tolerate linewidth-to-bit-rate ratios of 1.8 × 10⁻⁵ and 1.4 × 10⁻⁶, respectively, assuming a total penalty of 1.0 dB.
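The constellation penalty defined above (mean symbol power times mean inverse symbol power) is easy to compute. The sketch below evaluates it for standard square 16-QAM; the paper's optimized 16-point constellation may differ, so the numbers are only indicative.

```python
import numpy as np

# Constellation penalty = (mean symbol power) x (mean inverse symbol power),
# evaluated here for standard square 16-QAM with levels {-3, -1, 1, 3}.
levels = np.array([-3, -1, 1, 3])
symbols = np.array([complex(i, q) for i in levels for q in levels])
power = np.abs(symbols) ** 2
penalty = power.mean() * (1.0 / power).mean()
print(round(penalty, 3), round(10 * np.log10(penalty), 2), "dB")  # -> 1.889 2.76 dB
```

For square 16-QAM the penalty works out to exactly 17/9 (about 2.76 dB), which quantifies how the low-power symbols degrade phase tracking relative to a constant-envelope constellation.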

  7. Dynamics of coupled simplest chaotic two-component electronic circuits and its potential application to random bit generation

    SciTech Connect

    Modeste Nguimdo, Romain; Tchitnga, Robert; Woafo, Paul

    2013-12-15

    We numerically investigate the possibility of using coupling to increase the complexity of the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that the complex behaviors generated in such coupled systems, together with post-processing, are suitable for generating bit streams that pass all the NIST tests for randomness. The electronic circuit is built by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such a coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) from such chaotic signals, each point being simultaneously converted into 16 bits (or 8 bits), we find that the binary sequences constructed by keeping the 10 (or 2) least significant bits pass statistical tests of randomness, meaning that bit streams with random properties can be achieved with an overall bit rate of up to 10 × 100 Mb/s = 1 Gb/s (or 2 × 100 Mb/s = 200 Mb/s). Moreover, by varying the bias voltages, we also investigate the parameter range over which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap compared to optical and electro-optical systems.
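The bit-extraction step described above can be sketched as follows: digitize each sample to 16 bits and keep only the 10 least significant bits, so each 100 MS/s sample yields 10 bits (1 Gb/s aggregate). A logistic map stands in for the chaotic circuit here; the function name and scaling are illustrative.

```python
import numpy as np

def extract_lsb_bits(samples, adc_bits=16, keep_lsbs=10):
    """Quantize samples to adc_bits and keep only the keep_lsbs low bits."""
    lo, hi = samples.min(), samples.max()
    codes = np.round((samples - lo) / (hi - lo) * (2**adc_bits - 1)).astype(np.uint32)
    mask = (1 << keep_lsbs) - 1
    return "".join(format(c & mask, f"0{keep_lsbs}b") for c in codes)

x, samples = 0.37, []
for _ in range(1000):                 # logistic map as a surrogate chaotic waveform
    x = 3.99 * x * (1 - x)
    samples.append(x)

bits = extract_lsb_bits(np.array(samples))
print(len(bits))                      # 1000 samples x 10 bits = 10000 bits
```

Discarding the high-order bits removes the slowly varying, predictable part of the waveform; the retained LSBs are the ones that must then pass the NIST statistical test suite.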

  9. A 27-mW 10-bit 125-MSPS charge domain pipelined ADC with a PVT insensitive boosted charge transfer circuit

    NASA Astrophysics Data System (ADS)

    Zhenhai, Chen; Songren, Huang; Hong, Zhang; Zongguang, Yu; Huicai, Ji

    2013-03-01

    A low power 10-bit 125-MSPS charge-domain (CD) pipelined analog-to-digital converter (ADC) based on MOS bucket-brigade devices (BBDs) is presented. A PVT-insensitive boosted charge transfer (BCT) circuit that is able to reject the charge error induced by PVT variations is proposed. With the proposed BCT, the common-mode charge control circuit can be eliminated from the CD pipelined ADC and the system complexity is reduced remarkably. The prototype ADC based on the proposed BCT is realized in a 0.18 μm CMOS process, with a power consumption of only 27 mW at a 1.8-V supply and an active die area of 1.04 mm². The prototype ADC achieves a spurious-free dynamic range (SFDR) of 67.7 dB, a signal-to-noise-and-distortion ratio (SNDR) of 57.3 dB, and an effective number of bits (ENOB) of 9.0 for a 3.79 MHz input at the full sampling rate. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are +0.5/-0.3 LSB and +0.7/-0.55 LSB, respectively.

  10. Cosmic Ray Induced Bit-Flipping Experiment

    NASA Astrophysics Data System (ADS)

    Callaghan, Edward; Parsons, Matthew

    2015-04-01

    CRIBFLEX is a novel approach to mid-altitude observational particle physics intended to correlate the phenomenon of semiconductor bit-flipping with cosmic ray activity. A weather balloon carries a Geiger counter and DRAM memory to various altitudes; the data collected will contribute to the development of memory device protection. We present current progress toward initial flight and data acquisition. This work is supported by a Society of Physics Students Chapter Research Award.

  11. Performance of 1D quantum cellular automata in the presence of error

    NASA Astrophysics Data System (ADS)

    McNally, Douglas M.; Clemens, James P.

    2016-09-01

    This work expands a previous block-partitioned quantum cellular automata (BQCA) model proposed by Brennen and Williams [Phys. Rev. A. 68, 042311 (2003)] to incorporate physically realistic error models. These include timing errors in the form of over- and under-rotations of quantum states during computational gate sequences, stochastic phase and bit flip errors, as well as undesired two-bit interactions occurring during single-bit gate portions of an update sequence. A compensation method to counteract the undesired pairwise interactions is proposed and investigated. Each of these error models is implemented using Monte Carlo simulations for stochastic errors and modifications to the prescribed gate sequences to account for coherent over-rotations. The impact of these various errors on the function of a QCA gate sequence is evaluated using the fidelity of the final state calculated for four quantum information processing protocols of interest: state transfer, state swap, GHZ state generation, and entangled pair generation.
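The effect of a coherent over-rotation on a single gate can be seen directly in state fidelity. The minimal single-qubit sketch below (not the BQCA update sequence itself) applies Rx(π + ε) in place of an intended Rx(π) bit flip; the fidelity of the resulting state against the ideal one is cos²(ε/2).

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def overrotation_fidelity(eps, psi0=np.array([1.0, 0.0])):
    """Fidelity of an over-rotated bit flip Rx(pi + eps) against ideal Rx(pi)."""
    ideal = rx(np.pi) @ psi0           # intended bit flip
    noisy = rx(np.pi + eps) @ psi0     # timing error: over-rotation by eps
    return abs(np.vdot(ideal, noisy)) ** 2

for eps in (0.0, 0.05, 0.2):
    print(f"eps={eps:4.2f}  fidelity={overrotation_fidelity(eps):.6f}")
# analytically, the fidelity equals cos^2(eps/2)
```

The quadratic falloff for small ε is why modest timing errors accumulate slowly over a gate sequence, while stochastic bit/phase flips degrade fidelity linearly per event.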

  12. A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures

    NASA Astrophysics Data System (ADS)

    Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran

    2007-02-01

    H.264's extensive use of context-based adaptive binary arithmetic or variable-length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed-length resynchronisation point; in the worst case, an entire slice is lost. In cases where retransmission and forward error correction are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock, which, if aborted, can provide unused processor cycles; these can instead be used to conceal errors at minimal cost, even as part of a real-time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared with the commonly used weighted pixel value averaging. Furthermore, it is also computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.

  13. Practical scheme for error control using feedback

    SciTech Connect

    Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene; Jacobs, Kurt

    2004-05-01

    We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al. Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.

  14. Physical Roots of It from Bit

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    2003-04-01

    Why is there Something rather than Nothing? From Pythagoras ("everything is number") to Wheeler ("it from bit"), the theme of ultimate origin stresses the primordiality of the Ideal Platonic World (IPW) of mathematics. Even the popular "quantum tunnelling out of nothing" can specify "nothing" only as (essentially) the IPW. The IPW exists everywhere (but nowhere in particular) and logically precedes space, time, matter or any "physics" in any conceivable universe. This leads to the propositional conjecture (axiom?) that the (meta)physical "Platonic Pressure" of the infinitude of numbers acts as the engine for the self-generation of the physical universe directly out of mathematics: cosmogenesis is driven by the very fact of IPW inexhaustibility. While physics in other quantum branches of the inflating universe (Megaverse) can be (arbitrarily) different from ours, number theory (and the rest of the IPW) is not: it is unique, absolute, immutable and infinitely resourceful. Let the (infinite) totality of microstates ("its") of the entire Megaverse form a countable set. Since countable sets are hierarchically inexhaustible (Cantor's "fractal branching"), each single "it" still has an infinite tail of non-overlapping IPW-based "personal labels". Thus, each "bit" ("it") is infinitely and uniquely resourceful: a possible venue for eliminating the ergodicity basis of the eternal-return cosmological argument. Physics (in any subuniverse) may be limited only by inherent impossibilities residing in the IPW; e.g., the unsolvability of the Continuum Problem may be the IPW foundation of quantum indeterminacy.

  15. Object tracking based on bit-planes

    NASA Astrophysics Data System (ADS)

    Li, Na; Zhao, Xiangmo; Liu, Ying; Li, Daxiang; Wu, Shiqian; Zhao, Feng

    2016-01-01

    Visual object tracking is one of the most important components in computer vision. The main challenge for robust tracking is to handle illumination change, appearance modification, occlusion, motion blur, and pose variation. But in surveillance videos, factors such as low resolution, high levels of noise, and uneven illumination further increase the difficulty of tracking. To tackle this problem, an object tracking algorithm based on bit-planes is proposed. First, intensity and local binary pattern features represented by bit-planes are used to build two appearance models, respectively. Second, in the neighborhood of the estimated object location, a region that is most similar to the models is detected as the tracked object in the current frame. In the last step, the appearance models are updated with new tracking results in order to deal with environmental and object changes. Experimental results on several challenging video sequences demonstrate the superior performance of our tracker compared with six state-of-the-art tracking algorithms. Additionally, our tracker is more robust to low resolution, uneven illumination, and noisy video sequences.

  16. A fast rise-rate, adjustable-mass-bit gas puff valve for energetic pulsed plasma experiments.

    PubMed

    Loebner, Keith T K; Underwood, Thomas C; Cappelli, Mark A

    2015-06-01

    A fast rise-rate, variable mass-bit gas puff valve based on the diamagnetic repulsion principle was designed, built, and experimentally characterized. The ability to hold the pressure rise-rate nearly constant while varying the total overall mass bit was achieved via a movable mechanical restrictor that is accessible while the valve is assembled and pressurized. The rise-rates and mass-bits were measured via piezoelectric pressure transducers for plenum pressures between 10 and 40 psig and restrictor positions of 0.02-1.33 cm from the bottom of the linear restrictor travel. The mass-bits were found to vary linearly with the restrictor position at a given plenum pressure, while rise-rates varied linearly with plenum pressure but exhibited low variation over the range of possible restrictor positions. The ability to change the operating regime of a pulsed coaxial plasma deflagration accelerator by means of altering the valve parameters is demonstrated. PMID:26133835

  20. The error performance analysis over cyclic redundancy check codes

    NASA Astrophysics Data System (ADS)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions, which occur at high error rates, last for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern, which is impossible in practice on working systems. Therefore, under memoryless binary symmetric channel (MBSC) assumptions, performance evaluation and estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of a burst error is introduced using three different models; among them, the mathematical model is used in this study. A probability density function f(b) for burst errors of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within CRC blocks. The simulation results show that the mean block burst error tends to approach the pattern of burst errors generated by random bit errors.
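The property CRC-n codes exploit for burst detection is that a degree-n generator leaves a nonzero remainder for every single error burst of length ≤ n. The sketch below demonstrates this by brute force with the CRC-16-IBM generator (chosen for illustration; the study's CRC-n codes may use other polynomials), exercising all burst patterns up to length 12 to keep the runtime small.

```python
# Bitwise long division over GF(2); generator x^16 + x^15 + x^2 + 1 (0x18005).
def crc_remainder(bits, nbits, poly=0x18005, width=16):
    reg = bits << width                      # append `width` zero bits
    for i in range(nbits + width - 1, width - 1, -1):
        if (reg >> i) & 1:
            reg ^= poly << (i - width)       # cancel the leading set bit
    return reg & ((1 << width) - 1)

msg, n = 0b1101011011010101, 16
codeword = (msg << 16) | crc_remainder(msg, n)   # message followed by its CRC
assert crc_remainder(codeword, n + 16) == 0      # error-free codeword checks out

def bursts(max_len):
    """All burst patterns up to max_len (first and last bit of the burst set)."""
    yield 1
    for length in range(2, max_len + 1):
        for inner in range(1 << (length - 2)):
            yield (1 << (length - 1)) | (inner << 1) | 1

detected = all(
    crc_remainder(codeword ^ (b << s), n + 16) != 0
    for b in bursts(12)                          # lengths capped for runtime
    for s in range(n + 16 - b.bit_length() + 1)  # every burst position
)
print(detected)  # -> True
```

The same argument guarantees detection up to length 16: a burst is xˢ·p(x) with deg p < 16, and a nonzero polynomial of degree below the generator's cannot be a multiple of it.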

  1. Progress in the Advanced Synthetic-Diamond Drill Bit Program

    SciTech Connect

    Glowka, D.A.; Dennis, T.; Le, Phi; Cohen, J.; Chow, J.

    1995-11-01

    Cooperative research is currently underway among five drill bit companies and Sandia National Laboratories to improve synthetic-diamond drill bits for hard-rock applications. This work, sponsored by the US Department of Energy and individual bit companies, is aimed at improving performance and bit life in harder rock than has previously been possible to drill effectively with synthetic-diamond drill bits. The goal is to extend to harder rocks the economic advantages seen in using synthetic-diamond drill bits in soft and medium rock formations. Four projects are being conducted under this research program. Each project is investigating a different area of synthetic diamond bit technology that builds on the current technology base and market interests of the individual companies involved. These projects include: optimization of the PDC claw cutter; optimization of the Track-Set PDC bit; advanced TSP bit development; and optimization of impregnated-diamond drill bits. This paper describes the progress made in each of these projects to date.

  2. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, although autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the onset of the clinical picture may vary from the newborn period up until adulthood. Hundreds of disorders have been described to date, and there is considerable clinical overlap among certain inborn errors. As a result, the definitive diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially during recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These achievements have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is obvious that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  3. Prediction of Error and Error Type in Computation of Sixth Grade Mathematics Students.

    ERIC Educational Resources Information Center

    Baxter, Marion McComb

    The study of computational errors among sixth grade students included identification and classification of errors, investigation of the effects of two feedback treatments and of classwork and homework on error patterns, and investigation of the relationships of error patterns with intelligence, mathematics achievement, attitudes toward…

  4. Asymmetric soft-error resistant memory

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)

    1991-01-01

    A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, which more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.

  5. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
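To illustrate why independence cannot be assumed, the sketch below builds a normal-approximation confidence interval from the first and second sample moments of the per-codeword bit-error counts, and compares it with the (inappropriately narrow) interval obtained by pretending bit errors are i.i.d. The surrogate decoder model and all parameters here are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 1024, 5000                 # bits per codeword, number of simulated codewords

# surrogate coded link: most codewords decode cleanly, a few fail in error bursts
fails = rng.random(M) < 0.02
x = np.where(fails, rng.poisson(30.0, M), 0)    # bit errors per codeword

ber_hat = x.mean() / n                          # first-moment BER estimate
s = x.std(ddof=1)                               # from the second sample moment
z = 1.96                                        # 95% confidence level
half = z * s / (n * np.sqrt(M))                 # moment-based half-width
naive = z * np.sqrt(ber_hat * (1 - ber_hat) / (n * M))  # assumes i.i.d. bits

print(f"BER = {ber_hat:.2e} +/- {half:.2e} (naive i.i.d. +/- {naive:.2e})")
```

Because the bit errors cluster within failed codewords, the moment-based interval is several times wider than the naive binomial one; the naive interval would overstate the confidence in the measured BER.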

  6. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    PubMed

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffers from either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which bit-rate is allocated to match the available network bandwidth is necessitated. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach that is based on the cerebellar-model-articulation-controller (CMAC) through artificial neural networks to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76

  7. Fixed-quality/variable bit-rate on-board image compression for future CNES missions

    NASA Astrophysics Data System (ADS)

    Camarero, Roberto; Delaunay, Xavier; Thiebaut, Carole

    2012-10-01

    The huge improvements in resolution and dynamic range of current [1][2] and future CNES remote sensing missions (from 5 m/2.5 m in Spot5 to 70 cm in Pleiades) illustrate the increasing need for efficient on-board image compressors. Many techniques have been considered by CNES during the last years in order to go beyond usual compression ratios: new image transforms or post-transforms [3][4], exceptional processing [5], selective compression [6]. However, even if significant improvements have been obtained, none of those techniques has ever challenged an essential drawback of current on-board compression schemes: fixed rate (or compression ratio). This classical assumption provides highly predictable data volumes that simplify storage and transmission. On the other hand, it demands that every image segment (strip) of the scene be compressed within the same amount of data. Therefore, this fixed bit-rate is dimensioned on worst-case assessments to guarantee the quality requirements in all areas of the image. This is obviously not the most economical way of achieving the required image quality for every single segment. Thus, CNES has started a study to re-use existing compressors [7] in a fixed-quality/variable bit-rate mode. The main idea is to compute a local complexity metric in order to assign the optimum bit-rate to comply with quality requirements. Consequently, complex areas are compressed less than simple ones, offering better image quality for an equivalent global bit-rate. The "near-lossless bit-rate" of image segments has proved to be an efficient image complexity estimator. It links quality criteria and bit-rates through a single theoretical relationship. Compression parameters are thus automatically computed in accordance with the quality requirements. In addition, this complexity estimator could be implemented in a one-pass compression and truncation scheme.

  8. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    PubMed

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is a key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffer from network congestion or unstable bandwidth; evidence of these problems abounds in publications on wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process that allocates bit-rate to match the available network bandwidth is needed; this is accomplished by a rate control scheme. This paper presents a new motion detection approach, based on the cerebellar model articulation controller (CMAC) artificial neural network, that completely and accurately detects moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To accommodate the properties of variable bit-rate video streams, the PBG module produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach performs with higher efficacy than other state-of-the-art approaches on variable bit-rate video streams over real-world limited-bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76
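The background-model/detection split described above can be illustrated with a far simpler stand-in (a hedged sketch; the paper's actual method uses a CMAC neural network and a probabilistic model, neither of which is reproduced here): a running-average background absorbs slow, bit-rate-induced fluctuation, and pixels that deviate strongly from it are flagged as moving.

```python
# Minimal background-subtraction sketch in the spirit of the PBG/MOD
# split. Pixels are plain intensity values for illustration.

def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the background model (learning rate alpha)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def detect_moving(bg, frame, thresh=30):
    """Flag pixels whose deviation from the model exceeds the threshold."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]
frame = [102.0, 180.0, 99.0]      # only the middle pixel changed strongly
mask = detect_moving(bg, frame)
bg = update_background(bg, frame)
```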

  9. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve good bit-error and frame-error performance. The outer code decoder helps the inner turbo code decoder terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. If the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
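The decoder interaction described above is essentially a control loop. The sketch below shows only that control flow, with both decoders stubbed out (the function names and stubs are illustrative; the actual turbo and Reed-Solomon decoding internals are not modeled):

```python
# Outer/inner decoder interaction: turbo iterations continue until the
# outer (RS) decoder succeeds or a preset iteration limit is reached.

def concatenated_decode(inner_iterate, outer_decode, max_iters=8):
    """Run turbo iterations until the outer decoder reports success."""
    soft_info = None
    for i in range(1, max_iters + 1):
        soft_info = inner_iterate(soft_info)   # one more turbo iteration
        ok, data = outer_decode(soft_info)     # reliability-based outer decoding
        if ok:                                 # success terminates the loop early
            return data, i
    return None, max_iters                     # outer decoding never succeeded

# Stub decoders: the outer decoder succeeds once 3 inner iterations ran.
iteration = [0]
def inner(soft):
    iteration[0] += 1
    return iteration[0]
def outer(soft):
    return (soft >= 3, "payload")

data, iters = concatenated_decode(inner, outer)
```

The early return is exactly the delay reduction the abstract mentions: easy frames stop after few turbo iterations, and only hard frames run to the limit.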

  10. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1995-01-01

    This report focuses on the results obtained during the PI's recent sabbatical leave at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, from January 1, 1995 through June 30, 1995. Two projects investigated various properties of TURBO codes, a new form of concatenated coding that achieves near channel capacity performance at moderate bit error rates. The performance of TURBO codes is explained in terms of the code's distance spectrum. These results explain both the near capacity performance of the TURBO codes and the observed 'error floor' for moderate and high signal-to-noise ratios (SNR's). A semester project, entitled 'The Realization of the Turbo-Coding System,' involved a thorough simulation study of the performance of TURBO codes and verified the results claimed by previous authors. A copy of the final report for this project is included as Appendix A. A diploma project, entitled 'On the Free Distance of Turbo Codes and Related Product Codes,' includes an analysis of TURBO codes and an explanation for their remarkable performance. A copy of the final report for this project is included as Appendix B.

  11. New Bits-to-Symbol Mapping for 32 APSK over Nonlinear Satellite Channels

    NASA Astrophysics Data System (ADS)

    Lee, Jaeyoon; Yoon, Dongweon; Park, Sang Kyu

    A 4+12+16 amplitude phase shift keying (APSK) modulation outperforms other 32-ary modulations, such as rectangular or cross 32-ary quadrature amplitude modulation (QAM), which have a high peak-to-average power ratio that causes non-negligible AM/AM and AM/PM distortions when the signal is amplified by a high-power amplifier (HPA). This modulation scheme has therefore been recommended as a standard in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) system. In this letter, we present a new bits-to-symbol mapping with a better bit error rate (BER) for a 4+12+16 APSK signal in a nonlinear satellite channel.
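Why a bits-to-symbol mapping changes BER at all can be made concrete with a toy metric (an illustration of the general principle, not the letter's derivation): at high SNR, symbol errors land on nearest-neighbor constellation points, so a good labeling minimizes the average Hamming distance between nearest-neighbor bit labels. A 1-D 4-PAM example, where Gray labeling is known to be optimal:

```python
# Score a labeling by the mean number of bit flips a nearest-neighbor
# symbol error causes. Lower is better for high-SNR BER.

def hamming(a, b):
    return bin(a ^ b).count("1")

def avg_nn_hamming(points, labels):
    """Average Hamming distance between each symbol and its nearest neighbor."""
    total = 0
    for i, p in enumerate(points):
        d, j = min((abs(p - q), k) for k, q in enumerate(points) if k != i)
        total += hamming(labels[i], labels[j])
    return total / len(points)

pam = [-3, -1, 1, 3]                                   # 4-PAM amplitudes
gray = avg_nn_hamming(pam, [0b00, 0b01, 0b11, 0b10])   # Gray labeling
natural = avg_nn_hamming(pam, [0b00, 0b01, 0b10, 0b11])  # natural binary
```

For 32-APSK the same idea applies in two dimensions, which is why searching over labelings (as in the letter) pays off.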

  12. Temperature-compensated 8-bit column driver for AMLCD

    NASA Astrophysics Data System (ADS)

    Dingwall, Andrew G. F.; Lin, Mark L.

    1995-06-01

    An all-digital, 5 V input, 50 MHz bandwidth, 10-bit resolution, 128-column AMLCD column driver IC has been designed and tested. The 10-bit design can enhance display definition over 6-bit and 8-bit column drivers. Precision is realized with on-chip switched-capacitor DACs plus transparently auto-offset-calibrated op-amp outputs. Increased resolution permits multiple 10-bit digital gamma remappings in EPROMs over temperature. Driver IC features include an externally programmable number of output columns, bi-directional digital data shifting, user-defined row/column/pixel/frame inversion, power management, timing control for daisy-chained column drivers, and digital bit inversion. The architecture uses fewer reference power supplies.

  13. Proposed first-generation WSQ bit allocation procedure

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-09-08

    The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. Previous work provided a specific formula for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands, but no explicit specification was given for the constant of proportionality, q, that determines the absolute bin widths. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image, depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival-quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.
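The variance-driven bin-width idea can be sketched as follows. This is a hedged illustration only: the WSQ specification's exact weighting formula is not reproduced, and the inverse-standard-deviation model and constants here are toy assumptions. What it shows is the structure of the scheme: relative widths come from subband statistics, while one global factor q (driven by the user parameter r) scales them all.

```python
import math

# Toy variance-driven bin-width selection: high-variance subbands get
# narrow bins (more bits), low-variance subbands get wide bins, and a
# single global factor q scales every width to set the overall rate.

def bin_widths(subband_variances, q):
    """One scalar-quantizer bin width per subband."""
    return [q / math.sqrt(v) if v > 0 else float("inf")
            for v in subband_variances]

widths = bin_widths([100.0, 25.0, 4.0], q=2.0)
```

Raising q widens every bin uniformly, trading rate for distortion, which is exactly the knob the report's parameter r controls at the user level.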

  14. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in work processes as complex as those in medicine, which often involve time pressure. We therefore need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human-factors engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we find that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures, so human failure is pre-programmed. In addition, many medical workplaces are a somewhat chaotic accumulation of individual devices with entirely different user interaction concepts. There is a lack not only of holistic workplace concepts, but of holistic process and system concepts as well. This can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and an organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model: a key to a safe and efficient healthcare system of the future. PMID:19213452

  15. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and a modified version of it are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.

  16. Cooling system for cooling the bits of a cutting machine

    SciTech Connect

    Wrulich, H.; Gekle, S.; Schetina, O.; Zitz, A.

    1984-06-26

    The invention refers to a system for cooling the bits of a cutting machine, comprising a nozzle arranged in the area of the bit from which the cooling water is ejected under pressure. The water supply to the nozzle can be closed by means of a shutoff valve. The bit is supported on the bit holder for limited axial shifting movement under the action of the cutting pressure, against the force of a spring and against the hydraulic pressure of the cooling water, and the shutoff valve is coupled with the bit by means of a coupling member such that the valve is opened when the bit shifts in the direction of the cutting pressure. In this system, the bit (6) has, in a manner known per se, the shape of a cap enclosing a bit shaft (3) adapted to be inserted into the bit holder (1); the cap-shaped bit (6) is supported on the shaft (3) for shifting movement in the axial direction; and the shutoff valve (11) and the coupling member (10) are arranged within the bit shaft (3). The coupling member is formed of a push rod (10) acting on the closure member (11) of the valve, said push rod being guided within a central bore (9) of the bit shaft; the closure member (11) closes the valve in the direction opposite to the action of the cutting pressure and is moved into the open position by the push rod (10) in the direction of the acting cutting pressure.

  17. Development and testing of a Mudjet-augmented PDC bit.

    SciTech Connect

    Black, Alan; Chahine, Georges; Raymond, David Wayne; Matthews, Oliver; Grossman, James W.; Bertagnolli, Ken (US Synthetic); Vail, Michael

    2006-01-01

    This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.

  18. Markov speckle for efficient random bit generation.

    PubMed

    Horstmeyer, Roarke; Chen, Richard Y; Judkewitz, Benjamin; Yang, Changhuei

    2012-11-19

    Optical speckle is commonly observed in measurements using coherent radiation. Previous work has often assumed, without experimental validation, that speckle's random spatial pattern follows a Markov process. Here, we present a derivation and experimental confirmation of conditions under which this assumption holds true. We demonstrate that a detected speckle field can be designed to obey the first-order Markov property by using a Cauchy attenuation mask to modulate scattered light. Creating Markov speckle enables the development of more accurate and efficient image post-processing algorithms, with applications including improved de-noising, segmentation and super-resolution. To show its versatility, we use the Cauchy mask to maximize the entropy of a detected speckle field with fixed average speckle size, allowing cryptographic applications to extract a maximum number of useful random bits from speckle images.
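The final bit-extraction step can be sketched generically (a hedged illustration, not the paper's method: the Cauchy-mask entropy maximization is a physical design choice, whereas here a median threshold plus a von Neumann extractor stands in for turning a speckle intensity pattern into unbiased bits):

```python
# Turn speckle intensities into random bits: threshold at the median,
# then debias the raw stream with a von Neumann extractor.

def speckle_to_bits(pixels):
    """Threshold intensities at the median to get raw (possibly biased) bits."""
    med = sorted(pixels)[len(pixels) // 2]
    return [1 if p > med else 0 for p in pixels]

def von_neumann(bits):
    """Keep the first bit of each unequal pair; drop equal pairs."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = speckle_to_bits([5, 9, 2, 8, 7, 1, 6, 3])   # toy "speckle" intensities
rnd = von_neumann(raw)
```

The von Neumann step discards correlated/equal pairs, which is why higher-entropy speckle (the paper's goal) yields more surviving bits per image.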

  19. Single Abrikosov vortices as quantized information bits.

    PubMed

    Golod, T; Iovan, A; Krasnov, V M

    2015-10-12

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.

  20. Bit-commitment-based quantum coin flipping

    SciTech Connect

    Nayak, Ashwin; Shor, Peter

    2003-01-01

    In this paper we focus on a special framework for quantum coin-flipping protocols, bit-commitment-based protocols, within which almost all known protocols fit. We show a lower bound of 1/16 for the bias in any such protocol. We also analyze a sequence of multiround protocols that tries to overcome the drawbacks of the previously proposed protocols in order to lower the bias. We show an intricate cheating strategy for this sequence, which leads to a bias of 1/4. This indicates that a bias of 1/4 might be optimal in such protocols, and also demonstrates that a more clever proof technique may be required to show this optimality.

  1. Second quantization in bit-string physics

    NASA Technical Reports Server (NTRS)

    Noyes, H. Pierre

    1993-01-01

    Using a new fundamental theory based on bit-strings, we derive a finite and discrete version of the solutions of the free one-particle Dirac equation as segmented trajectories with steps of length h/mc along the forward and backward light cones, executed at velocity +/- c. Interpreting the statistical fluctuations that cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second-quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second-quantized theory. We sketch how these free-particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2(sup 127) + 136), along with some of the predictive consequences.

  2. Single Abrikosov vortices as quantized information bits

    NASA Astrophysics Data System (ADS)

    Golod, T.; Iovan, A.; Krasnov, V. M.

    2015-10-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.

  3. Very low bit rate video coding standards

    NASA Astrophysics Data System (ADS)

    Zhang, Ya-Qin

    1995-04-01

    Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, ITU-T/SG15 has formed an expert group on low bit-rate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: near-term and long-term. The near-term standard, H.32P/N, based on existing compression technologies, mainly addresses issues related to visual telephony below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard, H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. ISO/SC29/WG11, after its highly visible and successful MPEG 1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard, MPEG 4. With its recent change of direction, MPEG 4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these ongoing standards activities undertaken by ITU-T/LBC and ISO/MPEG 4 as of December 1994.

  4. New PDC bit design increased penetration rate in slim wells

    SciTech Connect

    Gerbaud, L.; Sellami, H.; Lamine, E.; Sagot, A.

    1997-07-01

    This paper describes a slim hole bit design developed at the Paris School of Mines and Security DBS. The design is a compromise among several criteria, such as drilling efficiency, uniform wear distribution around the bit face and a low level of bit vibration, according to the hole diameter and the formation characteristics. Two new bits were manufactured and run successfully on a full-scale drilling test bench and in a field test in Gabon. The results show improved drilling performance in slim hole applications.

  5. Microstructural Evolution of DP980 Steel during Friction Bit Joining

    NASA Astrophysics Data System (ADS)

    Huang, T.; Sato, Y. S.; Kokawa, H.; Miles, M. P.; Kohkonen, K.; Siemssen, B.; Steel, R. J.; Packer, S.

    2009-12-01

    The authors study a new solid-state spot joining process, friction bit joining (FBJ), which relies on the use of a consumable joining bit. It has been reported that FBJ is feasible for the joining of steel/steel and aluminum/steel, but the metallurgical characteristics of the joint for enhancement of the properties and reliability remain unclear. Therefore, this study produced friction bit joints in DP980 steel and then examined the microstructures in the joint precisely. In this article, the microstructure distribution associated with hardness in the friction-bit-joined DP980 steel and the microstructural evolution during FBJ are reported.

  6. Quantum bit commitment with cheat sensitive binding and approximate sealing

    NASA Astrophysics Data System (ADS)

    Li, Yan-Bing; Xu, Sheng-Wei; Huang, Wei; Wan, Zong-Jie

    2015-04-01

    This paper proposes a cheat-sensitive quantum bit commitment scheme based on single photons, in which Alice commits a bit to Bob. Bob's probability of successfully cheating (obtaining the committed bit before the opening phase) approaches 1/2 (no better than a random guess) as the number of single photons used is increased. And if Alice alters her committed bit after the commitment phase, her cheating will be detected with a probability that approaches 1 as the number of single photons used is increased. The scheme is easy to realize with present-day technology.

  7. PDC (polycrystalline diamond compact) bit research at Sandia National Laboratories

    SciTech Connect

    Finger, J.T.; Glowka, D.A.

    1989-06-01

    From the beginning of the geothermal development program, Sandia has performed and supported research into polycrystalline diamond compact (PDC) bits. These bits are attractive because they are intrinsically efficient in their cutting action (shearing, rather than crushing) and they have no moving parts (eliminating the problems of high-temperature lubricants, bearings, and seals). This report is a summary description of the analytical and experimental work done by Sandia and our contractors. It describes analysis and laboratory tests of individual cutters and complete bits, as well as full-scale field tests of prototype and commercial bits. The report includes a bibliography of documents giving more detailed information on these topics. 26 refs.

  8. Computational Errors of Mentally Retarded Students.

    ERIC Educational Resources Information Center

    Janke, Robert W.

    1980-01-01

    Examined computational errors made by educable mentally retarded students on the arithmetic subtest of the Wide Range Achievement Test. Retarded students had a lower percent of grouping and inappropriate inversion errors and a higher percent of incorrect operation errors than regular students had in Engelhardt's study. (Author)

  9. Relative error covariance analysis techniques and application

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Williams, Bobby G.

    1988-01-01

    A technique for computing the error covariance of the difference between two estimators derived from different (possibly overlapping) data arcs is presented. The relative error covariance is useful for predicting the achievable consistency between Kalman-Bucy filtered estimates generated from two (not necessarily disjoint) data sets. The relative error covariance analysis technique is then applied to a Venus Orbiter simulation.
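The relative error covariance can be written out concretely. For estimation errors e1 and e2 with covariances P1 and P2 and cross-covariance P12, the difference e1 - e2 has covariance P_rel = P1 + P2 - P12 - P12^T (a standard identity used here for illustration; the abstract does not state its formula). A worked sketch with plain Python lists:

```python
# Covariance of the difference of two estimators. When the estimates are
# strongly correlated (large P12), the relative covariance is much
# smaller than either individual covariance, i.e. the two filters agree.

def relative_covariance(P1, P2, P12):
    """P_rel = P1 + P2 - P12 - P12^T, elementwise over n x n matrices."""
    n = len(P1)
    return [[P1[i][j] + P2[i][j] - P12[i][j] - P12[j][i]
             for j in range(n)] for i in range(n)]

P1 = [[4.0, 0.0], [0.0, 4.0]]          # covariance of estimator 1
P2 = [[1.0, 0.0], [0.0, 1.0]]          # covariance of estimator 2
P12 = [[2.0, 0.0], [0.0, 2.0]]         # cross-covariance (correlated data arcs)
P_rel = relative_covariance(P1, P2, P12)
```

With overlapping data arcs the cross term is large, and P_rel here is smaller than P1: the two filtered estimates are more consistent with each other than either is with the truth.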

  10. Injecting Errors for Testing Built-In Test Software

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) to perform the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers
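The first algorithm's intercept-and-AND step can be sketched in a few lines (names and mask value are hypothetical; the real instrumentation operates on device reads in the target software, not on a Python function):

```python
# Error injection by masking: when injection is enabled, data read from
# the "device" is ANDed with a device-specific mask, producing a value
# the BIT pass/fail check does not expect.

DEVICE_ERROR_MASK = 0xFF00             # hypothetical mask: clears the low byte

def read_device(raw_value, inject=False):
    """Return the device value, optionally corrupted by the AND mask."""
    return raw_value & DEVICE_ERROR_MASK if inject else raw_value

def bit_check(value, expected):
    """A BIT pass/fail criterion on the value read back."""
    return value == expected

ok = bit_check(read_device(0x1234), 0x1234)                  # no injection: pass
fail = bit_check(read_device(0x1234, inject=True), 0x1234)   # injected: fail
```

The point of the AND (rather than overwriting with a constant) is that the corruption stays data-dependent, exercising the BIT routine's comparison logic on realistic-looking bad values.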

  11. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog to digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic algorithm dependent errors in the results. Strategies to improve densitometer performance are suggested.

  12. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the 'worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the 'future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
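For readers unfamiliar with the mechanism being modified, here is a baseline single-pass, 1-D error-diffusion sketch (the paper's contribution, the backward second pass per scanline, is described only in prose above and is not reproduced here):

```python
# Standard forward-only error diffusion on one scanline: threshold each
# pixel and push its full quantization error onto the next pixel.

def diffuse_line(gray):
    """Binarize a scanline of values in [0, 1], preserving local density."""
    vals = list(gray)
    bits = []
    for i, v in enumerate(vals):
        b = 1 if v >= 0.5 else 0
        bits.append(b)
        if i + 1 < len(vals):
            vals[i + 1] += v - b       # propagate quantization error forward
    return bits

# A flat 25% gray line halftones to exactly 25% ink: density is preserved.
bits = diffuse_line([0.25] * 8)
density = sum(bits) / len(bits)
```

Because this pass pushes error only rightward, structured 'worm' patterns can form; the paper's forward-plus-backward scheme symmetrizes the error distribution to suppress them.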

  13. BitPredator: A Discovery Algorithm for BitTorrent Initial Seeders and Peers

    SciTech Connect

    Borges, Raymond; Patton, Robert M; Kettani, Houssain; Masalmah, Yahya

    2011-01-01

    There is a large amount of illegal content being replicated through peer-to-peer (P2P) networks where BitTorrent is dominant; therefore, a framework to profile and police it is needed. The goal of this work is to explore the behavior of initial seeds and highly active peers to develop techniques to correctly identify them. We intend to establish a new methodology and software framework for profiling BitTorrent peers. This involves three steps: crawling torrent indexers for keywords in recently added torrents using Really Simple Syndication protocol (RSS), querying torrent trackers for peer list data and verifying Internet Protocol (IP) addresses from peer lists. We verify IPs using active monitoring methods. Peer behavior is evaluated and modeled using bitfield message responses. We also design a tool to profile worldwide file distribution by mapping IP-to-geolocation and linking to WHOIS server information in Google Earth.

  14. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    PubMed Central

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-01-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
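The noise claim above (noise falls as one over the square root of the number of samplings) is just the statistics of averaging independent samples, and can be checked numerically (a toy simulation, not the sensor's actual signal chain):

```python
import random, math

# Averaging N noisy readings of the same pixel reduces RMS noise by
# roughly 1/sqrt(N) when the noise is independent between samples.

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

random.seed(0)
signal, sigma, n_samp, trials = 100.0, 8.0, 16, 2000

# Residual error of a single reading vs. an average of n_samp readings.
single = [random.gauss(signal, sigma) - signal for _ in range(trials)]
averaged = [sum(random.gauss(signal, sigma) for _ in range(n_samp)) / n_samp - signal
            for _ in range(trials)]
ratio = rms(single) / rms(averaged)    # close to sqrt(16) = 4
```

This is why the paper's measured random noise drops from 848.3 μV to 270.4 μV, roughly the expected square-root factor for its sampling count.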

  15. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs.

    PubMed

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-12-26

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.

  16. Regional bit allocation and rate distortion optimization for multiview depth video coding with view synthesis distortion model.

    PubMed

    Zhang, Yun; Kwong, Sam; Xu, Long; Hu, Sudeng; Jiang, Gangyi; Kuo, C-C Jay

    2013-09-01

    In this paper, we propose a view synthesis distortion model (VSDM) that establishes the relationship between depth distortion and view synthesis distortion for regions with different characteristics: color texture area corresponding depth (CTAD) regions and color smooth area corresponding depth (CSAD) regions, respectively. With this VSDM, we propose regional bit allocation (RBA) and rate distortion optimization (RDO) algorithms for multiview depth video coding (MDVC), allocating more bits to CTAD for rendering quality and fewer bits to CSAD for compression efficiency. Experimental results show that the proposed VSDM-based RBA and RDO significantly improve coding efficiency for the test sequences. In addition, the overall MDVC algorithm that integrates VSDM-based RBA and RDO achieves 9.99% and 14.51% bit rate reduction on average at high and low bit rates, respectively, and improves virtual view image quality by 0.22 dB and 0.24 dB on average at high and low bit rates, respectively, compared with the original joint multiview video coding model. RD performance comparisons using five different metrics also validate the effectiveness of the proposed overall algorithm. In addition, the proposed algorithms can be applied to both INTRA and INTER frames.
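The regional allocation principle can be sketched in miniature (weights, labels, and helper names are illustrative assumptions; the paper derives its allocation from the VSDM rather than from fixed weights):

```python
# Region-weighted bit allocation: depth blocks whose co-located color
# area is textured (CTAD) matter more for view synthesis, so they get a
# larger share of the bit budget than smooth-area (CSAD) blocks.

def allocate_bits(regions, total_bits, w_ctad=3.0, w_csad=1.0):
    """Split the bit budget across blocks in proportion to region weight."""
    weights = [w_ctad if r == "CTAD" else w_csad for r in regions]
    wsum = sum(weights)
    return [total_bits * w / wsum for w in weights]

bits = allocate_bits(["CTAD", "CSAD", "CSAD"], total_bits=1000)
```

The budget stays fixed; only its distribution shifts toward the blocks where depth errors would visibly distort the synthesized view.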

  17. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings; on the other hand, inpatient errors are more severe than outpatient errors. PMID:26649954

  19. Design and implementation of low power clock gated 64-bit ALU on ultra scale FPGA

    NASA Astrophysics Data System (ADS)

    Gupta, Ashutosh; Murgai, Shruti; Gulati, Anmol; Kumar, Pradeep

    2016-03-01

    A 64-bit energy-efficient arithmetic and logic unit (ALU) using a negative-latch-based clock gating technique is designed in this paper. The 64-bit ALU is built from multiplexer-based full adder cells. A negative-latch-based circuit generates the gated clock, which controls the multiplexer-based 64-bit ALU. The circuit has been synthesized for a Kintex FPGA through Xilinx ISE Design Suite 14.7 using 28 nm technology in Verilog HDL and simulated on ModelSim 10.3c. The design is verified using SystemVerilog on QuestaSim in a UVM environment. We achieved 74.07%, 92.93% and 95.53% reduction in total clock power; 89.73%, 91.35% and 92.85% reduction in I/O power; 67.14%, 62.84% and 74.34% reduction in dynamic power; and 25.47%, 29.05% and 46.13% reduction in total supply power at 20 MHz, 200 MHz and 2 GHz, respectively. The power has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite 14.3.

  20. 10-bit segmented current steering DAC in 90nm CMOS technology

    NASA Astrophysics Data System (ADS)

    Bringas, R., Jr.; Dy, F.; Gerasta, O. J.

    2015-06-01

    This special project presents a 10-bit 1 GS/s 1.2 V/3.3 V digital-to-analog converter using 1-poly 9-metal SAED 90 nm CMOS technology, intended for mixed-signal and power IC applications. To achieve maximum performance with minimum area, the DAC has been implemented with 6+4 segmentation. The simulation results show a static performance of ±0.56 LSB INL and ±0.79 LSB DNL with a total layout chip area of 0.683 mm². The segmented architecture is implemented using two sub-DACs for the LSB and MSB sections: a 4-bit binary-weighted DAC for the LSB section and a 6-bit thermometer-coded DAC for the MSB section. The thermometer-coded architecture provides the most optimized linearity by reducing the clock feed-through effect, especially during hot switching between multiple transistors. The binary-weighted architecture gives better linearity at higher frequencies with better saturation in the current sources.
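    The 6+4 segmentation above splits each input code into a thermometer-coded MSB part and a binary LSB part; a minimal sketch (function names are illustrative, and the 63 unary lines follow from 2⁶ − 1):

```python
def segment_code(code: int) -> tuple:
    """Split a 10-bit input into a 6-bit MSB part and a 4-bit LSB part (6+4 segmentation)."""
    assert 0 <= code < 1024
    return code >> 4, code & 0xF

def thermometer(msb: int) -> str:
    """Thermometer-code the 6-bit MSB: msb ones followed by zeros (63 unary lines)."""
    return "1" * msb + "0" * (63 - msb)

msb, lsb = segment_code(0b1011010110)   # example 10-bit code (726)
print(msb, lsb)                         # 45 6
print(thermometer(msb).count("1"))      # 45 unary current sources switched on
```

    The thermometer section guarantees monotonicity across the MSB transitions, while the binary section keeps the decoder small, which is the usual motivation for segmentation.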

  1. Are 16 bits really needed in CCDs and infrared detectors for astronomy?

    NASA Astrophysics Data System (ADS)

    Gago, Fernando; Rodríguez-Ramos, Luis F.; Gigante, José V.; López-Arozena, D.

    2004-09-01

    One of the problems found in the design of electronics for astronomical instruments is the difficulty of finding precise digitizers (16 bits) at high speed. In fact, most chips that claim 16 bits actually have a lower effective number of bits (ENOB), normally around 14, when their noise effects are considered. In this paper, a technique based on auto-adjustable gain amplifiers is proposed as a way to relax the A/D requirements for astronomical CCDs and infrared detectors. The amplifiers automatically toggle between two different gains depending on the pixel value. The technique is based on the fact that, due to the shot (photon) noise of the detectors, the maximum signal-to-noise ratio achievable in most of these devices is relatively low, allowing the use of A/D converters with an ENOB of only 14 (or even 12) bits when combined with auto-adjustable gain amplifiers. It will be shown that the lower resolution of the A/D converters does not affect the accuracy of the science data, even when many images are averaged to compensate for the effects of shot noise. Furthermore, given that many real A/D converters do not reach an ENOB of 16, for low-level signals the accuracy can even be slightly improved with the technique described in this paper. This relaxing of the A/D requirements can also allow the use of off-the-shelf boards for the acquisition systems.
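    The shot-noise argument can be made concrete with the standard relations SNR = √N for a well of N photoelectrons and SNR ≈ 6.02·ENOB + 1.76 dB for an ideal converter (the 100,000 e⁻ full well below is an assumed, typical figure, not one taken from the paper):

```python
import math

def shot_noise_snr_db(n_electrons: float) -> float:
    """Shot-noise-limited SNR of a detector well: SNR = sqrt(N) in electrons."""
    return 20 * math.log10(math.sqrt(n_electrons))

def adc_snr_db(enob: float) -> float:
    """Ideal quantization-limited SNR for a given effective number of bits."""
    return 6.02 * enob + 1.76

well = 100_000  # hypothetical full-well capacity in electrons
print(f"detector SNR at full well: {shot_noise_snr_db(well):.1f} dB")  # 50.0 dB
print(f"12-bit-ENOB ADC SNR:       {adc_snr_db(12):.1f} dB")           # 74.0 dB
```

    Since the detector itself tops out near 50 dB, a 12-bit-ENOB converter already leaves ample margin once a gain switch keeps faint signals within the quantizer's fine range.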

  2. Template-Assisted Direct Growth of 1 Td/in² Bit Patterned Media.

    PubMed

    Yang, En; Liu, Zuwei; Arora, Hitesh; Wu, Tsai-Wei; Ayanoor-Vitikkate, Vipin; Spoddig, Detlef; Bedau, Daniel; Grobis, Michael; Gurney, Bruce A; Albrecht, Thomas R; Terris, Bruce

    2016-07-13

    We present a method for growing bit patterned magnetic recording media using directed growth of sputtered granular perpendicular magnetic recording media. The grain nucleation is templated using an epitaxial seed layer, which contains Pt pillars separated by amorphous metal oxide. The scheme enables the creation of both templated data and servo regions suitable for high density hard disk drive operation. We illustrate the importance of using a process that is both topographically and chemically driven to achieve high quality media. PMID:27295317

  4. Experimental test of Landauer’s principle in single-bit operations on nanomagnetic memory bits

    PubMed Central

    Hong, Jeongmin; Lambson, Brian; Dhuey, Scott; Bokor, Jeffrey

    2016-01-01

    Minimizing energy dissipation has emerged as the key challenge in continuing to scale the performance of digital computers. The question of whether there exists a fundamental lower limit to the energy required for digital operations is therefore of great interest. A well-known theoretical result put forward by Landauer states that any irreversible single-bit operation on a physical memory element in contact with a heat bath at a temperature T requires at least kBT ln(2) of heat be dissipated from the memory into the environment, where kB is the Boltzmann constant. We report an experimental investigation of the intrinsic energy loss of an adiabatic single-bit reset operation using nanoscale magnetic memory bits, by far the most ubiquitous digital storage technology in use today. Through sensitive, high-precision magnetometry measurements, we observed that the amount of dissipated energy in this process is consistent (within 2 SDs of experimental uncertainty) with the Landauer limit. This result reinforces the connection between “information thermodynamics” and physical systems and also provides a foundation for the development of practical information processing technologies that approach the fundamental limit of energy dissipation. The significance of the result includes insightful direction for future development of information technology. PMID:26998519
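    The kBT ln(2) bound quoted above is straightforward to evaluate numerically (room temperature chosen for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI definition)

def landauer_limit_joules(temp_kelvin: float) -> float:
    """Minimum heat dissipated by an irreversible one-bit operation: kB * T * ln(2)."""
    return K_B * temp_kelvin * math.log(2)

e_min = landauer_limit_joules(300.0)
print(f"Landauer limit at 300 K: {e_min:.3e} J")  # ~2.871e-21 J
```

    For comparison, conventional logic today dissipates many orders of magnitude more energy per bit operation, which is why approaching this limit experimentally is notable.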

  7. Error Correction Coding for Reliable Communication in the Presence of Extreme Noise.

    NASA Astrophysics Data System (ADS)

    Chao, Chi-Chao

    This thesis is a study of error-correcting codes for reliable communication in the presence of extreme noise. We consider very noisy channels, which occur in practice by pushing ordinary channels to their physical limits. Both block codes and convolutional codes are examined. We show that the family of triply orthogonal codes, defined and studied in this thesis, or orthogonal codes can be used to achieve channel capacity for certain classes of very noisy discrete memoryless channels. The performance of binary block codes on the unquantized additive white Gaussian noise channel at very low signal-to-noise ratios is studied. Expressions are derived for the decoder block error as well as bit error probabilities and the asymptotic coding gain near the point where the signal energy is zero. The average distance spectrum for the ensemble of time-varying convolutional codes is computed, and the result gives a surprisingly accurate prediction of the growth rate of the number of fundamental paths at large distance for fixed codes. A Gilbert-like free distance lower bound is also given. Finally, a Markov chain model is developed to approximate burst error statistics of Viterbi decoding. The model is validated through computer simulations and is compared with the previously proposed geometric model.

  8. Optimized entanglement-assisted quantum error correction

    SciTech Connect

    Taghavi, Soraya; Brun, Todd A.; Lidar, Daniel A.

    2010-10-15

    Using convex optimization, we propose entanglement-assisted quantum error-correction procedures that are optimized for given noise channels. We demonstrate through numerical examples that such an optimized error-correction method achieves higher channel fidelities than existing methods. This improved performance, which leads to perfect error correction for a larger class of error channels, can be interpreted in at least some cases in terms of quantum teleportation, although for general channels this interpretation does not hold.

  9. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  10. 8-, 16-, and 32-Bit Processors: Characteristics and Appropriate Applications.

    ERIC Educational Resources Information Center

    Williams, James G.

    1984-01-01

    Defines and describes the components and functions that constitute a microcomputer--bits, bytes, address register, cycle time, data path, and bus. Characteristics of 8-, 16-, and 32-bit machines are explained in detail, and microprocessor evolution, architecture, and implementation are discussed. Application characteristics or types for each bit…

  11. TriBITS (Tribal Build, Integrate, and Test System)

    SciTech Connect

    2013-05-16

    TriBITS is a configuration, build, test, and reporting system that uses the Kitware open-source CMake/CTest/CDash system. TriBITS contains a number of custom CMake/CTest scripts and python scripts that extend the functionality of the out-of-the-box CMake/CTest/CDash system.

  12. The application of low-bit-rate encoding techniques to digital satellite systems

    NASA Astrophysics Data System (ADS)

    Rowbotham, T. R.; Niwa, K.

    This paper describes the INTELSAT-funded development of low-bit-rate voice encoding techniques: Adaptive Differential Pulse Code Modulation (ADPCM), Nearly Instantaneous Companding (NIC), and Continuously Variable Slope Delta modulation (CVSD). Subjective and objective evaluation results, with and without transmission errors, are presented, primarily for 32 kbit/s per voice channel. Part of the paper is devoted to the interfacing of ADPCM, NIC and CVSD with terrestrial ISDN and satellite networks, the frame structure and how signalling can be accommodated, and compatibility with other voice-associated digital processors such as DSI and echo cancellers.

  13. Evolution of a Hybrid Roller Cone/PDC core bit

    SciTech Connect

    Pettitt, R.; Laney, R.; George, D.; Clemens, G.

    1980-01-01

    The development of the hot dry rock (HDR) geothermal resource, as presently being accomplished by the Los Alamos Scientific Laboratory (LASL), requires that sufficient quantities of good quality core be obtained at a reasonable cost. The use of roller cone core bits, with tungsten carbide inserts, was initiated by the Deep Sea Drilling Program. These bits were modified for continental drilling in deep, hot, granitic rock for the LASL HDR Geothermal Site at Fenton Hill, New Mexico in 1974. After the advent of monocrystalline diamond Stratapax pads, a prototype hybrid roller cone/Stratapax core bit was fabricated by Smith Tool, and tested at Fenton Hill in 1978. During the drilling for a deeper HDR reservoir system in 1979 and 1980, six of the latest generation of these bits, now called Hybrid Roller Cone/Polycrystalline Diamond Cutter (PDC) core bits, were successfully used in granitic rock at depths below 11,000 ft.

  14. Installation of MCNP on 64-bit parallel computers

    SciTech Connect

    Meginnis, A.B.; Hendricks, J.S.; McKinney, G.W.

    1995-09-01

    The Monte Carlo radiation transport code MCNP has been successfully ported to two 64-bit workstations, the SGI and DEC Alpha. We found the biggest problem for installation on these machines to be Fortran and C mismatches in argument passing. Correcting these mismatches enabled, for the first time, dynamic memory allocation on 64-bit workstations. Although the 64-bit hardware is faster because 8 bytes are processed at a time rather than 4, we found no speed advantage in true 64-bit coding versus implicit double precision when porting an existing code to the 64-bit workstation architecture. We did find that PVM multitasking is very successful and represents a significant performance enhancement for scientific workstations.

  15. Uniqueness skews bit occurrence frequencies in randomly generated fingerprint libraries.

    PubMed

    Chen, Nelson G

    2016-08-01

    Requiring that randomly generated chemical fingerprint libraries have unique fingerprints such that no two fingerprints are identical causes a systematic skew in bit occurrence frequencies, the proportion at which specified bits are set. Observed frequencies (O) at which each bit is set within the resulting libraries systematically differ from frequencies at which bits are set at fingerprint generation (E). Observed frequencies systematically skew toward 0.5, with the effect being more pronounced as library size approaches the compound space, which is the total number of unique possible fingerprints given the number of bit positions each fingerprint contains. The effect is quantified for varying library sizes as a fraction of the overall compound space, and for changes in the specified frequency E. The cause and implications for this systematic skew are subsequently discussed. When generating random libraries of chemical fingerprints, the imposition of a uniqueness requirement should either be avoided or taken into account.
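    The skew can be reproduced with a toy rejection-sampling simulation (the bit count and library size below are chosen for illustration and are far smaller than realistic fingerprint libraries):

```python
import random

def unique_library(n_bits: int, size: int, e: float, rng: random.Random) -> list:
    """Draw fingerprints with per-bit set probability e, rejecting duplicates."""
    seen, lib = set(), []
    while len(lib) < size:
        fp = tuple(1 if rng.random() < e else 0 for _ in range(n_bits))
        if fp not in seen:
            seen.add(fp)
            lib.append(fp)
    return lib

rng = random.Random(1)
n_bits, size, e = 4, 12, 0.2  # library covers 12/16 of the compound space
freqs = []
for _ in range(500):  # average the observed frequency over many libraries
    lib = unique_library(n_bits, size, e, rng)
    freqs.append(sum(sum(fp) for fp in lib) / (size * n_bits))
o = sum(freqs) / len(freqs)
print(f"generation frequency E = {e}, observed O = {o:.3f}")  # O skews toward 0.5
```

    Because low-weight fingerprints collide and get rejected most often, the surviving library over-represents rarer patterns, pulling the observed frequency O above E and toward 0.5, as the abstract describes.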

  16. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
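    A minimal block interleaver of the kind relied on here can be sketched as follows (toy integer symbols stand in for codeword bits; a burst of channel errors is spread so each codeword sees at most one error):

```python
def interleave(bits: list, depth: int) -> list:
    """Write codewords row by row, read out column by column (block interleaver)."""
    assert len(bits) % depth == 0
    width = len(bits) // depth
    rows = [bits[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(bits: list, depth: int) -> list:
    """Invert interleave(): regroup the column-ordered stream back into rows."""
    width = len(bits) // depth
    cols = [bits[i * depth:(i + 1) * depth] for i in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

msg = list(range(12))            # three 4-symbol codewords
tx = interleave(msg, depth=3)
tx[0:3] = [-1, -1, -1]           # a burst of 3 consecutive channel errors
rx = deinterleave(tx, depth=3)
# after deinterleaving, the burst lands in three separate codewords,
# each correctable by an FEC code that handles one error per codeword
print([rx[i * 4:(i + 1) * 4] for i in range(3)])
```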

  17. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. 
Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that
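    A simplified sketch of the shave/set alternation on IEEE single-precision values (this quantizes by mantissa bits directly, whereas NCO's Bit Grooming targets decimal significant digits; the function name is illustrative):

```python
import struct

def groom(values, keep_mantissa_bits: int) -> list:
    """Alternately shave (zero) and set (one) the trailing mantissa bits of
    float32 values, in the spirit of the Bit Grooming scheme described above."""
    drop = 23 - keep_mantissa_bits            # float32 has a 23-bit mantissa
    shave_mask = ~((1 << drop) - 1) & 0xFFFFFFFF
    set_mask = (1 << drop) - 1
    out = []
    for i, v in enumerate(values):
        (u,) = struct.unpack("<I", struct.pack("<f", v))   # raw float32 bits
        u = (u & shave_mask) if i % 2 == 0 else (u | set_mask)
        (g,) = struct.unpack("<f", struct.pack("<I", u))
        out.append(g)
    return out

data = [3.141592653589793] * 6
groomed = groom(data, keep_mantissa_bits=10)
print(groomed)  # alternates slightly below / slightly above pi
```

    Alternating between rounding down (shave) and up (set) is what removes the systematic low bias of pure Bit Shaving, so array means stay close to their full-precision values.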

  19. Error growth in operational ECMWF forecasts

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Dalcher, A.

    1985-01-01

    A parameterization scheme used at the European Centre for Medium-Range Weather Forecasts (ECMWF) to model the average growth of the difference between forecasts on consecutive days was extended by including the effect of model deficiencies on error growth. Error was defined as the difference between the forecast and analysis fields at verification time. Systematic and random errors were considered separately in calculating the error variance for a 10-day operational forecast. A good fit was obtained with measured forecast errors, and a satisfactory trend was achieved in the difference between forecasts. Fitting six parameters to forecast errors and differences, performed separately for each wavenumber, revealed that the error growth rate increases with wavenumber. The saturation error decreases with total wavenumber, and the limit of predictability, i.e., the time at which error variance reaches 95 percent of saturation, decreases monotonically with total wavenumber.
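    A growth model of the form dE/dt = (αE + S)(1 − E/E∞), with a source term S representing model deficiencies, captures the behavior described: exponential growth of initial error plus a steady model-error source, saturating at E∞. The parameters below are illustrative, not the fitted ECMWF values:

```python
def error_growth(alpha: float, s: float, e_inf: float, e0: float,
                 days: float, dt: float = 0.01) -> float:
    """Euler-integrate dE/dt = (alpha*E + s) * (1 - E/e_inf) from E(0) = e0."""
    e, t = e0, 0.0
    while t < days:
        e += dt * (alpha * e + s) * (1.0 - e / e_inf)
        t += dt
    return e

# illustrative parameters: growth rate per day, source term, saturation level
alpha, s, e_inf = 0.35, 1.0, 100.0
for day in (1, 5, 10, 20):
    print(f"day {day:2d}: error variance {error_growth(alpha, s, e_inf, 5.0, day):.1f}")
```

    With S > 0 the error grows even from a perfect initial state, which is the signature of model deficiencies; the 95%-of-saturation crossing then defines the predictability limit.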

  20. Drill bit stud and method of manufacture

    SciTech Connect

    Hake, L.W.; Huff, C.F.; Miller, J.W.

    1984-10-23

    A polycrystalline diamond compact is a polycrystalline diamond wafer attached to a tungsten carbide substrate, forming a disc. The disc is attached to a stud, which in turn is mounted within a drill bit. The compact is attached to the stud with the aid of a positioning ring. When the stud is made of impact-resistant material, a full pedestal may be formed on the stud to facilitate the use of the positioning ring. When the stud is made of brittle material, the positioning ring is attached to the flat face of the stud without a pedestal. The ring is positioned on the stud and the disc is inserted in the ring so that the disc sits against the bonding surface. The disc remains in position against the bonding surface during handling before and during the bonding process. In a second embodiment, the polycrystalline diamond compact is smaller than the disc itself and the remainder of the disc is formed of metal having the same thickness as the polycrystalline diamond compact or its tungsten carbide substrate. The shape of the smaller polycrystalline diamond compact may be semicircular, circular, polygon-shaped (e.g., triangular or square), or another geometric figure.

  1. Continuous chain bit with downhole cycling capability

    DOEpatents

    Ritter, Don F.; St. Clair, Jack A.; Togami, Henry K.

    1983-01-01

    A continuous chain bit for hard rock drilling is capable of downhole cycling. A drill head assembly moves axially relative to a support body while the chain on the head assembly is held in position so that the bodily movement of the chain cycles the chain to present new composite links for drilling. A pair of spring fingers on opposite sides of the chain hold the chain against movement. The chain is held in tension by a spring-biased tensioning bar. A head at the working end of the chain supports the working links. The chain is centered by a reversing pawl and piston actuated by the pressure of the drilling mud. Detent pins lock the head assembly with respect to the support body and are also operated by the drilling mud pressure. A restricted nozzle with a divergent outlet sprays drilling mud into the cavity to remove debris. Indication of the centered position of the chain is provided by noting a low pressure reading indicating proper alignment of drilling mud slots on the links with the corresponding feed branches.

  2. Single Abrikosov vortices as quantized information bits.

    PubMed

    Golod, T; Iovan, A; Krasnov, V M

    2015-01-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logic. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors and can be utilized for the creation of high-density digital cryoelectronics. In this work we provide a proof of concept for an Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out, using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between the 0 and 1 states, a short access time, scalability to nm sizes, and an extremely low write energy. Non-volatility and perfect reproducibility are inherent to such a device due to the quantized nature of the vortex. PMID:26456592

  4. A 14-bit 40-MHz analog front end for CCD application

    NASA Astrophysics Data System (ADS)

    Jingyu, Wang; Zhangming, Zhu; Shubin, Liu

    2016-06-01

    A 14-bit, 40-MHz analog front end (AFE) for CCD scanners is analyzed and designed. The proposed system incorporates a digitally controlled wideband variable gain amplifier (VGA) with nearly 42 dB of gain range, a correlated double sampler (CDS) with programmable gain functionality, a 14-bit analog-to-digital converter, and a programmable timing core. To achieve the maximum dynamic range, the VGA can linearly amplify the input signal over a gain range from -1.08 to 41.06 dB in 6.02 dB steps with a constant bandwidth. A novel CDS extracts the image information from the noise and further amplifies the signal accurately over a gain range from 0 to 18 dB in 0.035 dB steps. A 14-bit ADC quantizes the analog signal, with optimizations for power and linearity. An internal timing core provides flexible timing for the CCD arrays, CDS, and ADC. The proposed AFE was fabricated in the SMIC 0.18 μm CMOS process. The circuit occupies an active area of 2.8 × 4.8 mm² and consumes 360 mW. With a 6.069 MHz input signal and a 40 MHz sampling frequency, the signal-to-noise-and-distortion ratio (SNDR) is 70.3 dB and the effective number of bits is 11.39. Project supported by the National Natural Science Foundation of China (Nos. 61234002, 61322405, 61306044, 61376033), the National High-Tech Program of China (No. 2013AA014103), and the Opening Project of Science and Technology on Reliability Physics and Application Technology of Electronic Component Laboratory (No. ZHD201302).
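    The reported figures are self-consistent under the standard relation between SNDR and effective number of bits:

```python
def enob(sndr_db: float) -> float:
    """Effective number of bits from measured SNDR: ENOB = (SNDR - 1.76) / 6.02."""
    return (sndr_db - 1.76) / 6.02

# figures reported for the AFE above
print(f"ENOB at 70.3 dB SNDR: {enob(70.3):.2f} bits")  # ~11.39
```

    Note also that the VGA's 6.02 dB gain step corresponds to exactly a factor of two, i.e., one bit of range per step.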

  6. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  7. Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu

    2015-01-01

    Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
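
    The core idea, folding the channel bit error rate into the likelihoods of the decoded local decisions, can be sketched as a Chair-Varshney-style log-likelihood ratio test. This is a minimal illustration with assumed detection probability (pd), false-alarm probability (pf) and BER values, not the authors' exact rule, which also models modulation and reception mode:

```python
import math

def fused_llr(decoded_bits, pd, pf, ber):
    """LLR fusion where each local decision passes through a binary
    channel with the given bit error rate before reaching the FC."""
    p1 = pd * (1 - ber) + (1 - pd) * ber  # P(decoded bit = 1 | target present)
    p0 = pf * (1 - ber) + (1 - pf) * ber  # P(decoded bit = 1 | target absent)
    llr = 0.0
    for u in decoded_bits:
        if u:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
    return llr  # declare "target present" when llr exceeds a threshold

print(fused_llr([1, 1, 0, 1], pd=0.9, pf=0.1, ber=0.05) > 0)
```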

  10. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  11. Performance enhancement using forward error correction on power line communication channels

    SciTech Connect

    Chan, M.H.L.; Friedman, D.; Donaldson, R.W.

    1994-04-01

    The use of forward error correction (FEC) coding is investigated, to enhance communication throughput and reliability on noisy power line networks. Rate one-half self-orthogonal convolutional codes are considered. These codes are known to be effective in other environments, and can be decoded inexpensively in real-time using majority logic decoders. Extensive bit and packet error rate tests were conducted on actual, noisy in-building power line links. Coding gains of 15 dB were observed at 10⁻³ decoded bit error rates. A self-orthogonal (2, 1, 6) convolutional code with interleaving to degree 7 was particularly effective, and was implemented as a VLSI microelectronic chip. Its use improved data throughput and packet error rates substantially, at data transmission rates of 9,600 bits/s.

  12. Development of a near-bit MWD system. Quarterly report, October--December, 1994

    SciTech Connect

    McDonald, W.J.; Pittard, G.T.

    1995-05-01

    As horizontal drilling and completion technology has improved through evolution, the length of the horizontal sections has grown longer and the need for more accurate directional placement becomes more critical. The reliance on examining formation conditions and borehole directional data some 50 to 80 feet above the bit becomes less acceptable as turning radii decrease and target sands become thinner. The project objective is to develop a measurements-while-drilling module that can reliably provide real-time reports of drilling conditions at the bit. The module is to support multiple types of sensors and to sample and encode their outputs in digital form under microprocessor control. The assembled message will then be electronically transmitted along the drill string back to a standard mud-pulse or EM-MWD tool for data integration and relay to the surface. The development effort will consist of reconfiguring the AccuNav® EM-MWD Directional System manufactured by Guided Boring Systems, Inc. of Houston, Texas, for near-bit operation, followed by the inclusion of additional sensor types (e.g., natural gamma ray, formation resistivity, etc.) in Phase 2. The near-bit MWD prototype fabrication was completed and the system assembled and calibrated. The unit was then subjected to vibration and shock testing for a period in excess of 200 hours. In addition, the unit was completely disassembled and inspected at the conclusion of the reliability tests to assess damage or wear. No fall-off in performance or damage to the electronics or battery pack was found. The performance of the telemetry link was also assessed. The tests demonstrated the ability to transmit and receive error-free data over a transmitter-to-receiver separation distance of 100 feet for both liquid-filled and dry boreholes.

  13. High-power TSP bits. [Thermally Stable Polycrystalline diamond]

    SciTech Connect

    Cohen, J.H.; Maurer, W.C.; Westcott, P.A.

    1994-03-01

    This paper reviews a three-year R&D project to develop advanced thermally stable polycrystalline diamond (TSP) bits that can operate at power levels 5 to 10 times greater than those typically delivered by rotary rigs. These bits are designed to operate on advanced drilling motors that drill 3 to 6 times faster than rotary rigs. TSP bit design parameters that were varied during these tests include cutter size, shape, density, and orientation. Drilling tests conducted in limestone, sandstone, marble, and granite blocks showed that these optimized bits drilled many of these rocks at 500 to 1,000 ft/hr (150 to 300 m/h), compared to 50 to 100 ft/hr (15 to 30 m/h) for roller bits. These tests demonstrated that TSP bits are capable of operating at the high speeds and high torques delivered by advanced drilling motors now being developed. These advanced bits and motors are designed for use in slim-hole and horizontal drilling applications.

  14. Reducing Bits in Electrodeposition Process of Commercial Vehicle - A Case Study

    NASA Astrophysics Data System (ADS)

    Rahim, Nabiilah Ab; Hamedon, Zamzuri; Mohd Turan, Faiz; Iskandar, Ismed

    2016-02-01

    Painting is a critical process in commercial vehicle manufacturing, serving both protection and decoration. Good quality of the painted body is important to reduce repair cost and achieve customer satisfaction. To achieve good quality, it is important to reduce defects at the first step of the painting process, which is the electrodeposition process. The Pareto graph and the cause-and-effect diagram from the seven QC tools are utilized to reduce electrodeposition defects. The main defects in the electrodeposition process in this case study are bits, 55% of which are iron filings. The iron filings, which come from the metal assembly process at the body shop, are minimised by controlling the spot welding parameters, defect control and a standard body cleaning process. However, some iron filings remain on the body and are carried over to the paint shop. The remaining iron filings settle inside the dipping tank and are removed by a filtration system and magnetic separation. The implementation of the filtration system and magnetic separation reduced bits by 27% and reduced sanding man-hours by 42%, with a total saving of RM38.00 per unit.

  15. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  16. Seismic Investigations of the Zagros-Bitlis Thrust Zone

    NASA Astrophysics Data System (ADS)

    Gritto, R.; Sibol, M.; Caron, P.; Quigley, K.; Ghalib, H.; Chen, Y.

    2009-05-01

    We present results of crustal studies obtained with seismic data from the Northern Iraq Seismic Network (NISN). NISN has operated 10 broadband stations in north-eastern Iraq since late 2005. At present, over 800 GB of seismic waveform data have been analyzed. The aim of the present study is to derive models of the local and regional crustal structure of north and north-eastern Iraq, including the northern extension of the Zagros collision zone. This goal is, in part, achieved by estimating local and regional seismic velocity models using receiver function and surface wave dispersion analyses, and using these velocity models to obtain accurate hypocenter locations and event focal mechanisms. Our analysis of hypocenter locations produces a clear picture of the seismicity associated with the tectonics of the region. The largest seismicity rate is confined to the active northern section of the Zagros thrust zone, while it decreases towards the southern end, before the intensity increases again in the Bandar Abbas region. Additionally, the rift zones in the Red Sea and the Gulf of Aden are clearly demarcated by high seismicity rates. Our analysis of waveform data indicates clear propagation paths from the west or south-west across the Arabian shield as well as from the north and east into NISN. Phases including Pn, Pg, Sn, Lg, as well as LR are clearly observed on these seismograms. In contrast, blockage or attenuation of Pg and Sg-wave energy is observed for propagation paths across the Zagros-Bitlis zone from the south, while Pn and Sn phases are not affected. These findings are in support of earlier tectonic models that suggested the existence of multiple parallel listric faults splitting off the main Zagros fault zone in east-west direction. These faults appear to attenuate the crustal phases while the refracted phases, propagating across the mantle lid, remain unaffected. We will present surface wave analysis in support of these findings, indicating multi

  17. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  18. A sub-picojoule-per-bit CMOS photonic receiver for densely integrated systems.

    PubMed

    Zheng, Xuezhe; Liu, Frankie; Patil, Dinesh; Thacker, Hiren; Luo, Ying; Pinguet, Thierry; Mekis, Attila; Yao, Jin; Li, Guoliang; Shi, Jing; Raj, Kannan; Lexau, Jon; Alon, Elad; Ho, Ron; Cunningham, John E; Krishnamoorthy, Ashok V

    2010-01-01

    We report ultra-low-power (690 fJ/bit) operation of an optical receiver consisting of a germanium-silicon waveguide detector intimately integrated with a receiver circuit and embedded in a clocked digital receiver. We show a wall-plug power efficiency of 690 μW/Gbps for the photonic receiver, made of a 130 nm SOI CMOS Ge waveguide detector integrated with a 90 nm Si CMOS receiver circuit. The hybrid CMOS photonic receiver achieved a sensitivity of -18.9 dBm at 5 Gbps for a BER of 10⁻¹². Enabled by a unique low-overhead bias refresh scheme, the receiver operates without the need for DC-balanced transmission. Small-signal measurements of the CMOS Ge waveguide detector showed a 3 dB bandwidth of 10 GHz at 1 V of reverse bias, indicating that further increases in transmission rate and reductions of energy-per-bit will be possible.

  20. Performance of a phase-conjugate-engine implementing a finite-bit phase correction

    SciTech Connect

    Baker, K; Stappaerts, E; Wilks, S; Young, P; Gavel, D; Tucker, J; Silva, D; Olivier, S

    2003-10-23

    This article examines the achievable Strehl ratio when a finite-bit correction to an aberrated wave-front is implemented. The phase-conjugate-engine (PCE) used to measure the aberrated wavefront consists of a quadrature interferometric wave-front sensor, a liquid-crystal spatial-light-modulator and computer hardware/software to calculate and apply the correction. A finite-bit approximation to the conjugate phase is calculated and applied to the spatial light modulator to remove the aberrations from the optical beam. The experimentally determined Strehl ratio of the corrected beam is compared with analytical expressions for the expected Strehl ratio and shown to be in good agreement with those predictions.
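
    The dependence of the achievable Strehl ratio on the number of correction bits can be estimated with the Maréchal approximation: an N-bit corrector leaves a residual phase error uniform over one quantization step of 2π/2^N radians. The following is a sketch of that standard estimate, not the analytical expressions used in the article:

```python
import math

def strehl_nbit(n_bits: int) -> float:
    """Marechal estimate of the Strehl ratio after an N-bit phase
    correction: residual phase is uniform over one quantization step
    of 2*pi/2**N rad, so sigma^2 = step**2 / 12 and S ~ exp(-sigma^2)."""
    step = 2 * math.pi / 2 ** n_bits
    sigma2 = step ** 2 / 12
    return math.exp(-sigma2)

for n in (1, 2, 4, 8):
    print(n, round(strehl_nbit(n), 4))
```

Even a 4-bit correction already recovers most of the diffraction-limited Strehl under this estimate, which is why finite-bit spatial light modulators remain useful correctors.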

  1. A 2 GS/s 8-bit folding and interpolating ADC in 90 nm CMOS

    NASA Astrophysics Data System (ADS)

    Wenwei, He; Qiao, Meng; Yi, Zhang; Kai, Tang

    2014-08-01

    A single-channel 2 GS/s 8-bit analog-to-digital converter in 90 nm CMOS process technology is presented. It utilizes cascade folding architecture, which incorporates an additional inter-stage sample-and-hold amplifier between the folding circuits to enhance the quantization time. It also uses the foreground on-chip digital-assisted calibration circuit to improve the linearity of the circuit. The post simulation results demonstrate that it has a differential nonlinearity < ±0.3 LSB and an integral nonlinearity < ±0.25 LSB at the Nyquist frequency. Moreover, an effective number of bits of 7.338 can be achieved at 2 GS/s. The whole chip area is 0.88 × 0.88 mm2 with the pad. It consumes 210 mW from a 1.2 V single supply.

  2. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis. PMID:24885680
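
    For Onemax the exact distribution can be computed directly: mutating a string with k ones gives a new fitness of (k − X) + Y, where X ~ Bin(k, p) counts ones flipped to zero and Y ~ Bin(n − k, p) counts zeros flipped to one. A small sketch of this special case (the paper's general result covers arbitrary landscapes via Krawtchouk polynomials):

```python
from math import comb

def binom_pmf(n, p):
    """pmf of a Binomial(n, p) random variable over 0..n."""
    return [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]

def onemax_mutation_pmf(n, k, p):
    """Exact pmf of the Onemax fitness after uniform bit-flip mutation
    of an n-bit string with k ones: fitness = (k - X) + Y with
    X ~ Bin(k, p) and Y ~ Bin(n - k, p). Each entry is a polynomial
    in p, consistent with the paper's general theorem."""
    px, py = binom_pmf(k, p), binom_pmf(n - k, p)
    pmf = [0.0] * (n + 1)
    for x, qx in enumerate(px):
        for y, qy in enumerate(py):
            pmf[k - x + y] += qx * qy
    return pmf

pmf = onemax_mutation_pmf(n=4, k=2, p=0.1)
print([round(q, 4) for q in pmf])
```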

  4. Improving reliability of non-volatile memory technologies through circuit level techniques and error control coding

    NASA Astrophysics Data System (ADS)

    Yang, Chengen; Emre, Yunus; Cao, Yu; Chakrabarti, Chaitali

    2012-12-01

    Non-volatile resistive memories, such as phase-change RAM (PRAM) and spin transfer torque RAM (STT-RAM), have emerged as promising candidates because of their fast read access, high storage density, and very low standby power. Unfortunately, in scaled technologies, high storage density comes at a price of lower reliability. In this article, we first study in detail the causes of errors for PRAM and STT-RAM. We see that while for multi-level cell (MLC) PRAM, the errors are due to resistance drift, in STT-RAM they are due to process variations and variations in the device geometry. We develop error models to capture these effects and propose techniques based on tuning of circuit level parameters to mitigate some of these errors. Unfortunately for reliable memory operation, only circuit-level techniques are not sufficient and so we propose error control coding (ECC) techniques that can be used on top of circuit-level techniques. We show that for STT-RAM, a combination of voltage boosting and write pulse width adjustment at the circuit-level followed by a BCH-based ECC scheme can reduce the block failure rate (BFR) to 10⁻⁸. For MLC-PRAM, a combination of threshold resistance tuning and BCH-based product code ECC scheme can achieve the same target BFR of 10⁻⁸. The product code scheme is flexible; it allows migration to a stronger code to guarantee the same target BFR when the raw bit error rate increases with increase in the number of programming cycles.
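
    The link between raw bit error rate and block failure rate for a t-error-correcting BCH code is the tail of a binomial distribution: a block of n bits fails when more than t bits are in error. A sketch with illustrative n, t and BER values (the paper's actual code parameters are not reproduced here):

```python
from math import comb

def block_failure_rate(n: int, t: int, p: float) -> float:
    """Probability that more than t of n bits are in error, i.e. the
    block failure rate of a t-error-correcting code at raw BER p,
    assuming independent bit errors."""
    ok = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))
    return 1 - ok

# Illustrative only: migrating to a stronger code (larger t) buys
# several orders of magnitude in BFR at the same raw BER.
print(f"{block_failure_rate(511, 5, 1e-3):.2e}")
print(f"{block_failure_rate(511, 10, 1e-3):.2e}")
```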

  5. Performance of multi level error correction in binary holographic memory

    NASA Technical Reports Server (NTRS)

    Hanan, Jay C.; Chao, Tien-Hsin; Reyes, George F.

    2004-01-01

    At the Optical Computing Lab in the Jet Propulsion Laboratory (JPL), a binary holographic data storage system was designed and tested with methods of recording and retrieving the binary information. Levels of error correction were introduced to the system, including pixel averaging, thresholding, and parity checks. Errors were artificially introduced into the binary holographic data storage system and were monitored as a function of the defect area fraction, which showed a strong influence on data integrity. Average area fractions exceeding one quarter of the bit area caused unrecoverable errors. Efficient use of the available data density was discussed.
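
    The three correction levels mentioned, pixel averaging, thresholding, and parity checks, compose naturally. The following is a simplified illustration of how they stack; the block sizes and pixel values are invented, not JPL's implementation:

```python
def decode_bit(pixels, threshold=0.5):
    """Pixel averaging followed by thresholding: each data bit is
    stored redundantly as a block of analog pixel intensities."""
    return 1 if sum(pixels) / len(pixels) > threshold else 0

def parity_ok(bits):
    """Even-parity check over a data word plus its parity bit."""
    return sum(bits) % 2 == 0

# Hypothetical retrieved pixel blocks (intensities in [0, 1]) for three bits
blocks = ([0.9, 0.8, 0.2, 0.95], [0.1, 0.3, 0.2, 0.4], [0.7, 0.9, 0.6, 0.8])
word = [decode_bit(b) for b in blocks]
print(word, parity_ok(word + [0]))  # parity bit 0 was stored with the word
```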

  6. One bit/s/Hz Spectrally Efficient Transmission for an Eight-Channel NRZ-Modulated DWDM System

    NASA Astrophysics Data System (ADS)

    Vij, Robin; Sharma, Neeraj

    2016-03-01

    The core of the global telecommunication network consists of wavelength-division multiplexed (WDM) optical transmission systems. WDM is the technology of choice as it allows for a high spectral efficiency. We propose an effective way to counter the nonlinearities like four-wave mixing and cross-phase modulation to achieve the spectral efficiency of 1 bit/s/Hz using non-return-to-zero (NRZ) modulation format. We use the concept of non-uniform channel spacing and non-uniform power assignment between adjacent channels of the WDM system. We have simulated an eight-channel WDM lightwave system with bit rates of 10 and 25 Gbit/s.
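
    The benefit of non-uniform spacing is that four-wave-mixing products f_i + f_j − f_k no longer coincide with channel frequencies, which a uniform grid guarantees they do. A toy check using an assumed Golomb-ruler-derived channel plan (the frequencies are illustrative, not the authors'; the paper additionally applies non-uniform power assignment between adjacent channels):

```python
def fwm_collisions(freqs):
    """Count four-wave-mixing products f_i + f_j - f_k (with f_k distinct
    from f_i and f_j) that land exactly on a channel frequency. These
    in-band products are the degradation non-uniform spacing avoids."""
    chans = set(freqs)
    hits = 0
    for fi in freqs:
        for fj in freqs:
            for fk in freqs:
                if fk not in (fi, fj) and (fi + fj - fk) in chans:
                    hits += 1
    return hits

# Hypothetical 8-channel plans (GHz): a uniform 100 GHz grid vs. spacings
# from the order-8 Golomb ruler {0,1,4,9,15,22,32,34} scaled by 50 GHz.
uniform = [193_100 + 100 * n for n in range(8)]
golomb = [193_100 + 50 * m for m in (0, 1, 4, 9, 15, 22, 32, 34)]
print(fwm_collisions(uniform), fwm_collisions(golomb))
```

Because all pairwise differences of a Golomb ruler are distinct, no mixing product can coincide with a channel, whereas the uniform grid produces many in-band products.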

  7. Compressing molecular dynamics trajectories: Breaking the one-bit-per-sample barrier.

    PubMed

    Huwald, Jan; Richter, Stephan; Ibrahim, Bashar; Dittrich, Peter

    2016-07-01

    Molecular dynamics simulations yield large amounts of trajectory data. For their durable storage and accessibility an efficient compression algorithm is paramount. State-of-the-art domain-specific algorithms combine quantization, Huffman encoding and occasionally domain knowledge. We propose the high resolution trajectory compression scheme (HRTC) that relies on piecewise linear functions to approximate quantized trajectories. By splitting the error budget between quantization and approximation, our approach beats the current state of the art by several orders of magnitude given the same error tolerance. It allows storing samples at far less than one bit per sample. It is simple and fast enough to be integrated into the inner simulation loop, store every time step, and become the primary representation of trajectory data. © 2016 Wiley Periodicals, Inc. PMID:27191931
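
    The core of such a scheme, fitting piecewise linear segments that stay within an error tolerance of the samples, can be sketched greedily: extend each segment as far as linear interpolation between its endpoints permits. This is a simplified illustration of the idea, not the HRTC algorithm itself:

```python
def greedy_pwl(samples, tol):
    """Greedy piecewise-linear approximation: extend each segment while
    linear interpolation between its endpoints stays within tol of every
    interior sample. Returns breakpoint indices; storing only the values
    at these indices is what pushes cost below one bit per sample."""
    breakpoints = [0]
    i = 0
    while i < len(samples) - 1:
        j = i + 1
        while j + 1 < len(samples):
            a, b = samples[i], samples[j + 1]
            # check all interior samples against the candidate segment
            if all(abs(a + (b - a) * (k - i) / (j + 1 - i) - samples[k]) <= tol
                   for k in range(i + 1, j + 1)):
                j += 1
            else:
                break
        breakpoints.append(j)
        i = j
    return breakpoints

data = [0.0, 1.0, 2.05, 3.0, 3.0, 3.1, 2.9, 3.0]
print(greedy_pwl(data, tol=0.1))
```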

  8. Error correction for encoded quantum annealing

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Preskill, John

    2016-05-01

    Recently, W. Lechner, P. Hauke, and P. Zoller [Sci. Adv. 1, e1500838 (2015), 10.1126/sciadv.1500838] have proposed a quantum annealing architecture, in which a classical spin glass with all-to-all pairwise connectivity is simulated by a spin glass with geometrically local interactions. We interpret this architecture as a classical error-correcting code, which is highly robust against weakly correlated bit-flip noise, and we analyze the code's performance using a belief-propagation decoding algorithm. Our observations may also apply to more general encoding schemes and noise models.

  9. Preliminary design for a standard 10⁷ bit Solid State Memory (SSM)

    NASA Technical Reports Server (NTRS)

    Hayes, P. J.; Howle, W. M., Jr.; Stermer, R. L., Jr.

    1978-01-01

    A modular concept with three separate modules roughly separating bubble domain technology, control logic technology, and power supply technology was employed. These modules were respectively the standard memory module (SMM), the data control unit (DCU), and power supply module (PSM). The storage medium was provided by bubble domain chips organized into memory cells. These cells and the circuitry for parallel data access to the cells make up the SMM. The DCU provides a flexible serial data interface to the SMM. The PSM provides adequate power to enable one DCU and one SMM to operate simultaneously at the maximum data rate. The SSM was designed to handle asynchronous data rates from dc to 1.024 Mb/s with a bit error rate of less than 1 error in 10⁸ bits. Two versions of the SSM, a serial data memory and a dual parallel data memory were specified using the standard modules. The SSM specification includes requirements for radiation hardness, temperature and mechanical environments, dc magnetic field emission and susceptibility, electromagnetic compatibility, and reliability.

  10. Dating cave speleothems using diamond core bits in Tennessee and Virginia

    NASA Astrophysics Data System (ADS)

    Burnham, T. G.; Gao, Y.; Cheng, H.; Edwards, R.

    2011-12-01

    Removing speleothems for paleoclimate study was a destructive practice in cave and karst communities. It was considered illegal and even resulted in lawsuits in some cases. We used small diamond core bits to investigate age distributions of speleothems in several caves in Virginia and Tennessee without removing them from caves. Core bits of 8 mm diameter and one inch depth of cut were used to drill nearly 100 cores from three caves: Morril's Cave (aka Worley's Cave) and Blue Springs Cave in Tennessee and Grand Caverns in Virginia. Samples drilled in Morril's Cave were processed for U/Th chemistry directly. Cores drilled in Blue Springs Cave and Grand Caverns were carefully selected and cleaned using an ultrasonic cleaner before the U/Th chemistry procedure. Age dating results showed significant differences for these two sets of samples. For Morril's Cave samples, most of the age errors (2 sigma) were greater than 1%. For the pre-cleaned samples, more than 90% of the age errors (2 sigma) were less than 1%. This method is non-destructive and easy to use with minimum equipment needed in the field. Age dating results are fairly precise if samples are pre-cleaned for U/Th chemistry. Major limitations of this method are: 1) Pre-cleaning is necessary to ensure precise dating results; 2) Hiatuses could be missed if cores are only drilled at the top and bottom portions of the speleothems; 3) Exact locations of ages are hard to interpret, especially for relatively longer cores.

  11. Exact probability of error analysis for FHSS/CDMA communications in the presence of single term Rician fading

    NASA Astrophysics Data System (ADS)

    Turcotte, Randy L.; Wickert, Mark A.

    An exact expression is found for the probability of bit error of an FHSS-BFSK (frequency-hopping spread-spectrum/binary-frequency-shift-keying) multiple-access system in the presence of slow, nonselective, 'single-term' Rician fading. The effects of multiple-access interference and/or continuous tone jamming are considered. Comparisons are made between the error expressions developed here and previously published upper bounds. It is found that under certain channel conditions the upper bounds on the probability of bit error may exceed the actual probability of error by an order of magnitude.
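
    As a baseline for such comparisons, the no-fading case has the closed form P_b = ½·exp(−γ/2) for noncoherent BFSK at SNR γ, which a Monte Carlo envelope-detection model reproduces. The Rician-fading expression derived in the paper is more involved; this sketch covers only the AWGN baseline:

```python
import math
import random

def bfsk_pb_analytic(snr_db: float) -> float:
    """Noncoherent BFSK in AWGN (no fading): Pb = 0.5 * exp(-g/2)."""
    g = 10 ** (snr_db / 10)
    return 0.5 * math.exp(-g / 2)

def bfsk_pb_montecarlo(snr_db: float, trials: int = 200_000, seed: int = 1) -> float:
    """Envelope detection: an error occurs when the noise-only branch
    has a larger envelope than the signal branch (N0 = 1)."""
    rng = random.Random(seed)
    s = math.sqrt(10 ** (snr_db / 10))
    errs = 0
    for _ in range(trials):
        # complex Gaussian noise, variance 0.5 per real component
        n1 = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        n0 = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        if abs(n0) > abs(s + n1):
            errs += 1
    return errs / trials

print(bfsk_pb_analytic(7), bfsk_pb_montecarlo(7))
```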

  12. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    NASA Astrophysics Data System (ADS)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. 
JPEG2000 thus appears vulnerable to bit errors in a noisy channel.
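
    The propagation effect described above is generic to variable-length source codes. A toy sketch (not JPEG2000 itself; the three-symbol prefix code is made up for illustration) shows how one flipped bit desynchronizes the decoder and corrupts symbols until the stream happens to resynchronize:

```python
# Toy illustration of error propagation in a variable-length prefix code:
# a single flipped bit desynchronizes the decoder, corrupting symbols
# until the bit stream happens to resynchronize.
CODE = {"a": "0", "b": "10", "c": "11"}          # hypothetical prefix code
INV = {v: k for k, v in CODE.items()}

def encode(msg):
    return "".join(CODE[s] for s in msg)

def decode(bits):
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in INV:
            out.append(INV[cur])
            cur = ""
    return "".join(out)          # any trailing partial symbol is dropped

def flip(bits, i):
    return bits[:i] + ("1" if bits[i] == "0" else "0") + bits[i + 1:]

msg = "abcabcabc"
clean = encode(msg)
corrupted = decode(flip(clean, 1))   # one bit error, several wrong symbols
```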

  13. Bias and spread in extreme value theory measurements of probability of error

    NASA Technical Reports Server (NTRS)

    Smith, J. G.

    1972-01-01

    Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.
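
    For context on what a direct bit error rate test estimates: for BPSK over Gaussian noise the theoretical error probability is Pe = Q(sqrt(2 Eb/N0)), and a Monte Carlo trial reproduces it empirically. A minimal sketch (parameter values chosen for illustration):

```python
# BER of BPSK over AWGN: theory Pe = Q(sqrt(2*Eb/N0)) vs. Monte Carlo.
import math
import random

def q(x):                      # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_theory(ebn0):          # ebn0 is the linear (not dB) ratio
    return q(math.sqrt(2 * ebn0))

def ber_monte_carlo(ebn0, n=100_000, seed=0):
    rng = random.Random(seed)
    sigma = math.sqrt(1 / (2 * ebn0))      # noise std for Eb = 1
    errors = sum(1 for _ in range(n) if 1 + rng.gauss(0, sigma) < 0)
    return errors / n

pe = ber_theory(1.0)            # Eb/N0 = 0 dB -> Pe ~ 7.9e-2
pe_mc = ber_monte_carlo(1.0)
```

    At low target error rates the number of trials needed for a reliable direct estimate grows as ~1/Pe, which is the motivation for extrapolation methods such as extreme value theory.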

  14. Experimental bit commitment based on quantum communication and special relativity.

    PubMed

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H

    2013-11-01

Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrarily large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented. PMID: 24237497

  15. Compressed bit stream classification using VQ and GMM

    NASA Astrophysics Data System (ADS)

    Chen, Wenhua; Kuo, C.-C. Jay

    1997-10-01

Algorithms for classifying and segmenting bit streams with different source content (such as speech, text and image) and different coding methods (such as ADPCM, (mu)-law, TIFF, GIF and JPEG) in a communication channel are investigated. In previous work, we focused on the separation of fixed- and variable-length coded bit streams, and the classification of two variable-length coded bit streams by using Fourier analysis and an entropy feature. In this work, we consider the classification of multiple (more than two sources) compressed bit streams by using vector quantization (VQ) and Gaussian mixture modeling (GMM). The performance of the VQ and GMM techniques depends on various parameters such as the size of the codebook, the number of mixtures and the test segment length. It is demonstrated with experiments that both VQ and GMM outperform the single entropy feature. It is also shown that GMM generally outperforms VQ.
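
    A minimal sketch of the baseline entropy feature the VQ and GMM methods are compared against: the empirical byte entropy of a stream. Compressed or encrypted data looks nearly uniform (close to 8 bits/byte), while plain text scores much lower; the sample streams below are synthetic stand-ins:

```python
# Empirical byte entropy as a bit-stream classification feature.
import math
import random
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of the byte histogram, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
compressed_like = bytes(random.randrange(256) for _ in range(50_000))
text_like = b"the quick brown fox jumps over the lazy dog " * 1000

h_hi = byte_entropy(compressed_like)   # close to 8 bits/byte
h_lo = byte_entropy(text_like)         # far below 8
```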

  17. Bit selection increases coiled tubing and slimhole success

    SciTech Connect

    Feiner, R.F.

    1995-07-01

    Slimhole applications have grown within the past few years to include deepening existing wells to untapped reservoirs, drilling smaller well programs to reduce tangible costs and recompleting wells to adjacent reservoirs through directional or horizontal sidetracks. When selecting the proper bit for an interval, the ultimate goal is the same in the slimhole application as in the conventional application -- to save the operator money by reducing drilling cost per foot (CPF). Slimhole bit selection is a three-step process: (1) identify the characteristics of the formations to be drilled; (2) analyze the operational limitations of the slimhole application; and (3) select the bit type that will most economically drill the interval. Knowledge of lithology is crucial to the selection process. Accurate formation knowledge can be acquired from offset well records, mud logs, cores, electric logs, compressive rock strength analysis and any other information relevant to the drilling operation. This paper reviews the steps in selecting slimhole bits and completion equipment.
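
    The selection criterion in step (3) is the standard drilling cost-per-foot equation, CPF = (bit cost + rig rate × (drilling time + trip time)) / footage. The figures below are hypothetical, purely for illustration:

```python
# Standard drilling cost-per-foot comparison between two candidate bits.
def cost_per_foot(bit_cost, rig_rate, drill_hours, trip_hours, footage):
    return (bit_cost + rig_rate * (drill_hours + trip_hours)) / footage

# A pricier bit can still win if it drills the interval faster:
cpf_a = cost_per_foot(bit_cost=8_000,  rig_rate=600, drill_hours=40,
                      trip_hours=8, footage=1_200)   # ~30.7 $/ft
cpf_b = cost_per_foot(bit_cost=18_000, rig_rate=600, drill_hours=20,
                      trip_hours=8, footage=1_200)   # 29.0 $/ft
```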

  18. Secure self-calibrating quantum random-bit generator

    SciTech Connect

    Fiorentino, M.; Santori, C.; Spillane, S. M.; Beausoleil, R. G.; Munro, W. J.

    2007-03-15

    Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
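
    A sketch of the distillation bound the authors rely on: the min-entropy H_min = -log2(max_i p_i) of the measured outcome distribution lower-bounds the number of near-uniform bits extractable per raw sample (the distributions below are illustrative, not measured values):

```python
# Min-entropy of an outcome distribution and the resulting distillation bound.
import math

def min_entropy(probs):
    return -math.log2(max(probs))

h_uniform = min_entropy([0.25, 0.25, 0.25, 0.25])   # 2.0 bits per sample
h_biased = min_entropy([0.5, 0.25, 0.25])           # 1.0 bit per sample

# From n raw samples, roughly n * H_min near-uniform bits can be distilled:
n = 1_000_000
distillable = int(n * h_biased)
```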

  19. 26. photographer unknown 29 December 1937 FLOATING MOORING BIT INSTALLED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. photographer unknown 29 December 1937 FLOATING MOORING BIT INSTALLED IN LOCK SIDEWALL. - Bonneville Project, Navigation Lock No. 1, Oregon shore of Columbia River near first Powerhouse, Bonneville, Multnomah County, OR

  20. Eight-Bit-Slice GaAs General Processor Circuit

    NASA Technical Reports Server (NTRS)

    Weissman, John; Gauthier, Robert V.

    1989-01-01

Novel GaAs 8-bit slice enables quick and efficient implementation of a variety of fast GaAs digital systems ranging from central processing units of computers to special-purpose processors for communications and signal-processing applications. With the GaAs 8-bit slice, designers can quickly configure and test the hearts of many digital systems that demand fast complex arithmetic, fast and sufficient register storage, efficient multiplexing and routing of data words, and ease of control.

  1. Strong no-go theorem for Gaussian quantum bit commitment

    SciTech Connect

    Magnin, Loïck; Magniez, Frédéric; Leverrier, Anthony

    2010-01-15

Unconditionally secure bit commitment is forbidden by quantum mechanics. We extend this no-go theorem to continuous-variable protocols where both players are restricted to use Gaussian states and operations, which is a reasonable assumption in current optical implementations. Our Gaussian no-go theorem also provides a natural counter-example to a conjecture that quantum mechanics can be rederived from the assumption that key distribution is allowed while bit commitment is forbidden in Nature.

  2. Advanced bit establishes superior performance in Ceuta field

    SciTech Connect

    Mensa-Wilmot, G.

    1999-11-01

    A new-generation polycrystalline diamond compact (PDC) bit is redefining operational efficiency and reducing drilling costs in the Ceuta field, in the Lago de Maracaibo area of Venezuela. Its unique cutting structure and advancements in PDC cutter technology have established superior performance in this challenging application. The paper describes the new-generation PDC bit, advanced technology PDC cutters, and performance. A table gives cost per foot evaluation.

  3. 8-Bit Gray Scale Images of Fingerprint Image Groups

    National Institute of Standards and Technology Data Gateway

    NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (PC database for purchase)   The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  4. Achieving the Holevo bound via a bisection decoding protocol

    NASA Astrophysics Data System (ADS)

    Rosati, Matteo; Giovannetti, Vittorio

    2016-06-01

    We present a new decoding protocol to realize transmission of classical information through a quantum channel at asymptotically maximum capacity, achieving the Holevo bound and thus the optimal communication rate. At variance with previous proposals, our scheme recovers the message bit by bit, making use of a series of "yes-no" measurements, organized in bisection fashion, thus determining which codeword was sent in log2 N steps, N being the number of codewords.
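
    A classical sketch of the bisection idea: each "yes-no" answer asks whether the transmitted codeword lies in one half of the remaining candidates, so log2(N) answers identify it. (In the protocol each question is realized as a binary measurement on the channel output; the oracle below is a classical stand-in.)

```python
# Identify one of N codewords with log2(N) yes/no queries (bisection).
def bisection_decode(oracle, n_codewords):
    lo, hi, steps = 0, n_codewords, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if oracle(lo, mid):        # "is the codeword in [lo, mid)?"
            hi = mid
        else:
            lo = mid
        steps += 1
    return lo, steps

sent = 11                           # index of the transmitted codeword
oracle = lambda a, b: a <= sent < b
found, steps = bisection_decode(oracle, 16)   # 16 codewords -> 4 steps
```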

  5. Bits with diamond-coated inserts reduce gauge problems

    SciTech Connect

    Eckstrom, D.

    1991-06-17

    In highly abrasive formations, failure of the gauge row cutters on tungsten carbide insert bits may occur rapidly, resulting in short bit runs, poor performance, and undergauge hole. In certain applications, polycrystalline diamond (PCD) enhanced insert bits have longer bit runs and maintain an in-gauge hole which reduces reaming time and wear on downhole equipment. These bits with PCD-coated inserts have reduced drilling costs in several areas of Canada. PCD has been applied to rock drilling tools for several years because of its high wear resistance. Polycrystalline diamond compact (PDC) bits use polycrystalline diamonds formed in flat wafers applied to the flat surfaces on carbide inserts. The flat PDC cutters drill by shearing the formation. Smith International Canada Ltd. developed a patented process to apply PCD to curved surfaces, which now allows PCD-enhanced inserts to be used for percussion and rotary cone applications. These diamond-enhanced inserts combine the wear resistance properties of diamond with the durability of tungsten carbide.

  6. Supporting 64-bit global indices in Epetra and other Trilinos packages :

    SciTech Connect

    Jhurani, Chetan; Austin, Travis M.; Heroux, Michael Allen; Willenbring, James Michael

    2013-06-01

The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries within an object-oriented framework. It is intended for large-scale, complex multiphysics engineering and scientific applications [2, 4, 3]. Epetra is one of its basic packages. It provides serial and parallel linear algebra capabilities. Before Trilinos version 11.0, released in 2012, Epetra used the C++ int data-type for storing global and local indices for degrees of freedom (DOFs). Since int is typically 32-bit, this limited the largest problem size to be smaller than approximately two billion DOFs. This was true even if a distributed memory machine could handle larger problems. We have added optional support for the C++ long long data-type, which is at least 64-bit wide, for global indices. To save memory, maintain the speed of memory-bound operations, and reduce further changes to the code, the local indices are still 32-bit. We document the changes required to achieve this feature and how the new functionality can be used. We also report on the lessons learned in modifying a mature and popular package from various perspectives: design goals, backward compatibility, engineering decisions, C++ language features, effects on existing users and other packages, and build integration.
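
    The two-billion-DOF cap comes from signed 32-bit arithmetic: indices wrap past 2^31 - 1. A pure-stdlib sketch (not Epetra code) of the wraparound a 32-bit global index counter would suffer, and the limit that 64-bit indices restore:

```python
# Why 32-bit global indices cap the problem size near two billion DOFs.
import struct

INT32_MAX = 2**31 - 1            # largest signed 32-bit global index
assert INT32_MAX == 2_147_483_647

def as_int32(n):
    """Reinterpret the low 32 bits of n as a signed 32-bit integer."""
    return struct.unpack("<i", struct.pack("<I", n & 0xFFFFFFFF))[0]

wrapped = as_int32(INT32_MAX + 1)    # overflows to a negative index
INT64_MAX = 2**63 - 1                # limit with 'long long' indices
```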

  7. Patterned media towards Nano-bit magnetic recording: fabrication and challenges.

    PubMed

    Sbiaa, Rachid; Piramanayagam, Seidikkurippu N

    2007-01-01

During the past decade, the magnetic recording density of HDDs has doubled almost every 18 months. To keep increasing the recording density, there is a need to make the small bits thermally stable. The most recent method, perpendicular magnetic recording (PMR), will lose its fuel in a few years' time and alternatives are sought. Patterned media, where the bits are magnetically separated from each other, offer the possibility to solve many issues encountered by PMR technology. However, implementation of patterned media would involve developing processing methods which offer high resolution (small bits), regular patterns, and high density. All these need to be achieved without sacrificing a high throughput and low cost. In this article, we review some of the ideas that have been proposed in this subject. However, the focus of the paper is on nano-imprint lithography (NIL) as it fulfills most of the needs of HDD as compared to conventional lithography using electron beam, EUV or X-rays. The latest development of NIL and related technologies and their future prospects for patterned media are also discussed.

  8. Investigation of the potential for using electrochemical technology to reduce drill bit wear

    SciTech Connect

    Hinkebein, T.E.; Glowka, D.A.

    1982-02-01

Recent work has shown that an important drill bit wear mechanism in aqueous environments is electrochemical in nature. The synergistic effects of corrosion and abrasion are responsible for a large percentage of bit wear in laboratory studies. It has been shown that measured wear rates can be reduced by factors of two to five with the application of a voltage potential which opposes and exceeds the galvanic potential generated by the corrosion cells existing downhole. The present study investigates the potential for applying this technique in the downhole environment. The results demonstrate that a downhole generator sub powered by drilling fluid is a possible electrical power source. Graphite is chosen as the optimal nonsacrificial anode material for this application. Steel is also shown to be a possible anode material, but the anode would be sacrificial in this case, requiring periodic replacement. The electrical power required to achieve the desired effect for a 4-1/2 inch drill bit is determined to be on the order of one milliwatt. Additionally, up to 250 feet of 4 inch drill pipe could be protected from corrosion with power levels on the order of 150 milliwatts. These relatively low power levels suggest that dry cell batteries could alternatively be employed as the power source; however, the temperature limitations of commercially available batteries would have to be overcome for geothermal applications.

  9. Entanglement-assisted zero-error codes

    NASA Astrophysics Data System (ADS)

    Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan

    2011-03-01

Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield contextuality proofs which are particularly `strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.

  10. Sleep stage classification with low complexity and low bit rate.

    PubMed

    Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti

    2009-01-01

Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, using multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. Also, at least 12-bit analog-to-digital conversion is recommended (with 16-bit storage), resulting in a total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but in the case of online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed earlier single-channel facial EMG/EOG/EEG-based automatic sleep stage classification. An algorithm with a simple decision tree separated 30 s epochs into wakefulness, SREM, S1/S2 and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low-complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, reduced number of quantization steps and reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (having a low pass filter at 90 Hz), and the dynamic range to 244 microV, with an 8-bit resolution resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enable the use of smaller devices for sleep stage classification in home environments.
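
    The bit-rate arithmetic behind the abstract's figures, as a quick check. The four-channel count for the standard montage (EEG, two EOG, EMG) is an assumption consistent with the quoted 12.8 kbit/s; the reduced setup is the paper's single channel at 50 Hz and 8 bits:

```python
# Bit rate = channels * sampling rate * bits per sample.
def bit_rate(channels, sample_rate_hz, bits_per_sample):
    return channels * sample_rate_hz * bits_per_sample   # bit/s

standard = bit_rate(channels=4, sample_rate_hz=200, bits_per_sample=16)
reduced = bit_rate(channels=1, sample_rate_hz=50, bits_per_sample=8)

# standard -> 12800 bit/s (12.8 kbit/s); reduced -> 400 bit/s (0.4 kbit/s)
```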

  11. Serialized quantum error correction protocol for high-bandwidth quantum repeaters

    NASA Astrophysics Data System (ADS)

    Glaudell, A. N.; Waks, E.; Taylor, J. M.

    2016-09-01

Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB km^-1, logical gate failure probabilities of 10^-5, photon creation and measurement error rates of 10^-5, and a gate speed of 80 ps, we find the maximum single repeater chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the {[[3,1,2

  13. Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Iskenderian, Theodore; Sherrit, Stewart; Bao, Xiaoqi; Linderman, Randel

    2010-01-01

    Long tool bits are undergoing development that can be stowed compactly until used as rock- or ground-penetrating probes actuated by ultrasonic/sonic mechanisms. These bits are designed to be folded or rolled into compact form for transport to exploration sites, where they are to be connected to their ultrasonic/ sonic actuation mechanisms and unfolded or unrolled to their full lengths for penetrating ground or rock to relatively large depths. These bits can be designed to acquire rock or soil samples and/or to be equipped with sensors for measuring properties of rock or soil in situ. These bits can also be designed to be withdrawn from the ground, restowed, and transported for reuse at different exploration sites. Apparatuses based on the concept of a probe actuated by an ultrasonic/sonic mechanism have been described in numerous prior NASA Tech Briefs articles, the most recent and relevant being "Ultrasonic/ Sonic Impacting Penetrators" (NPO-41666) NASA Tech Briefs, Vol. 32, No. 4 (April 2008), page 58. All of those apparatuses are variations on the basic theme of the earliest ones, denoted ultrasonic/sonic drill corers (USDCs). To recapitulate: An apparatus of this type includes a lightweight, low-power, piezoelectrically driven actuator in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that the size of the axial force needed to make the tool bit advance into soil, rock, or another material of interest is much smaller than in ordinary twist drilling, ordinary hammering, or ordinary steady pushing. Examples of properties that could be measured by use of an instrumented tool bit include electrical conductivity, permittivity, magnetic

  14. A 100 MS/s 9 bit 0.43 mW SAR ADC with custom capacitor array

    NASA Astrophysics Data System (ADS)

    Jingjing, Wang; Zemin, Feng; Rongjin, Xu; Chixiao, Chen; Fan, Ye; Jun, Xu; Junyan, Ren

    2016-05-01

A low power 9 bit 100 MS/s successive approximation register analog-to-digital converter (SAR ADC) with a custom capacitor array is presented. A brand-new 3-D MOM unit capacitor is used as the basic capacitor cell of this capacitor array. The unit capacitor has a capacitance of 1 fF. Besides, the advanced capacitor array structure and switch mode considerably reduce the power consumption. To verify the effectiveness of this low power design, the 9 bit 100 MS/s SAR ADC is implemented in TSMC 1P9M 65 nm LP CMOS technology. The measurement results demonstrate that this design achieves an effective number of bits (ENOB) of 7.4 bit, a signal-to-noise plus distortion ratio (SNDR) of 46.40 dB and a spurious-free dynamic range (SFDR) of 62.31 dB at 100 MS/s with a 1 MHz input. The SAR ADC core occupies an area of 0.030 mm2 and consumes 0.43 mW under a supply voltage of 1.2 V. The figure of merit (FOM) of the SAR ADC achieves 23.75 fJ/conv. Project supported by the National High-Tech Research and Development Program of China (No. 2013AA014101).

  15. Compiler-Assisted Detection of Transient Memory Errors

    SciTech Connect

    Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-06-09

The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
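
    A minimal hand-written sketch of the underlying idea (the paper inserts such code at compile time; this runtime analogue is only illustrative): shadow every write with a checksum update, then re-verify on read so a transient bit flip in stored data is caught.

```python
# Checksum-shadowed array: writes update an XOR checksum, verification
# recomputes it, so a transient bit flip in the data is detected.
class CheckedArray:
    def __init__(self, values):
        self.data = list(values)
        self.checksum = 0
        for v in self.data:
            self.checksum ^= v          # XOR checksum over all elements

    def write(self, i, v):
        self.checksum ^= self.data[i]   # retire old value
        self.checksum ^= v              # account for new value
        self.data[i] = v

    def verify(self):
        acc = 0
        for v in self.data:
            acc ^= v
        return acc == self.checksum

a = CheckedArray([3, 5, 7])
a.write(1, 9)
ok_before = a.verify()                  # True: data consistent
a.data[2] ^= 1 << 4                     # simulated transient bit flip
ok_after = a.verify()                   # False: flip detected
```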

  16. CMOS RAM cosmic-ray-induced-error-rate analysis

    NASA Technical Reports Server (NTRS)

    Pickel, J. C.; Blandford, J. T., Jr.

    1981-01-01

    A significant number of spacecraft operational anomalies are believed to be associated with cosmic-ray-induced soft errors in the LSI memories. Test programs using a cyclotron to simulate cosmic rays have established conclusively that many common commercial memory types are vulnerable to heavy-ion upset. A description is given of the methodology and the results of a detailed analysis for predicting the bit-error rate in an assumed space environment for CMOS memory devices. Results are presented for three types of commercially available CMOS 1,024-bit RAMs. It was found that the HM6508 is susceptible to single-ion induced latchup from argon and krypton ions. The HS6508 and HS6508RH and the CDP1821 apparently are not susceptible to single-ion induced latchup.

  17. Reducing Soft-error Vulnerability of Caches using Data Compression

    SciTech Connect

    Vetter, Jeffrey S

    2016-01-01

With ongoing chip miniaturization and voltage scaling, particle strike-induced soft errors present an increasingly severe threat to the reliability of on-chip caches. In this paper, we present a technique to reduce the vulnerability of caches to soft errors. Our technique uses data compression to reduce the number of vulnerable data bits in the cache and performs selective duplication of more critical data bits to provide extra protection to them. Microarchitectural simulations have shown that our technique is effective in reducing the architectural vulnerability factor (AVF) of the cache and outperforms another technique. For single and dual-core system configurations, the average reduction in AVF is 5.59X and 8.44X, respectively. Also, the implementation and performance overheads of our technique are minimal and it is useful for a broad range of workloads.

  18. Modeling and analysis of drag-bit cutting

    SciTech Connect

    Swenson, D.V.

    1983-07-01

This report documents a finite-element analysis of drag-bit cutting using polycrystalline-diamond compact cutters. To verify the analysis capability, prototypic indentation tests were performed on Berea sandstone specimens. Analysis of these tests, using measured material properties, predicted fairly well the experimentally observed fracture patterns and indentation loads. The analysis of drag-bit cutting met with mixed success, being able to capture the major features of the cutting process, but not all the details. In particular, the analysis is sensitive to the assumed contact between the cutter and rock. Calculations of drag-bit cutting predict that typical vertical loads on the cutters are capable of forming fractures. Thus, indentation-type loading may be one of the main fracture mechanisms during drag-bit cutting, not only the intuitive notion of contact between the front of the cutter and rock. The model also predicts a change in the cutting process from tensile fractures to shear failure when the rock is confined by in-situ stresses. Both of these results have implications for the design and testing of drag-bit cutters.

  19. An SNR improvement of passive SAW tags with 5-bit Barker code sequence

    NASA Astrophysics Data System (ADS)

    Bae, Hyunchul; Kim, Jaekwon; Burm, Jinwook

    2012-07-01

Passive surface acoustic wave (SAW) tags require a large signal-to-noise ratio (SNR) in order to increase the interrogation range. For the purpose of achieving high SNR for radio frequency identification (RFID) communication systems, Barker codes, a binary phase shift keying (BPSK) modulation technique, have been adopted in this study. Passive SAW RFID tags were designed with 5-bit Barker code sequences to generate BPSK modulated signals. The SNR analysis showed an improvement of about 11 dB from using Barker codes with a correlator, which can be increased further by optimisation of the correlator.
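
    The property the tag design exploits: the length-5 Barker sequence + + + - + has an aperiodic autocorrelation with a peak of 5 and sidelobes of magnitude at most 1, so a correlating reader concentrates the signal energy at the peak while suppressing off-peak responses. A quick check:

```python
# Aperiodic autocorrelation of the length-5 Barker code + + + - +.
BARKER5 = [1, 1, 1, -1, 1]

def autocorr(seq):
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag))
            for lag in range(n)]

acf = autocorr(BARKER5)        # [5, 0, 1, 0, 1]
peak, sidelobe = acf[0], max(abs(v) for v in acf[1:])
```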

  20. Image steganography based on 2^k correction and coherent bit length

    NASA Astrophysics Data System (ADS)

    Sun, Shuliang; Guo, Yongning

    2014-10-01

In this paper, a novel algorithm is proposed. Firstly, the edge of the cover image is detected with the Canny operator and secret data is embedded in edge pixels. A sorting method is used to randomize the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2^k correction is applied to achieve better imperceptibility in the stego image. Experiments show that the proposed method is better than LSB-3 and Jae-Gil Yu's method in PSNR and capacity.
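
    A sketch of 2^k correction as commonly described in the steganography literature (not the paper's exact implementation): after forcing k secret bits into a pixel's k least significant bits, shift the result by ±2^k when that brings it closer to the cover value. The embedded bits survive the shift, and away from the 0/255 borders the embedding error is at most 2^(k-1):

```python
# k-LSB embedding followed by the 2^k correction step.
def embed(pixel, secret_bits, k):
    base = ((pixel >> k) << k) | secret_bits        # plain k-LSB embedding
    candidates = [c for c in (base - 2**k, base, base + 2**k)
                  if 0 <= c <= 255]                 # 2^k correction step
    return min(candidates, key=lambda c: abs(c - pixel))

def extract(stego, k):
    return stego & (2**k - 1)

stego = embed(pixel=96, secret_bits=0b111, k=3)     # -> 95: error 1, not 7
```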

  1. Control of Spacecraft Formations Around the Libration Points Using Electric Motors with One Bit of Resolution

    NASA Astrophysics Data System (ADS)

    Serpelloni, Edoardo; Maggiore, Manfredi; Damaren, Christopher J.

    2015-02-01

This paper investigates a formation control problem for two space vehicles in the vicinity of the L2 libration point of the Sun-Earth/Moon system. The objective is to accurately regulate the relative position vector between the vehicles to a desired configuration, under tight tolerances. It is shown that the formation control problem is solvable using six constant-thrust electric actuators requiring only one bit of resolution, and bounded switching frequency. The proposed control law is hybrid, and it coordinates the sequence of on-off switches of the thrusters so as to achieve the control objective and, at the same time, avoid high-frequency switching.

  2. AMJoin: An Advanced Join Algorithm for Multiple Data Streams Using a Bit-Vector Hash Table

    NASA Astrophysics Data System (ADS)

    Kwon, Tae-Hyung; Kim, Hyeon-Gyu; Kim, Myoung-Ho; Son, Jin-Hyun

    A multiple-stream join is one of the most important but costly operations in ubiquitous streaming services. In this paper, we propose an improved, practical algorithm for joining multiple streams, called AMJoin, which improves multiple-join performance by guaranteeing the detection of join failures in constant time. To achieve this goal, we first design a new data structure called BiHT (Bit-vector Hash Table) and present the overall behavior of AMJoin in detail. In addition, we show various experimental results and their analyses, clarifying its efficiency and practicability.
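
    A toy version of the bit-vector idea (our illustration, not the paper's BiHT structure): keep one machine word per key, with one bit per input stream; whether a join can still fail for that key is answered in constant time by comparing the word against the all-streams mask.

```python
NUM_STREAMS = 3
ALL_SEEN = (1 << NUM_STREAMS) - 1   # mask with one bit per input stream

biht = {}  # key -> bit vector recording which streams have seen the key

def arrive(stream_id, key):
    """Register a tuple from `stream_id`; return True only once every
    stream has produced the key, i.e. the join can no longer fail."""
    biht[key] = biht.get(key, 0) | (1 << stream_id)
    return biht[key] == ALL_SEEN
```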

  3. Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification

    NASA Astrophysics Data System (ADS)

    Zhang, Ruimao; Lin, Liang; Zhang, Rui; Zuo, Wangmeng; Zhang, Lei

    2015-12-01

    Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. Specifically, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between matched pairs and mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce adjacency consistency, i.e., images of similar appearance should have similar codes. A deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-art methods on public benchmarks for similar-image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes preserve discriminative power well at shorter code lengths.
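
    The bit-scalability described above can be mimicked in a few lines (a sketch under our own conventions, not the authors' network): each bit position carries a learned weight, and shorter codes are obtained by keeping only the highest-weight positions, with retrieval using a weighted Hamming distance.

```python
def truncate_codes(codes, weights, m):
    """Bit-scalable truncation: keep the m highest-weight bit positions."""
    keep = sorted(range(len(weights)), key=lambda i: -weights[i])[:m]
    return [[c[i] for i in keep] for c in codes], [weights[i] for i in keep]

def weighted_hamming(a, b, w):
    """Hamming distance where each differing bit contributes its weight."""
    return sum(wi for ai, bi, wi in zip(a, b, w) if ai != bi)
```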

  4. Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification.

    PubMed

    Zhang, Ruimao; Lin, Liang; Zhang, Rui; Zuo, Wangmeng; Zhang, Lei

    2015-12-01

    Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce adjacency consistency, i.e., images of similar appearance should have similar codes. A deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-art methods on public benchmarks for similar-image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes preserve discriminative power well at shorter code lengths.

  5. Coded error probability evaluation for antijam communication systems

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Levitt, B. K.

    1982-01-01

    We present a general union-Chernoff bound on the bit error probability for coded communication systems and apply it to examples of antijam systems. The key feature of this bound is that it decouples the coding aspects of the system from the remaining parts of the communication system, which include jamming, suboptimum detectors, and arbitrary decoding metrics that may or may not use jammer state knowledge.
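
    For intuition, a bound of this family can be evaluated numerically: the coding side enters only through a distance spectrum, and the channel side only through a single parameter D. The sketch below is our illustration; the hypothetical spectrum and the Bhattacharyya-style choice D = exp(-Es/N0) are assumptions, not values from the paper.

```python
import math

def union_chernoff_bound(spectrum, es_n0):
    """Union-Chernoff style bound: P_b <= sum_d a_d * D**d, where the
    channel is summarized by D = exp(-Es/N0) (assumed form) and
    `spectrum` maps distance d -> multiplicity a_d (coding side only)."""
    D = math.exp(-es_n0)
    return sum(a_d * D ** d for d, a_d in spectrum.items())

# Hypothetical spectrum: one codeword at distance 5, two at distance 6
bound = union_chernoff_bound({5: 1, 6: 2}, es_n0=1.0)
```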

  6. Modification of error reconciliation scheme for quantum cryptography

    NASA Astrophysics Data System (ADS)

    Kuritsyn, Konstantin

    2003-07-01

    Quantum cryptography is essentially quantum key distribution (QKD). In QKD, one of the two partners (Alice) generates and sends a sequence of qubits through a private quantum channel to the other partner (Bob), who receives the sequence and measures the state of each qubit. After the quantum transmission stage, Alice and Bob have almost identical qubit sequences. The errors are due to physical imperfections in the channel and the presence of an eavesdropper. The next stage in QKD is key reconciliation, i.e., finding and correcting discrepancies between Alice's string and Bob's. This reconciliation can be done by public discussion. Suppose there is a secret quantum channel between Alice and Bob through which Alice transmits an n-bit string A = (A1, A2, ..., An) ∈ {0,1}^n. Bob then receives an n-bit string B = (B1, B2, ..., Bn) ∈ {0,1}^n, which differs from A owing to noise and the eavesdropper in the channel. One can estimate the bit error probability in the channel: for example, Bob can choose a random subset of his string and send it to Alice in public; Alice then compares the received string with her corresponding subset and estimates the error rate. The cascade scheme uses interaction over the public channel to correct the secret strings by dividing them into blocks of a fixed length, determined from the bit error probability. A simple interactive routine is applied in each of these blocks, and an error found in some block triggers further actions on other blocks. It is important to optimize the error-finding routines within standalone blocks, as well as the construction of the blocks themselves, with respect to protocol performance, information leakage, and the number of interactions between the partners.
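
    The interactive error-finding routine inside a block can be illustrated with the classic bisection step of cascade-style reconciliation (a minimal sketch of that well-known idea, not the modified scheme of this paper): comparing sub-block parities over the public channel narrows an odd number of errors down to one position in about log2(n) exchanges.

```python
def parity(bits):
    return sum(bits) % 2

def binary_find_error(alice, bob):
    """Bisection ('BINARY') step of cascade-style reconciliation: locate one
    differing bit position, assuming the block holds an odd number of errors.
    Each iteration costs one parity exchange over the public channel."""
    lo, hi = 0, len(alice)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid          # the (odd) error count is in the left half
        else:
            lo = mid          # otherwise it is in the right half
    return lo
```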

  7. Testing of Error-Correcting Sparse Permutation Channel Codes

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill, V.; Orlov, Sergei S.

    2008-01-01

    A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 ("on") bits in a channel block of length N.
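
    The information carried by such a word follows from counting: there are C(N, K) ways to place K "on" bits in a block of length N, so the per-channel-bit rate is log2(C(N, K))/N. A quick sketch of that arithmetic (ours, using Python's standard library):

```python
from math import comb, log2

def sparse_code_rate(n, k):
    """Information per channel bit for a word with exactly k 'on' bits
    in a block of length n: log2(C(n, k)) / n."""
    return log2(comb(n, k)) / n
```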

  8. The effects of long delay and transmission errors on the performance of TP-4 implementations

    NASA Technical Reports Server (NTRS)

    Durst, Robert C.; Evans, Eric L.; Mitchell, Randy C.

    1991-01-01

    A set of tools has been developed that allows us to measure and examine the effects of transmission delay and errors on the performance of TP-4 implementations. The tools give insight into both the large- and small-scale behaviors of an implementation. These tools have been systematically applied to a commercial implementation of TP-4. Measurements show, among other things, that a 2-second one-way transmission delay and an effective bit-error rate of 1 error per 100,000 bits can result in a 95 percent reduction in TP-4 throughput. The detailed statistics give insight into why transmission delay and errors affect this implementation so significantly and support a number of 'lessons learned' that could be applied to TP-4 implementations that operate more robustly across networks with long transmission delays and transmission errors.
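
    The quoted degradation is plausible from first principles: with independent bit errors at rate p, a frame of n bits survives with probability (1-p)^n, and every lost frame costs at least one long round trip to recover. A back-of-the-envelope sketch (our illustrative frame size, not the paper's measurements):

```python
def frame_success_prob(ber, frame_bits):
    """Probability an entire frame arrives error-free, assuming
    independent bit errors of probability `ber`."""
    return (1.0 - ber) ** frame_bits

# Illustrative only: 10 kbit frames at the paper's 1-per-100,000 error rate
p_ok = frame_success_prob(1e-5, 10_000)
```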

  9. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed-Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed-Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check. PMID:21102793
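
    The role of the final CRC as a last-resort detector of residual or miscorrected data can be mimicked with a standard 32-bit CRC (a generic illustration using Python's `zlib.crc32`; the CD-ROM's actual EDC polynomial differs): any surviving burst shorter than the CRC width is guaranteed to change the checksum.

```python
import zlib

sector = bytes(range(64))              # stand-in for decoded sector data
burst = bytearray(sector)
burst[10:14] = b"\xff\xff\xff\xff"     # simulate an uncorrected 4-byte burst

ok = zlib.crc32(sector)                # checksum of the clean sector
bad = zlib.crc32(bytes(burst))         # checksum after the burst
```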

  10. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  11. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  12. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  13. Unconditionally secure bit commitment by transmitting measurement outcomes.

    PubMed

    Kent, Adrian

    2012-09-28

    We propose a new unconditionally secure bit commitment scheme based on Minkowski causality and the properties of quantum information. The receiving party sends a number of randomly chosen Bennett-Brassard 1984 (BB84) qubits to the committer at a given point in space-time. The committer carries out measurements in one of the two BB84 bases, depending on the committed bit value, and transmits the outcomes securely at (or near) light speed in opposite directions to remote agents. These agents unveil the bit by returning the outcomes to adjacent agents of the receiver. The protocol's security relies only on simple properties of quantum information and the impossibility of superluminal signalling. PMID:23030073

  14. New bits, motors improve economics of slim hole horizontal wells

    SciTech Connect

    McDonald, S.; Felderhoff, F.; Fisher, K.

    1996-03-11

    The latest generation of small-diameter bits, combined with a new extended power section positive displacement motor (PDM), has improved the economics of slim hole drilling programs. As costs are driven down, redevelopment reserves are generated in the older, more established fields. New reserves result from increases in the ultimate recovery and accelerated production rates from the implementation of horizontal wells in reentry programs. This logic stimulated an entire development program for a Gulf of Mexico platform, which was performed without significant compromises in well bore geometry. The savings from this new-generation drilling system come from reducing the total number of trips required during the drilling phase. This paper reviews the design improvements of roller cone bits, PDC bits, and positive displacement motors for offshore directional drilling operations.

  15. Fully photonics-based physical random bit generator.

    PubMed

    Li, Pu; Sun, Yuanyuan; Liu, Xianglian; Yi, Xiaogang; Zhang, Jianguo; Guo, Xiaomin; Guo, Yanqiang; Wang, Yuncai

    2016-07-15

    We propose a fully photonics-based approach for ultrafast physical random bit generation. This approach exploits a compact nonlinear loop mirror (called a terahertz optical asymmetric demultiplexer, TOAD) to sample the chaotic optical waveform in an all-optical domain and then generate random bit streams through further comparison with a threshold level. This method can efficiently overcome the electronic jitter bottleneck confronted by existing random bit generators (RBGs) in practice. A proof-of-concept experiment demonstrates that this method can continuously extract 5 Gb/s random bit streams from the chaotic output of a distributed feedback laser diode (DFB-LD) with optical feedback. This generation rate is limited by the bandwidth of the optical chaos used. PMID:27420532
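
    The final comparison stage can be pictured in a few lines (our sketch; the experiment performs it optically, not in software): each sample of the chaotic waveform is compared against a threshold, chosen here as the median so that the raw bit stream is roughly balanced between 0s and 1s.

```python
def extract_bits(samples):
    """Threshold comparison: emit 1 when a sample exceeds the threshold,
    taken here as the median of the sampled waveform."""
    thr = sorted(samples)[len(samples) // 2]
    return [1 if s > thr else 0 for s in samples]
```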

  16. Security bound of cheat sensitive quantum bit commitment

    PubMed Central

    He, Guang Ping

    2015-01-01

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities. PMID:25796977

  17. Drill Bits: Education and Outreach for Scientific Drilling Projects

    NASA Astrophysics Data System (ADS)

    Prose, D. V.; Lamacchia, D. M.

    2007-12-01

    Drill Bits is a series of short, three- to five-minute videos that explore the research and capture the challenging nature of large scientific drilling projects occurring around the world. The drilling projects, conducted under the auspices of the International Continental Scientific Drilling Program (ICDP), address fundamental earth science topics, including those of significant societal relevance such as earthquakes, volcanoes, and global climate change. The videos are filmed on location and aimed at nonscientific audiences. The purpose of the Drill Bits series is to provide scientific drilling organizations, scientists, and educators with a versatile tool to help educate the public, students, the media, and public officials about scientific drilling. The videos are designed to be viewed in multiple formats: on DVD; videotape; and science-related web sites, where they can be streamed or downloaded as video podcasts. Several Drill Bits videos will be screened, and their uses for outreach and education will be discussed.

  18. Inexpensive programmable clock for a 12-bit computer

    NASA Technical Reports Server (NTRS)

    Vrancik, J. E.

    1972-01-01

    An inexpensive programmable clock was built for a digital PDP-12 computer. The instruction list includes skip on flag; clear the flag, clear the clock, and stop the clock; and preset the counter with the contents of the accumulator and start the clock. The clock counts at a rate determined by an external oscillator and causes an interrupt and sets a flag when a 12-bit overflow occurs. An overflow can occur after 1 to 4096 counts. The clock can be built for a total parts cost of less than $100 including power supply and I/O connector. Slight modification can be made to permit its use on larger machines (16 bit, 24 bit, etc.) and logic level shifting can be made to make it compatible with any computer.
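
    The preset-and-overflow behaviour is easy to model (a sketch of the behaviour described above, not the device's logic): presetting the 12-bit counter with the accumulator value selects how many external-oscillator counts elapse before the overflow raises the flag and interrupt.

```python
MASK = 0xFFF  # 12-bit counter wraps at 4096

def ticks_until_overflow(preset):
    """Counts from `preset` until the 12-bit overflow that sets the flag
    and causes the interrupt: anywhere from 1 to 4096 ticks."""
    return 4096 - (preset & MASK)
```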

  19. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-01-01

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities. PMID:25796977

  20. BitCube: A Bottom-Up Cubing Engineering

    NASA Astrophysics Data System (ADS)

    Ferro, Alfredo; Giugno, Rosalba; Puglisi, Piera Laura; Pulvirenti, Alfredo

    Enhancing online analytical processing through efficient cube computation plays a key role in data warehouse management. Hashing, grouping, and mining techniques are commonly used to improve cube pre-computation. BitCube, a fast cubing method that uses bitmaps as inverted indexes for grouping, is presented. It horizontally partitions data according to the values of one dimension, and for each resulting fragment it performs grouping following bottom-up criteria. BitCube also allows partial materialization based on iceberg conditions, to handle large datasets for which a full cube pre-computation is too expensive. The space requirement of the bitmaps is optimized by applying an adaptation of the WAH compression technique. Experimental analysis, on both synthetic and real datasets, shows that BitCube outperforms previous algorithms for full cube computation and is comparable for iceberg cubing.
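
    The bitmap-as-inverted-index idea can be shown in miniature (our sketch, not BitCube itself): each dimension value maps to an integer whose set bits are row ids, so a group-by across dimensions reduces to a bitwise AND.

```python
def bitmap_index(values):
    """Inverted bitmap index: value -> int whose set bits are row ids."""
    idx = {}
    for row, v in enumerate(values):
        idx[v] = idx.get(v, 0) | (1 << row)
    return idx

dim_a = bitmap_index(["x", "y", "x", "y"])
dim_b = bitmap_index(["p", "p", "q", "p"])

# Rows in group (dimA == 'y', dimB == 'p'): a single bitwise AND
group = dim_a["y"] & dim_b["p"]
```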

  1. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-03-23

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  2. Fully photonics-based physical random bit generator.

    PubMed

    Li, Pu; Sun, Yuanyuan; Liu, Xianglian; Yi, Xiaogang; Zhang, Jianguo; Guo, Xiaomin; Guo, Yanqiang; Wang, Yuncai

    2016-07-15

    We propose a fully photonics-based approach for ultrafast physical random bit generation. This approach exploits a compact nonlinear loop mirror (called a terahertz optical asymmetric demultiplexer, TOAD) to sample the chaotic optical waveform in an all-optical domain and then generate random bit streams through further comparison with a threshold level. This method can efficiently overcome the electronic jitter bottleneck confronted by existing random bit generators (RBGs) in practice. A proof-of-concept experiment demonstrates that this method can continuously extract 5 Gb/s random bit streams from the chaotic output of a distributed feedback laser diode (DFB-LD) with optical feedback. This generation rate is limited by the bandwidth of the optical chaos used.

  3. A bit allocation method for sparse source coding.

    PubMed

    Kaaniche, Mounir; Fraysse, Aurélia; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe

    2014-01-01

    In this paper, we develop an efficient bit allocation strategy for subband-based image coding systems. More specifically, our objective is to design a new optimization algorithm based on a rate-distortion optimality criterion. To this end, we consider the uniform scalar quantization of a class of mixed distributed sources following a Bernoulli-generalized Gaussian distribution. This model appears to be particularly well-adapted for image data, which have a sparse representation in a wavelet basis. In this paper, we propose new approximations of the entropy and the distortion functions using piecewise affine and exponential forms, respectively. Because of these approximations, bit allocation is reformulated as a convex optimization problem. Solving the resulting problem allows us to derive the optimal quantization step for each subband. Experimental results show the benefits that can be drawn from the proposed bit allocation method in a typical transform-based coding application.
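
    The flavour of such a rate-distortion allocation can be conveyed by the textbook high-rate rule for Gaussian subbands, where subband i receives b_i = B/N + (1/2) log2(sigma_i^2 / geometric-mean variance). This is a classical sketch, not the Bernoulli-generalized-Gaussian algorithm of the paper:

```python
import math

def allocate_bits(variances, total_bits):
    """High-rate bit allocation: each subband gets an equal share plus half
    the log-ratio of its variance to the geometric-mean variance, so the
    shares sum exactly to the total budget."""
    n = len(variances)
    geo = math.exp(sum(math.log(v) for v in variances) / n)
    return [total_bits / n + 0.5 * math.log2(v / geo) for v in variances]

bits = allocate_bits([1.0, 4.0], total_bits=8)
```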

  4. Can relativistic bit commitment lead to secure quantum oblivious transfer?

    NASA Astrophysics Data System (ADS)

    He, Guang Ping

    2015-05-01

    While unconditionally secure bit commitment (BC) is considered impossible within the quantum framework, it can be obtained under relativistic or experimental constraints. Here we study whether such BC can lead to secure quantum oblivious transfer (QOT). The answer is not completely negative. On the one hand, we provide a detailed cheating strategy, showing that the "honest-but-curious adversaries" in some of the existing no-go proofs on QOT still apply even if secure BC is used, enabling the receiver to increase the average reliability of the decoded value of the transferred bit. On the other hand, it is also found that some other no-go proofs, which claim that a dishonest receiver can always decode all transferred bits simultaneously with 100% reliability, become invalid in this scenario, because their models of cryptographic protocols are too ideal to cover such a BC-based QOT.

  5. Security bound of cheat sensitive quantum bit commitment

    NASA Astrophysics Data System (ADS)

    He, Guang Ping

    2015-03-01

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  6. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.

  7. Decoding and synchronization of error correcting codes

    NASA Astrophysics Data System (ADS)

    Madkour, S. A.

    1983-01-01

    Decoding devices for hard-quantization and soft-decision error correcting codes are discussed. A Meggitt decoder for Reed-Solomon polynomial codes was implemented and tested; it uses 49 TTL logic ICs, and a maximum binary rate of 30 Mbit/s is demonstrated. A soft-decision approach was then applied to hard-decision decoding, using the principles of threshold decoding. Simulation results indicate that the proposed scheme achieves satisfactory performance using only a small number of parity checks. The combined correction of substitution and synchronization errors is also analyzed. The algorithm presented shows the capability of convolutional codes to correct synchronization errors, as well as independent additive errors, without any additional redundancy.
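
    The threshold-decoding principle mentioned above reduces, in its simplest form, to majority voting over parity checks. A minimal sketch for the degenerate case of an n-fold repetition code (ours, far simpler than the decoders in the report):

```python
def threshold_decode(received, n):
    """Majority-vote (threshold) decoding of an n-fold repetition code:
    each block of n received bits decodes to the value held by more than
    half of them."""
    return [1 if sum(received[i:i + n]) > n // 2 else 0
            for i in range(0, len(received), n)]
```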

  8. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  9. Results of no-flow rotary drill bit comparison testing

    SciTech Connect

    WITWER, K.S.

    1998-11-30

    This document describes the results of testing of a newer rotary sampling bit and sampler insert called the No-Flow System. This No-Flow System was tested side by side against the currently used rotary bit and sampler insert, called the Standard System. The two systems were tested using several "hard to sample" granular non-hazardous simulants to determine which could provide greater sample recovery. The No-Flow System measurably outperformed the Standard System in each of the tested simulants.

  10. Cloning the entanglement of a pair of quantum bits

    SciTech Connect

    Lamoureux, Louis-Philippe; Navez, Patrick; Cerf, Nicolas J.; Fiurasek, Jaromir

    2004-04-01

    It is shown that any quantum operation that perfectly clones the entanglement of all maximally entangled qubit pairs cannot preserve separability. This 'entanglement no-cloning' principle naturally suggests that some approximate cloning of entanglement is nevertheless allowed by quantum mechanics. We investigate a separability-preserving optimal cloning machine that duplicates all maximally entangled states of two qubits, resulting in 0.285 bits of entanglement per clone, while a local cloning machine only yields 0.060 bits of entanglement per clone.

  11. Development of a jet-assisted polycrystalline diamond drill bit

    SciTech Connect

    Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.

    1997-12-31

    A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated downhole by a mud-actuated hammer. Initial laboratory studies show that rate-of-penetration increases on the order of a factor of two over unaugmented rotary and/or percussive drilling are possible with jet assistance.

  12. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
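
    The core idea — code each bit position of the fixed-length codewords as if it were an independent binary source — can be illustrated with a short sketch (our own construction, not the article's coder): it quantizes Gaussian samples to 8-bit codewords and sums the per-bit-plane binary entropies, which is the ideal rate a bitwise arithmetic coder would approach.

```python
import numpy as np

def bitwise_code_length(samples, bits=8):
    """Estimate the ideal coded size (in bits) when each codeword bit
    position is arithmetic-coded independently of the others."""
    # Uniformly quantize samples to unsigned fixed-length codewords.
    lo, hi = samples.min(), samples.max()
    codes = np.round((samples - lo) / (hi - lo) * (2**bits - 1)).astype(int)
    total = 0.0
    for b in range(bits):
        p1 = ((codes >> b) & 1).mean()           # P(bit b == 1)
        if 0 < p1 < 1:                            # binary entropy of this plane
            total += -(p1*np.log2(p1) + (1-p1)*np.log2(1-p1)) * len(codes)
    return total

rng = np.random.default_rng(0)
x = rng.normal(size=10000)
ideal = bitwise_code_length(x, bits=8)
print(f"fixed-length: {8*len(x)} bits, bitwise ideal: {ideal:.0f} bits")
```

The gap between the fixed-length cost and the bitwise-entropy total is the compression this scheme can capture despite ignoring inter-bit dependence.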

  13. A low cost alternative to high performance PCM bit synchronizers

    NASA Technical Reports Server (NTRS)

    Deshong, Bruce

    1993-01-01

    The Code Converter/Clock Regenerator (CCCR) provides a low-cost alternative to high-performance Pulse Code Modulation (PCM) bit synchronizers in environments with a large Signal-to-Noise Ratio (SNR). In many applications, the CCCR can be used in place of PCM bit synchronizers at about one fifth the cost. The CCCR operates at rates from 10 bps to 2.5 Mbps and performs PCM code conversion and clock regeneration. The CCCR has been integrated into a stand-alone system configurable from one to six channels and has also been designed for use in VMEbus compatible systems.

  14. Hanford coring bit temperature monitor development testing results report

    SciTech Connect

    Rey, D.

    1995-05-01

    Instrumentation which directly monitors the temperature of a coring bit used to retrieve core samples of high level nuclear waste stored in tanks at Hanford was developed at Sandia National Laboratories. Monitoring the temperature of the coring bit is desired to enhance the safety of the coring operations. A unique application of mature technologies was used to accomplish the measurement. This report documents the results of development testing performed at Sandia to assure the instrumentation will withstand the severe environments present in the waste tanks.

  15. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    PubMed Central

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high capacity gains in interference channels but requires knowledge of the channel state information (CSI) for all the system links. We design low-complexity, low-bit-rate feedback strategies in which a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by the BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI, and this part can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, in which channel quantization errors are taken into account in the IA design, is also proposed and evaluated. With our proposed strategies, only a small number of quantization bits is needed to transmit and share the CSI, compared with the techniques used in previous works, while achieving performance close to that obtained with perfect channel knowledge. PMID:24678274
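
    A toy illustration of the general idea of quantized CSI feedback (our own simplified sketch, not the paper's strategy): each channel coefficient's phase is quantized to B bits and the normalized error the BS would see is measured. The phase-only quantization and all variable names are assumptions for illustration.

```python
import numpy as np

# Rayleigh-like channel coefficients for one UT-BS link.
rng = np.random.default_rng(3)
h = (rng.normal(size=64) + 1j*rng.normal(size=64)) / np.sqrt(2)

def quantize_phase(h, B):
    """Keep magnitudes, round each coefficient's phase to a 2**B-level grid."""
    step = 2*np.pi / 2**B
    return np.abs(h) * np.exp(1j * np.round(np.angle(h)/step) * step)

for B in (2, 4, 6):
    err = np.mean(np.abs(h - quantize_phase(h, B))**2) / np.mean(np.abs(h)**2)
    print(f"B={B} bits per coefficient: normalized MSE {err:.4f}")
```

The error falls roughly as the square of the quantization step, which is why a modest number of bits per coefficient already gives near-perfect-CSI behavior.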

  16. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden on health care systems, apart from the harm caused to patients. Common causes of these errors and their prevention are discussed. PMID:20640103

  17. Analysis of Efficiency of Drilling of Large-Diameter Wells With a Profiled Wing Bit / Badania Efektywności Wiercenia Studni Wielkośrednicowych Świdrem Skrawającym z Profilowanymi Skrzydłami

    NASA Astrophysics Data System (ADS)

    Macuda, Jan

    2012-11-01

    In Poland all lignite mines are dewatered with large-diameter wells. Drilling of such wells is inefficient owing to the presence of loose Quaternary and Tertiary material and considerable dewatering of the rock mass within the open-pit area. Difficult geological conditions significantly prolong the time needed to drill large-diameter dewatering wells, and various drilling complications and breakdowns related to caving may occur. Higher drilling rates in large-diameter wells can be achieved only when new cutter-bit designs are worked out and rock drillability tests are performed to find optimum mechanical parameters of the drilling technology. Those tests were performed for a bit of ø 1.16 m in separated, macroscopically homogeneous layers of similar drillability. Depending on the designed thickness of the drilled layer, measurement sections from 0.2 to 1.0 m long were laid out, and each section was drilled at constant values of rotary speed and weight on bit. Prior to the drillability tests, accounting for the technical characteristics of the rig and the strength of the string and the cutter bit, limits were established for the mechanical parameters of the drilling technology: P ∈ (P_min; P_max), n ∈ (n_min; n_max), where P_min, P_max are the lowest and highest values of weight on bit and n_min, n_max the lowest and highest values of rotary speed of the bit. To find the dependence of the rate of penetration on weight on bit and rotary speed, various regression models were analyzed. The most satisfactory results were obtained for the exponential model describing the influence of weight on bit and rotary speed on drilling rate. The regression coefficients and statistical parameters prove the good fit of the model to the measurement data presented in tables 4-6. The average drilling rate for a cutter bit with profiled wings has been described in the form V_śr = Z · P^a · n^b, where: V_śr - average drilling rate, Z - drillability coefficient, P
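
    The exponential model above can be fitted by ordinary least squares after taking logarithms, since ln V = ln Z + a ln P + b ln n. A minimal sketch with synthetic (made-up) drilling data, checking that the fit recovers known coefficients:

```python
import numpy as np

# Synthetic drilling data (hypothetical values): weight on bit P, rotary
# speed n, and drilling rate V generated from known coefficients plus
# small multiplicative noise, so the fit can be checked against truth.
rng = np.random.default_rng(1)
Z_true, a_true, b_true = 0.05, 0.8, 0.6
P = rng.uniform(20, 120, size=50)
n = rng.uniform(5, 30, size=50)
V = Z_true * P**a_true * n**b_true * np.exp(rng.normal(0, 0.02, size=50))

# ln V = ln Z + a ln P + b ln n  ->  ordinary least squares.
A = np.column_stack([np.ones_like(P), np.log(P), np.log(n)])
coef, *_ = np.linalg.lstsq(A, np.log(V), rcond=None)
Z_hat, a_hat, b_hat = np.exp(coef[0]), coef[1], coef[2]
print(f"Z={Z_hat:.3f}, a={a_hat:.2f}, b={b_hat:.2f}")
```

The log transform turns the power-law model into a linear regression, which is the standard way such drillability coefficients are estimated from measurement sections.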

  18. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    SciTech Connect

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues considerably reduce the effective transmission data rate (throughput) in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated above by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error-rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit error and the throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading channels and compared to those of the conventional Advanced Encryption Standard (AES).

  19. An 11 μW Sub-pJ/bit Reconfigurable Transceiver for mm-Sized Wireless Implants.

    PubMed

    Yakovlev, Anatoly; Jang, Ji Hoon; Pivonka, Daniel

    2016-02-01

    A wirelessly powered 11 μW transceiver for implantable devices has been designed and demonstrated through 35 mm of porcine heart tissue. The prototype was implemented in 65 nm CMOS occupying 1 mm × 1 mm with a 2 mm × 2 mm off-chip antenna. The IC consists of a rectifier, regulator, demodulator, modulator, controller, and sensor interface. The forward link transfers power and data on a 1.32 GHz carrier using low-depth ASK modulation that minimizes impact on power delivery and achieves from 4 to 20 Mbps with 0.3 pJ/bit at 4 Mbps. The backscattering link modulates the antenna impedance with a configurable load for operation in diverse biological environments and achieves up to 2 Mbps at 0.7 pJ/bit. The device supports TDMA, allowing for operation of multiple devices from a single external transceiver.

  20. A Planar Approximation for the Least Reliable Bit Log-likelihood Ratio of 8-PSK Modulation

    NASA Technical Reports Server (NTRS)

    Thesling, William H.; Vanderaar, Mark J.

    1994-01-01

    The optimum decoding of component codes in block coded modulation (BCM) schemes requires the use of the log-likelihood ratio (LLR) as the signal metric. An approximation to the LLR for the least reliable bit (LRB) in an 8-PSK modulation based on planar equations with fixed point arithmetic is developed that is both accurate and easily realizable for practical BCM schemes. Through an error power analysis and an example simulation it is shown that the approximation results in 0.06 dB in degradation over the exact expression at an E(sub s)/N(sub o) of 10 dB. It is also shown that the approximation can be realized in combinatorial logic using roughly 7300 transistors. This compares favorably to a look up table approach in typical systems.
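
    For reference, the exact LLR that the planar equations approximate can be computed by brute force over the constellation. The sketch below assumes a Gray-mapped 8-PSK (the mapping is our assumption; the paper's fixed-point planar approximation itself is not reproduced) and evaluates the exact expression for one label bit:

```python
import numpy as np

# Gray-mapped 8-PSK: symbol k sits at angle (2k+1)*pi/8 with label gray(k).
labels = [k ^ (k >> 1) for k in range(8)]                  # Gray codes
points = np.exp(1j * (2*np.arange(8) + 1) * np.pi / 8)

def exact_llr(r, bit, N0):
    """Exact log-likelihood ratio of one label bit given received sample r:
    log of the ratio of summed Gaussian likelihoods over bit=0 vs bit=1
    symbols."""
    m = np.exp(-np.abs(r - points)**2 / N0)
    num = sum(m[k] for k in range(8) if not (labels[k] >> bit) & 1)
    den = sum(m[k] for k in range(8) if (labels[k] >> bit) & 1)
    return np.log(num / den)

# A noiseless received sample equal to a transmitted symbol should yield
# an LLR whose sign matches that symbol's bit at moderate Es/N0.
llr0 = exact_llr(points[0], bit=0, N0=0.1)   # label 0b000 -> bit 0 is 0
print(llr0)
```

Any hardware-friendly approximation, planar or otherwise, is judged by how closely it tracks this expression over the received-signal plane.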

  1. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  2. Medical errors in obstetrics

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as their definition and treatment, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. At the same time, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  3. Cheat-sensitive commitment of a classical bit coded in a block of m × n round-trip qubits

    SciTech Connect

    Shimizu, Kaoru; Fukasaka, Hiroyuki; Tamaki, Kiyoshi; Imoto, Nobuyuki

    2011-08-15

    This paper proposes a quantum protocol for a cheat-sensitive commitment of a classical bit. Alice, the receiver of the bit, can examine dishonest Bob, who changes or postpones his choice. Bob, the sender of the bit, can examine dishonest Alice, who violates concealment. For each round-trip case, Alice sends one of two spin states |S±⟩ by choosing basis S at random from two conjugate bases X and Y. Bob chooses basis C ∈ {X,Y} to perform a measurement and returns a resultant state |C±⟩. Alice then performs a measurement with the other basis R (≠S) and obtains an outcome |R±⟩. In the opening phase, she can discover dishonest Bob, who unveils a wrong basis with a faked spin state, or Bob can discover dishonest Alice, who infers basis C but destroys |C±⟩ by setting R to be identical to S in the commitment phase. If a classical bit is coded in a block of m × n qubit particles, impartial examinations and probabilistic security criteria can be achieved.

  4. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    PubMed

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
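
    The effect of limited weight precision can be mimicked in software by snapping trained weights to a fixed-point grid before execution. A minimal sketch (our own illustration, not the article's SpiNNaker setup):

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniformly quantize weights to a signed fixed-point grid with the
    given number of bits (one bit reserved for sign), clipping at max |w|."""
    levels = 2**(bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=1000)   # stand-in for trained DBN weights
for bits in (8, 4, 2):
    err = np.abs(quantize_weights(w, bits) - w).mean()
    print(f"{bits}-bit weights: mean abs quantization error {err:.4f}")
```

Running a network with such quantized weights (or quantizing inside the training loop, as the adapted training mechanism does) is the usual way to predict how a precision-constrained platform will behave.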

  5. Cheat-sensitive commitment of a classical bit coded in a block of m × n round-trip qubits

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Fukasaka, Hiroyuki; Tamaki, Kiyoshi; Imoto, Nobuyuki

    2011-08-01

    This paper proposes a quantum protocol for a cheat-sensitive commitment of a classical bit. Alice, the receiver of the bit, can examine dishonest Bob, who changes or postpones his choice. Bob, the sender of the bit, can examine dishonest Alice, who violates concealment. For each round-trip case, Alice sends one of two spin states |S±⟩ by choosing basis S at random from two conjugate bases X and Y. Bob chooses basis C ∈ {X,Y} to perform a measurement and returns a resultant state |C±⟩. Alice then performs a measurement with the other basis R (≠S) and obtains an outcome |R±⟩. In the opening phase, she can discover dishonest Bob, who unveils a wrong basis with a faked spin state, or Bob can discover dishonest Alice, who infers basis C but destroys |C±⟩ by setting R to be identical to S in the commitment phase. If a classical bit is coded in a block of m × n qubit particles, impartial examinations and probabilistic security criteria can be achieved.

  6. A single-channel 10-bit 160 MS/s SAR ADC in 65 nm CMOS

    NASA Astrophysics Data System (ADS)

    Yuxiao, Lu; Lu, Sun; Zhe, Li; Jianjun, Zhou

    2014-04-01

    This paper demonstrates a single-channel 10-bit 160 MS/s successive-approximation-register (SAR) analog-to-digital converter (ADC) in a 65 nm CMOS process with a 1.2 V supply voltage. To achieve high speed, a new window-opening logic based on the asynchronous SAR algorithm is proposed to minimize the logic delay, and a partial set-and-down DAC with binary redundancy bits is presented to reduce the dynamic comparator offset and accelerate the DAC settling. In addition, a new bootstrapped switch with a pre-charge phase is adopted in the track-and-hold circuit to increase speed and reduce area. The presented ADC achieves a 52.9 dB signal-to-noise-and-distortion ratio and a 65 dB spurious-free dynamic range, measured with a 30 MHz input signal at a 160 MHz clock. The power consumption is 9.5 mW and the core die area is 250 × 200 μm².
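
    The underlying successive-approximation loop — independent of this chip's asynchronous timing and redundancy tricks — is a plain binary search, sketched below:

```python
def sar_convert(vin, vref=1.2, bits=10):
    """Textbook SAR conversion: try each bit MSB-first and keep it if the
    trial DAC voltage does not exceed the input. (The paper's chip adds
    asynchronous window-opening logic and redundancy; this is only the
    basic algorithm those techniques accelerate.)"""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)
        if trial / (1 << bits) * vref <= vin:   # comparator decision
            code = trial
    return code

print(sar_convert(0.6))   # half-scale input -> mid code 512
```

Ten comparator decisions resolve ten bits, which is why SAR speed is set almost entirely by the comparator-plus-DAC settling loop that the paper optimizes.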

  7. Nonanalytic function generation routines for 16-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Soeder, J. F.; Shaufl, M.

    1980-01-01

    Interpolation techniques for three types of nonanalytic functions (univariate, bivariate, and map) are described. These interpolation techniques are then implemented in scaled-fraction arithmetic on a representative 16-bit microprocessor. A FORTRAN program is described that facilitates the scaling, documentation, and organization of data for use by these routines. Listings of all of these programs are included in an appendix.
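
    The flavor of such routines can be sketched in a few lines of scaled-fraction (Q15) integer arithmetic; the breakpoints and scaling below are made up for illustration and are not taken from the report:

```python
Q = 15                                   # Q15: values scaled by 2**15

def q(x):
    """Convert a float in [-1, 1) to a Q15 scaled-fraction integer."""
    return int(round(x * (1 << Q)))

def interp_q15(xq, xs, ys):
    """Univariate table lookup with linear interpolation, integer math only,
    as a 16-bit processor without floating point would do it."""
    for i in range(len(xs) - 1):
        if xs[i] <= xq <= xs[i + 1]:
            frac = ((xq - xs[i]) << Q) // (xs[i + 1] - xs[i])   # Q15 fraction
            return ys[i] + ((ys[i + 1] - ys[i]) * frac >> Q)
    raise ValueError("x outside table")

xs = [q(0.0), q(0.25), q(0.5), q(0.75)]   # hypothetical breakpoints
ys = [q(0.0), q(0.5), q(0.7), q(0.8)]
y = interp_q15(q(0.375), xs, ys)
print(y / (1 << Q))                       # midpoint of segment -> ~0.6
```

The shifts and integer divide mirror the scaled-fraction multiply/divide sequences such routines use in place of floating-point operations.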

  8. Characterization of a 16-Bit Digitizer for Lidar Data Acquisition

    NASA Technical Reports Server (NTRS)

    Williamson, Cynthia K.; DeYoung, Russell J.

    2000-01-01

    A 6-MHz 16-bit waveform digitizer was evaluated for use in atmospheric differential absorption lidar (DIAL) measurements of ozone. The digitizer noise characteristics were evaluated, and actual ozone DIAL atmospheric returns were digitized. This digitizer could replace computer-automated measurement and control (CAMAC)-based commercial digitizers and improve voltage accuracy.

  9. 16. STRUCTURAL DETAILS: CHANNEL, BIT & CLEAT, ANCHOR BOLTS & ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. STRUCTURAL DETAILS: CHANNEL, BIT & CLEAT, ANCHOR BOLTS & PLATES FOR PIERS 4, 5, AND 6, DWG. NO. 97, 1-1/2" = 1', MADE BY A.F., JUNE 13, 1908 - Baltimore Inner Harbor, Pier 5, South of Pratt Street between Market Place & Concord Street, Baltimore, Independent City, MD

  10. 17. PLANS & SECTIONS: 36" CAST IRON BITS: USED AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. PLANS & SECTIONS: 36" CAST IRON BITS: USED AT LOWER END OF PIER 5, DWG. 208, 1/2 SIZE, DRAWN BY W.B.C., MARCH 4, 1910 - Baltimore Inner Harbor, Pier 5, South of Pratt Street between Market Place & Concord Street, Baltimore, Independent City, MD

  11. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects of signal quantization on the operation of a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed and implemented in a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.

  12. Fast random bit generation with bandwidth-enhanced chaos in semiconductor lasers.

    PubMed

    Hirano, Kunihito; Yamazaki, Taiki; Morikatsu, Shinichiro; Okumura, Haruka; Aida, Hiroki; Uchida, Atsushi; Yoshimori, Shigeru; Yoshimura, Kazuyuki; Harayama, Takahisa; Davis, Peter

    2010-03-15

    We experimentally demonstrate random bit generation using multi-bit samples of bandwidth-enhanced chaos in semiconductor lasers. Chaotic fluctuation of the laser output is generated in a semiconductor laser with optical feedback, and the chaotic output is injected into a second semiconductor laser to obtain a chaotic intensity signal with bandwidth enhanced up to 16 GHz. The chaotic signal is converted to an 8-bit digital signal by sampling with a digital oscilloscope at 12.5 gigasamples per second (GS/s). Random bits are generated by a bitwise exclusive-OR operation on corresponding bits in samples of the chaotic signal and its time-delayed signal. Statistical tests verify the randomness of bit sequences obtained using 1 to 6 bits per sample, corresponding to fast random bit generation rates of 12.5 to 75 gigabits per second (Gb/s) (= 6 bits × 12.5 GS/s).
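
    The bit-extraction step alone is easy to sketch: digitize to 8 bits, XOR each sample with a delayed copy, and keep the m low-order bits per sample. In the sketch below a uniform random sequence stands in for the chaotic laser intensity, since only the post-processing is being illustrated:

```python
import numpy as np

# 8-bit samples; a uniform random stream stands in for the digitized
# chaotic intensity signal (the laser dynamics are out of scope here).
rng = np.random.default_rng(2)
codes = rng.integers(0, 256, size=100000, dtype=np.uint8)

m, delay = 6, 1000
# XOR each sample with its delayed copy, keep the m low-order bits...
xored = (codes ^ np.roll(codes, delay)) & ((1 << m) - 1)
# ...and unpack those m bits from every sample into one bit stream.
bits = ((xored[:, None] >> np.arange(m)) & 1).ravel()
print(f"{bits.size} bits, fraction of ones = {bits.mean():.3f}")
```

With 6 retained bits per 12.5 GS/s sample this bookkeeping is exactly what yields the quoted 75 Gb/s rate.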

  13. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  14. Estimating Hardness from the USDC Tool-Bit Temperature Rise

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Sherrit, Stewart

    2008-01-01

    A method of real-time quantification of the hardness of a rock or similar material involves measurement of the temperature, as a function of time, of the tool bit of an ultrasonic/sonic drill corer (USDC) that is being used to drill into the material. The method is based on the idea that, other things being about equal, the rate of rise of temperature and the maximum temperature reached during drilling increase with the hardness of the drilled material. In this method, the temperature is measured by means of a thermocouple embedded in the USDC tool bit near the drilling tip. The hardness of the drilled material can then be determined through correlation of the temperature-rise-versus-time data with time-dependent temperature rises determined in finite-element simulations of, and/or experiments on, drilling at various known rates of advance or known power levels through materials of known hardness. The figure presents an example of empirical temperature-versus-time data for a particular 3.6-mm USDC bit, driven at an average power somewhat below 40 W, drilling through materials of various hardness levels. The temperature readings from within a USDC tool bit can also be used for purposes other than estimating the hardness of the drilled material. For example, they can be especially useful as feedback to control the driving power to prevent thermal damage to the drilled material, the drill bit, or both. In the case of drilling through ice, the temperature readings could be used as a guide to maintaining sufficient drive power to prevent jamming of the drill by preventing refreezing of melted ice in contact with the drill.

  15. Numerical study of the simplest string bit model

    NASA Astrophysics Data System (ADS)

    Chen, Gaoli; Sun, Songge

    2016-05-01

    String bit models provide a possible method to formulate a string as a discrete chain of pointlike string bits. When the bit number M is large, a chain behaves as a continuous string. We study the simplest case, which has only one bosonic bit and one fermionic bit. The creation and annihilation operators are adjoint representations of the U(N) color group. We show that the supersymmetry reduces the parameter number of a Hamiltonian from 7 to 3 and, at N = ∞, ensures a continuous energy spectrum, which implies the emergence of one spatial dimension. The Hamiltonian H0 is constructed so that in the large-N limit it produces a world-sheet spectrum with one Grassmann world-sheet field. We concentrate on the numerical study of the model at finite N. For the Hamiltonian H0, we find that the would-be ground energy states disappear at N = (M - 1)/2 for odd M ≤ 11. Such a simple pattern is spoiled if H has an additional term ξΔH, which does not affect the result at N = ∞. The disappearance point moves to higher (lower) N when ξ increases (decreases). In particular, the ±(H0 - ΔH) cases suggest a possibility that the ground state could survive at large M and M ≫ N. Our study reveals that the model has stringy behavior: when N is fixed and large enough, the ground energy decreases linearly with respect to M, and the excitation energy is roughly of order M⁻¹. We also verify that a stable system with Hamiltonian ±H0 + ξΔH requires ξ ≥ ∓1.

  16. An improved pi/4-QPSK with nonredundant error correction for satellite mobile broadcasting

    NASA Technical Reports Server (NTRS)

    Feher, Kamilo; Yang, Jiashi

    1991-01-01

    An improved pi/4-quadrature phase-shift keying (QPSK) receiver that incorporates a simple nonredundant error correction (NEC) structure is proposed for satellite and land-mobile digital broadcasting. The bit-error-rate (BER) performance of the pi/4-QPSK with NEC is analyzed and evaluated in a fast Rician fading and additive white Gaussian noise (AWGN) environment using computer simulation. It is demonstrated that, with simple electronics, the performance of a noncoherently detected pi/4-QPSK signal in both AWGN and fast Rician fading can be improved. When the K-factor (the ratio of direct-path power to the average power of the multipath signal) of the Rician channel decreases, the improvement increases. An improvement of 1.2 dB could be obtained at a BER of 0.0001 in the AWGN channel. This performance gain is achieved without requiring any signal redundancy or additional bandwidth. Three types of noncoherent detection schemes for pi/4-QPSK with the NEC structure are discussed: IF-band differential detection, baseband differential detection, and FM discriminator detection. It is concluded that pi/4-QPSK with NEC is an attractive scheme for power-limited satellite land-mobile broadcasting systems.

  17. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    NASA Astrophysics Data System (ADS)

    Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.

    2016-05-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.

  18. The effects of reduced bit depth on optical coherence tomography phase data.

    PubMed

    Ling, William A; Ellerbee, Audrey K

    2012-07-01

    Past studies of the effects of bit depth on OCT magnitude data concluded that 8 bits of digitizer resolution provided nearly the same image quality as a 14-bit digitizer. However, such studies did not assess the effects of bit depth on the accuracy of phase data. In this work, we show that the effects of bit depth on phase data and magnitude data can differ significantly. This finding has an important impact on the design of phase-resolved OCT systems, such as those measuring motion and the birefringence of samples, particularly as one begins to consider the tradeoff between bit depth and digitizer speed.
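
    A toy numerical check of the distinction (our own construction, not the paper's OCT system): digitize a unit-amplitude fringe at two bit depths and compare the resulting errors in the magnitude and in the phase of the FFT coefficient at the signal bin.

```python
import numpy as np

def fringe_error(bits, n=4096, k=333):
    """Quantize a unit-amplitude fringe to `bits` bits and return the
    relative magnitude error and absolute phase error (radians) of the
    FFT coefficient at the signal bin, versus the unquantized reference."""
    t = np.arange(n)
    x = np.cos(2*np.pi*k*t/n + 0.123)                  # detector signal in [-1, 1]
    q = np.round((x + 1)/2 * (2**bits - 1)) / (2**bits - 1) * 2 - 1
    ref, meas = np.fft.rfft(x)[k], np.fft.rfft(q)[k]
    return abs(abs(meas)/abs(ref) - 1), abs(np.angle(meas) - np.angle(ref))

for bits in (14, 8):
    mag_err, ph_err = fringe_error(bits)
    print(f"{bits}-bit: magnitude error {mag_err:.2e}, phase error {ph_err:.2e} rad")
```

Because phase-resolved OCT converts tiny phase shifts into displacement or birefringence estimates, even phase errors that are invisible in a magnitude image can matter, which is the paper's point.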

  19. Gain and noise characteristics of high-bit-rate silicon parametric amplifiers.

    PubMed

    Sang, Xinzhu; Boyraz, Ozdal

    2008-08-18

    We report a numerical investigation of parametric amplification of high-bit-rate signals and the associated noise figure inside silicon waveguides in the presence of two-photon absorption (TPA), TPA-induced free-carrier absorption, free-carrier-induced dispersion, and linear loss. Different pump parameters are considered to achieve net gain and a low noise figure. We show that net gain at high repetition rates can be achieved only in the anomalous dispersion regime, if short pulses are used. An evaluation of the noise properties of parametric amplification in silicon waveguides is presented. By choosing a pulsed pump in suitably designed silicon waveguides, parametric amplification can be a chip-scale solution for high-speed optical communication and optical signal processing systems.

  20. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field-size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in successfully finding the primary cause of error in 98% of over 500 system dumps.

  1. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output, compare that output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to heat the hardware-based processor to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that might otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
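
As a toy illustration (not the patented method itself; the workload is a hypothetical stand-in for a heat-generating stress algorithm), the run-and-compare idea can be sketched as:

```python
def stress_and_check(workload, runs=2):
    """Run a deterministic, heat-generating workload several times and
    compare the outputs; any mismatch exposes a hardware error that
    occurred somewhere during a run."""
    outputs = [workload() for _ in range(runs)]
    return all(o == outputs[0] for o in outputs)

def workload():
    # A compute-heavy, deterministic kernel keeps the processor busy (and
    # hot); its final value is reproducible, so it doubles as a checksum.
    acc = 0
    for i in range(200_000):
        acc = (acc * 6364136223846793005 + i) % (1 << 64)
    return acc

print(stress_and_check(workload))  # True on healthy hardware
```

Because only the final value is compared, a fault anywhere during the run is caught without per-step checking.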

  2. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
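
A minimal sketch of the two update rules being compared (illustrative only, not the authors' model code; the learning rate and trial structure are assumptions):

```python
def ter_update(weights, cues, outcome, lr=0.1):
    """Total error reduction (Rescorla-Wagner style): every present cue's
    weight moves by the same shared error term, the discrepancy between
    the outcome and the summed prediction of all present cues."""
    prediction = sum(weights[c] for c in cues)
    error = outcome - prediction
    for c in cues:
        weights[c] += lr * error

def ler_update(weights, cues, outcome, lr=0.1):
    """Local error reduction: each cue learns from its own discrepancy,
    the outcome minus that single cue's prediction."""
    for c in cues:
        weights[c] += lr * (outcome - weights[c])

# Toy compound trial: two cues repeatedly paired with the outcome.
w_ter = {"A": 0.0, "B": 0.0}
w_ler = {"A": 0.0, "B": 0.0}
for _ in range(200):
    ter_update(w_ter, ["A", "B"], 1.0)
    ler_update(w_ler, ["A", "B"], 1.0)
# Under TER the cues share the prediction (the weights sum to ~1.0, so
# each settles near 0.5); under LER each cue converges toward the
# outcome on its own (~1.0 each).
```

The divergence on compound trials is exactly where the two accounts make different predictions.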

  3. Unequal error correction strategy for magnetic recording systems with multi-track processing

    NASA Astrophysics Data System (ADS)

    Myint, L. M. M.; Supnithi, P.

    2012-04-01

In multi-track detection, the user data of all tracks are recovered simultaneously from a multi-head reader or from a single-head reader with a buffer. Due to incomplete inter-track interference (ITI) information for the outer tracks, error rates differ among tracks; for a system with three-track processing, the center track performs better than the others. In this work, we propose unequal error protection (UEP) schemes to improve the overall performance of a 2-D interference bit-patterned recording system with multi-track detection. The performances of the proposed schemes are investigated for BPM channels with and without media noise. Based on the simulation results, the proposed schemes offer a gain of about 0.2-0.3 dB over the equal error protection (EEP) scheme at a bit error rate of 10^-4.

  4. Pixel-level Matching Based Multi-hypothesis Error Concealment Modes for Wireless 3D H.264/MVC Communication

    NASA Astrophysics Data System (ADS)

    El-Shafai, Walid

    2015-09-01

3D multi-view video (MVV) comprises multiple video streams shot simultaneously by several cameras around a single scene. Achieving high 3D MVV compression is therefore essential to meet future bandwidth constraints while maintaining high reception quality. 3D MVV coded bit-streams transmitted over wireless networks can suffer from error propagation in the space, time and view domains. Error concealment (EC) algorithms have the advantage of improving the received 3D video quality without any modifications to the transmission rate or to the encoder hardware or software. To improve the quality of reconstructed 3D MVV, we propose an efficient adaptive EC algorithm with multi-hypothesis modes that conceals erroneous Macro-Blocks (MBs) of intra-coded and inter-coded frames by exploiting the spatial, temporal and inter-view correlations between frames and views. Our proposed algorithm adapts to 3D MVV motion features and to the error locations. The lost MBs are optimally recovered by utilizing motion and disparity matching between frames and views on a pixel-by-pixel basis. Our simulation results show that the proposed adaptive multi-hypothesis EC algorithm significantly improves objective and subjective 3D MVV quality.

  5. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.

  6. Soft Error Vulnerability of Iterative Linear Algebra Methods

    SciTech Connect

    Bronevetsky, G; de Supinski, B

    2007-12-15

Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use features so small, at sufficiently low voltages, that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
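
A small illustration of such a silent data corruption (a sketch, not the paper's experimental setup): a Jacobi solver still converges after a bit flip in an operand, but to the wrong answer, so a residual check against the faulty in-memory data would pass:

```python
import struct

def flip_bit(x, bit):
    """Flip one bit of a double's IEEE-754 representation."""
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", i ^ (1 << bit)))
    return y

def jacobi(A, b, iters=100, inject_at=None):
    """Plain Jacobi iteration; optionally flips the exponent's low bit of
    one matrix entry mid-run (turning 1.0 into 0.5) to emulate a soft
    error in the operand data."""
    n = len(b)
    x = [0.0] * n
    for k in range(iters):
        if k == inject_at:
            A[0][1] = flip_bit(A[0][1], 52)
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
clean = jacobi([row[:] for row in A], b)
corrupt = jacobi([row[:] for row in A], b, inject_at=50)
# Both runs converge (the residual of the system actually in memory is
# tiny), but the corrupted run settles on the wrong solution: a silent
# data corruption that a plain residual check would not flag.
print(clean, corrupt)
```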

  7. Error performance of digital subscriber lines in the presence of impulse noise

    NASA Astrophysics Data System (ADS)

    Kerpez, Kenneth J.; Gottlieb, Albert M.

    1995-05-01

    This paper describes the error performance of the ISDN basic access digital subscriber line (DSL), the high bit rate digital subscriber line (HDSL), and the asymmetric digital subscriber line (ADSL) in the presence of impulse noise. Results are found by using data from the 1986 NYNEX impulse noise survey in simulations. It is shown that a simple uncoded ADSL would have an order of magnitude more errored seconds than DSL and HDSL.

  8. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder, G, the effective free distance vector is defined as the k-dimensional vector d = (d(sub 0), d(sub 1), ..., d(sub k-1)), where d(sub j), the jth effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the jth position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
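
The per-position minimum-weight idea behind the effective free distance can be illustrated with a toy linear block code (a deliberate simplification of the convolutional-code definition above; the generator matrix is an arbitrary example, not from the report):

```python
from itertools import product

# Toy (n=6, k=3) linear block code; generator rows chosen for illustration.
G = [
    [1, 0, 0, 1, 1, 1],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
]

def encode(msg):
    """Codeword = msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def effective_distances(k=3):
    """d_j = minimum Hamming weight over all codewords produced by a
    message with a '1' in position j -- the block-code analogue of the
    effective free distance defined in the record above."""
    d = []
    for j in range(k):
        weights = [sum(encode(m)) for m in product([0, 1], repeat=k) if m[j] == 1]
        d.append(min(weights))
    return d

print(effective_distances())  # [4, 2, 2]: message bit 0 is better protected
```

Here the unequal entries of d show UEP directly: errors touching message bit 0 need more channel errors to go undetected than those touching bits 1 or 2.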

  9. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

An error in laterality is the reporting of a finding that is present on the right side as being on the left, or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms of the report with the images open in PACS and correct them before the report is finalized. The system logged each error in laterality it detected. The system detected 32 errors in laterality over a 7-month period (a rate of 0.0007%), with CT having the highest error detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
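
A minimal sketch of the detection step (the term list and report text are hypothetical; the published system additionally renders each class of term in its own color in a review window):

```python
import re

# Hypothetical laterality term list, one pattern per side class.
LATERALITY = {
    "right": re.compile(r"\b(right|rt)\b", re.IGNORECASE),
    "left": re.compile(r"\b(left|lt)\b", re.IGNORECASE),
    "bilateral": re.compile(r"\bbilateral\b", re.IGNORECASE),
}

def find_laterality(report):
    """Return every laterality term with its position and side class,
    ready to be highlighted for review against the PACS images."""
    hits = []
    for side, pattern in LATERALITY.items():
        for m in pattern.finditer(report):
            hits.append((m.start(), m.group(0), side))
    return sorted(hits)

report = "Small effusion in the left lower lobe; right kidney unremarkable."
for pos, term, side in find_laterality(report):
    print(pos, term, side)
```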

  10. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  11. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  12. Bit silencing in fingerprints enables the derivation of compound class-directed similarity metrics.

    PubMed

    Wang, Yuan; Bajorath, Jürgen

    2008-09-01

Fingerprints are molecular bit string representations and are among the most popular descriptors for similarity searching. In key-type fingerprints, each bit position monitors the presence or absence of a prespecified chemical or structural feature. In contrast to hashed fingerprints, this keyed design makes it possible to evaluate individual bit positions and the associated structural features during similarity searching. Bit silencing is introduced as a systematic approach to assess the contribution of each bit in a fingerprint to similarity search performance. From the resulting bit contribution profile, a bit position-dependent weight vector is derived that determines the relative weight of each bit on the basis of its individual contribution. By merging this weight vector with the Tanimoto coefficient, compound class-directed similarity metrics are obtained that improve fingerprint search performance compared to conventional calculations of Tanimoto similarity.
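
A sketch of a bit-weighted Tanimoto coefficient of this kind (the fingerprints and weight values are made up for illustration, not taken from the paper):

```python
def weighted_tanimoto(fp_a, fp_b, weights):
    """Tanimoto coefficient with a per-bit weight vector: each bit
    position contributes in proportion to its weight instead of
    uniformly, as in the compound class-directed metrics above."""
    common = sum(w * a * b for a, b, w in zip(fp_a, fp_b, weights))
    union = sum(w * (a | b) for a, b, w in zip(fp_a, fp_b, weights))
    return common / union if union else 0.0

def silence(fp, pos):
    """Bit silencing: force one bit position to zero before comparison."""
    return [0 if i == pos else bit for i, bit in enumerate(fp)]

a = [1, 1, 0, 1]
b = [1, 0, 0, 1]
print(weighted_tanimoto(a, b, [1.0] * 4))             # plain Tanimoto, 2/3
print(weighted_tanimoto(a, b, [2.0, 0.5, 1.0, 1.0]))  # bit 0 up-weighted, 6/7
print(weighted_tanimoto(silence(a, 0), silence(b, 0), [1.0] * 4))  # 0.5
```

With uniform weights the formula reduces to the conventional Tanimoto coefficient, so the weighted form is a strict generalization.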

  13. Efficient biased random bit generation for parallel processing

    SciTech Connect

    Slone, D.M.

    1994-09-28

A lattice gas automaton was implemented on a massively parallel machine (the BBN TC2000) and a vector supercomputer (the CRAY C90). The automaton models Burgers equation {rho}{sub t} + {rho}{rho}{sub x} = {nu}{rho}{sub xx} in 1 dimension. The lattice gas evolves by advecting and colliding pseudo-particles on a 1-dimensional, periodic grid. The specific rules for colliding particles are stochastic in nature and require the generation of many billions of random numbers to create the random bits necessary for the lattice gas. The goal of the thesis was to speed up the process of generating the random bits and thereby lessen the computational bottleneck of the automaton.
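
One classic way to generate biased bits in bulk, sketched here as an illustration (not the thesis code) under the assumption that the stochastic collision rules consume Bernoulli(p) bits: combine uniform random words with AND/OR following the binary expansion of p, producing a whole machine word of biased bits per pass:

```python
import random

def biased_word(p_digits, bits=64, rng=random):
    """One word of independent Bernoulli(p) bits, where p is given by its
    binary digits (p = sum of d_i * 2**-i, most significant first).
    Working from the least significant digit up, OR in a fresh uniform
    word for a 1 digit and AND for a 0 digit; each pass yields `bits`
    biased bits instead of one biased bit per uniform draw."""
    acc = 0
    for d in reversed(p_digits):
        u = rng.getrandbits(bits)
        acc = (acc | u) if d else (acc & u)
    return acc

# p = 0.375 = 0.011 in binary.
rng = random.Random(42)
words = [biased_word([0, 1, 1], rng=rng) for _ in range(2000)]
ones = sum(bin(w).count("1") for w in words)
print(ones / (2000 * 64))  # close to 0.375
```

The correctness follows from the recurrence P(bit = 1) = d/2 + P_prev/2 at each digit, which telescopes to exactly the binary expansion of p.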

  14. Very low bit rate voice for packetized mobile applications

    SciTech Connect

    Knittle, C.D.; Malone, K.T.

    1991-01-01

    Transmitting digital voice via packetized mobile communications systems that employ relatively short packet lengths and narrow bandwidths often necessitates very low bit rate coding of the voice data. Sandia National Laboratories is currently developing an efficient voice coding system operating at 800 bits per second (bps). The coding scheme is a modified version of the 2400 bps NSA LPC-10e standard. The most significant modification to the LPC-10e scheme is the vector quantization of the line spectrum frequencies associated with the synthesis filters. An outline of a hardware implementation for the 800 bps coder is presented. The speech quality of the coder is generally good, although speaker recognition is not possible. Further research is being conducted to reduce the memory requirements and complexity of the vector quantizer, and to increase the quality of the reconstructed speech. 4 refs., 2 figs., 3 tabs.

  15. Very low bit rate voice for packetized mobile applications

    SciTech Connect

Knittle, C.D.; Malone, K.T.

    1991-01-01

    This paper reports that transmitting digital voice via packetized mobile communications systems that employ relatively short packet lengths and narrow bandwidths often necessitates very low bit rate coding of the voice data. Sandia National Laboratories is currently developing an efficient voice coding system operating at 800 bits per second (bps). The coding scheme is a modified version of the 2400 bps NSA LPC-10e standard. The most significant modification to the LPC-10e scheme is the vector quantization of the line spectrum frequencies associated with the synthesis filters. An outline of a hardware implementation for the 800 bps coder is presented. The speech quality of the coder is generally good, although speaker recognition is not possible. Further research is being conducted to reduce the memory requirements and complexity of the vector quantizer, and to increase the quality of the reconstructed speech. This work may be of use dealing with nuclear materials.

  17. Fully distrustful quantum bit commitment and coin flipping.

    PubMed

    Silman, J; Chailloux, A; Aharon, N; Kerenidis, I; Pironio, S; Massar, S

    2011-06-01

    In the distrustful quantum cryptography model the parties have conflicting interests and do not trust one another. Nevertheless, they trust the quantum devices in their labs. The aim of the device-independent approach to cryptography is to do away with the latter assumption, and, consequently, significantly increase security. It is an open question whether the scope of this approach also extends to protocols in the distrustful cryptography model, thereby rendering them "fully" distrustful. In this Letter, we show that for bit commitment-one of the most basic primitives within the model-the answer is positive. We present a device-independent (imperfect) bit-commitment protocol, where Alice's and Bob's cheating probabilities are ≃0.854 and 3/4, which we then use to construct a device-independent coin flipping protocol with bias ≲0.336. PMID:21702585

  18. A 128K-bit CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K. H.; Wallace, R. W.; Robinson, C. R.

    1976-01-01

A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage and particularly to the problem of interfacing widely varying data rates. 8K-bit CCD shift register memories were used to construct a feasibility-model 128K-bit buffer memory system. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. Descriptions are provided of both the buffer memory system and a custom tester that was used to exercise the memory. The testing procedures and testing results are discussed. Suggestions are provided for further development regarding the utilization of advanced versions of CCD memory devices in both simplified and expanded memory system applications.

  19. Pack carburizing process for earth boring drill bits

    SciTech Connect

    Simons, R.W.; Scott, D.E.; Poland, J.R.

    1987-02-17

    A method is described of manufacturing an earth boring drill bit of the type having a bearing pin extending from a head section of the drill bit for rotatably mounting a cutter, comprising the steps of: providing a container having opposing end openings with sidewalls therebetween which define a container interior; placing the container over a portion of the head section so that the pin extends within the interior of the container; installing a spring spacer within the interior of the container about at least a portion of the circumference of the bearing pin at least one axial location; packing the container with a particulate treating medium; covering the container; and placing the pin and container into a furnace for a time and at a temperature to activate the treating medium.

  20. Fully Distrustful Quantum Bit Commitment and Coin Flipping

    NASA Astrophysics Data System (ADS)

    Silman, J.; Chailloux, A.; Aharon, N.; Kerenidis, I.; Pironio, S.; Massar, S.

    2011-06-01

In the distrustful quantum cryptography model the parties have conflicting interests and do not trust one another. Nevertheless, they trust the quantum devices in their labs. The aim of the device-independent approach to cryptography is to do away with the latter assumption, and, consequently, significantly increase security. It is an open question whether the scope of this approach also extends to protocols in the distrustful cryptography model, thereby rendering them “fully” distrustful. In this Letter, we show that for bit commitment—one of the most basic primitives within the model—the answer is positive. We present a device-independent (imperfect) bit-commitment protocol, where Alice’s and Bob’s cheating probabilities are ≃0.854 and 3/4, which we then use to construct a device-independent coin flipping protocol with bias ≲0.336.

  1. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  2. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  3. Two-level renegotiated constant bit rate algorithm (2RCBR) for scalable MPEG2 video over QoS networks

    NASA Astrophysics Data System (ADS)

    Pegueroles, Josep R.; Alins, Juan J.; de la Cruz, Luis J.; Mata, Jorge

    2001-07-01

MPEG family codecs generate variable-bit-rate (VBR) compressed video with significant multiple-time-scale bit rate variability. Smoothing techniques remove the periodic fluctuations generated by the codification modes. However, the global efficiency of network resource allocation remains low due to scene-time-scale variability. RCBR techniques provide suitable means of achieving higher efficiency. Among the RCBR techniques described in the literature, the 2RCBR mechanism is especially suitable for video-on-demand. The method takes advantage of the knowledge of the stored video to calculate the renegotiation intervals, and of the client buffer memory to perform work-ahead buffering. 2RCBR achieves 100% bandwidth global efficiency with only two renegotiation levels. The algorithm studies the second derivative of the cumulative bit curve of the video sequence to locate sharp inflection points that mark changes in scene complexity. By its nature, 2RCBR is well suited to delivering scalable MPEG2 sequences over the network, because it can guarantee a constant bit rate to the base MPEG2 layer and use the higher-rate intervals to deliver the enhancement MPEG2 layer. However, slight changes in the algorithm parameters must be introduced to attain optimal behavior. This is verified by means of simulations on MPEG2 video patterns.
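
The second-derivative test can be sketched as follows (a simplified illustration with a synthetic bit trace and threshold; not the authors' algorithm parameters):

```python
def renegotiation_points(frame_bits, threshold):
    """Locate sharp inflections of the cumulative bit curve. The second
    difference of the cumulative curve equals the first difference of the
    per-frame bit counts, so a large jump in per-frame size marks a
    scene-complexity change where the reserved rate would be
    renegotiated."""
    d2 = [frame_bits[i + 1] - frame_bits[i] for i in range(len(frame_bits) - 1)]
    return [i + 1 for i, v in enumerate(d2) if abs(v) > threshold]

# Toy trace: a quiet scene followed by a complex one.
trace = [100] * 5 + [300] * 5
print(renegotiation_points(trace, threshold=50))  # renegotiate at frame 5
```

Between consecutive renegotiation points the rate is constant, which is what lets the scheme get by with only two levels.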

  4. Color encoding for gamut extension and bit-depth extension

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao

    2005-02-01

Monitor-oriented RGB color spaces (e.g. sRGB) are widely used for digital image representation because of the simplicity of displaying images on monitors. However, their physical gamut limits the ability to encode colors accurately for images that are not confined to the display RGB gamut. To extend the encoding gamut, non-physical RGB primaries may be used to define the color space, or the RGB tone ranges may be extended beyond the physical range. An out-of-gamut color has at least one of the R, G, and B channels smaller than 0% or greater than 100%. Instead of using wide-gamut RGB primaries for gamut expansion, we may extend the tone ranges to expand the encoding gamut: negative tone values and tone values over 100% are allowed. Methods to efficiently and accurately encode out-of-gamut colors are discussed in this paper. Interpretation bits are added to interpret the range of color values or to encode color values with a higher bit-depth. The interpretation bits of the R, G, and B primaries can be packed and stored in an alpha channel in some image formats (e.g. TIFF) or stored in a data tag (e.g. in JPEG format). If a color image does not have colors that are out of a regular RGB gamut, a regular program (e.g. Photoshop) is able to manipulate the data correctly.
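
A hypothetical encoding along these lines (the 2-bit tag layout and the range boundaries are assumptions for illustration, not the paper's exact scheme):

```python
def encode_channel(value):
    """Hypothetical layout: a 2-bit interpretation tag selects which range
    the 8-bit payload covers. 0b00: in-gamut [0, 1); 0b01: negative
    overshoot [-1, 0); 0b10: above-range [1, 2). The three tags for R, G
    and B could be packed into an alpha channel or a metadata tag."""
    if 0.0 <= value < 1.0:
        tag, payload = 0b00, value
    elif -1.0 <= value < 0.0:
        tag, payload = 0b01, value + 1.0
    elif 1.0 <= value < 2.0:
        tag, payload = 0b10, value - 1.0
    else:
        raise ValueError("outside the representable extended range")
    return tag, round(payload * 255)

def decode_channel(tag, code):
    base = {0b00: 0.0, 0b01: -1.0, 0b10: 1.0}[tag]
    return base + code / 255

print(encode_channel(-0.25))                 # tag 0b01 marks a negative tone
print(decode_channel(*encode_channel(1.5)))  # round-trips to ~1.5
```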

  5. Development of a Near-Bit MWD system

    SciTech Connect

    McDonald, W.J.; Pittard, G.T.

    1995-06-01

The project objective is to develop a measurements-while-drilling (MWD) module that provides real-time reports of drilling conditions at the bit. The module is to support multiple types of sensors and to sample and encode their outputs in digital form under microprocessor control. The assembled message is to be electromagnetically transmitted along the drill string back to its associated receiver, located in a collar typically 50--100 feet above the bit. The receiver demodulates the transmitted message and passes its data to the third-party wireline or MWD telemetry system for relay to the surface. The collar also houses the conventional MWD or wireline probe assembly. The completed Phase 1 program began with the preparation of detailed performance specifications and ended with the design, fabrication and testing of a functioning prototype. The prototype was sized for operation with 6-3/4-inch multi-lobe mud motors due to the widespread use of this size motor in horizontal and directional drilling applications. The Phase 1 prototype provided inclination, temperature and pressure information. The Phase 2 program objective is to expand the current sensor suite to include at least one type of formation evaluation measurement, such as formation resistivity or natural gamma ray. The Near-Bit system will be subjected to a rigorous series of shock and vibration tests followed by field testing to ensure it possesses the reliability and performance required for commercial success.

  6. On the Lorentz invariance of bit-string geometry

    SciTech Connect

    Noyes, H.P.

    1995-09-01

We construct the class of integer-sided triangles and tetrahedra that respectively correspond to two or three discriminately independent bit-strings. In order to specify integer coordinates in this space, we take one vertex of a regular tetrahedron whose common edge length is an even integer as the origin of a line of integer length to the "point" and three integer distances to this "point" from the three remaining vertices of the reference tetrahedron. This - usually chiral - integer coordinate description of bit-string geometry is possible because three discriminately independent bit-strings generate four more; the Hamming measures of these seven strings always allow this geometrical interpretation. On another occasion we intend to prove the rotational invariance of this coordinate description. By identifying the corners of these figures with the positions of recording counters whose clocks are synchronized using the Einstein convention, we define velocities in this space. This suggests that it may be possible to define boosts and discrete Lorentz transformations in a space of integer coordinates. We relate this description to our previous work on measurement accuracy and the discrete ordered calculus of Etter and Kauffman (DOC).

  7. Studies of Error Sources in Geodetic VLBI

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Niell, A. E.; Corey, B. E.

    1996-01-01

Achieving the goal of millimeter uncertainty in three-dimensional geodetic positioning on a global scale requires significant improvement in the precision and accuracy of both random and systematic error sources. For this investigation we proposed to study errors due to instrumentation in Very Long Baseline Interferometry (VLBI) and due to the atmosphere. After the inception of this work we expanded the scope to include assessment of error sources in GPS measurements, especially as they affect the vertical component of site position and the measurement of water vapor in the atmosphere. The atmosphere correction improvements described below are of benefit to both GPS and VLBI.

  8. Phasing piston error in segmented telescopes.

    PubMed

    Jiang, Junlun; Zhao, Weirui

    2016-08-22

To achieve diffraction-limited imaging, the piston errors between the segments of a segmented-primary-mirror telescope should be reduced to λ/40 RMS. We propose a method to detect the piston error by analyzing the intensity distribution on the image plane according to Fourier optics principles; it can capture segments with piston errors as large as the coherence length of the input light and reduce them to 0.026λ RMS (λ = 633 nm). The method is adaptable to any segmented and deployable primary mirror telescope. Experiments have been carried out to validate its feasibility. PMID:27557192

  10. PEALL4: a 4-channel, 12-bit, 40-MSPS, Power Efficient and Low Latency SAR ADC

    NASA Astrophysics Data System (ADS)

    Rarbi, F.; Dzahini, D.; Gallin-Martel, L.; Bouvier, J.; Zeloufi, M.; Trocme, B.; Gabaldon Ruiz, C.

    2015-01-01

    The PEALL4 chip is a Power Efficient And Low Latency 4-channel, 12-bit, 40-MSPS successive approximation register (SAR) ADC. It was designed for a very short latency in the context of the ATLAS Liquid Argon Calorimeter phase I upgrade; the design could also be a good option for ATLAS phase II and other High Energy Physics (HEP) projects. Full functionality of the converter is achieved with an embedded high-speed conversion clock generated by the ADC itself. The design and test results of the PEALL4 chip, implemented in a commercial 130 nm CMOS process, are presented. The size of this 4-channel ADC, with embedded voltage references and an sLVS output serializer, is 2.8 × 3.4 mm2. The chip has a short latency of less than 25 ns, defined from the very beginning of sampling to the availability of the last conversion bit. A total power consumption below 27 mW per channel is measured, including the reference buffer and the sLVS serializer.
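
    As background, the successive-approximation algorithm the record refers to resolves one bit per comparator cycle, which is why a 12-bit, 40-MSPS converter needs an internally generated clock much faster than the sample rate. A minimal idealized sketch of the principle (not the PEALL4 implementation):

```python
def sar_convert(vin, vref=1.0, bits=12):
    """Ideal SAR conversion: a binary search, one bit trial per cycle.
    Real designs add DAC settling, comparator offset, and redundancy."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)                  # tentatively set bit i
        if vin >= vref * trial / (1 << bits):    # comparator vs. DAC output
            code = trial                         # keep the bit
    return code

print(sar_convert(0.5))    # mid-scale input -> code 2048
```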

  11. 55-mW, 1.2-V, 12-bit, 100-MSPS Pipeline ADCs for Wireless Receivers

    NASA Astrophysics Data System (ADS)

    Ito, Tomohiko; Kurose, Daisuke; Ueno, Takeshi; Yamaji, Takafumi; Itakura, Tetsuro

    For wireless receivers, low-power 1.2-V, 12-bit, 100-MSPS pipeline ADCs are fabricated in 90-nm CMOS technology. To achieve low power dissipation at 1.2 V without degrading the SNR, a 2.5-bit/stage configuration is employed with an I/Q amplifier-sharing technique. Furthermore, single-stage pseudo-differential amplifiers are used in the Sample-and-Hold (S/H) circuit and the first Multiplying Digital-to-Analog Converter (MDAC). The pseudo-differential amplifier, with two-gain-stage transimpedance gain-boosting amplifiers, realizes a high DC gain of more than 90 dB at low power. The measured SNR of the 100-MSPS ADC is 66.7 dB at a 1.2-V supply, and under that condition each ADC dissipates only 55 mW.
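
    For context, each 2.5-bit pipeline stage resolves a coarse decision and passes an amplified residue to the next stage; the redundancy in the decision levels is what lets the backend digitally correct comparator errors. A simplified ideal-stage sketch (textbook decision levels and gain, not the paper's circuit):

```python
def stage_2p5bit(vin, vref=1.0):
    """One ideal 2.5-bit pipeline stage: sub-ADC decision D in -3..+3,
    residue = 4*vin - D*vref handed to the next stage."""
    thresholds = (-5/8, -3/8, -1/8, 1/8, 3/8, 5/8)   # in units of vref
    d = sum(vin > t * vref for t in thresholds) - 3
    return d, 4 * vin - d * vref

# For any input in [-vref, vref] the residue stays within the next
# stage's range, leaving margin for comparator offsets.
for v in (-0.9, -0.3, 0.0, 0.2, 0.7):
    d, res = stage_2p5bit(v)
    assert abs(res) <= 1.0
```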

  12. A 6-bit 3-Gsps ADC implemented in 1 μm GaAs HBT technology

    NASA Astrophysics Data System (ADS)

    Jincan, Zhang; Yuming, Zhang; Hongliang, Lü; Yimen, Zhang; Guangxing, Xiao; Guiping, Ye

    2014-08-01

    The design and test results of a 6-bit 3-Gsps analog-to-digital converter (ADC) using 1 μm GaAs heterojunction bipolar transistor (HBT) technology are presented. The monolithic folding-interpolating ADC uses a track-and-hold amplifier (THA) with a highly linear input buffer to maintain a high effective number of bits (ENOB). The ADC occupies an area of 4.32 × 3.66 mm2 and achieves 5.53 ENOB with an effective resolution bandwidth of 1.1 GHz at a sampling rate of 3 Gsps. The maximum DNL and INL are 0.36 LSB and 0.48 LSB, respectively.
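
    The ENOB figure quoted here relates to SINAD through the standard conversion; the roughly 35 dB SINAD below is inferred from the stated ENOB, not given in the record:

```python
def enob_from_sinad(sinad_db):
    """Standard conversion between SINAD (dB) and effective bits."""
    return (sinad_db - 1.76) / 6.02

def sinad_from_enob(enob_bits):
    return 6.02 * enob_bits + 1.76

# 5.53 ENOB implies a SINAD of about 35.1 dB.
print(round(sinad_from_enob(5.53), 1))
```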

  13. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  14. 39 photons/bit direct detection receiver at 810 nm, BER = 1 x 10 exp -6, 60 Mb/s QPPM

    NASA Astrophysics Data System (ADS)

    MacGregor, Andrew; Dion, Bruno; Noeldeke, Christoph; Duchmann, Olivier

    1991-06-01

    A direct detection receiver sensitivity of 39 photons/bit is reported at a BER of 1 x 10 exp -6 for a 2-percent extinction ratio, 810 nm, 60 Mb/s QPPM signal. The sensitivity is 68 photons/bit at a BER of 1 x 10 exp -9. These figures represent a record sensitivity for a direct detection receiver. They are achieved by a combination of a novel silicon avalanche photodiode, an optimized preamplifier, and a maximum-likelihood demodulator. The work was part of the Phase B breadboarding activities for the European Space Agency (ESA) SILEX (Semiconductor Intersatellite Link EXperiment) program on intersatellite optical links.
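
    To illustrate why PPM with photon counting is so photon-efficient, here is a Monte-Carlo sketch of an idealized, shot-noise-limited 4-ary PPM (QPPM) link. It ignores the APD excess noise, extinction ratio, and preamplifier of the actual receiver, so its numbers are far more optimistic than the reported 39 photons/bit:

```python
import math
import random

def qppm_symbol_error_rate(mean_photons, trials=20000, slots=4, seed=1):
    """Symbol-error rate of ideal Poisson-counting QPPM: the maximum-
    likelihood receiver picks the slot with the largest photon count,
    guessing fairly on ties.  Idealized: no background or APD noise."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm; adequate for small means
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    errors = 0
    for _ in range(trials):
        counts = [poisson(mean_photons)] + [0] * (slots - 1)  # signal in slot 0
        best = [i for i, c in enumerate(counts) if c == max(counts)]
        if rng.choice(best) != 0:
            errors += 1
    return errors / trials

# With no background light, errors occur only on erasures (zero signal
# photons), so SER -> (3/4) * exp(-mean_photons).
print(qppm_symbol_error_rate(5.0))
```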

  15. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies between readers occur in 2-20% of reports. Fortunately, most are minor errors or, if serious, are found and corrected promptly; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors fall into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation rates rise in the emergency setting and early in the learning curve, as during residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and neuroradiological emergencies. To minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review images after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately, in a timely fashion, and directly to the treatment team.

  16. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of age, sex, and ethnicity. Over the past decade, a series of studies using a survey methodology referred to as the Refractive Error Study in Children (RESC) were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high among children in low- and middle-income countries. Furthermore, uncorrected refractive error has extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals, and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development, and Social Entrepreneurship. PMID:22944755

  17. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  18. Modeling and quality assessment of halftoning by error diffusion.

    PubMed

    Kite, T D; Evans, B L; Bovik, A C

    2000-01-01

    Digital halftoning quantizes a graylevel image to one bit per pixel. Halftoning by error diffusion reduces local quantization error by filtering the quantization error in a feedback loop. In this paper, we linearize error diffusion algorithms by modeling the quantizer as a linear gain plus additive noise. We confirm the accuracy of the linear model in three independent ways. Using the linear model, we quantify the two primary effects of error diffusion: edge sharpening and noise shaping. For each effect, we develop an objective measure of its impact on the subjective quality of the halftone. Edge sharpening is proportional to the linear gain, and we give a formula to estimate the gain from a given error filter. In quantifying the noise, we modify the input image to compensate for the sharpening distortion and apply a perceptually weighted signal-to-noise ratio to the residual of the halftone and modified input image. We compute the correlation between the residual and the original image to show when the residual can be considered signal independent. We also compute a tonality measure similar to total harmonic distortion. We use the proposed measures for edge sharpening, noise shaping, and tonality to evaluate the quality of error diffusion algorithms. PMID:18255461
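
    For readers unfamiliar with the algorithm class being modeled, a minimal Floyd-Steinberg error-diffusion halftoner (one common choice of error filter; the paper's analysis applies to the class in general) looks like:

```python
import numpy as np

def error_diffuse(gray):
    """Floyd-Steinberg error diffusion: quantize each pixel to 0/1 and
    diffuse the quantization error to unprocessed neighbors."""
    img = np.asarray(gray, dtype=float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new                  # error fed back through the filter
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Because the diffused error has near-zero mean, the halftone's average
# tracks the input gray level over large areas.
halftone = error_diffuse(np.full((64, 64), 0.3))
```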

  19. A 0.23 pJ 11.05-bit ENOB 125-MS/s pipelined ADC in a 0.18 μm CMOS process

    NASA Astrophysics Data System (ADS)

    Yong, Wang; Jianyun, Zhang; Rui, Yin; Yuhang, Zhao; Wei, Zhang

    2015-05-01

    This paper describes a 12-bit 125-MS/s pipelined analog-to-digital converter (ADC) implemented in a 0.18 μm CMOS process. A gate-bootstrapping switch is used as the bottom-sampling switch in the first stage to enhance the sampling linearity. The measured differential and integral nonlinearities of the prototype are less than 0.79 least significant bit (LSB) and 0.86 LSB, respectively, at the full sampling rate. The ADC exhibits an effective number of bits (ENOB) of more than 11.05 bits at an input frequency of 10.5 MHz, and achieves a 10.5-bit ENOB with a Nyquist input frequency at the full sample rate. In addition, the ADC consumes 62 mW from a 1.9 V power supply and occupies 1.17 mm2, including an on-chip reference buffer. The figure-of-merit of this ADC is 0.23 pJ/step. Project supported by the Foundation of Shanghai Municipal Commission of Economy and Informatization (No. 130311).
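
    The quoted figure-of-merit can be reproduced from the reported power, ENOB, and sample rate with the standard Walden energy-per-step formula (a consistency check using values from the record):

```python
# Walden figure of merit: FoM = P / (2**ENOB * f_s)
P    = 62e-3     # power, W
ENOB = 11.05     # effective bits at low input frequency
fs   = 125e6     # sample rate, S/s

fom_pj = P / (2 ** ENOB * fs) * 1e12
print(f"{fom_pj:.2f} pJ/step")   # 0.23 pJ/step, matching the record
```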

  20. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as its bias error distribution can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. The methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
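
    As a much simpler special case of the idea that a few standard values constrain the bias model, a two-point linear calibration can be sketched as follows; the paper's methods generalize this to polynomial and Fourier-series bias distributions, and the device response here is hypothetical:

```python
def two_point_calibration(reads, standards):
    """Fit a linear bias model from device readings at two known
    standard values and return a correction function.  This is only
    the simplest (linear) case of a polynomial bias model."""
    (r1, r2), (s1, s2) = reads, standards
    gain = (s2 - s1) / (r2 - r1)
    offset = s1 - gain * r1
    return lambda r: gain * r + offset

# Hypothetical biased device whose reading is 1.02*x + 0.5:
device = lambda x: 1.02 * x + 0.5
correct = two_point_calibration(reads=(device(10.0), device(90.0)),
                                standards=(10.0, 90.0))
print(round(correct(device(50.0)), 6))   # recovers the true value 50.0
```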