Sample records for higher order bits

  1. Modular high speed counter employing edge-triggered code

    DOEpatents

    Vanstraelen, Guy F.

    1993-06-29

    A high speed modular counter (100) utilizing a novel counting method in which the first bit changes with the frequency of the driving clock, and changes in the higher order bits are initiated one clock pulse after a "0" to "1" transition of the next lower order bit. This allows all carries to be known one clock period in advance of a bit change. The present counter is modular and utilizes two types of standard counter cells. A first counter cell determines the zero bit. The second counter cell determines any other higher order bit. Additional second counter cells are added to the counter to accommodate any count length without affecting speed.
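    The counting method above amounts to a synchronous counter whose toggle enables (carries) are registered one full clock ahead of use. The following Python sketch models only that counting behavior; the structure is illustrative and is not the patented cell circuit.

```python
def pipelined_counter(n_bits, n_clocks):
    """Behavioral model of the counting method: bit 0 toggles on every
    clock, and each higher bit toggles when its registered enable --
    computed from the previous clock's state -- is set, so every carry
    is known one clock period before the bit changes."""
    bits = [0] * n_bits
    enables = [0] * n_bits   # enables[k]: bit k may toggle on the next clock
    values = []
    for _ in range(n_clocks):
        bits[0] ^= 1                          # first bit: clock frequency
        for k in range(1, n_bits):
            bits[k] ^= enables[k]             # higher bits: pre-armed carries
        for k in range(1, n_bits):            # carries for the *next* clock
            enables[k] = int(all(bits[j] for j in range(k)))
        values.append(int("".join(map(str, reversed(bits))), 2))
    return values
```

    For example, `pipelined_counter(3, 8)` yields the sequence 1, 2, ..., 7, 0, mirroring the abstract's claim that bit changes need only carries computed on the previous clock.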

  2. Modular high speed counter employing edge-triggered code

    DOEpatents

    Vanstraelen, G.F.

    1993-06-29

    A high speed modular counter (100) utilizing a novel counting method in which the first bit changes with the frequency of the driving clock, and changes in the higher order bits are initiated one clock pulse after a "0" to "1" transition of the next lower order bit. This allows all carries to be known one clock period in advance of a bit change. The present counter is modular and utilizes two types of standard counter cells. A first counter cell determines the zero bit. The second counter cell determines any other higher order bit. Additional second counter cells are added to the counter to accommodate any count length without affecting speed.

  3. A New Interpretation of the Shannon Entropy Measure

    DTIC Science & Technology

    2015-01-01

    ...for higher-order and hybrid uncertainty forms which occur across many application domains, e.g. [25]. As a preliminary step towards developing higher...

  4. On the relationships between higher and lower bit-depth system measurements

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Haefner, David P.; Doe, Joshua M.

    2018-04-01

    The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the number of bits available, relative to a fixed dynamic range, determines the quantization artifacts exhibited. From a measurement standpoint, measurements should be performed at the highest bit-depth available. In this correspondence, we describe the relationship between higher and lower bit-depth measurements and present the limits to which quantization alters the observed measurements. Specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines for how systems of lower bit-depth should be characterized, together with the corresponding experimental methods.

  5. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Graell i Amat, Alexandre; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  6. Low latency counter event indication

    DOEpatents

    Gara, Alan G. [Mount Kisco, NY]; Salapura, Valentina [Chappaqua, NY]

    2008-09-16

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
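    The low-latency scheme described above can be sketched in software. The class below models a single counter of the hybrid array; the field names, widths, and threshold value are illustrative, not taken from the patent.

```python
class HybridCounterCell:
    """One counter of the hybrid array: a narrow fast counter holds the
    low-order bits, a memory word holds the high-order bits, and an
    'interrupt arm' bit enables a fast interrupt on the next low-order
    rollover after the high-order count reaches the threshold."""

    def __init__(self, low_bits=12, threshold=5):
        self.low_bits = low_bits
        self.low = 0            # fast counter (lower order bits)
        self.high = 0           # memory-resident higher order bits
        self.threshold = threshold  # interrupt threshold register
        self.armed = False      # "interrupt arm" bit
        self.interrupt = False

    def count_event(self):
        self.low += 1
        if self.low == (1 << self.low_bits):   # overflow of the low bits
            self.low = 0
            if self.armed:                     # fast path: fire on rollover
                self.interrupt = True
            self.high += 1                     # control logic updates memory
            if self.high == self.threshold:    # arm for the *next* rollover
                self.armed = True
```

    With a 2-bit low counter and a threshold of 1, the fourth event sets the arm bit and the eighth event (the next rollover) fires the interrupt, matching the two-stage behavior in the abstract.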

  7. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-08-24

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.

  8. Bit-level quantum color image encryption scheme with quantum cross-exchange operation and hyper-chaotic system

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian

    2018-06-01

    In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting a quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, a quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security, since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness, and greater unpredictability than low-dimensional chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.

  9. Pseudo-color coding method for high-dynamic single-polarization SAR images

    NASA Astrophysics Data System (ADS)

    Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi

    2018-04-01

    A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation and intensity) color space, the method carries out high-dynamic range tone mapping and pseudo-color processing simultaneously in order to avoid loss of details and to improve object identifiability. It is a highly efficient global algorithm.

  10. Implications of scaling on static RAM bit cell stability and reliability

    NASA Astrophysics Data System (ADS)

    Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael

    1993-01-01

    In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. The analysis techniques presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. These results are used to provide comprehensive bit cell characterization, which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability, which can be accomplished during the early stages of product development.

  11. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful of codeword errors, if any, can be observed in simulation. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
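    The moment-based idea can be illustrated as follows. Because bit errors arrive in whole-codeword bursts, the interval is built from the per-codeword error counts rather than per-bit statistics; this is a sketch of the general approach under a normal approximation, not necessarily the paper's exact construction.

```python
import math

def ber_confidence_interval(errors_per_codeword, n_bits_per_codeword, z=1.96):
    """BER confidence interval from the first and second sample moments
    of the number of bit errors per codeword, so that correlation of
    bit errors within a codeword widens the interval appropriately."""
    n = len(errors_per_codeword)
    mean = sum(errors_per_codeword) / n
    var = sum((e - mean) ** 2 for e in errors_per_codeword) / (n - 1)
    ber = mean / n_bits_per_codeword
    # standard error of the per-codeword mean, scaled down to a BER
    half_width = z * math.sqrt(var / n) / n_bits_per_codeword
    return ber - half_width, ber + half_width
```

    On clustered data such as `[0]*90 + [10]*10` with 100-bit codewords, this interval comes out roughly three times wider than one that wrongly treats the 10,000 bits as independent, illustrating the narrowness the abstract warns about.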

  12. Universal Decoder for PPM of any Order

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.

    2010-01-01

    A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus to be capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The binary vector of PPM symbols, x, is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot.
In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
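    The bit-to-symbol mapping described above, blocks of m = log2(M) coded bits each selecting one of M pulse slots, can be sketched directly; this models only the modulation step, not the iterative decoder.

```python
def ppm_modulate(bits, M):
    """Map a coded bit vector onto M-ary PPM symbols: each block of
    m = log2(M) bits selects the single slot (of M) that carries the
    pulse (1); all other slots in the symbol period are 0."""
    m = M.bit_length() - 1
    assert 1 << m == M and len(bits) % m == 0, "M must be a power of 2"
    symbols = []
    for i in range(0, len(bits), m):
        slot = int("".join(map(str, bits[i:i + m])), 2)
        symbols.append([1 if s == slot else 0 for s in range(M)])
    return symbols
```

    For instance, with M = 4 (m = 2), the coded bits 0110 split into blocks 01 and 10, which place pulses in slots 1 and 2 of two consecutive symbol periods.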

  13. Pulse transmission receiver with higher-order time derivative pulse correlator

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-09-16

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission receiver includes: a higher-order time derivative pulse correlator; a demodulation decoder coupled to the higher-order time derivative pulse correlator; a clock coupled to the demodulation decoder; and a pseudorandom polynomial generator coupled to both the higher-order time derivative pulse correlator and the clock. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, reducing potentially harmful interference to existing radio frequency services and users, and simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  14. Frame synchronization methods based on channel symbol measurements

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.

    1989-01-01

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.

  15. Guidance Material for Mode S-Specific Protocol Application Avionics

    DTIC Science & Technology

    2007-06-04

    the high-order 28 bits of each register are used to specify the configuration state of uplink MSP channels, while the low-order 28 bits of each...TID field contains the 24-bit Mode S address of the threat (when the threat is Mode S equipped). The low-order 2 bits of the TID field are cleared. If...the register should be sufficient to ensure that the maximum latency of each data value is not exceeded. (Note: If all five of the status fields in the

  16. Effect of using different cover image quality to obtain robust selective embedding in steganography

    NASA Astrophysics Data System (ADS)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: First, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting condition. Second, it nominates the most useful blocks for embedding based on their entropy and average. Third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed to select a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the quality of the cover image has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
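    The final embedding step, writing a secret bit into a chosen higher bit-plane of a pixel, reduces to simple bit masking. This toy function shows only that step and omits the paper's block-selection stages (lighting condition, entropy, average).

```python
def embed_bit(pixel, secret_bit, plane):
    """Write one secret bit into the given bit-plane of an 8-bit pixel
    value (plane 0 = LSB, plane 7 = MSB): clear the target bit, then
    OR in the secret bit shifted to that position."""
    return (pixel & ~(1 << plane)) | (secret_bit << plane)
```

    Embedding in plane 2 of pixel 0b10101010 gives 0b10101110, a perturbation of 4 gray levels; higher planes perturb the pixel more, which is the robustness/visibility trade-off the paper studies.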

  17. Fabrication of Fe-Based Diamond Composites by Pressureless Infiltration

    PubMed Central

    Li, Meng; Sun, Youhong; Meng, Qingnan; Wu, Haidong; Gao, Ke; Liu, Baochang

    2016-01-01

    A metal-based matrix is usually used for the fabrication of diamond bits in order to achieve favorable properties and easy processing. In the effort to reduce cost and attain the desired bit properties, researchers have brought more attention to diamond composites. In this paper, Fe-based impregnated diamond composites for drill bits were fabricated by using a pressureless infiltration sintering method at 970 °C for 5 min. In addition, boron was introduced into the Fe-based diamond composites. The influence of boron on the density, hardness, bending strength, grinding ratio, and microstructure was investigated. The Fe-based diamond composite with 1 wt % B has the best overall performance, with the grinding ratio in particular improving by 80%. Comparison with tungsten carbide (WC)-based diamond composites, with and without 1 wt % B, showed that the Fe-based diamond composite with 1 wt % B exhibits higher bending strength and wear resistance, satisfying the requirements of drill bits. PMID:28774124

  18. Pulse transmission transceiver architecture for low power communications

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-08-05

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A method of pulse transmission communications includes: generating a modulated pulse signal waveform; transforming said modulated pulse signal waveform into at least one higher-order derivative waveform; and transmitting said at least one higher-order derivative waveform as an emitted pulse. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, reducing potentially harmful interference to existing radio frequency services and users, and simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  19. Using Optimal Dependency-Trees for Combinatorial Optimization: Learning the Structure of the Search Space.

    DTIC Science & Technology

    1997-01-01

    create a dependency tree containing an optimum set of n-1 first-order dependencies. To do this, first, we select an arbitrary bit Xroot to place at the... the root to an arbitrary bit Xroot. -For all other bits Xi, set bestMatchingBitInTree[Xi] to Xroot. -While not all bits have been

  20. An interlaboratory study of TEX86 and BIT analysis of sediments, extracts, and standard mixtures

    NASA Astrophysics Data System (ADS)

    Schouten, Stefan; Hopmans, Ellen C.; Rosell-Melé, Antoni; Pearson, Ann; Adam, Pierre; Bauersachs, Thorsten; Bard, Edouard; Bernasconi, Stefano M.; Bianchi, Thomas S.; Brocks, Jochen J.; Carlson, Laura Truxal; Castañeda, Isla S.; Derenne, Sylvie; Selver, Ayça. Doǧrul; Dutta, Koushik; Eglinton, Timothy; Fosse, Celine; Galy, Valier; Grice, Kliti; Hinrichs, Kai-Uwe; Huang, Yongsong; Huguet, Arnaud; Huguet, Carme; Hurley, Sarah; Ingalls, Anitra; Jia, Guodong; Keely, Brendan; Knappy, Chris; Kondo, Miyuki; Krishnan, Srinath; Lincoln, Sara; Lipp, Julius; Mangelsdorf, Kai; Martínez-García, Alfredo; Ménot, Guillemette; Mets, Anchelique; Mollenhauer, Gesine; Ohkouchi, Naohiko; Ossebaar, Jort; Pagani, Mark; Pancost, Richard D.; Pearson, Emma J.; Peterse, Francien; Reichart, Gert-Jan; Schaeffer, Philippe; Schmitt, Gaby; Schwark, Lorenz; Shah, Sunita R.; Smith, Richard W.; Smittenberg, Rienk H.; Summons, Roger E.; Takano, Yoshinori; Talbot, Helen M.; Taylor, Kyle W. R.; Tarozo, Rafael; Uchida, Masao; van Dongen, Bart E.; Van Mooy, Benjamin A. S.; Wang, Jinxiang; Warren, Courtney; Weijers, Johan W. H.; Werne, Josef P.; Woltering, Martijn; Xie, Shucheng; Yamamoto, Masanobu; Yang, Huan; Zhang, Chuanlun L.; Zhang, Yige; Zhao, Meixun; Damsté, Jaap S. Sinninghe

    2013-12-01

    Two commonly used proxies based on the distribution of glycerol dialkyl glycerol tetraethers (GDGTs) are the TEX86 (TetraEther indeX of 86 carbon atoms) paleothermometer for sea surface temperature reconstructions and the BIT (Branched Isoprenoid Tetraether) index for reconstructing soil organic matter input to the ocean. An initial round-robin study of two sediment extracts, in which 15 laboratories participated, showed relatively consistent TEX86 values (reproducibility ±3-4°C when translated to temperature) but a large spread in BIT measurements (reproducibility ±0.41 on a scale of 0-1). Here we report results of a second round-robin study with 35 laboratories in which three sediments, one sediment extract, and two mixtures of pure, isolated GDGTs were analyzed. The results for TEX86 and BIT index showed improvement compared to the previous round-robin study. The reproducibility, indicating interlaboratory variation, of TEX86 values ranged from 1.3 to 3.0°C when translated to temperature. These results are similar to those of other temperature proxies used in paleoceanography. Comparison of the results obtained from one of the three sediments showed that TEX86 and BIT indices are not significantly affected by interlaboratory differences in sediment extraction techniques. BIT values of the sediments and extracts were at the extremes of the index with values close to 0 or 1, and showed good reproducibility (ranging from 0.013 to 0.042). However, the measured BIT values for the two GDGT mixtures, with known molar ratios of crenarchaeol and branched GDGTs, had intermediate BIT values and showed poor reproducibility and a large overestimation of the "true" (i.e., molar-based) BIT index. The latter is likely due to, among other factors, the higher mass spectrometric response of branched GDGTs compared to crenarchaeol, which also varies among mass spectrometers. 
Correction for this different mass spectrometric response showed a considerable improvement in the reproducibility of BIT index measurements among laboratories, as well as a substantially improved estimation of molar-based BIT values. This suggests that standard mixtures should be used in order to obtain consistent, and molar-based, BIT values.
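    The suggested correction can be sketched as follows. The response factor would be determined per instrument from a standard mixture of known molar ratio, and individual laboratories' corrections may differ in detail from this illustration.

```python
def bit_index(branched_areas, cren_area, response_factor=1.0):
    """BIT index from chromatogram peak areas, dividing the summed
    branched-GDGT areas by a per-instrument mass spectrometric response
    factor (relative to crenarchaeol) before forming
    BIT = branched / (branched + crenarchaeol). Sketch only."""
    branched = sum(branched_areas) / response_factor
    return branched / (branched + cren_area)
```

    For example, if branched GDGTs yield twice the signal per mole (response factor 2.0), branched areas of [2, 2, 2] against a crenarchaeol area of 6 give a molar-based BIT of 1/3 instead of the uncorrected, overestimated 1/2.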

  1. Space and power efficient hybrid counters array

    DOEpatents

    Gara, Alan G. [Mount Kisco, NY]; Salapura, Valentina [Chappaqua, NY]

    2009-05-12

    A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device for receiving signals representing occurrences of events from an event source and providing a first count value corresponding to the lower order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing the higher order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates updating a value of a corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, a combination of the first and second count values provides an instantaneous measure of the number of events received.

  2. Space and power efficient hybrid counters array

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-03-30

    A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device for receiving signals representing occurrences of events from an event source and providing a first count value corresponding to the lower order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing the higher order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates updating a value of a corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, a combination of the first and second count values provides an instantaneous measure of the number of events received.

  3. Analyzing Enron Data: Bitmap Indexing Outperforms MySQL Queries by Several Orders of Magnitude

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stockinger, Kurt; Rotem, Doron; Shoshani, Arie

    2006-01-28

    FastBit is an efficient, compressed bitmap indexing technology that was developed in our group. In this report we evaluate the performance of MySQL and FastBit for analyzing the email traffic of the Enron dataset. The first finding shows that materializing the join results of several tables significantly improves the query performance. The second finding shows that FastBit outperforms MySQL by several orders of magnitude.
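    The idea behind bitmap indexing can be shown in a few lines: one bit-vector per distinct column value turns equality-predicate queries into bitwise ANDs. Real FastBit adds compression (word-aligned hybrid coding) and binning; this sketch simply uses Python integers as uncompressed bit-vectors.

```python
from collections import defaultdict

def build_bitmaps(values):
    """Build one bit-vector per distinct value: bit r of bitmaps[v]
    is set iff row r holds value v."""
    bitmaps = defaultdict(int)
    for row, v in enumerate(values):
        bitmaps[v] |= 1 << row
    return dict(bitmaps)

def query_and(bitmap_a, bitmap_b):
    """Rows satisfying both equality predicates: AND the bit-vectors,
    then list the set bit positions (row numbers)."""
    result = bitmap_a & bitmap_b
    return [r for r in range(result.bit_length()) if (result >> r) & 1]
```

    A query such as "emails from alice on Monday" becomes `query_and(by_sender["alice"], by_day["mon"])`, a single AND over precomputed vectors rather than a row-by-row scan or join.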

  4. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
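    The decomposition idea the paper builds on can be illustrated with the Fibonacci (Zeckendorf) case, one of the schemes it evaluates: a pixel is expressed over Fibonacci weights, giving more, lower-weight virtual bit-planes than the 8 binary ones. The paper's proposed 16-plane representation is a different decomposition; this sketch shows only the Fibonacci baseline.

```python
def fibonacci_planes(pixel, n_planes):
    """Greedy Zeckendorf decomposition of a pixel value over the
    Fibonacci weights 1, 2, 3, 5, 8, ...; returns one digit per
    virtual bit-plane, planes[0] being the least significant."""
    weights = [1, 2]
    while len(weights) < n_planes:
        weights.append(weights[-1] + weights[-2])
    planes = []
    for w in reversed(weights):        # greedy, largest weight first
        if pixel >= w:
            planes.append(1)
            pixel -= w
        else:
            planes.append(0)
    return list(reversed(planes))
```

    For instance, the value 10 decomposes as 8 + 2 over weights [1, 2, 3, 5, 8, 13]; because consecutive weights differ less than successive powers of two, flipping a mid-level Fibonacci plane perturbs the pixel less than flipping the corresponding binary plane.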

  5. Lathe tool bit and holder for machining fiberglass materials

    NASA Technical Reports Server (NTRS)

    Winn, L. E. (Inventor)

    1972-01-01

    A lathe tool and holder combination for machining resin impregnated fiberglass cloth laminates is described. The tool holder and tool bit combination is designed to accommodate a conventional carbide-tipped, round shank router bit as the cutting medium, and provides an infinite number of cutting angles in order to produce a true and smooth surface in the fiberglass material workpiece with every pass of the tool bit. The technique utilizes damaged router bits which ordinarily would be discarded.

  6. A design method for high performance seismic data acquisition based on oversampling delta-sigma modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shanghua; Xue, Bing

    2017-04-01

The dynamic range of the currently most widely used 24-bit seismic data acquisition devices is 10-20 dB lower than that of broadband seismometers, which can affect the completeness of seismic waveform recordings under certain conditions. This problem is not easy to solve, however, because analog-to-digital converter (ADC) chips with more than 24 bits are not available on the market, so the key difficulty for higher-resolution data acquisition devices lies in achieving an ADC circuit with more than 24 bits. In this paper, we propose a method in which an adder, an integrator, a digital-to-analog converter chip, a field-programmable gate array, and an existing low-resolution ADC chip are used to build a third-order 16-bit oversampling delta-sigma modulator. This modulator is equipped with a digital decimation filter, thus forming a complete analog-to-digital converting circuit. Experimental results show that, within the 0.1-40 Hz frequency range, the circuit board's dynamic range reaches 158.2 dB, its resolution reaches 25.99 bits, and its linearity error is below 2.5 ppm, which is better than what is achieved by the commercial 24-bit ADC chips ADS1281 and CS5371. This demonstrates that the proposed method may alleviate or even solve the amplitude-limitation problem that broadband observation systems so commonly face during strong earthquakes.
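As a rough illustration of the oversampling principle (not the paper's third-order 16-bit design), a first-order delta-sigma modulator can be sketched in a few lines; `first_order_dsm` is a hypothetical helper, and the digital decimation filter is reduced to a plain average:

```python
def first_order_dsm(samples):
    """First-order delta-sigma modulator sketch: for each input sample in
    [-1, 1], integrate the difference between the input and the 1-bit
    feedback, then emit the sign of the integrator as the 1-bit output."""
    integ, fb, out = 0.0, 0.0, []
    for s in samples:
        integ += s - fb                      # delta stage + integrator
        fb = 1.0 if integ >= 0.0 else -1.0   # 1-bit quantizer / DAC feedback
        out.append(fb)
    return out

bits = first_order_dsm([0.3] * 5000)
# a decimation filter (here simply an average) recovers the DC input
recovered = sum(bits) / len(bits)
```

Averaging the 1-bit stream recovers the input to within O(1/N), which is how a low-resolution quantizer plus oversampling yields high effective resolution; the paper's third-order loop with a 16-bit quantizer pushes this much further.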

  7. Pulse transmission receiver with higher-order time derivative pulse generator

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-08-12

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission receiver includes: a front-end amplification/processing circuit; a synchronization circuit coupled to the front-end amplification/processing circuit; a clock coupled to the synchronization circuit; a trigger signal generator coupled to the clock; and at least one higher-order time derivative pulse generator coupled to the trigger signal generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  8. Bit-error rate for free-space adaptive optics laser communications.

    PubMed

    Tyson, Robert K

    2002-04-01

    An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
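The qualitative effect can be sketched with a Monte-Carlo estimate of the average OOK bit-error rate under log-normal scintillation. The simplified error model Q(√SNR·I/2) and the function names are assumptions, not the paper's formulation; the sketch only shows that reducing the scintillation index (as adaptive optics compensation does) lowers the average BER:

```python
import numpy as np
from math import erfc

# Gaussian Q-function, vectorized over a NumPy array
_q = np.vectorize(lambda x: 0.5 * erfc(x / np.sqrt(2.0)))

def ook_ber(snr_db, scint_index, n=200_000, seed=0):
    """Monte-Carlo average OOK BER under log-normal scintillation with
    scintillation index sigma_I^2 = scint_index and unit mean intensity.
    The per-bit error model Q(sqrt(SNR) * I / 2) is a simplification."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    sigma2 = np.log(1.0 + scint_index)        # log-intensity variance
    I = np.exp(rng.normal(-sigma2 / 2.0, np.sqrt(sigma2), n))  # E[I] = 1
    return float(np.mean(_q(np.sqrt(snr) * I / 2.0)))
```

Because the Q-function is convex over its positive argument, deep fades dominate the average, so suppressing scintillation by even a modest factor can improve the BER by orders of magnitude, consistent with the abstract's conclusion.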

  9. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    PubMed Central

    Park, Keunyeol; Song, Minkyu

    2018-01-01

This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frame/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273

  10. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    PubMed

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frame/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.

  11. Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Gavito, D.; Azar, J.J.

    1994-09-01

During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs. These innovations include development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROPs) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.

  12. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

In this paper, we reinvestigate the solution of the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat; rather, they lie in a chaotic region. However, these data sequences are correlated in high order between past, present, and future data. We use the Cascade Error Projection (CEP) learning algorithm to capture the high order correlation between past and present data in order to predict future data under limited weight quantization constraints. This helps to predict future information that provides better estimates in time for an intelligent control system. In our earlier work, it was shown that CEP can sufficiently learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and a color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as low as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more weight quantization bits are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
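The round-off versus truncation contrast described above can be seen in a minimal uniform weight quantizer. This is a generic sketch with an assumed [-1, 1) weight range, not the CEP implementation:

```python
import math

def quantize(w, bits, mode="round"):
    """Uniform quantization of weight `w` to `bits` bits over [-1, 1).
    mode 'round' snaps to the nearest level; 'trunc' truncates toward
    minus infinity, which biases the error by about half a step."""
    step = 2.0 / (1 << bits)
    if mode == "round":
        q = math.floor(w / step + 0.5) * step
    else:  # truncation
        q = math.floor(w / step) * step
    return max(-1.0, min(1.0 - step, q))   # clamp to representable range
```

Truncation always moves weights toward minus infinity, so its quantization error has a one-sided bias of roughly half a step, which is one plausible reason the truncation error surfaces reported above are less symmetric around zero than the round-off ones.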

  13. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
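A minimal sketch of the contrast between plain low-order-bit replacement and a modular (remainder-based) embedding follows; the helper names and the nearest-representative rule are illustrative assumptions, not the patented method itself:

```python
def embed_mod(value, secret, k, max_val=255):
    """Embed k secret bits by moving `value` to the nearest sample whose
    remainder mod 2**k equals `secret` (nearest-representative rule)."""
    m = 1 << k
    base = value - (value % m) + secret
    candidates = [c for c in (base - m, base, base + m) if 0 <= c <= max_val]
    return min(candidates, key=lambda c: abs(c - value))

def extract_mod(stego, k):
    """Retrieve the embedded bits from the remainder."""
    return stego % (1 << k)

def embed_replace(value, secret, k):
    """Plain low-order-bit replacement, for comparison."""
    m = 1 << k
    return value - (value % m) + secret
```

Because the modular scheme may move the value either up or down to reach the required remainder, its average distortion is lower than that of one-sided bit replacement, matching the error-reduction claim above.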

  14. Next generation PET data acquisition architectures

    NASA Astrophysics Data System (ADS)

    Jones, W. F.; Reed, J. H.; Everman, J. L.; Young, J. W.; Seese, R. D.

    1997-06-01

New architectures for higher performance data acquisition in PET are proposed. Improvements are demanded primarily by three areas of advancing PET state of the art. First, larger detector arrays such as the Hammersmith ECAT® EXACT HR++ exceed the addressing capacity of 32 bit coincidence event words. Second, better scintillators (LSO) make depth-of-interaction (DOI) and time-of-flight (TOF) operation more practical. Third, fully optimized single photon attenuation correction requires higher rates of data collection. New technologies which enable the proposed third generation Real Time Sorter (RTS III) include: (1) 80 Mbyte/sec Fibre Channel RAID disk systems, (2) PowerPC on both VMEbus and PCI Local bus, and (3) quadruple interleaved DRAM controller designs. Data acquisition flexibility is enhanced through a wider 64 bit coincidence event word. PET methodology support includes DOI (6 bits), TOF (6 bits), multiple energy windows (6 bits), 512×512 sinogram indexes (18 bits), and 256 crystal rings (16 bits). Throughput of 10 M events/sec is expected for list-mode data collection as well as both on-line and replay histogramming. Fully efficient list-mode storage for each PET application is provided by real-time bit packing of only the active event word bits. Real-time circuits provide DOI rebinning.
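The event-word idea can be illustrated by packing the quoted field widths into a 64 bit integer. The field order below is hypothetical, since the abstract only lists the widths:

```python
# Hypothetical field order; the abstract lists only the widths (DOI 6,
# TOF 6, energy windows 6, sinogram index 18, crystal rings 16 = 52 bits).
FIELDS = [("doi", 6), ("tof", 6), ("energy", 6), ("sino", 18), ("ring", 16)]

def pack(values):
    """Pack the named fields into one event word, lowest field first."""
    word, shift = 0, 0
    for name, width in FIELDS:
        v = values[name]
        assert 0 <= v < (1 << width), f"{name} out of range"
        word |= v << shift
        shift += width
    return word          # 52 active bits of the 64 bit event word

def unpack(word):
    """Recover the fields by masking and shifting."""
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out
```

Storing only the active bits per application, as the abstract describes, amounts to dropping unused fields from this layout before writing the list-mode stream.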

  15. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a "hard decision decoder" which, however, exploits soft information on the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.

  16. Efficient Bit-to-Symbol Likelihood Mappings

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings, which represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. A recent implementation of the algorithm in hardware yielded an 8-percent reduction in overall area relative to the prior design.
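For context, the direct (unoptimized) symbol-to-bit mapping that such an algorithm accelerates can be written as follows, assuming symbol labels are their integer indices; the innovation's actual optimized algorithm is not described in the abstract:

```python
import math

def bit_llrs(symbol_probs, bits_per_symbol):
    """Map symbol likelihoods to per-bit log-likelihood ratios:
    LLR(b_k) = log P(b_k = 0) - log P(b_k = 1), obtained by summing the
    probabilities of all symbols whose label has bit k equal to 0 or 1."""
    llrs = []
    for k in range(bits_per_symbol):
        p0 = sum(p for s, p in enumerate(symbol_probs) if not (s >> k) & 1)
        p1 = sum(p for s, p in enumerate(symbol_probs) if (s >> k) & 1)
        llrs.append(math.log(p0) - math.log(p1))
    return llrs
```

Each bit requires summing over half the constellation, so for an M-ary constellation the naive cost is O(M log M) per symbol; reducing this cost in hardware is exactly where the area savings quoted above come from.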

  17. Smart built-in test

    NASA Technical Reports Server (NTRS)

    Richards, Dale W.

    1990-01-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  18. Smart built-in test

    NASA Astrophysics Data System (ADS)

    Richards, Dale W.

    1990-03-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  19. Hydrodynamics beyond Navier-Stokes: the slip flow model.

    PubMed

    Yudistiawan, Wahyu P; Ansumali, Santosh; Karlin, Iliya V

    2008-07-01

Recently, analytical solutions for the nonlinear Couette flow demonstrated the relevance of the lattice Boltzmann (LB) models to hydrodynamics beyond the continuum limit [S. Ansumali, Phys. Rev. Lett. 98, 124502 (2007)]. In this paper, we present a systematic study of the simplest LB kinetic equation, the nine-bit model in two dimensions, in order to quantify it as a slip flow approximation. Details of the aforementioned analytical solution are presented, and results are extended to include a general shear- and force-driven unidirectional flow in confined geometry. Exact solutions for the velocity, as well as for pertinent higher-order moments of the distribution functions, are obtained in both Couette and Poiseuille steady-state flows for all values of the rarefaction parameter (Knudsen number). Results are compared with the slip flow solution by Cercignani, and a good quantitative agreement is found for both flow situations. Thus, the standard nine-bit LB model is characterized as a valid and self-consistent slip flow model for simulations beyond the Navier-Stokes approximation.

  20. Practical steganalysis of digital images: state of the art

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2002-04-01

    Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.

  1. WASP8 Download

    EPA Pesticide Factsheets

All of the WASP installers are listed below: a 64-bit Windows installer, a 64-bit Mac OS X installer (Yosemite or higher), and a 64-bit Linux installer (built on Ubuntu). You will need to know how to install software on your target operating system.

  2. Development of a Tool Condition Monitoring System for Impregnated Diamond Bits in Rock Drilling Applications

    NASA Astrophysics Data System (ADS)

    Perez, Santiago; Karakus, Murat; Pellet, Frederic

    2017-05-01

The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, such a mechanism can be altered by the blunting of the bit, which ultimately leads to less than optimal drilling performance. For this reason, this paper aims at investigating the applicability of artificial intelligence-based techniques to monitor the tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bit conditions while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of the acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then utilized to create two approaches to predicting the wear state of the bits. One approach is based on the combination of the aforementioned variables and the other on the specific energy of drilling. The two approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained, although the subset composed of AErms and tob excels due to its high classification performance and fewer input variables.

  3. Hamming and Accumulator Codes Concatenated with MPSK or QAM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel

    2009-01-01

    In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
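Step 4's inner code is simple to state: a rate-1, two-state accumulator is just a running XOR, and its inverse is a first difference. A minimal sketch follows (the scheme's Hamming outer code, interleaver, and parallel streams are omitted):

```python
def accumulate(bits):
    """Rate-1, two-state accumulator code: y[i] = y[i-1] XOR x[i]."""
    y, state = [], 0
    for b in bits:
        state ^= b
        y.append(state)
    return y

def de_accumulate(y):
    """Inverse of the accumulator: x[i] = y[i] XOR y[i-1]."""
    x, prev = [], 0
    for b in y:
        x.append(b ^ prev)
        prev = b
    return x
```

Because the accumulator has only two states and each stream gets its own instance, the per-stream encoders can run in parallel at a modest clock rate while the aggregate throughput scales with the number of streams, which is the architectural point made above.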

  4. A Ku band 5 bit MEMS phase shifter for active electronically steerable phased array applications

    NASA Astrophysics Data System (ADS)

    Sharma, Anesh K.; Gautam, Ashu K.; Farinelli, Paola; Dutta, Asudeb; Singh, S. G.

    2015-03-01

    The design, fabrication and measurement of a 5 bit Ku band MEMS phase shifter in different configurations, i.e. a coplanar waveguide and microstrip, are presented in this work. The development architecture is based on the hybrid approach of switched and loaded line topologies. All the switches are monolithically manufactured on a 200 µm high resistivity silicon substrate using 4 inch diameter wafers. The first three bits (180°, 90° and 45°) are realized using switched microstrip lines and series ohmic MEMS switches whereas the fourth and fifth bits (22.5° and 11.25°) consist of microstrip line sections loaded by shunt ohmic MEMS devices. Individual bits are fabricated and evaluated for performance and the monolithic device is a 5 bit Ku band (16-18 GHz) phase shifter with very low average insertion loss of the order of 3.3 dB and a return loss better than 15 dB over the 32 states with a chip area of 44 mm2. A total phase shift of 348.75° with phase accuracy within 3° is achieved over all of the states. The performance of individual bits has been optimized in order to achieve an integrated performance so that they can be implemented into active electronically steerable antennas for phased array applications.

  5. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
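The structure LOOP simulates, a first-order AFC loop with a first-order R-C filter, can be sketched as follows; the gain, filter constant, and function name here are illustrative assumptions, not LOOP's actual parameters:

```python
import random

def afc_loop(f0, gain, alpha, n_bits, noise_std=0.0, seed=1):
    """First-order AFC loop sketch: the frequency-discriminator output is
    smoothed by a one-pole (R-C style) filter and fed back to correct the
    oscillator.  Returns the per-bit frequency-error history."""
    random.seed(seed)
    f_err, filt, hist = f0, 0.0, []
    for _ in range(n_bits):
        meas = f_err + random.gauss(0.0, noise_std)   # noisy discriminator
        filt = alpha * filt + (1.0 - alpha) * meas    # first-order R-C filter
        f_err -= gain * filt                          # loop correction
        hist.append(f_err)
    return hist
```

With noise enabled, the mean and standard deviation of the tail of `hist` correspond to the statistics LOOP reports for the frequency error.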

  6. FastBit: Interactively Searching Massive Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes

    2009-06-23

As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
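The binning-plus-bitmap idea behind FastBit can be sketched with uncompressed, equality-encoded bitmaps; FastBit itself adds bitmap compression (WAH) and more sophisticated encodings on top of this:

```python
def build_bitmaps(values, bins):
    """Binned bitmap index: one bitmask per bin, with bit r set when
    row r falls in that bin (equality encoding, half-open bins)."""
    bitmaps = [0] * len(bins)
    for row, v in enumerate(values):
        for i, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                bitmaps[i] |= 1 << row
    return bitmaps

def query(bitmaps, bin_ids):
    """Rows matching any of the selected bins: a bitwise OR, so range
    predicates reduce to fast word-level operations."""
    mask = 0
    for i in bin_ids:
        mask |= bitmaps[i]
    return [r for r in range(mask.bit_length()) if (mask >> r) & 1]
```

Multi-attribute predicates combine the per-attribute masks with bitwise AND, which is why bitmap indexes answer structured queries so much faster than row scans.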

  7. An algorithm to compute the sequency ordered Walsh transform

    NASA Technical Reports Server (NTRS)

    Larsen, H.

    1976-01-01

A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminates Gray-code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse) while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering, computing logical autocorrelations, and selective bit reversing.
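For reference, the butterfly structure that both algorithms modify is the basic fast (Walsh-)Hadamard transform, shown below in natural (Hadamard) order; the sequency-ordered variants differ only in how inputs or outputs are reordered:

```python
def fwht(a):
    """Fast Walsh-Hadamard transform in natural (Hadamard) order.
    Length must be a power of two; returns a new list.  Applying it
    twice recovers the input scaled by the length (self-inverse up to n)."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly
        h *= 2
    return a
```

The sequency-ordered algorithms discussed above wrap exactly this O(n log n) butterfly, trading where the reordering happens (input, output, or inside the butterflies) rather than changing its cost.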

  8. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strength of temperature and salinity fluctuations, the rate of dissipation of the mean squared temperature, and the rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.

  9. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting

    2010-12-01

We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose an improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos, whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving characteristics of the histogram and providing high capacity.
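A chaos-driven bit-order shuffle of the kind described can be sketched with a logistic map; the map parameters below are fixed for illustration, whereas the paper selects them with a genetic algorithm:

```python
def logistic_permutation(n, r=3.99, x0=0.7):
    """Permutation of range(n) derived from a logistic-map sequence
    x -> r*x*(1-x): rank the indices by the chaotic value each receives.
    The (r, x0) pair acts as the shared key."""
    x, vals = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        vals.append(x)
    return sorted(range(n), key=lambda i: vals[i])

def shuffle_bits(bits, perm):
    return [bits[p] for p in perm]

def unshuffle_bits(shuffled, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = shuffled[i]
    return out
```

Only a receiver who knows the map parameters can regenerate the permutation and invert the shuffle, so the bit order itself carries part of the key material.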

  10. Technology Development and Field Trials of EGS Drilling Systems at Chocolate Mountain

    DOE Data Explorer

    Steven Knudsen

    2012-01-01

    Polycrystalline diamond compact (PDC) bits are routinely used in the oil and gas industry for drilling medium to hard rock but have not been adopted for geothermal drilling, largely due to past reliability issues and higher purchase costs. The Sandia Geothermal Research Department has recently completed a field demonstration of the applicability of advanced synthetic diamond drill bits for production geothermal drilling. Two commercially-available PDC bits were tested in a geothermal drilling program in the Chocolate Mountains in Southern California. These bits drilled the granitic formations with significantly better Rate of Penetration (ROP) and bit life than the roller cone bit they are compared with. Drilling records and bit performance data along with associated drilling cost savings are presented herein. The drilling trials have demonstrated PDC bit drilling technology has matured for applicability and improvements to geothermal drilling. This will be especially beneficial for development of Enhanced Geothermal Systems whereby resources can be accessed anywhere within the continental US by drilling to deep, hot resources in hard, basement rock formations.

  11. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    PubMed

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

High order modulation and demodulation technology can reconcile the frequency requirements of wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high order modulation for visual prosthesis, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, because the traditional DAPSK soft demodulation algorithm relies on division and is therefore complex to implement in hardware, an improved phase soft demodulation algorithm is put forward to reduce the hardware complexity for visual prosthesis. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the Chase algorithm is combined with hard decoding to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a symbol-level reliability measure is derived from the product of the bit reliabilities, which reduces the number of test vectors required by the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a biological channel attenuation model is included in the ECC circuit, and the data rate is 8 Mbps in both the simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experiments show that when demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER.
A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances. The experimental results show that, at the same coil distance, the RS ECC circuit achieves a BER about an order of magnitude lower than the demodulation circuit alone, and therefore provides higher communication reliability in the system. The improved phase soft demodulation algorithm and soft decoding algorithm proposed in this paper enable data communication that is more reliable than other demodulation systems, and provide a useful reference for further study of visual prosthesis systems.
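The symbol-level reliability step described above can be sketched in a few lines. This is an illustrative Python sketch, not the paper's implementation; the function names (`symbol_reliabilities`, `chase_test_patterns`) are hypothetical, and it assumes per-bit reliabilities (e.g. LLR magnitudes) are available from the soft demodulator:

```python
import itertools

def symbol_reliabilities(bit_rel, bits_per_symbol):
    # Symbol reliability as the product of its bit reliabilities,
    # avoiding the division used by traditional soft demodulation.
    syms = []
    for i in range(0, len(bit_rel), bits_per_symbol):
        r = 1.0
        for b in bit_rel[i:i + bits_per_symbol]:
            r *= b
        syms.append(r)
    return syms

def chase_test_patterns(sym_rel, n_least):
    # Chase decoding flips only the n least-reliable symbol positions,
    # so the number of test vectors is 2**n_least, not 2**len(sym_rel).
    order = sorted(range(len(sym_rel)), key=lambda i: sym_rel[i])[:n_least]
    patterns = []
    for k in range(n_least + 1):
        patterns.extend(itertools.combinations(order, k))
    return patterns
```

Restricting the flips to the least-reliable symbols is what shrinks the Chase search; with `n_least = 2` only 4 test vectors are tried per codeword.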

  12. Quasi-elastic light scattering: Signal storage, correlation, and spectrum analysis under control of an 8-bit microprocessor

    NASA Astrophysics Data System (ADS)

    Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter

    1987-03-01

The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs; shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore, access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods on the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses, but not the number of overflows, determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one; the power spectrum needs a pulse rate three times higher for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantages of the present system are the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
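Because the primary data are stored in full, the correlation function can be computed offline from the sampled counts after the measurement. A minimal Python sketch of that post-processing step (the `autocorrelation` name and the normalization by the number of available products are illustrative assumptions, not the original 8-bit code):

```python
def autocorrelation(counts, max_lag):
    # Photon-count correlation G(tau) = <n(t) * n(t + tau)>, estimated
    # from a stored primary data set after the measurement.
    n = len(counts)
    g = []
    for lag in range(max_lag + 1):
        s = sum(counts[t] * counts[t + lag] for t in range(n - lag))
        g.append(s / (n - lag))  # average over the available products
    return g
```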

  13. Perceptibility curve test for digital radiographs before and after correction for attenuation and correction for attenuation and visual response.

    PubMed

    Li, G; Welander, U; Yoshiura, K; Shi, X-Q; McDavid, W D

    2003-11-01

    Two digital image processing methods, correction for X-ray attenuation and correction for attenuation and visual response, have been developed. The aim of the present study was to compare digital radiographs before and after correction for attenuation and correction for attenuation and visual response by means of a perceptibility curve test. Radiographs were exposed of an aluminium test object containing holes ranging from 0.03 mm to 0.30 mm with increments of 0.03 mm. Fourteen radiographs were exposed with the Dixi system (Planmeca Oy, Helsinki, Finland) and twelve radiographs were exposed with the F1 iOX system (Fimet Oy, Monninkylä, Finland) from low to high exposures covering the full exposure ranges of the systems. Radiographs obtained from the Dixi and F1 iOX systems were 12 bit and 8 bit images, respectively. Original radiographs were then processed for correction for attenuation and correction for attenuation and visual response. Thus, two series of radiographs were created. Ten viewers evaluated all the radiographs in the same random order under the same viewing conditions. The object detail having the lowest perceptible contrast was recorded for each observer. Perceptibility curves were plotted according to the mean of observer data. The perceptibility curves for processed radiographs obtained with the F1 iOX system are higher than those for originals in the exposure range up to the peak, where the curves are basically the same. For radiographs exposed with the Dixi system, perceptibility curves for processed radiographs are higher than those for originals for all exposures. Perceptibility curves show that for 8 bit radiographs obtained from the F1 iOX system, the contrast threshold was increased in processed radiographs up to the peak, while for 12 bit radiographs obtained with the Dixi system, the contrast threshold was increased in processed radiographs for all exposures. 
When comparisons were made between radiographs corrected for attenuation and corrected for attenuation and visual response, basically no differences were found. Radiographs processed for correction for attenuation and correction for attenuation and visual response may improve perception, especially for 12 bit originals.

  14. Camouflaging in Digital Image for Secure Communication

    NASA Astrophysics Data System (ADS)

    Jindal, B.; Singh, A. P.

    2013-06-01

The present paper reports on a new type of camouflaging in digital images for hiding crypto-data using moderate bit alteration in the pixel. In the proposed method, cryptography is combined with steganography to provide two-layer security for the hidden data. The novelty of the algorithm proposed in the present work lies in the fact that the information about the hidden bit is reflected by a parity condition in one part of the image pixel. The remaining part of the image pixel is used to perform local pixel adjustment to improve the visual perception of the cover image. In order to examine the effectiveness of the proposed method, image quality measuring parameters are computed. In addition, security analysis is carried out by comparing the histograms of the cover and stego images. This scheme provides higher security as well as robustness against intentional and unintentional attacks.
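A minimal sketch of the parity idea, assuming (hypothetically) that the upper nibble of an 8-bit pixel carries the parity condition and the lower nibble is left free for local pixel adjustment; the paper's actual pixel split may differ:

```python
def embed_bit(pixel, secret_bit):
    # Make the count of 1-bits in the upper nibble match the secret
    # bit's parity; the lower nibble stays free for visual adjustment.
    upper, lower = pixel >> 4, pixel & 0x0F
    if bin(upper).count("1") % 2 != secret_bit:
        upper ^= 1  # flip the lowest bit of the upper part
    return (upper << 4) | lower

def extract_bit(pixel):
    # The hidden bit is the parity of the upper nibble.
    return bin(pixel >> 4).count("1") % 2
```

At most one bit of the upper nibble changes, so the pixel value moves by at most 16 grey levels before any local adjustment is applied.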

  15. Pulse transmission transmitter including a higher order time derivative filter

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-09-23

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission transmitter includes: a clock; a pseudorandom polynomial generator coupled to the clock, the pseudorandom polynomial generator having a polynomial load input; an exclusive-OR gate coupled to the pseudorandom polynomial generator, the exclusive-OR gate having a serial data input; a programmable delay circuit coupled to both the clock and the exclusive-OR gate; a pulse generator coupled to the programmable delay circuit; and a higher order time derivative filter coupled to the pulse generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  16. Extensibility Experiments with the Software Life-Cycle Support Environment

    DTIC Science & Technology

    1991-11-01

    APRICOT) and Bit-Oriented Message Definer (BMD); and three from the Ada Software Repository (ASR) at White Sands - the NASA/Goddard Space Flight Center...Graphical Kernel System (GKS). c. AMS - The Automated Measurement System tool supports the definition, collection, and reporting of quality metric...Ada Primitive Order Compilation Order Tool (APRICOT) 2. Bit-Oriented Message Definer (BMD) 3. LGEN: A Language Generator Tool 4. File Checker 5

  17. Optimization of Mud Hammer Drilling Performance--A Program to Benchmark the Viability of Advanced Mud Hammer Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2006-03-01

    Operators continue to look for ways to improve hard rock drilling performance through emerging technologies. A consortium of Department of Energy, operator and industry participants put together an effort to test and optimize mud driven fluid hammers as one emerging technology that has shown promise to increase penetration rates in hard rock. The thrust of this program has been to test and record the performance of fluid hammers in full scale test conditions including hard formations at simulated depth, high density/high solids drilling muds, and realistic fluid power levels. This paper details the testing and results of testing two 7 3/4 inch diameter mud hammers with 8 1/2 inch hammer bits. A Novatek MHN5 and an SDS Digger FH185 mud hammer were tested with several bit types, with performance being compared to a conventional (IADC Code 537) tricone bit. These tools operated properly in all of the simulated downhole environments. The performance was in the range of the baseline tricone bit or better at lower borehole pressures, but at higher borehole pressures the performance was in the lower range or below that of the baseline tricone bit. A new drilling mode was observed while operating the MHN5 mud hammer. This mode was noticed as the weight on bit (WOB) was in transition from low to high applied load. During this new "transition drilling mode", performance was substantially improved and in some cases outperformed the tricone bit. Improvements were noted for the SDS tool while drilling with a more aggressive bit design. Future work includes the optimization of these or the next generation tools for operating in higher density and higher borehole pressure conditions and improving bit design and technology based on the knowledge gained from this test program.

  18. Analog Correlator Based on One Bit Digital Correlator

    NASA Technical Reports Server (NTRS)

    Prokop, Norman (Inventor); Krasowski, Michael (Inventor)

    2017-01-01

    A two input time domain correlator may perform analog correlation. In order to achieve high throughput rates with reduced or minimal computational overhead, the input data streams may be hard limited through adaptive thresholding to yield two binary bit streams. Correlation may be achieved through the use of a Hamming distance calculation, where the distance between the two bit streams approximates the time delay that separates them. The resulting Hamming distance approximates the correlation time delay with high accuracy.
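The scheme above can be sketched as follows. This is a hedged Python illustration of the algorithm only, not the patented circuit; thresholding at the running mean and the function names are assumptions:

```python
def hard_limit(samples):
    # Adaptive thresholding: hard-limit each sample against the mean,
    # turning an analog stream into a one-bit stream.
    th = sum(samples) / len(samples)
    return [1 if s > th else 0 for s in samples]

def hamming(a, b):
    # Hamming distance between two equal-length bit streams.
    return sum(x != y for x, y in zip(a, b))

def best_delay(ref_bits, sig_bits, max_lag):
    # The lag with the minimum Hamming distance approximates the
    # correlation time delay separating the two streams.
    n = len(sig_bits) - max_lag
    dists = [hamming(ref_bits[:n], sig_bits[lag:lag + n])
             for lag in range(max_lag + 1)]
    return dists.index(min(dists))
```

Replacing multiply-accumulate correlation with a bit-wise distance count is what keeps the computational overhead low at high throughput rates.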

  19. Investigation into the Impacts of Migration to Emergent NSA Suite B Encryption Standards

    DTIC Science & Technology

    2009-06-01

    detailed statistical information on the difference between the 1024-bit keys and 2048-bit keys. D. ENCRYPTION TAXONOMY The modern field of...because they had already published their ideas globally and most countries bar retroactive patenting of open source concepts. In September 2000, the...order of p operations in a finite field of numbers as large as p itself. If exhaustive search were the best attack on these systems, then bit

  20. Steganography on quantum pixel images using Shannon entropy

    NASA Astrophysics Data System (ADS)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

    This paper presents a steganographic algorithm based on the least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit pixel image to a quantum pixel image, which permits information to be communicated secretly within quantum pixel images for secure transmission through insecure channels. The algorithm offers higher security since it exploits the Shannon entropy of an image.

  1. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  2. A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations

    NASA Technical Reports Server (NTRS)

    Dydson, Roger W.; Goodrich, John W.

    2000-01-01

    Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128 bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving the varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined, and remedies are provided that effectively increase the accuracy of the MESA schemes while using current computer technology.

  3. Towards Terabit Memories

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Memories have been the major yardstick for the continuing validity of Moore's law. In single-transistor-per-bit dynamic random-access memories (DRAM), the number of bits per chip pretty much gives us the number of transistors. For decades, DRAMs have offered the largest storage capacity per chip. However, DRAM does not scale any longer, in either density or voltage, severely limiting its power efficiency to 10 fJ/b. A differential DRAM would gain four times in density and eight times in energy. Static CMOS RAM (SRAM), with its six transistors per cell, is gaining in reputation because it scales well in cell size and operating voltage, so that its fundamental advantages of speed, non-destructive read-out and low-power standby could lead to just 2.5 electrons/bit in standby and to a dynamic power efficiency of 2 aJ/b. With a projected 2020 density of 16 Gb/cm², SRAM would be as dense as normal DRAM and vastly better in power efficiency, which would mean a major change in the architecture and market scenario for DRAM versus SRAM. Non-volatile Flash memories have seen two quantum jumps in density well beyond the roadmap: multi-bit storage per transistor and high-density TSV (through-silicon via) technology. The number of electrons required per bit on the storage gate has been reduced since the first realization in 1996 by more than an order of magnitude, to 400 electrons/bit in 2010 for a complexity of 32 Gbit per chip at the 32 nm node. Chip stacking of eight chips with TSV has produced a 32 GByte solid-state drive (SSD). A stack of 32 chips with 2 b/cell at the 16 nm node will reach a density of 2.5 Terabit/cm². Non-volatile memory with a density of 10 × 10 nm²/bit is the target for widespread development. Phase-change memory (PCM) and resistive memory (RRAM) lead in cell density; they will reach 20 Gb/cm² in 2D, and higher with 3D chip stacking. This is still almost an order of magnitude less than Flash.
However, their read-out speed is about 10 times faster, with as yet little data on their energy per bit. As a read-only memory with unparalleled retention and lifetime, the ROM with electron-beam direct-write lithography (Chap. 8) should be considered for its projected 2D density of 250 Gb/cm² and a very small read energy of 0.1 μW/Gb/s. The lithography write speed of 10 ms/Terabit makes this ROM a serious contender for the optimum in non-volatile, tamper-proof storage.

  4. Magnet/Hall-Effect Random-Access Memory

    NASA Technical Reports Server (NTRS)

    Wu, Jiin-Chuan; Stadler, Henry L.; Katti, Romney R.

    1991-01-01

    In proposed magnet/Hall-effect random-access memory (MHRAM), bits of data stored magnetically in Permalloy (or equivalent) film memory elements and read out by using Hall-effect sensors to detect magnetization. Value of each bit represented by polarity of magnetization. Retains data for indefinite time or until data rewritten. Speed of Hall-effect sensors in MHRAM results in readout times of about 100 nanoseconds. Other characteristics include high immunity to ionizing radiation and storage densities of order 10⁶ bits/cm² or more.

  5. A novel image encryption algorithm using chaos and reversible cellular automata

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luan, Dapeng

    2013-11-01

    In this paper, a novel image encryption scheme based on reversible cellular automata (RCA) combined with chaos is proposed. The algorithm uses an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata. Each pixel of the image is split into units of 4 bits, and a pseudorandom key stream generated by the intertwining logistic map permutes these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve diffusion at the bit level; only the higher 4 bits of each pixel are considered, because the higher 4 bits carry almost all the information of an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. The algorithm belongs to the class of symmetric systems.
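The claim that the higher 4 bits carry almost all of the image information is easy to check numerically; a small Python sketch (the names are illustrative):

```python
def split_units(pixel):
    # Split one 8-bit pixel into its two 4-bit units.
    return pixel >> 4, pixel & 0x0F

def approx_from_high(pixel):
    # Reconstruct using only the higher 4 bits.
    return (pixel >> 4) << 4
```

Dropping the lower nibble never changes a pixel by more than 15 out of 255 grey levels (about 6% of the range), which is why diffusing the higher nibble alone already scrambles the visually significant content.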

  6. Heat Generation During Bone Drilling: A Comparison Between Industrial and Orthopaedic Drill Bits.

    PubMed

    Hein, Christopher; Inceoglu, Serkan; Juma, David; Zuckerman, Lee

    2017-02-01

    Cortical bone drilling for preparation of screw placement is common in multiple surgical fields. The heat generated while drilling may reach thresholds high enough to cause osteonecrosis, which can compromise implant stability. Orthopaedic drill bits are several orders of magnitude more expensive than their similarly sized, publicly available industrial counterparts. We hypothesize that an industrial bit will generate less heat during drilling, and that the bits will not generate more heat after multiple cortical passes. We compared four 4.0 mm orthopaedic drill bits and one 3.97 mm industrial drill bit. Three of each bit type were drilled into porcine femoral cortices 20 times. The temperature of the bone was measured with thermocouple transducers. The heat generated during the first 5 drill cycles for each bit was compared to the last 5 cycles. These data were analyzed with analysis of covariance. The industrial drill bit generated the smallest mean increase in temperature (2.8 ± 0.29°C; P < 0.0001). No significant difference was identified when comparing the first 5 cortices drilled to the last 5 cortices drilled for each bit: Bosch (P = 0.73), Emerge (P = 0.09), Smith & Nephew (P = 0.08), Stryker (P = 0.086), and Synthes (P = 0.16). The industrial bit generated less heat during drilling than its orthopaedic counterparts, and the bits maintained their performance after 20 drill cycles. Consideration should be given by manufacturers to design differences that may contribute to a more efficient cutting bit. Further investigation into the reuse of these drill bits may be warranted, as our data suggest their efficiency is maintained after multiple uses.

  7. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    NASA Astrophysics Data System (ADS)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra prediction process of the H.264 video coding standard is used to code the first frame, i.e. the intra frame of a video, and obtains better coding efficiency than previous video coding standards. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frame coding uses the existing Rate Distortion Optimization (RDO) method, which increases computational complexity, increases bit rate, and reduces picture quality, making it difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in the H.264 standard using fast mode decision intra prediction algorithms based on different techniques resulted in increased bit rate and degraded picture quality (PSNR) at different quantization parameters. Many of these earlier approaches achieved only a reduction in computational complexity or encoding time, at the cost of increased bit rate and loss of picture quality. In order to avoid the increase in bit rate and the loss of picture quality, this paper develops a better approach: a Gaussian pulse for intra frame coding with the diagonal down left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, the Gaussian pulse is multiplied with each 4x4 block of frequency domain coefficients of the 4x4 sub macro blocks of the current frame before the quantization process. Multiplying each 4x4 integer transformed coefficient block by the Gaussian pulse scales the information of the coefficients in a reversible manner, so the resulting signal is abstracted.
Frequency samples are abstracted in a known and controllable manner without intermixing of coefficients, which prevents the picture from being badly degraded at higher values of the quantization parameter. The proposed work was implemented using MATLAB and the JM 18.6 reference software, and measures the PSNR, bit rate and compression of the intra frame of YUV video sequences at QCIF resolution under different quantization parameter values, with the Gaussian value applied to the diagonal down left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with the previous algorithm of Tian et al. The proposed algorithm reduces the bit rate by 30.98% on average and maintains consistent picture quality for QCIF sequences compared to the Tian et al method.
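The reversible coefficient scaling can be sketched as follows; an illustrative Python version (the mask shape, `sigma`, and function names are assumptions, not the exact pulse used in the paper):

```python
import math

def gaussian_mask(size=4, sigma=1.5):
    # A 2-D Gaussian sampled over a size x size coefficient block;
    # every weight is strictly positive, so the scaling is invertible.
    c = (size - 1) / 2
    return [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
             for j in range(size)] for i in range(size)]

def scale_block(block, mask):
    # Element-wise multiplication before quantization.
    return [[block[i][j] * mask[i][j] for j in range(len(mask))]
            for i in range(len(mask))]

def unscale_block(block, mask):
    # Element-wise division restores the coefficients (up to
    # floating-point error), with no intermixing of coefficients.
    return [[block[i][j] / mask[i][j] for j in range(len(mask))]
            for i in range(len(mask))]
```

Because each coefficient is scaled independently, the transform stays reversible and no coefficient leaks into its neighbors.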

  8. Modulation and synchronization technique for MF-TDMA system

    NASA Technical Reports Server (NTRS)

    Faris, Faris; Inukai, Thomas; Sayegh, Soheil

    1994-01-01

    This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.

  9. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites or infrastructure inspection by unmanned airborne vehicles, will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
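The MSB/LSB mapping described above is a simple byte split; a minimal Python sketch on flat pixel lists (the function names are hypothetical):

```python
def split_msb_lsb(img16):
    # Map one 16-bit image into two 8-bit images: most significant
    # bytes (MSB image) and least significant bytes (LSB image).
    msb = [(p >> 8) & 0xFF for p in img16]
    lsb = [p & 0xFF for p in img16]
    return msb, lsb

def merge_msb_lsb(msb, lsb):
    # Inverse mapping: recombine the two 8-bit images losslessly.
    return [(m << 8) | l for m, l in zip(msb, lsb)]
```

The mapping itself is lossless; the rate-distortion trade-off comes entirely from how aggressively each 8-bit plane is then compressed by its codec.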

  10. Topology-changing shape optimization with the genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lamberson, Steven E., Jr.

    The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem, while improving the direct physical relevance of the results. It changes the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem proves the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which is something traditional methods cannot do. The proposed approach has a significantly higher computational burden, on the order of 100 times larger than SIMP, although it is able to offset this with parallel computing.

  11. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency while maintaining equivalent or better power efficiency compared to Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined; these are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
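The quoted spectral efficiency follows directly from the modulation order and the ensemble code rate:

```python
import math

def spectral_efficiency(mod_order, code_rate):
    # Information bits per symbol = log2(M) * code rate; with ideal
    # Nyquist signalling this is also the efficiency in bps/Hz.
    return math.log2(mod_order) * code_rate
```

For 8PSK at ensemble rate 8/9 this gives log2(8) * 8/9 = 8/3 ≈ 2.67 information bps/Hz, matching the figure in the abstract.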

  12. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.
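The confusion and diffusion stages described above can be sketched abstractly; an illustrative Python version in which `perm` stands in for the control matrix and `keystream` for the extracted random bit stream (both names are assumptions, not the paper's notation):

```python
def confuse(pixels, perm):
    # Confusion: reposition pixels according to a control permutation
    # derived from the physical random bits.
    return [pixels[i] for i in perm]

def deconfuse(pixels, perm):
    # Inverse permutation for decryption.
    out = [0] * len(pixels)
    for dst, src in enumerate(perm):
        out[src] = pixels[dst]
    return out

def diffuse(pixels, keystream):
    # Diffusion: XOR each pixel value with a keystream byte; XOR is
    # its own inverse, so the same call decrypts.
    return [p ^ k for p, k in zip(pixels, keystream)]
```

Confusion changes pixel positions, diffusion changes pixel values; both are keyed by the same physical random bit source, which is what the receiver's synchronized laser must reproduce.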

  13. Experimental bit commitment based on quantum communication and special relativity.

    PubMed

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H

    2013-11-01

    Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrarily large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented.

  14. On VLSI Design of Rank-Order Filtering using DCRAM Architecture

    PubMed Central

    Lin, Meng-Chun; Dung, Lan-Rong

    2009-01-01

    This paper addresses the VLSI design of rank-order filtering (ROF) with a maskable memory for real-time speech and image processing applications. Based on a generic bit-sliced ROF algorithm, the proposed design uses a specially defined memory, called the dual-cell random-access memory (DCRAM), to realize the major operations of ROF: threshold decomposition and polarization. Using this memory-oriented architecture, the proposed ROF processor benefits from high flexibility, low cost, and high speed. The DCRAM can perform bit-sliced reads, partial writes, and pipelined processing. The bit-sliced read and partial write are driven by maskable registers. With recursive execution of the bit-sliced read and partial write, the DCRAM can realize ROF effectively in terms of cost and speed. The proposed design has been implemented using TSMC 0.18 μm 1P6M technology. As shown in the physical implementation results, the core size is 356.1 × 427.7 μm² and the VLSI implementation of ROF can operate at 256 MHz with a 1.8 V supply. PMID:19865599
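
    The threshold-decomposition idea behind bit-sliced ROF can be sketched in software: the output is decided one bit plane at a time, MSB first, by thresholding ("polarizing") every sample in the window against the candidate code. This is an illustrative sketch of the generic algorithm, not the DCRAM hardware itself:

```python
def rank_order(window, rank, bits=8):
    """Select the rank-th smallest value (rank is 1-indexed) by
    deciding one bit plane at a time, MSB first."""
    n = len(window)
    result = 0
    for b in range(bits - 1, -1, -1):
        candidate = result | (1 << b)
        # Polarization: threshold every sample against the candidate.
        at_or_above = sum(1 for x in window if x >= candidate)
        # The rank-th smallest is >= candidate iff enough samples are.
        if at_or_above >= n - rank + 1:
            result = candidate
    return result

window = [3, 7, 5, 1, 9]
print(rank_order(window, rank=3, bits=4))  # median of the window → 5
```

    Each bit-plane decision needs only thresholding and counting, which is why the operation maps naturally onto a maskable bit-sliced memory.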

  15. Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation

    NASA Technical Reports Server (NTRS)

    Swift, G.

    2002-01-01

    JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128 Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors up to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.

  16. Performance comparison between 8 and 14 bit-depth imaging in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.

    2011-03-01

    We compare true 8 and 14 bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin are artificially bit-reduced during post-processing. In agreement with previously reported results, we observe that in our system the real-world 8-bit image shows more artifacts than the image obtained by numerically truncating the raw 14-bit image data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ; the reduced dynamic range can also manifest itself as an increase in image artifacts caused by strong Fresnel reflections.
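
    The numerical truncation used in the post-processing comparison amounts to keeping the top 8 bits of each 14-bit sample; a minimal sketch (note the truncated data still benefit from the 14-bit board's lower noise floor, which is exactly why the real 8-bit acquisition looks worse):

```python
import numpy as np

def truncate_bit_depth(samples, from_bits=14, to_bits=8):
    """Keep only the most significant `to_bits` of each sample."""
    return samples >> (from_bits - to_bits)

raw14 = np.array([16383, 8192, 100, 63], dtype=np.int32)  # 14-bit codes
print(truncate_bit_depth(raw14))  # → [255 128   1   0]
```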

  17. Numerical simulation study on the optimization design of the crown shape of PDC drill bit.

    PubMed

    Ju, Pei; Wang, Zhenquan; Zhai, Yinghu; Su, Dongyu; Zhang, Yunchi; Cao, Zhaohui

    The design of the bit crown is an important part of polycrystalline diamond compact (PDC) bit design. Although predecessors have done a great deal of research on the design principles of the PDC bit crown, the dependence of rock-breaking energy consumption on crown shape has not been studied systematically, and the mathematical models used in design are over-simplified. In order to quantitatively analyze the relation between rock-breaking energy consumption and bit crown shape, this paper proposes taking "per-revolution specific rock-breaking work" as the objective function, and analyzes the relationship between rock properties, inner cone angle, outer cone arc radius, and per-revolution specific rock-breaking work by means of an explicit dynamic finite element method. Results show that the relationship between per-revolution specific rock-breaking work and the radius of gyration is similar for rocks with different properties, and that decreasing the inner cone angle or the outer cone arc radius is beneficial for reducing rock-breaking energy consumption. Hydraulic structure and processing technology should also be considered in the optimization design of the PDC bit crown.

  18. Blue phase-change recording at high data densities and data rates

    NASA Astrophysics Data System (ADS)

    Dekker, Martijn K.; Pfeffer, Nicola; Kuijper, Maarten; Ubbens, Igolt P.; Coene, Wim M. J.; Meinders, E. R.; Borg, Herman J.

    2000-09-01

    For the DVR system, which uses a blue laser diode (wavelength 405 nm), we developed 12 cm discs with a total capacity of 22.4 GB. The land/groove track pitch is 0.30 micrometers and the channel bit length is 87 nm. The DVR system uses a d = 1 code. These phase-change discs can be recorded at constant angular velocity at a maximum of 50 Mbps user data rate (including all format and ECC overhead) and meet the system specifications. Fast-growth-determined phase-change materials (FGM) are used for the active layer. In order to apply these FGM discs at small track pitch, special attention has been paid to the issue of thermal cross-write. Finally, routes towards higher capacities, such as advanced bit detection schemes and the use of a smaller track pitch, are considered. These show the near-term feasibility of at least 26.0 GB on a disc for the DVR system with a blue laser diode.

  19. Hardware multiplier processor

    DOEpatents

    Pierce, Paul E.

    1986-01-01

    A hardware processor is disclosed which, in the described embodiment, is a memory-mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it, with or without addition or subtraction with a previously stored number. It can also, on a single read command, automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high-level address decoder determines when the multiplier processor is being addressed, and first and second low-level address decoders generate control signals. In addition, certain low-order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.

  20. Hardware multiplier processor

    DOEpatents

    Pierce, P.E.

    A hardware processor is disclosed which, in the described embodiment, is a memory-mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it, with or without addition or subtraction with a previously stored number. It can also, on a single read command, automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high-level address decoder determines when the multiplier processor is being addressed, and first and second low-level address decoders generate control signals. In addition, certain low-order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.

  1. Acquisition and Retaining Granular Samples via a Rotating Coring Bit

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart

    2013-01-01

    This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated: a granular sample entering the bit while it is spinning adheres to, and compacts itself against, the internal wall of the bit. The bit can be specially designed to increase the effectiveness of regolith capture while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during rotation. The bit can be designed with an internal flute that directs the regolith upward inside the bit; the teeth and flute can be combined in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth. For that implementation, the bit can be designed such that when turning one way, the teeth guide the regolith out of the bit, and when turning in the opposite direction, the teeth guide the regolith inward into the bit's internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining a granular sample; acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into soil contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit. To prove this point, the drill was turned horizontally, and the acquired soil remained inside the bit.
    The basic theory behind retaining unconsolidated mass acquired by the centrifugal forces of the bit is that, in order for the sample to stay inside the bit, the frictional force on it must be greater than its weight. The bit can be designed with an internal sleeve to serve as a container for granular samples. This tube-shaped component can be extracted upon completion of the sampling, and the bottom can be capped by placing the bit onto a cork-like component. Then, upon removal of the internal tube, the top section can be sealed. The novel features of this device are:
    • A mechanism for acquiring and retaining granular samples using a coring bit without a closed door.
    • An acquisition bit with internal structure, such as a waffle pattern for compartmentalizing or a helical internal flute, to propel the sample inside the bit and help in acquiring and retaining granular samples.
    • A bit with an internal spiral into which the various particles wedge.
    • A design that provides a method of testing frictional properties of the granular samples and potentially segregating particles based on size and density. A controlled acceleration or deceleration may be used to drop the least-frictional particles or to eventually shear the unconsolidated material near the bit center.
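
    The retention condition stated above (friction force greater than sample weight) fixes a minimum spin rate, since the sample mass cancels out. A back-of-envelope sketch, with an assumed friction coefficient and bit radius that are illustrative rather than the authors' values:

```python
import math

def min_rpm_to_retain(mu, r, g=9.81):
    """Wall friction (mu * m * omega^2 * r) must exceed the weight
    (m * g); mass cancels, giving omega >= sqrt(g / (mu * r))."""
    omega = math.sqrt(g / (mu * r))        # minimum angular rate, rad/s
    return omega * 60.0 / (2.0 * math.pi)  # converted to RPM

# Assumed illustrative values: friction coefficient 0.5,
# internal bit radius 1 cm.
print(round(min_rpm_to_retain(mu=0.5, r=0.01)))  # → 423
```

    Under these assumed numbers the threshold is a few hundred RPM, the same order as the 600 to 700 RPM used in the demonstration above.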

  2. High resolution A/D conversion based on piecewise conversion at lower resolution

    DOEpatents

    Terwilliger, Steve [Albuquerque, NM

    2012-06-05

    Piecewise conversion of an analog input signal is performed utilizing a plurality of relatively lower bit resolution A/D conversions. The results of this piecewise conversion are interpreted to achieve a relatively higher bit resolution A/D conversion without sampling frequency penalty.
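
    A software sketch of the idea, assuming an ideal subranging-style arrangement in which two 4-bit passes are interpreted as one 8-bit result; the patent's actual circuit details are not reproduced here:

```python
def quantize(v, bits, vmax):
    """Ideal ADC: map v in [0, vmax) to an unsigned code of the given width."""
    levels = 1 << bits
    return min(int(v / vmax * levels), levels - 1)

def piecewise_convert(v, coarse_bits, fine_bits, vmax):
    """Interpret two lower-resolution conversions as one higher-resolution
    result: a coarse pass picks the subrange, a fine pass converts the
    residue within it."""
    coarse = quantize(v, coarse_bits, vmax)
    step = vmax / (1 << coarse_bits)   # width of one coarse subrange
    residue = v - coarse * step        # what the coarse pass missed
    fine = quantize(residue, fine_bits, step)
    return (coarse << fine_bits) | fine  # concatenate the two codes

# An 8-bit result from two 4-bit conversions matches a direct 8-bit ADC.
print(piecewise_convert(0.6181, 4, 4, 1.0))  # → 158
print(quantize(0.6181, 8, 1.0))              # → 158
```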

  3. Scalable modulation technology and the tradeoff of reach, spectral efficiency, and complexity

    NASA Astrophysics Data System (ADS)

    Bosco, Gabriella; Pilori, Dario; Poggiolini, Pierluigi; Carena, Andrea; Guiomar, Fernando

    2017-01-01

    Bandwidth and capacity demand in metro, regional, and long-haul networks is increasing at several tens of percent per year, driven by video streaming, cloud computing, social media, and mobile applications. To sustain this traffic growth, an upgrade of the widely deployed 100-Gbit/s long-haul optical systems, based on the polarization-multiplexed quadrature phase-shift keying (PM-QPSK) modulation format associated with coherent detection and digital signal processing (DSP), is mandatory. In fact, optical transport techniques enabling per-channel bit rates beyond 100 Gbit/s have recently been the object of intensive R&D activities aimed at both improving the spectral efficiency and lowering the cost per bit in fiber transmission systems. In this invited contribution, we review the different available options to scale the per-channel bit rate to 400 Gbit/s and beyond, i.e., symbol-rate increase, use of higher-order quadrature amplitude modulation (QAM) formats, and use of super-channels with DSP-enabled spectral shaping and advanced multiplexing technologies. In this analysis, trade-offs of system reach, spectral efficiency, and transceiver complexity are addressed. Besides scalability, next-generation optical networks will require a high degree of flexibility in the transponders, which should be able to dynamically adapt the transmission rate and bandwidth occupancy to the light-path characteristics. In order to increase the flexibility of these transponders (often referred to as "flexponders"), several advanced modulation techniques have recently been proposed, among which are sub-carrier multiplexing, hybrid formats (over time, frequency, and polarization), and constellation shaping. We review these techniques, highlighting their limits and potential in terms of performance, complexity, and flexibility.

  4. Hash Bit Selection for Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Junfeng He; Shih-Fu Chang

    2017-11-01

    To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt bit reliability and bit complementarity as the selection criteria, which can be carefully tailored for hashing performance in different tasks. The bit selection solution is then discovered by finding the best trade-off between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationships among hash bits to approximate the high-order independence property, and formulate the problem as an efficient quadratic program that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hashing techniques, where our bit selection framework achieves superior performance over both naive selection methods and state-of-the-art hashing algorithms, with significant relative accuracy gains ranging from 10% to 50%.
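
    A toy sketch of the selection idea: score candidate bits by balance (a simple proxy for reliability) and penalize redundancy with bits already chosen (complementarity). This greedy stand-in only illustrates the reliability-plus-complementarity criterion; the paper's actual formulation is a quadratic program over a vertex- and edge-weighted graph, not this heuristic:

```python
import numpy as np

def select_bits(codes, k):
    """Greedily pick k of m candidate bits: favor balanced (reliable)
    bits, penalize similarity to bits already chosen."""
    n, m = codes.shape                          # n samples, m candidate ±1 bits
    balance = 1.0 - np.abs(codes.mean(axis=0))  # 1.0 = perfectly balanced bit
    corr = np.abs(codes.T @ codes) / n          # pairwise bit similarity
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(m):
            if j in chosen:
                continue
            penalty = max((corr[j, c] for c in chosen), default=0.0)
            score = balance[j] - penalty
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
codes = np.sign(rng.standard_normal((200, 16)))  # toy ±1 hash codes
print(select_bits(codes, 4))                     # indices of 4 selected bits
```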

  5. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain the bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay time set to a multiple of the bit cycle to remove the influence of the NH code. Secondly, maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. The algorithm removes the effect of the BeiDou NH code effectively and weakens the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak-signal bit synchronization with large frequency deviation.
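
    The core trick (a differential delay equal to a whole bit period cancels a constant frequency deviation) can be sketched on a toy BPSK stream. This illustrative code only shows the bit-edge search; it is not the authors' full maximum-likelihood detector and omits the NH code and noise:

```python
import numpy as np

def bit_edge_offset(samples, period, delay):
    """Differential coherence: multiplying by the conjugate of a copy
    delayed by a whole bit period turns a constant frequency offset
    into a constant phase; coherent block sums then peak where the
    blocks align with the true bit edges."""
    diff = samples[delay:] * np.conj(samples[:-delay])
    metrics = []
    for offset in range(period):
        seg = diff[offset:]
        usable = (len(seg) // period) * period
        blocks = seg[:usable].reshape(-1, period)
        metrics.append(np.abs(blocks.sum(axis=1)).sum())
    return int(np.argmax(metrics))

# Toy BPSK stream: 8 bits, 4 samples per bit, a carrier frequency
# offset, and one sample dropped so the true bit edge sits at offset 3.
bits = np.repeat([1, -1, -1, 1, -1, 1, 1, -1], 4).astype(complex)
n = np.arange(bits.size)
samples = (bits * np.exp(2j * np.pi * 0.01 * n))[1:]
print(bit_edge_offset(samples, period=4, delay=4))  # → 3
```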

  6. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low-bit-rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, so a given encoding contains all lower-rate encodings produced by the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
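
    The embedding property (any prefix of the stream decodes to a valid lower-rate approximation) comes from emitting coefficient bits plane by plane, most significant plane first. A minimal sketch of that ordering only; the zerotree significance coding and the KLT spectral step are omitted:

```python
def encode_planes(mags, top_bit):
    """Emit magnitude bits plane by plane, MSB plane first."""
    stream = []
    for b in range(top_bit, -1, -1):
        stream.extend((m >> b) & 1 for m in mags)
    return stream

def decode_planes(stream, n, top_bit):
    """Rebuild magnitudes; bits missing from a truncated stream act as 0."""
    mags = [0] * n
    it = iter(stream)
    for b in range(top_bit, -1, -1):
        for i in range(n):
            mags[i] |= next(it, 0) << b
    return mags

coeffs = [13, 2, 7, 0]                  # toy coefficient magnitudes
stream = encode_planes(coeffs, top_bit=3)
print(decode_planes(stream, 4, 3))      # full stream  → [13, 2, 7, 0]
print(decode_planes(stream[:8], 4, 3))  # 8-bit prefix → [12, 0, 4, 0]
```

    Truncating the stream anywhere simply zeroes the remaining low-order bits, which is what makes exact rate control possible.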

  7. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth-efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency while maintaining equivalent or better power efficiency compared to BPSK. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high-data-rate implementations.

  8. APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study

    NASA Astrophysics Data System (ADS)

    Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak

    2017-04-01

    In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits, and hence may still carry useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in a transmitted packet, in which two received copies are XORed to obtain the locations of erroneous bits; the packet is then corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC cannot correct double-bit errors that occur in the same bit location in both copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC on the Gilbert two-state channel model has been studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
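
    The PC step described above can be sketched directly: XOR the two received copies to locate the disagreeing bit positions, then try inverting subsets of those bits until the packet passes an integrity check. CRC-32 is an assumed stand-in for whatever error-detection code the scheme uses, and the double-error limitation noted above shows up as the search failing when both copies are wrong in the same position:

```python
import itertools
import zlib

def pc_correct(copy1, copy2, expected_crc):
    """Packet combining sketch: XOR the copies to find suspect bit
    positions, then search inversions of those bits in copy1 for a
    packet that passes the integrity check."""
    suspects = [(i, bit)
                for i, (a, b) in enumerate(zip(copy1, copy2))
                for bit in range(8) if ((a ^ b) >> bit) & 1]
    for r in range(len(suspects) + 1):
        for combo in itertools.combinations(suspects, r):
            trial = bytearray(copy1)
            for i, bit in combo:
                trial[i] ^= 1 << bit
            if zlib.crc32(bytes(trial)) == expected_crc:
                return bytes(trial)
    return None  # e.g. double errors in the same position defeat PC

packet = b"hello world"
crc = zlib.crc32(packet)
c1 = bytearray(packet); c1[2] ^= 0x08   # one bit error in copy 1
c2 = bytearray(packet); c2[7] ^= 0x01   # a different error in copy 2
assert pc_correct(bytes(c1), bytes(c2), crc) == packet
```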

  9. Ka-Band, Multi-Gigabit-Per-Second Transceiver

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Wintucky, Edwin G.; Smith, Francis J.; Harris, Johnny M.; Landon, David G.; Haddadin, Osama S.; McIntire, William K.; Sun, June Y.

    2011-01-01

    A document discusses a multi-Gigabit-per-second, Ka-band transceiver with a software-defined modem (SDM) capable of digitally encoding/decoding data and compensating for linear and nonlinear distortions in the end-to-end system, including the traveling-wave tube amplifier (TWTA). This innovation can increase data rates of space-to-ground communication links, and has potential application to NASA's future space-based Earth observation system. The SDM incorporates an extended version of the industry-standard DVB-S2 and an LDPC rate-9/10 FEC codec. The SDM supports a suite of waveforms, including QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK, and 128-QAM. The Ka-band TWTA delivers an output power on the order of 200 W with efficiency greater than 60% and a passband of at least 3 GHz. The modem and the TWTA together enable a data rate of 20 Gbps with a low bit error rate (BER). The payload data rates for spacecraft in NASA's integrated space communications network can be increased by an order of magnitude (>10×) over the current state of practice. This innovation enhances the data rate by using bandwidth-efficient modulation techniques, which transmit a higher number of bits per Hertz of bandwidth than the currently used quadrature phase shift keying (QPSK) waveforms.

  10. Deterministic switching of a magnetoelastic single-domain nano-ellipse using bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Cheng-Yen; Sepulveda, Abdon; Keller, Scott

    2016-03-21

    In this paper, a fully coupled analytical model between elastodynamics and micromagnetics is used to study the switching energies using voltage-induced mechanical bending of a magnetoelastic bit. The bit consists of a single-domain magnetoelastic nano-ellipse deposited on a piezoelectric thin film (500 nm) attached to a thick substrate (0.5 mm) with patterned electrodes underneath the nano-dot. A voltage applied to the electrodes produces out-of-plane deformation, with bending moments induced in the magnetoelastic bit modifying the magnetic anisotropy. To minimize the energy, two design stages are used. In the first stage, the geometry and bias field (H_b) of the bit are optimized to minimize the strain energy required to rotate between two stable states. In the second stage, the bit's geometry is fixed, and the electrode position and control mechanism are optimized. The electrical energy input is about 200 aJ, which is approximately two orders of magnitude lower than spin-transfer torque approaches.

  11. Design and testing of coring bits on drilling lunar rock simulant

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan

    2017-02-01

    Coring bits are widely utilized in the sampling of celestial bodies, and their drilling behavior directly affects the sampling results and drilling security. This paper introduces a lunar regolith coring bit (LRCB), a key component of the sampling tool for breaking lunar rock during the lunar soil sampling process. We establish the interaction model between the drill bit and rock at a small cutting depth, and determine the two parameters of the LRCB (forward and outward rake angles) that most influence the drilling loads. We perform the parameter screening of the LRCB with the aim of minimizing the weight on bit (WOB). We verify the drilling load performance of the LRCB after optimization: the higher the penetration per revolution (PPR), the larger the drilling loads obtained. Besides, we perform lunar soil drilling simulations to estimate the chip-conveying and sample-coring efficiency of the LRCB. The simulation and test results are basically consistent on coring efficiency, and the chip removal efficiency of the LRCB is slightly lower than that of the HIT-H bit in simulation. This work proposes a method for the design of coring bits in subsequent extraterrestrial explorations.

  12. Compensation for first-order polarization-mode dispersion by using a novel tunable compensator

    NASA Astrophysics Data System (ADS)

    Qiu, Feng; Ning, Tigang; Pei, Shanshan; Xing, Yujun; Jian, Shuisheng

    2005-01-01

    Polarization-related impairments have become a critical issue for high-data-rate optical systems, particularly polarization-mode dispersion (PMD). Consequently, compensation of PMD, especially first-order PMD, is necessary to maintain adequate performance in long-haul systems at bit rates of 10 Gb/s or beyond. In this paper, we demonstrate automatic and tunable compensation of first-order polarization-mode dispersion. Furthermore, we report a statistical assessment of this tunable compensator at 10 Gbit/s. Experimental results, including bit error rate measurements, compare well with theory, demonstrating the compensator's efficiency at 10 Gbit/s. The first-order PMD was at most 274 ps before compensation and below 7 ps after compensation.

  13. Single Piezo-Actuator Rotary-Hammering Drill

    NASA Technical Reports Server (NTRS)

    Sherrit, Stewart; Bao, Xiaoqi; Badescu, Mircea; Bar-Cohen, Yoseph

    2011-01-01

    This innovation comprises a compact drill that uses a low axial preload, applied via vibrations, that fractures the rock under the bit kerf and rotates the bit to remove the powdered cuttings while augmenting the rock fracture via shear forces. The vibrations fluidize the powdered cuttings inside the flutes around the bit, reducing the friction with the auger surface. These combined actions reduce the consumed power and the heating of the drilled medium, helping to preserve the pristine content of the produced samples. The drill consists of an actuator that simultaneously impacts and rotates the bit by applying force and torque via a single piezoelectric stack actuator, without the need for a gearbox or lever mechanism. This reduces the development/fabrication cost and complexity. The piezoelectric actuator impacts the surface and generates shear forces, fragmenting the drilled medium directly under the bit kerf by exceeding the tensile and/or shear strength of the struck surface. The percussive impact action of the actuator leads to penetration of the medium by producing a zone of finely crushed rock directly underneath the struck location. This fracturing process is highly enhanced by the shear forces from the rotation and twisting action. To remove the formed cuttings, the bit is constructed with an auger on its internal or external surface. One of the problems with pure hammering is that, as the teeth become embedded in the sample, the drilling efficiency drops unless the teeth are moved away from the specific footprint location. By rotating the teeth, they are moved to areas that were not fragmented, and thus the rock fracturing is enhanced via shear forces. The shear motion creates a ripping or chiseling action that produces larger fragments, increasing the drilling efficiency and reducing the required power. The actuator of the drill consists of a piezoelectric stack that vibrates the horn.
The stack is compressed by a bolt between the backing and the horn in order to prevent it from being subjected to tensile stress that would cause it to fail. The backing transfers the generated mechanical vibrations toward the horn. In order to cause rotation, the horn is configured asymmetrically with helical segments; upon impacting the bit, it introduces a longitudinal force along the axis of the actuator and a tangential force causing a twisting action that rotates the bit. The longitudinal component of the stack's vibrations introduces percussive impulses between the bit and the rock, fracturing the rock when its ultimate strain is exceeded under the bit.

  14. Discriminating Real from Make-Believe on Television: A Developmental Study.

    ERIC Educational Resources Information Center

    Condry, John; Freund, Susan

    In order to determine when children distinguish the real from the fictional in television programing, 170 adults and 157 children from 2nd, 4th, and 6th grades were shown 40 "bits" of television, each 5 seconds in length and representative of a wide range of program types. Subjects were asked to classify as real or make-believe 18 "factual" bits,…

  15. HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Radtke; David Glowka; Man Mohan Rai

    2008-03-31

    Commercial introduction of microhole technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight, delivering efficient power to a special high-RPM drill bit to ensure both a high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill was developed to deliver power efficiently, and the more durable drill bit employs high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA AMES Research Center, with Technology International, Inc. (TII) to deliver a downhole, hydraulically driven power unit matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International Inc., Houston, Texas, to develop a higher-power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher-power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable drill bit employs high-temperature TSP (thermally stable) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas.
At the same time, microhole coiled tube drilling offers the opportunity to dramatically cut producers' exploration risk to a level comparable to that of drilling development wells. Together, such efforts hold great promise for economically recovering a sizeable portion of the estimated remaining shallow (less than 5,000 feet subsurface) oil resource in the United States. The DOE estimates this U.S. targeted shallow resource at 218 billion barrels. Furthermore, the smaller 'footprint' of the lightweight rigs utilized for microhole drilling and the accompanying reduced drilling waste disposal volumes offer the bonus of added environmental benefits. DOE analysis shows that microhole technology has the potential to cut exploratory drilling costs by at least a third and to slash development drilling costs in half.

  16. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

    In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is proposed here for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results.
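
    The TLS step itself is simple to sketch: out of the L available lasers, only those with the largest instantaneous channel gains are driven, which is exactly the "largest order statistics" the code design exploits. A minimal illustration (helper names are invented; the gamma-gamma fading and pointing-error statistics of the paper are not modelled):

```python
def tls_select(gains, n_tx=1):
    """Return the indices of the n_tx lasers with the largest channel
    gains -- the largest order statistics that TLS exploits to raise
    the diversity order."""
    ranked = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    return ranked[:n_tx]

# Out of L = 3 lasers, the one seeing the best channel is chosen:
assert tls_select([0.2, 0.9, 0.5]) == [1]
assert tls_select([3.0, 1.0, 2.0], n_tx=2) == [0, 2]
```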

  17. Effect of monitor display on detection of approximal caries lesions in digital radiographs.

    PubMed

    Isidor, S; Faaborg-Andersen, M; Hintze, H; Kirkevang, L-L; Frydenberg, M; Haiter-Neto, F; Wenzel, A

    2009-12-01

    The aim was to compare the accuracy of five flat panel monitors for detection of approximal caries lesions. Five flat panel monitors, Mermaid Ventura (15 inch, colour flat panel, 1024 x 768, 32 bit, analogue), Olórin VistaLine (19 inch, colour, 1280 x 1024, 32 bit, digital), Samsung SyncMaster 203B (20 inch, colour, 1024 x 768, 32 bit, analogue), Totoku ME251i (21 inch, greyscale, 1400 x 1024, 32 bit, digital) and Eizo FlexScan MX190 (19 inch, colour, 1280 x 1024, 32 bit, digital), were assessed. 160 approximal surfaces of human teeth were examined with a storage phosphor plate system (Digora FMX, Soredex) and assessed by seven observers for the presence of caries lesions. Microscopy of the teeth served as validation for the presence/absence of a lesion. The sensitivities varied between observers (range 7-25%) but the variation between the monitors was not large. The Samsung monitor obtained a significantly higher sensitivity than the Mermaid and Olórin monitors (P<0.02) and a lower specificity than the Eizo and Totoku monitors (P<0.05). There were no significant differences between any other monitors. The percentage of correct scores was highest for the Eizo monitor and significantly higher than for the Mermaid and Olórin monitors (P<0.03). There was no clear relationship between the diagnostic accuracy and the resolution or price of the monitor. The Eizo monitor was associated with the overall highest percentage of correct scores. The standard analogue flat panel monitor, Samsung, had higher sensitivity and lower specificity than some of the other monitors, but did not differ in overall accuracy for detection of carious lesions.

  18. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High-throughput, low-latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the harmful effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristic function is derived from the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is used to find potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristic calculation, threshold detection and error bit correction described above are executed iteratively. Simulation results show that after only a few iterations the proposed multiuser detection method achieves satisfactory BER performance. Moreover, its BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
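
    As a toy illustration of the three-stage loop (matched-filter statistics, threshold flagging of suspect bits, trial correction), here is a sketch for synchronous CDMA with ±1 spreading codes. The paper's actual differential-characteristic function is replaced by a simple reliability measure and a least-squares residual check, so this is an illustrative analogue, not the published algorithm:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def residual(r, S, b):
    """Least-squares mismatch between the received chips r and S @ b."""
    Sb = matvec(S, b)
    return sum((ri - si) ** 2 for ri, si in zip(r, Sb))

def iterative_mud(r, S, n_iter=3):
    """Toy reliability-guided multiuser detector.
    1) matched-filter decision statistics; 2) flag low-reliability
    (potentially wrong) bits via a threshold; 3) flip a flagged bit
    only if that lowers the residual; repeat for n_iter rounds."""
    y = matvec(transpose(S), r)              # matched-filter outputs
    b = [1 if v >= 0 else -1 for v in y]
    for _ in range(n_iter):
        thr = sorted(abs(v) for v in y)[len(y) // 2]   # crude threshold
        for k in range(len(b)):
            if abs(y[k]) <= thr:             # candidate error bit
                trial = b[:]
                trial[k] = -trial[k]
                if residual(r, S, trial) < residual(r, S, b):
                    b = trial                # correction accepted
    return b
```

    With two orthogonal 4-chip codes and a noiseless channel, `iterative_mud([0, 2, 0, 2], [[1, 1], [1, -1], [1, 1], [1, -1]])` recovers the transmitted bits `[1, -1]`.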

  19. Binary CMOS image sensor with a gate/body-tied MOSFET-type photodetector for high-speed operation

    NASA Astrophysics Data System (ADS)

    Choi, Byoung-Soo; Jo, Sung-Hyun; Bae, Myunghan; Kim, Sang-Hwan; Shin, Jang-Kyoo

    2016-05-01

    In this paper, a binary complementary metal oxide semiconductor (CMOS) image sensor with a gate/body-tied (GBT) metal oxide semiconductor field effect transistor (MOSFET)-type photodetector is presented. The sensitivity of the GBT MOSFET-type photodetector, which was fabricated using the standard CMOS 0.35-μm process, is higher than that of the p-n junction photodiode, because the output signal of the photodetector is amplified by the MOSFET. A binary image sensor becomes more efficient when using this photodetector: lower power consumption and higher operation speed are possible compared to conventional image sensors using multi-bit analog-to-digital converters (ADCs). The frame rate of the proposed image sensor is over 2000 frames per second, which is higher than that of conventional CMOS image sensors. The output signal of an active pixel sensor is applied to a comparator and compared with a reference level; the 1-bit output of the binary process is determined by this comparison. To obtain a video signal, the 1-bit output data are stored in memory and read out by horizontal scanning. The proposed chip is composed of a GBT pixel array (144 × 100), a binary-process circuit, a vertical scanner, a horizontal scanner, and a readout circuit. The operation mode can be selected between binary mode and multi-bit mode.
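
    The binary readout amounts to a one-bit quantization of each pixel against the reference level. A minimal behavioural model (the comparator, memory and scanning circuitry are abstracted away):

```python
def binarize_frame(pixels, ref_level):
    """1-bit quantization as performed by the comparator stage: each
    active-pixel output is compared with a reference level and a single
    bit per pixel is stored."""
    return [[1 if p >= ref_level else 0 for p in row] for row in pixels]

# A 2x2 frame thresholded at mid-scale:
assert binarize_frame([[10, 200], [128, 50]], 128) == [[0, 1], [1, 0]]
```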

  20. An efficient CU partition algorithm for HEVC based on improved Sobel operator

    NASA Astrophysics Data System (ADS)

    Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Sun, Gang; Yang, Yunsheng

    2018-04-01

    As the latest video coding standard, High Efficiency Video Coding (HEVC) achieves over 50% bit-rate reduction with similar video quality compared with the previous standard H.264/AVC. However, the higher compression efficiency is attained at the cost of a significantly increased computational load. In order to reduce the complexity, this paper proposes a fast coding unit (CU) partition technique to speed up the process. To detect the edge features of each CU, a more accurate improved Sobel filtering is developed and performed. By analyzing the textural features of a CU, an early CU-splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs or not. Compared with the reference software HM16.7, experimental results indicate the proposed algorithm can reduce the encoding time by 44.09% on average, with a negligible bit-rate increase of 0.24% and quality losses below 0.03 dB. In addition, the proposed algorithm achieves a better trade-off between complexity and rate-distortion performance than other published works.
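
    The split decision can be sketched with a plain 3×3 Sobel operator standing in for the paper's improved version; the texture threshold below is an invented illustration value, not one from the paper:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_energy(block):
    """Mean gradient magnitude over the interior of a CU's luma block."""
    h, w = len(block), len(block[0])
    total, n = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * block[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * block[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            total += (gx * gx + gy * gy) ** 0.5
            n += 1
    return total / n

def should_split(block, threshold=8.0):
    """Early-termination rule: split the CU into four sub-CUs only if it
    contains enough edge/texture energy."""
    return sobel_energy(block) > threshold

flat = [[0] * 8 for _ in range(8)]                      # homogeneous CU
edge = [[0, 0, 0, 0, 100, 100, 100, 100]] * 8           # strong vertical edge
assert not should_split(flat)
assert should_split(edge)
```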

  1. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes per plane (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected.
This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
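
    The pass count quoted above follows directly: the MSB plane contributes a single clean-up pass and each of the remaining M − 1 planes contributes three passes. A quick check (arithmetic sketch, not JPEG 2000 code):

```python
def num_coding_passes(num_bitplanes):
    """Tier-1 pass count: one clean-up pass for the MSB plane plus three
    passes (significance, refinement, clean-up) for each remaining
    plane, i.e. 3M - 2 passes in total for M bit planes."""
    return 1 + 3 * (num_bitplanes - 1)

assert num_coding_passes(1) == 1     # MSB plane only
assert num_coding_passes(8) == 22    # 3*8 - 2
```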

  2. ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems

    PubMed Central

    González-Domínguez, Jorge; Expósito, Roberto R.

    2018-01-01

    Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search for interesting biclusters in binary datasets, which are very popular in fields such as genetics, marketing and text mining. It is based on the state-of-the-art sequential Java tool BiBit, which has been proven accurate by several studies, especially in scenarios that yield many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/. PMID:29608567
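
    The core BiBit idea that ParBiBit parallelizes (packing binary rows into bit patterns and seeding biclusters from pairwise ANDs) can be sketched in a few lines. This is a toy reconstruction of the pattern-grouping step, not ParBiBit's C++/MPI code:

```python
def rows_to_patterns(matrix):
    """Encode each binary row as an integer bit pattern (BiBit packs rows
    into machine words; Python ints stand in here)."""
    return [int("".join(map(str, row)), 2) for row in matrix]

def bibit_like_biclusters(matrix, min_cols=2):
    """AND every pair of rows to form a seed column-pattern, then group
    all rows that contain that pattern; patterns narrower than min_cols
    columns are discarded."""
    pats = rows_to_patterns(matrix)
    found = {}
    n = len(pats)
    for i in range(n):
        for j in range(i + 1, n):
            seed = pats[i] & pats[j]
            if bin(seed).count("1") < min_cols:
                continue
            rows = tuple(k for k in range(n) if pats[k] & seed == seed)
            found[seed] = rows
    return found

# Rows 0 and 1 share columns 0 and 1 (pattern 0b1100):
m = [[1, 1, 0, 1], [1, 1, 1, 0], [0, 0, 1, 1]]
assert bibit_like_biclusters(m) == {0b1100: (0, 1)}
```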

  3. ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems.

    PubMed

    González-Domínguez, Jorge; Expósito, Roberto R

    2018-01-01

    Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search for interesting biclusters in binary datasets, which are very popular in fields such as genetics, marketing and text mining. It is based on the state-of-the-art sequential Java tool BiBit, which has been proven accurate by several studies, especially in scenarios that yield many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/.

  4. Differential Fault Analysis on CLEFIA with 128, 192, and 256-Bit Keys

    NASA Astrophysics Data System (ADS)

    Takahashi, Junko; Fukunaga, Toshinori

    This paper describes a differential fault analysis (DFA) attack against CLEFIA. The proposed attack can be applied to CLEFIA with all supported keys: 128, 192, and 256-bit keys. DFA is a type of side-channel attack. This attack enables the recovery of secret keys by injecting faults into a secure device during its computation of the cryptographic algorithm and comparing the correct ciphertext with the faulty one. CLEFIA is a 128-bit blockcipher with 128, 192, and 256-bit keys developed by the Sony Corporation in 2007. CLEFIA employs a generalized Feistel structure with four data lines. We developed a new attack method that uses this characteristic structure of the CLEFIA algorithm. On the basis of the proposed attack, only 2 pairs of correct and faulty ciphertexts are needed to retrieve the 128-bit key, and 10.78 pairs on average are needed to retrieve the 192 and 256-bit keys. The proposed attack is more efficient than any previously reported. In order to verify the proposed attack and estimate the calculation time to recover the secret key, we conducted an attack simulation using a PC. The simulation results show that we can obtain each secret key within three minutes on average. This result shows that we can obtain the entire key within a feasible computational time.
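
    The four-line generalized Feistel structure that the attack exploits can be sketched as follows. This is only the data-path skeleton: CLEFIA's real F0/F1 functions (S-boxes and diffusion matrices) and key schedule are not reproduced, and the round functions passed in below are placeholders:

```python
def gfn4_round(state, rk0, rk1, f0, f1):
    """One round of a four-line (Type-2) generalized Feistel network:
    two lines are masked by F-function outputs, then the four lines are
    rotated left by one position."""
    x0, x1, x2, x3 = state
    return (x1 ^ f0(x0 ^ rk0), x2, x3 ^ f1(x2 ^ rk1), x0)

def gfn4_round_inv(state, rk0, rk1, f0, f1):
    """Inverse round, as a decryptor (or a DFA attacker peeling off the
    last rounds with key guesses) would apply it."""
    z0, z1, z2, z3 = state
    x0, x2 = z3, z1
    return (x0, z0 ^ f0(x0 ^ rk0), x2, z2 ^ f1(x2 ^ rk1))
```

    Because the masking is a plain XOR, the round is invertible for any byte functions f0/f1, which is what lets a fault injected on one line be traced back through the structure during the attack.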

  5. Acoustically assisted spin-transfer-torque switching of nanomagnets: An energy-efficient hybrid writing scheme for non-volatile memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biswas, Ayan K.; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha

    We show that the energy dissipated to write bits in spin-transfer-torque random access memory can be reduced by an order of magnitude if a surface acoustic wave (SAW) is launched underneath the magneto-tunneling junctions (MTJs) storing the bits. The SAW-generated strain rotates the magnetization of every MTJ's soft magnet from the easy towards the hard axis, whereupon passage of a small spin-polarized current through a target MTJ selectively switches it to the desired state with > 99.99% probability at room temperature, thereby writing the bit. The other MTJs return to their original states at the completion of the SAW cycle.

  6. Acceptable bit-rates for human face identification from CCTV imagery

    NASA Astrophysics Data System (ADS)

    Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker

    2013-01-01

    The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry, but they compromise the usefulness of the recorded imagery. In this context, usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) collection of representative video footage; 2) grouping of video scenes based on content attributes; 3) psychophysical investigations to identify key scenes, which are most affected by compression; 4) testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal 'average' bit-rates.

  7. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that is only now being addressed in the literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit-rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane or monochrome image; CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit-rate reduction.
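
    The quoted figures are mutually consistent, as a quick bit-budget check shows (pure arithmetic on the numbers stated above):

```python
def compression_ratio(original_bpp, coded_bpp):
    """CR is simply the ratio of original to coded bits per pixel."""
    return original_bpp / coded_bpp

assert compression_ratio(24, 0.48) == 50.0    # 24-bit colour at 0.48 bpp -> CR 50:1
assert round(compression_ratio(24, 0.30)) == 80   # CR 80:1 corresponds to ~0.30 bpp
assert abs(0.48 / 3 - 0.16) < 1e-12           # 0.16 bpp per colour plane
```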

  8. Implementation of digital equality comparator circuit on memristive memory crossbar array using material implication logic

    NASA Astrophysics Data System (ADS)

    Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al

    2018-03-01

    One of the most significant constraints of the Von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This constraint significantly impacts Big Data and data-intensive applications, such as DNA sequence comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better total execution time than the sequential mapping but has a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, both by using the overlapping technique.
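
    Material implication (IMP) is the native primitive of a memristive crossbar, and an equality comparator reduces to an AND of per-bit XNORs, each XNOR being two implications. A logic-level sketch of that decomposition (the crossbar scheduling, i.e. the sequential vs. parallel mapping studied in the paper, is not modelled):

```python
def imply(p, q):
    """Material implication: p IMP q = (NOT p) OR q, the operation a
    memristive crossbar computes natively."""
    return (1 - p) | q

def bit_equal(a, b):
    """XNOR built from two implications: (a -> b) AND (b -> a)."""
    return imply(a, b) & imply(b, a)

def equality_comparator(x, y):
    """n-bit equality: AND together the per-bit XNORs."""
    assert len(x) == len(y)
    out = 1
    for a, b in zip(x, y):
        out &= bit_equal(a, b)
    return out

assert equality_comparator([1, 0, 1], [1, 0, 1]) == 1
assert equality_comparator([1, 0, 1], [1, 0, 0]) == 0
```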

  9. Femtosecond-laser-induced periodic surface structures on magnetic layer targets: The roles of femtosecond-laser interaction and of magnetization

    NASA Astrophysics Data System (ADS)

    Czajkowski, Klaus; Ratzke, Markus; Varlamova, Olga; Reif, Juergen

    2017-09-01

    We investigate femtosecond-laser-induced periodic surface structures (LIPSS) on a complex multilayer target, namely a 20-GB computer hard disk (HD), consisting of a metallic substrate, a magnetic layer, and a thin polymeric protective layer. Depending on the dose (fluence × number of pulses), first the polymeric cover layer is completely removed, revealing a periodic surface modulation of the magnetic layer which does not seem to be induced by the laser action. At higher dose, the magnetic layer morphology is strongly modified by laser-induced periodic structures (LIPS) and, finally, a kind of etch stop is reached at the bottom of the magnetic layer. The LIPS show very high modulation depth below and above the original surface level. In the present work, the role of magnetization and magneto-mechanical forces in the structure formation process is studied by monitoring the bit-wise magnetization of the HD with a magnetic force microscope. It is shown that the structures at low laser dose reflect the magnetic bits. At higher dose the magnetic influence appears to be extinguished in favor of the LIPS. This suggests transient heating above the Curie temperature and an associated loss of magnetic order. The results compare well with our model of LIPS/LIPSS formation by self-organized relaxation from a laser-induced thermodynamic instability.

  10. Strategies for the Evolution of Sex

    NASA Astrophysics Data System (ADS)

    Erzan, Ayse

    2002-03-01

    Using a bit-string model of evolution we find a successful route to diploidy and sex in simple organisms, for a step-like fitness function. Assuming that an excess of deleterious mutations triggers the conversion of haploids to diploidy and sex, we find that only one pair of sexual organisms can take over a finite population, if they engage in sexual reproduction under unfavorable conditions, and otherwise perform mitosis. Then, a haploid-diploid (HD) cycle is established, with an abbreviated haploid phase, as in present day sexual reproduction. If crossover is allowed during meiosis, HD cycles of arbitrary duration can be maintained. We find that the sexual population has a higher mortality rate than asexual diploids, but also a relaxation rate that is an order of magnitude higher. As a result, sexuals have a higher adaptability and lower mutational load on the average, since they can select out the undesirable genes much faster.
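
    A minimal sketch of the bit-string ingredients (per-bit mutation and a step-like fitness), with all parameter values invented for illustration; the diploidy, meiosis and crossover machinery of the model is omitted:

```python
import random

def mutate(genome, p, rng):
    """Flip each bit of a bit-string genome with probability p; a '1'
    represents a deleterious mutation."""
    return [g ^ 1 if rng.random() < p else g for g in genome]

def survives(genome, threshold):
    """Step-like fitness: the organism survives only while its number of
    deleterious mutations stays below the threshold."""
    return sum(genome) < threshold

rng = random.Random(0)
g = mutate([0] * 16, 0.1, rng)      # one generation of mutation pressure
assert survives([0] * 16, 4)        # pristine genome survives
assert not survives([1] * 16, 4)    # heavily mutated genome does not
```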

  11. Use of the DPP4BIT System for the Management of Hospital Medical Equipment.

    PubMed

    Tsoromokos, Dimitrios; Tsaloukidis, Nikolaos; Zarakovitis, Dimitrios; Lazakidou, Athina

    2017-01-01

    The Information and Communication Technologies (ICT) combined with the development of innovative skills within the broader health sector, can significantly improve and upgrade health care quality services. The proposed DDP4BIT system supports an alternative channel for digital information recording and equipment handling of Biomedical Technology Departments (BITs) of Health Care Units. This technology is ideal for all types of procedures based on handwritten forms that are commonly used in Health Care Units. The collection of useful statistics for analyzing and exporting data indicators is used in order to reduce ratios, such as operating time ratio, ideal operating time indicator, number of repetitive quality failures, total maintenance cost, etc. and supports decision-making.

  12. Design of a 0.13-μm CMOS cascade expandable ΣΔ modulator for multi-standard RF telecom systems

    NASA Astrophysics Data System (ADS)

    Morgado, Alonso; del Río, Rocío; de la Rosa, José M.

    2007-05-01

    This paper reports a 130-nm CMOS programmable cascade ΣΔ modulator for multi-standard wireless terminals, capable of operating on three standards: GSM, Bluetooth and UMTS. The modulator is reconfigured at both architecture- and circuit- level in order to adapt its performance to the different standards specifications with optimized power consumption. The design of the building blocks is based upon a top-down CAD methodology that combines simulation and statistical optimization at different levels of the system hierarchy. Transistor-level simulations show correct operation for all standards, featuring 13-bit, 11.3-bit and 9-bit effective resolution within 200-kHz, 1-MHz and 4-MHz bandwidth, respectively.

  13. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
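
    The bit-flip half of the scheme has a simple classical analogue: a ZZ-type parity check on a pair of bits flags a single flip without revealing the individual bit values. Phase-flip errors need the complementary XX check, which has no classical counterpart, so this sketch covers only the classical part:

```python
def zz_syndrome(bits):
    """Parity of an encoded pair: 0 = even (no error signalled),
    1 = odd.  Like a non-demolition parity measurement, it exposes
    only the parity, never the bit values themselves."""
    return sum(bits) % 2

encoded = [0, 0]                  # pair prepared with even parity
assert zz_syndrome(encoded) == 0  # no error detected
assert zz_syndrome([0, 1]) == 1   # a single bit-flip fires the syndrome
```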

  14. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.

  15. A new optical post-equalization based on self-imaging

    NASA Astrophysics Data System (ADS)

    Guizani, S.; Cheriti, A.; Razzak, M.; Boulslimani, Y.; Hamam, H.

    2005-09-01

    Driven by the world's growing need for communication bandwidth, progress is constantly being reported in building newer fibers capable of handling the rapid increase in traffic. However, building an optical fiber link is a major investment, one that is very expensive to replace. A major impairment that restricts the achievement of higher bit rates over standard single-mode fiber is chromatic dispersion. This is particularly problematic for systems operating in the 1550 nm band, where the chromatic dispersion limit decreases rapidly in inverse proportion to the square of the bit rate. For the first time, to the best of our knowledge, this document illustrates a new optical technique to post-compensate chromatic dispersion in fiber using the temporal Talbot effect at bit rates exceeding 40 Gbit/s. We propose a new optical post-equalization solution based on Talbot self-imaging.
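
    The inverse-square scaling mentioned above is easy to quantify. The anchor point below (roughly 60 km of standard single-mode fiber at 10 Gbit/s in the 1550 nm band) is an assumed, typical value for illustration, not a number from the paper:

```python
def dispersion_limited_km(bit_rate_gbps, ref_rate_gbps=10.0, ref_km=60.0):
    """Chromatic-dispersion-limited reach scales as 1/B^2, anchored to an
    assumed reference reach at a reference bit rate."""
    return ref_km * (ref_rate_gbps / bit_rate_gbps) ** 2

# Quadrupling the bit rate (10 -> 40 Gbit/s) cuts the reach 16-fold:
assert dispersion_limited_km(40.0) == dispersion_limited_km(10.0) / 16
```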

  16. A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application

    NASA Astrophysics Data System (ADS)

    Roy, Sounak; Banerjee, Swapna

    2018-03-01

    This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver applications. Several enhancement techniques have been applied in implementing the ADC in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling are used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front-end sample-and-hold amplifier is possible by employing dynamic comparators with switched-capacitor-based comparison of the input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other; hence the term quadrature parallel pipeline ADC. These configurations ensure that the entire ADC contains only eight operational transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process with a supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.
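
    ENOB relates to the measured SINAD through the standard conversion ENOB = (SINAD − 1.76 dB)/6.02, so the reported 6.86- and 6.11-bit figures imply SINADs of roughly 43 and 38.5 dB (back-calculated here, not quoted from the paper):

```python
def enob(sinad_db):
    """Effective number of bits from SINAD in dB."""
    return (sinad_db - 1.76) / 6.02

def sinad_db(enob_bits):
    """Inverse conversion: SINAD in dB implied by an ENOB figure."""
    return 6.02 * enob_bits + 1.76

assert round(sinad_db(6.86), 1) == 43.1
assert round(sinad_db(6.11), 1) == 38.5
```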

  17. A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application

    NASA Astrophysics Data System (ADS)

    Roy, Sounak; Banerjee, Swapna

    2018-06-01

    This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver applications. Several enhancement techniques have been applied in implementing the ADC in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling are used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front-end sample-and-hold amplifier is possible by employing dynamic comparators with switched-capacitor-based comparison of the input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other; hence the term quadrature parallel pipeline ADC. These configurations ensure that the entire ADC contains only eight operational transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process with a supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.

  18. Lossless compression of AVIRIS data: Comparison of methods and instrument constraints

    NASA Technical Reports Server (NTRS)

    Roger, R. E.; Arnold, J. F.; Cavenor, M. C.; Richards, J. A.

    1992-01-01

    A family of lossless compression methods, allowing exact image reconstruction, is evaluated for compressing Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image data. The methods are based on Differential Pulse Code Modulation (DPCM). The compressed data have an entropy of the order of 6 bits/pixel. A theoretical model indicates that significantly better lossless compression is unlikely to be achieved because of limits caused by the noise in the AVIRIS channels. AVIRIS data differ from data produced by other visible/near-infrared sensors, such as LANDSAT-TM or SPOT, in several ways. Firstly, the data are recorded at a greater resolution (12 bits, though packed into 16-bit words). Secondly, the spectral channels are relatively narrow and provide continuous coverage of the spectrum, so that the data in adjacent channels are generally highly correlated. Thirdly, the noise characteristics of the AVIRIS are defined by the channels' Noise Equivalent Radiances (NERs), and these NERs show that, at some wavelengths, the least significant 5 or 6 bits of data are essentially noise.
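
    The ~6 bits/pixel figure is the first-order entropy of DPCM prediction residuals. A minimal sketch with a synthetic signal (the sample values are illustrative only, not AVIRIS data):

```python
# First-order entropy of DPCM prediction residuals, the quantity the AVIRIS study
# bounds at about 6 bits/pixel. The signal below is a synthetic illustration.
from collections import Counter
from math import log2

def entropy_bits(symbols):
    """Shannon entropy (bits/symbol) of a symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def dpcm_residuals(samples):
    """Previous-sample predictor: residual[i] = samples[i] - samples[i-1]."""
    return [samples[i] - samples[i - 1] for i in range(1, len(samples))]

signal = [100, 102, 101, 103, 104, 104, 103, 105]
residuals = dpcm_residuals(signal)   # smooth data -> small residual alphabet
print(entropy_bits(residuals))       # lower than the entropy of the raw samples
```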

  19. An ablative pulsed plasma thruster with a segmented anode

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Ren, Junxue; Tang, Haibin; Ling, William Yeong Liang; York, Thomas M.

    2018-01-01

    An ablative pulsed plasma thruster (APPT) design with a ‘segmented anode’ is proposed in this paper. We aim to examine the effect that this asymmetric electrode configuration (a normal cathode and a segmented anode) has on the performance of an APPT. The magnetic field of the discharge arc, plasma density in the exit plume, impulse bit, and thrust efficiency were studied using a magnetic probe, Langmuir probe, thrust stand, and mass bit measurements, respectively. When compared with conventional symmetric parallel electrodes, the segmented anode APPT shows an improvement in the impulse bit of up to 28%. The thrust efficiency is also improved by 49% (from 5.3% to 7.9% for conventional and segmented designs, respectively). Long-exposure broadband emission images of the discharge morphology show that compared with a normal anode, a segmented anode results in clear differences in the luminous discharge morphology and better collimation of the plasma. The magnetic probe data indicate that the segmented anode APPT exhibits a higher current density in the discharge arc. Furthermore, Langmuir probe data collected from the central exit plane show that the peak electron density is 75% higher than with conventional parallel electrodes. These results are believed to be fundamental to the physical mechanisms behind the increased impulse bit of an APPT with a segmented electrode.

  20. Experimental realization of the analogy of quantum dense coding in classical optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhenwei; Sun, Yifan; Li, Pengyun

    2016-06-15

    We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding, which uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
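
    The capacity figures in the abstract are simple logarithms: a quaternary symbol carries log2(4) = 2 bits, the cited quantum figure is log2(3) ≈ 1.585 bits, and 24 classical bits fit into 12 quaternary digits:

```python
# Channel-capacity arithmetic from the abstract: quaternary symbols carry
# log2(4) = 2 bits each, so 24 bits need 12 symbols; the cited quantum figure
# is log2(3) ~= 1.585 bits.
from math import log2

dense_coding_capacity = log2(4)       # 2 bits per quaternary symbol
quantum_coding_capacity = log2(3)     # ~1.585 bits
symbols_for_24_bits = 24 / dense_coding_capacity

print(dense_coding_capacity, round(quantum_coding_capacity, 3), symbols_for_24_bits)
# -> 2.0 1.585 12.0
```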

  1. Opportunistic Beamforming with Wireless Powered 1-bit Feedback Through Rectenna Array

    NASA Astrophysics Data System (ADS)

    Krikidis, Ioannis

    2015-11-01

    This letter deals with the opportunistic beamforming (OBF) scheme for the multi-antenna downlink with spatial randomness. In contrast to conventional OBF, the terminals return only 1-bit feedback, which is powered by wireless power transfer through a rectenna array. We study two fundamental topologies for the combination of the rectenna elements: the direct-current combiner and the radio-frequency combiner. The beam outage probability is derived in closed form for both combination schemes by using high-order statistics and stochastic geometry.

  2. A Dynamic Model for C3 Information Incorporating the Effects of Counter C3

    DTIC Science & Technology

    1980-12-01

    birth and death rates exactly cancel one another and H = 0. Although this simple first-order linear system is not very sophisticated, we see...per hour and refer to the average behavior of the entire system ensemble much as species birth and death rates are typically measured in births (or...unit time) iii) VTX, VIY ; Uncertainty Death Rates resulting from data inputs (bits/bit per unit time) 3 -1 iv) YYV» YvY > Counter C

  3. A 32-bit NMOS microprocessor with a large register file

    NASA Astrophysics Data System (ADS)

    Sherburne, R. W., Jr.; Katevenis, M. G. H.; Patterson, D. A.; Sequin, C. H.

    1984-10-01

    Two scaled versions of a 32-bit NMOS reduced instruction set computer CPU, called RISC II, have been implemented on two different processing lines using the simple Mead and Conway layout rules with lambda values of 2 and 1.5 microns (corresponding to drawn gate lengths of 4 and 3 microns), respectively. The design utilizes a small set of simple instructions in conjunction with a large register file in order to provide high performance. This approach has resulted in two surprisingly powerful single-chip processors.

  4. Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity

    DTIC Science & Technology

    2007-09-01

    transmits a 32,767 bit pseudo-random “short” code that repeats 37.5 times per second. Since the pseudo-random bit pattern and modulation scheme are... correlation process takes two “sample windows,” both of which are ν = 16 samples wide and are spaced N = 64 samples apart, and compares them. When the...technique in (3.4) is a necessary step in order to get a more accurate estimate of the sample shift from the symbol boundary correlator in (3.1). Figure

  5. Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem

    NASA Astrophysics Data System (ADS)

    Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad

    2013-12-01

    In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where the source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are flipped versions of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. Joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNRs) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, as long as the correlation ρ of the complex fading variation satisfies |ρ| < 1, 2nd-order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of 2nd-order maximum ratio combining (MRC) diversity if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.
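
    For the bit-flipping intra-link model, the Slepian-Wolf bound on the relay's compression rate is the binary entropy h(p) of the flip probability. A minimal sketch of that quantity (illustrative, not the article's full outage derivation):

```python
# Binary entropy h(p) = -p*log2(p) - (1-p)*log2(1-p): the Slepian-Wolf rate
# bound for an intra-link modeled as bit flipping with probability p.
from math import log2

def binary_entropy(p: float) -> float:
    """h(p), with the convention h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Fully correlated streams (p = 0) need no extra rate; uncorrelated streams
# (p = 0.5) need a full extra bit, which is where diversity collapses to order 1.
print(binary_entropy(0.0), binary_entropy(0.5))  # -> 0.0 1.0
```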

  6. Correlative Magnetic Imaging of Heat-Assisted Magnetic Recording Media in Cross Section Using Lorentz TEM and MFM

    DOE PAGES

    Kim, Taeho Roy; Phatak, Charudatta; Petford-Long, Amanda K.; ...

    2017-10-23

    In order to increase the storage density of hard disk drives, a detailed understanding of the magnetic structure of the granular magnetic layer is essential. Here, we demonstrate an experimental procedure for imaging recorded bits on heat-assisted magnetic recording (HAMR) media in cross section using Lorentz transmission electron microscopy (TEM). With magnetic force microscopy and focused ion beam (FIB) milling, we successfully targeted a single track to prepare cross-sectional TEM specimens. Then, we characterized the magnetic structure of bits with their precise location and orientation using the Fresnel mode of Lorentz TEM. This method can promote understanding of the correlation between bits and their material structure in HAMR media to better design the magnetic layer.

  7. Correlative Magnetic Imaging of Heat-Assisted Magnetic Recording Media in Cross Section Using Lorentz TEM and MFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Taeho Roy; Phatak, Charudatta; Petford-Long, Amanda K.

    In order to increase the storage density of hard disk drives, a detailed understanding of the magnetic structure of the granular magnetic layer is essential. Here, we demonstrate an experimental procedure for imaging recorded bits on heat-assisted magnetic recording (HAMR) media in cross section using Lorentz transmission electron microscopy (TEM). With magnetic force microscopy and focused ion beam (FIB) milling, we successfully targeted a single track to prepare cross-sectional TEM specimens. Then, we characterized the magnetic structure of bits with their precise location and orientation using the Fresnel mode of Lorentz TEM. This method can promote understanding of the correlation between bits and their material structure in HAMR media to better design the magnetic layer.

  8. Miniaturized module for the wireless transmission of measurements with Bluetooth.

    PubMed

    Roth, H; Schwaibold, M; Moor, C; Schöchlin, J; Bolz, A

    2002-01-01

    The wiring of patients for obtaining medical measurements has many disadvantages. In order to limit these, a miniaturized module was developed which digitizes analog signals and sends them wirelessly to the receiver using Bluetooth. Bluetooth is especially suitable for this application because distances of up to 10 m are possible with low power consumption and robust, encrypted transmission. The module consists of a Bluetooth chip, which is initialized by a microcontroller in such a way that connections from other Bluetooth devices can be accepted. The signals are then transmitted to the distant end. The maximum bit rate of the 23 mm x 30 mm module is 73.5 kBit/s. At 4.7 kBit/s, the current consumption is 12 mA.

  9. Extracting hidden messages in steganographic images

    DOE PAGES

    Quach, Tu-Thach

    2014-07-17

    The eventual goal of steganalytic forensics is to extract the hidden messages embedded in steganographic images. A promising technique that partially addresses this problem is steganographic payload location, an approach to reveal the message bits, but not their logical order. It works by finding modified pixels, or residuals, as an artifact of the embedding process. This technique is successful against simple least-significant-bit steganography and group-parity steganography. The actual messages, however, remain hidden, as no logical order can be inferred from the located payload. This paper establishes an important result addressing this shortcoming: we show that the expected mean residuals contain enough information to logically order the located payload, provided that the size of the payload in each stego image is not fixed. The located payload can be ordered as prescribed by the mean residuals to obtain the hidden messages without knowledge of the embedding key, exposing the vulnerability of these embedding algorithms. We provide experimental results to support our analysis.
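
    The residual artifact the paper builds on can be sketched with plain LSB substitution: the XOR of cover and stego LSBs is nonzero exactly at modified pixels. A minimal sketch with made-up pixel values:

```python
# Minimal LSB-substitution sketch: residuals (cover XOR stego, restricted to the
# LSB) mark exactly the modified pixels, the payload-location artifact discussed
# in the abstract. Pixel values and positions are illustrative.

def embed_lsb(cover, bits, positions):
    """Replace the LSB of cover[p] with the next message bit, for p in positions."""
    stego = list(cover)
    for p, b in zip(positions, bits):
        stego[p] = (stego[p] & ~1) | b
    return stego

def residual(cover, stego):
    """1 where the LSB changed, 0 elsewhere."""
    return [(c ^ s) & 1 for c, s in zip(cover, stego)]

cover = [52, 55, 61, 66, 70, 61, 64, 73]
stego = embed_lsb(cover, bits=[0, 1, 0], positions=[1, 3, 5])
print(residual(cover, stego))   # -> [0, 1, 0, 1, 0, 1, 0, 0]
# The message itself is still unordered; recovering its logical order is the
# problem the paper solves via expected mean residuals.
```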

  10. Real-time echocardiogram transmission protocol based on regions and visualization modes.

    PubMed

    Cavero, Eva; Alesanco, Álvaro; García, José

    2014-09-01

    This paper proposes an Echocardiogram Transmission Protocol (ETP) for real-time end-to-end transmission of echocardiograms over IP networks. The ETP has been designed taking into account the echocardiogram characteristics of each visualized region, encoding each region according to its data type, visualization characteristics and diagnostic importance in order to improve the coding and thus the transmission efficiency. Furthermore, each region is sent separately, and different error protection techniques can be used for each region. This leads to an efficient use of resources and provides greater protection for those regions with more clinical information. Synchronization is implemented for regions that change over time. The echocardiogram composition is different for each device; the protocol is valid for all echocardiogram devices thanks to the incorporation of configuration information which includes the composition of the echocardiogram. The efficiency of the ETP has been demonstrated in terms of the number of bits sent with the proposed protocol. The codec and transmission rates used for the regions of interest have been set according to previous recommendations. Although the saving in coded bits depends on the video composition, a coding gain of more than 7% has been achieved with respect to transmission without the ETP.

  11. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on minimizing the minimum mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the solution other than the MMSE one has characteristics similar to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
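
    The MMSE baseline that the negentropy criterion is compared against is typically trained with LMS. A minimal LMS sketch; the channel, tap count, and step size below are illustrative assumptions, not the paper's setup:

```python
# Minimal LMS adaptive equalizer: the MMSE-style baseline referenced in the
# abstract. Channel model (1 + 0.3 z^-1), taps, and step size are illustrative.
import random

def lms_equalize(received, desired, n_taps=3, mu=0.05, epochs=200):
    """Adapt FIR taps w to minimize the mean squared error e = d - w.x."""
    w = [0.0] * n_taps
    for _ in range(epochs):
        for k in range(n_taps - 1, len(received)):
            x = received[k - n_taps + 1:k + 1][::-1]   # most recent sample first
            y = sum(wi * xi for wi, xi in zip(w, x))
            e = desired[k] - y
            w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

random.seed(0)
bits = [random.choice((-1.0, 1.0)) for _ in range(200)]
# Mild inter-symbol interference: r[k] = b[k] + 0.3 * b[k-1]
rx = [bits[k] + 0.3 * bits[k - 1] if k else bits[0] for k in range(len(bits))]
w = lms_equalize(rx, bits)
print([round(t, 2) for t in w])   # leading tap dominates after convergence
```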

  12. Parallel efficient rate control methods for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image, split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will be truncated, in order to stop execution prematurely and save time. However, none of them was designed with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do this, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
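
    The truncation idea behind PCRD-Opt can be sketched as a greedy selection of coding passes by rate-distortion slope under a rate budget. This is a toy simplification with made-up pass data; real PCRD-Opt additionally enforces per-block pass order and convex-hull slopes:

```python
# Toy PCRD-Opt-style truncation: each coding pass of each code block has a rate
# cost and a distortion reduction; passes are taken in order of steepest R-D
# slope until the budget is exhausted. Pass data are invented for illustration.

def pcrd_opt(passes, budget):
    """passes: list of (block_id, rate_cost, distortion_drop). Greedy by slope."""
    ranked = sorted(passes, key=lambda p: p[2] / p[1], reverse=True)
    spent, kept = 0, []
    for block, rate, dist in ranked:
        if spent + rate <= budget:
            spent += rate
            kept.append(block)
    return spent, kept

passes = [("b0", 100, 900), ("b0", 80, 300), ("b1", 120, 1000), ("b1", 60, 150)]
spent, kept = pcrd_opt(passes, budget=250)
print(spent, kept)   # -> 220 ['b0', 'b1']
```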

  13. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    In this paper, we propose a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique, in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams and employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between inter-frames along the temporal direction using motion compensated temporal filtering; thus high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.

  14. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with a 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with a Bahl-Cocke-Jelinek-Raviv (BCJR) detector, and decoded with a low-density parity-check (LDPC) decoder. Then, using the estimated bit information from the main and adjacent tracks, the ITI pattern for each island of the main track can be estimated, and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to the 2D fixed equalizer.

  15. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics such that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model, and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.

  16. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links based on partially coherent flat-topped (PCFT) array laser beams has been investigated in detail. Providing high power, array laser beams are employed to increase the range of UWOC links. To characterize the effects of oceanic turbulence on the propagation behavior of the considered beam, using the extended Huygens-Fresnel principle, an analytical expression for the cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived. Then, based on these expressions, the on-axis scintillation index of the mentioned beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the propagation behavior of the scintillation index and the BER have been studied in detail. The results indicate that, in comparison with the Gaussian array beam, when the source size of the beamlets is larger than the first Fresnel zone, the PCFT array laser beam with higher flatness order is found to have a lower scintillation index and hence a lower BER. Specifically, in the sense of scintillation index reduction, using PCFT array laser beams has a considerable benefit in comparison with single PCFT or Gaussian laser beams and also Gaussian array beams. All the simulation results of this paper are shown in graphs and analyzed in detail.

  17. High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.

    This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.

  18. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve densities of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach, with extrinsic information and log-likelihood ratio values exchanged between an iterative soft output Viterbi algorithm (SOVA) and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10^-6.

  19. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
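
    The shortfall of fixed-width binary floats, and the software-precision remedy the article motivates, can be sketched with Python's stdlib decimal module (one common high-precision facility; the article itself surveys the general need, not this specific library):

```python
# IEEE binary floats cannot represent 0.1 exactly, so 0.1 + 0.2 != 0.3.
# Software arbitrary precision (here Python's stdlib decimal module, used as an
# illustrative example) removes that rounding at the cost of speed.
from decimal import Decimal, getcontext

print(0.1 + 0.2 == 0.3)                 # -> False (binary rounding error)

getcontext().prec = 50                  # 50 significant decimal digits
a = Decimal("0.1") + Decimal("0.2")
print(a == Decimal("0.3"))              # -> True (exact in decimal)
```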

  20. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide

    PubMed Central

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P. K. A.

    2014-01-01

    All-optical analog-to-digital converters based on third-order nonlinear effects in silicon waveguides are promising candidates to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon nanocrystals. A nonlinear coefficient as high as 8708 W⁻¹/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon nanocrystals, which provides enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power level of less than 0.4 W can be achieved, along with an effective number of bits of 1.98 and Gray-coded output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems. PMID:25417847
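
    The Gray-coded 2-bit output mentioned in the abstract can be sketched as a four-level quantizer whose adjacent codes differ in exactly one bit, so a one-level error corrupts only one bit. Thresholds below are illustrative assumptions, not values from the paper:

```python
# 2-bit quantizer with Gray-coded output: adjacent levels differ in one bit.
# The uniform thresholds are an illustrative assumption.

GRAY_2BIT = ["00", "01", "11", "10"]   # level index -> Gray code

def quantize_2bit(x, full_scale=1.0):
    """Map x in [0, full_scale] to one of four Gray-coded levels."""
    level = min(3, int(4 * x / full_scale))
    return GRAY_2BIT[level]

print([quantize_2bit(v) for v in (0.1, 0.3, 0.6, 0.9)])
# -> ['00', '01', '11', '10']
```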

  1. 24-Hour Relativistic Bit Commitment.

    PubMed

    Verbanis, Ephanielle; Martin, Anthony; Houlmann, Raphaël; Boso, Gianluca; Bussières, Félix; Zbinden, Hugo

    2016-09-30

    Bit commitment is a fundamental cryptographic primitive in which a party wishes to commit a secret bit to another party. Perfect security between mistrustful parties is unfortunately impossible to achieve through the asynchronous exchange of classical and quantum messages. Perfect security can nonetheless be achieved if each party splits into two agents exchanging classical information at times and locations satisfying strict relativistic constraints. A relativistic multiround protocol to achieve this was previously proposed and used to implement a 2-millisecond commitment time. Much longer durations were initially thought to be insecure, but recent theoretical progress showed that this is not so. In this Letter, we report on the implementation of a 24-hour bit commitment solely based on timed high-speed optical communication and fast data processing, with all agents located within the city of Geneva. This duration is more than 6 orders of magnitude longer than before, and we argue that it could be extended to one year and allow much more flexibility on the locations of the agents. Our implementation offers a practical and viable solution for use in applications such as digital signatures, secure voting and honesty-preserving auctions.

  2. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide.

    PubMed

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P K A

    2014-11-24

    All-optical analog-to-digital converters based on third-order nonlinear effects in silicon waveguides are promising candidates to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon nanocrystals. A nonlinear coefficient as high as 8708 W⁻¹/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon nanocrystals, which provides enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power level of less than 0.4 W can be achieved, along with an effective number of bits of 1.98 and Gray-coded output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems.

  3. An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution

    NASA Astrophysics Data System (ADS)

    Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray

    2013-09-01

    In order to improve data hiding in multimedia formats such as image and audio and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, both the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are used. The results prove the method to be time-efficient and effective. The algorithm is also tested for various numbers of bits; for each, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n<=4, owing to the better PSNR achieved by this technique. The final result obtained after the steganography process does not reveal the presence of any hidden message, thus satisfying the criterion of an imperceptible message.
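    The n-bit LSB substitution underlying this record can be sketched as follows. This is a minimal generic sketch, not the paper's method: the redundant-noise secret key and the DCT/DWT audio path are omitted, and the function names are illustrative.

```python
def embed_lsb(cover, bits, n=1):
    """Replace the n least significant bits of each 8-bit cover sample
    (pixel or audio sample) with n secret bits; the abstract reports
    imperceptible results for n <= 4."""
    mask = (~((1 << n) - 1)) & 0xFF                 # keeps the high bits
    chunks = [bits[i:i + n] for i in range(0, len(bits), n)]
    stego = list(cover)
    for i, chunk in enumerate(chunks):
        # Pad a short final chunk with zeros, then overwrite the n LSBs.
        stego[i] = (stego[i] & mask) | int(chunk.ljust(n, '0'), 2)
    return stego

def extract_lsb(stego, nbits, n=1):
    """Read the embedded bit string back from the n LSBs of each sample."""
    out = ''.join(format(s & ((1 << n) - 1), '0{}b'.format(n)) for s in stego)
    return out[:nbits]
```

    Each sample changes by at most 2^n - 1 grey levels, which is why the PSNR stays high for small n.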

  4. SOPanG: online text searching over a pan-genome.

    PubMed

    Cislak, Aleksander; Grabowski, Szymon; Holub, Jan

    2018-06-22

    The many thousands of high-quality genomes available nowadays imply a shift from single-genome to pan-genomic analyses. A basic algorithmic building block for such a scenario is online search over a collection of similar texts, a problem with surprisingly few solutions presented so far. We present SOPanG, a simple tool for exact pattern matching over an elastic-degenerate string, a recently proposed simplified model for the pan-genome. Thanks to bit-parallelism, it achieves pattern matching speeds above 400 MB/s, more than an order of magnitude higher than that of other software. SOPanG is available for free from: https://github.com/MrAlexSee/sopang. Supplementary data are available at Bioinformatics online.
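    SOPanG's elastic-degenerate matching is not reproduced here, but the bit-parallelism it relies on can be illustrated with the classic Shift-And exact matcher, in which a single machine word tracks every active prefix of the pattern and each text character advances all of them at once.

```python
def shift_and(text, pattern):
    """Bit-parallel Shift-And exact matching: bit i of the state word D
    is set iff pattern[0..i] matches the text ending at the current
    position, so each text character is processed in O(1) word ops."""
    m = len(pattern)
    # Precompute one bitmask per symbol: bit i set iff pattern[i] == c.
    B = {}
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    D = 0
    hits = []
    for j, c in enumerate(text):
        D = ((D << 1) | 1) & B.get(c, 0)
        if D & (1 << (m - 1)):
            hits.append(j - m + 1)      # start of an occurrence
    return hits
```

    For patterns no longer than the word size this runs in O(n) time regardless of pattern content, which is the source of the speeds quoted in the abstract.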

  5. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse code modulation based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
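    A rough sketch of the quantization step described above, assuming plain Lloyd's k-means over vectors of neighboring samples (the D-RoF transmission chain, carrier aggregation, and bit mapping are beyond this sketch, and the function name is illustrative):

```python
import random

def kmeans_quantize(vectors, k, iters=20, seed=0):
    """Lloyd's k-means: learn k codewords for vectors of neighboring
    samples, then quantize each vector to its nearest codeword index.
    With k codewords, each vector costs only log2(k) bits to transmit."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        # Assign every vector to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))
            clusters[i].append(v)
        # Move each centroid to the mean of its cluster (skip empty ones).
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))
              for v in vectors]
    return centroids, labels
```

    Exploiting the correlation between neighboring samples is what lets a learned multidimensional codebook beat uniform scalar PCM at the same bit budget.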

  6. Image steganography based on 2k correction and coherent bit length

    NASA Astrophysics Data System (ADS)

    Sun, Shuliang; Guo, Yongning

    2014-10-01

    In this paper, a novel algorithm is proposed. Firstly, the edge of the cover image is detected with the Canny operator and secret data is embedded in edge pixels. A sorting method is used to randomize the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2k correction is applied to achieve better imperceptibility in the stego image. The experiments show that the proposed method outperforms LSB-3 and Jae-Gil Yu's method in PSNR and capacity.
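    The "2k correction" step is only named in the abstract. Assuming it denotes the common scheme of shifting the stego value by ±2^k so that it lands nearest the original pixel while preserving the k embedded LSBs (a plausible reading, not necessarily the authors' exact rule; the function name is illustrative), it can be sketched as:

```python
def embed_2k_correction(pixel, secret, k):
    """Embed k secret bits into the k LSBs, then apply 2^k correction:
    among the candidates base - 2^k, base, base + 2^k (all sharing the
    same k LSBs), keep the one closest to the original pixel value."""
    base = (pixel & ~((1 << k) - 1)) | secret
    candidates = [base - (1 << k), base, base + (1 << k)]
    candidates = [c for c in candidates if 0 <= c <= 255]  # stay in range
    return min(candidates, key=lambda c: abs(c - pixel))
```

    For example, embedding the bits 111 (secret = 7, k = 3) into pixel 16 yields 15 instead of 23: the extracted LSBs are identical, but the visible distortion drops from 7 grey levels to 1.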

  7. Two-dimensional signal processing using a morphological filter for holographic memory

    NASA Astrophysics Data System (ADS)

    Kondo, Yo; Shigaki, Yusuke; Yamamoto, Manabu

    2012-03-01

    Today, along with the wider use of high-speed information networks and multimedia, higher-density and higher-transfer-rate storage devices are increasingly necessary. Therefore, research and development into holographic memories with three-dimensional storage areas is being carried out to realize next-generation large-capacity memories. In holographic memories, however, interference between bits, which affects the detection characteristics, occurs as a result of aberrations such as the deviation of a wavefront in the optical system. In this study, we pay particular attention to the nonlinear factors that cause bit errors, and filters based on a Volterra equalizer and on morphological operations are investigated as a means of signal processing.

  8. Effects of network node consolidation in optical access and aggregation networks on costs and power consumption

    NASA Astrophysics Data System (ADS)

    Lange, Christoph; Hülsermann, Ralf; Kosiankowski, Dirk; Geilhardt, Frank; Gladisch, Andreas

    2010-01-01

    The increasing demand for higher bit rates in access networks requires fiber deployment closer to the subscriber, resulting in fiber-to-the-home (FTTH) access networks. Besides higher access bit rates, optical access network infrastructure and related technologies enable the network operator to establish larger service areas, resulting in a simplified network structure with fewer network nodes. By changing the network structure, network operators aim to benefit from a changed cost structure: in the short and mid term by decreasing the upfront investments for network equipment due to concentration effects and by reducing energy costs through the higher energy efficiency of large network sites housing a large amount of network equipment; in the long term, also through savings in operational expenditures (OpEx) due to the closing of central office (CO) sites. In this paper, different architectures for optical access networks based on state-of-the-art technology are analyzed with respect to network installation costs and power consumption in the context of access node consolidation. Network planning and dimensioning results are calculated for a realistic network scenario for Germany. All node consolidation scenarios are compared against a gigabit-capable passive optical network (GPON) based FTTH access network operated from the conventional CO sites. The results show that a moderate reduction of the number of access nodes may be beneficial, since in that case the capital expenditures (CapEx) do not rise extraordinarily and savings in OpEx related to the access nodes are expected. The total power consumption does not change significantly with a decreasing number of access nodes, but clustering effects enable more energy-efficient network operation and optimized power purchase order quantities, leading to benefits in energy costs.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wasner, Evan; Bearden, Sean; Žutić, Igor, E-mail: zigor@buffalo.edu

    Digital operation of lasers with injected spin-polarized carriers provides improved operation over their conventional counterparts with spin-unpolarized carriers. Such spin-lasers can attain much higher bit rates, crucial for optical communication systems. The overall quality of a digital signal in these two types of lasers is compared using eye diagrams and quantified by the improved Q-factors and bit-error rates of spin-lasers. Surprisingly, optimal performance of spin-lasers requires finite, not infinite, spin-relaxation times, giving guidance for the design of future spin-lasers.

  10. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    NASA Astrophysics Data System (ADS)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs come down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data: voluminous amounts of high-dimensional, quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity-grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit in the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representations of the dimensions. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning.
We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm for a hemispherical and a triangular wave point cloud.
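    The Morton interleaving above can be sketched directly. The convention that y supplies the higher bit of each pair is chosen so the result matches the abstract's worked example (x = 1, y = 3 gives 11); the authors' actual bit order may differ.

```python
def morton_code(x, y, bits):
    """Interleave the binary digits of grid coordinates x and y into a
    single Z-order (Morton) code, y contributing the higher bit of each
    pair. Nearby (x, y) cells tend to get nearby codes, which is what
    makes the code usable as a locality-sensitive partition key."""
    code = 0
    for i in range(bits):
        code |= ((y >> i) & 1) << (2 * i + 1)
        code |= ((x >> i) & 1) << (2 * i)
    return code
```

    Truncating the code to fewer bit pairs merges neighboring cells, which is how the number of bits per dimension controls both partition size and level of detail.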

  11. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure and hence their higher pel correlation leads to greater removal of image redundancy.

  12. Design of High Speed and Low Offset Dynamic Latch Comparator in 0.18 µm CMOS Process

    PubMed Central

    Rahman, Labonnah Farzana; Reaz, Mamun Bin Ibne; Yin, Chia Chieu; Ali, Mohammad Alauddin Mohammad; Marufuzzaman, Mohammad

    2014-01-01

    The cross-coupled circuit mechanism based dynamic latch comparator is presented in this research. The comparator is designed using differential input stages with a regenerative S-R latch to achieve lower offset, lower power, higher speed and higher resolution. In order to decrease circuit complexity, a comparator should balance power, speed, resolution and offset voltage properly. Simulations show that this novel dynamic latch comparator, designed in 0.18 µm CMOS technology, achieves 3.44 mV resolution with 8-bit precision at a frequency of 50 MHz while dissipating 158.5 µW from a 1.8 V supply with 88.05 µA average current. Moreover, the proposed design has a propagation delay as low as 4.2 ns with an energy efficiency of 0.7 fJ/conversion-step. Additionally, the core circuit layout occupies only 0.008 mm². PMID:25299266

  13. Reducing temperature elevation of robotic bone drilling.

    PubMed

    Feldmann, Arne; Wandel, Jasmin; Zysset, Philippe

    2016-12-01

    This research work aims at reducing the temperature elevation of bone drilling. An extensive experimental study was conducted which focused on three main measures used in industry to reduce temperature elevation: irrigation, interval drilling and drill bit design. Different external irrigation rates (0 ml/min, 15 ml/min, 30 ml/min), continuously drilled interval lengths (2 mm, 1 mm, 0.5 mm) as well as two drill bit designs were tested. A custom single-flute drill bit was designed with a higher rake angle and smaller chisel edge to generate less heat than a standard surgical drill bit. A new experimental setup was developed to measure drilling forces and torques as well as the 2D temperature field at any depth using a high-resolution thermal camera. The results show that external irrigation is a main factor in reducing temperature elevation, not primarily because of its cooling effect but because it prevents drill bit clogging. During drilling, the build-up of bone material in the drill bit flutes results in excessive temperatures due to an increase in thrust forces and torques. Drilling in intervals allows the removal of bone chips and cleaning of the flutes when the drill bit is extracted, as well as cooling of the bone in between intervals, which limits the accumulation of heat. However, reducing the length of the drilled interval was found to be beneficial for temperature reduction only with the newly designed drill bit, owing to its improved cutting geometry. To evaluate possible tissue damage caused by the generated heat increase, cumulative equivalent minutes (CEM43) were calculated, and it was found that the combination of a small interval length (0.5 mm), a high irrigation rate (30 ml/min) and the newly designed drill bit was the only parameter combination that allowed drilling below the time-thermal threshold for tissue damage. 
In conclusion, an optimized drilling method has been found which might also enable drilling in more delicate procedures such as that performed during minimally invasive robotic cochlear implantation.

  14. A kilobyte rewritable atomic memory

    NASA Astrophysics Data System (ADS)

    Kalff, Floris; Rebergen, Marnix; Fahrenfort, Nora; Girovsky, Jan; Toskovic, Ranko; Lado, Jose; FernáNdez-Rossier, JoaquíN.; Otte, Sander

    The ability to manipulate individual atoms by means of scanning tunneling microscopy (STM) opens up opportunities for storage of digital data on the atomic scale. Recent achievements in this direction include data storage based on bits encoded in the charge state, the magnetic state, or the local presence of single atoms or atomic assemblies. However, a key challenge at this stage is the extension of such technologies into large-scale rewritable bit arrays. We demonstrate a digital atomic-scale memory of up to 1 kilobyte (8000 bits) using an array of individual surface vacancies in a chlorine-terminated Cu(100) surface. The chlorine vacancies are found to be stable at temperatures up to 77 K. The memory, crafted using scanning tunneling microscopy at low temperature, can be read and re-written automatically by means of atomic-scale markers, and offers an areal density of 502 Terabits per square inch, outperforming state-of-the-art hard disk drives by three orders of magnitude.

  15. Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma-pi nets.

    PubMed

    Neville, R S; Stonham, T J; Glover, R J

    2000-01-01

    In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8 bits and the activations to 9 bits. A novel methodology is introduced to enable the accuracy of sigma-pi units to be increased by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of utilising shift registers. The investigation utilises digital "Higher Order" sigma-pi nodes and studies continuous input RAM-based sigma-pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma-pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions to RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y ∈ [0,1]; this is equivalent to an average Mean Square Error (MSE) over all training vectors of 0.0001, or an error modulus of 0.01. We present a development of the sigma-pi node which enables the provision of high accuracy outputs. The sigma-pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation that is derived from a bit-stream. 
In this article we present a new methodology for storing a sigma-pi node's activations as single values which are averages. In the course of the article we state what we define as a real number, how we represent real numbers, and how continuous values are input in our neural system. We show how to utilise the bounded quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma-pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using 'ring memories'. Finally, we study the sigma-pi units' ability to generalise once they are trained to map real-valued, high accuracy, continuous functions. We use sigma-pi units as they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289-303).

  16. Self-recovery fragile watermarking algorithm based on SPIHT

    NASA Astrophysics Data System (ADS)

    Xin, Li Ping

    2015-12-01

    A fragile watermark algorithm based on SPIHT coding is proposed, which can recover the primary image itself. The novelty of the algorithm is that it can locate tampering and perform self-restoration, with very good recovery results. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself and thus obtains self-correlative watermark data, greatly reducing the quantity of embedded watermark data. The watermark data is then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the last two bit places of the gray-level image's bit-plane code, the watermarked image gains a nicer visual effect. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but also recover tampered images and realize blind detection. Peak signal-to-noise ratios of the watermarked image were higher than those of similar algorithms, and the attack resistance of the algorithm was enhanced.

  17. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have introduced a delay in the generation of the time series. When mapped xn against xn+1, these new series present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
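    The lagged-iterate idea can be sketched as follows. This is an illustrative sketch only: the paper's bifurcation parameters, the use of two interleaved series, and its bit-extraction rule are not given in the abstract, so the threshold extraction and all parameter values here are assumptions.

```python
def logistic_prbg(n_bits, mu=3.99, x0=0.3, lag=7):
    """Pseudo-random bits from a lagged logistic map x_{k+1} = mu*x_k*(1-x_k).
    Only every lag-th iterate is kept, so plotting successive outputs
    x_n against x_{n+1} no longer traces the logistic parabola; each
    kept iterate is thresholded at 0.5 to yield one bit."""
    x = x0
    bits = []
    while len(bits) < n_bits:
        for _ in range(lag):        # discard the intermediate iterates
            x = mu * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

    Any generator of this kind still has to pass the NIST statistical test suite before being trusted in a stream cipher; the sketch above is for illustrating the lag mechanism, not for cryptographic use.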

  18. Design and evaluation of a high-performance broadband fiber access based on coarse wavelength division multiplexing

    NASA Astrophysics Data System (ADS)

    R. Horche, Paloma; del Rio Campos, Carmina

    2004-10-01

    The proliferation of high-bandwidth applications has created a growing interest among network providers in upgrading networks to deliver broadband services to homes and small businesses. There has to be great efficiency between the total cost of the infrastructure and the services that can be offered to end users. Coarse Wavelength Division Multiplexing (CWDM) is an ideal solution to the tradeoff between cost and capacity. This technology uses all or part of the 1270 to 1610 nm wavelength fiber range with an optical channel separation of about 20 nm. The problem in CWDM systems is that, for a given reach, the performance is not equal for all transmitted channels because of the very different fiber attenuation and dispersion characteristics of each channel. In this work, by means of optical communication system design software, we study a CWDM network configuration for lengths of up to 100 km, in order to achieve low bit error rate (BER) performance for all optical channels. We show that the type of fiber used will have an impact both on the performance of the system and on the bit rate of each optical channel. In the study, we use both the already laid and widely deployed single-mode ITU-T G.652 optical fibers and the latest "water-peak-suppressed" versions of the same fiber, as well as G.655 fibers. We have used two types of directly modulated laser (DML): one strongly dominated by adiabatic chirp and another strongly dominated by transient chirp. The analysis has demonstrated that all the studied fibers have similar performance when the adiabatic-chirp-dominated laser is used for lengths of up to 40 km, and that fibers with a negative sign of dispersion have higher performance for long distances, at high bit rates, and throughout the spectral range analyzed. An important contribution of this work is the demonstration that when DMLs are used, a dispersion accommodation is produced that is a function of the fiber length, wavelength and bit rate. This could endanger the quality of a CWDM system if it is not designed carefully.

  19. Use of higher order signal moments and high speed digital sampling technique for neutron flux measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baers, L.B.; Gutierrez, T.R.; Mendoza, R.A.

    1993-08-01

    The second (conventional variance or Campbell signal) ⟨x²⟩, the third ⟨x³⟩, and the modified fourth-order ⟨x⁴⟩ − 3⟨x²⟩² central signal moments associated with the amplified (gain K) and filtered currents [i₁, i₂, x = K·(i₂ − ⟨i₂⟩)] from two electrodes of an ex-core neutron-sensitive fission detector have been measured versus the reactor power of the 1 MW TRIGA reactor in Mexico City. Two channels of a high-speed (400 kHz) multiplexing data sampler and A/D converter with 12-bit resolution and a one-megaword buffer memory were used. The data were further retrieved into a PC, and estimates for auto- and cross-correlation moments up to the fifth order, together with coherence (⟨xy⟩/√(⟨x²⟩⟨y²⟩)), skewness (⟨x³⟩/(√⟨x²⟩)³), excess (⟨x⁴⟩/⟨x²⟩² − 3) and similar quantities, were calculated off-line. A five-mode operation of the detector was achieved, including the conventional counting rates and currents, in agreement with theory and the authors' previous results with analogue techniques. The signals were proportional to the neutron flux and reactor power in some flux ranges. The suppression of background noise is improved and the lower limit of the measurement range is extended as the order of the moment is increased, in agreement with theory; on the other hand, the statistical uncertainty is increased. At increasing flux levels it was statistically more difficult to obtain flux estimates based on the higher-order (≥3) moments.

  20. New scene change control scheme based on pseudoskipped picture

    NASA Astrophysics Data System (ADS)

    Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.

    1997-01-01

    A new scene change control scheme which improves video coding performance for sequences that contain many scene-changed pictures is proposed in this paper. Scene-changed pictures, except intra-coded pictures, usually need more bits than normal pictures in order to maintain constant picture quality. The major idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode the B picture located before a scene-changed picture like a skipped picture, and call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, which are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB in PSNR compared to the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with the MPEG-2 video syntax, and the picture repetition is not recognizable.

  1. Improved LSB matching steganography with histogram characters reserved

    NASA Astrophysics Data System (ADS)

    Chen, Zhihong; Liu, Wenyao

    2008-03-01

    This letter builds on research into the LSB (least significant bit, i.e., the last bit of a binary pixel value) matching steganographic method and on steganalytic methods that target the histograms of cover images, and proposes a modification to LSB matching. In LSB matching, if the LSB of the next cover pixel matches the next bit of secret data, nothing is done; otherwise, one is added to or subtracted from the cover pixel value at random. In our improved method, a steganographic information table is defined which records the changes that the embedded secret bits introduce. Through this table, whether to add or subtract one at the next pixel with the same value is decided dynamically, ensuring that the change to the cover image's histogram is minimized. The modified method therefore allows embedding the same payload as LSB matching but with improved steganographic security and less vulnerability to attacks. The experimental results of the new method show that the histograms maintain their attributes, such as peak values and overall trends, to an acceptable degree, and that the method performs better than LSB matching with respect to histogram distortion and resistance against existing steganalysis.
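    The baseline LSB matching scheme that this record modifies can be sketched as follows. The paper's histogram-aware choice of +1 versus -1 is replaced here by a random choice for simplicity, so this shows plain LSB matching, not the improved method.

```python
import random

def lsb_match_embed(pixels, bits, seed=1):
    """Plain LSB matching: when a pixel's LSB already equals the secret
    bit, leave it unchanged; otherwise add or subtract 1 at random
    (clamped to 0..255), which flips the LSB without the pairs-of-values
    artifact of straight LSB replacement."""
    rng = random.Random(seed)
    out = list(pixels)
    for i, b in enumerate(bits):
        if (out[i] & 1) != b:
            delta = rng.choice((-1, 1))
            if out[i] == 0:          # cannot go below 0
                delta = 1
            elif out[i] == 255:      # cannot go above 255
                delta = -1
            out[i] += delta
    return out
```

    Extraction is identical to plain LSB reading (pixel & 1); the improved method in the abstract differs only in steering each ±1 decision to keep the histogram's peaks and trends intact.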

  2. Reducing Bits in Electrodeposition Process of Commercial Vehicle - A Case Study

    NASA Astrophysics Data System (ADS)

    Rahim, Nabiilah Ab; Hamedon, Zamzuri; Mohd Turan, Faiz; Iskandar, Ismed

    2016-02-01

    The painting process is critical in commercial vehicle manufacturing for both protection and decoration. Good quality of the painted body is important to reduce repair costs and achieve customer satisfaction. To achieve good quality, it is important to reduce defects at the first step of the painting process, which is the electrodeposition process. The Pareto graph and the cause-and-effect diagram from the seven QC tools are utilized to reduce electrodeposition defects. The main defects in the electrodeposition process in this case study are bits, 55% of which are iron filings. The iron filings, which come from the metal assembly process at the body shop, are minimised by controlling the spot welding parameters, defect control and a standard body cleaning process. However, some iron filings still remain on the body and are carried over to the paint shop. The remaining iron filings settle inside the dipping tank and are removed by a filtration system and magnetic separation. The implementation of the filtration system and magnetic separation reduced bits by 27% and reduced sanding man-hours by 42%, with a total saving of RM38.00 per unit.

  3. Spatial distributions of core and intact glycerol dialkyl glycerol tetraethers (GDGTs) in the Columbia River basin, Washington: Insights into origin and implications for the BIT index

    NASA Astrophysics Data System (ADS)

    French, David W.; Huguet, Carme; Wakeham, Stuart; Turich, Courtney; Carlson, Laura T.; Ingalls, Anitra E.

    2015-04-01

    Branched and isoprenoid glycerol dialkyl glycerol tetraethers (GDGTs) are used to reconstruct carbon flow from terrestrial landscapes to the ocean in a proxy called the branched vs. isoprenoid tetraether index, or BIT Index. The index is based on analysis of core GDGTs from non-living material that originate from the cell membranes of bacteria living in soils and archaea living primarily in the marine environment. However, uncertainty in the identity and location of branched GDGT (BrGDGT) producing organisms and the likely production of isoprenoid GDGTs (IsoGDGTs) in terrestrial environments hinder interpretation of the BIT Index. Since BrGDGTs remain our only tool to study BrGDGT-producing organisms, it is particularly important to use the intact form of BrGDGTs, present in living cells, to infer organism distributions. In situ production within riverine, lacustrine and marine environments is currently thought to be possible, yet few measurements of intact BrGDGTs (I-BrGDGTs) are available to confirm this. Here we assess the spatial distribution of both core and intact GDGTs throughout the Columbia River basin and nearby areas in Washington and Oregon in order to elucidate the source environments for these lipids. The presence of I-BrGDGTs throughout the studied soils, rivers and estuaries suggests in situ production across the continuum from soil to marine environments. Likewise, intact crenarchaeol, the marine endmember isoprenoidal GDGT used in the BIT Index, was present in all samples. Widespread production of each GDGT class along terrestrial carbon transport paths likely alters the BIT Index along this continuum. The core-to-intact GDGT ratios and the weak correlation between I-GDGT-derived BIT values and carbon isotope signatures suggest a mixture of allochthonous and autochthonous sources of GDGTs in riverine and marine environments. 
Our findings highlight the need for further work into the provenance of GDGTs to improve the BIT index and other environmental proxies that rely on these compounds.

  4. Experimental investigation of distinguishable and non-distinguishable grayscales applicable in active-matrix organic light-emitting diodes for quality engineering

    NASA Astrophysics Data System (ADS)

    Yang, Henglong; Chang, Wen-Cheng; Lin, Yu-Hsuan; Chen, Ming-Hong

    2017-08-01

    The distinguishable and non-distinguishable 6-bit (64) grayscales of green and red organic light-emitting diodes (OLEDs) were experimentally investigated using a highly sensitive photometric instrument. The feasibility of combining an external detection system for quality engineering to compensate the grayscale loss based on preset grayscale tables was also investigated by SPICE simulation. The degradation loss of an OLED deeply affects image quality as grayscales become inaccurate. Distinguishable grayscales are defined as those whose brightness differences and corresponding current increments are resolvable by the instrument. OLED grayscales at 8-bit (256) resolution or higher may become non-distinguishable when current or voltage increments are of the same order as the noise level in the circuitry. Distinguishable grayscale tables for individual red, green, blue, and white colors can be experimentally established as preset references for quality engineering (QE), in which the degradation loss is compensated using the corresponding grayscale numbers in the preset table. The degradation loss of each OLED color is quantifiable by comparing voltage increments to those in the preset grayscale table, provided precise voltage increments are detectable during operation. The QE of AMOLED can be accomplished by applying updated grayscale tables. Our preliminary simulation result revealed that it is feasible to quantify degradation loss in terms of grayscale numbers by using external detector circuitry.
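    The table-lookup compensation described above can be sketched in a few lines. This is an illustrative scheme with a hypothetical linear preset table, not the paper's measured tables: degradation is quantified as a shift in grayscale numbers, and a correspondingly higher grayscale number is then driven to recover the intended brightness.

```python
def degradation_loss(preset, measured, level):
    """Quantify degradation as a shift in grayscale numbers: find the
    preset level whose voltage increment best matches the measured one."""
    closest = min(range(len(preset)), key=lambda k: abs(preset[k] - measured))
    return level - closest

def compensated_level(level, loss, max_level=63):
    """Drive a higher grayscale number to recover the intended brightness."""
    return min(level + loss, max_level)

# Hypothetical 6-bit preset table: voltage increments grow linearly with level
preset = [0.5 + 0.01 * k for k in range(64)]
measured = preset[37]                     # aged pixel at level 40 responds like level 37
loss = degradation_loss(preset, measured, 40)
print(loss, compensated_level(40, loss))  # -> 3 43
```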

  5. A low power, low noise Programmable Analog Front End (PAFE) for biopotential measurements.

    PubMed

    Adimulam, Mahesh Kumar; Divya, A; Tejaswi, K; Srinivas, M B

    2017-07-01

    A low power Programmable Analog Front End (PAFE) for biopotential measurements is presented in this paper. The PAFE circuit processes electrocardiogram (ECG), electromyography (EMG) and electroencephalogram (EEG) signals with high accuracy. It consists mainly of an improved-transconductance programmable gain instrumentation amplifier (PGIA), a programmable high pass filter (PHPF), and a second order low pass filter (SLPF). A 15-bit programmable 5-stage successive approximation analog-to-digital converter (SAR-ADC) is implemented for improved performance; its power consumption is reduced by the multi-stage architecture and by an OTA/comparator sharing technique between the stages. Power consumption is further reduced by operating the analog portion of the PAFE on a 0.5 V supply voltage and the digital portion on a 0.3 V supply voltage generated internally through a voltage regulator. The proposed low power PAFE has been fabricated in a 180 nm standard CMOS process. The performance parameters of the PAFE in 15-bit mode are a gain of 31-70 dB, input-referred noise of 1.15 μVrms, CMRR of 110 dB, PSRR of 104 dB, and signal-to-noise-and-distortion ratio (SNDR) of 83.5 dB. The design consumes 1.1 μW from the 0.5 V supply and occupies a core silicon area of 1.2 mm².
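    The successive-approximation principle behind the SAR-ADC can be illustrated independently of the paper's 5-stage, OTA-sharing implementation. The sketch below is the generic binary search over DAC codes, one bit per step from the MSB down, assuming an ideal DAC and comparator:

```python
def sar_adc(sample, vref=0.5, bits=15):
    """Successive-approximation conversion: binary-search the input
    voltage against an ideal DAC, deciding one bit per step (MSB first)."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)           # tentatively set this bit
        dac = vref * trial / (1 << bits)  # ideal DAC output for the trial code
        if sample >= dac:                 # comparator decision
            code = trial                  # keep the bit, else clear it
    return code

# Convert a sample against a 0.5 V reference (matching the 0.5 V analog supply)
code = sar_adc(0.30, vref=0.5, bits=15)
print(code, code * 0.5 / (1 << 15))  # digital code and its reconstructed voltage
```

The conversion takes exactly one comparator decision per bit, which is why a 15-bit result needs only 15 steps rather than a search over all 32768 codes.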

  6. CD uniformity control for thick resist process

    NASA Astrophysics Data System (ADS)

    Huang, Chi-hao; Liu, Yu-Lin; Wang, Weihung; Yang, Mars; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2017-03-01

    In order to meet the increasing storage capacity demand and reduce the bit cost of NAND flash memories, 3D stacked flash cell arrays have been proposed. In constructing 3D NAND flash memories, a higher bit count per unit area is achieved by increasing the number of stacked layers. Thus the so-called "staircase" patterning, which forms the electrical connections between memory cells and word lines, has become one of the most critical processes in 3D memory manufacture. Providing controllable critical dimension (CD) with good uniformity in thick photo-resist has also been of particular concern for staircase patterning. CD uniformity control has been widely investigated for relatively thin resists at resolution-limit dimensions, but much less so for thick resists at wider dimensions. This study explores CD uniformity control for thick photo-resist processing. Several critical parameters, including exposure focus, exposure dose, baking condition, pattern size and development recipe, were found to strongly correlate with the thick photo-resist profile and accordingly affect CD uniformity control. To minimize the within-wafer CD variation, a slightly tapered resist profile is proposed, obtained by carefully tailoring the exposure focus and dose together with an optimized development recipe. Great improvements in DCD (ADI CD) and ECD (AEI CD) uniformity as well as line edge roughness were achieved through the optimization of the photo-resist profile.

  7. Acquisition of shape information in working memory, as a function of viewing time and number of consecutive images: evidence for a succession of discrete storage classes.

    PubMed

    Ninio, J

    1998-07-01

    The capacity of visual working memory was investigated using abstract images that were slightly distorted NxN (generally N=8) square lattices of black or white randomly selected elements. After viewing an image, or a sequence of images, the subjects viewed pairs of images containing the test image and a distractor image derived from the first one by changing the black or white value of q randomly selected elements. The number q was adjusted in each experiment to the difficulty of the task and the abilities of the subject. The fraction of recognition errors, given q and N, was used to evaluate the number M of bits memorized by the subject. For untrained subjects, this number M varied in a biphasic manner as a function of the time t of presentation of the test image: it was on average 13 bits for 1 s, 16 bits for 2 to 5 s, and 20 bits for 8 s. The slow pace of acquisition, from 1 to 8 s, seems due to encoding difficulties, and not to channel capacity limitations. Beyond 8 s, M(t), accurately determined for one subject, followed a square root law, in agreement with 19th century observations on the memorization of lists of digits. When two consecutive 8x8 images were viewed and tested in the same order, the number of memorized bits was downshifted by a nearly constant amount, independent of t and equal on average to 6-7 bits. Across the subjects, the shift was independent of M. When two consecutive test images were related, the recognition errors decreased for both images, whether the testing was performed in the presentation or the reverse order. Studies involving three subjects indicate that, when viewing m consecutive images, the average amount of information captured per image varies with m in a stepwise fashion. The first two step boundaries were around m=3 and m=9-12.
The data are compatible with a model of organization of working memory in several successive layers containing increasing numbers of units, the more remote a unit, the lower the rate at which it may acquire encoded information. Copyright 1998 Elsevier Science B.V.

  8. Study on the Effect of Diamond Grain Size on Wear of Polycrystalline Diamond Compact Cutter

    NASA Astrophysics Data System (ADS)

    Abdul-Rani, A. M.; Che Sidid, Adib Akmal Bin; Adzis, Azri Hamim Ab

    2018-03-01

    Drilling is one of the most crucial operations in the oil and gas industry, as it confirms the presence of oil and gas underground. The Polycrystalline Diamond Compact (PDC) bit is a type of bit which is gaining popularity due to its high Rate of Penetration (ROP). However, PDC bits can wear off easily, especially when drilling hard rock. The purpose of this study is to identify the relationship between diamond grain size and the wear rate of the PDC cutter, using a simulation-based study with FEA software (ABAQUS). The wear rates of PDC cutters with different diamond grain sizes were calculated from simulated cuts against granite. The result of this study shows that the smaller the diamond grain size, the higher the wear resistance of the PDC cutter.

  9. A Parametric Study for the Design of an Optimized Ultrasonic Percussive Planetary Drill Tool.

    PubMed

    Li, Xuan; Harkness, Patrick; Worrall, Kevin; Timoney, Ryan; Lucas, Margaret

    2017-03-01

    Traditional rotary drilling for planetary rock sampling, in situ analysis, and sample return is challenging because the axial force and holding torque requirements are not necessarily compatible with lightweight spacecraft architectures in low-gravity environments. This paper seeks to optimize an ultrasonic percussive drill tool to achieve rock penetration with lower reacted force requirements, with a strategic view toward building an ultrasonic planetary core drill (UPCD) device. The UPCD is a descendant of the ultrasonic/sonic driller/corer technique. In these concepts, a transducer and horn (typically resonant at around 20 kHz) are used to excite a toroidal free mass that oscillates chaotically between the horn tip and drill base at lower frequencies (generally between 10 Hz and 1 kHz). This creates a series of stress pulses that are transferred through the drill bit to the rock surface; where the stress at the drill-bit tip/rock interface exceeds the compressive strength of the rock, it causes fractures that result in fragmentation of the rock. This facilitates augering and downward progress. In order to ensure that the drill-bit tip delivers the greatest effective impulse (the time integral of the drill-bit tip/rock pressure curve exceeding the strength of the rock), parameters such as the spring rates, the mass of the free mass, the drill bit and the transducer have been varied and compared in both computer simulation and practical experiment. The most interesting findings, and those of particular relevance to deep drilling, indicate that increasing the mass of the drill bit has a limited (or even positive) influence on the rate of effective impulse delivered.

  10. Measurement-Device-Independent Quantum Key Distribution over 200 km

    NASA Astrophysics Data System (ADS)

    Tang, Yan-Lin; Yin, Hua-Lei; Chen, Si-Jing; Liu, Yang; Zhang, Wei-Jun; Jiang, Xiao; Zhang, Lu; Wang, Jian; You, Li-Xing; Guan, Jian-Yu; Yang, Dong-Xu; Wang, Zhen; Liang, Hao; Zhang, Zhen; Zhou, Nan; Ma, Xiongfeng; Chen, Teng-Yun; Zhang, Qiang; Pan, Jian-Wei

    2014-11-01

    The measurement-device-independent quantum key distribution (MDIQKD) protocol is immune to all attacks on detection and guarantees information-theoretic security even with imperfect single-photon detectors. Recently, several proof-of-principle demonstrations of MDIQKD have been achieved. Those experiments, although novel, were implemented over limited distances with key rates below 0.1 bit/s. Here, by developing a fully automatic and highly stable system with a 75 MHz clock rate, together with superconducting nanowire single-photon detectors with detection efficiencies of more than 40%, we extend the secure transmission distance of MDIQKD to 200 km and achieve a secure key rate 3 orders of magnitude higher. These results pave the way towards a quantum network with measurement-device-independent security.

  11. Mars Science Laboratory Drill

    NASA Technical Reports Server (NTRS)

    Okon, Avi B.; Brown, Kyle M.; McGrath, Paul L.; Klein, Kerry J.; Cady, Ian W.; Lin, Justin Y.; Ramirez, Frank E.; Haberland, Matt

    2012-01-01

    This drill (see Figure 1) is the primary sample acquisition element of the Mars Science Laboratory (MSL) that collects powdered samples from various types of rock (from clays to massive basalts) at depths up to 50 mm below the surface. A rotary-percussive sample acquisition device was developed with an emphasis on toughness and robustness to handle the harsh environment on Mars. It is the first rover-based sample acquisition device to be flight-qualified (see Figure 2). This drill features an autonomous tool change-out on a mobile robot, and novel voice-coil-based percussion. The drill comprises seven subelements. Starting at the end of the drill, there is a bit assembly that cuts the rock and collects the sample. Supporting the bit is a subassembly comprising a chuck mechanism to engage and release the new and worn bits, respectively, and a spindle mechanism to rotate the bit. Just aft of that is a percussion mechanism, which generates hammer blows to break the rock and create the dynamic environment used to flow the powdered sample. These components are mounted to a translation mechanism, which provides linear motion and senses weight-on-bit with a force sensor. There is a passive-contact sensor/stabilizer mechanism that secures the drill's position on the rock surface, and flex harness management hardware to provide the power and signals to the translating components. The drill housing serves as the primary structure of the turret, to which the additional tools and instruments are attached. The drill bit assembly (DBA) is a passive device that is rotated and hammered in order to cut rock (i.e. science targets) and collect the cuttings (powder) in a sample chamber until ready for transfer to the CHIMRA (Collection and Handling for Interior Martian Rock Analysis). The DBA consists of a 5/8-in. (1.6 cm) commercial hammer drill bit whose shank has been turned down and machined with deep flutes designed for aggressive cutting removal.
Surrounding the shank of the bit is a thick-walled maraging steel collection tube allowing the powdered sample to be augered up the hole into the sample chamber. For robustness, the wall thickness of the DBA was maximized while still ensuring effective sample collection. There are four recesses in the bit tube that are used to retain the fresh bits in their bit box. The rotating bit is supported by a back-to-back duplex bearing pair within a housing that is connected to the outer DBA housing by two titanium diaphragms. The only bearings on the drill in the sample flow are protected by a spring-energized seal, and an integrated shield that diverts the ingested powdered sample from the moving interface. The DBA diaphragms provide radial constraint of the rotating bit and form the sample chambers. Between the diaphragms there is a sample exit tube from which the sample is transferred to the CHIMRA. To ensure that the entire collected sample is retained, no matter the orientation of the drill with respect to gravity during sampling, the pass-through from the forward to the aft chamber resides opposite to the exit tube.

  12. On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A

    NASA Astrophysics Data System (ADS)

    Blake, J. B.; Mandel, R.

    1987-02-01

    The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module used in the subsystem containing the RAMs consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis, all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors occurring within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of a bit error within the page, the entire contents of the memory are dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question are verified in order to determine whether the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
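    The two-level scheme described above (per-page checksums for detection, a full dump-compare against the load file for localization) can be sketched as follows. The additive mod-256 checksum is an assumption for illustration; the abstract does not name the checksum algorithm, only that it catches all single-bit errors within a page, which any such sum does, since a single flipped bit changes the sum by a nonzero power of two modulo 256.

```python
def page_checksum(page):
    """Simple additive checksum over a 256-byte page (mod 256)."""
    return sum(page) % 256

def find_bit_errors(memory, load_file):
    """Dump-compare step: locate the exact (byte address, bit) positions
    that differ between current memory contents and the load file."""
    errors = []
    for addr, (cur, ref) in enumerate(zip(memory, load_file)):
        diff = cur ^ ref
        for bit in range(8):
            if diff & (1 << bit):
                errors.append((addr, bit))
    return errors

# Simulate an SEU flipping one bit, flag it by checksum, locate it by dump-compare
load_file = bytearray(range(256))        # one 256-byte page
memory = bytearray(load_file)
memory[42] ^= 1 << 3                     # single-event upset: flip bit 3 of byte 42
assert page_checksum(memory) != page_checksum(load_file)
print(find_bit_errors(memory, load_file))  # -> [(42, 3)]
```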

  13. Compact battery-less information terminal (CoBIT) for location-based support systems

    NASA Astrophysics Data System (ADS)

    Nishimura, Takuichi; Itoh, Hideo; Yamamoto, Yoshinobu; Nakashima, Hideyuki

    2002-06-01

    The goal of a ubiquitous computing environment is to support users in getting necessary information and services in a situation-dependent form. We therefore propose a location-based information support system using a Compact Battery-less Information Terminal (CoBIT). A CoBIT can communicate with the environmental system and with the user using only energy supplied by the environment. It has a solar cell and receives modulated light from an environmental optical beam transmitter. The current from the solar cell is fed directly (or through a passive circuit) into an earphone, which generates sound for the user. The current can also be used to produce vibration, an LED signal or electrical stimulus on the skin. A CoBIT is about 2 cm in diameter and 3 cm in length, and can conveniently be hung on the ear. Its cost would be only about 1 dollar if mass-produced. The CoBIT also has a sheet-type corner reflector, which reflects an optical beam back in the direction of the light source. The environmental system can therefore easily detect the terminal position and direction, as well as some simple signs from the user, using multiple cameras with infra-red LEDs. The system identifies a sign by the modulated pattern of the reflected light, which the user produces by occluding the reflector by hand. The environmental system also recognizes other objects using other sensors and displays video information on a nearby monitor in order to realize situated support.

  14. Pattern transfer from nanoparticle arrays

    NASA Astrophysics Data System (ADS)

    Hogg, Charles R., III

    This project contributes to the long-term extensibility of bit-patterned media (BPM) by removing obstacles to using a new and smaller class of self-assembling materials: surfactant-coated nanoparticles. Self-assembly rapidly produces regular patterns of small features over large areas. If these patterns can be used as templates for magnetic bits, the resulting media would have both high capacity and high bit density. The data storage industry has identified block copolymers (BCP) as the self-assembling technology for the first generation of BPM. Arrays of surfactant-coated nanoparticles have long shown higher feature densities than BCP, but their patterns could not previously be transferred into underlying substrates. I identify one key obstacle that has prevented this pattern transfer: the particles undergo a disordering transition during etching which I have called "cracking". I compare several approaches to measuring the degree of cracking, and I develop two novel techniques for preventing it and allowing pattern transfer. I demonstrate two different kinds of pattern transfer: positive (dots) and negative (antidots). To make dots, I etch the substrate between the particles with a directional CF4-based reactive ion etch (RIE). I find the ultrasmall gaps (just 2 nm) cause a tremendous slowdown in the etch rate, by a factor of 10 or more, an observation of fundamental significance for any pattern transfer at ultrahigh bit densities. Antidots are made by depositing material in the interstices, then removing the particles to leave behind a contiguous inorganic lattice. This lattice can itself be used as an etch mask for CF4-based RIE, in order to increase the height contrast. The antidot process promises great generality in choice of materials, both for the antidot lattice and the particles themselves; here, I present lattices of Al and Cr, templated from arrays of 13.7 nm-diameter Fe3O4 or 30 nm-diameter MnO nanoparticles.
The fidelity of transfer is also noticeably better for antidots than for dots, making antidots the more promising technique for industrial applications. The smallest period for which I have shown pattern transfer (15.7 nm) is comparable to (but slightly smaller than) the smallest period currently shown for pattern transfer from block copolymers (17 nm); hence, my results compare favorably with the state of the art. Ultimately, by demonstrating that surfactant-coated nanoparticles can be used as pattern masks, this work increases their viability as an option to continue the exponential growth of bit density in magnetic storage media.

  15. Laboratory Equipment for Investigation of Coring Under Mars-like Conditions

    NASA Astrophysics Data System (ADS)

    Zacny, K.; Cooper, G.

    2004-12-01

    To develop a suitable drill bit and set of operating conditions for Mars sample coring applications, it is essential to run tests under conditions that match those of the mission. The goal of the laboratory test program was to determine the drilling performance of diamond-impregnated bits under simulated Martian conditions, particularly those of low pressure and low temperature in a carbon dioxide atmosphere. For this purpose, drilling tests were performed in a vacuum chamber kept at a pressure of 5 torr. Prior to drilling, a rock, soil or clay sample was cooled down to minus 80 degrees Celsius (Zacny et al., 2004). Thus, all Martian conditions except the low gravity were simulated in the controlled environment. Input drilling parameters of interest included the weight on bit and rotational speed. These two independent variables were controlled from a PC station. The dependent variables included the bit reaction torque, the depth of the bit inside the drilled hole, and the temperatures at various positions inside the drilled sample, in the center of the core as it was being cut, and at the bit itself. These were acquired every second by a data acquisition system. Additional information such as the rate of penetration and the drill power was calculated after the test was completed. The weights of the rock and the bit prior to and after the test were measured to aid in evaluating the bit performance. In addition, the water saturation of the rock was measured prior to the test. Finally, the bit was examined under a Scanning Electron Microscope and a Stereo Optical Microscope. The extent of the bit wear and its salient features were captured photographically. The results revealed that drilling or coring under Martian conditions in a water-saturated rock is different in many respects from drilling on Earth. This is mainly because the Martian atmospheric pressure is in the vicinity of the pressure at the triple point of water.
Thus ice, heated by contact with the rotating bit, sublimed and released water vapor. The volumetric expansion of ice turning into vapor was over 150 000 times. This continuously generated volume of gas effectively cleared the freeze-dried rock cuttings from the bottom of the hole. In addition, the subliming ice provided a powerful cooling effect that kept the bit cold and preserved the core in its original state. Keeping the rock core below freezing also drastically reduced the chances of cross contamination. To keep the bit cool in near-vacuum conditions where convective cooling is poor, some intermittent stops would have to be made. Under virtually the same drilling conditions, coring under Martian low temperature and pressure conditions consumed only half the power while doubling the rate of penetration as compared to drilling under Earth atmospheric conditions. However, the rate of bit wear was much higher under Martian conditions (Zacny and Cooper, 2004). References: Zacny, K. A., M. C. Quayle, and G. A. Cooper (2004), Laboratory drilling under Martian conditions yields unexpected results, J. Geophys. Res., 109, E07S16, doi:10.1029/2003JE002203. Zacny, K. A., and G. A. Cooper (2004), Investigation of diamond-impregnated drill bit wear while drilling under Earth and Mars conditions, J. Geophys. Res., 109, E07S10, doi:10.1029/2003JE002204. Acknowledgments: This research was supported by the NASA Astrobiology, Science and Technology Instrument Development (ASTID) program.

  16. Computer Series, 17: Bits and Pieces, 5.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1981-01-01

    Contains short descriptions of computer programs or hardware that simulate laboratory instruments or results of kinetics experiments, including ones that include experiment error, numerical simulation, first-order kinetic mechanisms, a game for decisionmaking, and simulated mass spectrophotometers. (CS)

  17. Various Attractors, Coexisting Attractors and Antimonotonicity in a Simple Fourth-Order Memristive Twin-T Oscillator

    NASA Astrophysics Data System (ADS)

    Zhou, Ling; Wang, Chunhua; Zhang, Xin; Yao, Wei

    By replacing the resistor in a Twin-T network with a generalized flux-controlled memristor, this paper proposes a simple fourth-order memristive Twin-T oscillator. Rich dynamical behaviors can be observed in the dynamical system. The most striking feature is that this system has various periodic orbits and various chaotic attractors generated by adjusting parameter b. At the same time, coexisting attractors and antimonotonicity are also detected (especially, two full Feigenbaum remerging trees in series are observed in such autonomous chaotic systems). Their dynamical features are analyzed by phase portraits, Lyapunov exponents, bifurcation diagrams and basin of attraction. Moreover, hardware experiments on a breadboard are carried out. Experimental measurements are in accordance with the simulation results. Finally, a multi-channel random bit generator is designed for encryption applications. Numerical results illustrate the usefulness of the random bit generator.
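    As an illustration of how a chaotic signal can be turned into random bits, the sketch below thresholds a logistic-map trajectory. This stands in for the paper's memristive Twin-T oscillator (an analog circuit, not this map); the point it illustrates is that sensitive dependence on initial conditions decorrelates streams derived from nearly identical starting states, which is what a multi-channel chaotic bit generator relies on.

```python
def logistic_bits(x0, n, r=3.99):
    """Threshold a chaotic logistic-map trajectory to produce a bit stream."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # one chaotic iteration
        out.append(1 if x >= 0.5 else 0)
    return out

# Two "channels": the same map seeded with slightly different initial states
# diverges exponentially, giving decorrelated bit streams.
a = logistic_bits(0.400000, 64)
b = logistic_bits(0.400001, 64)
print(sum(x != y for x, y in zip(a, b)), "of 64 bits differ")
```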

  18. Design, Simulation and Characteristics Research of the Interface Circuit based on nano-polysilicon thin films pressure sensor

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaosong; Zhao, Xiaofeng; Yin, Liang

    2018-03-01

    This paper presents an interface circuit for a nano-polysilicon thin-film pressure sensor. The interface circuit consists of an instrumentation amplifier and an analog-to-digital converter (ADC). The instrumentation amplifier, with a high common mode rejection ratio (CMRR), is implemented with a three-stage current feedback structure. To satisfy the high precision requirements of the pressure sensor measurement system, a 1/f noise corner of 26.5 mHz is achieved through chopping at a noise density of 38.2 nV/sqrt(Hz). The ripple introduced by chopping is suppressed by a continuous-time ripple reduction loop (RRL), which keeps the output ripple level below the noise. The ADC achieves 16 bits of resolution by adopting a sigma-delta modulator with a fourth-order single-bit structure and a digital decimation filter, finally realizing a high precision integrated pressure sensor interface circuit.

  19. The elimination of influence of disturbing bodies' coordinates and derivatives discontinuity on the accuracy of asteroid motion simulation

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.; Votchel, I. A.

    2013-12-01

    The problem of asteroid motion simulation has been considered. At present this simulation is performed by means of numerical integration, taking into account the perturbations from the planets and the Moon using planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain coefficients of Chebyshev polynomials for a large number of equal interpolation intervals. However, all the ephemerides have been constructed to preserve, at the junctions of adjacent intervals, continuity only of the coordinates and their first derivatives (and only in 16-digit decimal format, corresponding to 64-bit floating-point numbers). The second and higher order derivatives have breaks at these junctions. These breaks, when they fall within an integration step, decrease the accuracy of the numerical integration. In 34-digit format (128-bit floating-point numbers), the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the interpolation intervals' junctions. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order are continuous at the junctions. The smoothing algorithm is based on conditional least-squares fitting of the coefficients of the Chebyshev polynomials; the conditions are equalities of the coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to smooth the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions do not lie within a step and always coincide with its end. This way can be applied only at 16-digit decimal precision, because it assumes continuity of the planets' coordinates and their first derivatives.
Both ways were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14, by means of 15th- and 31st-order Everhart methods at 16- and 34-digit decimal precision respectively. The DE430 ephemerides (in original and smoothed form) were used for the calculation of the perturbations. The results of the research indicate that the integration step correction increases numerical integration accuracy by 3-4 orders of magnitude. If, in addition, the original ephemerides are replaced by the smoothed ones, the accuracy increases by approximately 10 orders of magnitude.
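    The junction breaks described above can be made concrete. DE-series ephemerides store each interpolation interval as a Chebyshev series over t in [-1, 1]; the sketch below evaluates such a series with the Clenshaw recurrence and measures the value and first-derivative jumps where one interval ends (t = +1) and the next begins (t = -1). The coefficient lists are toy values, not DE430 data: they are chosen so the coordinate is continuous across the junction while its derivative is not.

```python
def chebyshev_eval(coeffs, t):
    """Evaluate sum c_k T_k(t) for t in [-1, 1] via the Clenshaw recurrence,
    the form in which DE-series ephemerides store each interval."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]

def chebyshev_deriv(coeffs, t, h=1e-6):
    """First derivative of the series (central finite difference)."""
    return (chebyshev_eval(coeffs, t + h) - chebyshev_eval(coeffs, t - h)) / (2 * h)

def junction_breaks(left, right):
    """Value and first-derivative jumps at the junction between the
    left interval (ending at t = +1) and the right one (starting at t = -1)."""
    dv = chebyshev_eval(right, -1.0) - chebyshev_eval(left, 1.0)
    dd = chebyshev_deriv(right, -1.0) - chebyshev_deriv(left, 1.0)
    return dv, dd

# Toy coefficients: the coordinate matches at the junction, the derivative does not
left, right = [1.0, 0.5, 0.25], [1.0, -0.5, 0.25]
dv, dd = junction_breaks(left, right)
print(f"value jump {dv:.2e}, derivative jump {dd:.3f}")
```

A break like `dd` falling inside an integration step is exactly what degrades a high-order integrator such as the Everhart method, since the method assumes smooth right-hand sides across the step.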

  20. Using single buffers and data reorganization to implement a multi-megasample fast Fourier transform

    NASA Technical Reports Server (NTRS)

    Brown, R. D.

    1992-01-01

    Data ordering in large fast Fourier transforms (FFTs) is both conceptually and implementationally difficult. Described here is a method of visualizing data orderings as vectors of address bits, which enables the engineer to use more efficient data orderings and to replace double-buffer memory designs with single buffers. Also detailed are the difficulties, and the algorithmic solutions, involved in FFT lengths up to 4 megasamples (Msamples) and sample rates up to 80 MHz.
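    The best-known data ordering expressed as a permutation of address bits is the bit-reversed ordering of a radix-2 FFT: an in-place decimation-in-time transform consumes its input in bit-reversed address order to produce natural-order output. A minimal sketch of that address-bit view (illustrative only; the 4-Msample designs in the paper involve further orderings across memory banks):

```python
def bit_reverse(index, bits):
    """Reverse the low `bits` address bits of `index`, i.e. apply the
    address-bit-vector permutation (b_{n-1}..b_0) -> (b_0..b_{n-1})."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (index & 1)
        index >>= 1
    return out

def bit_reversed_order(n):
    """Input ordering for an in-place radix-2 DIT FFT of length n (a power of 2)."""
    bits = n.bit_length() - 1
    return [bit_reverse(i, bits) for i in range(n)]

print(bit_reversed_order(8))  # -> [0, 4, 2, 6, 1, 5, 3, 7]
```

Because the permutation is its own inverse, applying it twice restores natural order, which is what lets a single buffer be reused between FFT stages instead of ping-ponging between two.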

  1. Magnetic printing characteristics using master disk with perpendicular magnetic anisotropy

    NASA Astrophysics Data System (ADS)

    Fujiwara, Naoto; Nishida, Yoichi; Ishioka, Toshihide; Sugita, Ryuji; Yasunaga, Tadashi

    With the increase in recording density and capacity of hard-disk drives (HDD), a high-speed, high-precision and low-cost servo writing method has become an issue in the HDD industry. Magnetic printing was proposed as the ultimate solution for this issue [1-3]. There are two types of magnetic printing methods: 'Bit Printing (BP)' and 'Edge Printing (EP)'. The BP method is conducted by applying an external field whose direction is perpendicular to the plane of both the master disk (Master) and the perpendicular magnetic recording (PMR) media (Slave). The EP method, on the other hand, is conducted by applying an external field along the down-track direction of both master and slave. In BP, for bit lengths shorter than 100 nm, the SNR of the perpendicular anisotropic master was higher than that of the isotropic master. The SNR of EP was also demonstrated for bit lengths shorter than 50 nm.

  2. A microprocessor based on a two-dimensional semiconductor.

    PubMed

    Wachter, Stefan; Polyushkin, Dmitry K; Bethge, Ole; Mueller, Thomas

    2017-04-11

    The advent of microcomputers in the 1970s has dramatically changed our society. Since then, microprocessors have been made almost exclusively from silicon, but the ever-increasing demand for higher integration density and speed, lower power consumption and better integrability with everyday goods has prompted the search for alternatives. Germanium and III-V compound semiconductors are considered promising candidates for future high-performance processor generations, and chips based on thin-film plastic technology or carbon nanotubes could allow for embedding electronic intelligence into arbitrary objects for the Internet-of-Things. Here, we present a 1-bit implementation of a microprocessor using a two-dimensional semiconductor, molybdenum disulfide. The device can execute user-defined programs stored in an external memory, perform logical operations and communicate with its periphery. Our 1-bit design is readily scalable to multi-bit data. The device consists of 115 transistors and constitutes the most complex circuitry so far made from a two-dimensional material.

  3. A microprocessor based on a two-dimensional semiconductor

    NASA Astrophysics Data System (ADS)

    Wachter, Stefan; Polyushkin, Dmitry K.; Bethge, Ole; Mueller, Thomas

    2017-04-01

    The advent of microcomputers in the 1970s has dramatically changed our society. Since then, microprocessors have been made almost exclusively from silicon, but the ever-increasing demand for higher integration density and speed, lower power consumption and better integrability with everyday goods has prompted the search for alternatives. Germanium and III-V compound semiconductors are considered promising candidates for future high-performance processor generations, and chips based on thin-film plastic technology or carbon nanotubes could allow for embedding electronic intelligence into arbitrary objects for the Internet of Things. Here, we present a 1-bit implementation of a microprocessor using a two-dimensional semiconductor, molybdenum disulfide. The device can execute user-defined programs stored in an external memory, perform logical operations and communicate with its periphery. Our 1-bit design is readily scalable to multi-bit data. The device consists of 115 transistors and constitutes the most complex circuitry so far made from a two-dimensional material.

  4. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  5. NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4

    DTIC Science & Technology

    2012-09-26

    defining a raster file format: <RasterFileFormat> <FormatName>TIFF</FormatName> <Order>BIP</Order> <DataType>8-BIT_UNSIGNED</DataType> ... interleaved by line (BIL); Band interleaved by pixel (BIP). element RasterFileFormatType/DataType diagram type restriction of xsd:string facets

  6. Low-noise delays from dynamic Brillouin gratings based on perfect Golomb coding of pump waves.

    PubMed

    Antman, Yair; Levanon, Nadav; Zadok, Avi

    2012-12-15

    A method for long variable all-optical delay is proposed and simulated, based on reflections from localized and stationary dynamic Brillouin gratings (DBGs). Inspired by radar methods, the DBGs are inscribed by two pumps that are co-modulated by perfect Golomb codes, which reduce the off-peak reflectivity. Compared with random-bit-sequence coding, Golomb codes improve the optical signal-to-noise ratio (OSNR) of delayed waveforms by an order of magnitude. Simulations suggest a delay of 5 Gb/s data by 9 ns, or 45 bit durations, with an OSNR of 13 dB.
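
    The key property exploited above is a code whose autocorrelation has a strong peak and low off-peak sidelobes. The paper's "perfect Golomb codes" are specific radar-derived phase sequences; as a loosely related, minimal illustration of the same distinct-differences idea, the sketch below computes the autocorrelation of an on-off sequence built from an optimal Golomb ruler, whose off-peak sidelobes never exceed 1:

    ```python
    import numpy as np

    # An optimal Golomb ruler of order 5: all pairwise differences of the
    # marks are distinct, so each lag can contribute at most one coincidence.
    marks = [0, 1, 4, 9, 11]
    seq = np.zeros(12)
    seq[marks] = 1.0

    # Aperiodic autocorrelation: peak equals the number of marks (5),
    # every off-peak sidelobe is 0 or 1 - the low off-peak response
    # that motivates coded pump modulation in the abstract.
    acf = np.correlate(seq, seq, mode="full")
    print(acf.astype(int))
    ```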

  7. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input output (I/O) numerical processor capable of performing floating-point, multiple precision and other arithmetic functions at execution times which are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16 bit microprocessor, a numerical coprocessor with eight 80 bit registers running at a 5 MHz clock rate, 18K random access memory (RAM) and 16K electrically programmable read only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high order languages such as FORTRAN and PL/M-86.

  8. Holographic shell model: Stack data structure inside black holes?

    NASA Astrophysics Data System (ADS)

    Davidson, Aharon

    2014-03-01

    Rather than tiling the black hole horizon by Planck area patches, we suggest that bits of information inhabit, universally and holographically, the entire black core interior, one bit per light sheet unit interval of order a Planck area difference. The number of distinguishable (tagged by a binary code) configurations, counted within the context of a discrete holographic shell model, is given by the Catalan series. The area entropy formula is recovered, including Cardy's universal logarithmic correction, and the equipartition of mass per degree of freedom is proven. The black hole information storage resembles, in the count procedure, the so-called stack data structure.
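
    The Catalan-series count mentioned above is easy to probe numerically. The sketch below computes C_n = (2n choose n)/(n+1) and, using only the standard Stirling expansion (nothing specific to the paper), shows the leading linear term plus a logarithmic correction in ln C_n, mirroring the area term plus universal log correction in the entropy:

    ```python
    from math import comb, log

    def catalan(n):
        # n-th Catalan number: C_n = (2n choose n) / (n + 1)
        return comb(2 * n, n) // (n + 1)

    # First few terms of the Catalan series
    print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]

    # For large n, ln C_n ~ 2n ln 2 - (3/2) ln n + O(1): a leading linear
    # ("area") term plus a logarithmic correction.
    n = 10_000
    exact = log(catalan(n))
    approx = 2 * n * log(2) - 1.5 * log(n)
    print(exact - approx)  # settles near a constant (about -ln sqrt(pi))
    ```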

  9. Improved classical and quantum random access codes

    NASA Astrophysics Data System (ADS)

    Liabøtrø, O.

    2017-05-01

    A (quantum) random access code ((Q)RAC) is a scheme that encodes n bits into m (qu)bits such that any of the n bits can be recovered with a worst-case probability p > 1/2. We generalize (Q)RACs to a scheme encoding n d-levels into m (quantum) d-levels such that any d-level can be recovered with the probability for every wrong outcome value being less than 1/d. We construct explicit solutions for all n ≤ (d^(2m) - 1)/(d - 1). For d = 2, the constructions coincide with those previously known. We show that the (Q)RACs are d-parity oblivious, generalizing ordinary parity obliviousness. We further investigate optimization of the success probabilities. For d = 2, we use the measure operators of the previously best-known solutions, but improve the encoding states to give a higher success probability. We conjecture that for maximal (n = 4^m - 1, m, p) QRACs, p = (1/2){1 + [(√3 + 1)^m - 1]^(-1)} is possible, and show that it is an upper bound for the measure operators that we use. We then compare (n, m, p_q) QRACs with classical (n, 2m, p_c) RACs. We can always find p_q ≥ p_c, but the classical code gives information about every input bit simultaneously, while the QRAC only gives information about a subset. For several different (n, 2, p) QRACs, we see the same trade-off, as the best p values are obtained when the number of bits that can be obtained simultaneously is as small as possible. The trade-off is connected to parity obliviousness, since high-certainty information about several bits can be used to calculate probabilities for parities of subsets.
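
    The conjectured success probability for maximal QRACs can be evaluated directly; a small sketch (the function name is ours, not the paper's), with m = 1 recovering the well-known (3, 1, p) QRAC value as a consistency check:

    ```python
    from math import sqrt

    def qrac_p(m):
        # Conjectured success probability for a maximal (4^m - 1, m, p) QRAC:
        # p = 1/2 * (1 + 1 / ((sqrt(3) + 1)^m - 1)), as stated in the abstract.
        return 0.5 * (1 + 1 / ((sqrt(3) + 1) ** m - 1))

    # m = 1 recovers the known (3, 1, p) QRAC value p = (1 + 1/sqrt(3))/2.
    print(qrac_p(1))
    print(0.5 * (1 + 1 / sqrt(3)))

    # p stays above 1/2 but decays toward it as n = 4^m - 1 grows.
    for m in (2, 3, 4):
        print(m, 4**m - 1, qrac_p(m))
    ```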

  10. Digital watermarking for color images in hue-saturation-value color space

    NASA Astrophysics Data System (ADS)

    Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.

    2014-05-01

    This paper proposes a new watermarking scheme for color images in which all pixels of the image are used for embedding watermark bits, in order to achieve the highest embedding capacity. For watermark embedding, the S component in the hue-saturation-value (HSV) color space carries the watermark bits, while the V component is used, in accordance with a human visual system model, to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly based on a 3×3 spatial-domain Wiener filter. The efficiency of the proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, both under no attacks and against various types of attacks, was superior to previously existing watermarking schemes.
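
    A minimal sketch of the general idea on synthetic data: watermark bits are added to a (here artificially smooth) S plane with a strength scaled by V, and extracted blindly by comparing the marked plane against a local estimate of the original S. A simple 3×3 local mean stands in for the abstract's 3×3 Wiener filter; all numbers are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    x = np.linspace(0.0, 1.0, n)

    # Hypothetical smooth S (saturation) plane and a V (value) plane in [0, 1].
    S = 0.5 + 0.2 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
    V = rng.random((n, n))

    bits = rng.integers(0, 2, (n, n))   # one watermark bit per pixel
    strength = 0.04 * (0.5 + V)         # embedding strength scaled by V
    S_marked = S + np.where(bits == 1, strength, -strength)

    # Blind extraction: estimate the original S with a 3x3 local mean
    # (a stand-in for the Wiener filter), then decide each bit from the
    # sign of the residual.
    pad = np.pad(S_marked, 1, mode="edge")
    S_est = sum(pad[i:i + n, j:j + n] for i in range(3) for j in range(3)) / 9.0
    recovered = (S_marked - S_est > 0).astype(int)

    print(f"bit recovery accuracy: {(recovered == bits).mean():.3f}")
    ```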

  11. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  12. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for compressing LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral images. To obtain better image quality, the IVQ algorithm uses both the mean values and the VQ indices as encoding rules. Although IVQ improves both the bit rate and the image quality, the bit rate can be lowered further for LASIS interference patterns, whose special optical characteristics stem from the pushing and sweeping of the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhoods of those encoding blocks of the interference-pattern image that use the mean-value rule are checked for whether they share the mean value of the current block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm on LASIS interference hyper-spectral sequences.

  13. Numerical Simulation of Bottomhole Flow Field Structure in Particle Impact Drilling

    NASA Astrophysics Data System (ADS)

    Zhou, Weidong; Huang, Jinsong; Li, Luopeng

    2018-01-01

    In order to quantitatively describe the flow-field distribution around a particle impact drilling (PID) bit under bottomhole working conditions, the influence of the fluid properties (pressure and viscosity) on the bottomhole flow field and the erosion and wear behavior of the bit body are compared. A flow-field model of an eight-inch semi-vertical-borehole drill bit was established with CFX software, with the jet tracked from the bit inlet through the nozzle outlet and out across the hole bottom. The results show that after impingement the jets undergo irregular three-dimensional collision-and-rebound motion, partially striking the bit body and causing impact damage to the cutting teeth. The particle jets emitted by different nozzles interfere with each other and reduce the impact pressure at the hole bottom; a reasonable nozzle placement can effectively reduce this interference.

  14. Control mechanism of double-rotator-structure ternary optical computer

    NASA Astrophysics Data System (ADS)

    Kai, SONG; Liping, YAN

    2017-03-01

    Double-rotator-structure ternary optical processor (DRSTOP) has two characteristics, namely giant data-bits parallel computing and a reconfigurable processor: it can handle thousands of data bits in parallel and can run much faster than electronic computers and other optical computing systems to date. In order to put DRSTOP into practical application, this paper establishes a series of methods, namely a task classification method, a data-bits allocation method, a control-information generation method, a control-information formatting and sending method, and a decoded-results obtaining method. These methods form the control mechanism of DRSTOP, which makes it an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the contradiction between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designs a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible and efficient.

  15. Validation of the VitaBit Sit–Stand Tracker: Detecting Sitting, Standing, and Activity Patterns

    PubMed Central

    Plasqui, Guy

    2018-01-01

    Sedentary behavior (SB) has detrimental consequences and cannot be compensated for through moderate-to-vigorous physical activity (PA). In order to understand and mitigate SB, tools for measuring and monitoring SB are essential. While current direct-to-customer wearables focus on PA, the VitaBit validated in this study was developed to focus on SB. It was tested in a laboratory and in a free-living condition, comparing it to direct observation and to a current best-practice device, the ActiGraph, on a minute-by-minute basis. In the laboratory, the VitaBit yielded specificity and negative predictive rates (NPR) of above 91.2% for sitting and standing, while sensitivity and precision ranged from 74.6% to 85.7%. For walking, all performance values exceeded 97.3%. In the free-living condition, the device revealed performance of over 72.6% for sitting with the ActiGraph as criterion. While sensitivity and precision for standing and walking ranged from 48.2% to 68.7%, specificity and NPR exceeded 83.9%. According to the laboratory findings, high performance for sitting, standing, and walking makes the VitaBit eligible for SB monitoring. As the results are not transferrable to daily life activities, a direct observation study in a free-living setting is recommended. PMID:29543766
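
    The performance measures quoted above (sensitivity, precision, specificity, negative predictive rate) are standard confusion-matrix quantities computed per class on minute-by-minute labels. A sketch with purely illustrative counts (none of the numbers below come from the study):

    ```python
    def binary_metrics(tp, fp, fn, tn):
        # Per-class measures as commonly used in wearable validation studies.
        return {
            "sensitivity": tp / (tp + fn),  # recall: detected / truly present
            "precision":   tp / (tp + fp),  # positive predictive value
            "specificity": tn / (tn + fp),  # true negatives among truly absent
            "npr":         tn / (tn + fn),  # negative predictive rate
        }

    # Hypothetical minute-by-minute counts for one class (e.g. "sitting"),
    # device vs. direct observation; illustrative only.
    m = binary_metrics(tp=412, fp=70, fn=85, tn=933)
    for name, value in m.items():
        print(f"{name}: {value:.3f}")
    ```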

  16. Digital Signal Processing For Low Bit Rate TV Image Codecs

    NASA Astrophysics Data System (ADS)

    Rao, K. R.

    1987-06-01

    In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real time full motion color video are at various stages of development. Some companies have already brought such codecs to the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing of audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay etc. To transmit the original color video on a 56 KBPS network requires bit rate reduction of the order of 1400:1. Such a large scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding etc.
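
    The ~1400:1 figure is easy to sanity-check. Since the abstract does not state the source video format, the raw rates below are assumptions; the point is only that reducing uncompressed color video to a 56 KBPS channel requires a ratio of order 10^3:

    ```python
    # Back-of-the-envelope check of the required compression ratio.
    target_bps = 56_000  # switched digital network channel

    formats = {
        # name: (width, height, frames/s, bits/pixel) - illustrative choices
        "full CCIR-601-like": (720, 480, 30, 16),
        "half-resolution":    (360, 240, 30, 16),
    }

    for name, (w, h, fps, bpp) in formats.items():
        raw_bps = w * h * fps * bpp
        print(f"{name}: raw {raw_bps / 1e6:.1f} Mbit/s -> {raw_bps / target_bps:.0f}:1")
    ```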

  17. Demonstration of optical computing logics based on binary decision diagram.

    PubMed

    Lin, Shiyun; Ishikawa, Yasuhiko; Wada, Kazumi

    2012-01-16

    Optical circuits are low-power-consumption and high-speed alternatives to current information processing based on transistor circuits. However, because no transistor function is available in optics, an architecture that suits optics should be chosen for optical computing. One such architecture is the Binary Decision Diagram (BDD), in which information is processed by sending an optical signal from the root through a series of switching nodes to a leaf (terminal). The speed of optical computing is limited by either the transmission time of optical signals from the root to the leaf or the switching time of a node. We have designed and experimentally demonstrated 1-bit and 2-bit adders based on the BDD architecture. The switching nodes are silicon ring resonators with a modulation depth of 10 dB, and their states are changed by the plasma dispersion effect. The quality factor Q of the rings designed is 1500, which allows fast transmission of signal, e.g., 1.3 ps as calculated from the photon escape time. The total processing time is thus analyzed to be ~9 ps for a 2-bit adder and would scale linearly with the number of bits. This is two orders of magnitude faster than conventional CMOS circuitry, with its ~ns scale of delay. The presented results show the potential of fast optical computing circuits.
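
    The BDD evaluation model described above, routing one signal from the root through switching nodes to a leaf, can be sketched in a few lines. The node encoding here is our own illustration (tuples in software, not the paper's ring-resonator implementation), building the sum and carry of a 1-bit full adder:

    ```python
    # BDD-style evaluation: each node tests one input variable and routes the
    # "signal" to its low/high child; integer leaves hold the output value.
    # Node: (variable, low_child, high_child); leaves are 0 or 1.

    def bdd_eval(node, inputs):
        while not isinstance(node, int):
            var, low, high = node
            node = high if inputs[var] else low
        return node

    # Sum bit s = a XOR b XOR cin as an ordered BDD over (a, b, cin).
    xor_bc = ("b", ("cin", 0, 1), ("cin", 1, 0))    # b XOR cin
    xnor_bc = ("b", ("cin", 1, 0), ("cin", 0, 1))   # NOT (b XOR cin)
    sum_bdd = ("a", xor_bc, xnor_bc)

    # Carry-out cout = (a AND b) OR ((a XOR b) AND cin).
    carry_bdd = ("a", ("b", 0, ("cin", 0, 1)), ("b", ("cin", 0, 1), 1))

    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s = bdd_eval(sum_bdd, {"a": a, "b": b, "cin": cin})
                c = bdd_eval(carry_bdd, {"a": a, "b": b, "cin": cin})
                assert (s, c) == ((a ^ b ^ cin), (a & b) | ((a ^ b) & cin))
    print("1-bit adder BDD matches the truth table")
    ```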

  18. Autosophy: an alternative vision for satellite communication, compression, and archiving

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus; Holtz, Eric; Kalienky, Diana

    2006-08-01

    Satellite communication and archiving systems are now designed according to an outdated Shannon information theory where all data is transmitted in meaningless bit streams. Video bit rates, for example, are determined by screen size, color resolution, and scanning rates. The video "content" is irrelevant, so that totally random images require the same bit rates as blank images. An alternative system design, based on the newer Autosophy information theory, is now evolving, which transmits data "content" or "meaning" in a universally compatible 64-bit format. This would allow mixing all multimedia transmissions in the Internet's packet stream. The new system design uses self-assembling data structures, which grow like data crystals or data trees in electronic memories, for both communication and archiving. The advantages for satellite communication and archiving may include: very high lossless image and video compression, unbreakable encryption, resistance to transmission errors, universally compatible data formats, self-organizing error-proof mass memories, immunity to the Internet's Quality of Service problems, and error-proof secure communication protocols. Legacy data transmission formats can be converted by simple software patches or integrated chipsets to be forwarded through any media - satellites, radio, Internet, cable - without needing to be reformatted. This may result in orders-of-magnitude improvements for all communication and archiving systems.

  19. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality, including perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group), 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bitrates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bitrates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. 
(5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.

  20. Numerical method for high accuracy index of refraction estimation for spectro-angular surface plasmon resonance systems.

    PubMed

    Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G

    2008-11-24

    An eigenvector-analysis-based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method results in a much higher signal-to-noise ratio than the standard 1-D weighted-centroid dip-finding algorithm.

  1. Signal Processing Equipment and Techniques for Use in Measuring Ocean Acoustic Multipath Structures

    DTIC Science & Technology

    1983-12-01

    Demodulator; 3.4 Digital Demodulator; 3.4.1 Number of Bits in the Input A/D Converter; Quantization Effects; The Demodulator Output Filter; Effects of ... power caused by ignoring cross-spectral term: a) first-order Butterworth filter, b) second-order Butterworth filter; 3.4 Ordering of e... spectrum; 3.7 Multiplying D/A Converter input and output spectra: a) input, b) output; 3.8 Demodulator output spectrum prior to filtering

  2. Computer Series, 13: Bits and Pieces, 11.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1982-01-01

    Describes computer programs (with ordering information) on various topics including, among others, modeling of thermodynamics and economics of solar energy, radioactive decay simulation, stoichiometry drill/tutorial (in Spanish), computer-generated safety quiz, medical chemistry computer game, medical biochemistry question bank, generation of…

  3. Magnetic Recording Media Technology for the Tb/in2 Era"

    ScienceCinema

    Bertero, Gerardo [Western Digital

    2017-12-09

    Magnetic recording has been the technology of choice for massive storage of information. The hard-disk drive industry has recently undergone a major technological transition from longitudinal magnetic recording (LMR) to perpendicular magnetic recording (PMR). However, conventional perpendicular recording can only support a few new product generations before facing insurmountable physical limits. In order to support sustained recording areal density growth, new technological paradigms, such as energy-assisted recording and bit-patterned media recording, are being contemplated and planned. In this talk, we will briefly discuss the LMR-to-PMR transition, the extendibility of current PMR recording, and the nature and merits of new enabling technologies. We will also discuss a technology roadmap toward recording densities approaching 10 Tb/in2, approximately 40 times higher than in current disk drives.

  4. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    Electronic memory system of 32-bit words corrects bit errors caused by some common types of failures - even failure of an entire 4-bit-wide random-access-memory (RAM) chip. Detects failure of two such chips, so user is warned that output of memory may contain errors. Includes eight 4-bit-wide DRAMs configured so each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM contributes only 1 bit to each 8-bit word.

  5. True randomness from an incoherent source

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2017-11-01

    Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
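
    A Toeplitz-hashing extractor of the kind mentioned above is straightforward to sketch: a random seed defines a Toeplitz matrix over GF(2), and extraction is a matrix-vector product modulo 2. The sizes below are illustrative, not the experiment's 5.12 bits-per-sample configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_in, n_out = 256, 100  # raw bits in, extracted bits out (illustrative ratio)

    # A Toeplitz matrix is fixed by its first row and first column, so
    # n_in + n_out - 1 random seed bits define the whole n_out x n_in matrix.
    seed = rng.integers(0, 2, n_in + n_out - 1)
    i = np.arange(n_out)[:, None]
    j = np.arange(n_in)[None, :]
    T = seed[j - i + n_out - 1]  # entries constant along each diagonal

    raw = rng.integers(0, 2, n_in)  # raw (possibly biased) sample bits
    extracted = T.dot(raw) % 2      # matrix-vector product over GF(2)

    print(extracted[:16])
    ```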

  6. Petabyte mass memory system using the Newell Opticel(TM)

    NASA Technical Reports Server (NTRS)

    Newell, Chester W.

    1994-01-01

    A random access system is proposed for digital storage and retrieval of up to a Petabyte of user data. The system is comprised of stacked memory modules using laser heads writing to an optical medium, in a new shirt-pocket-sized optical storage device called the Opticel. The Opticel described is a completely sealed 'black box' in which an optical medium is accelerated and driven at very high rates to accommodate the desired transfer rates, yet in such a manner that wear is virtually eliminated. It essentially emulates a disk, but with storage area up to several orders of magnitude higher. Access time to the first bit can range from a few milliseconds to a fraction of a second, with time to the last bit within a fraction of a second to a few seconds. The actual times are dependent on the capacity of each Opticel, which ranges from 72 Gigabytes to 1.25 Terabytes. Data transfer rate is limited strictly by the head and electronics, and is 15 Megabits per second in the first version. Independent parallel write/read access to each Opticel is provided using dedicated drives and heads. A Petabyte based on the present Opticel and drive design would occupy 120 cubic feet on a footprint of 45 square feet; with further development, it could occupy as little as 9 cubic feet.

  7. Comparison of a row-column speller vs. a novel lateral single-character speller: assessment of BCI for severe motor disabled patients.

    PubMed

    Pires, Gabriel; Nunes, Urbano; Castelo-Branco, Miguel

    2012-06-01

    Non-invasive brain-computer interface (BCI) based on electroencephalography (EEG) offers a new communication channel for people suffering from severe motor disorders. This paper presents a novel P300-based speller called lateral single-character (LSC). The LSC performance is compared to that of the standard row-column (RC) speller. We developed LSC, a single-character paradigm comprising all letters of the alphabet following an event strategy that significantly reduces the time for symbol selection, and explores the intrinsic hemispheric asymmetries in visual perception to improve the performance of the BCI. RC and LSC paradigms were tested by 10 able-bodied participants, seven participants with amyotrophic lateral sclerosis (ALS), five participants with cerebral palsy (CP), one participant with Duchenne muscular dystrophy (DMD), and one participant with spinal cord injury (SCI). The averaged results, taking into account all participants who were able to control the BCI online, were significantly higher for LSC, 26.11 bit/min and 89.90% accuracy, than for RC, 21.91 bit/min and 88.36% accuracy. The two paradigms produced different waveforms and the signal-to-noise ratio was significantly higher for LSC. Finally, the novel LSC also showed new discriminative features. The results suggest that LSC is an effective alternative to RC, and that LSC still has a margin for potential improvement in bit rate and accuracy. The high bit rates and accuracy of LSC are a step forward for the effective use of BCI in clinical applications. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
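
    Bit/min figures for P300 spellers are conventionally computed from the Wolpaw information-transfer-rate formula. A sketch using the LSC accuracy quoted above, with an assumed symbol count and selection rate (neither is stated in the abstract, so the resulting bit/min value is illustrative only):

    ```python
    from math import log2

    def wolpaw_bits_per_selection(n, p):
        # Wolpaw information-transfer rate for n possible symbols,
        # each selected correctly with probability p.
        if p == 1.0:
            return log2(n)
        return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

    n_symbols = 28               # assumption: alphabet plus a few commands
    accuracy = 0.8990            # LSC accuracy from the abstract
    selections_per_min = 6.0     # assumption, not from the abstract

    b = wolpaw_bits_per_selection(n_symbols, accuracy)
    print(f"{b:.2f} bits/selection -> {b * selections_per_min:.1f} bit/min")
    ```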

  8. Effect of PDC bit design and confining pressure on bit-balling tendencies while drilling shale using water base mud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hariharan, P.R.; Azar, J.J.

    1996-09-01

    A good majority of all oilwell drilling occurs in shale and other clay-bearing rocks. In light of the relatively few studies conducted, the problem of bit-balling in PDC bits while drilling shale has been addressed with the primary intention of quantifying the degree of balling, as well as investigating the influence of bit design and confining pressure. A series of full-scale laboratory drilling tests under simulated downhole conditions were conducted utilizing seven different PDC bits in Catoosa shale. Test results have indicated that the non-dimensional parameter R_d [(bit torque)·(weight-on-bit)/(bit diameter)] is a good indicator of the degree of bit-balling and that it correlates well with specific energy. Furthermore, test results have shown bit profile and bit hydraulic design to be key parameters of bit design that dictate the tendency of balling in shales under a given set of operating conditions. A bladed bit was noticed to ball less compared to a ribbed or open-faced bit. Likewise, related to bit profile, test results have indicated that the parabolic profile has a lesser tendency to ball compared to round and flat profiles. The tendency of PDC bits to ball was noticed to increase with increasing confining pressure for the set of drilling conditions used.

  9. ELECTROSTATIC MEMORY SYSTEM

    DOEpatents

    Chu, J.C.

    1958-09-23

An improved electrostatic memory system is described for a digital computer wherein a plurality of storage tubes are adapted to operate in either of two possible modes. According to the present invention, duplicate storage tubes are provided for each denominational order of the several binary digits. A single discriminator system is provided between corresponding duplicate tubes to determine the character of the information stored in each. If either tube produces the selected type of signal, corresponding to binary "1" in the preferred embodiment, a "1" is regenerated in both tubes. In one mode of operation each bit of information is stored in two corresponding tubes, while in the other mode of operation each bit is stored in only one tube in the conventional manner.
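The regeneration rule in the duplicated mode amounts to an OR over each pair of tubes: if either tube of a pair reads the selected signal ("1"), a "1" is rewritten to both. A minimal sketch, with function and variable names that are illustrative rather than taken from the patent:

```python
def regenerate_pair(tube_a: int, tube_b: int) -> tuple[int, int]:
    """If either tube of a duplicated pair holds a 1, regenerate a 1 in both.

    A stored 0 (neither tube producing the selected signal) is left as 0.
    """
    bit = tube_a | tube_b
    return bit, bit

# A transient fade in one tube is repaired from its duplicate:
print(regenerate_pair(1, 0))  # both tubes restored to 1
print(regenerate_pair(0, 0))  # a stored 0 stays 0
```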

  10. A sparse matrix algorithm on the Boolean vector machine

    NASA Technical Reports Server (NTRS)

    Wagner, Robert A.; Patrick, Merrell L.

    1988-01-01

VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), which is a large network of very small processors with equally small memories that operate in SIMD mode; these use bit-serial arithmetic and communicate via a cube-connected cycles network. The BVM's bit-serial arithmetic and the small memories of individual processors are noted to compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.

  11. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    NASA Astrophysics Data System (ADS)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  12. LANDSAT D local user terminal study

    NASA Technical Reports Server (NTRS)

    Alexander, L.; Louie, M.; Spencer, R.; Stow, W. K.

    1976-01-01

The effect of the changes incorporated in the LANDSAT D system on the ability of a local user terminal to receive, record and process data in real time was studied. Alternate solutions to the problems raised by these changes were evaluated. A loading analysis was performed in order to determine the quantities of data that a local user terminal (LUT) would be interested in receiving and processing. The number of bits in an MSS and a TM scene were calculated along with the number of scenes per day that an LUT might require for processing. These were then combined into a total number of processed bits/day for an LUT as a function of sensor and coverage circle radius.
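The loading analysis reduces to multiplying bits per scene by scenes per day. The scene sizes and scene counts below are rough, illustrative magnitudes (the study's actual figures are not given in this summary):

```python
def processed_bits_per_day(bits_per_scene: int, scenes_per_day: int) -> int:
    """Total processed bits/day for a local user terminal (LUT)."""
    return bits_per_scene * scenes_per_day

# Illustrative magnitudes only: an MSS scene is on the order of 2e8 bits,
# a TM scene on the order of 2e9 bits; 4 scenes/day is an assumed workload.
mss_daily = processed_bits_per_day(bits_per_scene=200_000_000, scenes_per_day=4)
tm_daily = processed_bits_per_day(bits_per_scene=2_000_000_000, scenes_per_day=4)
print(mss_daily, tm_daily)
```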

  13. A Study of Specific Fracture Energy at Percussion Drilling

    NASA Astrophysics Data System (ADS)

Shadrina, A.; Kabanova, T.; Krets, V.; Saruev, L.

    2014-08-01

The paper presents experimental studies of rock failure produced by percussion drilling. Quantitative and qualitative analyses were carried out to estimate critical values of rock failure depending on the hammer pre-impact velocity, type of drill bit, cylindrical hammer parameters (weight, length, diameter), and turn angle of the drill bit. Data obtained in this work were compared with results obtained by other researchers. The particle-size distribution in granite-cutting sludge was also analyzed. Statistical methods (Spearman's rank-order correlation, multiple regression analysis with dummy variables, and the Kruskal-Wallis nonparametric test) were used to analyze the drilling process. The experimental data will be useful for specialists engaged in the simulation and illustration of rock failure.

  14. Experiences modeling ocean circulation problems on a 30 node commodity cluster with 3840 GPU processor cores.

    NASA Astrophysics Data System (ADS)

    Hill, C.

    2008-12-01

Low cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more 32-bit floating point, concurrently executing cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time dependent shallow-water equations simulation targeting a cluster of 30 computers each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. 
As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at a lower performance.

  15. Along-strike structural variation and thermokinematic development of the Cenozoic Bitlis-Zagros fold-thrust belt, Turkey and Iraqi Kurdistan

    NASA Astrophysics Data System (ADS)

    Barber, Douglas E.; Stockli, Daniel F.; Koshnaw, Renas I.; Tamar-Agha, Mazin Y.; Yilmaz, Ismail O.

    2016-04-01

The Bitlis-Zagros orogen in northern Iraq is a principal element of the Arabia-Eurasia continental collision and is characterized by the lateral intersection of two structural domains: the NW-SE trending Zagros proper system of Iran and the E-W trending Bitlis fold-thrust belt of Turkey and Syria. While these components in northern Iraq share a similar stratigraphic framework, they exhibit along-strike variations in the width and style of tectonic zones, fold morphology and trends, and structural inheritance. However, the distinctions between the Bitlis and Zagros segments remain poorly understood in terms of timing and deformation kinematics as well as first-order controls on fold-thrust development. Structural and stratigraphic study and seismic data combined with low-T thermochronometry provide the basis for reconstructions of the Bitlis-Zagros fold-thrust belt in southeastern Turkey and northern Iraq to elucidate the kinematic and temporal relationship of these two systems. Balanced cross-sections were constructed and incrementally restored to quantify the deformational evolution and used as input for thermokinematic models (FETKIN) to generate thermochronometric ages along the topographic surface of each cross-section line. The forward modeled thermochronometric ages were then compared to new and previously published apatite and zircon (U-Th)/He and fission-track ages from southeastern Turkey and northern Iraq to test the validity of the timing, rate, and fault-motion geometry associated with each reconstruction. The results of these balanced thermokinematic restorations integrated with constraints from syn-tectonic sedimentation suggest that the Zagros belt between Erbil and Suleimaniyah was affected by an initial phase of Late Cretaceous exhumation related to the Proto-Zagros collision. 
During the main Zagros phase, deformation advanced rapidly and in-sequence from the Main Zagros Fault to the thin-skinned frontal thrusts (Kirkuk, Shakal, Qamar) from middle to latest Miocene times, followed by out-of-sequence development of the Mountain Front Flexure (Qaradagh anticline) by ~5 Ma. In contrast, initial exhumation in the northern Bitlis belt occurred by mid-Eocene time, followed by collisional deformation that propagated southward into northern Iraqi Kurdistan during the middle to late Miocene. Plio-Pleistocene deformation was partitioned into out-of-sequence reactivation of the Ora thrust along the Iraq-Turkey border, concurrent with development of the Sinjar and Abdulaziz inversion structures at the edge of the Bitlis deformation front. Overall, these data suggest the Bitlis and Zagros trends evolved relatively independently during Cretaceous and early Cenozoic times, resulting in very different structural and stratigraphic inheritance, before being affected contemporaneously by a major phase of in-sequence shortening during the middle to latest Miocene and out-of-sequence deformation since the Pliocene. Limited seismic sections corroborate the notion that the structural style and trend of the Bitlis fold belt are dominated by inverted Mesozoic extensional faults, whereas the Zagros structures are interpreted mostly as fault-propagation folds above a Triassic décollement. These pre-existing heterogeneities in the Bitlis belt contributed to the lower shortening estimates, variable anticline orientations, irregular fold spacing, and the fundamentally different orientations of the Zagros-Bitlis belt in Iraqi Kurdistan and Turkey.

  16. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (tiny-YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
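The paper's adaptive scheme picks quantization intervals from estimated piecewise Gaussian densities; the sketch below shows only the simpler baseline such methods improve on, plain symmetric uniform quantization of a weight tensor to n bits. All names are illustrative and this is not the paper's algorithm.

```python
import numpy as np

def uniform_quantize(w: np.ndarray, bits: int):
    """Symmetric uniform quantization of weights to `bits` bits.

    A plain max-abs baseline, not the adaptive, distribution-aware interval
    selection described in the paper.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 levels each side for 8-bit
    step = np.abs(w).max() / qmax       # quantization step size
    q = np.clip(np.round(w / step), -qmax, qmax)
    return q * step, step               # dequantized weights and step size

w = np.array([-1.0, -0.3, 0.0, 0.6, 1.0])
w8, step8 = uniform_quantize(w, bits=8)
w4, step4 = uniform_quantize(w, bits=4)
# Round-to-nearest keeps the error within half a step in each case.
print(np.abs(w8 - w).max() <= step8 / 2, np.abs(w4 - w).max() <= step4 / 2)  # True True
```

Fewer bits mean a coarser step and larger reconstruction error, which is why the 4-bit YOLO model above loses slightly more mAP than the 8-bit one.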

  17. A Concurrent Smalltalk Compiler for the Message-Driven Processor

    DTIC Science & Technology

    1988-05-01

;;; Returns a value with bits from low-bit (inclusive) to high-bit (exclusive) set. ;;;Low-bit defaults to zero. (defmacro brange (high-bit &optional low-bit) (list...n2) (null (cddr num))) (setq bits (b+ bits (if (>= n1 n2) (brange (1+ n1) n2) (brange (1+ n2) n1)))) (error "Bad bmap range: ~S" ...)))) (t (error...vlocs) (let ((vlive (b- ... (brange first-context-slot-num))) (next (inst-next last))) (if (bempty vlive) (delete-module module inst

  18. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
Program summary
Program title: ITER-REF
Catalogue identifier: AECO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7211
No. of bytes in distributed program, including test data, etc.: 41 862
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: desktop, server
Operating system: Unix/Linux
RAM: 512 Mbytes
Classification: 4.8
External routines: BLAS (optional)
Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied, which produces a correction to the computed solution at each iteration; this is the method commonly known as iterative refinement. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
Running time: seconds/minutes
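The factor-in-single, refine-in-double loop can be sketched with NumPy. For brevity this sketch re-solves against the 32-bit matrix at each step rather than reusing a stored LU factorization, which is what a real implementation such as ITER-REF would do; the test matrix is an assumed well-conditioned example.

```python
import numpy as np

def mixed_precision_refine(A, b, iters=5):
    """Solve Ax = b: solve in float32 (fast), refine the residual in float64."""
    A32 = A.astype(np.float32)
    # Initial solution from the low-precision solve.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in float32
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant, well-conditioned
x_true = rng.standard_normal(n)
b = A @ x_true
x = mixed_precision_refine(A, b)
print(np.abs(x - x_true).max())   # near double-precision accuracy
```

Because the matrix is well conditioned, each refinement step shrinks the error by roughly the single-precision unit roundoff, so a handful of cheap 32-bit solves recovers a 64-bit-accurate answer.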

  19. Effects of plastic bits on the condition and behaviour of captive-reared pheasants.

    PubMed

    Butler, D A; Davis, C

    2010-03-27

    Between 2005 and 2007, data were collected from game farms across England and Wales to examine the effects of the use of bits on the physiological condition and behaviour of pheasants. On each site, two pheasant pens kept in the same conditions were randomly allocated to either use bits or not. The behaviour and physiological conditions of pheasants in each treatment pen were assessed on the day of bitting and weekly thereafter until release. Detailed records of feed usage, medications and mortality were also kept. Bits halved the number of acts of bird-on-bird pecking, but they doubled the incidence of headshaking and scratching. Bits caused nostril inflammation and bill deformities in some birds, particularly after seven weeks of age. In all weeks after bitting, feather condition was poorer in non-bitted pheasants than in those fitted with bits. Less than 3 per cent of bitted birds had damaged skin, but in the non-bitted pens this figure increased over time to 23 per cent four weeks later. Feed use and mortality did not differ between bitted and non-bitted birds.

  20. Low-Power Differential SRAM design for SOC Based on the 25-um Technology

    NASA Astrophysics Data System (ADS)

    Godugunuri, Sivaprasad; Dara, Naveen; Sambasiva Nayak, R.; Nayeemuddin, Md; Singh, Yadu, Dr.; Veda, R. N. S. Sunil

    2017-08-01

SoC designs are now among the most complex designs in VLSI, and they face significant low-power operation issues; to address this, we implemented a low-power SRAM. SRAM architectures critically affect the total power and the compactness of an SoC. To overcome these drawbacks, this paper proposes a low-power differential SRAM design. The differential SRAM stores multiple bits within the same cell and operates at minimum operating voltage and minimum area per bit. The differential SRAM design was implemented in the 25-um technology using the Tanner EDA tool.

  1. New PDC bit design reduces vibrational problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mensa-Wilmot, G.; Alexander, W.L.

    1995-05-22

A new polycrystalline diamond compact (PDC) bit design combines cutter layout, load balancing, unsymmetrical blades and gauge pads, and spiraled blades to reduce problematic vibrations without limiting drilling efficiency. Stabilization improves drilling efficiency and also improves dull characteristics for PDC bits. Some PDC bit designs mitigate one vibrational mode (such as bit whirl) through drilling parameter manipulation yet cause or excite another vibrational mode (such as slip-stick). An alternative vibration-reducing concept which places no limitations on the operational environment of a PDC bit has been developed to ensure optimization of the bit's available mechanical energy. The paper discusses bit stabilization, vibration reduction, vibration prevention, cutter arrangement, load balancing, blade layout, spiraled blades, and bit design.

  2. Radio Astronomy Software Defined Receiver Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vacaliuc, Bogdan; Leech, Marcus; Oxley, Paul

The paper describes a Radio Astronomy Software Defined Receiver (RASDR) that is currently under development. RASDR is targeted for use by amateurs and small institutions where cost is a primary consideration. The receiver will operate from HF through 2.8 GHz. Front-end components such as preamps, block down-converters and pre-select bandpass filters are outside the scope of this development and will be provided by the user. The receiver includes RF amplifiers and attenuators, synthesized LOs, quadrature down converters, dual 8-bit ADCs and a signal processor that provides firmware processing of the digital bit stream. RASDR will interface to a user's PC via a USB or higher speed Ethernet LAN connection. The PC will run software that provides processing of the bit stream, a graphical user interface, as well as data analysis and storage. Software should support MAC OS, Windows and Linux platforms and will focus on such radio astronomy applications as total power measurements, pulsar detection, and spectral line studies.

  3. A novel attack method about double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

By using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. First, the secret message is transformed into a two-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Finally, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
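The first step, packing the secret text's bit stream into the higher bits of an array while the lower bits hold fixed values, can be sketched as below. The one-bit-per-element layout and the filler constant are assumptions made for illustration, and the subsequent double-random-phase encoding is not shown.

```python
FILLER = 0x2A  # assumed fixed pattern for the lower 7 bits of each element

def embed_text(text: str) -> list[int]:
    """Place each bit of the text in the high bit of one 8-bit array element."""
    bits = [(byte >> i) & 1 for byte in text.encode() for i in range(7, -1, -1)]
    return [(b << 7) | FILLER for b in bits]

def extract_text(elements: list[int]) -> str:
    """Recover the hidden text from the high bits of the elements."""
    bits = [v >> 7 for v in elements]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return data.decode()

print(extract_text(embed_text("secret")))  # round-trips the hidden text
```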

  4. A microprocessor based on a two-dimensional semiconductor

    PubMed Central

    Wachter, Stefan; Polyushkin, Dmitry K.; Bethge, Ole; Mueller, Thomas

    2017-01-01

The advent of microcomputers in the 1970s has dramatically changed our society. Since then, microprocessors have been made almost exclusively from silicon, but the ever-increasing demand for higher integration density and speed, lower power consumption and better integrability with everyday goods has prompted the search for alternatives. Germanium and III–V compound semiconductors are considered promising candidates for future high-performance processor generations, and chips based on thin-film plastic technology or carbon nanotubes could allow for embedding electronic intelligence into arbitrary objects for the Internet-of-Things. Here, we present a 1-bit implementation of a microprocessor using a two-dimensional semiconductor—molybdenum disulfide. The device can execute user-defined programs stored in an external memory, perform logical operations and communicate with its periphery. Our 1-bit design is readily scalable to multi-bit data. The device consists of 115 transistors and constitutes the most complex circuitry so far made from a two-dimensional material. PMID:28398336

  5. Security of quantum key distribution with multiphoton components

    PubMed Central

    Yin, Hua-Lei; Fu, Yao; Mao, Yingqiu; Chen, Zeng-Bing

    2016-01-01

    Most qubit-based quantum key distribution (QKD) protocols extract the secure key merely from single-photon component of the attenuated lasers. However, with the Scarani-Acin-Ribordy-Gisin 2004 (SARG04) QKD protocol, the unconditionally secure key can be extracted from the two-photon component by modifying the classical post-processing procedure in the BB84 protocol. Employing the merits of SARG04 QKD protocol and six-state preparation, one can extract secure key from the components of single photon up to four photons. In this paper, we provide the exact relations between the secure key rate and the bit error rate in a six-state SARG04 protocol with single-photon, two-photon, three-photon, and four-photon sources. By restricting the mutual information between the phase error and bit error, we obtain a higher secure bit error rate threshold of the multiphoton components than previous works. Besides, we compare the performances of the six-state SARG04 with other prepare-and-measure QKD protocols using decoy states. PMID:27383014

  6. Development of an Innovative Algorithm for Aerodynamics-Structure Interaction Using Lattice Boltzmann Method

    NASA Technical Reports Server (NTRS)

    Mei, Ren-Wei; Shyy, Wei; Yu, Da-Zhi; Luo, Li-Shi; Rudy, David (Technical Monitor)

    2001-01-01

The lattice Boltzmann equation (LBE) is a kinetic formulation which offers an alternative computational method capable of solving fluid dynamics for various systems. Major advantages of the method stem from the facts that the solution for the particle distribution functions is explicit and easy to implement, and that the algorithm is natural to parallelize. In this final report, we summarize the work accomplished in the past three years. Since most of the work has been published, the technical details can be found in the literature; a brief summary is provided in this report. In this project, a second-order accurate treatment of boundary conditions in the LBE method is developed for curved boundaries and tested successfully in various 2-D and 3-D configurations. To evaluate the aerodynamic force on a body in the context of the LBE method, several force evaluation schemes have been investigated. A simple momentum exchange method is shown to give reliable and accurate values for the force on a body in both 2-D and 3-D cases. Various 3-D LBE models have been assessed in terms of efficiency, accuracy, and robustness. In general, accurate 3-D results can be obtained using LBE methods. The 3-D 19-bit model is found to be the best among the 15-bit, 19-bit, and 27-bit LBE models. To achieve the desired grid resolution and to accommodate the far-field boundary conditions in aerodynamics computations, a multi-block LBE method is developed by dividing the flow field into various blocks, each having constant lattice spacing. Substantial contributions to the LBE method are also made through the development of a new, generalized lattice Boltzmann equation constructed in the moment space in order to improve the computational stability, detailed theoretical analysis of the stability, dispersion, and dissipation characteristics of the LBE method, and computational studies of high Reynolds number flows with singular gradients. 
Finally, a finite difference-based lattice Boltzmann method is developed for inviscid compressible flows.

  7. Enhanced protocol for real-time transmission of echocardiograms over wireless channels.

    PubMed

    Cavero, Eva; Alesanco, Alvaro; García, Jose

    2012-11-01

This paper presents a methodology to transmit clinical video over wireless networks in real-time. A 3-D set partitioning in hierarchical trees compression prior to transmission is proposed. In order to guarantee the clinical quality of the compressed video, a clinical evaluation specific to each video modality has to be made. This evaluation indicates the minimal transmission rate necessary for an accurate diagnosis. However, the channel conditions produce errors and distort the video. A reliable application protocol is therefore proposed using a hybrid solution in which either retransmission or retransmission combined with forward error correction (FEC) techniques are used, depending on the channel conditions. In order to analyze the proposed methodology, the 2-D mode of an echocardiogram has been assessed. A bandwidth of 200 kbps is necessary to guarantee its clinical quality. Transmission using the proposed solution, and using retransmission and FEC techniques working separately, has been simulated and compared in high-speed uplink packet access (HSUPA) and worldwide interoperability for microwave access (WiMAX) networks. The proposed protocol guarantees clinical quality at higher bit error rates than the other protocols: at a mobile speed of 60 km/h, the tolerable bit error rate is up to 3.3 times higher for HSUPA and 10 times higher for WiMAX.

  8. Critique of a Hughes shuttle Ku-band data sampler/bit synchronizer

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.

    1980-01-01

An alternative bit synchronizer proposed for shuttle was analyzed in a noise-free environment by considering the basic operation of the loop via timing diagrams and by linearizing the bit synchronizer as an equivalent, continuous, phase-locked loop (PLL). The loop is composed of a high-frequency phase-frequency detector, which is capable of detecting both phase and frequency errors and is used to track the clock, and a bit transition detector, which attempts to track the transitions of the data bits. It was determined that the basic approach was a good design which, with proper implementation of the accumulator, up/down counter, and logic, should provide accurate mid-bit sampling with symmetric bits. However, when bit asymmetry occurs, the bit synchronizer can lock up with a large timing error, yet be quasi-stable (timing will not change unless the clock and bit sequence drift). This will result in incorrectly detecting some bits.

  9. Bit-1 is an essential regulator of myogenic differentiation

    PubMed Central

    Griffiths, Genevieve S.; Doe, Jinger; Jijiwa, Mayumi; Van Ry, Pam; Cruz, Vivian; de la Vega, Michelle; Ramos, Joe W.; Burkin, Dean J.; Matter, Michelle L.

    2015-01-01

    Muscle differentiation requires a complex signaling cascade that leads to the production of multinucleated myofibers. Genes regulating the intrinsic mitochondrial apoptotic pathway also function in controlling cell differentiation. How such signaling pathways are regulated during differentiation is not fully understood. Bit-1 (also known as PTRH2) mutations in humans cause infantile-onset multisystem disease with muscle weakness. We demonstrate here that Bit-1 controls skeletal myogenesis through a caspase-mediated signaling pathway. Bit-1-null mice exhibit a myopathy with hypotrophic myofibers. Bit-1-null myoblasts prematurely express muscle-specific proteins. Similarly, knockdown of Bit-1 expression in C2C12 myoblasts promotes early differentiation, whereas overexpression delays differentiation. In wild-type mice, Bit-1 levels increase during differentiation. Bit-1-null myoblasts exhibited increased levels of caspase 9 and caspase 3 without increased apoptosis. Bit-1 re-expression partially rescued differentiation. In Bit-1-null muscle, Bcl-2 levels are reduced, suggesting that Bcl-2-mediated inhibition of caspase 9 and caspase 3 is decreased. Bcl-2 re-expression rescued Bit-1-mediated early differentiation in Bit-1-null myoblasts and C2C12 cells with knockdown of Bit-1 expression. These results support an unanticipated yet essential role for Bit-1 in controlling myogenesis through regulation of Bcl-2. PMID:25770104

  10. Randomizer for High Data Rates

    NASA Technical Reports Server (NTRS)

    Garon, Howard; Sank, Victor J.

    2018-01-01

NASA as well as a number of other space agencies now recognize that the current recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 maximal length sequence (MLS) are required in order to fully cover a channel access data unit (CADU), spectral problems in the form of elevated spurious discretes (spurs) appear. Originally the randomizer was called a bit transition generator (BTG) precisely because it was thought that its primary value was to ensure sufficient bit transitions to allow the bit/symbol synchronizer to lock and remain locked. We (NASA) have shown that the old BTG concept is a limited view of the real value of the randomizer sequence and that the randomizer also aids in signal acquisition as well as minimizing the potential for false decoder lock. Under the guidelines considered here, there are multiple maximal-length sequences over GF(2) which appear attractive in this application. Although there may be mitigating reasons why another MLS could be selected, one sequence in particular possesses a combination of desired properties which sets it apart from the others.
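A degree-8 maximal-length sequence such as PN8 can be generated with an 8-bit Fibonacci LFSR. The sketch below uses taps for h(x) = x^8 + x^7 + x^5 + x^3 + 1 with an all-ones seed, which is our reading of the CCSDS pseudo-randomizer (stated here as an assumption, not a restatement of the standard), and checks the defining period of 2^8 - 1 = 255 bits:

```python
def mls_bits(n: int, seed: int = 0xFF) -> list[int]:
    """Fibonacci LFSR output for h(x) = x^8 + x^7 + x^5 + x^3 + 1.

    The polynomial, all-ones seed, and bit ordering are assumptions made for
    illustration; any primitive degree-8 polynomial yields period 255.
    """
    state = seed
    out = []
    for _ in range(n):
        out.append((state >> 7) & 1)  # emit the MSB
        # Feedback is the parity of the tap bits (exponents 8, 7, 5, 3).
        fb = ((state >> 7) ^ (state >> 6) ^ (state >> 4) ^ (state >> 2)) & 1
        state = ((state << 1) & 0xFF) | fb
    return out

seq = mls_bits(510)
print(seq[:255] == seq[255:])                              # True: repeats every 255 bits
print(any(seq[:p] == seq[p:2 * p] for p in (85, 51, 17)))  # False: no shorter period
```

The 255-bit period is exactly why a short PN8 must be applied repeatedly to cover a long CADU, which is the repetition that produces the elevated spurs discussed above.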

  11. Perceptually tuned low-bit-rate video codec for ATM networks

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien

    1996-02-01

    In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated 2 × 64 kbps and the cells of the second layer are all lost.
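
    The base-layer selection step can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the function name, the flat index ordering, and a fixed per-sample budget are my own assumptions. Samples whose prediction error exceeds the local JND threshold are flagged perceptually important (PI); the first `base_budget` of them ride the guaranteed base-layer cells, and the overflow goes to the loss-prone second layer.

```python
import numpy as np

def split_layers(pred_error, jnd, base_budget):
    """Send perceptually-important (PI) samples first; overflow goes to layer two.
    PI = prediction error exceeding the local just-noticeable distortion (JND)."""
    pi = np.flatnonzero(np.abs(pred_error.ravel()) > jnd.ravel())
    base = pi[:base_budget]        # guaranteed base-layer cells
    second = pi[base_budget:]      # may be lost to channel error or congestion
    return base, second
```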

  12. Robust relativistic bit commitment

    NASA Astrophysics Data System (ADS)

    Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony

    2016-12-01

    Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.

  13. Imprint lithography template technology for bit patterned media (BPM)

    NASA Astrophysics Data System (ADS)

    Lille, J.; Patel, K.; Ruiz, R.; Wu, T.-W.; Gao, H.; Wan, Lei; Zeltzer, G.; Dobisz, E.; Albrecht, T. R.

    2011-11-01

    Bit patterned media (BPM) for magnetic recording has emerged as a promising technology to deliver thermally stable magnetic storage at densities beyond 1Tb/in2. Insertion of BPM into hard disk drives will require the introduction of nanoimprint lithography and other nanofabrication processes for the first time. In this work, we focus on nanoimprint and nanofabrication challenges that are being overcome in order to produce patterned media. Patterned media has created the need for new tools and processes, such as an advanced rotary e-beam lithography tool and block copolymer integration. The integration of block copolymer is through the use of a chemical contrast pattern on the substrate which guides the alignment of di-block copolymers. Most of the work on directed self assembly for patterned media applications has, until recently, concentrated on the formation of circular dot patterns in a hexagonal close packed lattice. However, interactions between the read head and media favor a bit aspect ratio (BAR) greater than one. This design constraint has motivated new approaches for using self-assembly to create suitable high-BAR master patterns and has implications for template fabrication.

  14. Expeditious reconciliation for practical quantum key distribution

    NASA Astrophysics Data System (ADS)

    Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.

    2004-08-01

    The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
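
    The core of any Cascade-like reconciler is the BINARY parity bisection: when a block's parities differ, an odd number of errors is present, and one of them can be pinned down with about log2(block size) parity exchanges. A minimal single-pass sketch follows; real Cascade additionally shuffles and repeats with growing block sizes, and every disclosed parity must later be discounted during privacy amplification:

```python
def parity(bits, lo, hi):
    return sum(bits[lo:hi]) & 1

def binary_locate(alice, bob, lo, hi):
    """Bisect [lo, hi) to one error position, given the block parities differ."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid           # error is in the left half
        else:
            lo = mid           # error is in the right half
    return lo

def reconcile_pass(alice, bob, block):
    """One pass: fix one error in every block whose parity disagrees."""
    bob = bob[:]
    for start in range(0, len(bob), block):
        end = min(start + block, len(bob))
        if parity(alice, start, end) != parity(bob, start, end):
            bob[binary_locate(alice, bob, start, end)] ^= 1
    return bob
```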

  15. Efficient heralding of O-band passively spatial-multiplexed photons for noise-tolerant quantum key distribution.

    PubMed

    Liu, Mao Tong; Lim, Han Chuen

    2014-09-22

    When implementing O-band quantum key distribution on optical fiber transmission lines carrying C-band data traffic, noise photons that arise from spontaneous Raman scattering or insufficient filtering of the classical data channels could cause the quantum bit-error rate to exceed the security threshold. In this case, a photon heralding scheme may be used to reject the uncorrelated noise photons in order to restore the quantum bit-error rate to a low level. However, the secure key rate would suffer unless one uses a heralded photon source with sufficiently high heralding rate and heralding efficiency. In this work we demonstrate a heralded photon source that has a heralding efficiency that is as high as 74.5%. One disadvantage of a typical heralded photon source is that the long deadtime of the heralding detector results in a significant drop in the heralding rate. To counter this problem, we propose a passively spatial-multiplexed configuration at the heralding arm. Using two heralding detectors in this configuration, we obtain an increase in the heralding rate by 37% and a corresponding increase in the heralded photon detection rate by 16%. We transmit the O-band photons over 10 km of noisy optical fiber to observe the relation between quantum bit-error rate and noise-degraded second-order correlation function of the transmitted photons. The effects of afterpulsing when we shorten the deadtime of the heralding detectors are also observed and discussed.
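
    The benefit of splitting heralds across two detectors can be seen with a simple non-paralyzable dead-time model. The rates below are illustrative assumptions; the 37% figure in the abstract comes from the authors' actual detectors, not from this formula:

```python
def detected_rate(true_rate, deadtime):
    """Non-paralyzable detector model: registered rate saturates as R/(1 + R*tau)."""
    return true_rate / (1.0 + true_rate * deadtime)

# One detector at the full herald rate vs. two detectors behind a 50/50 splitter
R, tau = 2.0e6, 10.0e-6                 # 2 MHz herald rate, 10 us deadtime (assumed)
single = detected_rate(R, tau)
double = 2.0 * detected_rate(R / 2.0, tau)
gain = double / single - 1.0            # fractional improvement from multiplexing
```

    Each multiplexed detector sees half the rate and therefore spends far less of its time dead, so the summed registered rate exceeds the single-detector rate.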

  16. Towards a Quantum Computer?

    NASA Astrophysics Data System (ADS)

    Bellac, Michel Le

    2014-11-01

    In everyday life, practically all the information which is processed, exchanged or stored is coded in the form of discrete entities called bits, which take two values only, by convention 0 and 1. With the present technology for computers and optical fibers, bits are carried by electrical currents and electromagnetic waves corresponding to macroscopic fluxes of electrons and photons, and they are stored in memories of various kinds, for example, magnetic memories. Although quantum physics is the basic physics which underlies the operation of a transistor (Chapter 6) or of a laser (Chapter 4), each exchanged or processed bit corresponds to a large number of elementary quantum systems, and its behavior can be described classically due to the strong interaction with the environment (Chapter 9). For about thirty years, physicists have learned to manipulate with great accuracy individual quantum systems: photons, electrons, neutrons, atoms, and so forth, which opens the way to using two-state quantum systems, such as the polarization states of a photon (Chapter 2) or the two energy levels of an atom or an ion (Chapter 4) in order to process, exchange or store information. In § 2.3.2, we used the two polarization states of a photon, vertical (V) and horizontal (H), to represent the values 0 and 1 of a bit and to exchange information. In what follows, it will be convenient to use Dirac's notation (see Appendix A.2.2 for more details), where a vertical polarization state is denoted by |V> or |0> and a horizontal one by |H> or |1>, while a state with arbitrary polarization will be denoted by |ψ>. The polarization states of a photon give one possible realization of a quantum bit, or for short a qubit. Thanks to the properties of quantum physics, quantum computers using qubits, if they ever exist, would outperform classical computers for some specific, but very important, problems. 
In Sections 8.1 and 8.2, we describe some typical quantum algorithms and, in order to do so, we shall not be able to avoid some technical developments. However, these two sections may be skipped in a first reading, as they are not necessary for understanding the more general considerations of Sections 8.3 and 8.4.
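
    In the Dirac notation just introduced, a polarization qubit is simply a normalized pair of complex amplitudes over |0> = |V> and |1> = |H>. A small sketch in plain Python (the helper names are hypothetical):

```python
import math

ket_V = (1.0, 0.0)   # |0> = |V>
ket_H = (0.0, 1.0)   # |1> = |H>

def superpose(a, b):
    """Return a|V> + b|H> as an amplitude pair (caller supplies normalized a, b)."""
    return (a * ket_V[0] + b * ket_H[0], a * ket_V[1] + b * ket_H[1])

def prob(psi, bit):
    """Born rule: probability of reading out the classical bit value 0 (V) or 1 (H)."""
    return abs(psi[bit]) ** 2

# A photon polarized at 45 degrees: equal superposition of V and H
diag = superpose(1 / math.sqrt(2), 1 / math.sqrt(2))
```

    Measured in the V/H basis, `diag` yields 0 or 1 with probability 1/2 each; a qubit read out in the wrong basis leaks only randomness, which is the property exploited by the quantum key distribution of § 2.3.2.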

  17. Self-Advancing Step-Tap Drills

    NASA Technical Reports Server (NTRS)

    Pettit, Donald R.; Camarda, Charles J.; Penner, Ronald K.; Franklin, Larry D.

    2007-01-01

    Self-advancing tool bits that are hybrids of drills and stepped taps make it possible to form threaded holes wider than about 1/2 in. (about 13 mm) without applying any more axial force than is necessary for forming narrower pilot holes. These self-advancing stepped-tap drills were invented for use by space-suited astronauts performing repairs on reinforced carbon/carbon space-shuttle leading edges during space walks, in which the ability to apply axial drilling forces is severely limited. Self-advancing stepped-tap drills could also be used on Earth for making wide holes without applying large axial forces. A self-advancing stepped-tap drill (see figure) includes several sections having progressively larger diameters, typically in increments between 0.030 and 0.060 in. (between about 0.8 and about 1.5 mm). The tip section, which is the narrowest, is a pilot drill bit that typically has a diameter between 1/8 and 3/16 in. (between about 3.2 and about 4.8 mm). The length of the pilot-drill section is chosen, according to the thickness of the object to be drilled and tapped, so that the pilot hole is completed before engagement of the first tap section. Provided that the cutting-edge geometry of the drill bit is optimized for the material to be drilled, only a relatively small axial force [typically of the order of a few pounds (of the order of 10 newtons)] must be applied during drilling of the pilot hole. Once the first tap section engages the pilot hole, it is no longer necessary for the drill operator to apply axial force: the thread engagement between the tap and the workpiece provides the axial force to advance the tool bit. Like the pilot-drill section, each tap section must be long enough to complete its hole before engagement of the next, slightly wider tap section. 
The precise values of the increments in diameter, the thread pitch, the rake angle of the tap cutting edge, and other geometric parameters of the tap sections must be chosen, in consideration of the workpiece material and thickness, to prevent stripping of threads during the drilling/tapping operation. A stop-lip or shoulder at the shank end of the widest tap section prevents further passage of the tool bit through the hole.

  18. Comparison between sparsely distributed memory and Hopfield-type neural network models

    NASA Technical Reports Server (NTRS)

    Keeler, James D.

    1986-01-01

    The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as well as for extended models with higher order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to recover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.
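
    The Hopfield half of the comparison is easy to sketch. Below, a Hebbian outer-product network with N = 100 neurons stores P = 3 random patterns, well under the roughly 0.14N capacity limit alluded to above, and recalls one of them from a corrupted probe. All parameters are illustrative, not Keeler's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3                                   # neurons, stored patterns (P << 0.14*N)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product learning rule; self-connections removed
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=5):
    """Synchronous sign-threshold updates toward a stored fixed point."""
    for _ in range(steps):
        state = np.where(W @ state >= 0.0, 1, -1)
    return state

# Corrupt 5 of the first pattern's 100 bits, then recall
probe = patterns[0].copy()
probe[rng.choice(N, size=5, replace=False)] *= -1
restored = recall(probe)
```

    Pushing P toward and beyond ~0.14N makes the crosstalk terms overwhelm the signal and recall fails, which is the capacity limitation the SDM sidesteps by decoupling address space from word length.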

  19. Development and testing of the infrared interferometer spectrometer for the Mariner Mars 1971 spacecraft

    NASA Technical Reports Server (NTRS)

    Hanel, R. H.; Schlachman, B.; Vanous, D.; Rogers, D.; Taylor, J. H.

    1971-01-01

    The design, development and testing of the infrared interferometer spectrometer is reported with emphasis on the unique features of the Mariner instrument as compared to previous IRIS instruments flown on the Nimbus meteorological research satellites. The interferometer functions in the spectral range from 50 microns to 6.3 microns. A noise equivalent radiance of 0.5 × 10^-7 W/sq cm/ster/cm has been achieved. Major improvements that were implemented included the cesium iodide beamsplitter and electronic features to suppress the effect of vibration on the Michelson mirror motion and digital filtering through the summation of increased sampling of the infrared signal. A bit error detection and correction scheme was also implemented in order to recover the science data with a higher level of confidence over the telecommunication link.

  20. Influence of vibration on the coupling efficiency in spatial receiver and its compensation method

    NASA Astrophysics Data System (ADS)

    Hu, Qinggui; Mu, Yining

    2018-04-01

    To analyze the loss in a free-space optical receiver caused by vibration, we set up coordinate systems on the receiving lens surface and on the receiving fiber surface. Using Gaussian optics, we then derive the coupling efficiency equation and compute its solution with MATLAB® software. To lower the impact of vibration, a directional tapered communication fiber receiver is proposed. A sample was fabricated and two experiments were performed: the first shows that the coupling efficiency of the new receiver is higher than that of the traditional one, and the second shows that its bit error rate is lower. Both experiments show that the new receiver improves the receiving system's tolerance to vibration.

  1. High Temperature 300°C Directional Drilling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Kamalesh; Aaron, Dick; Macpherson, John

    2015-07-31

    Many countries around the world, including the USA, have untapped geothermal energy potential. Enhanced Geothermal Systems (EGS) technology is needed to economically utilize this resource. Temperatures in some EGS reservoirs can exceed 300°C. To effectively utilize EGS resources, an array of injector and production wells must be accurately placed in the formation fracture network. This requires a high temperature directional drilling system. Most commercial directional drilling services are rated for 175°C, while geothermal wells require operation at much higher temperatures. Two U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) projects have been initiated to develop a 300°C-capable directional drilling system, the first developing a drill bit, directional motor, and drilling fluid, and the second adding navigation and telemetry systems. This report is for the first project, “High Temperature 300°C Directional Drilling System, including drill bit, directional motor and drilling fluid, for enhanced geothermal systems,” award number DE-EE0002782. The drilling system consists of a drill bit, a directional motor, and drilling fluid. The DOE deliverables are three prototype drilling systems. We have developed three drilling motors; four roller-cone and five Kymera® bits; and a 300°C-stable drilling fluid, along with a lubricant additive for the metal-to-metal motor. Metal-to-metal directional motors require rotor and stator coatings for wear and corrosion resistance, and this coating research has been a significant part of the project. The drill bits performed well in the drill bit simulator test, and the complete drilling system has been tested drilling granite at Baker Hughes’ Experimental Test Facility in Oklahoma. The metal-to-metal motor was additionally subjected to a flow loop test at Baker Hughes’ Celle Technology Center in Germany, where it ran for more than 100 hours.

  2. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error count or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte … 32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory that requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with a conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. During most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, or 2 KByte is used, realizing 98% lower power consumption. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by latency, is also discussed: the latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
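
    The acceptable-raw-BER scaling can be reproduced from first principles. With i.i.d. bit errors, an ECC that corrects t errors in an n-bit codeword fails with probability P(X > t) for X ~ Binomial(n, p), and holding the parity ratio fixed while enlarging the codeword raises the raw BER that still meets a given post-ECC failure target. A sketch using a log-domain binomial and bisection; the failure-rate target is an arbitrary illustration, not the paper's figure:

```python
from math import lgamma, log, log1p, exp

def log_comb(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def codeword_failure(p, n, t):
    """P(more than t errors among n bits at raw BER p) = 1 - P(X <= t)."""
    ok = sum(exp(log_comb(n, k) + k * log(p) + (n - k) * log1p(-p))
             for k in range(t + 1))
    return 1.0 - ok

def acceptable_ber(n, t, target=1e-9):
    """Largest raw BER whose codeword failure stays below target (bisection)."""
    lo, hi = 1e-12, 0.5
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if codeword_failure(mid, n, t) < target:
            lo = mid
        else:
            hi = mid
    return lo

# 512 Byte codeword with 8-bit correction vs. 2 KByte at the same parity ratio
ber_512B = acceptable_ber(512 * 8, 8)
ber_2KB = acceptable_ber(2048 * 8, 32)
```

    `ber_2KB` comes out several times larger than `ber_512B`: the same qualitative effect, continued out to 32 KByte codewords, that underlies the 17-times claim.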

  3. Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.

    PubMed

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2011-12-01

    The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
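
    The two distances combine naturally as a weighted score. The following is a hedged sketch, not the paper's exact formulation: the FBD normalization and the fusion weight are plausible assumptions of mine, with iris codes as bit lists and boolean occlusion masks:

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bits usable in both codes."""
    usable = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    return sum(code_a[i] != code_b[i] for i in usable) / len(usable)

def fragile_bit_distance(frag_a, frag_b):
    """Fraction of positions whose fragile/consistent flag disagrees."""
    return sum(a != b for a, b in zip(frag_a, frag_b)) / len(frag_a)

def fused_score(hd, fbd, alpha=0.8):
    """Weighted score fusion; alpha is an assumed weight, not the paper's value."""
    return alpha * hd + (1.0 - alpha) * fbd
```

    For two images of the same eye both distances tend to be small, while for different eyes the Hamming distance sits near 0.5 and the fragile-bit patterns are uncorrelated, so the fused score separates the two populations better than Hamming distance alone.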

  4. High-Density Near-Field Optical Disc Recording

    NASA Astrophysics Data System (ADS)

    Shinoda, Masataka; Saito, Kimihiro; Ishimoto, Tsutomu; Kondo, Takao; Nakaoki, Ariyoshi; Ide, Naoki; Furuki, Motohiro; Takeda, Minoru; Akiyama, Yuji; Shimouma, Takashi; Yamamoto, Masanobu

    2005-05-01

    We developed a high-density near-field optical recording disc system using a solid immersion lens. The near-field optical pick-up consists of a solid immersion lens with a numerical aperture of 1.84, and the laser wavelength for recording is 405 nm. To realize the near-field optical recording disc, we used phase-change recording media and a molded polycarbonate substrate. A clear eye pattern at 112 GB capacity, with 160 nm track pitch and 50 nm bit length, was observed. The equivalent areal density is 80.6 Gbit/in2. The bottom bit error rate for a three-track write was 4.5 × 10^-5. The readout power margin and the recording power margin were ± 30.4% and ± 11.2%, respectively.
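
    The quoted areal density follows directly from the bit-cell geometry; a quick arithmetic check:

```python
track_pitch = 160e-9                      # m
bit_length = 50e-9                        # m
inch = 0.0254                             # m
cell_area = track_pitch * bit_length      # one bit cell, m^2
density = inch ** 2 / cell_area           # bits per square inch
# density / 1e9 comes to about 80.6, matching the stated 80.6 Gbit/in2
```

    (Total disc capacity then depends on the usable recording band area and formatting overhead, which the abstract does not break out.)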

  5. A 10 Gb/s laser driver in 130 nm CMOS technology for high energy physics applications

    DOE PAGES

    Zhang, T.; Tavernier, F.; Moreira, P.; ...

    2015-02-19

    The GigaBit Laser Driver (GBLD) is a key on-detector component of the transmitter side of the GigaBit Transceiver (GBT) system. We have developed a 10 Gb/s GBLD (GBLD10) in a 130 nm CMOS technology as part of the design effort towards upgrading the electrical components of the LHC experiments. The GBLD10 is based on a distributed-amplifier (DA) architecture and achieves data rates up to 10 Gb/s. It is capable of driving VCSELs with modulation currents up to 12 mA. Furthermore, a pre-emphasis function has been included in the proposed laser driver in order to compensate for the capacitive load and channel losses.

  6. A novel P300-based brain-computer interface stimulus presentation paradigm: moving beyond rows and columns

    PubMed Central

    Townsend, G.; LaPallo, B.K.; Boulay, C.B.; Krusienski, D.J.; Frye, G.E.; Hauser, C.K.; Schwartz, N.E.; Vaughan, T.M.; Wolpaw, J.R.; Sellers, E.W.

    2010-01-01

    Objective An electroencephalographic brain-computer interface (BCI) can provide a non-muscular means of communication for people with amyotrophic lateral sclerosis (ALS) or other neuromuscular disorders. We present a novel P300-based BCI stimulus presentation – the checkerboard paradigm (CBP). CBP performance is compared to that of the standard row/column paradigm (RCP) introduced by Farwell and Donchin (1988). Methods Using an 8×9 matrix of alphanumeric characters and keyboard commands, 18 participants used the CBP and RCP in counter-balanced fashion. With approximately 9 – 12 minutes of calibration data, we used a stepwise linear discriminant analysis for online classification of subsequent data. Results Mean online accuracy was significantly higher for the CBP, 92%, than for the RCP, 77%. Correcting for extra selections due to errors, mean bit rate was also significantly higher for the CBP, 23 bits/min, than for the RCP, 17 bits/min. Moreover, the two paradigms produced significantly different waveforms. Initial tests with three advanced ALS participants produced similar results. Furthermore, these individuals preferred the CBP to the RCP. Conclusions These results suggest that the CBP is markedly superior to the RCP in performance and user acceptability. Significance The CBP has the potential to provide a substantially more effective BCI than the RCP. This is especially important for people with severe neuromuscular disabilities. PMID:20347387
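
    The reported bit rates are consistent with the standard Wolpaw information-transfer formula for an N-target speller with accuracy P. A sketch; the selections-per-minute figure below is an illustrative assumption of mine, not a number from the paper:

```python
from math import log2

def bits_per_selection(n, p):
    """Wolpaw ITR per selection: log2 N + P log2 P + (1-P) log2((1-P)/(N-1))."""
    if p >= 1.0:
        return log2(n)
    return log2(n) + p * log2(p) + (1.0 - p) * log2((1.0 - p) / (n - 1))

# 8 x 9 matrix -> 72 targets; CBP online accuracy about 0.92
per_sel = bits_per_selection(72, 0.92)    # roughly 5.3 bits per selection
rate_per_min = per_sel * 4.4              # assuming ~4.4 selections per minute
```

    The same formula shows why accuracy matters so much: dropping P from 0.92 to the RCP's 0.77 costs more than a bit per selection before error-correction overhead is even counted.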

  7. BIT BY BIT: A Game Simulating Natural Language Processing in Computers

    ERIC Educational Resources Information Center

    Kato, Taichi; Arakawa, Chuichi

    2008-01-01

    BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
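
    The encode/exchange cycle the game is built on amounts to character-to-binary conversion plus a shared key. A toy sketch; the game's actual key scheme is not specified in the abstract, so the one-byte XOR key here is an assumption:

```python
def encode(word, key=0):
    """Render each character as 8 bits, optionally XOR-ed with a one-byte key."""
    return ' '.join(format(ord(ch) ^ key, '08b') for ch in word)

def decode(bits, key=0):
    """Invert encode(): parse each 8-bit group and undo the XOR."""
    return ''.join(chr(int(group, 2) ^ key) for group in bits.split())
```

    For example, `encode("BIT")` yields `01000010 01001001 01010100`, the kind of string participants exchange by hand in the game.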

  8. Is the straddle effect in contrast perception limited to second-order spatial vision?

    PubMed Central

    Graham, Norma V.; Wolfson, S. Sabina

    2018-01-01

    Previous work on the straddle effect in contrast perception (Foley, 2011; Graham & Wolfson, 2007; Wolfson & Graham, 2007, 2009) has used visual patterns and observer tasks of the type known as spatially second-order. After adaptation of about 1 s to a grid of Gabor patches all at one contrast, a second-order test pattern composed of two different test contrasts can be easy or difficult to perceive correctly. When the two test contrasts are both a bit less (or both a bit greater) than the adapt contrast, observers perform very well. However, when the two test contrasts straddle the adapt contrast (i.e., one of the test contrasts is greater than the adapt contrast and the other is less), performance drops dramatically. To explain this drop in performance—the straddle effect—we have suggested a contrast-comparison process. We began to wonder: Are second-order patterns necessary for the straddle effect? Here we show that the answer is “no”. We demonstrate the straddle effect using spatially first-order visual patterns and several different observer tasks. We also see the effect of contrast normalization using first-order visual patterns here, analogous to our prior findings with second-order visual patterns. We did find one difference between first- and second-order tasks: Performance in the first-order tasks was slightly lower. This slightly lower performance may be due to slightly greater memory load. For many visual scenes, the important quantity in human contrast processing may not be monotonic with physical contrast but may be something more like the unsigned difference between current contrast and recent average contrast. PMID:29904790

  9. Bit selection using field drilling data and mathematical investigation

    NASA Astrophysics Data System (ADS)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    A drilling process cannot be completed without a drill bit, so bit selection is an important task in drilling optimization. Selecting a bit is a key issue in planning and designing a well, simply because the drill bit accounts for a large share of total drilling cost. To perform this task, a back-propagation ANN model is developed and trained on drill-bit records from several offset wells. In this project, two ANN models are developed: one to predict the IADC bit code and one to predict ROP. Stage 1 finds the IADC bit code from all of the given field data; the output is the target IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. Thus, in the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.
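
    The two-stage pipeline rests on ordinary back-propagation regression. A self-contained sketch with synthetic stand-in data; in the real study the inputs would be field drilling records and the targets IADC code and ROP, so every name and number below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for offset-well records: 4 drilling features -> one target
X = rng.normal(size=(64, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=64)

# One-hidden-layer network trained by back-propagation (full-batch gradient descent)
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)
lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(y)          # dLoss/dpred
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)    # back-propagate through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred1 = forward(X)
loss1 = np.mean((pred1 - y) ** 2)
```

    Training drives the mean-squared error well below its initial value; the paper's Stage 3 feedback loop simply chains two such trained regressors.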

  10. Investigation of PDC bit failure based on stick-slip vibration analysis of drilling string system plus drill bit

    NASA Astrophysics Data System (ADS)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. The study of PDC bit failure based on stick-slip vibration analysis is therefore crucial to prolonging the service life of PDC bits and improving ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drilling string system plus PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are innovatively introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools because of its severer stick-slip vibration. Moreover, reducing WOB (weight on bit) and increasing driving torque can effectively mitigate the stick-slip vibration of the PDC bit, so PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed using an engineering example. It can be concluded that torsional impact mitigates stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.
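
    The stick-slip mechanism itself can be reproduced with a 1-DOF toy model: the drill string winds up like a torsional spring until the drive torque beats static friction at the bit, the bit breaks free against a lower kinetic torque, spins, and re-sticks. All parameters below are illustrative and this is not the paper's 4-DOF model:

```python
import math

J = 50.0           # effective bit/BHA inertia, kg m^2          (illustrative)
k = 500.0          # drill-string torsional stiffness, N m/rad
c = 5.0            # viscous damping, N m s/rad
T_static = 800.0   # breakaway (static) friction torque, N m
T_kinetic = 500.0  # kinetic friction torque, N m (< T_static drives the limit cycle)
omega_top = 10.0   # top-drive angular speed, rad/s
dt = 1e-4

theta = omega = 0.0
stick_time = 0.0
for step in range(200_000):                     # 20 s of simulated drilling
    t = step * dt
    drive = k * (omega_top * t - theta) - c * omega
    if abs(omega) < 1e-3 and abs(drive) < T_static:
        omega = 0.0                             # stick: bit held by static friction
        stick_time += dt
    else:
        friction = T_kinetic * math.copysign(1.0, omega if omega else drive)
        omega += dt * (drive - friction) / J    # slip: Newton's law in torsion
    theta += dt * omega
```

    With T_static > T_kinetic the bit alternates between stick intervals and fast slip bursts, the oscillation blamed above for tooth fracture; raising the drive speed or shrinking the friction contrast reduces the time spent stuck.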

  11. Boring apparatus capable of boring straight holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, C.R.

    The invention relates to a rock boring assembly for producing a straight hole, for use in a drill string above a pilot boring bit of predetermined diameter smaller than the desired final hole size. The boring assembly comprises a small conical boring bit and a larger conical boring bit mounted on the lower and upper ends of an elongated spacer, respectively, the major effective cutting diameter of each conical boring bit being at least 10% greater than the minor effective cutting diameter of the respective bit. The spacer has a cross-section resistant to bending and spaces the conical boring bits apart a distance at least 5 times the major cutting diameter of the small conical boring bit, thereby spacing the pivot points provided by the two conical boring bits to limit bodily angular deflection of the assembly and providing a substantial moment arm to resist lateral forces applied to the assembly by the pilot bit and drill string. The spacing between the conical bits is less than about 20 times the major cutting diameter of the lower conical boring bit, enabling the spacer to act as a bend-resistant beam that resists angular deflection of the axis of either conical boring bit relative to the other when it receives uneven lateral force due to non-uniform cutting conditions around the circumference of the bit. Advantageously, the boring bits are also self-advancing and feature skewed rollers. 7 claims.

  12. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision: the bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to the competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression ratio, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
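
    The algorithm itself is a few lines of bit masking. A sketch for float64 arrays; the production implementation (in NCO) additionally maps a requested number of significant decimal digits to mantissa bits, whereas here the kept-bit count is passed directly:

```python
import numpy as np

def bit_groom(data, keep_bits):
    """Quantize trailing mantissa bits: shave (zero) even elements, set (one) odd
    elements. Alternating shave/set keeps the quantization statistically unbiased."""
    ints = np.asarray(data, dtype=np.float64).view(np.uint64).copy()
    drop = 52 - keep_bits                  # float64 has a 52-bit mantissa
    tail = np.uint64((1 << drop) - 1)
    ints[0::2] &= ~tail                    # shave: round toward zero
    ints[1::2] |= tail                     # set: round away from zero
    return ints.view(np.float64)
```

    Groomed values agree with the originals to roughly keep_bits of mantissa (about keep_bits × log10(2) decimal digits), while the now-constant trailing bits compress into long runs for the downstream lossless coder.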

  13. A Compression Algorithm for Field Programmable Gate Arrays in the Space Environment

    DTIC Science & Technology

    2011-12-01

    Bit 1, Bit 0 … (V.3). Equation (V.3) is implemented with a string of XOR gates and Bit Basher blocks, as shown in Figure 31. As discussed in … [5], the string of Bit Basher blocks is used to separate each 35-bit value into 35 one-bit values, and the string of XOR gates is used to …

  14. Drag bit construction

    DOEpatents

    Hood, M.

    1986-02-11

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face. 4 figs.

  15. Drag bit construction

    DOEpatents

    Hood, Michael

    1986-01-01

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face.

  16. Acquisition and Post-Processing of Immunohistochemical Images.

    PubMed

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always necessary in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly or beyond reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps are reported, scientists not only follow good laboratory practice, but also avoid the ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Moreover, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
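    As one concrete example of "scaling the bit depth", a 16-bit acquisition can be linearly remapped to an 8-bit display range (a hypothetical minimal sketch; tools such as ImageJ expose this as min/max display scaling):

```python
import numpy as np

def scale_16bit_to_8bit(img16):
    """Linearly remap a 16-bit image onto the full 8-bit range.

    The min/max stretch preserves relative tonal values; only the
    quantization step changes, so no pixels clip.
    """
    img = img16.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: avoid division by zero
        return np.zeros(img16.shape, dtype=np.uint8)
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)
```
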

  17. A novel PON-based mobile distributed cluster of antennas approach to provide impartial and broadband services to end users

    NASA Astrophysics Data System (ADS)

    Sana, Ajaz; Saddawi, Samir; Moghaddassi, Jalil; Hussain, Shahab; Zaidi, Syed R.

    2010-01-01

    In this paper we propose a novel Passive Optical Network (PON)-based Mobile Worldwide Interoperability for Microwave Access (WiMAX) access network architecture to provide high-capacity, high-performance multimedia services to mobile WiMAX users. PONs do not require powered equipment; hence they cost less and need less network management. WiMAX technology has emerged as a viable candidate for the last-mile solution. In conventional WiMAX access networks, the base stations and Multiple Input Multiple Output (MIMO) antennas are connected by point-to-point links. In theory, the maximum WiMAX bandwidth is 70 Mbit/s over 31 miles. In practice, WiMAX can provide only one or the other: when operating at maximum range the bit error rate increases, forcing a lower bit rate, while shortening the range allows a device to operate at higher bit rates. Our focus in this paper is to increase both range and bit rate by using distributed clusters of MIMO antennas connected to WiMAX base stations through PON-based topologies. A novel quality-of-service (QoS) algorithm is also proposed to provide admission control and scheduling for classified traffic. The proposed architecture offers a flexible and scalable system design for different performance requirements and complexity.

  18. Preserving privacy of online digital physiological signals using blind and reversible steganography.

    PubMed

    Shiu, Hung-Jr; Lin, Bor-Sing; Huang, Chien-Hung; Chiang, Pei-Ying; Lei, Chin-Laung

    2017-11-01

    Physiological signals such as electrocardiograms (ECG) and electromyograms (EMG) are widely used to diagnose diseases. Presently, the Internet offers numerous cloud storage services that enable digital physiological signals to be uploaded for convenient access and use, and numerous online databases of medical signals have been built. The data in them must be processed in a manner that preserves patients' confidentiality. A reversible error-correcting-coding strategy is adopted to transform digital physiological signals into a new bit-stream, using a matrix in which the Hamming code is embedded to pass secret messages or private information. The shared keys are the matrix and the version of the Hamming code. An online open database, the MIT-BIH arrhythmia database, was used to test the proposed algorithms; the time complexity, capacity and robustness are evaluated, and comparisons with several related evaluations are also presented. This work proposes a reversible, low-payload steganographic scheme for preserving the privacy of physiological signals. An (n, m)-Hamming code is used to insert (n - m) secret bits into n bits of a cover signal. The number of embedded bits per modification is higher than in comparable methods, the computation is efficient, and the scheme is secure. Unlike other Hamming-code-based schemes, the proposed scheme is both reversible and blind. Copyright © 2017 Elsevier B.V. All rights reserved.
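    The (n, m)-Hamming idea, in which (n - m) secret bits are carried by n cover bits while changing at most one of them, can be sketched with the (7, 4) code (a generic matrix-embedding illustration, not the authors' reversible scheme):

```python
import numpy as np

# Parity-check matrix of the (7, 4) Hamming code: column j (1-based)
# holds the binary representation of j, so a syndrome directly names
# the position of a single flipped bit.
H = np.array([[(j >> k) & 1 for j in range(1, 8)] for k in range(3)])

def embed_hamming(cover_bits, secret_bits):
    """Hide 3 secret bits in 7 cover bits, flipping at most one cover bit."""
    syndrome = (H @ cover_bits) % 2
    e = syndrome ^ secret_bits            # column the syndrome must gain
    pos = int(e[0] + 2 * e[1] + 4 * e[2])
    stego = cover_bits.copy()
    if pos:                               # pos == 0: syndrome already matches
        stego[pos - 1] ^= 1
    return stego

def extract_hamming(stego_bits):
    """The receiver recovers the secret as the syndrome of the stego bits."""
    return (H @ stego_bits) % 2
```

    This is why the embedded-bits-per-modification figure is high: three bits travel in seven cover bits at the cost of at most one changed bit.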

  19. Fast and Flexible Successive-Cancellation List Decoders for Polar Codes

    NASA Astrophysics Data System (ADS)

    Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.

    2017-11-01

    Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standards. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase decoding speed by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: we show that our design achieves 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.

  20. Plain packaging and indirect expropriation of trademark rights under BITs: does FCTC help to establish a right to regulate tobacco products?

    PubMed

    Lo, Chang-Fa

    2012-12-01

    Recently the giant tobacco company Philip Morris served notice that it would launch an investor-to-state dispute settlement proceeding against the Australian Government over its introduction of plain packaging requirements for tobacco products. It is an important event in the fields of intellectual property, investment and international health law. The fundamental questions are whether the restriction of trademark rights resulting from the plain packaging requirement is a compensable indirect expropriation under BITs, or whether it falls within the scope of the government's right to regulate and is thus not compensable. This paper takes the view that the plain packaging requirement will deprive trademark rights of their essential value or core function and thus constitutes an indirect expropriation under BITs. However, such indirect expropriation meets the public interest requirement and the necessity requirement. The paper further argues that sovereign States have an inherent right to regulate domestic economic activities. Since the plain packaging requirements provided in the FCTC Guidelines are expected to protect the value of human lives and health, the protected values clearly outweigh the affected commercial interests of tobacco companies, and the justification for host States to adopt a plain packaging policy is strong. Thus, interpreters of BITs need to pay higher respect to the host State's sovereign power concerning its right to regulate tobacco products for a legitimate purpose. The paper concludes that host States should enjoy a defense of the right to regulate, allowing them to refuse compensation. The author believes this is the only reasonable conclusion that avoids possible conflicts between different treaty systems (BITs and the FCTC) and between different legal systems and fields (trademark law, investment law and international health law).

  1. PDC-bit performance under simulated borehole conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, E.E.; Azar, J.J.

    1993-09-01

    Laboratory drilling tests were used to investigate the effects of pressure on polycrystalline-diamond-compact (PDC) drill-bit performance. Catoosa shale core samples were drilled with PDC and roller-cone bits at up to 1,750-psi confining pressure. All tests were conducted in a controlled environment with a full-scale laboratory drilling system. Test results indicate that, under similar operating conditions, increases in confining pressure reduce PDC-bit performance as much as or more than conventional-rock-bit performance. Specific energy calculations indicate that a combination of rock strength, chip hold-down, and bit balling may have reduced performance. Quantifying the degree to which pressure reduces PDC-bit performance will help researchers interpret test results and improve bit designs, and will help drilling engineers run PDC bits more effectively in the field.

  2. Accurate Iris Recognition at a Distance Using Stabilized Iris Encoding and Zernike Moments Phase Features.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2014-07-10

    Accurate iris recognition from distantly acquired face or eye images requires effective strategies that can account for significant variations in segmented iris image quality. Such variations correlate strongly with the consistency of the encoded iris features, and knowledge of these fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed that simultaneously accounts for both the local consistency of the iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes the fragile bits while simultaneously rewarding more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike-moment-based phase encoding of iris features is proposed. These Zernike-moment phase features are computed from partially overlapping regions to better accommodate local pixel-region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both the global and localized iris features. The superiority of the proposed iris matching strategy is ascertained by comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy achieves significant improvement in iris matching accuracy over competing approaches in the literature, i.e., average improvements of 54.3%, 32.7% and 42.6% in equal error rates for UBIRIS.v2, FRGC, and CASIA.v4-distance, respectively.

  3. LOBSTER - The Next Generation

    NASA Astrophysics Data System (ADS)

    Schwenk, Arne

    2016-04-01

    Since 1997 K.U.M. GmbH has designed and manufactured ocean bottom seismometers. During the last three years we designed a new instrument, which is presented here. Higher resolution, higher accuracy and lower power consumption led to a unique instrument, the world's smallest broadband long-term OBS. Key data are: 32-bit, 143 dB, 300 mW, 120 s, 200 kg deployment weight, the size of half a pallet.
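    For reference, the 143 dB dynamic-range figure and the 32-bit word length relate through the usual effective-number-of-bits conversion, roughly 6.02 dB per bit (a standard rule of thumb, not a specification from the abstract):

```python
def effective_bits(snr_db):
    """Effective number of bits (ENOB) from an SNR/dynamic range in dB.

    Standard conversion for an ideal quantizer: ENOB = (SNR - 1.76) / 6.02.
    """
    return (snr_db - 1.76) / 6.02
```

    So 143 dB corresponds to roughly 23.5 effective bits, comfortably inside the 32-bit digitizer word.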

  4. Hey! A Flea Bit Me!

    MedlinePlus


  5. Hey! A Louse Bit Me!

    MedlinePlus


  6. PDC bits: What's needed to meet tomorrow's challenge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, T.M.; Sinor, L.A.

    1994-12-31

    When polycrystalline diamond compact (PDC) bits were introduced in the mid-1970s they showed tantalizingly high penetration rates in laboratory drilling tests. Single-cutter tests indicated that they had the potential to drill very hard rocks. Unfortunately, 20 years later we're still striving to reach the potential that these bits seem to have. Many problems have been overcome, and PDC bits have offered capabilities not possible with roller-cone bits. PDC bits provide the most economical bit choice in many areas, but their limited durability has hampered their application in many other areas.

  7. Performance analysis of three-dimensional-triple-level cell and two-dimensional-multi-level cell NAND flash hybrid solid-state drives

    NASA Astrophysics Data System (ADS)

    Sakaki, Yukiya; Yamada, Tomoaki; Matsui, Chihiro; Yamaga, Yusuke; Takeuchi, Ken

    2018-04-01

    In order to improve the performance of solid-state drives (SSDs), hybrid SSDs have been proposed. Hybrid SSDs combine two or more types of NAND flash memory, or NAND flash memory and storage-class memory (SCM). However, hybrid SSDs adopting SCMs are more expensive than NAND-flash-only SSDs because of the high bit cost of SCMs. This paper proposes a unique hybrid SSD with two-dimensional (2D) horizontal multi-level cell (MLC) and three-dimensional (3D) vertical triple-level cell (TLC) NAND flash memories to achieve higher cost-performance. The 2D-MLC/3D-TLC hybrid SSD achieves up to 31% higher performance than the conventional 2D-MLC/2D-TLC hybrid SSD. The factors behind the performance difference between the proposed and conventional hybrid SSDs are analyzed by changing the block size, read/write/erase latencies, and write unit of the 3D-TLC NAND flash memory, using a transaction-level modeling simulator.

  8. Mise en oeuvre et caracterisation d'une methode d'injection de pannes a haut niveau d'abstraction

    NASA Astrophysics Data System (ADS)

    Robache, Remi

    Nowadays, the effects of cosmic rays on electronics are well known. Studies have shown that neutrons are the main cause of non-destructive errors in circuits embedded in aircraft. Moreover, shrinking transistor sizes make all circuits more sensitive to these effects. Radiation-tolerant circuits are sometimes used to improve robustness, but they are expensive and their technologies often lag a few generations behind those of non-tolerant circuits. Designers therefore prefer to use conventional circuits with mitigation techniques to improve tolerance to soft errors. The dependability of a circuit must be analyzed and verified throughout its design process, and conventional design methodologies need to be adapted to evaluate tolerance to the non-destructive errors caused by radiation. Designers need new tools and methodologies to validate their mitigation strategies against system requirements. In this thesis, we propose a methodology for capturing the faulty behavior of a circuit at a low level of abstraction and applying it at a higher level. To do so, we introduce the concept of faulty-behavior signatures, which allows the creation, at a high (system) level of abstraction, of models that reflect with high fidelity the faulty behavior learned at a low (gate) level. We successfully replicated the faulty behavior of an 8-bit adder and an 8-bit multiplier in Simulink, with correlation coefficients of 98.53% and 99.86%, respectively. The methodology generates a library of faulty Simulink components, allowing designers to verify the dependability of their models early in the design flow. Results for three different circuits are presented and analyzed throughout the thesis.
Within the framework of this project, a paper was published at the NEWCAS 2013 conference (Robache et al., 2013). It presents the concept of the faulty-behavior signature, the signature-generation methodology we developed, and our experiments with an 8-bit multiplier.

  9. A digital-type fluxgate magnetometer using a sigma-delta digital-to-analog converter for a sounding rocket experiment

    NASA Astrophysics Data System (ADS)

    Iguchi, Kyosuke; Matsuoka, Ayako

    2014-07-01

    One of the design challenges for future magnetospheric satellite missions is optimizing the mass, size, and power consumption of the instruments to meet the mission requirements. We have developed a digital-type fluxgate (DFG) magnetometer that is anticipated to have significantly less mass and volume than the conventional analog type. Hitherto, the lack of a space-grade digital-to-analog converter (DAC) with good accuracy has prevented the development of a high-performance DFG. To solve this problem, we developed a high-resolution DAC using parts whose performance is equivalent to that of existing space-grade parts. The developed DAC consists of a 1-bit second-order sigma-delta modulator and a fourth-order analog low-pass filter. We tested the performance of the DAC experimentally and found that it had better than 17-bit resolution over 80% of the measurement range, and the linearity error was 2^-13.3 of the measurement range. We built a DFG flight model (in which this DAC was embedded) for a sounding rocket experiment as an interim step in the development of a future satellite mission. The noise of this DFG was 0.79 nTrms at 0.1-10 Hz, which corresponds to roughly 17-bit resolution. The results show that the sigma-delta DAC and the DFG performed consistently with our optimized design, and the noise was as expected from the noise simulation. Finally, we confirmed that the DFG worked successfully during the flight of the sounding rocket.
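    The 1-bit sigma-delta principle behind such a DAC can be illustrated with a first-order software model (the flight DAC is second-order; this simplified sketch only shows how a 1-bit stream encodes a level):

```python
def sigma_delta_1bit(samples):
    """First-order 1-bit sigma-delta modulator.

    The integrator accumulates the error between the input and the
    fed-back 1-bit output; averaging (low-pass filtering) the +/-1
    stream recovers the input level.
    """
    integrator = 0.0
    feedback = 0.0
    bits = []
    for s in samples:            # inputs assumed in [-1, 1]
        integrator += s - feedback
        feedback = 1.0 if integrator >= 0 else -1.0
        bits.append(feedback)
    return bits
```

    In the DFG's DAC, an analog fourth-order low-pass filter plays this averaging role in hardware.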

  10. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    PubMed

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance on both simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and the speed-up scales up to 64 threads.

  11. Blue Laser Diode Enables Underwater Communication at 12.4 Gbps

    PubMed Central

    Wu, Tsai-Chen; Chi, Yu-Chieh; Wang, Huai-Yung; Tsai, Cheng-Ting; Lin, Gong-Ru

    2017-01-01

    To enable high-speed underwater wireless optical communication (UWOC) in tap-water and seawater environments over long distances, a 450-nm blue GaN laser diode (LD) directly modulated by pre-leveled 16-quadrature amplitude modulation (QAM) orthogonal frequency division multiplexing (OFDM) data was employed to implement its maximal transmission capacity of up to 10 Gbps. The proposed UWOC in tap water provided a maximal allowable communication bit rate increase from 5.2 to 12.4 Gbps as the underwater transmission distance was reduced from 10.2 to 1.7 m, exhibiting a bit rate/distance decaying slope of -0.847 Gbps/m. When conducting the same type of UWOC in seawater, light scattering induced by impurities attenuated the blue laser power, thereby degrading the transmission with a slightly higher decay ratio of 0.941 Gbps/m. The blue-LD-based UWOC enables a 16-QAM OFDM bit rate of up to 7.2 Gbps for transmission over more than 6.8 m in seawater. PMID:28094309

  12. Realisation and robustness evaluation of a blind spatial domain watermarking technique

    NASA Astrophysics Data System (ADS)

    Parah, Shabir A.; Sheikh, Javaid A.; Assad, Umer I.; Bhat, Ghulam M.

    2017-04-01

    A blind digital image watermarking scheme based on the spatial domain is presented and investigated in this paper. The watermark has been embedded in intermediate significant bit planes, besides the least significant bit plane, at address locations determined by a pseudorandom address vector (PAV). Embedding via the PAV makes it difficult for an adversary to locate the watermark and hence adds to the security of the system. The scheme has been evaluated to ascertain the spatial locations that are robust to various image processing and geometric attacks: JPEG compression, additive white Gaussian noise, salt-and-pepper noise, filtering and rotation. The experimental results reveal an interesting fact: for all the above-mentioned attacks other than rotation, the higher the bit plane in which the watermark is embedded, the more robust the system. Further, the perceptual quality of the watermarked images obtained with the proposed system has been compared with some state-of-the-art watermarking techniques. The proposed technique outperforms the techniques under comparison, even when compared with the worst-case peak signal-to-noise ratio obtained in our scheme.
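    The address-vector embedding can be sketched as follows (a hypothetical minimal illustration: `plane` 0 is the LSB and `plane` 3 an intermediate significant bit, with the PAV derived from a shared seed; the published scheme's exact PAV generation is not reproduced here):

```python
import numpy as np

def pav(seed, n_pixels, n_bits):
    """Pseudorandom address vector: embedding locations from a shared key."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_pixels, size=n_bits, replace=False)

def embed_watermark(img, wm_bits, plane, seed):
    """Write watermark bits into the given bit plane at PAV locations."""
    flat = img.flatten()                      # copy; original stays intact
    clear = np.uint8(0xFF ^ (1 << plane))     # mask that zeroes the target bit
    for bit, addr in zip(wm_bits, pav(seed, flat.size, len(wm_bits))):
        flat[addr] = (flat[addr] & clear) | np.uint8(bit << plane)
    return flat.reshape(img.shape)

def extract_watermark(img, n_bits, plane, seed):
    flat = img.flatten()
    return [int(flat[a] >> plane) & 1 for a in pav(seed, flat.size, n_bits)]
```

    Embedding in a higher plane increases robustness (the bit survives more distortion) at the cost of a larger per-pixel change, here at most 2^plane gray levels.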

  13. Information rates of probabilistically shaped coded modulation for a multi-span fiber-optic communication system with 64QAM

    NASA Astrophysics Data System (ADS)

    Fehenberger, Tobias

    2018-02-01

    This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated, which are signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increases the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
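    The nonuniform 64QAM input in such systems is typically drawn from a Maxwell-Boltzmann family over the per-dimension amplitudes. A small sketch (illustrative only; `nu` is a generic shaping parameter, not a value from the paper) shows how shaping trades entropy, and hence rate, against average power:

```python
import math

LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]   # 8-PAM per dimension of 64QAM

def mb_distribution(nu):
    """Maxwell-Boltzmann probabilities p(a) ~ exp(-nu * a^2) over the levels."""
    w = [math.exp(-nu * a * a) for a in LEVELS]
    z = sum(w)
    return [x / z for x in w]

def entropy_bits(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def avg_power(p):
    return sum(x * a * a for x, a in zip(p, LEVELS))
```

    At equal average power the shaped constellation can be scaled up, which is the source of the SNR versus AIR trade-off discussed in the abstract.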

  14. A multi-state magnetic memory dependent on the permeability of Metglas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrie, J. R.; Wieland, K. A.; Timmerwilke, J. M.

    A three-state magnetic memory was developed based on differences in the magnetic permeability of a soft ferromagnetic media, Metglas 2826MB (Fe{sub 40}Ni{sub 38}Mo{sub 4}B{sub 18}). By heating bits of a 250 nm thick Metglas film with 70–100 mW of laser power, we were able to tune the local microstructure, and hence, the permeability. Ternary memory states were created by using lower laser power to enhance the initial permeability through localized atomic rearrangement and higher power to reduce the permeability through crystallization. The permeability of the bits was read by detecting variations in an external 32 Oe probe field within 10 μm of the media via a magnetic tunnel junction read head. Compared to data based on remanent magnetization, these multi-permeability bits have enhanced insensitivity to unexpected field and temperature changes. We found that data was not corrupted after exposure to fields of 1 T or temperatures of 423 K, indicating the effectiveness of this multi-state approach for safely storing large amounts of data.

  15. Development of the Low-cost Analog-to-Digital Converter (for nuclear physics experiments) with PC sound card

    NASA Astrophysics Data System (ADS)

    Sugihara, Kenkoh

    2009-10-01

    A low-cost ADC (analog-to-digital converter) with embedded shaping for the undergraduate physics laboratory has been developed using a home-made circuit and a PC sound card. Although an ADC is an essential part of an experimental setup, commercially available units are very expensive and scarce in undergraduate laboratory experiments. The system developed in the present work is designed for a gamma-ray spectroscopy laboratory with NaI(Tl) counters, but is not limited to it. For this purpose, the system performance is set to a 1-kHz sampling rate with 10-bit resolution, using a typical PC sound card with a 44.1-kHz or higher sampling rate and a 16-bit-resolution ADC, together with an added shaping circuit. Details of the system and the status of its development will be presented.

  16. Direct-phase and amplitude digitalization based on free-space interferometry

    NASA Astrophysics Data System (ADS)

    Kleiner, Vladimir; Rudnitsky, Arkady; Zalevsky, Zeev

    2017-12-01

    A novel ADC configuration that can be characterized as a photonic-domain flash analog-to-digital converter operating upon free-space interferometry is proposed and analysed. The structure can be used as the front-end of a coherent receiver as well as for other applications. Two configurations are considered: the first, ‘direct free-space interference’, allows simultaneous measurement of the optical phase and amplitude; the second, ‘extraction of the ac component of interference by means of pixel-by-pixel balanced photodetection’, allows only phase digitization but with significantly higher sensitivity. For both proposed configurations, we present Monte Carlo estimations of the performance limitations, due to optical noise and photo-current noise, at sampling rates of 60 giga-samples per second. In terms of bit resolution, we simulated multiple cases of growing complexity, up to 4 bits for the amplitude and up to 6 bits for the phase. The simulations show that the digitization errors in the optical domain can be reduced to levels close to the quantization noise limits. Preliminary experimental results validate the fundamentals of the proposed idea.
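    For scale, digitizing an optical phase to b bits means mapping [0, 2π) onto 2^b uniform codes (a generic quantizer sketch, not the interferometric read-out itself):

```python
import math

def quantize_phase(phase, bits):
    """Map a phase in radians to one of 2**bits uniform codes over [0, 2*pi)."""
    levels = 1 << bits
    return int(((phase % (2 * math.pi)) / (2 * math.pi)) * levels) % levels
```

    With 6 phase bits the quantization step is 2π/64, roughly 0.1 rad, which sets the quantization-noise floor the simulations approach.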

  17. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.

  18. Charge pump-based MOSFET-only 1.5-bit pipelined ADC stage in digital CMOS technology

    NASA Astrophysics Data System (ADS)

    Singh, Anil; Agarwal, Alpana

    2016-10-01

    A simple low-power and low-area metal-oxide-semiconductor field-effect transistor-only fully differential 1.5-bit pipelined analog-to-digital converter stage is proposed and designed in Taiwan Semiconductor Manufacturing Company 0.18 μm-technology using BSIM3v3 parameters with supply voltage of 1.8 V in inexpensive digital complementary metal-oxide semiconductor (CMOS) technology. It is based on charge pump technique to achieve the desired voltage gain of 2, independent of capacitor mismatch and avoiding the need of power hungry operational amplifier-based architecture to reduce the power, Si area and cost. Various capacitances are implemented by metal-oxide semiconductor capacitors, offering compatibility with cheaper digital CMOS process in order to reduce the much required manufacturing cost.

  19. Noise removing in encrypted color images by statistical analysis

    NASA Astrophysics Data System (ADS)

    Islam, N.; Puech, W.

    2012-03-01

    Cryptographic techniques are used to secure confidential data from unauthorized access, but these techniques are very sensitive to noise: a single bit change in encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data encrypted using the AES algorithm in CBC mode. In order to remove the noise, a method is proposed which is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal from visual data in the encrypted domain.
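    The catastrophic yet localized effect of a single flipped ciphertext bit in CBC mode — the property the proposed statistical method exploits — can be demonstrated with a toy block cipher. The 4-round Feistel network below is purely illustrative (the paper uses AES); it merely provides an invertible block transform with full diffusion:

    ```python
    import hashlib

    BLOCK = 16  # block size in bytes

    def _xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def _f(half, key):
        # Feistel round function: a keyed hash truncated to half a block.
        return hashlib.sha256(key + half).digest()[:BLOCK // 2]

    def _encrypt_block(block, keys):
        l, r = block[:BLOCK // 2], block[BLOCK // 2:]
        for k in keys:
            l, r = r, _xor(l, _f(r, k))
        return l + r

    def _decrypt_block(block, keys):
        l, r = block[:BLOCK // 2], block[BLOCK // 2:]
        for k in reversed(keys):
            l, r = _xor(r, _f(l, k)), l
        return l + r

    def cbc_encrypt(pt, keys, iv):
        ct, prev = b"", iv
        for i in range(0, len(pt), BLOCK):
            prev = _encrypt_block(_xor(pt[i:i + BLOCK], prev), keys)
            ct += prev
        return ct

    def cbc_decrypt(ct, keys, iv):
        pt, prev = b"", iv
        for i in range(0, len(ct), BLOCK):
            block = ct[i:i + BLOCK]
            pt += _xor(_decrypt_block(block, keys), prev)
            prev = block
        return pt

    # Flipping one bit in ciphertext block i garbles the entire decrypted
    # block i (cipher diffusion) but flips exactly one bit in block i+1
    # (the CBC chaining XOR); later blocks are untouched.
    ```

    This is precisely the error pattern — one fully corrupted block plus one predictable bit flip in its successor — that a block-wise statistical analysis can detect and correct.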

  20. Ka-Band Phased Array System Characterization

    NASA Technical Reports Server (NTRS)

    Acosta, R.; Johnson, S.; Sands, O.; Lambert, K.

    2001-01-01

    Phased Array Antennas (PAAs) using patch-radiating elements are projected to transmit data at rates several orders of magnitude higher than currently offered with reflector-based systems. However, there are a number of potential sources of degradation in the Bit Error Rate (BER) performance of the communications link that are unique to PAA-based links: short spacing of radiating elements can induce mutual coupling; long spacing can induce grating lobes; modulo-2π phase errors can add to Inter-Symbol Interference (ISI); and the phase shifters and power divider network introduce losses into the system. This paper describes efforts underway to test and evaluate the effects of the performance-degrading features of phased-array antennas when used in a high-data-rate modulation link. The tests and evaluations described here uncover the interaction between the electrical characteristics of a PAA and the BER performance of a communication link.

  1. On Nibbles and Bytes: The Conundrum of Memory for Space Systems - NASA Electronic Parts and Packaging (NEPP) and Efforts in Memories

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.; Ladbury, Ray; Pellish, Jonathan; Sheldon, Douglas; Oldham, Timothy; Berg, Melanie D.; Cohn, Lewis M.

    2009-01-01

    Radiation requirements and trends. TID: 1) >90% of NASA applications are < 100 krad-Si in piece-part requirements. a) Many commercial devices (NVM and SDRAMs) meet or come close to this. b) Charge pump TID tolerance has improved an order of magnitude over the last 10 years. 2) There are always a few programs with higher-level needs and, of course, defense needs. SEL: 1) Prefer none, or rates that are considered low risk. a) Latent damage is a bear to deal with. 2) As we're packing cells tighter, and even with lower Vdd, we're seeing SEL on commercial devices regularly (<90 nm). a) Often in power conversion, I/O, or control areas. SEU: 1) It's not the bit errors, it's the SEFIs that are the biggest issues. a) Scrubbing concerns for risk, power, speed.

  2. Optimization of the random multilayer structure to break the random-alloy limit of thermal conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Gu, Chongjie; Ruan, Xiulin, E-mail: ruan@purdue.edu

    2015-02-16

    A low lattice thermal conductivity (κ) is desired for thermoelectrics, and a highly anisotropic κ is essential for applications such as magnetic layers for heat-assisted magnetic recording, where a high cross-plane (perpendicular to layer) κ is needed to ensure fast writing while a low in-plane κ is required to avoid interaction between adjacent bits of data. In this work, we conduct molecular dynamics simulations to investigate the κ of superlattice (SL), random multilayer (RML), and alloy structures, and reveal that the RML can have 1-2 orders of magnitude higher anisotropy in κ than the SL and alloy. We systematically explore how the κ of SL, RML, and alloy changes relative to each other for different bond strength, interface roughness, atomic mass, and structure size, which provides guidance for choosing materials and structural parameters to build RMLs with optimal performance for specific applications.

  3. Low-Loss Photonic Reservoir Computing with Multimode Photonic Integrated Circuits.

    PubMed

    Katumba, Andrew; Heyvaert, Jelle; Schneider, Bendix; Uvin, Sarah; Dambre, Joni; Bienstman, Peter

    2018-02-08

    We present a numerical study of a passive integrated photonics reservoir computing platform based on multimodal Y-junctions. We propose a novel design of this junction where the level of adiabaticity is carefully tailored to capture the radiation loss in higher-order modes, while at the same time providing additional mode mixing that increases the richness of the reservoir dynamics. With this design, we report an overall average combination efficiency of 61% compared to the standard 50% for the single-mode case. We demonstrate that with this design, much more power is able to reach the distant nodes of the reservoir, leading to increased scaling prospects. We use the example of a header recognition task to confirm that such a reservoir can be used for bit-level processing tasks. The design itself is CMOS-compatible and can be fabricated through the known standard fabrication procedures.

  4. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS)-driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays at corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used to operate the interconnect. Eye diagrams were measured from individual channels and analyzed with a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian-distributed random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken with a standard BER tester for verification; the results of the two methods agreed to within the same order of magnitude and 50% accuracy. The integrated interconnect was investigated in an optoelectronic processing architecture as a digital halftoning image processor. Error-diffusion networks implemented with the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
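    The Gaussian-noise BER estimate computed from eye diagrams is conventionally expressed through the Q factor. A minimal sketch follows; the abstract does not give the measured eye statistics, so the numbers are illustrative:

    ```python
    import math

    def q_factor(mu1, mu0, sigma1, sigma0):
        """Q factor from an eye diagram: mean levels of the '1' and '0'
        rails (mu1, mu0) and their noise standard deviations."""
        return (mu1 - mu0) / (sigma1 + sigma0)

    def ber_from_q(q):
        """BER under the Gaussian-noise model: BER = 0.5 * erfc(Q / sqrt(2))."""
        return 0.5 * math.erfc(q / math.sqrt(2.0))

    # Illustrative benchmark: a Q of about 6 corresponds to a BER near 1e-9,
    # a common target for optical links.
    ```

    Extracting (mu, sigma) per rail from each measured eye diagram lets the BER be computed instantaneously, which is the essence of the method described above.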

  5. Performance Analysis of OCDMA Based on AND Detection in FTTH Access Network Using PIN & APD Photodiodes

    NASA Astrophysics Data System (ADS)

    Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.

    2011-06-01

    In order to compare PIN photodetectors and avalanche photodiodes (APDs), a system using the double-weight (DW) code was designed to evaluate the performance of optical spectrum CDMA in an FTTH network with a point-to-multipoint (P2MP) application. The performance of the PIN against the APD is compared through simulation using the OptiSystem software, version 7. Two networks were designed: one using a PIN photodetector and the second using an APD, each operated with and without an erbium-doped fiber amplifier (EDFA). It is found that the APD outperforms the PIN photodetector in all simulation results. The conversion used a Mach-Zehnder interferometer (MZI) wavelength converter. We also study a proposed detection scheme known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBGs) acting as encoder and decoder. The FBGs encode and decode the spectral amplitude coding, namely the double-weight (DW) code, in Optical Code Division Multiple Access (OCDMA). Performance is characterized through the bit error rate (BER) and bit rate (BR), as well as the received power at various bit rates.

  6. ZEP520A cold-development technique and tool for ultimate resolution to fabricate 1Xnm bit pattern EB master mold for nano-imprinting lithography for HDD/BPM development

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hideo; Iyama, Hiromasa

    2012-06-01

    Poor-solvent developers are effective for resolution enhancement of a polymer-type EB resist such as ZEP520A. Another approach is the "cold-development" technique, which has usually been accomplished by dip development. We therefore designed and successfully built a single-wafer spin-development tool for cold development down to -10 °C in order to resolve the difficulties of dip development. Cold development indeed improved ZEP520A resolution and hole CD size uniformity, achieving 35-nm-pitch BPM patterns with the standard developer ZED-N50, but not yet 25-nm pitch. By employing a poor-solvent mixture of iso-propyl alcohol (IPA) and fluorocarbon (FC), 25-nm-pitch BPM patterns were accomplished. However, cold development showed almost no improvement with the IPA/FC mixture developer. This paper describes the cold-development technique and tool, as well as the results, for ZEP520A resolution enhancement to fabricate 1X-nm bits (holes) in an EB master mold for nano-imprint lithography for 1 Tbit/inch2 and 25-nm-pitch bit-patterned media development.

  7. Performance test of different 3.5 mm drill bits and consequences for orthopaedic surgery.

    PubMed

    Clement, Hans; Zopf, Christoph; Brandner, Markus; Tesch, Norbert P; Vallant, Rudolf; Puchwein, Paul

    2015-12-01

    Drilling of bones in orthopaedic and trauma surgery is a common procedure. There are as yet no recommendations about which drill bits/coatings should be preferred or when to change a used drill bit. In preliminary studies, typical "drilling patterns" of surgeons concerning spindle speed and feeding force were recorded. Different feeding forces were tested and abrasion was analysed using magnification and a scanning electron microscope (SEM). The acquired data were used for programming a friction stir welding machine (FSWM). Four drill bits (a default AISI 440A, an HSS, an AISI 440B, and a zirconium-oxide drill bit) were analysed for abrasive wear after 20/40/60 machine-guided and hand-driven drilled holes. Additionally, different drill coatings [diamond-like carbon/graphitic (DLC), titanium nitride/carbide (Ti-N)] were tested. The mean feeding force applied by surgeons was 45 ± 15.6 Newton (N). HSS bits were still usable after 51 drill holes. Both coated AISI 440A bits showed considerable breakouts of the main cutting edge after 20 hand-driven drilled holes. The coated HSS bit showed very low abrasive wear. The non-coated AISI 440B bit had a durability similar to the HSS bits. The ZrO2 dental drill bit outperformed its competitors (no considerable abrasive wear at >100 holes). If the default AISI 440A drill bit cannot be checked under 20-30× magnification after surgery, it should be replaced after 20 hand-driven drilled holes. Low-priced coated HSS bits could be a powerful alternative.

  8. Bit-1 Mediates Integrin-dependent Cell Survival through Activation of the NFκB Pathway*

    PubMed Central

    Griffiths, Genevieve S.; Grundl, Melanie; Leychenko, Anna; Reiter, Silke; Young-Robbins, Shirley S.; Sulzmaier, Florian J.; Caliva, Maisel J.; Ramos, Joe W.; Matter, Michelle L.

    2011-01-01

    Loss of properly regulated cell death and cell survival pathways can contribute to the development of cancer and cancer metastasis. Cell survival signals are modulated by many different receptors, including integrins. Bit-1 is an effector of anoikis (cell death due to loss of attachment) in suspended cells. The anoikis function of Bit-1 can be counteracted by integrin-mediated cell attachment. Here, we explored integrin regulation of Bit-1 in adherent cells. We show that knockdown of endogenous Bit-1 in adherent cells decreased cell survival and re-expression of Bit-1 abrogated this effect. Furthermore, reduction of Bit-1 promoted both staurosporine and serum-deprivation induced apoptosis. Indeed knockdown of Bit-1 in these cells led to increased apoptosis as determined by caspase-3 activation and positive TUNEL staining. Bit-1 expression protected cells from apoptosis by increasing phospho-IκB levels and subsequently bcl-2 gene transcription. Protection from apoptosis under serum-free conditions correlated with bcl-2 transcription and Bcl-2 protein expression. Finally, Bit-1-mediated regulation of bcl-2 was dependent on focal adhesion kinase, PI3K, and AKT. Thus, we have elucidated an integrin-controlled pathway in which Bit-1 is, in part, responsible for the survival effects of cell-ECM interactions. PMID:21383007

  9. Effects of pre-conditioning on behavior and physiology of horses during a standardised learning task

    PubMed Central

    Webb, Holly; Starling, Melissa J.; Freire, Rafael; Buckley, Petra; McGreevy, Paul D.

    2017-01-01

    Rein tension is used to apply pressure to control both ridden and unridden horses. The pressure is delivered by equipment such as the bit, which may restrict voluntary movement and cause changes in behavior and physiology. Managing the effects of such pressure on arousal level and behavioral indicators will optimise horse learning outcomes. This study examined the effect of training horses to turn away from bit pressure on cardiac outcomes and behavior (including responsiveness) over the course of eight trials in a standardised learning task. The experimental procedure consisted of a resting phase, treatment/control phase, standardised learning trials requiring the horses (n = 68) to step backwards in response to bit pressure and a recovery phase. As expected, heart rate increased (P = 0.028) when the handler applied rein tension during the treatment phase. The amount of rein tension required to elicit a response during treatment was higher on the left than the right rein (P = 0.009). Total rein tension required for trials reduced (P < 0.001) as they progressed, as did time taken (P < 0.001) and steps taken (P < 0.001). The incidence of head tossing decreased (P = 0.015) with the progression of the trials and was higher (P = 0.018) for the control horses than the treated horses. These results suggest that preparing the horses for the lesson and slightly raising their arousal levels, improved learning outcomes. PMID:28358892

  10. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. 
Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
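    The alternating shave/set quantization described above can be sketched for single-precision values as follows. This is a minimal illustration, not the NCO implementation: the real tool derives the number of mantissa bits to keep from the requested decimal precision, whereas here `keep_bits` is supplied directly.

    ```python
    import struct

    def bit_groom(values, keep_bits):
        """Quantize float32 values by alternately shaving (zeroing) and
        setting (to one) the trailing mantissa bits of consecutive values.
        keep_bits: number of explicit float32 mantissa bits (of 23) to keep."""
        mask_bits = 23 - keep_bits
        mask = (1 << mask_bits) - 1
        out = []
        for i, v in enumerate(values):
            u = struct.unpack("<I", struct.pack("<f", v))[0]
            if i % 2 == 0:
                u &= ~mask & 0xFFFFFFFF   # shave: biases the value slightly low
            else:
                u |= mask                 # set: biases the value slightly high
            out.append(struct.unpack("<f", struct.pack("<I", u))[0])
        return out
    ```

    Alternating the two directions cancels the systematic low bias that pure bit shaving introduces, which matters for mean statistics over large arrays; the resulting trailing runs of identical bits are what DEFLATE then compresses well.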

  11. Causes of wear of PDC bits and ways of improving their wear resistance

    NASA Astrophysics Data System (ADS)

    Timonin, VV; Smolentsev, AS; Shakhtorin, I. O.; Polushin, NI; Laptev, AI; Kushkhabiev, AS

    2017-02-01

    The scope of the paper encompasses the basic factors that influence PDC bit efficiency, and feasible ways of eliminating these negative factors are illustrated. The wash-fluid flow in a standard bit is modeled, the resultant pattern of bit washing is analyzed, and recommendations are made on modifying the PDC bit design.

  12. Feasibilty of a Multi-bit Cell Perpendicular Magnetic Tunnel Junction Device

    NASA Astrophysics Data System (ADS)

    Kim, Chang Soo

    The ultimate objective of this research project was to explore the feasibility of making a multi-bit cell perpendicular magnetic tunnel junction (PMTJ) device to increase the storage density of spin-transfer-torque random access memory (STT-RAM). As a first step toward demonstrating a multi-bit cell device, this dissertation contributed a systematic and detailed study of developing a single-cell PMTJ device using L10 FePt films. In the beginning of this research, 13 up-and-coming non-volatile memory (NVM) technologies were investigated and evaluated to see whether one of them might outperform NAND flash memories and even HDDs on a cost-per-TB basis in 2020. This evaluation showed that STT-RAM appears to potentially offer superior power efficiency, among other advantages. It is predicted that STT-RAM's density could make it a promising candidate for replacing NAND flash memories and possibly HDDs if STT-RAM could be improved to store multiple bits per cell. Ta/MgO under-layers were used first in order to develop (001) L10 ordering of FePt at a low temperature below 400 °C. It was found that the trade-off between surface roughness and (001) L10 ordering of FePt makes it difficult to achieve low surface roughness and good perpendicular magnetic properties simultaneously when Ta/MgO under-layers are used. It was therefore decided to investigate MgO/CrRu under-layers to simultaneously achieve smooth films with good ordering below 400 °C. A well-ordered 4 nm L10 FePt film with RMS surface roughness close to 0.4 nm, perpendicular coercivity of about 5 kOe, and perpendicular squareness near 1 was obtained at a deposition temperature of 390 °C on a thermally oxidized Si substrate when MgO/CrRu under-layers were used. A PMTJ device was developed by depositing a thin MgO tunnel barrier layer and a top L10 FePt film and then post-annealing at 450 °C for 30 minutes. 
    It was found that the sputtering power needs to be minimized during the thin MgO tunnel barrier deposition because high sputtering power can degrade the perpendicular magnetic anisotropy of the bottom L10 FePt film and also increase the RMS surface roughness of the MgO tunnel barrier layer. From a lithographically unpatterned PMTJ sample, the MR ratio and RA product were measured at room temperature by the current-in-plane tunneling (CIPT) method and found to be 138% and 6.4 kΩ·μm², respectively. A completed PMTJ test pattern with a junction size of 80×40 μm² was fabricated and showed a measured MR ratio and RA product of 108% and 4-6 kΩ·μm², respectively. These values agree relatively well with the corresponding values of 138% and 6.4 kΩ·μm² obtained from the unpatterned PMTJ sample measured by the CIPT method.

  13. Demonstrating ultrafast polarization dynamics in spin-VCSELs

    NASA Astrophysics Data System (ADS)

    Lindemann, Markus; Pusch, Tobias; Michalzik, Rainer; Gerhardt, Nils C.; Hofmann, Martin R.

    2018-02-01

    Vertical-cavity surface-emitting lasers (VCSELs) are used for short-haul optical data transmission with increasing bit rates. The optimization involves both enhanced device designs and the use of higher-order modulation formats. In order to improve the modulation bandwidth substantially, the presented work employs spin-pumped VCSELs (spin-VCSELs) and their polarization dynamics instead of relying on intensity-modulated devices. In spin-VCSELs, the polarization state of the emitted light is controllable via spin injection. By optical spin pumping a single-mode VCSEL is forced to emit light composed of both orthogonal linearly polarized fundamental modes. The frequencies of these two modes differ slightly by a value determined by the cavity birefringence. As a result, the circular polarization degree oscillates with their beat frequency, i.e., with the birefringence-induced mode splitting. We used this phenomenon to show so-called polarization oscillations, which are generated by pulsed spin injection. Their frequency represents the polarization dynamics resonance frequency and can be tuned over a wide range via the birefringence, nearly independent from any other laser parameter. In previous work we demonstrated a maximum birefringence-induced mode splitting of more than 250 GHz. In this work, compared to previous publications, we show an almost doubled polarization oscillation frequency of more than 80 GHz. Furthermore, we discuss concepts to achieve even higher values far above 100 GHz.
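    The mechanism described — two orthogonal linearly polarized modes split by the birefringence, producing an oscillation of the circular polarization degree at their beat frequency — can be checked with a small numerical sketch. The carrier frequency and splitting below are illustrative, not the experimental parameters:

    ```python
    import cmath
    import math

    def s3_degree(t, f1, f2):
        """Circular polarization degree S3/S0 at time t for a field with
        equal-amplitude x- and y-polarized components oscillating at
        optical frequencies f1 and f2 (illustrative Jones-vector model)."""
        ex = cmath.exp(2j * math.pi * f1 * t)   # x-polarized mode
        ey = cmath.exp(2j * math.pi * f2 * t)   # y-polarized mode
        s0 = abs(ex) ** 2 + abs(ey) ** 2        # total intensity
        s3 = 2 * (ex.conjugate() * ey).imag     # circular Stokes component
        return s3 / s0

    # Two linear modes split by 80 GHz: S3/S0 oscillates at the 80 GHz
    # beat frequency, completing one full period every 12.5 ps.
    delta_f = 80e9
    f1 = 3.5e14            # illustrative optical carrier frequency
    f2 = f1 + delta_f
    ```

    Analytically S3/S0 = sin(2π·Δf·t), so the polarization oscillation frequency equals the birefringence-induced mode splitting, independent of the carrier frequency, which is the tuning knob exploited in the paper.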

  14. Adaptive spatial filtering for daytime satellite quantum key distribution

    NASA Astrophysics Data System (ADS)

    Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.

    2014-11-01

    The rate of secure key generation (SKG) in quantum key distribution (QKD) is adversely affected by optical noise and loss in the quantum channel. In a free-space atmospheric channel, the scattering of sunlight into the channel can lead to quantum bit error ratios (QBERs) sufficiently large to preclude SKG. Furthermore, atmospheric turbulence limits the degree to which spatial filtering can reduce sky noise without introducing signal losses. A system simulation quantifies the potential benefit of tracking and higher-order adaptive optics (AO) technologies to SKG rates in a daytime satellite engagement scenario. The simulations are performed assuming propagation from a low-Earth orbit (LEO) satellite to a terrestrial receiver that includes an AO system comprised of a Shack-Hartmann wave-front sensor (SHWFS) and a continuous-face-sheet deformable mirror (DM). The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain waveoptics hardware emulator. Secure key generation rates are then calculated for the decoy state QKD protocol as a function of the receiver field of view (FOV) for various pointing angles. The results show that at FOVs smaller than previously considered, AO technologies can enhance SKG rates in daylight and even enable SKG where it would otherwise be prohibited as a consequence of either background optical noise or signal loss due to turbulence effects.
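    The sensitivity of secure key generation to QBER can be illustrated with the standard asymptotic BB84 bound, in which the secret key fraction is r = 1 − 2h(e) with h the binary entropy. Note this is a textbook simplification of the decoy-state analysis actually used in the paper:

    ```python
    import math

    def binary_entropy(p):
        """Binary entropy h(p) in bits."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bb84_key_fraction(qber):
        """Asymptotic BB84 secret key fraction r = 1 - 2*h(QBER).
        SKG becomes impossible (r <= 0) near QBER ~ 11%, which is why
        sky-noise-induced QBER can preclude key generation entirely."""
        return 1.0 - 2.0 * binary_entropy(qber)
    ```

    The steep collapse of r near the 11% threshold is why even modest reductions in background-induced QBER from spatial filtering and adaptive optics can make the difference between zero and useful key rates.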

  15. Image processing on the image with pixel noise bits removed

    NASA Astrophysics Data System (ADS)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.

  16. Drilling and Caching Architecture for the Mars2020 Mission

    NASA Astrophysics Data System (ADS)

    Zacny, K.

    2013-12-01

    We present a Sample Acquisition and Caching (SAC) architecture for the Mars2020 mission and detail how the architecture meets the sampling requirements described in the Mars2020 Science Definition Team (SDT) report. The architecture uses a 'One Bit per Core' approach. Having a dedicated bit for each rock core allows a reduction in the number of core transfer steps and actuators, which reduces overall mission risk. It also alleviates the bit-life problem, eliminates cross contamination, and aids in hermetic sealing. Added advantages are faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). To enable replacing of core samples, the drill bits are based on the BigTooth bit design. The BigTooth bit cuts a core diameter slightly smaller than the imaginary hole inscribed by the inner surfaces of the bits. Hence the rock core can be ejected much more easily along the gravity vector. The architecture also has three additional types of bits that allow analysis of rocks. The Rock Abrasion and Brushing Bit (RABBit) allows brushing and grinding of rocks in the same way as the Rock Abrasion Tool does on MER. The PreView bit allows viewing and analysis of rock core surfaces. The Powder and Regolith Acquisition Bit (PRABit) captures regolith and rock powder either for in situ analysis or sample return; PRABit also provides sieving capabilities. The architecture can be viewed here: http://www.youtube.com/watch?v=_-hOO4-zDtE

  17. Active holographic interconnects for interfacing volume storage

    NASA Astrophysics Data System (ADS)

    Domash, Lawrence H.; Schwartz, Jay R.; Nelson, Arthur R.; Levin, Philip S.

    1992-04-01

    In order to achieve the promise of terabit/cm3 data storage capacity for volume holographic optical memory, two technological challenges must be met. Satisfactory storage materials must be developed, and input/output architectures able to match their capacity with corresponding data access rates must also be designed. To date the materials problem has received more attention than devices and architectures for access and addressing. Two philosophies of parallel data access to 3-D storage have been discussed. The bit-oriented approach, represented by recent work on two-photon memories, attempts to store bits at local sites within a volume without affecting neighboring bits. High-speed acousto-optic or electro-optic scanners together with dynamically focused lenses not presently available would be required. The second philosophy is that volume optical storage is essentially holographic in nature, and that each data write or read is to be distributed throughout the material volume on the basis of angle multiplexing or other schemes consistent with the principles of holography. The requirements for free-space optical interconnects for digital computers and fiber optic network switching interfaces are also closely related to this class of devices. Interconnects, beamlet generators, angle multiplexers, scanners, fiber optic switches, and dynamic lenses are all devices which may be implemented by holographic or microdiffractive devices of various kinds, which we shall refer to collectively as holographic interconnect devices. At present, holographic interconnect devices are either fixed holograms or spatial light modulators. Optically or computer generated holograms (submicron resolution, 2-D or 3-D, encoding 10^13 bits, nearly 100% diffraction efficiency) can implement sophisticated mathematical design principles, but of course once fabricated they cannot be changed. 
    Spatial light modulators offer high-speed programmability but have limited resolution (512 × 512 pixels, encoding about 10^6 bits of data) and limited diffraction efficiency. For any application, one must choose between high diffractive performance and programmability.

  18. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    PubMed

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic property of the optoelectronic random bit generator (RBG) based on laser chaos is experimentally analyzed from the two aspects of the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical-feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, in addition to passing the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together these give mathematically provable evidence that the ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
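    The Brownian-motion perspective — mapping bits to ±1 steps and checking the scaled partial sums against the law of the iterated logarithm — can be sketched as follows. Python's seeded Mersenne Twister stands in for the chaotic-laser bit source here; it is deterministic, unlike the physical RBG analyzed in the paper:

    ```python
    import math
    import random

    def lil_statistic(bits):
        """Map bits to a +/-1 random walk S_n and return the maximum of
        |S_n| / sqrt(2 n ln ln n) over n >= 100; the law of the iterated
        logarithm says this ratio should hover near 1 (and not diverge)
        for a genuinely random sequence."""
        s, worst = 0, 0.0
        for n, b in enumerate(bits, start=1):
            s += 1 if b else -1
            if n >= 100:  # skip small n, where the normalization is noisy
                bound = math.sqrt(2 * n * math.log(math.log(n)))
                worst = max(worst, abs(s) / bound)
        return worst

    # Stand-in bit source (deterministic, for illustration only):
    rng = random.Random(12345)
    bits = [rng.getrandbits(1) for _ in range(100_000)]
    ```

    A biased or correlated generator drives this statistic well above 1 as n grows, which is how the law-of-the-iterated-logarithm test complements the NIST SP800-22 suite.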

  19. Drill bit assembly for releasably retaining a drill bit cutter

    DOEpatents

    Glowka, David A.; Raymond, David W.

    2002-01-01

    A drill bit assembly is provided for releasably retaining a polycrystalline diamond compact drill bit cutter. Two adjacent cavities formed in a drill bit body house, respectively, the disc-shaped drill bit cutter and a wedge-shaped cutter lock element with a removable fastener. The cutter lock element engages one flat surface of the cutter to retain the cutter in its cavity. The drill bit assembly thus enables the cutter to be locked against axial and/or rotational movement while still providing for easy removal of a worn or damaged cutter. The ability to adjust and replace cutters in the field reduces the effect of wear, helps maintain performance, and improves drilling efficiency.

  20. Results of NanTroSEIZE Expeditions Stages 1 & 2: Deep-sea Coring Operations on-board the Deep-sea Drilling Vessel Chikyu and Development of Coring Equipment for Stage 3

    NASA Astrophysics Data System (ADS)

    Shinmoto, Y.; Wada, K.; Miyazaki, E.; Sanada, Y.; Sawada, I.; Yamao, M.

    2010-12-01

    The Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE) has carried out several drilling expeditions in the Kumano Basin off the Kii Peninsula of Japan with the deep-sea scientific drilling vessel Chikyu. Core sampling runs were carried out during the expeditions using an advanced multiple wireline coring system which can continuously core through sections of undersea formations. The core recovery rate with the Rotary Core Barrel (RCB) system was rather low compared with other methods such as the Hydraulic Piston Coring System (HPCS) and the Extended Shoe Coring System (ESCS). Drilling conditions such as hole collapse and sea conditions such as high ship-heave motions need to be analyzed, along with differences in lithology, formation hardness, water depth, and coring depth, in order to develop coring tools, such as the core barrel or core bit, that will yield the highest core recovery and quality. The core bit is especially important for good recovery of high-quality cores; however, the PDC cutters were severely damaged during the NanTroSEIZE Stage 1 and 2 expeditions due to severe drilling conditions. In Stage 1 (riserless coring), the average core recovery was rather low at 38% with the RCB, and many difficulties such as borehole collapse, stick-slip, and stuck pipe occurred, damaging several of the PDC cutters. In Stage 2, a new core bit design was deployed and core recovery improved to 67% for the riserless system and 85% with the riser. However, due to harsh drilling conditions, the PDC core bit and all of its PDC cutters were completely worn down. Another original core bit was also deployed; however, core recovery performance was low even for plate-boundary core samples. This study aims to identify the influence of the RCB system specifically on the recovery rates at each of the holes drilled in the NanTroSEIZE coring expeditions. 
The drilling parameters such as weight-on-bit, torque, rotary speed and flow rate, etc., were analyzed and conditions such as formation, tools, and sea conditions which directly affect core recovery have been categorized. Also discussed will be the further development of such coring equipment as the core bit and core barrel for the NanTroSEIZE Stage 3 expeditions, which aim to reach a depth of 7000 m-below the sea floor into harder formations under extreme drilling conditions.

  1. Antiwhirl PDC bits increased penetration rates in Alberta drilling [Polycrystalline Diamond Compact]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrosky, D.; Osmak, G.

    1993-07-05

    The antiwhirl PDC bits and an inhibitive mud system contributed to the quicker drilling of the time-sensitive shales. The hole washouts in the intermediate section were dramatically reduced, resulting in better intermediate casing cement jobs. Also, the use of antirotation PDC-drillable cementing plugs eliminated the need to drill out plugs and float equipment with a steel tooth bit and then trip for the PDC bit. By using an antiwhirl PDC bit, at least one trip was eliminated in the intermediate section. Offset data indicated that two to six conventional bits would have been required to drill the intermediate hole interval. The PDC bit was rebuildable and therefore rerunnable even after being used on five wells. In each instance, the cost of replacing chipped cutters was less than the cost of a new insert roller cone bit. The paper describes the antiwhirl bits; the development of the bits; their application in a clastic sequence, a carbonate sequence, and the Shekilie oil field; the improvement in the rate of penetration; the selection of bottom hole assemblies; washout problems; and drill-out characteristics.

  2. Evaluations of bit sleeve and twisted-body bit designs for controlling roof bolter dust

    PubMed Central

    Beck, T.W.

    2015-01-01

    Drilling into coal mine roof strata to install roof bolts has the potential to release substantial quantities of respirable dust. Due to the proximity of the drill holes to the breathing zone of roof bolting personnel, dust escaping the holes and avoiding capture by the dust collection system poses a potential respiratory health risk. Controls are available to complement the typical dry vacuum collection system and minimize harmful exposures during the initial phase of drilling. This paper examines the use of a bit sleeve in combination with a dust-hog-type bit to improve dust extraction during the critical initial phase of drilling. A twisted-body drill bit is also evaluated to determine the quantity of dust liberated in comparison with the dust-hog-type bit. Based on the results of our laboratory tests, the bit sleeve may reduce dust emissions by one-half during the initial phase of drilling, before the drill bit is fully enclosed by the drill hole. Because collaring is responsible for the largest dust liberation, overall dust emission can also be substantially reduced. The use of a twisted-body bit provides minimal improvement in dust capture compared with the commonly used dust-hog-type bit. PMID:26257435

  3. Sample Acquisition and Caching architecture for the Mars Sample Return mission

    NASA Astrophysics Data System (ADS)

    Zacny, K.; Chu, P.; Cohen, J.; Paulsen, G.; Craft, J.; Szwarc, T.

    This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for the three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, we gave the reduction of mission risk greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits could be returned to Earth with the cores inside them with only a modest increase in the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit-life problem, eliminates cross contamination, and aids in hermetic sealing. Added advantages are faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces arm preload requirements). Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of 1) a Rotary-Percussive Core Drill, 2) a Bit Storage Carousel, 3) a Cache, 4) a Robotic Arm, and 5) a Rock Abrasion and Brushing Bit (RABBit), which is deployed using the Drill. The system also includes PreView bits (for viewing of cores prior to caching) and Powder bits for acquisition of regolith or cuttings. The SAC total system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.

  4. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering lost reference bits still stands. This paper shows that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate channel code design can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is divided among three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered images. The watermarked image quality gain is achieved by spending less of the bit-budget on the watermark, while image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
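
    The key idea above, treating a located tamper as an erasure, can be illustrated with the simplest possible erasure code: a single XOR parity block. This is an illustrative sketch only (function names are hypothetical), not the paper's source/channel code design:

```python
from functools import reduce

def xor_parity(blocks):
    """Parity block: XOR of all data blocks (each an int bit-pattern)."""
    return reduce(lambda a, b: a ^ b, blocks)

def recover_erasure(blocks, parity):
    """Recover a single erased block (marked None) at a known position,
    analogous to how check bits locate tampered zones before decoding."""
    missing = blocks.index(None)
    rebuilt = parity
    for i, b in enumerate(blocks):
        if i != missing:
            rebuilt ^= b
    out = list(blocks)
    out[missing] = rebuilt
    return out
```

    With one parity block, any single erased block is recoverable; the paper's channel code generalizes this to many parity bits protecting the reference bits against larger tampered regions.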

  5. Theoretical and subjective bit assignments in transform picture coding

    NASA Technical Reports Server (NTRS)

    Jones, H. W., Jr.

    1977-01-01

    It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
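
    The well-known theoretical rule referenced above gives each transform coefficient the average rate plus half the log-ratio of its variance to the geometric mean variance. A minimal sketch of that rule (function name and interface are illustrative):

```python
import numpy as np

def bit_assignment(variances, avg_bits):
    """Theoretical rule for Gaussian inputs with mean-square error:
    b_k = avg_bits + 0.5 * log2(sigma_k^2 / geo_mean), where geo_mean
    is the geometric mean of the coefficient variances."""
    v = np.asarray(variances, dtype=float)
    geo_mean = np.exp(np.mean(np.log(v)))  # geometric mean of variances
    return avg_bits + 0.5 * np.log2(v / geo_mean)
```

    High-variance (typically low-frequency) coefficients receive more bits, and the correction terms sum to zero, so the total bit budget is preserved.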

  6. Effect of bit wear on hammer drill handle vibration and productivity.

    PubMed

    Antonucci, Andrea; Barr, Alan; Martin, Bernard; Rempel, David

    2017-08-01

    The use of large electric hammer drills exposes construction workers to high levels of hand vibration that may lead to hand-arm vibration syndrome and other musculoskeletal disorders. The aim of this laboratory study was to investigate the effect of bit wear on drill handle vibration and drilling productivity (e.g., drilling time per hole). A laboratory test bench system was used with an 8.3 kg electric hammer drill and a 1.9 cm concrete bit (a typical drill and bit used in commercial construction). The system automatically advanced the active drill into aged concrete block under feed force control to a depth of 7.6 cm while handle vibration was measured according to ISO standards (ISO 5349 and ISO 28927). Bits were worn to 4 levels by consecutive hole drilling to 4 cumulative drilling depths: 0; 1,900; 5,700; and 7,600 cm. Z-axis handle vibration increased significantly (p<0.05) from 4.8 to 5.1 m/s² (ISO weighted) and from 42.7 to 47.6 m/s² (unweighted) when comparing a new bit to a bit worn to 1,900 cm of cumulative drilling depth. Handle vibration did not increase further with bits worn beyond 1,900 cm of cumulative drilling depth. Neither x- nor y-axis handle vibration was affected by bit wear. The time to drill a hole increased by 58% for the bit with 5,700 cm of cumulative drilling depth compared to a new bit. Bit wear led to a small but significant increase in both ISO-weighted and unweighted z-axis handle vibration. Perhaps more important, bit wear had a large effect on productivity. The effect on productivity will influence a worker's allowable daily drilling time if exposure to drill handle vibration is near the ACGIH Threshold Limit Value. Construction contractors should implement a bit replacement program based on these findings.

  7. BitTorious volunteer: server-side extensions for centrally-managed volunteer storage in BitTorrent swarms.

    PubMed

    Lee, Preston V; Dinu, Valentin

    2015-11-04

    Our publication of the BitTorious portal [1] demonstrated the ability to create a privatized distributed data warehouse of sufficient magnitude for real-world bioinformatics studies using minimal changes to the standard BitTorrent tracker protocol. In this second phase, we release a new server-side specification to accept anonymous philanthropic storage donations from the general public, wherein a small portion of each user's local disk may be used for archival of scientific data. We have implemented the server-side announcement and control portions of this BitTorrent extension in v3.0.0 of the BitTorious portal, upon which compatible clients may be built. Automated test cases for the BitTorious Volunteer extensions have been added to the portal's v3.0.0 release, supporting validation of the "peer affinity" concept and announcement protocol introduced by this specification. Additionally, a separate reference implementation of affinity calculation has been provided in C++ for informaticians wishing to integrate it into libtorrent-based projects. The BitTorrent "affinity" extensions as provided in the BitTorious portal reference implementation allow data publishers to crowdsource the extreme storage prerequisites for research in "big data" fields. With sufficient awareness and adoption of BitTorious Volunteer-based clients by the general public, the BitTorious portal may be able to provide peta-scale storage resources to the scientific community at relatively insignificant financial cost.

  8. PDC bit hydraulics design, profile are key to reducing balling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hariharan, P.R.; Azar, J.J.

    1996-12-09

    Polycrystalline diamond compact (PDC) bits with a parabolic profile and bladed hydraulic design have a lesser tendency to ball during drilling of reactive shales. PDC bits with ribbed or open-face hydraulic designs and those with flat or rounded profiles tended to ball more often in the bit balling experiments conducted. Experimental work also indicates that PDC hydraulic design seems to have a greater influence on bit balling tendency compared to bit profile design. There are five main factors that affect bit balling: formation type, drilling fluid, drilling hydraulics, bit design, and confining pressures. An equation for specific energy showed that it could be used to describe the efficiency of the drilling process by examining the amount of energy spent in drilling a unit volume of rock. This concept of specific energy has been used herein to correlate with the parameter Rd, a parameter to quantify the degree of balling.

  9. Relationship between visuospatial neglect and kinesthetic deficits after stroke.

    PubMed

    Semrau, Jennifer A; Wang, Jeffery C; Herter, Troy M; Scott, Stephen H; Dukelow, Sean P

    2015-05-01

    After stroke, visuospatial and kinesthetic (sense of limb motion) deficits are common, occurring in approximately 30% and 60% of individuals, respectively. Although both types of deficits affect aspects of spatial processing necessary for daily function, few studies have investigated the relationship between these 2 deficits after stroke. We aimed to characterize the relationship between visuospatial and kinesthetic deficits after stroke using the Behavioral Inattention Test (BIT) and a robotic measure of kinesthetic function. Visuospatial attention (using the BIT) and kinesthesia (using robotics) were measured in 158 individuals an average of 18 days after stroke. In the kinesthetic matching task, the robot moved the participant's stroke-affected arm at a preset direction, speed, and magnitude. Participants mirror-matched the robotic movement with the less/unaffected arm as soon as they felt movement in their stroke-affected arm. We found that participants with visuospatial inattention (neglect) had impaired kinesthesia 100% of the time, whereas only 59% of participants without neglect were impaired. For those without neglect, we observed that a higher percentage of participants with lower but passing BIT scores displayed impaired kinesthetic behavior (78%) compared with those participants who scored perfect or nearly perfect on the BIT (49%). The presence of visuospatial neglect after stroke is highly predictive of the presence of kinesthetic deficits. However, the presence of kinesthetic deficits does not necessarily always indicate the presence of visuospatial neglect. Our findings highlight the importance of assessment and treatment of kinesthetic deficits after stroke, especially in patients with visuospatial neglect. © The Author(s) 2014.

  10. Fly Photoreceptors Demonstrate Energy-Information Trade-Offs in Neural Coding

    PubMed Central

    Niven, Jeremy E; Anderson, John C; Laughlin, Simon B

    2007-01-01

    Trade-offs between energy consumption and neuronal performance must shape the design and evolution of nervous systems, but we lack empirical data showing how neuronal energy costs vary according to performance. Using intracellular recordings from the intact retinas of four flies, Drosophila melanogaster, D. virilis, Calliphora vicina, and Sarcophaga carnaria, we measured the rates at which homologous R1–6 photoreceptors of these species transmit information from the same stimuli and estimated the energy they consumed. In all species, both information rate and energy consumption increase with light intensity. Energy consumption rises from a baseline, the energy required to maintain the dark resting potential. This substantial fixed cost, ∼20% of a photoreceptor's maximum consumption, causes the unit cost of information (ATP molecules hydrolysed per bit) to fall as information rate increases. The highest information rates, achieved at bright daylight levels, differed according to species, from ∼200 bits s−1 in D. melanogaster to ∼1,000 bits s−1 in S. carnaria. Comparing species, the fixed cost, the total cost of signalling, and the unit cost (cost per bit) all increase with a photoreceptor's highest information rate to make information more expensive in higher performance cells. This law of diminishing returns promotes the evolution of economical structures by severely penalising overcapacity. Similar relationships could influence the function and design of many neurons because they are subject to similar biophysical constraints on information throughput. PMID:17373859

  11. Laser Linewidth Requirements for Optical BPSK and QPSK Heterodyne Lightwave Systems.

    NASA Astrophysics Data System (ADS)

    Boukli-Hacene, Mokhtar

    In this dissertation, optical binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK) heterodyne communication receivers are investigated. The main objective of this research is to analyze the performance of these receivers in the presence of laser phase noise and shot noise. The heterodyne optical BPSK receiver is based on the square-law carrier recovery (SLCR) scheme for phase detection. The BPSK heterodyne receiver is analyzed assuming a second-order linear phase-locked loop (PLL) subsystem and a small phase error. The noise properties are analyzed and the problem of minimizing the effect of noise is addressed. The performance of the receiver is evaluated in terms of the bit error rate (BER), which leads to the analysis of the BER versus the laser linewidth and the number of photons/bit needed to achieve good performance. Since the pure carrier component cannot be tracked in the presence of noise, a non-linear model is used to solve the carrier recovery problem. The non-linear system is analyzed in the presence of a low signal-to-noise ratio (SNR). The non-Gaussian noise model, represented by its probability density function (PDF), is used to analyze the performance of the receiver, especially the phase error. In addition, the effect of the PLL is analyzed by studying cycle slippage. Finally, the research effort is extended from BPSK to QPSK systems. The heterodyne optical QPSK receiver based on the fourth-power multiplier scheme (FPMS), in conjunction with linear and non-linear PLL models, is investigated. The optimum loop and the power penalty in the presence of phase noise and shot noise are analyzed. It is shown that the QPSK system yields a high-speed, high-sensitivity coherent means of transmitting information, accompanied by a small degradation due to the laser linewidth.
Comparative analysis of the BPSK and QPSK systems leads to the conclusion that, in terms of laser linewidth, bit rate, phase error, and power penalty, the QPSK system is more sensitive than the BPSK system and suffers a smaller power penalty. The BPSK and QPSK heterodyne receivers used in the uncoded scheme demand a realistic laser linewidth. Since the laser linewidth is a critical measure of receiver performance, a convolutional code applied to the QPSK system is used to improve its sensitivity. The effect of coding is particularly important as a means of relaxing the laser linewidth requirement. The validity and usefulness of the analysis presented in the dissertation is supported by computer simulations.
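
    For context, the ideal coherent-detection baseline against which phase-noise penalties like these are measured is the textbook BPSK bit-error rate, BER = Q(sqrt(2·Eb/N0)). The sketch below computes only that baseline, not the dissertation's phase-noise model:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db):
    """Ideal coherent BPSK bit-error rate at a given Eb/N0 in dB
    (no laser phase noise, perfect carrier recovery)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_func(math.sqrt(2.0 * ebn0))
```

    Phase noise and cycle slipping only add to this floor, which is why the dissertation's linewidth requirements are expressed as a power penalty relative to this ideal curve.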

  12. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance as a result of the information lost in quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. Simulations demonstrate that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and that increasing the data length affects our scheme only a little: its tracking performance improves by only 4.4% from 2-bit to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, so it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
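
    The uniform-quantization baseline that the authors compare against can be sketched as follows (illustrative only; the paper's contribution is choosing the quantization thresholds to minimize the quantization-induced covariance, which this sketch does not do):

```python
def uniform_quantize(x, n_bits, lo, hi):
    """Quantize a scalar measurement in [lo, hi] to an n-bit level index;
    return the index (what a sensor would transmit) together with the
    mid-level reconstruction value the fusion center would use."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = int((x - lo) / step)
    idx = max(0, min(levels - 1, idx))     # clamp to the valid index range
    return idx, lo + (idx + 0.5) * step
```

    Transmitting the n-bit index instead of a full floating-point measurement is what shortens the data packets; the reconstruction error of at most half a step is the "additional covariance" the optimal scheme then minimizes.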

  13. "It's a Bit Taboo": A Qualitative Study of Norwegian Adolescents' Perceptions of Mental Healthcare Services

    ERIC Educational Resources Information Center

    Tharaldsen, Kjersti Balle; Stallard, Paul; Cuijpers, Pim; Bru, Edvin; Bjaastad, Jon Fauskanger

    2017-01-01

    The aim of this study is to investigate adolescents' perspectives on mental healthcare services. Based on theoretical perspectives concerning barriers for help-seeking, individual interviews were carried out in order to obtain the adolescents' perspectives on knowledge of services for mental health problems, potential barriers for help-seeking,…

  14. Design challenges of EO polymer based leaky waveguide deflector for 40 Gs/s all-optical analog-to-digital converters

    NASA Astrophysics Data System (ADS)

    Hadjloum, Massinissa; El Gibari, Mohammed; Li, Hongwu; Daryoush, Afshin S.

    2016-08-01

    Design challenges and performance optimization of an all-optical analog-to-digital converter (AOADC) are presented here. The paper addresses both the microwave and the optical design of a leaky-waveguide optical deflector using electro-optic (E-O) polymer. The optical deflector converts magnitude variation of the applied RF voltage into variation of the deflection angle of the optical beam leaving the leaky waveguide, using the linear E-O effect (Pockels effect) in the E-O polymer optical waveguide. This variation of deflection angle with the applied RF signal is then quantized using optical windows followed by an array of high-speed photodetectors. We optimized the leakage coefficient of the leaky waveguide and its physical length to achieve the best trade-off between bandwidth and deflected-optical-beam resolution, by improving the phase-velocity matching between the lightwave and the microwave on one hand and using a pre-emphasis technique to compensate for the RF signal attenuation on the other. In addition, for ease of access from both the optical and RF perspectives, a via-hole-less broad-bandwidth transition is designed between coplanar pads and coupled microstrip (CPW-CMS) driving electrodes. With the best reported E-O coefficient of 350 pm/V, the designed E-O deflector should allow an AOADC operating at 44 gigasamples per second with an estimated effective resolution of 6.5 bits on RF signals with a Nyquist bandwidth of 22 GHz. The overall DC power consumption of all components used in this AOADC is on the order of 4 W and is dominated by the power amplifier needed to generate a 20 V RF voltage in a 50 Ω system. A higher sampling rate can be achieved at a similar resolution by interleaving a number of these elementary AOADCs, at the expense of higher power consumption.

  15. Remote drill bit loader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokos, J.A.

    1996-12-31

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle, and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. In typical remote drilling operations, whether in hot cells or water pits, drill bits have been held using a collet or end-mill-type holder with set screws. In either case, loading or changing a drill bit required the use of master-slave manipulators to position the bits and tighten the collet or set screws. This requirement eliminated many otherwise useful work areas because they were not equipped with slaves, particularly in water pits.

  16. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 % and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing.
Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
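
    The alternating shave/set idea can be sketched for float32 arrays as below. This is a simplified illustration, not the NCO implementation; in NCO the number of mantissa bits kept is derived from the requested number of significant decimal digits:

```python
import numpy as np

def bit_groom(data, keep_bits):
    """Alternately shave (zero) and set (one) the float32 mantissa bits
    below keep_bits, so quantization errors roughly cancel in the mean."""
    a = np.asarray(data, dtype=np.float32).copy()
    bits = a.view(np.uint32)                   # reinterpret IEEE bit patterns
    drop = 23 - keep_bits                      # float32 mantissa = 23 bits
    shave = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)  # zeros low bits
    setm = np.uint32((1 << drop) - 1)                     # ones in low bits
    bits[0::2] &= shave                        # even-indexed values: shave
    bits[1::2] |= setm                         # odd-indexed values: set
    return a
```

    Shaving biases each value toward zero and setting biases it away, so alternating the two over consecutive values removes the systematic low bias of pure Bit Shaving while leaving long runs of identical trailing bits for DEFLATE to compress.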

  17. Remote drill bit loader

    DOEpatents

    Dokos, J.A.

    1997-12-30

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. 5 figs.

  18. Region-of-interest determination and bit-rate conversion for H.264 video transcoding

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan

    2013-12-01

    This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard, designed to fit the available channel bandwidth of the client when transmitting video bit-streams over communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian-theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve-fitting scheme is employed to find models of video bit-rate conversion. The transcoded video conforms to the target bit-rate through re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only preserves coding quality but also improves the efficiency of video transcoding at low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.

  19. Remote drill bit loader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokos, James A.

    A drill bit loader for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned.

  20. Remote drill bit loader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokos, J.A.

    A drill bit loader is described for loading a tapered shank of a drill bit into a similarly tapered recess in the end of a drill spindle. The spindle has a transverse slot at the inner end of the recess. The end of the tapered shank of the drill bit has a transverse tang adapted to engage in the slot so that the drill bit will be rotated by the spindle. The loader is in the form of a cylinder adapted to receive the drill bit with the shank projecting out of the outer end of the cylinder. Retainer pins prevent rotation of the drill bit in the cylinder. The spindle is lowered to extend the shank of the drill bit into the recess in the spindle and the spindle is rotated to align the slot in the spindle with the tang on the shank. A spring unit in the cylinder is compressed by the drill bit during its entry into the recess of the spindle and resiliently drives the tang into the slot in the spindle when the tang and slot are aligned. 5 figs.

  1. The theory and implementation of SRT Division. Report No. 230

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atkins, III, Daniel E.

    1967-06-01

    To a large extent, this report has been directed to the designer faced with the task of implementing digital division. The author also hopes that this report will support exploration into the development of higher radix quotient selection models, which can select 8 quotient bits in parallel.
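
    The quotient-digit selection at the heart of SRT division can be illustrated with a small radix-2 sketch (a simplification of the report's higher-radix schemes; the normalization of the divisor into [0.5, 1) is an assumption of this sketch): the digit in {-1, 0, +1} is chosen by comparing only the top of the shifted partial remainder against fixed thresholds, so no full-width comparison against the divisor is needed.

```python
def srt_divide(dividend, divisor, n_bits=24):
    """Radix-2 SRT division sketch. Assumes 0 <= dividend < divisor and
    divisor normalized into [0.5, 1). Each step selects a quotient digit
    in {-1, 0, +1} from fixed thresholds on the shifted partial
    remainder, so only a few of its top bits would need examining."""
    r = dividend          # partial remainder, stays within [-divisor, divisor)
    q = 0.0               # accumulated quotient
    weight = 0.5          # weight of the current quotient digit
    for _ in range(n_bits):
        r *= 2                        # shift the partial remainder left
        if r >= 0.5:                  # the comparison constants do not
            digit = 1                 # depend on the divisor's low bits
        elif r < -0.5:
            digit = -1
        else:
            digit = 0
        r -= digit * divisor          # subtract (or add back) the divisor
        q += digit * weight
        weight /= 2
    return q
```

    Higher-radix schemes, as explored in the report, select several quotient bits per step from a larger digit set at the cost of a more elaborate selection table.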

  2. Polar Misunderstandings: Earth's Dynamic Dynamo

    ERIC Educational Resources Information Center

    DiSpezio, Michael A.

    2011-01-01

    This article discusses the movement of Earth's north and south poles. The Earth's poles may be a bit more complex and dynamic than many students and teachers believe. With better understanding, teachers can offer them up as a rich landscape for higher-level critical analysis and subject integration. Possible curriculum tie-ins include magnets, Earth…

  3. Engineering the Ideal Array (BRIEFING CHARTS)

    DTIC Science & Technology

    2007-03-05

    48 V, f = 10 GHz GaN HEMT transistor. Dramatically higher: output power, efficiency, bandwidth. GaN HEMT power amplifier...functions: RF amplifiers, 4-bit phase shifters, amplitude controllers, summing network, power control, latches for phase state, address

  4. A Novel Mu Rhythm-based Brain Computer Interface Design that uses a Programmable System on Chip.

    PubMed

    Joshi, Rohan; Saraswat, Prateek; Gajendran, Rudhram

    2012-01-01

    This paper describes the system design of a portable and economical mu rhythm-based Brain Computer Interface which employs Cypress Semiconductor's Programmable System on Chip (PSoC). By carrying out essential processing on the PSoC, the use of an extra computer is eliminated, resulting in considerable cost savings. Microsoft Visual Studio 2005 and PSoC Designer 5.01 are employed in developing the software for the system, the hardware being custom designed. In order to test the usability of the BCI, preliminary testing is carried out by training three subjects who were able to demonstrate control over their electroencephalogram by moving a cursor present at the center of the screen towards the indicated direction with an average accuracy greater than 70% and a bit communication rate of up to 7 bits/min.

  5. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

    Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.

  6. A reconfigurable medically cohesive biomedical front-end with ΣΔ ADC in 0.18µm CMOS.

    PubMed

    Jha, Pankaj; Patra, Pravanjan; Naik, Jairaj; Acharya, Amit; Rajalakshmi, P; Singh, Shiv Govind; Dutta, Ashudeb

    2015-08-01

    This paper presents a generic programmable analog front-end (AFE) for acquisition and digitization of various biopotential signals. This includes a lead-off detection circuit, an ultra-low current capacitively coupled signal conditioning stage with programmable gain and bandwidth, a new mixed signal automatic gain control (AGC) mechanism and a medically cohesive reconfigurable ΣΔ ADC. The full system is designed in UMC 0.18μm CMOS. The AFE achieves an overall linearity of more than 10 bits with 0.47μW power consumption. The ADC provides 2nd-order noise-shaping while using a single integrator and achieves an ENOB of ~11 bits with 5μW power consumption. The system was successfully verified for various ECG signals from the PTB database. This system is intended for portable batteryless u-Healthcare devices.

  7. Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.

    1993-01-01

    The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS 2 micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (Sigma-Delta) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
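
    The first-order oversampled sigma-delta modulation used per column can be sketched in a few lines (a behavioral model only; the input value, oversampling ratio, and the simple averaging decimator below are assumptions, not the chip's design): the integrator accumulates the error between the input and the fed-back 1-bit output, and averaging the resulting bitstream recovers the input.

```python
def sigma_delta(x, n):
    """First-order sigma-delta modulator sketch: integrate the error
    between the constant input x (in (-1, 1)) and the fed-back 1-bit
    output, then quantize the integrator state to one bit."""
    v, y, bits = 0.0, -1.0, []
    for _ in range(n):
        v += x - y                       # integrator
        y = 1.0 if v >= 0 else -1.0      # 1-bit quantizer / feedback DAC
        bits.append(y)
    return bits

# Averaging (a crude decimation filter) recovers the input value.
stream = sigma_delta(0.25, 4096)
recovered = sum(stream) / len(stream)
```

    Because the quantization error is noise-shaped to high frequencies, a longer bitstream (higher oversampling ratio) yields more resolution from the same 1-bit quantizer.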

  8. Computer program CORDET. [computerized simulation of digital phase-lock loop for Omega navigation receiver

    NASA Technical Reports Server (NTRS)

    Palkovic, R. A.

    1974-01-01

    A FORTRAN 4 computer program provides convenient simulation of an all-digital phase-lock loop (DPLL). The DPLL forms the heart of the Omega navigation receiver prototype. Through the DPLL, the phase of the 10.2 KHz Omega signal is estimated when the true signal phase is contaminated with noise. This investigation has provided a convenient means of evaluating loop performance in a variety of noise environments, and has proved to be a useful tool for evaluating design changes. The goals of the simulation are to: (1) analyze the circuit on a bit-by-bit level in order to evaluate the overall design; (2) see easily the effects of proposed design changes prior to actual breadboarding; and (3) determine the optimum integration time for the DPLL in an environment typical of general aviation conditions.

  9. High-density near-field optical disc recording using phase change media and polycarbonate substrate

    NASA Astrophysics Data System (ADS)

    Shinoda, Masataka; Saito, Kimihiro; Ishimoto, Tsutomu; Kondo, Takao; Nakaoki, Ariyoshi; Furuki, Motohiro; Takeda, Minoru; Akiyama, Yuji; Shimouma, Takashi; Yamamoto, Masanobu

    2004-09-01

    We developed a high density near field optical recording disc system with a solid immersion lens and two laser sources. In order to realize near field optical recording, we used a phase change recording medium and a molded polycarbonate substrate. The near field optical pick-up consists of a solid immersion lens with a numerical aperture of 1.84. A clear eye pattern at 90.2 GB capacity (160 nm track pitch and 62 nm per bit) was observed. The jitter using a limit equalizer was 10.0% without cross-talk. The bit error rate using an adaptive PRML with 8 taps was 3.7 × 10^-6 without cross-talk. We confirmed that the near field optical disc system is a promising technology for a next generation high density optical disc system.

  10. A Novel Mu Rhythm-based Brain Computer Interface Design that uses a Programmable System on Chip

    PubMed Central

    Joshi, Rohan; Saraswat, Prateek; Gajendran, Rudhram

    2012-01-01

    This paper describes the system design of a portable and economical mu rhythm-based Brain Computer Interface which employs Cypress Semiconductor's Programmable System on Chip (PSoC). By carrying out essential processing on the PSoC, the use of an extra computer is eliminated, resulting in considerable cost savings. Microsoft Visual Studio 2005 and PSoC Designer 5.01 are employed in developing the software for the system, the hardware being custom designed. In order to test the usability of the BCI, preliminary testing is carried out by training three subjects who were able to demonstrate control over their electroencephalogram by moving a cursor present at the center of the screen towards the indicated direction with an average accuracy greater than 70% and a bit communication rate of up to 7 bits/min. PMID:23493871

  11. Graded bit patterned magnetic arrays fabricated via angled low-energy He ion irradiation.

    PubMed

    Chang, L V; Nasruallah, A; Ruchhoeft, P; Khizroev, S; Litvinov, D

    2012-07-11

    A bit patterned magnetic array based on Co/Pd magnetic multilayers with a binary perpendicular magnetic anisotropy distribution was fabricated. The binary anisotropy distribution was attained through angled helium ion irradiation of a bit edge using hydrogen silsesquioxane (HSQ) resist as an ion stopping layer to protect the rest of the bit. The viability of this technique was explored numerically and evaluated through magnetic measurements of the prepared bit patterned magnetic array. The resulting graded bit patterned magnetic array showed a 35% reduction in coercivity and a 9% narrowing of the standard deviation of the switching field.

  12. The architecture design of a 2mW 18-bit high speed weight voltage type DAC based on dual weight resistance chain

    NASA Astrophysics Data System (ADS)

    Qixing, Chen; Qiyu, Luo

    2013-03-01

    At present, the architecture of a digital-to-analog converter (DAC) is in essence based on weight currents, and the average value of its D/A signal current increases geometrically as the number of digital signal bits increases, reaching 2^(n-1) times its least weight current. For a dual weight resistance chain type DAC, however, which performs D/A conversion in the weight-voltage manner, the D/A signal current is fixed at the chain current Icha; it is only on the order of 1/2^(n-1) of the average signal current of the weight-current type DAC. Its principle is as follows: n pairs of dual weight resistances form a resistance chain, which ensures the constancy of the chain current; if the digital signals control the total weight resistance from the output point to the zero-potential point, they directly control the total weight voltage at the output point, so that the digital signals turn directly into a sum of weight-voltage signals. The following goals are thus realized: (1) the total current is less than 200 μA; (2) the total power consumption is less than 2 mW; (3) an 18-bit conversion can be realized by adopting a multi-grade structure; (4) the chip area is one order of magnitude smaller than that of the subsection current-steering type DAC; (5) the error depends only on the error of the unit resistance, so it is smaller than that of the subsection current-steering type DAC; (6) the conversion time is only one switch action (on or off), so its speed is not lower than that of present DACs.
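
    The weight-voltage principle can be illustrated with a deliberately simplified model (a generic binary-weighted resistor chain carrying a constant current; the paper's dual-weight-resistance topology and component values are not reproduced here, and the unit resistance and chain current below are hypothetical): the digital code selects which weight resistances lie between the output tap and the zero-potential point, and the output voltage is the fixed chain current times the selected total resistance.

```python
def dac_output(code, n_bits, i_chain, r_unit):
    """Weight-voltage D/A sketch (hypothetical values, simplified
    binary-weighted chain): a constant chain current flows through the
    resistances selected by the code, and the output voltage is their
    total IR drop. Note the signal current never exceeds i_chain."""
    total_r = 0.0
    for k in range(n_bits):
        if (code >> k) & 1:
            total_r += r_unit * (1 << k)   # weight resistance 2^k * R
    return i_chain * total_r

# Code 0b101 with unit resistance 1 kOhm and chain current 100 uA:
v_out = dac_output(0b101, 3, 100e-6, 1000.0)   # (1 + 4) kOhm of weight R
```

    The key property from the abstract survives even in this sketch: the current drawn is fixed at the chain current regardless of the code, unlike a weight-current DAC whose signal current grows with the code value.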

  13. PDC bits break ground with advanced vibration mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-10-01

    Advancements in PDC bit technology have resulted in the identification and characterization of different types of vibrational modes that historically have limited PDC bit performance. As a result, concepts have been developed that prevent the initiation of vibration and also mitigate its damaging effects once it occurs. This vibration-reducing concept ensures more efficient use of the energy available to a PDC bit, thereby improving its performance. This improved understanding of the complex forces affecting bit performance is driving bit customization for specific drilling programs.

  14. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.
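
    The integrate-and-dump detection matched to Manchester-encoded bits can be sketched as follows (a baseband illustration with assumed perfect bit timing; the brassboard's transition tracking loop, AGC, and timing recovery are not modeled): each Manchester bit cell is a half-period up followed by a half-period down, or the reverse, so correlating each cell against that template (integrate the first half, subtract the integral of the second half) and taking the sign recovers the bit.

```python
def manchester_detect(samples, spb):
    """Integrate-and-dump detector sketch for Manchester-coded bits,
    with `spb` samples per bit and assumed perfect symbol timing:
    correlate each bit cell against the half-up/half-down template and
    take the sign of the correlator (matched filter) output."""
    bits = []
    half = spb // 2
    for i in range(0, len(samples), spb):
        cell = samples[i:i + spb]
        stat = sum(cell[:half]) - sum(cell[half:])   # matched filter
        bits.append(1 if stat > 0 else 0)
    return bits

# A noiseless Manchester waveform: bit 1 = +V then -V, bit 0 = the reverse.
spb = 8
wave = []
for bit in [1, 0, 1, 1, 0]:
    first = [1.0] * (spb // 2) if bit else [-1.0] * (spb // 2)
    wave += first + [-s for s in first]
decoded = manchester_detect(wave, spb)
```

    In the brassboard, the same integrate-and-dump structure also feeds the phase detector of the transition tracking loop, which is why the design remains all-digital after the A/D conversion.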

  15. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
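
    The flag-separation idea can be sketched with a toy token serializer (the 8-bit distance/length fields and the 2-byte token count header are illustrative assumptions, not the patent's encoding): flag bits are collected in a separate buffer during encoding and appended after the data stream, so decompression reads literals and pointers with plain byte accesses and touches individual bits only in the compact flag buffer.

```python
import struct

def pack(tokens):
    """Serialize LZSS-style tokens, keeping the flag bits in a separate
    buffer appended after the data stream (field widths are toy-sized
    assumptions). An int token is a literal byte; a (distance, length)
    tuple is a back-reference into already-decoded output."""
    data, flags = bytearray(), []
    for t in tokens:
        if isinstance(t, int):            # literal byte
            flags.append(0)
            data.append(t)
        else:                             # (distance, length) pointer
            flags.append(1)
            data += struct.pack("<BB", *t)
    fb = bytearray()
    for i in range(0, len(flags), 8):     # pack flag bits, LSB first
        byte = 0
        for j, bit in enumerate(flags[i:i + 8]):
            byte |= bit << j
        fb.append(byte)
    return struct.pack("<H", len(flags)) + bytes(data) + bytes(fb)

def unpack(blob):
    """Decompress: bit manipulation is confined to the flag buffer at
    the end; literals and pointers are read with plain byte accesses."""
    (n,) = struct.unpack_from("<H", blob)
    n_fb = (n + 7) // 8
    data, fb = blob[2:len(blob) - n_fb], blob[len(blob) - n_fb:]
    out, pos = bytearray(), 0
    for i in range(n):
        if (fb[i // 8] >> (i % 8)) & 1:   # pointer: copy from history
            dist, length = data[pos], data[pos + 1]
            pos += 2
            for _ in range(length):
                out.append(out[-dist])
        else:                             # literal: plain byte read
            out.append(data[pos])
            pos += 1
    return bytes(out)

# Round trip: three literals, one overlapping back-reference, one literal.
blob = pack([ord("a"), ord("b"), ord("c"), (3, 6), ord("d")])
restored = unpack(blob)
```

    Interleaved LZSS streams force the decoder to shift a flag bit out before every token; deferring the flags to a trailing buffer, as the patent describes, lets the hot path run on byte- and word-sized reads.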

  16. A new thermal model for bone drilling with applications to orthopaedic surgery.

    PubMed

    Lee, JuEun; Rabin, Yoed; Ozdoganlar, O Burak

    2011-12-01

    This paper presents a new thermal model for bone drilling with applications to orthopaedic surgery. The new model combines a unique heat-balance equation for the system of the drill bit and the chip stream, an ordinary heat diffusion equation for the bone, and heat generation at the drill tip, arising from the cutting process and friction. Modeling of the drill bit-chip stream system assumes an axial temperature distribution and a lumped heat capacity effect in the transverse cross-section. The new model is solved numerically using a tailor-made finite-difference scheme for the drill bit-chip stream system, coupled with a classic finite-difference method for the bone. The theoretical investigation addresses the significance of heat transfer between the drill bit and the bone, heat convection from the drill bit to the surroundings, and the effect of the initial temperature of the drill bit on the developing thermal field. Using the new model, a parametric study on the effects of machining conditions and drill-bit geometries on the resulting temperature field in the bone and the drill bit is presented. Results of this study indicate that: (1) the maximum temperature in the bone decreases with increased chip flow; (2) the transient temperature distribution is strongly influenced by the initial temperature; (3) the continued cooling (irrigation) of the drill bit reduces the maximum temperature even when the tip is distant from the cooled portion of the drill bit; and (4) the maximum temperature increases with increasing spindle speed, increasing feed rate, decreasing drill-bit diameter, increasing point angle, and decreasing helix angle. The model is expected to be useful in determination of optimum drilling conditions and drill-bit geometries. Copyright © 2011. Published by Elsevier Ltd.

  17. Method to manufacture bit patterned magnetic recording media

    DOEpatents

    Raeymaekers, Bart; Sinha, Dipen N

    2014-05-13

    A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.

  18. Testability Design Rating System: Testability Handbook. Volume 1

    DTIC Science & Technology

    1992-02-01

    4.7.5 Summary of False BIT Alarms (FBA); 4.7.6 Smart BIT Technique ... Circuit Board; PGA Pin Grid Array; PLA Programmable Logic Array; PLD Programmable Logic Device; PN Pseudo-Random Number; PREDICT Probabilistic Estimation of ... 4.7.6 Smart BIT (reference: RADC-TR-85-198). "Smart" BIT is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory

  19. Collision-free motion planning for fiber positioner robots: discretization of velocity profiles

    NASA Astrophysics Data System (ADS)

    Makarem, Laleh; Kneib, Jean-Paul; Gillet, Denis; Bleuler, Hannes; Bouri, Mohamed; Hörler, Philippe; Jenni, Laurent; Prada, Francisco; Sánchez, Justo

    2014-07-01

    The next generation of large-scale spectroscopic survey experiments, such as DESI, will use thousands of fiber positioner robots packed on a focal plate. In order to maximize the observing time with this robotic system, we need to move the fiber-ends of all positioners in parallel from the previous to the next target coordinates. Direct trajectories are not feasible due to collision risks that could damage the robots and impact the survey operation and performance. We have previously developed a motion planning method based on a novel decentralized navigation function for collision-free coordination of fiber positioners. The navigation function takes into account the configuration of positioners as well as their envelope constraints. The motion planning scheme has linear complexity and short motion duration (2.5 seconds with a maximum speed of 30 rpm for the positioner), which is independent of the number of positioners. These two key advantages of the decentralization designate the method as a promising solution for the collision-free motion-planning problem in the next generation of fiber-fed spectrographs. In a framework where a centralized computer communicates with the positioner robots, communication overhead can be reduced significantly by using velocity profiles consisting of only a few bits. We present here the discretization of velocity profiles to ensure the feasibility of real-time coordination for a large number of positioners. The modified motion planning method, which generates piecewise linearized position profiles, guarantees collision-free trajectories for all the robots. The velocity profiles fit in a few bits at the expense of higher computational cost.

  20. The Behavioral Intervention Technology Model: An Integrated Conceptual and Technological Framework for eHealth and mHealth Interventions

    PubMed Central

    Schueller, Stephen M; Montague, Enid; Burns, Michelle Nicole; Rashidi, Parisa

    2014-01-01

    A growing number of investigators have commented on the lack of models to inform the design of behavioral intervention technologies (BITs). BITs, which include a subset of mHealth and eHealth interventions, employ a broad range of technologies, such as mobile phones, the Web, and sensors, to support users in changing behaviors and cognitions related to health, mental health, and wellness. We propose a model that conceptually defines BITs, from the clinical aim to the technological delivery framework. The BIT model defines both the conceptual and technological architecture of a BIT. Conceptually, a BIT model should answer the questions why, what, how (conceptual and technical), and when. While BITs generally have a larger treatment goal, such goals generally consist of smaller intervention aims (the "why") such as promotion or reduction of specific behaviors, and behavior change strategies (the conceptual "how"), such as education, goal setting, and monitoring. Behavior change strategies are instantiated with specific intervention components or “elements” (the "what"). The characteristics of intervention elements may be further defined or modified (the technical "how") to meet the needs, capabilities, and preferences of a user. Finally, many BITs require specification of a workflow that defines when an intervention component will be delivered. The BIT model includes a technological framework (BIT-Tech) that can integrate and implement the intervention elements, characteristics, and workflow to deliver the entire BIT to users over time. This implementation may be either predefined or include adaptive systems that can tailor the intervention based on data from the user and the user’s environment. 
The BIT model provides a step towards formalizing the translation of developer aims into intervention components, larger treatments, and methods of delivery in a manner that supports research and communication between investigators on how to design, develop, and deploy BITs. PMID:24905070

  1. The behavioral intervention technology model: an integrated conceptual and technological framework for eHealth and mHealth interventions.

    PubMed

    Mohr, David C; Schueller, Stephen M; Montague, Enid; Burns, Michelle Nicole; Rashidi, Parisa

    2014-06-05

    A growing number of investigators have commented on the lack of models to inform the design of behavioral intervention technologies (BITs). BITs, which include a subset of mHealth and eHealth interventions, employ a broad range of technologies, such as mobile phones, the Web, and sensors, to support users in changing behaviors and cognitions related to health, mental health, and wellness. We propose a model that conceptually defines BITs, from the clinical aim to the technological delivery framework. The BIT model defines both the conceptual and technological architecture of a BIT. Conceptually, a BIT model should answer the questions why, what, how (conceptual and technical), and when. While BITs generally have a larger treatment goal, such goals generally consist of smaller intervention aims (the "why") such as promotion or reduction of specific behaviors, and behavior change strategies (the conceptual "how"), such as education, goal setting, and monitoring. Behavior change strategies are instantiated with specific intervention components or "elements" (the "what"). The characteristics of intervention elements may be further defined or modified (the technical "how") to meet the needs, capabilities, and preferences of a user. Finally, many BITs require specification of a workflow that defines when an intervention component will be delivered. The BIT model includes a technological framework (BIT-Tech) that can integrate and implement the intervention elements, characteristics, and workflow to deliver the entire BIT to users over time. This implementation may be either predefined or include adaptive systems that can tailor the intervention based on data from the user and the user's environment. 
The BIT model provides a step towards formalizing the translation of developer aims into intervention components, larger treatments, and methods of delivery in a manner that supports research and communication between investigators on how to design, develop, and deploy BITs.

  2. Digital Ratiometer

    NASA Technical Reports Server (NTRS)

    Beer, R.

    1985-01-01

    Small, low-cost comparator with 24-bit precision yields ratio signal from pair of analog or digital input signals. Arithmetic logic (bit-slice) chips sample two 24-bit analog-to-digital converters approximately once every millisecond and accumulate the results in two 24-bit registers. Approach readily modified to arbitrary precision.

  3. The Anoikis Effector Bit1 Inhibits EMT through Attenuation of TLE1-Mediated Repression of E-Cadherin in Lung Cancer Cells

    PubMed Central

    Yao, Xin; Pham, Tri; Temple, Brandi; Gray, Selena; Cannon, Cornita; Chen, Renwei; Abdel-Mageed, Asim B.; Biliran, Hector

    2016-01-01

    The mitochondrial Bcl-2 inhibitor of transcription 1 (Bit1) protein is part of an anoikis-regulating pathway that is selectively dependent on integrins. We previously demonstrated that the caspase-independent apoptotic effector Bit1 exerts tumor suppressive function in lung cancer in part by inhibiting anoikis resistance and anchorage-independent growth in vitro and tumorigenicity in vivo. Herein we show a novel function of Bit1 as an inhibitor cell migration and epithelial–mesenchymal transition (EMT) in the human lung adenocarcinoma A549 cell line. Suppression of endogenous Bit1 expression via siRNA and shRNA strategies promoted mesenchymal phenotypes, including enhanced fibroblastoid morphology and cell migratory potential with concomitant downregulation of the epithelial marker E-cadherin expression. Conversely, ectopic Bit1 expression in A549 cells promoted epithelial transition characterized by cuboidal-like epithelial cell phenotype, reduced cell motility, and upregulated E-cadherin expression. Specific downregulation of E-cadherin in Bit1-transfected cells was sufficient to block Bit1-mediated inhibition of cell motility while forced expression of E-cadherin alone attenuated the enhanced migration of Bit1 knockdown cells, indicating that E-cadherin is a downstream target of Bit1 in regulating cell motility. Furthermore, quantitative real-time PCR and reporter analyses revealed that Bit1 upregulates E-cadherin expression at the transcriptional level through the transcriptional regulator Amino-terminal Enhancer of Split (AES) protein. Importantly, the Bit1/AES pathway induction of E-cadherin expression involves inhibition of the TLE1-mediated repression of E-cadherin, by decreasing TLE1 corepressor occupancy at the E-cadherin promoter as revealed by chromatin immunoprecipitation assays. 
Consistent with its EMT inhibitory function, exogenous Bit1 expression significantly suppressed the formation of lung metastases of A549 cells in an in vivo experimental metastasis model. Taken together, our studies indicate Bit1 is an inhibitor of EMT and metastasis in lung cancer and hence can serve as a molecular target in curbing lung cancer aggressiveness. PMID:27655370

  4. Effects of size on three-cone bit performance in laboratory drilled shale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, A.D.; DiBona, B.G.; Sandstrom, J.L.

    1982-09-01

    The effects of size on the performance of 3-cone bits were measured during laboratory drilling tests in shale at simulated downhole conditions. Four Reed HP-SM 3-cone bits with diameters of 6 1/2, 7 7/8, 9 1/2 and 11 inches were used to drill Mancos shale with water-based mud. The tests were conducted at constant borehole pressure, two conditions of hydraulic horsepower per square inch of bit area, three conditions of rotary speed and four conditions of weight-on-bit per inch of bit diameter. The resulting penetration rates and torques were measured. Statistical techniques were used to analyze the data.

  5. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for colored quantum images, and the embedding capacity of this scheme is up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using reflected Gray code to determine the embedded bits from the secret information. Following the transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences reach almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
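
    One plausible classical reading of the embedding rule can be sketched as follows (the paper's exact quantum-circuit mapping may differ; the channel ordering and Gray-code placement here are assumptions): the first secret bit goes into the second LSB of the B channel, and the three channel LSBs are set to the value whose reflected Gray code equals the remaining three secret bits, so extraction is simply the Gray code of the read-back LSBs.

```python
def gray(n):
    """Reflected Gray code of n."""
    return n ^ (n >> 1)

def inv_gray(g):
    """Inverse of the reflected Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def embed(pixel, secret4):
    """Embed 4 secret bits in one RGB pixel (assumed mapping): the
    first bit goes into the second LSB of B; the three channel LSBs
    are set so their reflected Gray code equals the remaining bits."""
    r, g, b = pixel
    first, rest = (secret4 >> 3) & 1, secret4 & 0b111
    b = (b & ~0b10) | (first << 1)
    lsbs = inv_gray(rest)             # value whose Gray code is `rest`
    r = (r & ~1) | ((lsbs >> 2) & 1)
    g = (g & ~1) | ((lsbs >> 1) & 1)
    b = (b & ~1) | (lsbs & 1)
    return (r, g, b)

def extract(pixel):
    """Recover the 4 secret bits from a stego pixel."""
    r, g, b = pixel
    first = (b >> 1) & 1
    lsbs = ((r & 1) << 2) | ((g & 1) << 1) | (b & 1)
    return (first << 3) | gray(lsbs)
```

    Because of the Gray mapping, the stego LSBs frequently differ from the raw secret bits, which is consistent with the abstract's remark that the differences approach 50%.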

  6. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings that result from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits.
Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits which is 75.8% of maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
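
    A numerical sketch of the two-bit quantization at -3 dB SNR (the sample count, magnitude threshold, and plug-in mutual-information estimator below are assumptions; in particular the threshold is not optimized as in the paper): the MSB is the sign of the sample, the LSB marks whether the normalized magnitude exceeds a threshold, and the mutual information between Alice's and Bob's 2-bit symbols is estimated from their joint histogram.

```python
import math
import random

def mutual_info(pairs, k):
    """Plug-in mutual information (bits) between two k-ary sequences."""
    n = len(pairs)
    joint = [[0.0] * k for _ in range(k)]
    for a, b in pairs:
        joint[a][b] += 1.0 / n
    pa = [sum(row) for row in joint]
    pb = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i in range(k):
        for j in range(k):
            if joint[i][j] > 0:
                mi += joint[i][j] * math.log2(joint[i][j] / (pa[i] * pb[j]))
    return mi

rng = random.Random(0)
n, t = 200_000, 1.0          # sample count and magnitude threshold (assumed)
pairs = []
for _ in range(n):
    x = rng.gauss(0.0, 1.0)                          # Alice's sample
    y = x + math.sqrt(2.0) * rng.gauss(0.0, 1.0)     # Bob's, at SNR = -3 dB
    qa = 2 * (x > 0) + (abs(x) > t)                  # MSB = sign, LSB = magnitude
    qb = 2 * (y > 0) + (abs(y) / math.sqrt(3.0) > t) # same rule, normalized
    pairs.append((qa, qb))
mi = mutual_info(pairs, 4)
```

    The estimate is necessarily below the unquantized Gaussian-channel mutual information of about 0.29 bits at this SNR, illustrating why joint (non-binary) coding over both bit levels is needed to approach the 75.8% figure quoted above.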

  7. Test anxiety and self-esteem in senior high school students: a cross-sectional study.

    PubMed

    Sarı, Seda Aybüke; Bilek, Günal; Çelik, Ekrem

    2018-02-01

    In this study, we aimed to determine the level of test anxiety and self-esteem among high school students preparing for the university exam in Bitlis, Turkey, and to investigate the effect of test anxiety on self-esteem. Seven hundred and twenty-four high school students who were preparing for the university entrance examination in Bitlis participated in the study. A questionnaire that includes a socio-demographic data form, the Rosenberg Self-Esteem Scale, and the Revised Test Anxiety Scale was prepared as an e-questionnaire for the students to fill out easily and uploaded to the Bitlis State Hospital's website. Schools were called and informed so that the students could fill out the e-questionnaire on the Internet. The most important findings from our study are that gender influences test anxiety and that self-esteem score and test anxiety level are negatively correlated. It was observed that female students had more test anxiety than male students and that those with higher self-esteem had less test anxiety. Consequently, our study shows that the university entrance examination creates anxiety in students and reduces self-esteem, especially in female students.

  8. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    PubMed Central

    Sang, Jun; Ling, Shenggui; Alam, Mohammad S.

    2012-01-01

    In this paper, a text encryption and hiding method based on the double-random phase-encoding technique is proposed. First, the secret text is transformed into a 2-dimensional array: the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with the double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003
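
    As an illustration of the first step only (the abstract does not specify the transform or the fill values, so the 4-bit split and fill pattern below are assumptions), the secret text's bit stream can be packed into the higher bits of a 2-dimensional array and recovered:

```python
import numpy as np

def embed_text(text, shape, fill=0b1010):
    """Pack the text's bit stream into the high 4 bits of an 8-bit array;
    the low 4 bits are filled with a fixed pattern (hypothetical choice)."""
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    nibbles = bits.reshape(-1, 4)                 # one 4-bit group per element
    vals = (nibbles * [8, 4, 2, 1]).sum(axis=1)
    arr = np.full(shape, fill, dtype=np.uint8).ravel()
    arr[: len(vals)] |= (vals.astype(np.uint8) << 4)
    return arr.reshape(shape), len(vals)

def extract_text(arr, n_nibbles):
    """Recover the secret text from the high 4 bits of the array."""
    vals = (arr.ravel()[:n_nibbles] >> 4).astype(np.uint8)
    bits = ((vals[:, None] >> [3, 2, 1, 0]) & 1).astype(np.uint8).ravel()
    return np.packbits(bits).tobytes().decode()

arr, n = embed_text("secret", (8, 8))
assert extract_text(arr, n) == "secret"
```

    In the full scheme this array would then be double-random phase-encoded and superimposed on the host image; the sketch stops at the bit-packing stage.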

  9. Physically unclonable cryptographic primitives using self-assembled carbon nanotubes.

    PubMed

    Hu, Zhaoying; Comeras, Jose Miguel M Lobez; Park, Hongsik; Tang, Jianshi; Afzali, Ali; Tulevski, George S; Hannon, James B; Liehr, Michael; Han, Shu-Jen

    2016-06-01

    Information security underpins many aspects of modern society. However, silicon chips are vulnerable to hazards such as counterfeiting, tampering and information leakage through side-channel attacks (for example, by measuring power consumption, timing or electromagnetic radiation). Single-walled carbon nanotubes are a potential replacement for silicon as the channel material of transistors due to their superb electrical properties and intrinsic ultrathin body, but problems such as limited semiconducting purity and non-ideal assembly still need to be addressed before they can deliver high-performance electronics. Here, we show that by using these inherent imperfections, an unclonable electronic random structure can be constructed at low cost from carbon nanotubes. The nanotubes are self-assembled into patterned HfO2 trenches using ion-exchange chemistry, and the width of the trench is optimized to maximize the randomness of the nanotube placement. With this approach, two-dimensional (2D) random bit arrays are created that can offer ternary-bit architecture by determining the connection yield and switching type of the nanotube devices. As a result, our cryptographic keys provide a significantly higher level of security than conventional binary-bit architecture with the same key size.
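
    The security gain from the ternary-bit architecture can be quantified by key entropy: each ternary position carries log2 3 ≈ 1.585 bits rather than 1 bit, so a key of the same size spans a much larger keyspace. A back-of-the-envelope illustration (assuming equally likely states; the 64-position key size is made up):

```python
from math import log2

def key_entropy_bits(symbols, key_size):
    """Entropy of a key with `key_size` positions, each holding one of
    `symbols` equally likely values."""
    return key_size * log2(symbols)

binary, ternary = key_entropy_bits(2, 64), key_entropy_bits(3, 64)
print(f"64-position key: {binary:.0f} bits (binary) vs {ternary:.1f} bits (ternary)")
```
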

  10. Physically unclonable cryptographic primitives using self-assembled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Hu, Zhaoying; Comeras, Jose Miguel M. Lobez; Park, Hongsik; Tang, Jianshi; Afzali, Ali; Tulevski, George S.; Hannon, James B.; Liehr, Michael; Han, Shu-Jen

    2016-06-01

    Information security underpins many aspects of modern society. However, silicon chips are vulnerable to hazards such as counterfeiting, tampering and information leakage through side-channel attacks (for example, by measuring power consumption, timing or electromagnetic radiation). Single-walled carbon nanotubes are a potential replacement for silicon as the channel material of transistors due to their superb electrical properties and intrinsic ultrathin body, but problems such as limited semiconducting purity and non-ideal assembly still need to be addressed before they can deliver high-performance electronics. Here, we show that by using these inherent imperfections, an unclonable electronic random structure can be constructed at low cost from carbon nanotubes. The nanotubes are self-assembled into patterned HfO2 trenches using ion-exchange chemistry, and the width of the trench is optimized to maximize the randomness of the nanotube placement. With this approach, two-dimensional (2D) random bit arrays are created that can offer ternary-bit architecture by determining the connection yield and switching type of the nanotube devices. As a result, our cryptographic keys provide a significantly higher level of security than conventional binary-bit architecture with the same key size.

  11. Design of Timer Circuit for Dynamic Data System

    NASA Technical Reports Server (NTRS)

    Young, Nathaniel, III

    2004-01-01

    The branch that I work in is the Aero Electronic Test Branch, which is part of the Research and Testing Division. The Aero Electronic Test Branch deals with electronic control and instrumentation systems, and it supports the research and test work of wind tunnels such as the 10x10, 9x15, and 8x6. Wind tunnels are used in research to test parts of a jet, plane, shuttle, or any other flying object under controlled test conditions. My assignment is to design a programmable trigger circuit on a 19-inch standard rack mount that, on receiving a signal, will latch and hold for a predefined amount of time entered by the user. It should then re-arm itself within 0.25 seconds after the time is finished. The entered time should be visible on a display. The time range has to be from 0 to 600 seconds in 0.01-second increments (600.00). From the information given, counters will be needed to design and build this circuit. A counter, in its simplest form, is a group of flip-flops that can temporarily store bits of information put into the circuit. Counters can be constructed in many different ways, such as from 4 flip-flops (a 4-bit counter), 8 flip-flops, or more, and they are usually cascaded with other counters to reach higher bit widths, such as 16- or 24-bit counters. I will use the counters to count down from any programmable number that I enter, either from a keyboard or a thumbwheel. I will also use counters specifically as frequency dividers to divide the pulses that enter the circuit from a crystal clock; the pulses will need to be divided so that the result functions as a 100 Hz clock putting out 100 pulses per second. A switch will be used to load my inputs, and more than likely a button as well so that I can stop and hold the count at any point in time.
I will use 5 BCD up/down programmable counters and, depending on what kind of "divide-by-N" counter I use, a certain number of frequency-dividing counters for the assignment. After the design is carefully made, a task order will be written and given to the manufacturer to create a rack-mount circuit board matching my specifications. This design will be used with the six-component balance signal conditioner for measurement and electronic system control. It can serve as a timer system for the balance signal conditioner, which performs numerous tests for wind tunnel research, allowing a preset time to be set for how long each test runs. Specifically, my design should be applied to the balance signal conditioner used for the 8x6 wind tunnel research, and it should aid in more efficient research for the 8x6 wind tunnel.
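
    The two counter roles described above can be modeled in a few lines (a behavioral sketch with an assumed 1 MHz crystal, not the actual hardware design):

```python
def divide_by_n(input_pulses, n):
    """Model a divide-by-N counter: emit one output pulse per N input pulses."""
    count, out = 0, 0
    for _ in range(input_pulses):
        count += 1
        if count == n:
            count, out = 0, out + 1
    return out

# A hypothetical 1 MHz crystal divided down to a 100 Hz timebase:
CRYSTAL_HZ = 1_000_000
ticks_per_second = divide_by_n(CRYSTAL_HZ, CRYSTAL_HZ // 100)   # 100 pulses/s

def countdown(seconds, timebase_hz=100):
    """Down-counter preset in 0.01 s increments (0.00-600.00 s)."""
    count = int(round(seconds * timebase_hz))
    while count > 0:
        count -= 1            # one decrement per 100 Hz tick
    return count              # 0 means time elapsed; the circuit re-arms
```
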

  12. Hey! A Tick Bit Me!

    MedlinePlus

    A KidsHealth "For Kids" article about tick bites, available in English and Spanish.

  13. Core drill's bit is replaceable without withdrawal of drill stem - A concept

    NASA Technical Reports Server (NTRS)

    Rushing, F. C.; Simon, A. B.

    1970-01-01

    The drill bit is divided into several sectors. When collapsed to a smaller outside diameter, it is forced down the drill stem; when it reaches bottom, the sectors are forced outward to form a cutting bit. A dulled bit is retracted by reversing this procedure.

  14. Criticality of Low-Energy Protons in Single-Event Effects Testing of Highly-Scaled Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Marshall, Paul W.; Rodbell, Kenneth P.; Gordon, Michael S.; LaBel, Kenneth A.; Schwank, James R.; Dodds, Nathaniel A.; Castaneda, Carlos M.; Berg, Melanie D.; Kim, Hak S.; hide

    2014-01-01

    We report low-energy proton and low-energy alpha-particle single-event effects (SEE) data on 32 nm silicon-on-insulator (SOI) complementary metal-oxide-semiconductor (CMOS) latches and static random access memory (SRAM) that demonstrate the criticality of using low-energy protons for SEE testing of highly-scaled technologies. Low-energy protons produced a significantly higher fraction of multi-bit upsets relative to single-bit upsets when compared to similar alpha-particle data. This difference highlights the importance of performing hardness assurance testing with protons whose energy distribution includes components below 2 megaelectron-volts. The importance of low-energy protons to system-level single-event performance depends on the technology under investigation as well as the target radiation environment.

  15. High speed, real-time, camera bandwidth converter

    DOEpatents

    Bower, Dan E; Bloom, David A; Curry, James R

    2014-10-21

    Image data from a CMOS sensor with 10-bit resolution is reformatted in real time so that it can stream through communications equipment designed to transport data with 8-bit resolution. By reformatting the image data, the 10-bit image data is transmitted in real time, without a frame delay, through the 8-bit communication equipment.
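
    The patent abstract does not spell out the reformatting scheme, but one straightforward way to stream 10-bit samples over an 8-bit channel is to pack the bit stream into bytes (4 samples per 5 bytes); a minimal sketch:

```python
def pack_10bit(samples):
    """Pack 10-bit samples into a byte stream (4 samples -> 5 bytes)."""
    bitbuf = bitlen = 0
    out = bytearray()
    for s in samples:
        bitbuf = (bitbuf << 10) | (s & 0x3FF)
        bitlen += 10
        while bitlen >= 8:
            bitlen -= 8
            out.append((bitbuf >> bitlen) & 0xFF)
    if bitlen:                                   # left-align any leftover bits
        out.append((bitbuf << (8 - bitlen)) & 0xFF)
    return bytes(out)

def unpack_10bit(data, count):
    """Recover `count` 10-bit samples from the packed byte stream."""
    bitbuf = bitlen = 0
    out = []
    it = iter(data)
    while len(out) < count:
        while bitlen < 10:
            bitbuf = (bitbuf << 8) | next(it)
            bitlen += 8
        bitlen -= 10
        out.append((bitbuf >> bitlen) & 0x3FF)
    return out
```
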

  16. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.

    PubMed

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input; it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
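
    The Fitts's-law bit rates quoted above are throughputs of the form index of difficulty over movement time; a small sketch using the standard Shannon formulation (the paper's exact variant and task parameters are not given, so the distance and width below are made up):

```python
from math import log2

def fitts_bitrate(distance, width, movement_time_s):
    """Throughput = ID / MT, with ID = log2(D/W + 1) (Shannon formulation)."""
    index_of_difficulty = log2(distance / width + 1)   # bits
    return index_of_difficulty / movement_time_s       # bits per second

# e.g. a hypothetical 7 cm reach to a 1 cm target in 2.05 s vs. 1.56 s:
slow, fast = fitts_bitrate(7, 1, 2.05), fitts_bitrate(7, 1, 1.56)
print(f"{slow:.3f} bit/s vs {fast:.3f} bit/s")
```
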

  17. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces

    PubMed Central

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input; it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well. PMID:28066170

  18. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.
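
    A minimal sketch of the idea (illustrative, not the patented implementation): literals and pointers are written byte-aligned, while the flag bits accumulate in a separate buffer that is packed and appended only after all input has been read, so the decompressor touches individual bits only when reading flags.

```python
def lzss_compress(data, window=255, min_match=3, max_match=18):
    """LZSS variant with flag bits kept in a separate buffer."""
    flags, body, i = [], bytearray(), 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # naive match search
            k = 0
            while k < max_match and i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            flags.append(1)
            body += bytes([best_off, best_len])  # byte-aligned pointer
            i += best_len
        else:
            flags.append(0)
            body.append(data[i])                 # byte-aligned literal
            i += 1
    packed = bytearray()                         # pack flag bits, append last
    for b in range(0, len(flags), 8):
        byte = 0
        chunk = flags[b:b + 8]
        for bit in chunk:
            byte = (byte << 1) | bit
        packed.append(byte << (8 - len(chunk)) % 8)
    return bytes(body), bytes(packed), len(flags)

def lzss_decompress(body, packed, n_flags):
    out, bi = bytearray(), 0
    for f in range(n_flags):
        if (packed[f // 8] >> (7 - f % 8)) & 1:  # pointer token
            off, length = body[bi], body[bi + 1]
            for _ in range(length):
                out.append(out[-off])
            bi += 2
        else:                                    # literal token
            out.append(body[bi])
            bi += 1
    return bytes(out)

msg = b"abcabcabcabc-xyz-abcabcabcabc"
assert lzss_decompress(*lzss_compress(msg)) == msg
```
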

  19. Residual Highway Convolutional Neural Networks for in-loop Filtering in HEVC.

    PubMed

    Zhang, Yongbing; Shen, Tao; Ji, Xiangyang; Zhang, Yun; Xiong, Ruiqin; Dai, Qionghai

    2018-08-01

    The high efficiency video coding (HEVC) standard achieves about half the bit rate of AVC while keeping the same quality. However, it still cannot satisfy the demand for higher quality in real applications, especially at low bit rates. To further improve the quality of the reconstructed frame while reducing the bit rate, a residual highway convolutional neural network (RHCNN) is proposed in this paper for in-loop filtering in HEVC. The RHCNN is composed of several residual highway units and convolutional layers. In the highway units, there are paths that allow information to pass unimpeded across several layers. Moreover, there is also an identity skip connection (shortcut) from the beginning to the end, which is followed by one small convolutional layer. Without conflicting with the deblocking filter (DF) and sample adaptive offset (SAO) filter in HEVC, the RHCNN is employed as a high-dimension filter following DF and SAO to enhance the quality of reconstructed frames. To facilitate real applications, we apply the proposed method to I frames, P frames, and B frames, respectively. To obtain better performance, the entire quantization parameter (QP) range is divided into several QP bands, and a dedicated RHCNN is trained for each band. Furthermore, we adopt a progressive training scheme in which lower-valued QP bands are trained first and their weights are used as initial weights for higher-valued QP bands. Experimental results demonstrate that the proposed method not only raises the PSNR of the reconstructed frame but also markedly reduces the bit rate compared with the HEVC reference software.
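
    A toy numerical sketch of the highway-unit idea on a flattened feature vector, assuming the common form y = H(x)·T(x) + x·(1 − T(x)); the paper's exact architecture, channel counts, and gating may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_unit(x, Wh, Wt):
    """One highway unit: a gate T blends the transformed signal H(x)
    with the untouched input carried on the identity path."""
    H = np.maximum(Wh @ x, 0.0)        # transform path (ReLU)
    T = sigmoid(Wt @ x)                # transform gate in (0, 1)
    return H * T + x * (1.0 - T)       # carry gate = 1 - T

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
y = x
for _ in range(3):                     # a small stack of highway units...
    y = highway_unit(y, rng.standard_normal((16, 16)) * 0.1,
                        rng.standard_normal((16, 16)) * 0.1)
out = y + x                            # ...plus the long identity shortcut
```
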

  20. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, J.R.

    1997-02-11

    A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.
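
    The patent does not give the mask values; one classic choice (assumed here) prestores the bit pattern of 2^23, so that OR-ing a 14-bit value into the low mantissa bits yields a valid 32-bit float equal to 2^23 + n, recoverable with a single float subtraction:

```python
import struct

MASK = 0x4B000000          # IEEE-754 bit pattern of 8388608.0 == 2**23

def masked_to_float(raw14):
    """Concatenate prestored mask bits with a 14-bit data value, giving a
    valid 32-bit float without an integer-to-float conversion step."""
    word = MASK | (raw14 & 0x3FFF)               # mask bits + mantissa bits
    return struct.unpack("<f", struct.pack("<I", word))[0]

# The processor recovers the sample exactly with one subtraction:
for n in (0, 1, 9999, 0x3FFF):
    assert masked_to_float(n) - 8388608.0 == float(n)
```
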

  1. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, John R.

    1997-01-01

    A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.

  2. Hey! A Mosquito Bit Me! (For Kids)

    MedlinePlus

    A KidsHealth "For Kids" article about mosquito bites, available in English and Spanish.

  3. Bennett clocking of quantum-dot cellular automata and the limits to binary logic scaling.

    PubMed

    Lent, Craig S; Liu, Mo; Lu, Yuhui

    2006-08-28

    We examine power dissipation in different clocking schemes for molecular quantum-dot cellular automata (QCA) circuits. 'Landauer clocking' involves the adiabatic transition of a molecular cell from the null state to an active state carrying data. Cell layout creates devices which allow data in cells to interact and thereby perform useful computation. We perform direct solutions of the equation of motion for the system in contact with the thermal environment and see that Landauer's Principle applies: one must dissipate an energy of at least kBT per bit only when the information is erased. The ideas of Bennett can be applied to keep copies of the bit information by echoing inputs to outputs, thus embedding any logically irreversible circuit in a logically reversible circuit, at the cost of added circuit complexity. A promising alternative which we term 'Bennett clocking' requires only altering the timing of the clocking signals so that bit information is simply held in place by the clock until a computational block is complete, then erased in the reverse order of computation. This approach results in ultralow power dissipation without additional circuit complexity. These results offer a concrete example in which to consider recent claims regarding the fundamental limits of binary logic scaling.
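
    For scale, the Landauer bound referred to above is kB·T·ln 2 per erased bit, a few zeptojoules at room temperature:

```python
from math import log

K_B = 1.380649e-23        # Boltzmann constant, J/K (exact in SI since 2019)

def landauer_limit_joules(temp_kelvin):
    """Minimum dissipation for erasing one bit: k_B * T * ln 2."""
    return K_B * temp_kelvin * log(2)

print(f"{landauer_limit_joules(300):.3e} J per erased bit at 300 K")
```
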

  4. An approach to localize the retinal blood vessels using bit planes and centerline detection.

    PubMed

    Fraz, M M; Barman, S A; Remagnino, P; Hoppe, A; Basit, A; Uyyanonvara, B; Rudnicka, A R; Owen, C G

    2012-11-01

    The change in morphology, diameter, branching pattern or tortuosity of retinal blood vessels is an important indicator of various clinical disorders of the eye and the body. This paper reports an automated method for segmentation of blood vessels in retinal images. A unique combination of techniques for vessel centerline detection and morphological bit plane slicing is presented to extract the blood vessel tree from retinal images. The centerlines are extracted by using the first-order derivative of a Gaussian filter in four orientations, followed by evaluation of derivative signs and average derivative values. Mathematical morphology has emerged as a proficient technique for quantifying the blood vessels in the retina. The shape and orientation map of the blood vessels is obtained by applying a multidirectional morphological top-hat operator with a linear structuring element, followed by bit plane slicing of the vessel-enhanced grayscale image. The centerlines are combined with these maps to obtain the segmented vessel tree. The methodology is tested on three publicly available databases: DRIVE, STARE and MESSIDOR. The results demonstrate that the performance of the proposed algorithm is comparable with state-of-the-art techniques in terms of accuracy, sensitivity and specificity. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
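
    Bit plane slicing itself is straightforward; a toy sketch on a random 8-bit array (the paper applies it to the vessel-enhanced grayscale image):

```python
import numpy as np

def bit_planes(gray):
    """Slice an 8-bit grayscale image into its 8 binary bit planes
    (plane index 7 = most significant)."""
    return [((gray >> b) & 1).astype(np.uint8) for b in range(8)]

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
planes = bit_planes(img)

# The planes reconstruct the image exactly:
recon = sum((p.astype(np.uint16) << b) for b, p in enumerate(planes))
assert np.array_equal(recon.astype(np.uint8), img)
```

    In the segmentation pipeline, only the high-order planes of the enhanced image (which carry the vessel structure rather than noise) would be kept and combined with the centerline map.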

  5. Si photonics technology for future optical interconnection

    NASA Astrophysics Data System (ADS)

    Zheng, Xuezhe; Krishnamoorthy, Ashok V.

    2011-12-01

    Scaling of computing systems requires ultra-efficient interconnects with large bandwidth density. Silicon photonics offers a disruptive solution with advantages in reach, energy efficiency and bandwidth density. We review our progress in developing building blocks for ultra-efficient WDM silicon photonic links. Employing microsolder-based hybrid integration with low parasitics and high density, we optimize photonic devices on SOI platforms and VLSI circuits on more advanced bulk CMOS technology nodes independently. We progressively demonstrated single-channel hybrid silicon photonic transceivers at 5 Gbps and 10 Gbps, and an 80 Gbps arrayed WDM silicon photonic transceiver using reverse-biased depletion ring modulators and Ge waveguide photodetectors. Record-high energy efficiencies of less than 100 fJ/bit and 385 fJ/bit were achieved for the hybrid-integrated transmitter and receiver, respectively. Waveguide-grating-based optical proximity couplers were developed with low loss and large optical bandwidth to enable multi-layer intra/inter-chip optical interconnects. Through thermal engineering of the WDM devices by selective substrate removal, together with a WDM link design using a synthetic wavelength comb, we significantly improved the device tuning efficiency and reduced the required tuning range. Using these innovative techniques, a two-order-of-magnitude reduction in tuning power was achieved, and a tuning cost of only a few tens of fJ/bit is expected for high-data-rate WDM silicon photonic links.

  6. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging

    PubMed Central

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-01-01

    The programmable and digital metamaterials or metasurfaces presented recently have huge potential in designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging in the microwave frequency. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce a randomly programmable metasurface to transform the masks of the modulators, in which the rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using frequency agility, the proposed imaging system makes use of variable modulators at a single frequency, which can avoid object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which can support enough modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results are presented in the microwave frequency, validating our new single-sensor and single-frequency imaging system. PMID:27025907

  7. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging.

    PubMed

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-03-30

    The programmable and digital metamaterials or metasurfaces presented recently have huge potential in designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging in the microwave frequency. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce a randomly programmable metasurface to transform the masks of the modulators, in which the rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using frequency agility, the proposed imaging system makes use of variable modulators at a single frequency, which can avoid object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which can support enough modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results are presented in the microwave frequency, validating our new single-sensor and single-frequency imaging system.

  8. Evolutionary model with genetics, aging, and knowledge

    NASA Astrophysics Data System (ADS)

    Bustillos, Armando Ticona; de Oliveira, Paulo Murilo

    2004-02-01

    We represent a process of learning by using bit strings, where 1 bits represent the knowledge acquired by individuals. Two ways of learning are considered: individual learning by trial and error, and social learning by copying knowledge from other individuals or from parents in the case of species with parental care. The age-structured bit string allows us to study how knowledge is accumulated during life and its influence over the genetic pool of a population after many generations. We use the Penna model to represent the genetic inheritance of each individual. In order to study how the accumulated knowledge influences the survival process, we include it to help individuals to avoid the various death situations. Modifications in the Verhulst factor do not show any special feature due to its random nature. However, by adding years to life as a function of the accumulated knowledge, we observe an improvement of the survival rates while the genetic fitness of the population becomes worse. In this latter case, knowledge becomes more important in the last years of life where individuals are threatened by diseases. Effects of offspring overprotection and differences between individual and social learning can also be observed. Sexual selection as a function of knowledge shows some effects when fidelity is imposed.

  9. Bennett clocking of quantum-dot cellular automata and the limits to binary logic scaling

    NASA Astrophysics Data System (ADS)

    Lent, Craig S.; Liu, Mo; Lu, Yuhui

    2006-08-01

    We examine power dissipation in different clocking schemes for molecular quantum-dot cellular automata (QCA) circuits. 'Landauer clocking' involves the adiabatic transition of a molecular cell from the null state to an active state carrying data. Cell layout creates devices which allow data in cells to interact and thereby perform useful computation. We perform direct solutions of the equation of motion for the system in contact with the thermal environment and see that Landauer's Principle applies: one must dissipate an energy of at least kBT per bit only when the information is erased. The ideas of Bennett can be applied to keep copies of the bit information by echoing inputs to outputs, thus embedding any logically irreversible circuit in a logically reversible circuit, at the cost of added circuit complexity. A promising alternative which we term 'Bennett clocking' requires only altering the timing of the clocking signals so that bit information is simply held in place by the clock until a computational block is complete, then erased in the reverse order of computation. This approach results in ultralow power dissipation without additional circuit complexity. These results offer a concrete example in which to consider recent claims regarding the fundamental limits of binary logic scaling.

  10. What Great Powers Make It: International Order and the Logic of Cooperation in Cyberspace

    DTIC Science & Technology

    2013-01-01

    seems to be a bit of schizophrenia here. On the one hand, cyber—in the form of information and networks—is already changing the nature of war and...of a condominium or a concert of power, relying instead on alliance politics. Alliance politics were both a cause and a cure for the hegemonic wars

  11. Plated wire memory subsystem

    NASA Technical Reports Server (NTRS)

    Reynolds, L.; Tweed, H.

    1972-01-01

    The work performed entailed the design, development, construction and testing of a 4000 word by 18 bit random access, NDRO plated wire memory for use in conjunction with a spacecraft input/output unit and central processing unit. The primary design parameters, in order of importance, were high reliability, low power, volume and weight. A single memory unit, referred to as a qualification model, was delivered.

  12. A joint equalization algorithm in high speed communication systems

    NASA Astrophysics Data System (ADS)

    Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin

    2018-02-01

    This paper presents a joint equalization algorithm for high speed communication systems. The algorithm combines the advantages of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage is based on the CMA algorithm, which is not sensitive to frequency offset; it is located before the carrier recovery loop so that the loop performs better, and it overcomes most of the frequency offset. The post-equalization stage is based on the MMA algorithm, in order to overcome the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms, and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results show the constellation diagrams and the bit error rate curve; both show that the proposed joint equalization algorithm outperforms the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than CMA, 77 times better than MMA equalization, and 9 times better than CMA-MMA equalization.
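    The CMA criterion mentioned above adapts an equalizer by penalizing deviations of the output modulus from a constant; a minimal single-tap sketch (a toy channel with a hypothetical gain, not the paper's Matlab system) illustrates the update rule:

```python
import cmath
import math
import random

random.seed(1)
GAIN = 0.5       # hypothetical unknown channel attenuation
R2 = 1.0         # target squared modulus for a unit-energy QPSK constellation
MU = 0.1         # adaptation step size
w = 1.0 + 0.0j   # single equalizer tap

for _ in range(300):
    s = cmath.exp(1j * (math.pi / 4 + (math.pi / 2) * random.randrange(4)))  # QPSK symbol
    x = GAIN * s                                      # received sample
    y = w * x                                         # equalizer output
    w -= MU * x.conjugate() * y * (abs(y) ** 2 - R2)  # CMA stochastic-gradient update

print(abs(w))  # converges toward 1/GAIN = 2.0
```

    Because CMA constrains only the modulus, it is blind to carrier phase, which is consistent with the paper's choice to pair it with MMA around the carrier recovery loop.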

  13. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  14. A Study of a Standard BIT Circuit.

    DTIC Science & Technology

    1977-02-01

    IENDED BIT APPROACHES FOR QED MODULES AND APPLICATION OF THE ANALYTIC MEASURES 36 4.1 Built-In-Test for Memory Class Modules 37 4.1.1 Random Access...Implementation 68 4.1.5.5 Critical Parameters 68 4.1.5.6 QED Module Test Equipment Requirements 68 4.1.6 Application of Analytic Measures to the...Microprocessor BIT Techniques.. 121 4.2.9 Application of Analytic Measures to the Recommended BIT Approaches 125 4.2.10 Process Class BIT by Partial

  15. Developing Project Based Learning, Integrated Courses from Two Different Colleges at an Institution of Higher Education: An Overview of the Processes, Challenges, and Lessons Learned

    ERIC Educational Resources Information Center

    Rice, Marilyn; Shannon, Li-Jen

    2016-01-01

    All too often, courses in higher education tend to teach isolated bits of facts with little effort to assist in learner assimilation of those facts so as to grow knowledge of the world into a more dynamic understanding. To address the need for a capstone research project for students in their master's program and in an effort to create online…

  16. Soybean aphids making their summer appearance early

    USDA-ARS?s Scientific Manuscript database

    Two small, soft-bodied insects have begun showing up in South Dakota soybean. One is the soybean aphid, and the other is a mealybug. Soybean aphids are yellow to yellow/green and are usually found feeding on the underside of leaves. Incidence of soybean aphid has been a bit higher than typical fo...

  17. Minimization of color shift generated in RGBW quad structure.

    NASA Astrophysics Data System (ADS)

    Kim, Hong Chul; Yun, Jae Kyeong; Baek, Heume-Il; Kim, Ki Duk; Oh, Eui Yeol; Chung, In Jae

    2005-03-01

    The purpose of RGBW quad structure technology is to realize higher brightness than that of a normal panel (RGB stripe structure) by adding a white sub-pixel to the existing RGB stripe structure. However, there is a side effect, called 'color shift', resulting from the increased brightness. This side effect degrades general color characteristics due to changes of 'Hue', 'Brightness' and 'Saturation' as compared with the existing RGB stripe structure. In particular, skin-tone colors show a tendency to get darker in contrast to the normal panel. We've tried to minimize 'color shift' through use of a LUT (Look Up Table) for linear arithmetic processing of input data, data bit expansion to 12-bit for minimizing arithmetic tolerance, and a brightness weight of the white sub-pixel on each R, G, B pixel. The objective of this study is to minimize and keep the Δu'v' value (commonly used to represent a color difference), the quantitative basis of the color difference between the RGB stripe structure and the RGBW quad structure, below the 0.01 level (existing: 0.02 or higher) using the Macbeth ColorChecker, which is a general reference for color characteristics.

  18. A novel approach of an absolute coding pattern based on Hamiltonian graph

    NASA Astrophysics Data System (ADS)

    Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang

    2017-02-01

    In this paper, a novel approach to an optical-type absolute rotary encoder coding pattern is presented. The concept is based on the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern by using variations of the Hamiltonian graph principle. 12 encoding bits are used in the single ring, read by a linear-array CCD, to achieve a 1080-position cyclic encoding. Besides, a 2-by-2 matrix is used as a unit in the 2-track disk to achieve a 16-bit encoding pattern by using an area-array CCD sensor (as a sample). Finally, a higher resolution can be gained by electronic subdivision of the signals. Compared with the conventional Gray or binary code pattern (for a 2^n resolution), this new pattern has a higher resolution (2^n*n) with fewer coding tracks, which means the new pattern can lead to a smaller encoder, which is essential in industrial production.
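    The "unique sequence" idea behind such single-track absolute codes is closely related to De Bruijn sequences, in which every n-bit window around the ring occurs exactly once; the sketch below (a standard construction, not the paper's Hamiltonian-graph design) verifies that window-uniqueness property:

```python
def de_bruijn(k: int, n: int) -> str:
    """De Bruijn sequence B(k, n) via the standard Lyndon-word concatenation."""
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return "".join(map(str, seq))

ring = de_bruijn(2, 4)                 # length 2^4 = 16, binary alphabet
# Every 4-bit window (with wraparound) is distinct, so reading any 4
# consecutive bits identifies the absolute position on the ring.
windows = {(ring + ring[:3])[i:i + 4] for i in range(len(ring))}
print(len(ring), len(windows))  # 16 16
```

    A real encoder disk would use a longer window matched to the sensor, but the position-decoding principle is the same.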

  19. New PDC bit optimizes drilling performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besson, A.; Gudulec, P. le; Delwiche, R.

    1996-05-01

    The lithology in northwest Argentina contains a major section where polycrystalline diamond compact (PDC) bits have not succeeded in the past. The section consists of dense shales and cemented sandstone stringers with limestone laminations. Conventional PDC bits experienced premature failures in the section. A new generation PDC bit tripled rate of penetration (ROP) and increased by five times the potential footage per bit. Recent improvements in PDC bit technology that enabled the improved performance include: the ability to control the PDC cutter quality; use of an advanced cutter layout defined by 3D software; use of cutter face design code for optimized cleaning and cooling; and mastering vibration reduction features, including spiraled blades.

  20. A comparison of orthogonal transformations for digital speech processing.

    NASA Technical Reports Server (NTRS)

    Campanella, S. J.; Robinson, G. S.

    1971-01-01

    Discrete forms of the Fourier, Hadamard, and Karhunen-Loeve transforms are examined for their capacity to reduce the bit rate necessary to transmit speech signals. To rate their effectiveness in accomplishing this goal the quantizing error (or noise) resulting for each transformation method at various bit rates is computed and compared with that for conventional companded PCM processing. Based on this comparison, it is found that Karhunen-Loeve provides a reduction in bit rate of 13.5 kbits/s, Fourier 10 kbits/s, and Hadamard 7.5 kbits/s as compared with the bit rate required for companded PCM. These bit-rate reductions are shown to be somewhat independent of the transmission bit rate.
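    Of the three transforms compared, the Hadamard transform is the cheapest to implement, since its matrix contains only ±1 entries (additions and subtractions, no multiplications). A small sketch of the Sylvester construction and the row orthogonality that makes the transform invertible by its own transpose (an illustration of the transform itself, not of the paper's rate results):

```python
def sylvester_hadamard(order: int):
    """Sylvester construction: H_{2m} = [[H, H], [H, -H]], starting from [[1]]."""
    h = [[1]]
    while len(h) < order:
        h = [row + row for row in h] + [row + [-v for v in row] for row in h]
    return h

H = sylvester_hadamard(8)
# Rows are mutually orthogonal: H @ H^T = 8 * I, so the inverse transform
# is just (1/8) * H^T -- again only additions and subtractions.
gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in H] for r1 in H]
print(all(gram[i][j] == (8 if i == j else 0) for i in range(8) for j in range(8)))  # True
```

    The Fourier and Karhunen-Loeve transforms compact energy better (hence their larger bit-rate reductions in the abstract) but require multiplications.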

  1. Neighborhood comparison operator

    NASA Technical Reports Server (NTRS)

    Gennery, Donald B. (Inventor)

    1987-01-01

    Digital values in a moving window are compared by an operator having nine comparators (18) connected to line buffers (16) for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table (20) to provide 14 bits of information, including 2 bits which control a selector (22) to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logic OR of all neighboring pixels.
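    A software sketch of the addressing scheme described above (the window layout, bit ordering, and names are illustrative assumptions, not taken from the patent):

```python
def neighborhood_address(window, threshold, compare_to_center=True):
    """Form a 9-bit LUT address from a 3x3 window, as in the described operator:
    bit 8 = center vs threshold; bits 0-7 = neighbors vs center (or threshold),
    selected by a single control flag."""
    center = window[1][1]
    ref = center if compare_to_center else threshold
    neighbors = [window[r][c] for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    addr = int(center > threshold) << 8          # center always vs threshold
    for i, p in enumerate(neighbors):            # one comparator bit per neighbor
        addr |= int(p > ref) << i
    return addr

w = [[5, 9, 5],
     [1, 7, 7],
     [3, 8, 2]]
print(bin(neighborhood_address(w, threshold=4)))  # 0b101000010
```

    In the hardware described, this address (plus the odd/even pixel/line bits) indexes the lookup table, so the table contents, not wiring, define the operator.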

  2. Serial data correlator/code translator

    NASA Technical Reports Server (NTRS)

    Morgan, L. E. (Inventor)

    1982-01-01

    A system for analyzing asynchronous signals containing bits of information and ensuring the validity of those signals is described; it samples each bit of information a plurality of times and feeds the sampled pieces of the bits into a sequence controller. The sequence controller has a plurality of maps or programs through which the sampled pieces of bits are stepped so as to identify the particular bit of information and determine the validity and phase of the bit. The step in which the sequence controller is clocked is controlled by a storage register. A data decoder decodes the information fed out of the storage register and feeds such information to shift registers for storage.

  3. Four channel Laser Firing Unit using laser diodes

    NASA Technical Reports Server (NTRS)

    Rosner, David, Sr.; Spomer, Edwin, Sr.

    1994-01-01

    This paper describes the accomplishments and status of PS/EDD's (Pacific Scientific/Energy Dynamics Division) internal research and development effort to prototype and demonstrate a practical four channel laser firing unit (LFU) that uses laser diodes to initiate pyrotechnic events. The LFU individually initiates four ordnance devices using the energy from four diode lasers carried over fiber optics. The LFU demonstrates end-to-end optical built-in test (BIT) capabilities. Both Single Fiber Reflective BIT and Dual Fiber Reflective BIT approaches are discussed and reflection loss data is presented. This paper includes detailed discussions of the advantages and disadvantages of both BIT approaches, all-fire and no-fire levels, and BIT detection levels. The following topics are also addressed: electronic control and BIT circuits, fiber optic sizing and distribution, and an electromechanical shutter-type safe/arm device. This paper shows the viability of laser diode initiation systems and single fiber BIT for typical military applications.

  4. Note: optical receiver system for 152-channel magnetoencephalography.

    PubMed

    Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong

    2014-11-01

    An optical receiver system comprising 13 serial-data restore/synchronizer modules and a single module combiner converted optical 32-bit serial data into 32-bit synchronous parallel data for a computer to acquire 152-channel magnetoencephalography (MEG) signals. A serial-data restore/synchronizer module identified the 32 channel-voltage bits within the 48-bit streaming serial data, and then consecutively reproduced the 32-bit serial data 13 times, acting on a synchronous clock. After selecting a single one of the 13 reproduced data streams from each module, the module combiner converted it into 32-bit parallel data, which were carried to a 32-port digital input board in a computer. When the receiver system, together with the optical transmitters, was applied to 152-channel superconducting quantum interference device sensors, this MEG system maintained a field noise level of 3 fT/√Hz @ 100 Hz at a sample rate of 1 kSample/s per channel.

  5. Electronic Photography at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack; Judge, Nancianne

    1995-01-01

    An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device-dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image-processing-related information loss; investigation of the use of small-kernel optimal filters for image restoration; and characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8 bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8 bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Lookup tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.

  6. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
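    The post-processing described, self-differencing followed by keeping a few least-significant bits, can be sketched on toy 8-bit samples (illustrative values, not the experimental data):

```python
def random_bits(samples, delay=1, n_lsb=5, width=8):
    """Self-difference digitized samples with a given delay, then keep the
    n_lsb least-significant bits of each wraparound difference, mirroring
    the scheme described (5 LSBs per sample in the experiment)."""
    bits = []
    for a, b in zip(samples[delay:], samples[:-delay]):
        d = (a - b) % (1 << width)       # difference modulo the ADC range
        for k in range(n_lsb):
            bits.append((d >> k) & 1)
    return bits

samples = [201, 17, 133, 90, 244, 6, 77]   # stand-in 8-bit chaotic-intensity samples
out = random_bits(samples)
print(len(out))  # 6 differences x 5 bits = 30 bits
```

    Self-differencing whitens slow intensity drifts, and discarding high-order bits removes the bias that survives it; at a 50 ps sampling period and 5 bits per sample this yields the 100 Gbps figure quoted.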

  7. Cascade Error Projection: A Learning Algorithm for Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1996-01-01

    In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm operates on only one layer, whereas the other set of weights can be calculated deterministically. In association with the dynamical stepsize change concept to convert the weight update from an infinite space into a finite space, the relation between the current stepsize and the previous energy level is also given, and the estimation procedure for the optimal stepsize is used for validation of our proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme. Therefore, simplicity in hardware implementation is also obtained. Furthermore, this analysis allows us to select from other methods (such as conjugate gradient descent or Newton's second-order method) one which will be a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis presents the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower-bit weight quantization.

  8. Astronomical random numbers for quantum foundations experiments

    NASA Astrophysics Data System (ADS)

    Leung, Calvin; Brown, Amy; Nguyen, Hien; Friedman, Andrew S.; Kaiser, David I.; Gallicchio, Jason

    2018-04-01

    Photons from distant astronomical sources can be used as a classical source of randomness to improve fundamental tests of quantum nonlocality, wave-particle duality, and local realism through Bell's inequality and delayed-choice quantum eraser tests inspired by Wheeler's cosmic-scale Mach-Zehnder interferometer gedanken experiment. Such sources of random numbers may also be useful for information-theoretic applications such as key distribution for quantum cryptography. Building on the design of an astronomical random number generator developed for the recent cosmic Bell experiment [Handsteiner et al., Phys. Rev. Lett. 118, 060401 (2017), 10.1103/PhysRevLett.118.060401], in this paper we report on the design and characterization of a device that, with 20-nanosecond latency, outputs a bit based on whether the wavelength of an incoming photon is greater than or less than ≈700 nm. Using the one-meter telescope at the Jet Propulsion Laboratory Table Mountain Observatory, we generated random bits from astronomical photons in both color channels from 50 stars of varying color and magnitude, and from 12 quasars with redshifts up to z = 3.9. With stars, we achieved bit rates of ∼1×10^6 Hz/m^2, limited by saturation of our single-photon detectors, and with quasars of magnitudes between 12.9 and 16, we achieved rates between ∼10^2 and 2×10^3 Hz/m^2. For bright quasars, the resulting bitstreams exhibit sufficiently low amounts of statistical predictability as quantified by the mutual information. In addition, a sufficiently high fraction of the bits generated are of true astronomical origin to address both the locality and freedom-of-choice loopholes when used to set the measurement settings in a test of the Bell-CHSH inequality.

  9. Analyzing mobile WiMAX base station deployment under different frequency planning strategies

    NASA Astrophysics Data System (ADS)

    Salman, M. K.; Ahmad, R. B.; Ali, Ziad G.; Aldhaibani, Jaafar A.; Fayadh, Rashid A.

    2015-05-01

    The frequency spectrum is a precious and scarce resource in the communication markets. Therefore, different techniques are adopted to utilize the available spectrum in deploying WiMAX base stations (BS) in cellular networks. In this paper several types of frequency planning techniques are illustrated, and a comprehensive comparative study between the conventional frequency reuse of 1 (FR of 1) and fractional frequency reuse (FFR) is presented. These techniques are widely used in network deployment because they employ a universal frequency plan (using all the available bandwidth) in base station installation/configuration within the network. This paper presents a network model of 19 base stations for the comparison of the aforesaid frequency planning techniques. Users are randomly distributed among base stations; users' resource mapping and their burst profile selection are based on the measured signal-to-interference-plus-noise ratio (SINR). Simulation results reveal that FFR has advantages over the conventional FR of 1 on various metrics. 98% of downlink resources (slots) are exploited when FFR is applied, versus 81% with FR of 1. The data rate under FFR increases to 10.6 Mbps, while it is 7.98 Mbps with FR of 1. The spectral efficiency is better (1.072 bps/Hz) with FR of 1 than with FFR (0.808 bps/Hz), since FR of 1 exploits all the bandwidth. The subcarrier efficiency shows how many data bits can be carried by subcarriers under different frequency planning techniques; the system can carry more data bits under FFR (2.40 bits/subcarrier) than under FR of 1 (1.998 bits/subcarrier). This study confirms that FFR can perform better than conventional frequency planning (FR of 1), making it a strong candidate for WiMAX BS deployment in cellular networks.

  10. Purpose-built PDC bit successfully drills 7-in liner equipment and formation: An integrated solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puennel, J.G.A.; Huppertz, A.; Huizing, J.

    1996-12-31

    Historically, drilling out the 7-in. liner equipment has been a time-consuming operation with a limited success ratio. The success of the operation is highly dependent on the type of drill bit employed. Tungsten carbide mills and mill-tooth rock bits required from 7.5 to 11.5 hours, respectively, to drill the pack-off bushings, landing collar, shoe track and shoe. Rates of penetration dropped dramatically when drilling the float equipment. While conventional PDC bits have drilled the liner equipment successfully (averaging 9.7 hours), severe bit damage invariably prevented them from continuing to drill the formation at cost-effective penetration rates. This paper describes the integrated development and application of an IADC M433 class PDC bit, which was designed specifically to drill out the 7-in. liner equipment and continue drilling the formation at satisfactory penetration rates. The development was the result of a joint investigation in which the operator and bit/liner manufacturers shared their expertise in solving a drilling problem. The heavy-set bit was developed following drill-off tests conducted to investigate the drillability of the 7-in. liner equipment. Key features of the new bit and its application onshore in The Netherlands will be presented and analyzed.

  11. A dressed spin qubit in silicon

    DOE PAGES

    Laucht, Arne; Kalra, Rachpon; Simmons, Stephanie; ...

    2016-10-17

    Coherent dressing of a quantum two-level system provides access to a new quantum system with improved properties: a different and easily tunable level splitting, faster control and longer coherence times. In our work we investigate the properties of the dressed, donor-bound electron spin in silicon, and assess its potential as a quantum bit in scalable architectures. The two dressed spin-polariton levels constitute a quantum bit that can be coherently driven with an oscillating magnetic field, an oscillating electric field, frequency modulation of the driving field or a simple detuning pulse. We measure coherence times of T2ρ* = 2.4 ms and T2ρ^Hahn = 9 ms, one order of magnitude longer than those of the undressed spin. Moreover, the use of the dressed states enables coherent coupling of the solid-state spins to electric fields and mechanical oscillations.

  12. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide on the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8-phase shift keying (8-PSK) modulation and two transmit antennas, revealing the best implementation under these conditions. The effect of the number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.

  13. Security analysis of orthogonal-frequency-division-multiplexing-based continuous-variable quantum key distribution with imperfect modulation

    NASA Astrophysics Data System (ADS)

    Zhang, Hang; Mao, Yu; Huang, Duan; Li, Jiawei; Zhang, Ling; Guo, Ying

    2018-05-01

    We introduce a reliable scheme for continuous-variable quantum key distribution (CV-QKD) using orthogonal frequency division multiplexing (OFDM). As a spectrally efficient multiplexing technique, OFDM allows a large number of closely spaced orthogonal subcarrier signals to carry data on several parallel data streams or channels. We place emphasis on the modulator impairments which would inevitably arise in the OFDM system and analyze how these impairments affect the OFDM-based CV-QKD system. Moreover, we also evaluate the security in the asymptotic limit and against the Pirandola-Laurenza-Ottaviani-Banchi upper bound. Results indicate that although imperfect modulation brings about a slight decrease in the secret key bit rate of each subcarrier, the multiplexing technique combined with CV-QKD yields a desirable improvement in the total secret key bit rate, which can raise the numerical value by about an order of magnitude.

  14. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    PubMed

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modified the algorithm so that it provides even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm performs faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream which is stored or transmitted. I applied it to the compression of multichannel ECG data. I also presented a specific procedure, based on the modified algorithm, for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding compression of multichannel ECG data. Furthermore, in order to compress a signal which is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  15. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
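    The paper's bi-directional LDPC decoder is far more elaborate, but the core idea of parity-check (syndrome) based error correction can be illustrated with a Hamming(7,4) single-error corrector (a textbook construction, not the QKD system's code):

```python
# Hamming(7,4): the columns of H are the binary numbers 1..7, so the
# syndrome of a single-bit error directly names the flipped position.
H = [[(c >> r) & 1 for c in range(1, 8)] for r in range(3)]

def correct(word):
    """Correct at most one flipped bit in a 7-bit word (list of 0/1)."""
    syndrome = sum(((sum(h * w for h, w in zip(row, word)) % 2) << r)
                   for r, row in enumerate(H))
    if syndrome:
        word = word[:]
        word[syndrome - 1] ^= 1          # syndrome is the 1-based error position
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]         # the all-zero word is a valid codeword
received = codeword[:]
received[4] ^= 1                          # inject a single channel error
print(correct(received) == codeword)      # True
```

    LDPC codes generalize this idea to large sparse H matrices decoded iteratively, which is what makes GPU parallelization effective at megabit/s key rates.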

  16. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs better than its prior-art binary counterpart, achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  17. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  18. Security analysis of quadratic phase based cryptography

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Healy, John J.; Sheridan, John T.

    2016-09-01

    The linear canonical transform (LCT) is essential in modeling a coherent light field propagation through first-order optical systems. Recently, a generic optical system, known as a Quadratic Phase Encoding System (QPES), for encrypting a two-dimensional (2D) image has been reported. It has been reported that, together with two phase keys, the individual LCT parameters serve as keys of the cryptosystem. However, it is important that such encryption systems also satisfy certain dynamic security properties. Therefore, in this work, we examine cryptographic evaluation methods, such as the Avalanche Criterion and Bit Independence, which indicate the degree of security of cryptographic algorithms, on QPES. We compare our simulation results with the conventional Fourier and the Fresnel transform based DRPE systems. The results show that the LCT-based DRPE has better avalanche and bit independence characteristics than the conventional Fourier and Fresnel based encryption systems.
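
    The avalanche criterion referred to above can be measured for any deterministic transform: flip a single input bit and count how many output bits change, with a fraction near one half indicating strong diffusion. The sketch below is purely illustrative, using SHA-256 as a stand-in transform rather than the QPES cipher of the paper:

```python
import hashlib

def bits(data: bytes) -> list:
    """Expand bytes into a flat list of bits."""
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def avalanche_fraction(message: bytes, bit_index: int) -> float:
    """Flip one input bit and return the fraction of output bits that change."""
    flipped = bytearray(message)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)
    out_a = bits(hashlib.sha256(message).digest())
    out_b = bits(hashlib.sha256(bytes(flipped)).digest())
    return sum(a != b for a, b in zip(out_a, out_b)) / len(out_a)

# Average over every input bit position; ~0.5 indicates a strong avalanche effect.
msg = b"plaintext block under test"
avg = sum(avalanche_fraction(msg, i) for i in range(len(msg) * 8)) / (len(msg) * 8)
print(f"mean avalanche fraction: {avg:.3f}")
```

    For an image cipher such as QPES, the same loop would flip one bit of the plaintext image and compare the two encrypted images bit by bit.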

  19. Choice of optical system is critical for the security of double random phase encryption systems

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Cassidy, Derek; Zhao, Liang; Ryle, James P.; Healy, John J.; Sheridan, John T.

    2017-06-01

    The linear canonical transform (LCT) is used in modeling a coherent light-field propagation through first-order optical systems. Recently, a generic optical system, known as the quadratic phase encoding system (QPES), for encrypting a two-dimensional image has been reported. In such systems, two random phase keys and the individual LCT parameters (α,β,γ) serve as secret keys of the cryptosystem. It is important that such encryption systems also satisfy some dynamic security properties. We, therefore, examine such systems using two cryptographic evaluation methods, the avalanche effect and bit independence criterion, which indicate the degree of security of the cryptographic algorithms using QPES. We compared our simulation results with the conventional Fourier and the Fresnel transform-based double random phase encryption (DRPE) systems. The results show that the LCT-based DRPE has excellent avalanche and bit independence characteristics compared to the conventional Fourier and Fresnel-based encryption systems.

  20. Analog Nonvolatile Computer Memory Circuits

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd

    2007-01-01

    In nonvolatile random-access memory (RAM) circuits of a proposed type, digital data would be stored in analog form in ferroelectric field-effect transistors (FFETs). This type of memory circuit would offer advantages over prior volatile and nonvolatile types: In a conventional complementary metal oxide/semiconductor static RAM, six transistors must be used to store one bit, and storage is volatile in that data are lost when power is turned off. In a conventional dynamic RAM, three transistors must be used to store one bit, and the stored bit must be refreshed every few milliseconds. In contrast, in a RAM according to the proposal, data would be retained when power was turned off, each memory cell would contain only two FFETs, and the cell could store multiple bits (the exact number of bits depending on the specific design). Conventional flash memory circuits afford nonvolatile storage, but they operate at reading and writing times of the order of thousands of conventional computer memory reading and writing times and, hence, are suitable for use only as off-line storage devices. In addition, flash memories cease to function after limited numbers of writing cycles. The proposed memory circuits would not be subject to either of these limitations. Prior developmental nonvolatile ferroelectric memories are limited to one bit per cell, whereas, as stated above, the proposed memories would not be so limited. The design of a memory circuit according to the proposal must reflect the fact that FFET storage is only partly nonvolatile, in that the signal stored in an FFET decays gradually over time. (Retention times of some advanced FFETs exceed ten years.) Instead of storing a single bit of data as either a positively or negatively saturated state in a ferroelectric device, each memory cell according to the proposal would store two values. The two FFETs in each cell would be denoted the storage FFET and the control FFET. 
The storage FFET would store an analog signal value, between the positive and negative FFET saturation values. This signal value would represent a numerical value of interest corresponding to multiple bits: for example, if the memory circuit were designed to distinguish among 16 different analog values, then each cell could store 4 bits. Simultaneously with writing the signal value in the storage FFET, a negative saturation signal value would be stored in the control FFET. The decay of this control-FFET signal from the saturation value would serve as a model of the decay, for use in regenerating the numerical value of interest from its decaying analog signal value. The memory circuit would include addressing, reading, and writing circuitry that would have features in common with the corresponding parts of other memory circuits, but would also have several distinctive features. The writing circuitry would include a digital-to-analog converter (DAC); the reading circuitry would include an analog-to-digital converter (ADC). For writing a numerical value of interest in a given cell, that cell would be addressed, the saturation value would be written in the control FFET in that cell, and the non-saturation analog value representing the numerical value of interest would be generated by use of the DAC and stored in the storage FFET in that cell. For reading the numerical value of interest stored in a given cell, the cell would be addressed, the ADC would convert the decaying control and storage analog signal values to digital values, and an associated fast digital processing circuit would regenerate the numerical value from digital values.
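
    The decay-compensation scheme lends itself to a simple numerical sketch: the control FFET, always written to saturation, measures the decay factor, which the read circuitry divides out of the storage FFET's signal before the result is re-quantized. All constants below (normalized saturation, 16 levels, exponential decay) are illustrative assumptions, not values from the proposal:

```python
import math

SATURATION = 1.0          # assumed normalized write amplitude for the control FFET
LEVELS = 16               # 16 distinguishable analog levels -> 4 bits per cell

def write_cell(value_bits: int):
    """Return the (storage, control) analog pair written into a cell."""
    analog = -SATURATION + 2 * SATURATION * value_bits / (LEVELS - 1)
    return analog, -SATURATION   # control FFET is always saturated negatively

def read_cell(storage, control, elapsed, tau=10.0):
    """Model exponential signal decay, then use the control signal to undo it."""
    decay = math.exp(-elapsed / tau)
    measured_storage = storage * decay
    measured_control = control * decay
    # The control FFET was written at -SATURATION, so its measured value
    # reveals the decay factor; divide it out of the storage reading.
    recovered = measured_storage * (-SATURATION / measured_control)
    # Re-quantize to the nearest of the 16 levels (the fast digital step).
    return round((recovered + SATURATION) * (LEVELS - 1) / (2 * SATURATION))

storage, control = write_cell(11)                 # store the 4-bit value 0b1011
print(read_cell(storage, control, elapsed=3.0))   # decayed, but still reads 11
```

    Because both FFETs in a cell decay by the same factor, the ratio cancels the decay exactly; in hardware the division and re-quantization correspond to the ADC plus fast digital processing described above.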

  1. Simple proof of the impossibility of bit commitment in generalized probabilistic theories using cone programming

    NASA Astrophysics Data System (ADS)

    Sikora, Jamie; Selby, John

    2018-04-01

    Bit commitment is a fundamental cryptographic task, in which Alice commits a bit to Bob such that she cannot later change the value of the bit, while, simultaneously, the bit is hidden from Bob. It is known that ideal bit commitment is impossible within quantum theory. In this work, we show that it is also impossible in generalized probabilistic theories (under a small set of assumptions) by presenting a quantitative trade-off between Alice's and Bob's cheating probabilities. Our proof relies crucially on a formulation of cheating strategies as cone programs, a natural generalization of semidefinite programs. In fact, using the generality of this technique, we prove that this result holds for the more general task of integer commitment.

  2. Multi-Bit Quantum Private Query

    NASA Astrophysics Data System (ADS)

    Shi, Wei-Xu; Liu, Xing-Tong; Wang, Jian; Tang, Chao-Jing

    2015-09-01

    Most of the existing Quantum Private Query (QPQ) protocols provide only single-bit query service, and thus have to be repeated several times when more bits are retrieved. Wei et al.'s scheme for block queries requires a high-dimension quantum key distribution system to sustain it, which is still restricted to the laboratory. Here, based on Markus Jakobi et al.'s single-bit QPQ protocol, we propose a multi-bit quantum private query protocol, in which the user can get access to several bits within one single query. We also extend the proposed protocol to block queries, using a binary matrix to guard database security. Analysis in this paper shows that our protocol has better communication complexity and implementability, and can achieve a considerable level of security.

  3. Neighborhood comparison operator

    NASA Technical Reports Server (NTRS)

    Gennery, D. B. (Inventor)

    1985-01-01

    Digital values in a moving window are compared by an operator having nine comparators connected to line buffers for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table to provide 14 bits of information, including 2 bits which control a selector to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logical OR of all nine pixels through a circuit that implements a very wide OR gate.
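
    A behavioral sketch of the nine-comparator stage: one mode bit selects whether the eight neighbours are compared against the centre pixel or against the threshold, while the centre is always compared against the threshold, yielding a 9-bit word that would address the lookup table (the LUT and selector stages of the patent are omitted here):

```python
def comparator_word(window, threshold, compare_to_center):
    """window: 3x3 list of pixel values; returns a 9-bit comparator result.

    Bit 8 is the centre-vs-threshold comparison; bits 0-7 compare each
    neighbour against either the centre pixel or the threshold, selected
    by the single mode bit `compare_to_center`.
    """
    center = window[1][1]
    reference = center if compare_to_center else threshold
    neighbours = [window[r][c] for r in range(3) for c in range(3)
                  if (r, c) != (1, 1)]
    word = (center > threshold) << 8
    for i, n in enumerate(neighbours):
        word |= (n > reference) << i
    return word

win = [[5, 9, 3],
       [7, 6, 2],
       [8, 1, 4]]
print(bin(comparator_word(win, threshold=4, compare_to_center=True)))
```

    In the hardware, the 9-bit word (plus the 2 odd-even bits) would drive the lookup-table address lines every pixel clock; here it is simply returned as an integer.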

  4. Ultrasonic rotary-hammer drill

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph (Inventor); Badescu, Mircea (Inventor); Sherrit, Stewart (Inventor); Kassab, Steve (Inventor); Bao, Xiaoqi (Inventor)

    2010-01-01

    A mechanism for drilling or coring by a combination of sonic hammering and rotation. The drill includes a hammering section with a set of preload weights mounted atop a hammering actuator and an axial passage through the hammering section. In addition, a rotary section includes a motor coupled to a drive shaft that traverses the axial passage through the hammering section. A drill bit is coupled to the drive shaft for drilling by a combination of sonic hammering and rotation. The drill bit includes a fluted shaft leading to a distal crown cutter with teeth. The bit penetrates sampled media by repeated hammering action. In addition, the bit is rotated. As it rotates the fluted bit carries powdered cuttings helically upward along the side of the bit to the surface.

  5. Intracrystalline cation order in a lunar crustal troctolite

    NASA Technical Reports Server (NTRS)

    Smyth, J. R.

    1975-01-01

    Lunar sample 76535 appears to be one of the most slowly cooled bits of silicate material yet studied. It provides, therefore, a unique opportunity for the study of ordering processes in the minerals present. A better understanding of these processes may permit better characterization of the thermal history of this and similar rocks. The cation ordering in the olivine is consistent with terrestrial olivines, favoring the interpretation that ordering in olivines increases with increasing temperature. In low bronzite, the deviations from the common orthopyroxene space group appear to be caused by cation order on the basis of four M sites instead of two. The degree of cation order in each of these minerals is consistent with the rock having been excavated from its depth of formation by tectonic or impact processes without being reheated above 300 C.

  6. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2014-10-30

    to single photon detection, at comparable detection efficiencies. On the other hand, error-correction codes are better developed for small-alphabet...protocol is several orders of magnitude better than the Shapiro protocol, which needs entangled states. The bits/mode performance achieved by our...putting together a software tool implemented in MATLAB, which talks to the MODTRAN database via an intermediate numerical dump of transmission data

  7. The MOOC Moment and the End of Reform

    ERIC Educational Resources Information Center

    Bady, Aaron

    2013-01-01

    The Massive Open Online Courses (MOOC) phenomenon has happened very quickly and is also a shift in discourse. This article aims to slow things down and go through the last year or so with a bit more care than we're usually able to do, in order to do a "close reading" of the year of the MOOC. Author Aaron Bady ventures an opinion to say…

  8. Intrinsic evolution of controllable oscillators in FPTA-2

    NASA Technical Reports Server (NTRS)

    Sekanina, Lukas; Zebulum, Ricardo S.

    2005-01-01

    Simple one- and two-bit controllable oscillators were intrinsically evolved using only four cells of the Field Programmable Transistor Array (FPTA-2). These oscillators can produce different oscillations for different settings of the control signals. Therefore, they could be used, in principle, to compose complex networks of oscillators that could exhibit rich dynamical behavior in order to perform a computation or to model a desired system.

  9. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, David D.; Napolitano, Jr., Leonard M.; Redinbo, G. Robert; Shreeve, William O.

    1994-01-01

    An internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system, and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented.
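
    The nibble-oriented protection described here can be illustrated with a much simpler single-symbol-correcting code over GF(16). This sketch is not the patent's dual-orthogonal-basis implementation; it only demonstrates that two 4-bit check nibbles suffice to locate and correct one corrupted data nibble:

```python
# GF(16) arithmetic with primitive polynomial x^4 + x + 1
EXP, LOG = [0] * 32, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def nibbles(word32):
    """Partition a 32-bit word into eight 4-bit nibbles (low nibble first)."""
    return [(word32 >> (4 * i)) & 0xF for i in range(8)]

def parity(data_nibbles):
    """Two check nibbles: a plain XOR sum and an alpha-weighted GF(16) sum."""
    p0 = p1 = 0
    for i, n in enumerate(data_nibbles):
        p0 ^= n
        p1 ^= gf_mul(EXP[i], n)
    return p0, p1

def correct(data_nibbles, p0, p1):
    """Correct a single corrupted data nibble using the two syndromes."""
    s0, s1 = parity(data_nibbles)
    s0 ^= p0
    s1 ^= p1
    if s0 == 0 and s1 == 0:
        return list(data_nibbles)        # clean codeword
    pos = (LOG[s1] - LOG[s0]) % 15       # alpha^pos = s1/s0 locates the nibble
    fixed = list(data_nibbles)
    fixed[pos] ^= s0                     # s0 is the error pattern itself
    return fixed

data = nibbles(0xDEADBEEF)
p0, p1 = parity(data)
data[3] ^= 0b0101                        # corrupt one nibble
print(correct(data, p0, p1) == nibbles(0xDEADBEEF))  # True: nibble restored
```

    Errors in the check nibbles themselves are not handled by this toy version, whereas the patented device also corrects erroneous parity nibbles and self-checks its own comparator.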

  10. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, D.D.; Napolitano, L.M. Jr.; Redinbo, G.R.; Shreeve, W.O.

    1994-03-01

    An internally fault-tolerant data error detection and correction integrated circuit device and a method of operating same are described. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system, and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented. 8 figures.

  11. Tunnel field-effect transistor charge-trapping memory with steep subthreshold slope and large memory window

    NASA Astrophysics Data System (ADS)

    Kino, Hisashi; Fukushima, Takafumi; Tanaka, Tetsu

    2018-04-01

    Charge-trapping memory requires increased bit density per cell and a larger memory window for lower-power operation. A tunnel field-effect transistor (TFET) can increase the bit density per cell owing to its steep subthreshold slope. In addition, the TFET has an asymmetric structure, which is promising for achieving a larger memory window. A TFET with an N-type gate shows a higher electric field between the P-type source and the N-type gate edge than the conventional FET structure. This high electric field enables large amounts of charge to be injected into the charge storage layer. In this study, we fabricated silicon-oxide-nitride-oxide-semiconductor (SONOS) memory devices with the TFET structure and observed a steep subthreshold slope and a large memory window.

  12. Seismic Data Preparation Procedures

    DTIC Science & Technology

    1977-10-20

    H. Swindell, and D. Sun. ...followed by 4 bits of zero padding Bit 0 = 1 if instrument 1 is bad Bit 1 = 1 if instrument 2 is bad Bit 2 = 1 if instrument 3 is

  13. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    PubMed

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    Chaotic external-cavity semiconductor lasers (ECLs) are a promising entropy source for generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using physical broadband white chaos, generated by optical heterodyning of two ECLs, as the entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectrum efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
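
    The least-significant-bit extraction step is straightforward to sketch. Below, Gaussian noise stands in for the digitized white chaos (the real entropy source is the 80-GHz-sampled optical heterodyne signal), and an idealized 8-bit ADC quantizes each sample before the 4 LSBs are kept:

```python
import random

random.seed(1)
# Stand-in for the digitized chaotic waveform: here Gaussian noise replaces the
# 80-GHz-sampled optical-heterodyne white chaos of the experiment.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Idealized 8-bit ADC spanning +/-4 sigma of the symmetric amplitude distribution.
def adc8(v):
    code = int((v + 4.0) / 8.0 * 256)
    return min(max(code, 0), 255)

# Keep the 4 least significant bits of each code word: 4 bits per sample.
bits = []
for v in samples:
    lsb4 = adc8(v) & 0xF
    bits.extend((lsb4 >> k) & 1 for k in range(4))

ones = sum(bits) / len(bits)
print(len(bits), round(ones, 3))   # 400000 bits; fraction of ones near 0.5
```

    Discarding the upper bits removes the residual structure of the amplitude distribution, which is why the symmetric, signature-free heterodyne chaos needs no further post-processing.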

  14. Use of One Time Pad Algorithm for Bit Plane Security Improvement

    NASA Astrophysics Data System (ADS)

    Suhardi; Suwilo, Saib; Budhiarti Nababan, Erna

    2017-12-01

    BPCS (Bit-Plane Complexity Segmentation) is a steganography technique that exploits a characteristic of human vision: changes to noise-like binary patterns in an image are imperceptible. The technique inserts a message by replacing high-complexity, noise-like bit-plane regions with the bits of a secret message. Because the message bits are stored unchanged, extraction is easy: rearranging the characters stored in the noise-like regions of the image recovers the message, so the secret can become known to others. In this research, the process of replacing bit-plane regions with message bits is modified using the One Time Pad cryptographic technique, which aims to increase the security of the bit planes. In the tests performed, combining the One Time Pad algorithm with BPCS steganography works well for inserting messages into the vessel image, although insertion into low-dimensional images is poor. The original image and the stego-image look identical, and the result is a good-quality image with a mean PSNR above 30 dB when a large-dimensional image is used as the cover.
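
    The One Time Pad stage itself is a bitwise XOR of message bits with pad bits before they are embedded into the noise-like bit-plane regions; since XOR is an involution, applying the same pad again recovers the message. A minimal sketch (function names are illustrative, not from the paper):

```python
import secrets

def otp_encrypt(message_bits, pad_bits):
    """XOR each message bit with a one-time-pad bit before embedding."""
    assert len(pad_bits) >= len(message_bits), "pad must cover the message"
    return [m ^ p for m, p in zip(message_bits, pad_bits)]

# otp_encrypt is its own inverse: applying the same pad twice recovers the bits.
message = [1, 0, 1, 1, 0, 0, 1, 0]
pad = [secrets.randbits(1) for _ in message]
cipher = otp_encrypt(message, pad)
print(otp_encrypt(cipher, pad) == message)   # True
```

    An attacker who extracts the embedded bits from the stego-image now recovers only the XORed stream; without the pad, every plaintext of the same length is equally likely.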

  15. Soft Decision Analyzer

    NASA Technical Reports Server (NTRS)

    Steele, Glen; Lansdowne, Chatwin; Zucha, Joan; Schlensinger, Adam

    2013-01-01

    The Soft Decision Analyzer (SDA) is an instrument that combines hardware, firmware, and software to perform realtime closed-loop end-to-end statistical analysis of single- or dual-channel serial digital RF communications systems operating in very low signal-to-noise conditions. As an innovation, the unique SDA capabilities allow it to perform analysis of situations where the receiving communication system slips bits due to low signal-to-noise conditions or experiences constellation rotations resulting in channel polarity inversions or channel assignment swaps. The SDA's closed-loop detection allows it to instrument a live system and correlate observations with frame, codeword, and packet losses, as well as Quality of Service (QoS) and Quality of Experience (QoE) events. The SDA's abilities are not confined to performing analysis in low signal-to-noise conditions. Its analysis provides in-depth insight into a communication system's receiver performance in a variety of operating conditions. The SDA incorporates two techniques for identifying slips. The first is an examination of the content of the received data stream in relation to the transmitted data content, and the second is a direct examination of the receiver's recovered clock signals relative to a reference. Both techniques provide benefits in different ways and allow the communication engineer evaluating test results increased confidence and understanding of receiver performance. Direct examination of data contents is performed by two different techniques, power correlation or a modified Massey correlation, and can be applied to soft decision data widths 1 to 12 bits wide over a correlation depth ranging from 16 to 512 samples. The SDA detects receiver bit slips within a 4-bit window and can handle systems with up to four quadrants (QPSK, SQPSK, and BPSK systems).
The SDA continuously monitors correlation results to characterize slips and quadrant changes, and is capable of performing analysis even when the receiver under test is subjected to conditions where its performance degrades to high error rates (30 percent or beyond). The design incorporates a number of features, such as watchdog triggers, that permit the SDA system to recover from large receiver upsets automatically and continue accumulating performance analysis unaided by operator intervention. This accommodates tests that can last on the order of days to gain statistical confidence in results, and is also useful for capturing snapshots of rare events.
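
    The data-content slip-detection idea, correlating received bits against the transmitted reference over a small offset window, can be sketched as follows. This is a simplified hard-decision bit correlator, not the SDA's power or modified Massey correlation over soft-decision samples:

```python
import random

def best_alignment(reference, received, window=4):
    """Correlate the received bits against the reference at offsets within
    +/-window and return (offset, agreement) for the best match.
    A nonzero winning offset indicates the receiver slipped bits."""
    best = (0, -1.0)
    for offset in range(-window, window + 1):
        agree = total = 0
        for i, r in enumerate(received):
            j = i + offset
            if 0 <= j < len(reference):
                agree += (r == reference[j])
                total += 1
        score = agree / total
        if score > best[1]:
            best = (offset, score)
    return best

random.seed(7)
ref = [random.randint(0, 1) for _ in range(512)]
# Simulate a receiver that dropped one bit at the start of the stream.
rx = ref[1:]
offset, score = best_alignment(ref, rx)
print(offset, score)   # a slip of one bit is detected with full agreement
```

    In the instrument, this search runs continuously over a correlation depth of 16 to 512 samples, so a slip shows up as a sudden shift of the winning offset rather than a drop in agreement.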

  16. On the improvement of Wiener attack on RSA with small private exponent.

    PubMed

    Wu, Mu-En; Chen, Chien-Ming; Lin, Yue-Hsun; Sun, Hung-Min

    2014-01-01

    The RSA system is based on the hardness of the integer factorization problem (IFP). Given an RSA modulus N = pq, it is difficult to determine the prime factors p and q efficiently. One of the most famous short exponent attacks on RSA is the Wiener attack. In 1997, Verheul and van Tilborg used an exhaustive search to extend the boundary of the Wiener attack. Their result shows that the cost of the exhaustive search is 2r + 8 bits when extending Wiener's boundary by r bits. In this paper, we first reduce the cost of the exhaustive search from 2r + 8 bits to 2r + 2 bits. Then, we propose a method named EPF. With EPF, the cost of the exhaustive search is further reduced to 2r - 6 bits when we extend Wiener's boundary by r bits. This means that our result is 2^14 times faster than Verheul and van Tilborg's. Besides, the security boundary is extended by 7 bits.
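
    The classic Wiener attack that this paper extends recovers a small private exponent d from the convergents of the continued fraction of e/N. The sketch below implements only that baseline attack (not the EPF extension proposed here), using the well-known toy key N = 90581, e = 17993 from the literature, for which d = 5:

```python
from math import isqrt

def continued_fraction(num, den):
    """Continued-fraction expansion of num/den."""
    cf = []
    while den:
        q, r = divmod(num, den)
        cf.append(q)
        num, den = den, r
    return cf

def convergents(cf):
    """Yield successive convergents (numerator, denominator) of cf."""
    h0, h1 = 1, cf[0]
    k0, k1 = 0, 1
    yield h1, k1
    for a in cf[1:]:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        yield h1, k1

def wiener_attack(e, N):
    """Try each convergent k/d of e/N as the candidate ed = 1 + k*phi(N)."""
    for k, d in convergents(continued_fraction(e, N)):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                  # s = p + q if the guess is right
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d                     # p, q are the roots of x^2 - s*x + N
    return None

print(wiener_attack(17993, 90581))   # recovers d = 5
```

    The attack succeeds because for d < (1/3)N^(1/4), k/d is guaranteed to appear among the convergents of e/N; Verheul and van Tilborg's extension, and the EPF method above, push this boundary further at the cost of an exhaustive search.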

  17. The application of advanced PDC concepts proves effective in south Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlem, J.S.; Baxter, R.L.; Dunn, K.E.

    1996-12-01

    Over the years, a variety of problems with polycrystalline diamond compact (PDC) bit design and application have been documented, with bit whirl being identified as the cause of many inherent problems. The goal of most PDC manufacturers, and the subject of this paper, is development of a better-performing, whirl-resistant PDC bit design. Similarly, the goal for most operators is the lower cost resulting from effective application of such bits. Toward those ends, a cooperative development effort between operators and a manufacturer was undertaken to apply advanced concepts effectively to the design, manufacture, and application of a new series of PDC bits in south Texas. Adoption of design concepts, such as force-balanced cutting structures, asymmetric blade layouts, spiral blade designs, and tracking cutter arrangements, proved effective in countering the destructive effects of bit whirl, and allowed PDC bits to be used in harder formations. Summaries of both operational and economic performance confirm the success of the undertaking.

  18. The investigation of the influence of thermomechanical treatment of the material of rotary cutter bit toolholders on its hardness

    NASA Astrophysics Data System (ADS)

    Chupin, S. A.; Bolobov, V. I.

    2017-02-01

    The causes of failure of the tangential rotary cutter bits of the road header during stone-drift driving in rocks of medium strength are analyzed in the article. It was revealed that the most typical cause of failure is premature wear of the toolholder (body) of the cutter bit. It is well known that the most effective way to improve wear resistance is to increase hardness. The influence of the thermomechanical treatment of the cutter bit toolholder material on its hardness is therefore studied. It was established that thermomechanical treatment of the toolholder material increases its hardness, and that the increase in hardness is proportional to the increase in strain intensity during treatment. It was concluded that the use of thermomechanical treatment can increase both the hardness and the wear resistance of the cutter bit material.

  19. On the Improvement of Wiener Attack on RSA with Small Private Exponent

    PubMed Central

    Chen, Chien-Ming; Lin, Yue-Hsun

    2014-01-01

    The RSA system is based on the hardness of the integer factorization problem (IFP). Given an RSA modulus N = pq, it is difficult to determine the prime factors p and q efficiently. One of the most famous short exponent attacks on RSA is the Wiener attack. In 1997, Verheul and van Tilborg used an exhaustive search to extend the boundary of the Wiener attack. Their result shows that the cost of the exhaustive search is 2r + 8 bits when extending Wiener's boundary by r bits. In this paper, we first reduce the cost of the exhaustive search from 2r + 8 bits to 2r + 2 bits. Then, we propose a method named EPF. With EPF, the cost of the exhaustive search is further reduced to 2r − 6 bits when we extend Wiener's boundary by r bits. This means that our result is 2^14 times faster than Verheul and van Tilborg's. Besides, the security boundary is extended by 7 bits. PMID:24982974

  20. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

    The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate the initial conditions and parameters of the two chaotic systems, a 280-bit long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
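
    The permutation-diffusion structure can be illustrated with a toy one-dimensional version: a chaotic keystream first shuffles pixel positions, then XOR-diffuses pixel values so that a change in one plaintext pixel propagates forward. A simple logistic map stands in for the paper's CML and fractional-order system, and the 8-pixel "image" is purely illustrative:

```python
def logistic_stream(x0, r=3.99, n=1):
    """Logistic-map keystream; a toy stand-in for the paper's chaotic systems."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def encrypt(pixels, key_x0):
    n = len(pixels)
    chaos = logistic_stream(key_x0, n=2 * n)
    # Permutation stage: shuffle pixel positions by sorting chaotic values.
    perm = sorted(range(n), key=lambda i: chaos[i])
    shuffled = [pixels[p] for p in perm]
    # Diffusion stage: XOR each pixel with a keystream byte and the previous
    # cipher byte, so a single plaintext change spreads through the image.
    out, prev = [], 0
    for px, c in zip(shuffled, chaos[n:]):
        byte = px ^ int(c * 256) % 256 ^ prev
        out.append(byte)
        prev = byte
    return out, perm

pixels = [12, 200, 37, 37, 99, 250, 0, 64]
cipher, perm = encrypt(pixels, key_x0=0.3141592653)
print(cipher != pixels)   # True for this key
```

    In the paper the permutation additionally operates on four shuffled sub-images and the key is a 280-bit external secret expanded into the chaotic initial conditions; the sketch keeps only the two-stage skeleton.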

  1. Turbodrills and innovative PDC bits economically drilled hard formations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boudreaux, R.C.; Massey, K.

    1994-03-28

    The use of turbodrills and polycrystalline diamond compact (PDC) bits with an innovative, tracking cutting structure has improved drilling economics in medium and hard formations in the Gulf of Mexico. Field results have confirmed that turbodrilling with trackset PDC bits reduced drilling costs, compared to offset wells. The combination of turbodrills and trackset bits has been used successfully in a broad range of applications and with various drilling parameters. Formations ranging from medium shales to hard, abrasive sands have been successfully and economically drilled. The tools have been used in both water-based and oil-based muds. Additionally, the turbodrill and trackset PDC bit combination has been stable in directional drilling applications. The locking effect of the cutting structure helps keep the bit on course.

  2. Combined group ECC protection and subgroup parity protection

    DOEpatents

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
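
    A toy construction in the spirit of this abstract: rows of P are drawn from the m-bit vectors with an odd number (at least three) of ones, the m check bits are the GF(2) product of the data vector with P, and each column of P then carries the parity of the subgroup of data bits whose rows have a one in that column. The concrete row choices below are illustrative, not the patent's:

```python
from itertools import combinations

def build_P(n, m):
    """Rows of P are distinct m-bit vectors with an odd (>=3) number of ones,
    following the construction described in the abstract."""
    rows = []
    for weight in range(3, m + 1, 2):            # odd weights: 3, 5, ...
        for ones in combinations(range(m), weight):
            rows.append(sum(1 << b for b in ones))
            if len(rows) == n:
                return rows
    raise ValueError("m too small for n data bits")

def ecc_bits(data_bits, P):
    """Multiply the data vector by P over GF(2): XOR the rows selected by 1-bits."""
    acc = 0
    for bit, row in zip(data_bits, P):
        if bit:
            acc ^= row
    return acc

P = build_P(n=8, m=5)
data = [1, 0, 1, 1, 0, 1, 0, 0]
checks = ecc_bits(data, P)

# Column c of P protects the subgroup of data bits whose row has a 1 in c:
c = 0
subgroup = [i for i, row in enumerate(P) if row >> c & 1]
subgroup_parity = sum(data[i] for i in subgroup) % 2
print((checks >> c) & 1 == subgroup_parity)   # True: check bit c is that parity
```

    The odd row weights guarantee that every check bit covers an odd-sized mix of data bits, which is what lets designated columns double as subgroup parity while the full set of m bits serves as the ECC.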

  3. The Effects of Spatial Diversity and Imperfect Channel Estimation on Wideband MC-DS-CDMA and MC-CDMA

    DTIC Science & Technology

    2009-10-01

    In our previous work, we compared the theoretical bit error rates of multi-carrier direct sequence code division multiple access (MC-DS-CDMA) and...consider only those cases where MC-CDMA has higher frequency diversity than MC-DS-CDMA. Since increases in diversity yield diminishing gains, we conclude

  4. ALA 2010 Midwinter Meeting: The Price to Participate

    ERIC Educational Resources Information Center

    Berry, John N., III

    2009-01-01

    While the library economy continues its downward slide, the cost of attending the American Library Association (ALA) Midwinter Meeting seems as high as ever. That is the price of professional participation. These days it seems a bit too high and tends to limit involvement in the old association to librarians in the higher echelons of the field.…

  5. Communication system analysis for manned space flight

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1977-01-01

    One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
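As a rough illustration of the one-bit-per-sample principle behind such coders, here is a generic textbook adaptive delta modulator with Jayant-style step adaptation (step grows on repeated bits, shrinks on alternation). This is a hedged sketch of the general ADM idea, not the specific one- or two-dimensional algorithms of the report:

```python
def adm_encode(samples, step=1.0, k=1.5):
    # One bit per sample: 1 if the input is above the running estimate.
    bits, est, prev = [], 0.0, None
    for x in samples:
        b = 1 if x >= est else 0
        if prev is not None:
            step = step * k if b == prev else step / k   # adapt step size
        est += step if b else -step
        bits.append(b)
        prev = b
    return bits

def adm_decode(bits, step=1.0, k=1.5):
    # The decoder replays the same estimate recursion from the bit stream.
    out, est, prev = [], 0.0, None
    for b in bits:
        if prev is not None:
            step = step * k if b == prev else step / k
        est += step if b else -step
        out.append(est)
        prev = b
    return out
```

On a slow ramp the decoded estimate tracks the input; on a constant input it oscillates around the value, which is the characteristic granular noise of delta modulation.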

  6. Percussive Augmenter of Rotary Drills for Operating as a Rotary-Hammer Drill

    NASA Technical Reports Server (NTRS)

    Aldrich, Jack Barron (Inventor); Bar-Cohen, Yoseph (Inventor); Sherrit, Stewart (Inventor); Badescu, Mircea (Inventor); Bao, Xiaoqi (Inventor); Scott, James Samson (Inventor)

    2014-01-01

    A percussive augmenter bit includes a connection shaft for mounting the bit onto a rotary drill. In a first modality, an actuator percussively drives the bit, and an electric slip-ring provides power to the actuator while being rotated by the drill. Hammering action from the actuator and rotation from the drill are applied directly to material being drilled. In a second modality, a percussive augmenter includes an actuator that operates as a hammering mechanism that drives a free mass into the bit creating stress pulses that fracture material that is in contact with the bit.

  7. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    NASA Astrophysics Data System (ADS)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high resolution instruments for remote sensing missions will lead to higher data-rates because of the increase in resolution and dynamic range. For example, the ground resolution improvement has induced a data-rate multiplied by 8 from SPOT4 to SPOT5 [1] and by 28 to PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene, in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows important compression gains by detecting and then differently compressing the regions-of-interest (ROI) and of non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most CNES high resolution images are cloudy [1], significant mass-memory and transmission gains could be reached by simply detecting and suppressing (or heavily compressing) the areas covered by clouds. Since 2007, CNES has worked on a cloud detection module [3], a simplification for on-board implementation of an already existing module used on-ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation study: reflectance computation, characteristics vector computation (based on multispectral criteria) and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS tools [5] is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high-level description language such as C or Matlab/Simulink [6].

  8. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC coded modulation with the same code rate.
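For reference, an FEC overhead OH corresponds to a code rate of 1/(1 + OH), so the 25%-42.9% shortening range quoted above spans code rates from 0.8 down to roughly 0.7. A one-line check:

```python
def code_rate(overhead_percent):
    # Code rate implied by FEC overhead: rate = k/n = 1 / (1 + OH).
    return 1.0 / (1.0 + overhead_percent / 100.0)
```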

  9. Zn-BTC MOFs with active metal sites synthesized via a structure-directing approach for highly efficient carbon conversion.

    PubMed

    Huang, Xianqiang; Chen, Yifa; Lin, Zhengguo; Ren, Xiaoqian; Song, Yuna; Xu, Zhenzhu; Dong, Xinmei; Li, Xingguo; Hu, Changwen; Wang, Bo

    2014-03-11

    Three zinc-trimesic acid (Zn-BTC) MOFs, BIT-101, BIT-102 and BIT-103, have been synthesized via a structure-directing strategy. Interestingly, BIT-102 and -103 exhibit extraordinary catalytic performance (up to Conv. 100% and Sele. 95.2%) in the cycloaddition of CO2 under solvent- and halogen-free conditions without any additives or co-catalysts.

  10. Field-Deployable Video Cloud Solution

    DTIC Science & Technology

    2016-03-01

    ...Shipboard Server or Video Cloud System...4G LTE and Wi-Fi...local area network; LED, light emitting diode; Li-ion, lithium ion; LTE, long term evolution; Mbps, mega-bits per second; MBps, mega-bytes per second...restrictions on distribution. File size is dependent on both bit rate and content length. Bit rate is a value measured in bits per second (bps) and is
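The bit-rate relationship in the surviving fragment is linear: size in bytes equals bit rate times duration divided by 8. For instance, ten seconds of video at 8 Mbps occupies 10 MB:

```python
def file_size_bytes(bit_rate_bps, duration_s):
    # Total bits = rate x time; divide by 8 to convert to bytes.
    return bit_rate_bps * duration_s / 8.0
```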

  11. Real-time positioning technology in horizontal directional drilling based on magnetic gradient tensor measurement

    NASA Astrophysics Data System (ADS)

    Deng, Guoqing; Yao, Aiguo

    2017-04-01

    Horizontal directional drilling (HDD) technology has been widely used in civil engineering. The dynamic position of the drill bit during construction is one of the significant factors determining the accuracy of the HDD trajectory. A new method is proposed for detecting the position of the drill bit by measuring the magnetic gradient tensor of a solenoid magnetic beacon on the ground. Compared with traditional HDD positioning technologies, this new model is much easier to apply, with lower requirements on construction sites and higher positioning efficiency. A direct-current (DC) solenoid acting as a magnetic dipole is placed on the ground near the drill bit, and a sensor array containing four micro-electromechanical systems (MEMS) tri-axial magnetometers, one MEMS tri-axial accelerometer and one MEMS tri-axial gyroscope is set up to measure the magnetic gradient tensor of the magnetic dipole. The corresponding HDD positioning model has been established, and simulation experiments have been carried out to verify the feasibility and reliability of the proposed method. The experiments show that this method has good positioning accuracy in the horizontal and vertical directions and completely avoids the impact of the environmental magnetic field. It is found that the posture of the magnetic beacon affects the remote positioning precision within the valid positioning range, and that positioning accuracy is higher with a longer baseline, which is limited by the space available in drilling tools. The results prove that the relative error can be limited to 2% by adjusting the position of the magnetic beacon, the layers of the enameled coil, the sensitivity of the magnetometers and the baseline distance. It can be concluded that this new method can be applied to HDD positioning with better effect and a wider application range than traditional methods.
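The quantity the sensor array measures is the gradient tensor of a point-dipole field, B = (mu0/4pi)(3 r_hat (m . r_hat) - m)/|r|^3. A minimal numerical sketch, using central finite differences in place of the sensor-array estimate (the actual positioning model inverts this forward relationship, which is not reproduced here):

```python
MU0_4PI = 1e-7  # mu0 / (4 pi) in SI units

def dipole_field(m, r):
    # Flux density of a point dipole m (A*m^2) at position r (m).
    rr = sum(c * c for c in r) ** 0.5
    rhat = [c / rr for c in r]
    mdot = sum(mi * ri for mi, ri in zip(m, rhat))
    return [MU0_4PI * (3.0 * rhat[i] * mdot - m[i]) / rr ** 3 for i in range(3)]

def gradient_tensor(m, r, h=1e-4):
    # 3x3 magnetic gradient tensor G[i][j] = dB_i/dx_j by central differences.
    cols = []
    for j in range(3):
        rp = list(r); rp[j] += h
        rm = list(r); rm[j] -= h
        bp, bm = dipole_field(m, rp), dipole_field(m, rm)
        cols.append([(bp[i] - bm[i]) / (2 * h) for i in range(3)])
    return [[cols[j][i] for j in range(3)] for i in range(3)]
```

Two physical sanity checks fall out directly: the tensor is traceless (div B = 0) and symmetric away from the source (curl B = 0), which is why five independent components suffice.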

  12. Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.

    2017-08-01

    The single-event upset (SEU) vulnerability of common first- and second-order all-digital-phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest order pole of the loop filter and fraction-based phase detectors (PDs) may result in the worst case error response, i.e., limit cycle errors, often requiring system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs only result in temporary-frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.

  13. PDC Bit Testing at Sandia Reveals Influence of Chatter in Hard-Rock Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RAYMOND,DAVID W.

    1999-10-14

    Polycrystalline diamond compact (PDC) bits have yet to be routinely applied to drilling the hard-rock formations characteristic of geothermal reservoirs. Most geothermal production wells are currently drilled with tungsten-carbide-insert roller-cone bits. PDC bits have significantly improved penetration rates and bit life beyond roller-cone bits in the oil and gas industry, where soft to medium-hard rock types are encountered. If PDC bits could be used to double current penetration rates in hard rock, geothermal well-drilling costs could be reduced by 15 percent or more. PDC bits exhibit reasonable life in hard-rock wear testing using the relatively rigid setups typical of laboratory testing. Unfortunately, field experience indicates otherwise. The prevailing mode of failure encountered by PDC bits returning from hard-rock formations in the field is catastrophic, presumably due to impact loading. These failures usually occur in advance of any appreciable wear that might dictate cutter replacement. Self-induced bit vibration, or "chatter", is one of the mechanisms that may be responsible for impact damage to PDC cutters in hard-rock drilling. Chatter is more severe in hard-rock formations since they induce significant dynamic loading on the cutter elements. Chatter is a phenomenon whereby the drillstring becomes dynamically unstable and excessive sustained vibrations occur. Unlike forced vibration, the force (i.e., weight on bit) that drives self-induced vibration is coupled with the response it produces. Many of the chatter principles derived in the machine tool industry are applicable to drilling. It is a simple matter to make changes to a machine tool to study the chatter phenomenon. This is not the case with drilling. Chatter occurs in field drilling due to the flexibility of the drillstring. Hence, laboratory setups must be made compliant to observe chatter.

  14. Mars Rover Step Toward Possible Resumption of Drilling

    NASA Image and Video Library

    2017-10-23

    NASA's Curiosity Mars rover conducted a test on Oct. 17, 2017, as part of the rover team's development of a new way to use the rover's drill. This image from Curiosity's front Hazard Avoidance Camera (Hazcam) shows the drill's bit touching the ground during an assessment of measurements by a sensor on the rover's robotic arm. Curiosity used its drill to acquire sample material from Martian rocks 15 times from 2013 to 2016. In December 2016, the drill's feed mechanism stopped working reliably. During the test shown in this image, the rover touched the drill bit to the ground for the first time in 10 months. The image has been adjusted to brighten shaded areas so that the bit is more evident. The date was the 1,848th Martian day, or sol, of Curiosity's work on Mars. In drill use prior to December 2016, two contact posts -- the stabilizers on either side of the bit -- were placed on the target rock while the bit was in a withdrawn position. Then the motorized feed mechanism within the drill extended the bit forward, and the bit's rotation and percussion actions penetrated the rock. A promising alternative now under development and testing -- called feed-extended drilling -- uses motion of the robotic arm to directly advance the extended bit into a rock. In this image, the bit is touching the ground but the stabilizers are not. In the Sol 1848 activity, Curiosity pressed the drill bit downward, and then applied smaller sideways forces while taking measurements with a force/torque sensor on the arm. The objective was to gain understanding about how readings from the sensor can be used during drilling to adjust for any sideways pressure that might risk the bit becoming stuck in a rock. While rover-team engineers are working on an alternative drilling method, the mission continues to examine sites on Mount Sharp, Mars, with other tools. https://photojournal.jpl.nasa.gov/catalog/PIA22063

  15. Characterization of rotary-percussion drilling as a seismic-while-drilling source

    NASA Astrophysics Data System (ADS)

    Xiao, Yingjian; Hurich, Charles; Butt, Stephen D.

    2018-04-01

    This paper focuses on an evaluation of rotary-percussion drilling (RPD) as a seismic source. Two field experiments were conducted to characterize seismic sources in rocks of different strengths, i.e. weak shale and hard arkose. Characterization of RPD sources consists of spectral analysis and mean power measurements, along with field measurements of the source radiation patterns. Spectral analysis shows that an increase in rock strength increases the peak frequency and widens the bandwidth, which makes harder rock more viable for seismic-while-drilling purposes. Mean power analysis infers a higher magnitude of body waves in RPD than in conventional drilling. Within the horizontal plane, the observed P-wave energy radiation pattern partially confirms the theoretical radiation pattern under a single vertical bit vibration. However, a horizontal lobe of energy is observed close to orthogonal to the axial bit vibration. From analysis, this lobe is attributed to lateral bit vibration, which is not documented elsewhere for RPD. Within the horizontal plane, the observed radiation pattern of P-waves is generally consistent with a spherically-symmetric distribution of energy. In addition, polarization analysis is conducted on P-waves recorded at surface geophones to understand the particle motions. P-wave particle motions are predominantly in the vertical direction, showing the interference of the free surface.

  16. The Harvard Beat Assessment Test (H-BAT): a battery for assessing beat perception and production and their dissociation.

    PubMed

    Fujii, Shinya; Schlaug, Gottfried

    2013-01-01

    Humans have the abilities to perceive, produce, and synchronize with a musical beat, yet there are widespread individual differences. To investigate these abilities and to determine if a dissociation between beat perception and production exists, we developed the Harvard Beat Assessment Test (H-BAT), a new battery that assesses beat perception and production abilities. H-BAT consists of four subtests: (1) music tapping test (MTT), (2) beat saliency test (BST), (3) beat interval test (BIT), and (4) beat finding and interval test (BFIT). MTT measures the degree of tapping synchronization with the beat of music, whereas BST, BIT, and BFIT measure perception and production thresholds via psychophysical adaptive stair-case methods. We administered the H-BAT on thirty individuals and investigated the performance distribution across these individuals in each subtest. There was a wide distribution in individual abilities to tap in synchrony with the beat of music during the MTT. The degree of synchronization consistency was negatively correlated with thresholds in the BST, BIT, and BFIT: a lower degree of synchronization was associated with higher perception and production thresholds. H-BAT can be a useful tool in determining an individual's ability to perceive and produce a beat within a single session.

  17. The Harvard Beat Assessment Test (H-BAT): a battery for assessing beat perception and production and their dissociation

    PubMed Central

    Fujii, Shinya; Schlaug, Gottfried

    2013-01-01

    Humans have the abilities to perceive, produce, and synchronize with a musical beat, yet there are widespread individual differences. To investigate these abilities and to determine if a dissociation between beat perception and production exists, we developed the Harvard Beat Assessment Test (H-BAT), a new battery that assesses beat perception and production abilities. H-BAT consists of four subtests: (1) music tapping test (MTT), (2) beat saliency test (BST), (3) beat interval test (BIT), and (4) beat finding and interval test (BFIT). MTT measures the degree of tapping synchronization with the beat of music, whereas BST, BIT, and BFIT measure perception and production thresholds via psychophysical adaptive stair-case methods. We administered the H-BAT on thirty individuals and investigated the performance distribution across these individuals in each subtest. There was a wide distribution in individual abilities to tap in synchrony with the beat of music during the MTT. The degree of synchronization consistency was negatively correlated with thresholds in the BST, BIT, and BFIT: a lower degree of synchronization was associated with higher perception and production thresholds. H-BAT can be a useful tool in determining an individual's ability to perceive and produce a beat within a single session. PMID:24324421

  18. Downregulation of Bit1 expression promotes growth, anoikis resistance, and transformation of immortalized human bronchial epithelial cells via Erk activation-dependent suppression of E-cadherin.

    PubMed

    Yao, Xin; Gray, Selena; Pham, Tri; Delgardo, Mychael; Nguyen, An; Do, Stephen; Ireland, Shubha Kale; Chen, Renwei; Abdel-Mageed, Asim B; Biliran, Hector

    2018-01-01

    The mitochondrial Bit1 protein exerts a tumor-suppressive function in NSCLC through induction of anoikis and inhibition of EMT. Owing to this dual tumor-suppressive effect, its downregulation in the established human lung adenocarcinoma A549 cell line resulted in potentiation of tumorigenicity and metastasis in vivo. However, the exact role of Bit1 in regulating malignant growth and transformation of human lung epithelial cells, which are the origin of most forms of human lung cancer, has not been examined. To this end, we downregulated the endogenous Bit1 expression in the immortalized non-tumorigenic human bronchial epithelial BEAS-2B cells. Knockdown of Bit1 enhanced the growth and anoikis insensitivity of BEAS-2B cells. In line with their acquired anoikis resistance, the Bit1 knockdown BEAS-2B cells exhibited enhanced anchorage-independent growth in vitro but failed to form tumors in vivo. The loss-of-Bit1-induced transformed phenotypes were in part attributable to the repression of E-cadherin expression, since forced exogenous E-cadherin expression attenuated the malignant phenotypes of the Bit1 knockdown cells. Importantly, we show that the loss of Bit1 expression in BEAS-2B cells resulted in increased Erk activation, which functions upstream to promote TLE1-mediated transcriptional repression of E-cadherin. These collective findings indicate that loss of Bit1 expression contributes to the acquisition of a malignant phenotype in human lung epithelial cells via Erk activation-induced suppression of E-cadherin expression. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks

    NASA Astrophysics Data System (ADS)

    Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.

    2007-05-01

    This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. In order to meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, prior to the MPEG-4 coding stage, a pre-processing stage is performed. It consists, first, of a segmentation phase according to motion and texture, based on Principal Component Analysis (PCA) of the input video sequence, and second, of a down-sampling phase which depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user-data side-information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher image visual quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated on a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a Programmable Logic Device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area and low-power implementation.

  20. A rigorous analysis of digital pre-emphasis and DAC resolution for interleaved DAC Nyquist-WDM signal generation in high-speed coherent optical transmission systems

    NASA Astrophysics Data System (ADS)

    Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi

    2018-02-01

    The Nyquist spectral shaping techniques facilitate a promising solution to enhance spectral efficiency (SE) and further reduce the cost-per-bit in high-speed wavelength-division multiplexing (WDM) transmission systems. Hypothetically, Nyquist WDM signals with arbitrary shapes can be generated by the use of digital signal processing (DSP) based electrical filters (E-filters). Nonetheless, in actual 100G/200G coherent systems, the performance as well as the DSP complexity are increasingly restricted by cost and power consumption. Hence it is indispensable to optimize the DSP to accomplish the preferred performance at the least complexity. In this paper, we systematically investigated the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats, including 16 quadrature amplitude modulation (QAM) and 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, have been evaluated to optimize the requirements on the digital-to-analog converter (DAC) resolution and the transmitter E-filter bandwidth. The impact of spectral pre-emphasis has been enhanced via the proposed interleaved DAC architecture by at least 4%, hence reducing the required optical signal-to-noise ratio (OSNR) at a bit error rate (BER) of 10^-3 by over 0.45 dB at a channel spacing of 1.05 times the symbol rate and an optimized roll-off factor of 0.1. Furthermore, the requirements on the sampling rate for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of the non-50% duty-cycle error between sub-DACs upon the quality of the generated signals for the interleaved DAC structure has been analyzed.
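The roll-off factor discussed above enters through the raised-cosine spectrum that underlies Nyquist pulse shaping. A sketch of that standard spectrum follows; the 32 GBd symbol rate in the check is an arbitrary illustrative value, not a figure from the paper:

```python
import math

def raised_cosine(f, rs, beta):
    # Raised-cosine spectrum H(f) for symbol rate rs and roll-off beta:
    # flat to (1-beta)*rs/2, cosine taper to (1+beta)*rs/2, zero beyond.
    half = rs / 2.0
    if abs(f) <= (1 - beta) * half:
        return 1.0
    if abs(f) <= (1 + beta) * half:
        return 0.5 * (1 + math.cos(math.pi / (beta * rs) * (abs(f) - (1 - beta) * half)))
    return 0.0
```

A smaller beta narrows the occupied band toward the Nyquist limit rs, which is what permits the tight 1.05-symbol-rate channel spacing at beta = 0.1.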

  1. Morphology of powders of tungsten carbide used in wear-resistant coatings and deposition on the PDC drill bits

    NASA Astrophysics Data System (ADS)

    Zakharova, E. S.; Markova, I. Yu; Maslov, A. L.; Polushin, N. I.; Laptev, A. I.

    2017-05-01

    Modern drill bits suffer high abrasive wear in the area of contact with the rock and the removed sludge. Currently, these bits carry a protective layer on the bit body consisting of a metal matrix with inclusions of carbide particles. Research into the matrix of this coating and the wear-resistant particles is a prerequisite in the design and production of drill bits. In this work, a complex investigation was made of various carbide powders of the Relit grades (tungsten carbide produced by Ltd "ROSNAMIS"), which are used as wear-resistant particles in the coating of the drill bit body. The morphology and phase composition of the chosen powders, as well as the influence of particle shape on the prospects of their application in wear-resistant coatings, are presented in this work.

  2. Combined group ECC protection and subgroup parity protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Cheng, Dong; Heidelberger, Philip

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.

  3. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, from CSI-based modified JPEG and standard JPEG, under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  4. Reconfigurable data path processor

    NASA Technical Reports Server (NTRS)

    Donohoe, Gregory (Inventor)

    2005-01-01

    A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously having an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processor is also capable of passing an input through as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic bits is generated according to the processing of the first processable value. A second set of arithmetic bits may be generated from a second processing operation. The selection of the arithmetic status bits is performed by an arithmetic-status-bit multiplexer, which selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating them.

  5. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP).

    PubMed

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimuli encoding method which encodes the SSVEP signal using a Frequency-Shift Keying (FSK) approach is developed. In this method, each stimulus is controlled by an FSK signal containing three different frequencies that represent "Bit 0," "Bit 1" and "Bit 2" respectively. Unlike common BFSK in digital communication, "Bit 0" and "Bit 1" compose the unique identifier of a stimulus in binary bit-stream form, while "Bit 2" indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4, using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are acquired by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n - 1 stimuli (where n is the length of the bit command) while keeping the ITR the same. This method is suitable for implementing stimuli on a monitor, for situations where the frequencies and phases available to code stimuli are limited, and for portable BCI devices that are not capable of performing complex calculations.
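The trinary framing can be sketched as follows. The MSB-first mapping of indices to bit patterns is an assumption for illustration; the abstract only specifies that "Bit 0"/"Bit 1" form a binary identifier and that "Bit 2" terminates each code:

```python
def encode_stimulus(index, n):
    # Frame one stimulus identifier: n binary symbols (MSB first)
    # terminated by symbol 2, the end-of-code marker.
    if not 1 <= index <= 2 ** n - 1:
        raise ValueError("index out of range for an n-symbol code")
    bits = [(index >> i) & 1 for i in range(n - 1, -1, -1)]
    return bits + [2]

def decode_stream(stream):
    # Split a concatenated symbol stream on the '2' markers and
    # interpret each frame as a binary identifier.
    out, buf = [], []
    for s in stream:
        if s == 2:
            out.append(int("".join(map(str, buf)), 2))
            buf = []
        else:
            buf.append(s)
    return out
```

With n binary symbols per frame this yields the 2^n - 1 distinct usable identifiers the abstract counts (excluding one reserved pattern), all distinguishable with only three stimulation frequencies.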

  6. Optical Guidance for a Robotic Submarine

    NASA Astrophysics Data System (ADS)

    Schulze, Karl R.; LaFlash, Chris

    2002-11-01

    There is a need for autonomous submarines that can quickly and safely complete jobs, such as the recovery of a downed aircraft's black box recorder. In order to complete this feat, it is necessary to use an optical processing algorithm that distinguishes a desired target and uses the feedback from the algorithm to retrieve the target. The algorithm itself uses many bit mask filters for particle information, and then uses a unique rectation method in order to resolve complete objects. The algorithm has been extensively tested on an AUV platform, and proven to succeed repeatedly in approximately five or more feet of water clarity.

  7. Image storage in coumarin-based copolymer thin films by photoinduced dimerization.

    PubMed

    Gindre, Denis; Iliopoulos, Konstantinos; Krupka, Oksana; Champigny, Emilie; Morille, Yohann; Sallé, Marc

    2013-11-15

    We report a technique to encode grayscale digital images in thin films composed of copolymers containing coumarins. A nonlinear microscopy setup was implemented and two nonlinear optical processes were used to store and read information. A third-order process (two-photon absorption) was used to photoinduce a controlled dimer-to-monomer ratio within a defined tiny volume in the material, which corresponds to each recorded bit of data. Moreover, a second-order process (second-harmonic generation) was used to read the stored information, which has been found to be highly dependent upon the monomer-to-dimer ratio.

  8. Effect of combined digital imaging parameters on endodontic file measurements.

    PubMed

    de Oliveira, Matheus Lima; Pinto, Geraldo Camilo de Souza; Ambrosano, Glaucia Maria Bovi; Tosoni, Guilherme Monteiro

    2012-10-01

    This study assessed the effect of the combination of a dedicated endodontic filter, spatial resolution, and contrast resolution on the determination of endodontic file lengths. Forty extracted single-rooted teeth were x-rayed with K-files (ISO size 10 and 15) in the root canals. Images were acquired using the VistaScan system (Dürr Dental, Beitigheim-Bissingen, Germany) under different combining parameters of spatial resolution (10 and 25 line pairs per millimeter [lp/mm]) and contrast resolution (8- and 16-bit depths). Subsequently, a dedicated endodontic filter was applied on the 16-bit images, creating 2 additional parameters. Six observers measured the length of the endodontic files in the root canals using the software that accompanies the system. The mean values of the actual file lengths and the measurements of the radiographic images were submitted to 1-way analysis of variance and the Tukey test at a level of significance of 5%. The intraobserver reproducibility was assessed by the intraclass correlation coefficient. All combined image parameters showed excellent intraobserver agreement with intraclass correlation coefficient means higher than 0.98. The imaging parameter of 25 lp/mm and 16 bit associated with the use of the endodontic filter did not differ significantly from the actual file lengths when both file sizes were analyzed together or separately (P > .05). When the size 15 file was evaluated separately, only 8-bit images differed significantly from the actual file lengths (P ≤ .05). The combination of an endodontic filter with high spatial resolution and high contrast resolution is recommended for the determination of file lengths when using storage phosphor plates. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  9. Design of a Multi-Level/Analog Ferroelectric Memory Device

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Phillips, Thomas A.; Ho, Fat D.

    2006-01-01

    Increasing the memory density and utilizing the novel characteristics of ferroelectric devices is important in making ferroelectric memory devices more desirable to the consumer. This paper describes a design that allows multiple levels to be stored in a ferroelectric-based memory cell. It can be used to store multiple bits or analog values in a high speed nonvolatile memory. The design utilizes the hysteresis characteristic of ferroelectric transistors to store an analog value in the memory cell. The design also compensates for the decay of the polarization of the ferroelectric material over time. This is done by utilizing a pair of ferroelectric transistors to store the data. One transistor is used as a reference to determine the amount of decay that has occurred since the pair was programmed. The second transistor stores the analog value as a polarization value between zero and saturated. The design allows digital data to be stored as multiple bits in each memory cell. The number of bits per cell that can be stored will vary with the decay rate of the ferroelectric transistors and the repeatability of polarization between transistors. It is predicted that each memory cell may be able to store 8 bits or more. The design is based on data taken from actual ferroelectric transistors. Although the circuit has not been fabricated, a prototype circuit is now under construction. The design of this circuit is different from multi-level FLASH or silicon transistor circuits. The differences between these types of circuits are described in this paper. This memory design will be useful because it allows higher memory density, compensates for the environmental and ferroelectric aging processes, allows analog values to be directly stored in memory, compensates for the thermal and radiation environments associated with space operations, and relies only on existing technologies.
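
    The read-side decay compensation described above can be sketched numerically (an illustrative Python sketch with hypothetical polarization values; the record describes a hardware circuit, and this function and its parameters are inventions for the example): the reference transistor was programmed to saturation, so its remaining level estimates the decay factor shared by the pair, which is divided out before quantizing the stored level back to bits.

```python
def read_cell(stored_level, reference_level, n_bits=8):
    """Decode one two-transistor cell (hypothetical model): the reference
    transistor was programmed to full (1.0) polarization, so its current
    level gives the decay factor both transistors have experienced."""
    decay = reference_level / 1.0        # fraction of polarization remaining
    corrected = stored_level / decay     # undo the shared decay
    return round(corrected * (2 ** n_bits - 1))  # quantize to n_bits levels
```

    For example, a cell programmed to 20% of saturation that has since decayed by 20% (stored 0.16, reference 0.8) still reads back as level 51 of 255, the same code it was written with.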

  10. Development and testing of a Mudjet-augmented PDC bit.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Alan; Chahine, Georges; Raymond, David Wayne

    2006-01-01

    This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.

  11. Hydromechanical drilling device

    DOEpatents

    Summers, David A.

    1978-01-01

    A hydromechanical drilling tool which combines a high pressure water jet drill with a conventional roller cone type of drilling bit. The high pressure jet serves as a tap drill for cutting a relatively small diameter hole in advance of the conventional bit. Auxiliary laterally projecting jets also serve to partially cut rock and to remove debris from in front of the bit teeth thereby reducing significantly the thrust loading for driving the bit.

  12. IRIG Serial Time Code Formats

    DTIC Science & Technology

    2016-08-01

    codes contain control functions (CFs) that are reserved for encoding various controls, identification, and other special-purpose functions. Time...set of CF bits for the encoding of various control, identification, and other special-purpose functions. The control bits may be programmed in any... recycles yearly. • There are 18 CFs between position identifiers P6 and P8. Any CF bit or combination of bits can be programmed to read a

  13. Archival Services and Technologies for Scientific Data

    NASA Astrophysics Data System (ADS)

    Meyer, Jörg; Hardt, Marcus; Streit, Achim; van Wezel, Jos

    2014-06-01

    After analysis and publication, there is no need to keep experimental data online on spinning disks. For reliability and cost reasons, inactive data is moved to tape and put into a data archive. The data archive must provide reliable access for at least ten years, following a recommendation of the German Science Foundation (DFG), but many scientific communities wish to keep data available much longer. Data archival is on the one hand purely a bit-preservation activity, ensuring that the bits read are the same as those written years before. On the other hand, enough information must be archived to be able to use and interpret the content of the data. The latter depends on many, partly community-specific, factors and remains an area of much debate among archival specialists. The paper describes the current practice of archival and bit preservation in use for different science communities at KIT, for which a combination of organizational services and technical tools is required. The special monitoring to detect tape-related errors, the software infrastructure in use, as well as the service certification are discussed. Plans and developments at KIT, also in the context of the Large Scale Data Management and Analysis (LSDMA) project, are presented. The technical advantages of the T10 SCSI Stream Commands (SSC-4) and the Linear Tape File System (LTFS) will have a profound impact on future long term archival of large data sets.
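
    Bit preservation in the sense used above is typically enforced with fixity checks; a minimal sketch (generic Python, not KIT's actual tooling): record a cryptographic digest at write time and recompute it after each read from tape, so any changed bit is detected.

```python
import hashlib

def fingerprint(path, chunk=1 << 20):
    """Fixity check: a digest stored at archive time lets us verify that
    the bits read back equal the bits written years before."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)   # stream in 1 MiB chunks
            if not block:
                break
            h.update(block)
    return h.hexdigest()
```

    A mismatch between the stored and the recomputed digest flags silent corruption on the medium.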

  14. Experimental demonstration of the optical multi-mesh hypercube: scaleable interconnection network for multiprocessors and multicomputers.

    PubMed

    Louri, A; Furlonge, S; Neocleous, C

    1996-12-10

    A prototype of a novel topology for scaleable optical interconnection networks called the optical multi-mesh hypercube (OMMH) is experimentally demonstrated to as high as a 150-Mbit/s data rate (2^7 - 1 nonreturn-to-zero pseudo-random data pattern) at a bit error rate of 10^-13/link by the use of commercially available devices. OMMH is a scaleable network [Appl. Opt. 33, 7558 (1994); J. Lightwave Technol. 12, 704 (1994)] architecture that combines the positive features of the hypercube (small diameter, connectivity, symmetry, simple routing, and fault tolerance) and the mesh (constant node degree and size scaleability). The optical implementation method is divided into two levels: high-density local connections for the hypercube modules, and high-bit-rate, low-density, long connections for the mesh links connecting the hypercube modules. Free-space imaging systems utilizing vertical-cavity surface-emitting laser (VCSEL) arrays, lenslet arrays, space-invariant holographic techniques, and photodiode arrays are demonstrated for the local connections. Optobus fiber interconnects from Motorola are used for the long-distance connections. The OMMH was optimized to operate at the data rate of Motorola's Optobus (10-bit-wide, VCSEL-based bidirectional data interconnects at 150 Mbits/s). Difficulties encountered included the varying fan-out efficiencies of the different orders of the hologram, misalignment sensitivity of the free-space links, low power (1 mW) of the individual VCSEL's, and noise.

  15. RF-MEMS for future mobile applications: experimental verification of a reconfigurable 8-bit power attenuator up to 110 GHz

    NASA Astrophysics Data System (ADS)

    Iannacci, J.; Tschoban, C.

    2017-04-01

    RF-MEMS technology is proposed as a key enabling solution for realising the high-performance and highly reconfigurable passive components that future communication standards will demand. In this work, we present, test and discuss a novel design concept for an 8-bit reconfigurable power attenuator, manufactured using the RF-MEMS technology available at the CMM-FBK, in Italy. The device features electrostatically controlled MEMS ohmic switches in order to select/deselect the resistive loads (both in series and shunt configuration) that attenuate the RF signal, and comprises eight cascaded stages (i.e. 8-bit), thus implementing 256 different network configurations. The fabricated samples are measured (S-parameters) from 10 MHz to 110 GHz in a wide range of different configurations, and modelled/simulated with Ansys HFSS. The device exhibits attenuation levels (S21) in the range from -10 dB to -60 dB, up to 110 GHz. In particular, S21 shows a flatness from 15 dB down to 3-5 dB over the range from 10 MHz to 50 GHz, as well as less linear traces up to 110 GHz. A comprehensive discussion is developed regarding the voltage standing wave ratio, which is employed as a quality indicator for the attenuation levels. The margins of improvement at design level which are needed to overcome the limitations of the presented RF-MEMS device are also discussed.

  16. Designing ecological flows to gravelly braided rivers in alpine environments

    NASA Astrophysics Data System (ADS)

    Egozi, R.; Ashmore, P.

    2009-04-01

    Designing ecological flows in gravelly braided streams requires estimating the channel forming discharge in order to maintain the physical (allocation of flow and bed load) and ecological (habitat diversity) functions of the braided reach. At present, compared with single-channel meandering streams, there are fewer guiding principles that river practitioners can use to manage braided streams. Insight into braiding morphodynamics using braiding intensity indices allows estimation of the channel forming discharge. We assess variation in braiding intensity by mapping the total number of channels (BIT) and the number of active (bed-load transporting) channels (BIA) at different stages of typical diurnal melt-water hydrographs in a pro-glacial braided river, the Sunwapta River, Canada. Results show that both BIA and BIT vary with flow stage but over a limited range of values. Furthermore, maximum BIT occurs below peak discharge. At this stage there is a balance between channel merging from inundation and occupation of new channels as the stage rises. This stage is the channel forming discharge because above it the existing braided pattern cannot convey the volume of water without causing morphological changes (e.g., destruction of bifurcations, channel avulsion). Estimation of the channel forming discharge requires a set of braiding intensity measurements over a range of flow stages. The design of ecological flows must take into consideration flow regime characteristics rather than just the channel forming discharge magnitude.

  17. Optical tomographic memories: algorithms for the efficient information readout

    NASA Astrophysics Data System (ADS)

    Pantelic, Dejan V.

    1990-07-01

    Tomographic algorithms are modified in order to reconstruct information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the positions of the bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES. Tomographic principles can be used to store and reconstruct information artificially stored in the bulk of a photosensitive medium [1]. The information is stored by changing some characteristic of the memory material (e.g. refractive index). Radiation from two independent light sources (e.g. lasers) is focused inside the memory material. In this way the intensity of the light is above the threshold only at the localized point where the light rays intersect. By scanning the material, the information can be stored in binary or n-ary format. Once the information is stored, it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem: here a lot of a priori information is present, regarding the positions of the bits of information, the profile representing a single bit, and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF THE TOMOGRAPHIC MEMORIES. A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for the information readout, together with simulation results, will be presented. Special attention will be given to noise considerations. Two different

  18. FPGA implementation of bit controller in double-tick architecture

    NASA Astrophysics Data System (ADS)

    Kobylecki, Michał; Kania, Dariusz

    2017-11-01

    This paper presents a comparison of two original architectures of programmable bit controllers built on FPGAs. Programmable Logic Controllers (which include, among other things, programmable bit controllers) built on FPGAs provide an efficient alternative to controllers based on microprocessors, which are expensive and often too slow. The presented and compared methods allow for the efficient implementation of any bit control algorithm written in the Ladder Diagram language into the programmable logic system in accordance with IEC 61131-3. In both cases, we have compared the effect of the applied architecture on the performance of executing the same bit control program in relation to its own size.
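
    The scan-cycle semantics that such bit controllers implement (read inputs, evaluate the rungs in order, write outputs) can be sketched in software; this is an illustrative Python model, with plain boolean functions standing in for the Ladder Diagram rungs that IEC 61131-3 defines graphically:

```python
def scan_cycle(image, program):
    """One PLC scan: evaluate each rung in order against the process
    image, writing each rung's result back into the image."""
    image = dict(image)                  # snapshot, as a real scan does
    for output, rung in program:         # rung: boolean function of the image
        image[output] = rung(image)
    return image

# Classic start/stop seal-in rung: motor = (start OR motor) AND NOT stop
program = [("motor", lambda t: (t["start"] or t["motor"]) and not t["stop"])]
state = {"start": True, "stop": False, "motor": False}
state = scan_cycle(state, program)       # motor energizes and latches
```

    An FPGA realization evaluates such rungs in hardware rather than instruction by instruction, which is where the speed advantage over microprocessor-based controllers comes from.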

  19. Conditions for the optical wireless links bit error ratio determination

    NASA Astrophysics Data System (ADS)

    Kvíčala, Radek

    2017-11-01

    To determine the quality of Optical Wireless Links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be defined by the bit error rate (BER) of the optical beam. The bit error rate gives the proportion of transmitted bits that are received in error. In practice, BER measurement runs into the problem of determining the integration time (measuring time). For measuring and recording the BER of an OWL, a bit error ratio tester (BERT) has been developed. A 1 second integration time for 64 kbps radio links is mentioned in the accessible literature. However, it is impossible to use this integration time here because of the singular character of coherent beam propagation.
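
    The integration-time problem has a simple quantitative core (a generic sketch with hypothetical numbers, not the BERT described in the record): the expected number of errors counted in a window is BER times rate times time, and when that product falls below a handful of errors the estimate becomes unreliable.

```python
def measured_ber(error_count, bit_rate_bps, integration_time_s):
    """BER estimate from an error counter over one integration window."""
    return error_count / (bit_rate_bps * integration_time_s)

def expected_errors(true_ber, bit_rate_bps, integration_time_s):
    """Expected error count in the window; a count well below ~10
    means the window is too short for a trustworthy estimate."""
    return true_ber * bit_rate_bps * integration_time_s
```

    At 64 kbps and a true BER of 1e-6, a 1 s window expects only 0.064 errors, which illustrates why a fixed 1 s integration time cannot simply be carried over between link types.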

  20. Grinding tool for making hemispherical bores in hard materials

    DOEpatents

    Duran, E.L.

    1985-04-03

    A grinding tool for forming hemispherical bores in hard materials such as boron carbide. The tool comprises a hemicircular grinding bit, formed of a metal bond diamond matrix, which is mounted transversely on one end of a tubular tool shaft. The bit includes a spherically curved outer edge surface which is the active grinding surface of the tool. Two coolant fluid ports on opposite sides of the bit enable introduction of coolant fluid through the bore of the tool shaft so as to be emitted adjacent the opposite sides of the grinding bit, thereby providing optimum cooling of both the workpiece and the bit.

  1. Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojtanowicz, A.K.; Kuru, E.

    1993-12-01

    An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for the static balance of forces acting on a single PDC cutter and is based on an assumed similarity between bit and cutter. The model is fully explicit, with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.

  2. SpecBit, DecayBit and PrecisionBit: GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin

    2018-01-01

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.

  3. Double acting bit holder

    DOEpatents

    Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.

    1994-01-01

    A double acting bit holder that permits bits held in it to be resharpened during cutting action, increasing energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to a cutter head of an excavation machine and having an integral extension therefrom with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base having a pin shaft integrally extending therefrom that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate a shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during cutting action of a bit fixed in a turret to allow front, mid and back positions of the bit during cutting to lessen creation of small chip amounts and resharpen the bit during excavation use.

  4. Simple and high-speed polarization-based QKD

    NASA Astrophysics Data System (ADS)

    Grünenfelder, Fadri; Boaron, Alberto; Rusca, Davide; Martin, Anthony; Zbinden, Hugo

    2018-01-01

    We present a simplified BB84 protocol with only three quantum states and one decoy-state level. We implement this scheme using the polarization degree of freedom at telecom wavelength. Only one pulsed laser is used in order to reduce possible side-channel attacks. The repetition rate of 625 MHz and the achieved secret bit rate of 23 bps over 200 km of standard fiber represent the current state of the art.

  5. Lessons from the Institute for New Heads (INH) Class of 2006: Ten Headships--134 Years of Hard-Earned Experience

    ERIC Educational Resources Information Center

    Raphel, Annette; Huber, John; Chandler, Carolyn; Vorenberg, Amy; Jones-Wilkins, Andy; Devey, Mark A.; Holford, Josie; Craig, Ian; Elam, Julie

    2016-01-01

    Ten years ago in July 2006, 64 mostly starry-eyed men and women attended the NAIS Institute for New Heads (INH) in order to learn the ropes of headship. These newly minted heads were filled with enthusiasm, commitment, and passion, along with humility and a bit of healthy trepidation. One core group connected under the careful guidance of…

  6. Improving pointing of Toruń 32-m radio telescope: effects of rail surface irregularities

    NASA Astrophysics Data System (ADS)

    Lew, Bartosz

    2018-03-01

    Over the last few years a number of software and hardware improvements have been implemented to the 32-m Cassegrain radio telescope located near Toruń. The 19-bit angle encoders have been upgraded to 29-bit in azimuth and elevation axes. The control system has been substantially improved, in order to account for a number of previously-neglected, astrometric effects that are relevant for milli-degree pointing. In the summer 2015, as a result of maintenance works, the orientation of the secondary mirror has been slightly altered, which resulted in worsening of the pointing precision, much below the nominal telescope capabilities. In preparation for observations at the highest available frequency of 30-GHz, we use One Centimeter Receiver Array (OCRA), to take the most accurate pointing data ever collected with the telescope, and we analyze it in order to improve the pointing precision. We introduce a new generalized pointing model that, for the first time, accounts for the rail irregularities, and we show that the telescope can have root mean square pointing accuracy at the level < 8″ and < 12″ in azimuth and elevation respectively. Finally, we discuss the implemented pointing improvements in the light of effects that may influence their long-term stability.

  7. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also help ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the erroneous code word is flipped multiple times, searched in the order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
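
    The flipping principle behind such decoders can be illustrated with its simplest binary ancestor, Gallager's bit-flipping algorithm (a toy Python sketch with a made-up parity-check matrix, not the proposed non-binary scheme): flip the bit involved in the most unsatisfied parity checks until the syndrome clears.

```python
def bit_flip_decode(received, H, max_iter=10):
    """Gallager bit-flipping: repeatedly flip the bit that participates
    in the largest number of unsatisfied parity checks."""
    c = list(received)
    for _ in range(max_iter):
        syndrome = [sum(h[j] * c[j] for j in range(len(c))) % 2 for h in H]
        if not any(syndrome):
            return c                       # all parity checks satisfied
        counts = [sum(H[i][j] * syndrome[i] for i in range(len(H)))
                  for j in range(len(c))]  # unsatisfied checks per bit
        c[counts.index(max(counts))] ^= 1  # flip the most-suspect bit
    return c

# Toy (6, 3) parity-check matrix for demonstration only
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
```

    The proposed algorithm refines this idea with channel soft information and loop update detection, but the flip-until-the-syndrome-clears loop is the same skeleton.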

  8. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Using Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of the eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
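
    The pipeline of comparators and memories "implementing conditional probabilities on zeros and ones" amounts to drawing each output bit with a probability that depends on the bits already fixed; a software analogue (illustrative Python, not the circuit itself, with `cond_prob` standing in for the memory contents):

```python
import random

def sample_bits(cond_prob, n_bits=8):
    """Generate an n-bit number bit by bit: each new bit is 1 with a
    probability conditioned on the prefix of bits already produced,
    so arbitrary n-bit distributions can be built from uniform draws."""
    bits = []
    for _ in range(n_bits):
        p1 = cond_prob(tuple(bits))            # P(next bit = 1 | prefix)
        bits.append(1 if random.random() < p1 else 0)
    value = 0
    for b in bits:                             # assemble MSB-first
        value = (value << 1) | b
    return value
```

    With `cond_prob` constantly 0.5 this reduces to a uniform 8-bit generator; any other table of conditional probabilities shapes the output distribution.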

  9. Fastener Recess Evaluation

    DTIC Science & Technology

    1978-04-01

    lbs respectively. The Torx recess suffered most under the test methods adopted (as would any similar recess such as the internal hex). Zero values ...Administration, National Tool Center, Apex Machine and Tool Co., Phillips International Co., Hi-Shear Corp., General Dynamics Corp., Defense Logistics Agency...29 6. Undersized Bits 30 7. Worn Bit Test 31 8. Stock Bit and Screw Comparison 32 9. Ribbed Bits 36 V FIELD DATA OBSERVATIONS 38 1. Torque Values 38 2

  10. Meteor burst communications for LPI applications

    NASA Astrophysics Data System (ADS)

    Schilling, D. L.; Apelewicz, T.; Lomp, G. R.; Lundberg, L. A.

    A technique that enhances the performance of meteor-burst communications is described. The technique, the feedback adaptive variable rate (FAVR) system, maintains a feedback channel that allows the transmitted bit rate to mimic the time behavior of the received power so that a constant bit energy is maintained. This results in a constant probability of bit error in each transmitted bit. Experimentally determined meteor-burst channel characteristics and FAVR system simulation results are presented.
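
    The constant-bit-energy rule underlying FAVR is simply E_b = P/R held fixed, so the commanded rate tracks the received power; a schematic sketch (hypothetical Python with made-up numbers, not the actual system parameters):

```python
def favr_bit_rate(received_power_w, target_bit_energy_j, max_rate_bps=None):
    """Constant-E_b adaptation: with E_b = P/R held fixed, R = P / E_b.
    As the meteor-trail power decays, the fed-back rate decays with it,
    keeping the per-bit error probability constant."""
    rate = received_power_w / target_bit_energy_j
    return min(rate, max_rate_bps) if max_rate_bps else rate
```

    The feedback channel exists precisely to carry this rate command back to the transmitter as the received power changes during a burst.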

  11. Short Note on Complexity of Multi-Value Byzantine Agreement

    DTIC Science & Technology

    2010-07-27

    which lead to nBl/D bits over the whole algorithm. Broadcasts in extended step: In the extended step, every node broadcasts D bits. Thus nDB bits...bits, as: (n−1)l + n(n−1)(k + D/k)l/D + nBl/D + nDBt(t+1) (4) = (n−1)l + O(n²kl/D + n²l/k + nBl/D + n³BD). (5) Notice that broadcast algorithm of

  12. Long sequence correlation coprocessor

    NASA Astrophysics Data System (ADS)

    Gage, Douglas W.

    1994-09-01

    A long sequence correlation coprocessor (LSCC) accelerates the bitwise correlation of arbitrarily long digital sequences by calculating in parallel the correlation score for 16, for example, adjacent bit alignments between two binary sequences. The LSCC integrated circuit is incorporated into a computer system with memory storage buffers and a separate general purpose computer processor which serves as its controller. Each of the LSCC's set of sequential counters simultaneously tallies a separate correlation coefficient. During each LSCC clock cycle, computer enable logic associated with each counter compares one bit of a first sequence with one bit of a second sequence to increment the counter if the bits are the same. A shift register assures that the same bit of the first sequence is simultaneously compared to different bits of the second sequence to simultaneously calculate the correlation coefficient by the different counters to represent different alignments of the two sequences.
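
    The parallel tally the LSCC performs in hardware can be written out directly (an illustrative Python sketch of the same computation the 16 counters do in lockstep): alignment k scores the matches between the first sequence and the second sequence shifted by k bits.

```python
def correlate_alignments(a, b, n_align=16):
    """Correlation scores for n_align adjacent bit alignments:
    score[k] counts positions where a[i] == b[i + k], mirroring
    one hardware counter per alignment incrementing on equal bits."""
    scores = []
    for k in range(n_align):
        scores.append(sum(1 for i in range(len(a))
                          if i + k < len(b) and a[i] == b[i + k]))
    return scores
```

    The hardware advantage is that all n_align counters update in the same clock cycle as the shift register advances, whereas this sketch visits the alignments sequentially.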

  13. Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Iskenderian, Theodore; Sherrit, Stewart; Bao, Xiaoqi; Linderman, Randel

    2010-01-01

    Long tool bits are undergoing development that can be stowed compactly until used as rock- or ground-penetrating probes actuated by ultrasonic/sonic mechanisms. These bits are designed to be folded or rolled into compact form for transport to exploration sites, where they are to be connected to their ultrasonic/ sonic actuation mechanisms and unfolded or unrolled to their full lengths for penetrating ground or rock to relatively large depths. These bits can be designed to acquire rock or soil samples and/or to be equipped with sensors for measuring properties of rock or soil in situ. These bits can also be designed to be withdrawn from the ground, restowed, and transported for reuse at different exploration sites. Apparatuses based on the concept of a probe actuated by an ultrasonic/sonic mechanism have been described in numerous prior NASA Tech Briefs articles, the most recent and relevant being "Ultrasonic/ Sonic Impacting Penetrators" (NPO-41666) NASA Tech Briefs, Vol. 32, No. 4 (April 2008), page 58. All of those apparatuses are variations on the basic theme of the earliest ones, denoted ultrasonic/sonic drill corers (USDCs). To recapitulate: An apparatus of this type includes a lightweight, low-power, piezoelectrically driven actuator in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that the size of the axial force needed to make the tool bit advance into soil, rock, or another material of interest is much smaller than in ordinary twist drilling, ordinary hammering, or ordinary steady pushing. 
Examples of properties that could be measured by use of an instrumented tool bit include electrical conductivity, permittivity, magnetic field, magnetic permeability, temperature, and any other properties that can be measured by fiber-optic sensors. The problem of instrumenting a probe of this type is simplified, relative to the problem of attaching electrodes in a rotating drill bit, in two ways: (1) Unlike a rotating drill bit, a bit of this type does not have flutes, which would compound the problem of ensuring contact between sensors and the side wall of a hole; and (2) there is no need for slip rings for electrical contact between sensor electronic circuitry and external circuitry because, unlike a rotating drill, a tool bit of this type is not rotated continuously during operation. One design for a tool bit of the present type is a segmented bit with a segmented, hinged support structure (see figure). The bit and its ultrasonic/sonic actuator are supported by a slider/guiding fixture, and its displacement and preload are controlled by a motor. For deployment from the folded configuration, a spring-loaded mechanism rotates the lower segment about the hinges, causing the lower segment to become axially aligned with the upper segment. A latching mechanism then locks the segments of the bit and the corresponding segments of the slider/guiding fixture. Then the entire resulting assembly is maneuvered into position for drilling into the ground. Another design provides for a bit comprising multiple tubular segments with an inner alignment string, similar to a foldable tent pole comprising multiple tubular segments with an inner elastic cable connecting the two ends. At the beginning of deployment, all segments except the first (lowermost) one remain folded, and the ultrasonic/sonic actuator is clamped to the top of the lowermost segment and used to drive this segment into the ground. 
    When the first segment has penetrated to a specified depth, the second segment is connected to the upper end of the first segment to form a longer rigid tubular bit and the actuator is moved to the upper end of the second segment. The process as described thus far is repeated, adding segments until the desired depth of penetration has been attained. Yet other designs provide for bits in the form of bistable circular- or rectangular- cross-section tubes that can be stowed compactly like rolls of flat tape and become rigidified upon extension to full length, in a manner partly similar to that of a common steel tape measure. Albeit not marketed for use in tool bits, a bistable reeled composite product that transforms itself from a flat coil to a rigid tube of circular cross section when unrolled, is commercially available under the trade name RolaTube(TradeMark) and serves as a model for the further development of tool bits of this subtype.

  14. Experimental superposition of orders of quantum gates

    PubMed Central

    Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to ‘superimpose different operations’. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107
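
    The task being sped up, deciding whether two gates commute or anti-commute, is easy to state classically (a plain linear-algebra illustration in Python; the point of the experiment is that the quantum switch decides it with a single use of each gate, which no fixed-order circuit can):

```python
def matmul(A, B):
    """2x2 matrix product, enough for single-qubit gates."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def classify(A, B, tol=1e-9):
    """Return 'commute' if AB = BA, 'anti-commute' if AB = -BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    if all(abs(AB[i][j] - BA[i][j]) < tol for i in range(2) for j in range(2)):
        return "commute"
    if all(abs(AB[i][j] + BA[i][j]) < tol for i in range(2) for j in range(2)):
        return "anti-commute"
    return "neither"

X = [[0, 1], [1, 0]]    # Pauli-X gate
Z = [[1, 0], [0, -1]]   # Pauli-Z gate
```

    The Pauli gates X and Z anti-commute, so they make a standard test pair for this promise problem.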

  15. Microstructure design of low alloy transformation-induced plasticity assisted steels

    NASA Astrophysics Data System (ADS)

    Zhu, Ruixian

    The microstructure of low alloy Transformation Induced Plasticity (TRIP) assisted steels has been systematically varied through a combination of computational and experimental methodologies in order to enhance mechanical performance and to fulfill the requirements of the next generation of Advanced High Strength Steels (AHSS). The roles of microstructural parameters, such as phase constitution, phase stability, and volume fractions, in the strength-ductility combination have been revealed. Two model alloy compositions (i.e., Fe-1.5Mn-1.5Si-0.3C and Fe-3Mn-1Si-0.3C in wt%, nominal composition) were studied. Multiphase microstructures including ferrite, bainite, retained austenite and martensite were obtained through a conventional two-step heat treatment (i.e., intercritical annealing, IA, and bainitic isothermal transformation, BIT). The effect of phase constitution on the mechanical properties was first characterized experimentally by systematically varying the volume fractions of these phases through computational thermodynamics. It was found that martensite was the main phase deteriorating ductility; meanwhile, the C/VA ratio (i.e., carbon content over the volume fraction of austenite) could be another indicator of the ductility of the multiphase microstructure. Following the microstructural characterization of the multiphase alloys, two microstructural design criteria (i.e., maximizing ferrite and austenite, and suppressing athermal martensite) were proposed in order to optimize the corresponding mechanical performance. The volume fraction of ferrite was maximized during the IA with the help of computational thermodynamics. On the other hand, theoretical analysis showed that martensite formation could not be avoided in the low-Mn alloy (i.e., Fe-1.5Mn-1.5Si-0.3C).
Nevertheless, the combination of strength (~1300 MPa true strength) and ductility (~23% uniform elongation) achieved on the low-Mn alloy following the proposed design criteria fulfilled the requirement of the next generation AHSS. To further optimize the microstructure such that the design criteria could be fully satisfied, further efforts were made on two aspects: heat treatment and alloy addition. A multi-step BIT treatment was designed and successfully reduced the martensite content in the Fe-1.5Mn-1.5Si-0.3C alloy. Microstructure analysis showed a significant reduction in the volume fraction of martensite after the multi-step BIT as compared to the single-step BIT. It was also found that a slow cooling rate between the two BIT treatments resulted in a better combination of strength and ductility than rapid cooling or conventional one-step BIT. Moreover, athermal martensite formation could be fully suppressed by increasing the Mn content (Fe-3Mn-1Si-0.3C) and through carefully designed heat treatments. The athermal-martensite-free alloy provided consistently better ductility than the martensite-containing alloy. Finally, a microstructure-based semi-empirical constitutive model was developed to predict the monotonic tensile behavior of the multiphase TRIP assisted steels. The stress rule of mixtures and an isowork assumption for the individual phases were presumed. The Mecking-Kocks model was utilized to simulate the flow behavior of ferrite, bainitic ferrite and untransformed retained austenite. The kinetics of strain-induced martensitic transformation was modeled following the Olson-Cohen method. The developed model shows good agreement with the experimental results for both TRIP steels studied, with the same model parameters.

  16. Recipes for Avoiding Limpness: An Exploration of Women in Senior Management Positions in Australian Universities.

    ERIC Educational Resources Information Center

    Meyenn, Bob; Parker, Judith

    This paper describes a study that explored assumptions regarding the role of women in higher education set forth in the Sydney Morning Herald (Australia) in June 1995 by Dame Leonie Kramer, a prominent academic. She contended that "women go a bit limp when things get tough...." The study was based on semistructured interviews with seven…

  17. Adaptive P300 based control system

    PubMed Central

    Jin, Jing; Allison, Brendan Z.; Sellers, Eric W.; Brunner, Clemens; Horki, Petar; Wang, Xingyu; Neuper, Christa

    2015-01-01

    An adaptive P300 brain-computer interface (BCI) using a 12 × 7 matrix explored new paradigms to improve bit rate and accuracy. During online use, the system adaptively selects the number of flashes to average. Five different flash patterns were tested. The 19-flash paradigm represents the typical row/column presentation (i.e., 12 columns and 7 rows). The 9- and 14-flash A & B paradigms present all items of the 12 × 7 matrix three times using either nine or 14 flashes (instead of 19), decreasing the amount of time to present stimuli. Compared to 9-flash A, 9-flash B decreased the likelihood that neighboring items would flash when the target was not flashing, thereby reducing interference from items adjacent to targets. 14-flash A also reduced adjacent item interference and 14-flash B additionally eliminated successive (double) flashes of the same item. Results showed that accuracy and bit rate of the adaptive system were higher than the non-adaptive system. In addition, 9- and 14-flash B produced significantly higher performance than their respective A conditions. The results also show the trend that the 14-flash B paradigm was better than the 19-flash pattern for naïve users. PMID:21474877

  18. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Research in this area has concentrated on the spatial (uncompressed) domain, and data hiding is considered more challenging to perform in compressed domains, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved as low a bit rate as the original BTC algorithm.
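    The core embedding step, substituting the three least-significant bits of each BTC mean value with secret bits, can be sketched as follows. This is a simplified identity-mapping version: the paper's dynamic-programming search for an optimal bijective LSB mapping is omitted, and the function names and sample values are illustrative.

```python
# Hedged sketch: embed 3 secret bits into the 3 LSBs of each BTC mean
# value. The actual scheme additionally optimizes a bijective mapping
# of LSB patterns via dynamic programming to minimize distortion.

def embed(means, bits):
    out = []
    for i, m in enumerate(means):
        chunk = bits[3 * i: 3 * i + 3]
        value = int("".join(map(str, chunk)), 2)
        out.append((m & ~0b111) | value)   # replace the 3 LSBs
    return out

def extract(stego_means):
    bits = []
    for m in stego_means:
        bits.extend(int(b) for b in format(m & 0b111, "03b"))
    return bits

means = [121, 87, 200, 45]                 # example BTC mean values
secret = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1]
stego = embed(means, secret)
assert extract(stego) == secret            # lossless recovery of the payload
assert max(abs(a - b) for a, b in zip(means, stego)) <= 7  # bounded distortion
```

    Three bits per mean value is what bounds the distortion: each mean changes by at most 7 grey levels, which is why the scheme preserves visual quality.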

  19. Gigahertz repetition rate, sub-femtosecond timing jitter optical pulse train directly generated from a mode-locked Yb:KYW laser.

    PubMed

    Yang, Heewon; Kim, Hyoji; Shin, Junho; Kim, Chur; Choi, Sun Young; Kim, Guang-Hoon; Rotermund, Fabian; Kim, Jungwon

    2014-01-01

    We show that a 1.13 GHz repetition rate optical pulse train with 0.70 fs high-frequency timing jitter (integration bandwidth of 17.5 kHz-10 MHz, where the measurement instrument-limited noise floor contributes 0.41 fs in 10 MHz bandwidth) can be directly generated from a free-running, single-mode diode-pumped Yb:KYW laser mode-locked by single-wall carbon-nanotube-coated mirrors. To our knowledge, this is the lowest-timing-jitter optical pulse train with gigahertz repetition rate ever measured. If this pulse train is used for direct sampling of 565 MHz signals (the Nyquist frequency of the pulse train), the demonstrated jitter level would correspond to a projected effective number of bits (ENOB) of 17.8, which is much higher than the thermal noise limit of a 50 Ω load resistance (~14 bits).
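    The quoted ENOB figure follows from the standard jitter-limited relations SNR = −20·log10(2π·f·σ_t) and ENOB = (SNR − 1.76)/6.02. A quick check with the abstract's numbers: 0.70 fs at the 565 MHz Nyquist frequency gives ≈18.3 bits, so the published 17.8 presumably uses a slightly larger total-jitter figure (the exact accounting is not stated in the abstract).

```python
import math

def jitter_limited_enob(f_signal_hz, jitter_s):
    """ENOB of a sampler limited only by aperture (timing) jitter."""
    snr_db = -20 * math.log10(2 * math.pi * f_signal_hz * jitter_s)
    return (snr_db - 1.76) / 6.02

print(round(jitter_limited_enob(565e6, 0.70e-15), 1))  # -> 18.3
```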

  20. DOC II 32-bit digital optical computer: optoelectronic hardware and software

    NASA Astrophysics Data System (ADS)

    Stone, Richard V.; Zeise, Frederick F.; Guilfoyle, Peter S.

    1991-12-01

    This paper describes current electronic hardware subsystems and software code which support OptiComp's 32-bit general purpose digital optical computer (DOC II). The reader is referred to earlier papers presented in this section for a thorough discussion of theory and application regarding DOC II. The primary optoelectronic subsystems include the drive electronics for the multichannel acousto-optic modulators, the avalanche photodiode amplifier and threshold circuitry, and the memory subsystems. This device utilizes a single optical Boolean vector-matrix multiplier and its VME-based host controller interface in performing various higher-level primitives. OptiComp Corporation wishes to acknowledge the financial support of the Office of Naval Research, the National Aeronautics and Space Administration, the Rome Air Development Center, and the Strategic Defense Initiative Office for the funding of this program under contracts N00014-87-C-0077, N00014-89-C-0266 and N00014-89-C-0225.

  1. Ultrasonic/Sonic Driller/Corer (USDC) as a Subsurface Sampler and Sensors Platform for Planetary Exploration Applications

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Sherrit, Stewart; Bao, Xiaoqi; Badescu, Mircea; Aldrich, Jack; Chang, Zensheu

    2006-01-01

    The search for existing or past life in the Universe is one of the most important objectives of NASA's mission. For this purpose, effective instruments that can sample and conduct in-situ astrobiology analysis are being developed. In support of this objective, a series of novel mechanisms that are driven by an ultrasonic/sonic actuator have been developed to probe and sample rocks, ice and soil. This mechanism is driven by an ultrasonic piezoelectric actuator that impacts a bit at sonic frequencies through the use of an intermediate free-mass. Ultrasonic/Sonic Driller/Corer (USDC) devices were made that can produce both core and powdered cuttings, operate as a sounder to emit elastic waves and serve as a platform for sensors. For planetary exploration, this mechanism has the important advantage of requiring low axial force and virtually no torque, and it can be duty-cycled for operation at low average power. The low required axial load overcomes a major limitation of planetary sampling in low gravity environments or when operating from lightweight robots and rovers. The ability to operate under duty cycling at low average power produces a minimal temperature rise, preserving sample integrity and preventing damage to potential biological markers in the acquired sample. The development of the USDC is being pursued on various fronts ranging from analytical modeling to mechanism improvements while considering a wide range of potential applications. While developing the analytical capability to predict and optimize its performance, efforts are made to enhance its capability to drill at higher power and high speed. Taking advantage of the fact that the bit does not require rotation, sensors (e.g., thermocouple and fiber optics) were integrated into the bit to examine the borehole during drilling. The sounding effect of the drill was used to emit elastic waves in order to evaluate the surface characteristics of rocks.
Since the USDC is driven by a piezoelectric actuation mechanism, it can be designed to operate in extreme temperature environments, from very cold, as on Titan and Europa, to very hot, as on Venus. In this paper, a review of the latest development and applications of the USDC will be given.

  2. A boosted negative bit-line SRAM with write-assisted cell in 45 nm CMOS technology

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Vipul; Kumar, Pradeep; Pandey, Neeta; Pandey, Sujata

    2018-02-01

    A new 11 T SRAM cell with write-assist is proposed to improve operation at low supply voltage. In this technique, a negative bit-line voltage is applied to one of the write bit-lines, while a boosted voltage is applied to the other write bit-line; transmission-gate access is used in the proposed 11 T cell. Supply voltage to one of the inverters is interrupted to weaken the feedback. The improved write capability is attributed to simultaneously strengthened write-access devices and a weakened feedback loop in the cell. The amount of boosting required for write performance improvement is also reduced due to the feedback weakening, solving the persistent problems of half-selected cells and reduced reliability of access devices seen with other boosted and negative bit-line techniques. The proposed design improves write time by 79% and 63% with respect to the LP 10 T and WRE 8 T cells, respectively, while being 52% slower than the 6 T cell. Write margin for the proposed cell is improved by about 4×, 2.4× and 5.37× compared to the WRE8 T, LP10 T and 6 T cells, respectively. The proposed cell with boosted negative bit line (BNBL) provides 47%, 31%, and 68.4% improvement in write margin with respect to no write-assist, negative bit line (NBL) and boosted bit line (BBL) write-assist, respectively. A new sensing circuit with a replica bit-line is also proposed to give more precise timing of the applied boosted voltages. All simulations are done in TSMC 45 nm CMOS technology.

  3. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP)

    PubMed Central

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a BCI technology with the advantage of a high information transfer rate. However, by its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimulus-encoding method that encodes the SSVEP signal using frequency-shift keying (FSK) was developed. In this method, each stimulus is controlled by an FSK signal containing three different frequencies that represent “Bit 0,” “Bit 1” and “Bit 2,” respectively. Unlike common BFSK in digital communication, “Bit 0” and “Bit 1” compose the unique identifier of a stimulus in binary bit-stream form, while “Bit 2” indicates the end of a stimulus encoding. EEG signals were acquired on channels Oz, O1, O2, Pz, P3, and P4 using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are found by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^(n−1) stimuli (where n is the length of the bit command) while keeping the ITR the same. The method is suitable for implementing stimuli on a monitor where the frequencies and phases available to code stimuli are limited, as well as for portable BCI devices that are not capable of performing complex calculations. PMID:28626393

  4. Estimation of the safe use concentrations of the preservative 1,2-benzisothiazolin-3-one (BIT) in consumer cleaning products and sunscreens.

    PubMed

    Novick, Rachel M; Nelson, Mindy L; Unice, Kenneth M; Keenan, James J; Paustenbach, Dennis J

    2013-06-01

    1,2-Benzisothiazolin-3-one (BIT; CAS # 2634-33-5) is a preservative used in consumer products. Dermal exposure to BIT at sufficient dose and duration can produce skin sensitization and allergic contact dermatitis in animals and susceptible humans. The purpose of this study is to derive a maximal concentration of BIT in various consumer products that would result in exposures below the No Expected Sensitization Induction Level (NESIL), a dose below which skin sensitization should not occur. A screening-level exposure estimate was performed for several product use scenarios with sunscreen, laundry detergent, dish soap, and spray cleaner. We calculated that BIT concentrations below 0.0075%, 0.035%, 0.035%, and 0.021% in sunscreen, laundry detergent, dish soap, and spray cleaner, respectively, are unlikely to induce skin sensitization. We completed a pilot study consisting of bulk sample analysis of one representative product from each category labelled as containing BIT, and found BIT concentrations of 0.0009% and 0.0027% for sunscreen and dish soap, respectively. BIT was not detected in the laundry detergent and spray cleaner products above the limit of detection of 0.0006%. Based on publicly available data on product formulations and our results, we established that cleaning products and sunscreens likely contain BIT at concentrations similar to or less than our calculated maximal safe concentrations and that exposures are unlikely to induce skin sensitization in most users. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Optical domain analog to digital conversion methods and apparatus

    DOEpatents

    Vawter, Gregory A

    2014-05-13

    Methods and apparatus for optical analog to digital conversion are disclosed. An optical signal is converted by mapping the optical analog signal onto a wavelength modulated optical beam, passing the mapped beam through interferometers to generate analog bit representation signals, and converting the analog bit representation signals into an optical digital signal. A photodiode receives an optical analog signal, a wavelength modulated laser coupled to the photodiode maps the optical analog signal to a wavelength modulated optical beam, interferometers produce an analog bit representation signal from the mapped wavelength modulated optical beam, and sample and threshold circuits corresponding to the interferometers produce a digital bit signal from the analog bit representation signal.
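    The patent abstract does not spell out the interferometer arrangement, but a classic interferometric (Taylor-type) optical ADC uses channels whose transfer functions fold at rates doubling from bit to bit; thresholding each channel yields a Gray-coded digital word. A numerical sketch of that folding-and-threshold idea (illustrative only, not the patented apparatus):

```python
import math

def folding_adc(x, n_bits):
    """Threshold n interferometer-like folded transfer functions
    cos(pi * 2^k * x) to produce a Gray-coded word for x in [0, 1)."""
    return [1 if math.cos(math.pi * (2 ** k) * x) < 0 else 0
            for k in range(n_bits)]

def gray(k):
    """Reflected binary (Gray) code of integer k."""
    return k ^ (k >> 1)

# Each quantization level's center maps to the Gray code of its index.
n = 3
for level in range(2 ** n):
    x = (level + 0.5) / 2 ** n
    word = int("".join(map(str, folding_adc(x, n))), 2)
    assert word == gray(level)
print("3-bit folding ADC reproduces the Gray code")
```

    The Gray-code property means adjacent analog levels differ in a single bit, so a threshold sitting exactly on a transition can corrupt at most one bit of the output word.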

  6. Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.

    2003-01-01

    Two types of single event effects, bit errors and single event functional interrupts, were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts, with the average number of bits in a burst depending on both the ion LET and the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second-rank latch containing the data specifying the state of each switch in the 33 × 17 matrix.

  7. Family centered brief intensive treatment: a pilot study of an outpatient treatment for acute suicidal ideation.

    PubMed

    Anastasia, Trena T; Humphries-Wadsworth, Terresa; Pepper, Carolyn M; Pearson, Timothy M

    2015-02-01

    Family Centered Brief Intensive Treatment (FC BIT), a hospital diversion treatment program for individuals with acute suicidal ideation, was developed to treat suicidal clients and their families. Individuals who met criteria for hospitalization were treated as outpatients using FC BIT (n = 19) or an intensive outpatient treatment without the family component (IOP; n = 24). Clients receiving FC BIT identified family members or supportive others to participate in therapy. FC BIT clients had significantly greater improvement at the end of treatment compared to IOP clients on measures of depression, hopelessness, and suicidality. Further research is needed to test the efficacy of FC BIT. © 2014 The American Association of Suicidology.

  8. High speed, very large (8 megabyte) first in/first out buffer memory (FIFO)

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1989-01-01

    A fast FIFO (First In First Out) memory buffer capable of storing data at rates of 100 megabytes per second. The invention includes a data packer which concatenates small bit data words into large bit data words, a memory array having individual data storage addresses adapted to store the large bit data words, a data unpacker into which large bit data words from the array can be read and reconstructed into small bit data words, and a controller to control and keep track of the individual data storage addresses in the memory array into which data from the packer is being written and data to the unpacker is being read.
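    The packer/unpacker pair described, concatenating small data words into large words for storage and reconstructing them on readout, can be sketched as below; the 8-bit-to-32-bit word widths are chosen for illustration and are not stated in the patent abstract.

```python
# Hedged sketch of the FIFO data packer/unpacker: four 8-bit words are
# concatenated into one 32-bit word for the memory array, then split
# back into 8-bit words on readout.

def pack(bytes_in):
    """Concatenate groups of four 8-bit words into 32-bit words."""
    assert len(bytes_in) % 4 == 0
    return [(bytes_in[i] << 24) | (bytes_in[i + 1] << 16) |
            (bytes_in[i + 2] << 8) | bytes_in[i + 3]
            for i in range(0, len(bytes_in), 4)]

def unpack(words):
    """Split 32-bit words back into their four 8-bit words."""
    out = []
    for w in words:
        out.extend([(w >> 24) & 0xFF, (w >> 16) & 0xFF,
                    (w >> 8) & 0xFF, w & 0xFF])
    return out

data = [0x12, 0x34, 0x56, 0x78, 0xAB, 0xCD, 0xEF, 0x01]
words = pack(data)
print([hex(w) for w in words])  # -> ['0x12345678', '0xabcdef01']
assert unpack(words) == data    # round trip is lossless
```

    Packing four narrow words per memory write is what lets a modest-speed memory array sustain the 100 MB/s input rate: the array sees one write per four input words.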

  9. Study of bidirectional broadband passive optical network (BPON) using EDFA

    NASA Astrophysics Data System (ADS)

    Almalaq, Yasser

    Optical line terminals (OLTs) and number of optical network units (ONUs) are two main parts of passive optical network (PON). OLT is placed at the central office of the service providers, the ONUs are located near to the end subscribers. When compared with point-to-point design, a PON decreases the number of fiber used and central office components required. Broadband PON (BPON), which is one type of PON, can support high-speed voice, data and video services to subscribers' residential homes and small businesses. In this research, by using erbium doped fiber amplifier (EDFA), the performance of bi-directional BPON is experimented and tested for both downstream and upstream traffic directions. Ethernet PON (E-PON) and gigabit PON (G-PON) are the two other kinds of passive optical network besides BPON. The most beneficial factor of using BPON is it's reduced cost. The cost of the maintenance between the central office and the users' side is suitable because of the use of passive components, such as a splitter in the BPON architecture. In this work, a bidirectional BPON has been analyzed for both downstream and upstream cases by using bit error rate analyzer (BER). BER analyzers test three factors that are the maximum Q factor, minimum bit error rate, and eye height. In other words, parameters such as maximum Q factor, minimum bit error rate, and eye height can be analyzed utilized a BER tester. Passive optical components such as a splitter, optical circulator, and filters have been used in modeling and simulations. A 12th edition Optiwave simulator has been used in order to analyze the bidirectional BPON system. The system has been tested under several conditions such as changing the fiber length, extinction ratio, dispersion, and coding technique. When a long optical fiber above 40km was used, an EDFA was used in order to improve the quality of the signal.

  10. "Push back" technique: A simple method to remove broken drill bit from the proximal femur.

    PubMed

    Chouhan, Devendra K; Sharma, Siddhartha

    2015-11-18

    Broken drill bits can be difficult to remove from the proximal femur and may necessitate additional surgical exploration or special instrumentation. We present a simple technique to remove a broken drill bit that does not require any special instrumentation and can be accomplished through the existing incision. This technique is useful for those cases where the length of the broken drill bit is greater than the diameter of the bone.

  11. Secret Bit Transmission Using a Random Deal of Cards

    DTIC Science & Technology

    1990-05-01

    conversation between sender and receiver is public and is heard by all. A correct protocol always succeeds in transmitting the secret bit, and the other player(s), who receive the remaining cards and are assumed to have unlimited computing power, gain no information whatsoever about the value of the secret bit. In other words, their probability of correctly guessing the secret bit is exactly the same after listening to a run of the protocol as it was before.

  12. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    Capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Due to the prevalent use of bytes (multiples of 8 bits) in data transmission, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
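    The byte-aligned case the report studies can be illustrated with a bitwise CRC-8, i.e. eight check bits appended to a data block. The report's specific generator polynomials are not given in this record; the polynomial below (x⁸ + x² + x + 1, the common CRC-8-ATM generator) is used purely for illustration.

```python
# Minimal bitwise CRC-8 (polynomial 0x07), illustrating 8 redundant
# check bits computed over a data block and used to detect errors.

def crc8(data, poly=0x07):
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift left; XOR in the generator when the top bit falls off.
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

block = bytes([0xDE, 0xAD, 0xBE, 0xEF])
check = crc8(block)

# Receiver recomputes the CRC; a single flipped bit is detected.
corrupted = bytes([0xDE, 0xAD, 0xBE, 0xEE])   # last bit flipped
assert crc8(block) == check
assert crc8(corrupted) != check
```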

  13. Characterization of binary string statistics for syntactic landmine detection

    NASA Astrophysics Data System (ADS)

    Nasif, Ahmed O.; Mark, Brian L.; Hintz, Kenneth J.

    2011-06-01

    Syntactic landmine detection has been proposed to detect and classify non-metallic landmines using ground penetrating radar (GPR). In this approach, the GPR return is processed to extract characteristic binary strings for landmine and clutter discrimination. In our previous work, we discussed the preprocessing methodology by which the amplitude information of the GPR A-scan signal can be effectively converted into binary strings, which identify the impedance discontinuities in the signal. In this work, we study the statistical properties of the binary string space. In particular, we develop a Markov chain model to characterize the observed bit sequence of the binary strings. The state is defined as the number of consecutive zeros between two ones in the binarized A-scans. Since the strings are highly sparse (the number of zeros is much greater than the number of ones), defining the state this way leads to a smaller number of states compared to the case where each bit is defined as a state. The total number of states is further reduced by quantizing the number of consecutive zeros. In order to identify the correct order of the Markov model, the mean square difference (MSD) between the transition matrices of mine strings and non-mine strings is calculated up to order four using training data. The results show that order one or two maximizes this MSD. The specification of the transition probabilities of the chain can be used to compute the likelihood of any given string. Such a model can be used to identify characteristic landmine strings during the training phase. These developments in modeling and characterizing the string statistics can potentially be part of a real-time landmine detection algorithm that identifies landmines and clutter in an adaptive fashion.
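    The model-building steps described, converting a binary string into quantized zero-run-length states, estimating a first-order transition matrix per class, and comparing classes via the mean square difference, can be sketched as follows (the quantization cap, example strings, and function names are illustrative):

```python
# Hedged sketch: states are zero-run lengths between successive ones
# (capped to quantize long runs); a first-order transition matrix is
# estimated per string class, and classes are compared via MSD.

def run_length_states(bits, max_state=3):
    """Zero-run lengths between successive ones, capped at max_state."""
    runs, count = [], 0
    for b in bits:
        if b == 1:
            runs.append(min(count, max_state))
            count = 0
        else:
            count += 1
    return runs

def transition_matrix(states, n=4):
    """Row-normalized first-order transition counts over n states."""
    counts = [[0.0] * n for _ in range(n)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    for row in counts:
        total = sum(row)
        if total:
            for j in range(n):
                row[j] /= total
    return counts

def msd(m1, m2):
    """Mean square difference between two transition matrices."""
    n = len(m1)
    return sum((m1[i][j] - m2[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)

mine = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]      # short zero runs
clutter = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]       # long zero runs
d = msd(transition_matrix(run_length_states(mine)),
        transition_matrix(run_length_states(clutter)))
assert d > 0  # distinct run-length statistics separate the two classes
```

    In the paper's procedure, the model order (here fixed at one) would itself be selected by computing this MSD for orders one through four on training data.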

  14. Spin-Valve and Spin-Tunneling Devices: Read Heads, MRAMs, Field Sensors

    NASA Astrophysics Data System (ADS)

    Freitas, P. P.

    Hard disk magnetic data storage is increasing steadily in terms of units sold, with 144 million drives sold in 1998 (107 million for desktops, 18 million for portables, and 19 million for enterprise drives), corresponding to a total business of US$34 billion [1]. The growing need for storage coming from new PC operating systems, Internet applications, and a foreseen explosion of applications connected to consumer electronics (digital TV, video, digital cameras, GPS systems, etc.) keeps the magnetics community actively looking for new solutions concerning media, heads, tribology, and system electronics. Current state-of-the-art disk drives (January 2000), using dual inductive-write, magnetoresistive-read (MR) integrated heads, reach areal densities of 15 to 23 bit/μm², capable of putting a full 20 GB on one platter (a 2-hour film occupies 10 GB). Densities beyond 80 bit/μm² have already been demonstrated in the laboratory (Fujitsu 87 bit/μm² at Intermag 2000; Hitachi 81 bit/μm², Read-Rite 78 bit/μm², and Seagate 70 bit/μm², all demonstrated in the first 6 months of 2000; IBM had already demonstrated 56 bit/μm² at the end of 1999). At densities near 60 bit/μm², the linear bit size is ~43 nm, and the width of the written tracks is ~0.23 μm. Areal density in commercial drives is increasing steadily at a rate of nearly 100% per year [1], and consumer products above 60 bit/μm² are expected by 2002. These remarkable achievements are made possible only by a stream of technological innovations in media [2], write heads [3], read heads [4], and system electronics [5]. In this chapter, recent advances in spin-valve materials and spin-valve sensor architectures, low-resistance tunnel junctions, and tunnel-junction head architectures will be addressed.

  15. Smaller Footprint Drilling System for Deep and Hard Rock Environments; Feasibility of Ultra-High-Speed Diamond Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis; Alan Black; Homer Robertson

    2006-03-01

    The two phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility-of-concept research effort aimed at development that will ultimately result in the ability to reliably drill ''faster and deeper'' possibly with smaller, more mobile rigs. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energy and loads. The significance of the ultra-high rotary speed drilling system is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled ''Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling'' for the period starting 1 October 2004 through 30 September 2005. Additionally, research activity from 1 October 2005 through 28 February 2006 is included in this report: (1) TerraTek reviewed applicable literature and documentation and convened a project kick-off meeting with Industry Advisors in attendance. (2) TerraTek designed and planned Phase I bench scale experiments. Some difficulties continue in obtaining ultra-high speed motors. Improvements have been made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs have been provided to vendors for production. A more consistent product is required to minimize the differences in bit performance.
A test matrix for the final core bit testing program has been completed. (3) TerraTek is progressing through Task 3 ''Small-scale cutting performance tests''. (4) Significant testing has been performed on nine different rocks. (5) Bit balling has been observed on some rocks and seems to be more pronounced at higher rotational speeds. (6) Preliminary analysis of data has been completed and indicates that decreased specific energy is required as the rotational speed increases (Task 4). This data analysis has been used to direct the efforts of the final testing for Phase I (Task 5). (7) Technology transfer (Task 6) has begun with technical presentations to the industry (see Judzis).

  16. Computer-Aided Design for Built-In-Test (CADBIT) - BIT Library. Volume 2

    DTIC Science & Technology

    1989-10-01

    [OCR fragment of BIT-library element data sheets; the legible entries list BIT techniques including On-Board ROM, Multiple Input Shift Register (MISR), and Utilizing Redundancy, each with category, tutorial page, and data-type attributes.]

  17. Wear and performance: An experimental study on PDC bits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, O.; Azar, J.J.

    1997-07-01

    Real-time drilling data, gathered under full-scale conditions, was analyzed to determine the influence of cutter dullness on PDC-bit rate of penetration. It was found that while drilling in shale, the cutters` wearflat area was not a controlling factor on rate of penetration; however, when drilling in limestone, wearflat area significantly influenced PDC bit penetration performance. Similarly, the presence of diamond lips on PDC cutters was found to be unimportant while drilling in shale, but it greatly enhanced bit performance when drilling in limestone.

  18. SEM Analysis Techniques for LSI Microcircuits. Volume 2

    DTIC Science & Technology

    1980-08-01

    RADC-TR-80-250, Vol II (of two), Final Technical Report, August 1980. SEM Analysis Techniques for LSI Microcircuits, Volume II: 1024-bit static RAM, 4096-bit dynamic RAM (Si-gate MOS), 4096-bit dynamic RAM (I2L bipolar), and summary. [The remainder of the scanned abstract is an illegible export-control notice.]

  19. Compositional Verification with Abstraction, Learning, and SAT Solving

    DTIC Science & Technology

    2015-05-01

    arithmetic, and bit-vectors (currently via bit-blasting). The front-end is based on an existing tool called UFO [8], which converts C programs to Horn-SMT; the encoding uses only the theory of linear rational arithmetic. All experiments were carried out on an Intel Core 2 Quad

  20. Personal supercomputing by using transputer and Intel 80860 in plasma engineering

    NASA Astrophysics Data System (ADS)

    Ido, S.; Aoki, K.; Ishine, M.; Kubota, M.

    1992-09-01

    A transputer (T800) and the 64-bit RISC Intel 80860 (i860), added to a personal computer, can be used as accelerators. When 32-bit T800s in a parallel system or 64-bit i860s are used, scientific calculations are carried out several tens of times as fast as on commonly used 32-bit personal computers or UNIX workstations. Benchmark tests and examples of physical simulations using T800s and an i860 are reported.
