Sample records for biased random bit

  1. Variable-bias coin tossing

    NASA Astrophysics Data System (ADS)

    Colbeck, Roger; Kent, Adrian

    2006-03-01

    Alice is a charismatic quantum cryptographer who believes her parties are unmissable; Bob is a (relatively) glamorous string theorist who believes he is an indispensable guest. To prevent possibly traumatic collisions of self-perception and reality, their social code requires that decisions about invitation or acceptance be made via a cryptographically secure variable-bias coin toss (VBCT). This generates a shared random bit by the toss of a coin whose bias is secretly chosen, within a stipulated range, by one of the parties; the other party learns only the random bit. Thus one party can secretly influence the outcome, while both can save face by blaming any negative decisions on bad luck. We describe here some cryptographic VBCT protocols whose security is guaranteed by quantum theory and the impossibility of superluminal signaling, setting our results in the context of a general discussion of secure two-party computation. We also briefly discuss other cryptographic applications of VBCT.
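
    A minimal functional sketch of what a VBCT computes (assumptions: this models only the functionality, not the quantum protocol that secures it; the names and the stipulated range [0.4, 0.6] are illustrative):

      import random

      def vbct(bias, lo=0.4, hi=0.6):
          # One party secretly chooses `bias` within the stipulated range
          # [lo, hi]; both parties learn only the resulting bit.
          assert lo <= bias <= hi
          return 1 if random.random() < bias else 0

      outcome = vbct(bias=0.58)   # Alice nudges the odds; Bob sees just a bit.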

  2. Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation

    NASA Technical Reports Server (NTRS)

    Swift, G.

    2002-01-01

    JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128 Mbit device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modeste Nguimdo, Romain, E-mail: Romain.Nguimdo@vub.ac.be; Tchitnga, Robert; Woafo, Paul

    We numerically investigate the possibility of using a coupling to increase the complexity in the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that the complex behaviors generated in such coupled systems, together with the post-processing, are suitable for generating bit streams which pass all the NIST tests for randomness. The electronic circuit is built up by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such a coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) on such chaotic signals, each point being simultaneously converted into 16 bits (or 8 bits), we find that the binary sequence constructed by including the 10 (or 2) least significant bits passes statistical tests of randomness, meaning that bit streams with random properties can be achieved with an overall bit rate up to 10 × 100 Mb/s = 1 Gbit/s (or 2 × 100 Mb/s = 200 Mbit/s). Moreover, by varying the bias voltages, we also investigate the parameter range for which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap as compared to optical and electro-optical systems.
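
    The post-processing described above (periodic sampling, multi-bit digitization, keeping only the least significant bits) can be sketched as follows; the sample source is a placeholder, and treating samples as normalized to [0, 1) is an assumption for illustration:

      def lsb_extract(samples, adc_bits=16, keep=10):
          # `samples` are assumed normalized to [0, 1); each is quantized to
          # an adc_bits-wide code and only the `keep` least significant bits
          # are appended to the output bit stream.
          bits = []
          for s in samples:
              code = int(s * (1 << adc_bits))
              for k in range(keep):
                  bits.append((code >> k) & 1)
          return bits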

  4. Unbiased All-Optical Random-Number Generator

    NASA Astrophysics Data System (ADS)

    Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja

    2017-10-01

    The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.

  5. Experimental loss-tolerant quantum coin flipping

    PubMed Central

    Berlín, Guido; Brassard, Gilles; Bussières, Félix; Godbout, Nicolas; Slater, Joshua A.; Tittel, Wolfgang

    2011-01-01

    Coin flipping is a cryptographic primitive in which two distrustful parties wish to generate a random bit to choose between two alternatives. This task is impossible to realize when it relies solely on the asynchronous exchange of classical bits: one dishonest player has complete control over the final outcome. It is only when coin flipping is supplemented with quantum communication that this problem can be alleviated, although partial bias remains. Unfortunately, practical systems are subject to loss of quantum data, which allows a cheater to force a bias that is complete or arbitrarily close to complete in all previous protocols and implementations. Here we report on the first experimental demonstration of a quantum coin-flipping protocol for which loss cannot be exploited to cheat better. By eliminating the problem of loss, which is unavoidable in any realistic setting, quantum coin flipping takes a significant step towards real-world applications of quantum communication. PMID:22127057

  6. Investigation of mode partition noise in Fabry-Perot laser diode

    NASA Astrophysics Data System (ADS)

    Guo, Qingyi; Deng, Lanxin; Mu, Jianwei; Li, Xun; Huang, Wei-Ping

    2014-09-01

    The passive optical network (PON) is considered the most appealing access network architecture in terms of cost-effectiveness, bandwidth management flexibility, scalability and durability. To further reduce the cost per subscriber, a Fabry-Perot (FP) laser diode is preferred as the transmitter at the optical network units (ONUs) because of its lower cost compared to a distributed feedback (DFB) laser diode. However, the mode partition noise (MPN) associated with the multi-longitudinal-mode FP laser diode becomes the limiting factor in the network. This paper studies the MPN characteristics of the FP laser diode using time-domain simulation of a noise-driven multi-mode laser rate equation. The probability density functions are calculated for each longitudinal mode. The paper focuses on the investigation of the k-factor, which is a simple yet important measure of the noise power, but is usually taken as a fitted or assumed value in penalty calculations. In this paper, the sources of the k-factor are studied with simulation, including the intrinsic source of the laser Langevin noise and the extrinsic source of the bit pattern. The photon waveforms are shown under four simulation conditions: regular or random bit pattern, with or without Langevin noise. The k-factors contributed by those sources are studied over a variety of bias and modulation currents. Simulation results, illustrated in figures, show that the contribution of Langevin noise to the k-factor is larger than that of the random bit pattern, and is more dominant at lower bias current or higher modulation current.

  7. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    PubMed

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic property of optoelectronic random bit generators (RBGs) based on laser chaos is experimentally analyzed from two aspects: the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical-feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, in addition to passing the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together, these give mathematically provable evidence that the ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
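
    A rough illustration of the Brownian-motion view (not the authors' test procedure): map bits to ±1 steps and check that the endpoint spread of the resulting walk follows the central-limit scaling of sqrt(n):

      import random, statistics

      def walk_endpoints(nbits=10_000, trials=500):
          # Sum nbits random +/-1 steps, repeated `trials` times.
          return [sum(1 if random.getrandbits(1) else -1
                      for _ in range(nbits))
                  for _ in range(trials)]

      ends = walk_endpoints()
      # For unbiased, independent bits the sample standard deviation of the
      # endpoints should sit near sqrt(10_000) = 100.
      print(statistics.stdev(ends))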

  8. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronized physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system, and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.

  9. 10 Gb/s operation of photonic crystal silicon optical modulators.

    PubMed

    Nguyen, Hong C; Sakai, Yuya; Shinkawa, Mizuki; Ishikura, Norihiro; Baba, Toshihiko

    2011-07-04

    We report the first experimental demonstration of 10 Gb/s modulation in a photonic crystal silicon optical modulator. The device consists of a 200 μm-long SiO2-clad photonic crystal waveguide, with an embedded p-n junction, incorporated into an asymmetric Mach-Zehnder interferometer. The device is integrated on an SOI chip and fabricated by CMOS-compatible processes. With the bias voltage set at 0 V, we measure V_πL < 0.056 V·cm. Optical modulation is demonstrated by electrically driving the device with a 2^31 - 1 bit non-return-to-zero pseudo-random bit sequence signal. An open eye pattern is observed at bit rates of 10 Gb/s and 2 Gb/s, with and without pre-emphasis of the drive signal, respectively.
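
    For reference, the 2^31 - 1 pattern is a standard maximal-length LFSR sequence; a common software construction (generator polynomial x^31 + x^28 + 1, as specified in ITU-T O.150; this is background knowledge, not taken from the record) is:

      def prbs31_bits(n, seed=0x7FFFFFFF):
          # Fibonacci LFSR with taps at bits 31 and 28; any nonzero 31-bit
          # seed yields the full 2**31 - 1 bit sequence before repeating.
          state = seed & 0x7FFFFFFF
          assert state != 0
          out = []
          for _ in range(n):
              newbit = ((state >> 30) ^ (state >> 27)) & 1
              state = ((state << 1) | newbit) & 0x7FFFFFFF
              out.append(newbit)
          return out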

  10. Analysis of TID process, geometry, and bias condition dependence in 14-nm FinFETs and implications for RF and SRAM performance

    DOE PAGES

    King, M. P.; Wu, X.; Eller, Manfred; ...

    2016-12-07

    Here, total ionizing dose results are provided, showing the effects of different threshold-adjust implant processes and irradiation bias conditions in 14-nm FinFETs. Minimal radiation-induced threshold voltage shift across a variety of transistor types is observed. Off-state leakage current of nMOSFET transistors exhibits a strong gate bias dependence, indicating that electrostatic gate control of the sub-fin region and the corresponding parasitic conduction path are the largest concern for radiation hardness in FinFET technology. The high-Vth transistors exhibit the best irradiation performance across all bias conditions, showing a reasonably small change in off-state leakage current and Vth, while the low-Vth transistors exhibit a larger change in off-state leakage current. The “worst-case” bias condition during irradiation for both pull-down and pass-gate nMOSFETs in static random access memory is determined to be the on-state (Vgs = Vdd). We find the nMOSFET pull-down and pass-gate transistors of the SRAM bit-cell show less radiation-induced degradation, due to transistor geometry and channel doping differences, than the low-Vth transistor. Near-threshold operation is presented as a methodology for reducing radiation-induced increases in off-state device leakage current. In a 14-nm FinFET technology, the modeling indicates devices with high channel-stop doping show the most robust response to TID, allowing stable operation of ring oscillators and the SRAM bit-cell with minimal shift in critical operating characteristics.

  11. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically, with the intensity being sampled periodically. The samples are then converted into random bits by simple postprocessing of self-differencing and bit selection. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved, with randomness examined by the National Institute of Standards and Technology test suite.
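
    The self-differencing and bit-selection step can be sketched as below; treating the samples as 8-bit ADC codes and using wraparound subtraction are assumptions for illustration, with the 5-LSB choice taken from the abstract:

      def self_difference_bits(codes, delay, adc_bits=8, keep=5):
          # Difference each ADC code with one `delay` samples earlier
          # (modulo the ADC range), then keep the `keep` LSBs of each result.
          mask = (1 << adc_bits) - 1
          bits = []
          for i in range(delay, len(codes)):
              d = (codes[i] - codes[i - delay]) & mask
              for k in range(keep):
                  bits.append((d >> k) & 1)
          return bits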

  12. Superparamagnetic perpendicular magnetic tunnel junctions for true random number generators

    NASA Astrophysics Data System (ADS)

    Parks, Bradley; Bapna, Mukund; Igbokwe, Julianne; Almasi, Hamid; Wang, Weigang; Majetich, Sara A.

    2018-05-01

    Superparamagnetic perpendicular magnetic tunnel junctions are fabricated and analyzed for use in random number generators. Time-resolved resistance measurements are used as streams of bits in statistical tests for randomness. Voltage control of the thermal stability enables tuning the average speed of random bit generation up to 70 kHz in a 60 nm diameter device. In its most efficient operating mode, the device generates random bits at an energy cost of 600 fJ/bit. A narrow range of magnetic field tunes the probability of a given state from 0 to 1, offering a means of probabilistic computing.

  13. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    PubMed

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    Chaotic external-cavity semiconductor lasers (ECLs) are a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using physical broadband white chaos generated by optical heterodyning of two ECLs as the entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectrum efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.

  14. 640-Gbit/s fast physical random number generation using a broadband chaotic semiconductor laser

    NASA Astrophysics Data System (ADS)

    Zhang, Limeng; Pan, Biwei; Chen, Guangcan; Guo, Lu; Lu, Dan; Zhao, Lingjuan; Wang, Wei

    2017-04-01

    An ultra-fast physical random number generator is demonstrated utilizing a broadband chaotic source based on a photonic integrated device, with a simple post-processing method. The compact chaotic source is implemented using a monolithic integrated dual-mode amplified feedback laser (AFL) with self-injection, where a robust chaotic signal with RF frequency coverage above 50 GHz and flatness of ±3.6 dB is generated. By retaining the 4 least significant bits (LSBs) from the 8-bit digitization of the chaotic waveform, random sequences with a bit rate up to 640 Gbit/s (160 GS/s × 4 bits) are realized. The generated random bits have passed all fifteen NIST statistical tests (NIST SP800-22), indicating their randomness for practical applications.

  15. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
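
    As a point of reference, the simplest member of the NIST SP 800-22 suite used above is the frequency (monobit) test, which reduces to a single line of arithmetic:

      import math

      def monobit_pvalue(bits):
          # NIST SP 800-22 frequency test: the sequence passes at the 1%
          # significance level if the returned p-value exceeds 0.01.
          n = len(bits)
          s_n = sum(1 if b else -1 for b in bits)
          return math.erfc(abs(s_n) / math.sqrt(2 * n))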

  16. Implementing Bayesian networks with embedded stochastic MRAM

    NASA Astrophysics Data System (ADS)

    Faria, Rafatul; Camsari, Kerem Y.; Datta, Supriyo

    2018-04-01

    Magnetic tunnel junctions (MTJs) with low-barrier magnets have been used to implement random number generators (RNGs), and it has recently been shown that such an MTJ connected to the drain of a conventional transistor provides a three-terminal tunable RNG, or p-bit. In this letter we show how this p-bit can be used to build a p-circuit that emulates a Bayesian network (BN), such that the correlations in real-world variables can be obtained from electrical measurements on the corresponding circuit nodes. The p-circuit design proceeds in two steps: the BN is first translated into a behavioral model, called Probabilistic Spin Logic (PSL), defined by dimensionless biasing (h) and interconnection (J) coefficients, which are then translated into electronic circuit elements. As a benchmark example, we mimic a family tree of three generations and show that the genetic relatedness calculated from a SPICE-compatible circuit simulator matches well-known results.
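
    The PSL behavioral model mentioned above is commonly written as a stochastic update over bipolar states m_i in {-1, +1}; a software sketch (the update rule follows the p-bit literature; the network itself is a placeholder):

      import math, random

      def pbit_sweep(m, h, J):
          # One sweep of the PSL update m_i = sgn(tanh(I_i) - r), with
          # I_i = h_i + sum_j J[i][j] * m_j and r uniform on (-1, 1).
          n = len(m)
          for i in random.sample(range(n), n):   # random update order
              I = h[i] + sum(J[i][j] * m[j] for j in range(n))
              m[i] = 1 if math.tanh(I) > random.uniform(-1.0, 1.0) else -1
          return m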

  17. Secure self-calibrating quantum random-bit generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiorentino, M.; Santori, C.; Spillane, S. M.

    2007-03-15

    Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
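
    Min-entropy, the quantity bounded by the tomographic method, directly fixes how many nearly uniform bits may be distilled per sample; a one-line computation (the probabilities below are illustrative):

      import math

      def min_entropy(probs):
          # H_min = -log2(max_i p_i): extractable bits per sample.
          return -math.log2(max(probs))

      # A bit with Pr[0] = 0.6 carries at most -log2(0.6) ~ 0.74 random bits.
      print(min_entropy([0.6, 0.4]))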

  18. Dynamic SVL and body bias for low leakage power and high performance in CMOS digital circuits

    NASA Astrophysics Data System (ADS)

    Deshmukh, Jyoti; Khare, Kavita

    2012-12-01

    In this article, a new complementary metal oxide semiconductor design scheme called dynamic self-controllable voltage level (DSVL) is proposed. In the proposed scheme, leakage power is controlled by dynamically disconnecting the supply to inactive blocks and adjusting the body bias to further limit leakage and maintain performance. Leakage power measurements at 1.8 V and 75°C demonstrate power reductions of 59.4% for a 1-bit full adder and 43.0% for a chain of four inverters using the SVL circuit as a power switch. Furthermore, we achieve leakage power reductions of 94.7% for a 1-bit full adder and 91.8% for a chain of four inverters using dynamic body bias. A forward body bias of 0.45 V applied in active mode improves the maximum operating frequency by 16% for a 1-bit full adder and 5.55% for a chain of inverters. Analysis shows that additional benefits of using DSVL and body bias include high performance, low leakage power consumption in sleep mode, single-threshold implementation and state retention even in standby mode.

  19. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices.

  1. Efficient and robust quantum random number generation by photon number detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Applegate, M. J. (Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE); Thomas, O.

    2015-08-17

    We present an efficient and robust quantum random number generator based upon high-rate room-temperature photon number detection. We employ an electric-field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution, to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate, which we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99%, corresponding to 1.97 bits per detected photon number and yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.

  2. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    Electronic memory system of 32-bit words corrects bit errors caused by some common types of failure - even failure of an entire 4-bit-wide random-access-memory (RAM) chip. Detects failure of two such chips, so user is warned that output of memory may contain errors. Includes eight 4-bit-wide DRAMs configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM contributes only 1 bit to each 8-bit word.
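
    A sketch of the bit-steering scheme as described (the packing order within each chip is an assumption): word w takes its bit p from chip p, so losing an entire chip corrupts at most one bit per 8-bit word, which a single-error-correcting code can repair.

      def assemble_words(chips):
          # `chips` holds eight 4-bit values, one per DRAM; returns the four
          # parallel 8-bit words, each built from one bit of every chip.
          words = []
          for w in range(4):
              word = 0
              for p in range(8):
                  word |= ((chips[p] >> w) & 1) << p
              words.append(word)
          return words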

  3. Fuel management optimization using genetic algorithms and expert knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1996-09-01

    The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
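
    The record does not spell out how the extra bits reduce genotype bias; one standard mechanism, assumed here purely for illustration, is shrinking the modulo bias that appears when a bit field selects among a non-power-of-two number of options:

      def pick_option(bits, n_options):
          # Interpret `bits` as an integer and reduce modulo n_options.
          # With exactly ceil(log2(n_options)) bits, some options are chosen
          # up to twice as often; each extra bit roughly halves that skew.
          value = 0
          for b in bits:
              value = (value << 1) | b
          return value % n_options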

  4. Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis

    PubMed Central

    Hensen, B.; Kalb, N.; Blok, M. S.; Dréau, A. E.; Reiserer, A.; Vermeulen, R. F. L.; Schouten, R. N.; Markham, M.; Twitchen, D. J.; Goodenough, K.; Elkouss, D.; Wehner, S.; Taminiau, T. H.; Hanson, R.

    2016-01-01

    The recently reported violation of a Bell inequality using entangled electronic spins in diamonds (Hensen et al., Nature 526, 682–686) provided the first loophole-free evidence against local-realist theories of nature. Here we report on data from a second Bell experiment using the same experimental setup with minor modifications. We find a violation of the CHSH-Bell inequality of 2.35 ± 0.18, in agreement with the first run, yielding an overall value of S = 2.38 ± 0.14. We calculate the resulting P-values of the second experiment and of the combined Bell tests. We provide an additional analysis of the distribution of settings choices recorded during the two tests, finding that the observed distributions are consistent with uniform settings for both tests. Finally, we analytically study the effect of particular models of random number generator (RNG) imperfection on our hypothesis test. We find that the winning probability per trial in the CHSH game can be bounded knowing only the mean of the RNG bias. This implies that our experimental result is robust for any model underlying the estimated average RNG bias, for random bits produced up to 690 ns too early by the random number generator. PMID:27509823

  5. The Random Telegraph Signal Behavior of Intermittently Stuck Bits in SDRAMs

    NASA Astrophysics Data System (ADS)

    Chugg, Andrew Michael; Burnell, Andrew J.; Duncan, Peter H.; Parker, Sarah; Ward, Jonathan J.

    2009-12-01

    This paper reports behavior analogous to the Random Telegraph Signal (RTS) seen in the leakage currents from radiation-induced hot pixels in Charge Coupled Devices (CCDs), but in the context of stuck bits in Synchronous Dynamic Random Access Memories (SDRAMs). Our analysis suggests that pseudo-random sticking and unsticking of the SDRAM bits is due to thermally induced fluctuations in leakage current through displacement damage complexes in depletion regions that were created by high-energy neutron and proton interactions. It is shown that the number of observed stuck bits increases exponentially with temperature, due to the general increase in the leakage currents through the damage centers with temperature. Nevertheless, some stuck bits are seen to pseudo-randomly stick and unstick in the context of a continuously rising trend of temperature, thus demonstrating that their damage centers can exist in multiple widely spaced, discrete levels of leakage current, which is highly consistent with RTS. This implies that these intermittently stuck bits (ISBs) are a displacement damage phenomenon and are unrelated to microdose issues, which is confirmed by the observation that they also occur in unbiased irradiation. Finally, we note that the observed variations in the periodicity of the sticking and unsticking behavior on several timescales are most readily explained by multiple leakage current pathways through displacement damage complexes spontaneously and independently opening and closing under the influence of thermal vibrations.

  6. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as a sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical-world empirical extractors as measured by standardized tests.
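
    For contrast with the extractors studied above, the classical von Neumann extractor handles only independent coin flips with a fixed unknown bias, yet it shows the basic idea of trading throughput for uniformity:

      def von_neumann(bits):
          # Pair up the bits: 01 -> 0, 10 -> 1, 00 and 11 -> discard.
          # Output is exactly unbiased if inputs are i.i.d. with any fixed bias.
          return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]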

  7. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    NASA Astrophysics Data System (ADS)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-09-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method.

  8. Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode.

    PubMed

    Abellán, C; Amaya, W; Jofre, M; Curty, M; Acín, A; Capmany, J; Pruneri, V; Mitchell, M W

    2014-01-27

    We demonstrate a high bit-rate quantum random number generator by interferometric detection of phase diffusion in a gain-switched DFB laser diode. Gain switching at few-GHz frequencies produces a train of bright pulses with nearly equal amplitudes and random phases. An unbalanced Mach-Zehnder interferometer is used to interfere subsequent pulses and thereby generate strong random-amplitude pulses, which are detected and digitized to produce a high-rate random bit string. Using established models of semiconductor laser field dynamics, we predict a regime of high visibility interference and nearly complete vacuum-fluctuation-induced phase diffusion between pulses. These are confirmed by measurement of pulse power statistics at the output of the interferometer. Using a 5.825 GHz excitation rate and 14-bit digitization, we observe 43 Gbps quantum randomness generation.

  9. Alternation blindness in the representation of binary sequences.

    PubMed

    Yu, Ru Qi; Osherson, Daniel; Zhao, Jiaying

    2018-03-01

    Binary information is prevalent in the environment and contains 2 distinct outcomes. Binary sequences consist of a mixture of alternation and repetition. Understanding how people perceive such sequences would contribute to a general theory of information processing. In this study, we examined how people process alternation and repetition in binary sequences. Across 4 paradigms involving estimation, working memory, change detection, and visual search, we found that the number of alternations is underestimated compared with repetitions (Experiment 1). Moreover, recall for binary sequences deteriorates as the sequence alternates more (Experiment 2). Changes in bits are also harder to detect as the sequence alternates more (Experiment 3). Finally, visual targets superimposed on bits of a binary sequence take longer to process as alternation increases (Experiment 4). Overall, our results indicate that compared with repetition, alternation in a binary sequence is less salient in the sense of requiring more attention for successful encoding. The current study thus reveals the cognitive constraints in the representation of alternation and provides a new explanation for the overalternation bias in randomness perception. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Hybrid spread spectrum radio system

    DOEpatents

    Smith, Stephen F.; Dress, William B.

    2010-02-02

    Systems and methods are described for hybrid spread spectrum radio systems. A method includes modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control an amplification circuit that provides a gain to the signal. Another method includes: modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control a fast hopping frequency synthesizer; and fast frequency hopping the signal with the fast hopping frequency synthesizer, wherein multiple frequency hops occur within a single data-bit time.

  11. Magnet/Hall-Effect Random-Access Memory

    NASA Technical Reports Server (NTRS)

    Wu, Jiin-Chuan; Stadler, Henry L.; Katti, Romney R.

    1991-01-01

    In proposed magnet/Hall-effect random-access memory (MHRAM), bits of data stored magnetically in Permalloy (or equivalent) film memory elements and read out by using Hall-effect sensors to detect magnetization. Value of each bit represented by polarity of magnetization. Retains data for indefinite time or until data rewritten. Speed of Hall-effect sensors in MHRAM results in readout times of about 100 nanoseconds. Other characteristics include high immunity to ionizing radiation and storage densities of order 10^6 bits/cm^2 or more.

  12. Testability Design Rating System: Testability Handbook. Volume 1

    DTIC Science & Technology

    1992-02-01

    Indexed fragments (table of contents and acronym list): 4.7.5 Summary of False BIT Alarms (FBA); 4.7.6 Smart BIT Technique; PCB Printed Circuit Board; PGA Pin Grid Array; PLA Programmable Logic Array; PLD Programmable Logic Device; PN Pseudo-Random Number; PREDICT Probabilistic Estimation of ... "Smart" BIT (reference: RADC-TR-85-198) is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory

  13. Multi-floor cascading ferroelectric nanostructures: multiple data writing-based multi-level non-volatile memory devices

    NASA Astrophysics Data System (ADS)

    Hyun, Seung; Kwon, Owoong; Lee, Bom-Yi; Seol, Daehee; Park, Beomjin; Lee, Jae Yong; Lee, Ju Hyun; Kim, Yunseok; Kim, Jin Kon

    2016-01-01

    Multiple data writing-based multi-level non-volatile memory has gained strong attention for next-generation memory devices to quickly accommodate an extremely large number of data bits because it is capable of storing multiple data bits in a single memory cell at once. However, all previously reported devices have failed to store a large number of data bits due to the macroscale cell size and have not allowed fast access to the stored data due to slow single data writing. Here, we introduce a novel three-dimensional multi-floor cascading polymeric ferroelectric nanostructure, successfully operating as an individual cell. In one cell, each floor has its own piezoresponse and the piezoresponse of one floor can be modulated by the bias voltage applied to the other floor, which means simultaneously written data bits in both floors can be identified. This could achieve multi-level memory through a multiple data writing process.

  14. Critical side channel effects in random bit generation with multiple semiconductor lasers in a polarization-based quantum key distribution system.

    PubMed

    Ko, Heasin; Choi, Byung-Seok; Choe, Joong-Seon; Kim, Kap-Joong; Kim, Jong-Hoi; Youn, Chun Ju

    2017-08-21

    Most polarization-based BB84 quantum key distribution (QKD) systems utilize multiple lasers to generate one of four polarization quantum states randomly. However, random bit generation with multiple lasers can potentially open critical side channels that significantly endanger the security of QKD systems. In this paper, we show unnoticed side channels of temporal disparity and intensity fluctuation, which possibly exist in the operation of multiple semiconductor laser diodes. Experimental results show that the side channels can enormously degrade the security performance of QKD systems. An important system issue for the improvement of the quantum bit error rate (QBER), related to the laser driving conditions, is further addressed with experimental results.

  15. The cosmic microwave background radiation power spectrum as a random bit generator for symmetric- and asymmetric-key cryptography.

    PubMed

    Lee, Jeffrey S; Cleaver, Gerald B

    2017-10-01

    In this note, the Cosmic Microwave Background (CMB) Radiation is shown to be capable of functioning as a Random Bit Generator, constituting an effectively infinite supply of truly random one-time pad values of arbitrary length. It is further argued that the CMB power spectrum potentially conforms to the FIPS 140-2 standard. Additionally, its applicability to the generation of an (n × n) random key matrix for a Vernam cipher is established.
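
    The Vernam cipher mentioned here is the XOR one-time pad; a minimal reference implementation (the pad must be truly random, at least as long as the message, and never reused):

      def vernam(data: bytes, pad: bytes) -> bytes:
          # XOR with the pad encrypts; applying the same pad again decrypts.
          assert len(pad) >= len(data), "pad must cover the whole message"
          return bytes(d ^ p for d, p in zip(data, pad))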

  16. A Study of a Standard BIT Circuit.

    DTIC Science & Technology

    1977-02-01

    Indexed fragments (table of contents): Recommended BIT Approaches for QED Modules and Application of the Analytic Measures; 4.1 Built-In-Test for Memory Class Modules; 4.1.1 Random Access...Implementation; 4.1.5.5 Critical Parameters; 4.1.5.6 QED Module Test Equipment Requirements; 4.1.6 Application of Analytic Measures to the...Microprocessor BIT Techniques; 4.2.9 Application of Analytic Measures to the Recommended BIT Approaches; 4.2.10 Process Class BIT by Partial

  17. Twin-bit via resistive random access memory in 16 nm FinFET logic technologies

    NASA Astrophysics Data System (ADS)

    Shih, Yi-Hong; Hsu, Meng-Yin; King, Ya-Chin; Lin, Chrong Jung

    2018-04-01

    A via resistive random access memory (RRAM) cell fully compatible with the standard CMOS logic process has been successfully demonstrated for high-density logic nonvolatile memory (NVM) modules in advanced FinFET circuits. In this new cell, the transition metal layers are formed on both sides of a via, giving two storage bits per via. In addition to its compact cell area (1T + 14 nm × 32 nm), the twin-bit via RRAM cell features a low operation voltage, a large read window, good data retention, and excellent cycling capability. As fine alignments between mask layers become possible, the twin-bit via RRAM cell is expected to be highly scalable in advanced FinFET technology.

  18. Double acting bit holder

    DOEpatents

    Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.

    1994-01-01

    A double acting bit holder that permits bits held in it to be resharpened during cutting action, increasing energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to the cutter head of an excavation machine and having an integral extension therefrom with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base having a pin shaft integrally extending therefrom that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate the shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during cutting action of a bit fixed in a turret, allowing front, mid and back positions of the bit during cutting to lessen the creation of small chips and to resharpen the bit during excavation use.

  1. Bias estimation for the Landsat 8 operational land imager

    USGS Publications Warehouse

    Morfitt, Ron; Vanderwerff, Kelly

    2011-01-01

    The Operational Land Imager (OLI) is a pushbroom sensor that will be a part of the Landsat Data Continuity Mission (LDCM). This instrument is the latest in the line of Landsat imagers, and will continue to expand the archive of calibrated earth imagery. An important step in producing a calibrated image from instrument data is accurately accounting for the bias of the imaging detectors. Bias variability is one factor that contributes to error in bias estimation for OLI. Typically, the bias is simply estimated by averaging dark data on a per-detector basis. However, data acquired during OLI pre-launch testing exhibited bias variation that correlated well with the variation in concurrently collected data from a special set of detectors on the focal plane. These detectors are sensitive to certain electronic effects but not directly to incoming electromagnetic radiation. A method of using data from these special detectors to estimate the bias of the imaging detectors was developed, but found not to be beneficial at typical radiance levels as the detectors respond slightly when the focal plane is illuminated. In addition to bias variability, a systematic bias error is introduced by the truncation performed by the spacecraft of the 14-bit instrument data to 12-bit integers. This systematic error can be estimated and removed on average, but the per pixel quantization error remains. This paper describes the variability of the bias, the effectiveness of a new approach to estimate and compensate for it, as well as the errors due to truncation and how they are reduced.
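
    Assuming the on-board truncation simply drops the two least significant bits (an assumption; the record does not give the exact operation), the systematic part of the error is easy to see: the discarded value, code % 4, averages (0 + 1 + 2 + 3) / 4 = 1.5 fourteen-bit counts, which can be subtracted on average while the 0-3 count per-pixel spread remains.

      def truncate_14_to_12(code14):
          # Drop the 2 LSBs: systematic mean error of 1.5 counts (14-bit LSBs),
          # per-pixel quantization error uniform over 0..3 counts.
          return code14 >> 2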

  2. Conceptual design and feasibility evaluation model of a 10^8-bit oligatomic mass memory. Volume 1: Conceptual design

    NASA Technical Reports Server (NTRS)

    Recksiedler, A. L.; Lutes, C. L.

    1972-01-01

    The oligatomic (mirror) thin-film memory technology is a suitable candidate for general purpose spaceborne applications in the post-1975 time frame. Capacities of around 10^8 bits can be reliably implemented with systems designed around a 335-million-bit module. The recommended module size was determined following an investigation of implementation sizes ranging from 8,000,000 to 100,000,000 bits per module. Cost, power, weight, volume, reliability, maintainability and speed were investigated. The memory includes random access, NDRO, SEC-DED, nonvolatility, and dual-interface characteristics. The applications most suitable for the technology are those involving a large capacity with high speed (no latency), nonvolatility, and random accessing.

  3. Experimental nonlocality-based randomness generation with nonprojective measurements

    NASA Astrophysics Data System (ADS)

    Gómez, S.; Mattar, A.; Gómez, E. S.; Cavalcanti, D.; Farías, O. Jiménez; Acín, A.; Lima, G.

    2018-04-01

    We report on an optical setup generating more than one bit of randomness from one entangled bit (i.e., a maximally entangled state of two qubits). The amount of randomness is certified through the observation of Bell nonlocal correlations. To attain this result we implemented a high-purity entanglement source and a nonprojective three-outcome measurement. Our implementation achieves a gain of 27% of randomness as compared with the standard methods using projective measurements. Additionally, we estimate the amount of randomness certified in a one-sided device-independent scenario, through the observation of Einstein-Podolsky-Rosen steering. Our results prove that nonprojective quantum measurements allow extending the limits for nonlocality-based certified randomness generation using current technology.

  4. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N) P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  5. Secret Bit Transmission Using a Random Deal of Cards

    DTIC Science & Technology

    1990-05-01

    The conversation between sender and receiver is public and is heard by all. A correct protocol always succeeds in transmitting the secret bit, and the other player(s), who receive the remaining cards and are assumed to have unlimited computing power, gain no information whatsoever about the value of the secret bit. In other words, their probability of correctly guessing the secret bit is exactly the same after listening to a run of the protocol as it was before.

  6. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map, using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have used a delay in the generation of the time series. When these new series are mapped as x_n against x_(n+1), they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
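
    A generic sketch of a lag-based logistic-map PRBG (the parameter values, warm-up, and threshold quantizer are illustrative choices, not those of the paper):

      def logistic_lag_bits(n, r=3.99, x0=0.123456, lag=3, warmup=500):
          # Iterate x <- r*x*(1-x); after a warm-up, emit one thresholded
          # bit every `lag` iterations to hide successive map values.
          x = x0
          bits = []
          for i in range(warmup + n * lag):
              x = r * x * (1.0 - x)
              if i >= warmup and (i - warmup) % lag == 0:
                  bits.append(1 if x >= 0.5 else 0)
          return bits[:n]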

  7. Effects of plastic bits on the condition and behaviour of captive-reared pheasants.

    PubMed

    Butler, D A; Davis, C

    2010-03-27

    Between 2005 and 2007, data were collected from game farms across England and Wales to examine the effects of the use of bits on the physiological condition and behaviour of pheasants. On each site, two pheasant pens kept in the same conditions were randomly allocated to either use bits or not. The behaviour and physiological conditions of pheasants in each treatment pen were assessed on the day of bitting and weekly thereafter until release. Detailed records of feed usage, medications and mortality were also kept. Bits halved the number of acts of bird-on-bird pecking, but they doubled the incidence of headshaking and scratching. Bits caused nostril inflammation and bill deformities in some birds, particularly after seven weeks of age. In all weeks after bitting, feather condition was poorer in non-bitted pheasants than in those fitted with bits. Less than 3 per cent of bitted birds had damaged skin, but in the non-bitted pens this figure increased over time to 23 per cent four weeks later. Feed use and mortality did not differ between bitted and non-bitted birds.

  8. Development of a high capacity bubble domain memory element and related epitaxial garnet materials for application in spacecraft data recorders. Item 1: Development of a high capacity memory element

    NASA Technical Reports Server (NTRS)

    Besser, P. J.

    1977-01-01

    Several versions of the 100K-bit chip, which is configured as a single serial loop, were designed, fabricated and evaluated. Design and process modifications were introduced into each succeeding version to increase device performance and yield. At an intrinsic field rate of 150 kHz, the final design operates from -10 °C to +60 °C with typical bias margins of 12 and 8 percent, respectively, for continuous operation. Asynchronous operation with first-bit detection on start-up produces essentially the same margins over the temperature range. Cost projections made from fabrication yield runs on the 100K-bit devices indicate that the memory element cost will be less than 10 millicents/bit in volume production.

  9. Tailored Codes for Small Quantum Memories

    NASA Astrophysics Data System (ADS)

    Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.

    2017-12-01

    We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison with the observed error model in a recent seven-qubit trapped-ion experiment.

  10. Deterministic switching of a magnetoelastic single-domain nano-ellipse using bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Cheng-Yen; Sepulveda, Abdon; Keller, Scott

    2016-03-21

    In this paper, a fully coupled analytical model of elastodynamics and micromagnetics is used to study the switching energies of a magnetoelastic bit under voltage-induced mechanical bending. The bit consists of a single-domain magnetoelastic nano-ellipse deposited on a piezoelectric thin film (500 nm) attached to a thick substrate (0.5 mm), with patterned electrodes underneath the nano-dot. A voltage applied to the electrodes produces out-of-plane deformation, and the bending moments induced in the magnetoelastic bit modify the magnetic anisotropy. To minimize the energy, two design stages are used. In the first stage, the geometry and bias field H_b of the bit are optimized to minimize the strain energy required to rotate between two stable states. In the second stage, the bit's geometry is fixed, and the electrode position and control mechanism are optimized. The electrical energy input is about 200 aJ, which is approximately two orders of magnitude lower than spin-transfer-torque approaches.

  11. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
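
    A minimal sketch of the flag-separation idea (a toy serialization of our own design, not the patent's on-disk format): one flag bit per token marks literal versus pointer, flags accumulate in a separate buffer during compression and are appended after the byte-aligned token stream, so decompression touches individual bits only when reading flags:

        # Toy LZSS-style container with a trailing flag-bit block (illustrative).
        import struct

        def pack_tokens(tokens):
            # tokens: ('lit', byte) or ('ptr', offset, length), as a match finder would emit
            flags, body = [], bytearray()
            for tok in tokens:
                if tok[0] == 'lit':
                    flags.append(0)
                    body.append(tok[1])
                else:
                    flags.append(1)
                    body += struct.pack('<H', (tok[1] << 4) | tok[2])  # 12-bit offset, 4-bit length
            packed = bytearray()
            for i in range(0, len(flags), 8):        # pack buffered flags, MSB first
                b = 0
                for j, bit in enumerate(flags[i:i + 8]):
                    b |= bit << (7 - j)
                packed.append(b)
            return struct.pack('<II', len(tokens), len(body)) + bytes(body) + bytes(packed)

        def unpack_tokens(blob):
            n_tok, n_body = struct.unpack_from('<II', blob, 0)
            body = blob[8:8 + n_body]
            flag_bytes = blob[8 + n_body:]
            out, pos = [], 0
            for i in range(n_tok):
                flag = (flag_bytes[i // 8] >> (7 - i % 8)) & 1  # the only bit twiddling needed
                if flag == 0:
                    out.append(('lit', body[pos])); pos += 1
                else:
                    (word,) = struct.unpack_from('<H', body, pos); pos += 2
                    out.append(('ptr', word >> 4, word & 0xF))
            return out

        toks = [('lit', 104), ('lit', 105), ('ptr', 2, 4)]
        assert unpack_tokens(pack_tokens(toks)) == toks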

  12. Randomizer for High Data Rates

    NASA Technical Reports Server (NTRS)

    Garon, Howard; Sank, Victor J.

    2018-01-01

    NASA as well as a number of other space agencies now recognize that the currently recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 Maximal Length Sequence (MLS) are required in order to fully cover a channel access data unit (CADU), spectral problems in the form of elevated spurious discretes (spurs) appear. Originally the randomizer was called a bit transition generator (BTG) precisely because it was thought that its primary value was to ensure sufficient bit transitions to allow the bit/symbol synchronizer to lock and remain locked. We, NASA, have shown that the old BTG concept is a limited view of the real value of the randomizer sequence, and that the randomizer also aids in signal acquisition as well as minimizing the potential for false decoder lock. Under the guidelines considered here there are multiple maximal length sequences under GF(2) which appear attractive in this application. Although there may be mitigating reasons why another MLS could be selected, one sequence in particular possesses a combination of desired properties that sets it apart from the others.
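
    To make the MLS idea concrete, here is a sketch of a Fibonacci LFSR built from the polynomial x^8 + x^7 + x^5 + x^3 + 1 used by the public CCSDS pseudo-randomizer, XORed over a frame; the bit ordering and seeding conventions here are our simplifications, so the exact output should not be taken as the standard's sequence:

        def pn8_bits(n, taps=(8, 7, 5, 3), state=0xFF):
            # Fibonacci LFSR over GF(2); a primitive degree-8 polynomial cycles
            # through a 255-bit maximal-length sequence before repeating.
            out = []
            for _ in range(n):
                out.append((state >> 7) & 1)          # output the MSB
                fb = 0
                for t in taps:
                    fb ^= (state >> (8 - t)) & 1
                state = ((state << 1) & 0xFF) | fb
            return out

        def randomize(data_bits):
            # XOR data with the PN sequence; applying it twice restores the data.
            pn = pn8_bits(len(data_bits))
            return [d ^ p for d, p in zip(data_bits, pn)]

        frame = [0] * 32                      # a pathological all-zeros frame
        scrambled = randomize(frame)          # now has transitions for symbol sync
        assert randomize(scrambled) == frame  # derandomizing recovers the frame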

  13. True random bit generators based on current time series of contact glow discharge electrolysis

    NASA Astrophysics Data System (ADS)

    Rojas, Andrea Espinel; Allagui, Anis; Elwakil, Ahmed S.; Alawadhi, Hussain

    2018-05-01

    Random bit generators (RBGs) in today's digital information and communication systems employ high-rate physical entropy sources such as electronic, photonic, or thermal time-series signals. However, the proper functioning of such physical systems is bound by specific constraints that make them in some cases weak and susceptible to external attacks. In this study, we show that the electrical current time series of contact glow discharge electrolysis, which is a dc voltage-powered micro-plasma in liquids, can be used for generating random bit sequences over a wide range of high dc voltages. The current signal is quantized into a binary stream by first using a simple moving average function, which makes the distribution centered around zero, and then applying logical operations, which enables the binarized data to pass all tests in the industry-standard randomness test suite of the National Institute of Standards and Technology. Furthermore, the robustness of this RBG against power supply attacks has been examined and verified.
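
    A sketch of the described post-processing chain, with our own choice of window length and logical operation standing in for details the abstract leaves unspecified: subtract a simple moving average to center the signal at zero, threshold the residual, then XOR adjacent bits to whiten:

        import numpy as np

        rng = np.random.default_rng(0)
        # Stand-in for the measured discharge current: slow drift plus noise.
        current = np.cumsum(rng.normal(0, 1e-3, 100_000)) + rng.normal(0, 0.05, 100_000)

        window = 64
        baseline = np.convolve(current, np.ones(window) / window, mode='same')
        residual = current - baseline               # distribution centered near zero

        raw_bits = (residual > 0).astype(np.uint8)  # quantize to a binary stream
        bits = raw_bits[0::2] ^ raw_bits[1::2]      # XOR pairs to reduce residual bias

        print(f"ones fraction: {bits.mean():.4f}")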

  14. Quantum random number generator based on quantum nature of vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.

    2017-11-01

    A quantum random number generator (QRNG) allows obtaining true random bit sequences. In a QRNG based on the quantum nature of vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model for a QRNG based on homodyne detection. These descriptions are identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive relations between the input radiation and the resulting differential current in the homodyne detector. We experimentally demonstrate the possibility of true random bit generation using a QRNG based on homodyne detection with a Y-splitter.

  15. Multi-floor cascading ferroelectric nanostructures: multiple data writing-based multi-level non-volatile memory devices.

    PubMed

    Hyun, Seung; Kwon, Owoong; Lee, Bom-Yi; Seol, Daehee; Park, Beomjin; Lee, Jae Yong; Lee, Ju Hyun; Kim, Yunseok; Kim, Jin Kon

    2016-01-21

    Multiple data writing-based multi-level non-volatile memory has gained strong attention for next-generation memory devices to quickly accommodate an extremely large number of data bits because it is capable of storing multiple data bits in a single memory cell at once. However, all previously reported devices have failed to store a large number of data bits due to the macroscale cell size and have not allowed fast access to the stored data due to slow single data writing. Here, we introduce a novel three-dimensional multi-floor cascading polymeric ferroelectric nanostructure, successfully operating as an individual cell. In one cell, each floor has its own piezoresponse and the piezoresponse of one floor can be modulated by the bias voltage applied to the other floor, which means simultaneously written data bits in both floors can be identified. This could achieve multi-level memory through a multiple data writing process.

  16. Physical layer one-time-pad data encryption through synchronized semiconductor laser networks

    NASA Astrophysics Data System (ADS)

    Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris

    2016-02-01

    Semiconductor lasers (SLs) have been proven to be a key device in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals with desirable statistics establishes them as a low-cost solution to cover various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series to digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers that are coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that are the seed to true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access random bit streams that are generated in real time and synchronized with the rest of the nodes, through the fiber-optic network, allows implementing a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error correction methods are used to reduce the errors in the TRBS and the final error rate at the data decoding level. An appropriate selection of the sampling methodology and of the physical properties of the chaotic seed signal through which the network locks into synchronization allows error-free performance.

  17. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography.

    PubMed

    Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang

    2010-08-16

    Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: a high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information-theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.

  18. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.

  19. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

    Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.

  20. Quantum random bit generation using energy fluctuations in stimulated Raman scattering.

    PubMed

    Bustard, Philip J; England, Duncan G; Nunn, Josh; Moffatt, Doug; Spanner, Michael; Lausten, Rune; Sussman, Benjamin J

    2013-12-02

    Random number sequences are a critical resource in modern information processing systems, with applications in cryptography, numerical simulation, and data sampling. We introduce a quantum random number generator based on the measurement of pulse energy quantum fluctuations in Stokes light generated by spontaneously-initiated stimulated Raman scattering. Bright Stokes pulse energy fluctuations up to five times the mean energy are measured with fast photodiodes and converted to unbiased random binary strings. Since the pulse energy is a continuous variable, multiple bits can be extracted from a single measurement. Our approach can be generalized to a wide range of Raman active materials; here we demonstrate a prototype using the optical phonon line in bulk diamond.
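
    The multi-bit extraction from a continuous variable can be illustrated with a generic equal-probability binning scheme (our construction, not necessarily the authors' procedure): partition the observed energy distribution into 2^k quantile cells and emit the k-bit cell index per pulse:

        import numpy as np

        rng = np.random.default_rng(1)
        # Stand-in for Stokes pulse energies: broad fluctuations about the mean.
        energies = rng.gamma(shape=1.0, scale=1.0, size=50_000)

        k = 3                                        # bits extracted per pulse
        edges = np.quantile(energies, np.linspace(0, 1, 2**k + 1)[1:-1])
        cells = np.searchsorted(edges, energies)     # equal-probability index, 0..2^k - 1

        bits = ((cells[:, None] >> np.arange(k)) & 1).astype(np.uint8).ravel()
        print(f"{k} bits per pulse, ones fraction: {bits.mean():.4f}")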

  1. Compression performance of HEVC and its format range and screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Li, Bin; Xu, Jizheng; Sullivan, Gary J.

    2015-09-01

    This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.

  2. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
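
    The alternating shave/set mechanics can be sketched directly on IEEE-754 bit patterns. Below is a simplified groomer for float32 arrays that keeps a fixed number of explicit mantissa bits; the real NCO implementation instead derives the mask from the requested number of significant decimal digits and handles special values, so treat this as a schematic:

        import numpy as np

        def bit_groom(values, keep_bits):
            """Alternately shave (zero) and set (one) the low mantissa bits."""
            u = values.astype(np.float32).view(np.uint32)  # astype copies; caller's data is safe
            n_tail = 23 - keep_bits                        # float32 has 23 explicit mantissa bits
            shave_mask = np.uint32((0xFFFFFFFF >> n_tail) << n_tail)  # AND: tail -> 0
            set_mask = np.uint32((1 << n_tail) - 1)                   # OR:  tail -> 1
            u[0::2] &= shave_mask    # even elements quantized toward zero
            u[1::2] |= set_mask      # odd elements quantized away from zero
            return u.view(np.float32)

        data = np.linspace(0.9, 1.1, 8, dtype=np.float32)
        groomed = bit_groom(data, keep_bits=10)
        print(np.abs(groomed - data).max())  # quantization error bounded by the tail size
        print(groomed - data)                # errors alternate in direction, limiting mean bias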

  3. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-01

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.

  4. Random Bits Forest: a Strong Classifier/Regressor for Big Data

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Yi; Pu, Weilin; Wen, Kathryn; Shugart, Yin Yao; Xiong, Momiao; Jin, Li

    2016-07-01

    Efficiency, memory consumption, and robustness are common problems with many popular methods for data analysis. As a solution, we present Random Bits Forest (RBF), a classification and regression algorithm that integrates neural networks (for depth), boosting (for width), and random forests (for prediction accuracy). Through a gradient boosting scheme, it first generates and selects ~10,000 small, 3-layer random neural networks. These networks are then fed into a modified random forest algorithm to obtain predictions. Testing with datasets from the UCI (University of California, Irvine) Machine Learning Repository shows that RBF outperforms other popular methods in both accuracy and robustness, especially with large datasets (N > 1000). The algorithm also performed well in testing with an independent data set, a real psoriasis genome-wide association study (GWAS).

  5. Compact Quantum Random Number Generator with Silicon Nanocrystals Light Emitting Device Coupled to a Silicon Photomultiplier

    NASA Astrophysics Data System (ADS)

    Bisadi, Zahra; Acerbi, Fabio; Fontana, Giorgio; Zorzi, Nicola; Piemonte, Claudio; Pucker, Georg; Pavesi, Lorenzo

    2018-02-01

    A small-sized photonic quantum random number generator, easy to implement in small electronic devices for secure data encryption and other applications, is in high demand nowadays. Here, we propose a compact configuration with a large-area silicon nanocrystal light-emitting device (LED) coupled to a silicon photomultiplier to generate random numbers. The random number generation methodology is based on the photon arrival time and is robust against the non-idealities of the detector and the source of quantum entropy. The raw data show a high quality of randomness and pass all the statistical tests in the National Institute of Standards and Technology (NIST) suite without a post-processing algorithm. The highest bit rate is 0.5 Mbps with an efficiency of 4 bits per detected photon.

  6. Narrowband (LPC-10) Vocoder Performance under Combined Effects of Random Bit Errors and Jet Aircraft Cabin Noise.

    DTIC Science & Technology

    1983-12-01

    AD-141 333: Narrowband (LPC-10) vocoder performance under combined effects of random bit errors and jet aircraft cabin noise. In-house report, June 82 - Sept. 83; Rome Air Development Center, Griffiss AFB NY; C. P. Smith. ...Compartment, and NCA Compartment were alike in their effects on overall vocoder performance. Composite performance...

  7. Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity

    DTIC Science & Technology

    2007-09-01

    transmits a 32,767-bit pseudo-random “short” code that repeats 37.5 times per second. Since the pseudo-random bit pattern and modulation scheme are... correlation process takes two “sample windows,” both of which are ν = 16 samples wide and are spaced N = 64 samples apart, and compares them. When the... technique in (3.4) is a necessary step in order to get a more accurate estimate of the sample shift from the symbol boundary correlator in (3.1).

  8. Electrical Characterization of Hughes HCMP 1852D and RCA CDP1852D 8-bit, CMOS, I/O Ports

    NASA Technical Reports Server (NTRS)

    Stokes, R. L.

    1979-01-01

    Twenty-five Hughes HCMP 1852D and 25 RCA CDP1852D 8-bit, CMOS, I/O port microcircuits underwent electrical characterization tests. All electrical measurements were performed on a Tektronix S-3260 Test System. Before electrical testing, the devices were subjected to a 168-hour burn-in at 125 C with the inputs biased at 10V. Four of the Hughes parts became inoperable during testing. They exhibited functional failures and out-of-range parametric measurements after a few runs of the test program.

  9. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak-lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.

  10. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    5-31: Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder. ...for AWGN channels has long been studied. There are well-known soft-decision codes like turbo codes and LDPC codes that can approach capacity to... bits) low-density parity-check (LDPC) code. ...The coded bits are randomly interleaved so that bits nearby go through different sub-channels, and are

  11. Traffic management mechanism for intranets with available-bit-rate access to the Internet

    NASA Astrophysics Data System (ADS)

    Hassan, Mahbub; Sirisena, Harsha R.; Atiquzzaman, Mohammed

    1997-10-01

    The design of a traffic management mechanism for intranets connected to the Internet via an available-bit-rate access link is presented. Selection of control parameters for this mechanism for optimum performance is shown through analysis. An estimate for packet loss probability at the access gateway is derived for random fluctuation of the available bit rate of the access link. Some implementation strategies for this mechanism in the standard intranet protocol stack are also suggested.

  12. A novel attack method about double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    By using optical image processing techniques, a novel text encryption and hiding method based on the double-random-phase-encoding technique is proposed in this paper. The first step is that the secret message is transformed into a 2-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Last, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
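
    The first packing step, placing the secret bit stream in the high bits of array elements while the low bits hold fixed values, can be sketched as follows; the 4/4 bit split and the fixed low-bit pattern are our assumptions, and the subsequent double-random-phase encoding and host-image superimposition are omitted:

        import numpy as np

        def text_to_array(secret, low_fill=0b0101):
            # Each character yields two elements: the high 4 bits carry a nibble
            # of the character, the low 4 bits hold a fixed pattern.
            nibbles = []
            for byte in secret.encode('utf-8'):
                nibbles += [byte >> 4, byte & 0xF]
            return np.array([(n << 4) | low_fill for n in nibbles], dtype=np.uint8)

        def array_to_text(arr):
            nibbles = [int(v) >> 4 for v in arr]
            data = bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2))
            return data.decode('utf-8')

        msg = "secret text"
        assert array_to_text(text_to_array(msg)) == msg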

  13. Fair loss-tolerant quantum coin flipping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlin, Guido; Brassard, Gilles; Bussieres, Felix

    Coin flipping is a cryptographic primitive in which two spatially separated players, who do not trust each other, wish to establish a common random bit. If we limit ourselves to classical communication, this task requires either assumptions on the computational power of the players or it requires them to send messages to each other with sufficient simultaneity to force their complete independence. Without such assumptions, all classical protocols are such that one dishonest player has complete control over the outcome. If we use quantum communication, on the other hand, protocols have been introduced that limit the maximal bias that dishonest players can produce. However, those protocols would be very difficult to implement in practice because they are susceptible to realistic losses on the quantum channel between the players or in their quantum memory and measurement apparatus. In this paper, we introduce a quantum protocol and we prove that it is completely impervious to loss. The protocol is fair in the sense that either player has the same probability of success in cheating attempts at biasing the outcome of the coin flip. We also give explicit and optimal cheating strategies for both players.

  14. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole.

    PubMed

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-05

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.

  15. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.

  16. Compact quantum random number generator based on superluminescent light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Wei, Shihai; Yang, Jie; Fan, Fan; Huang, Wei; Li, Dashuang; Xu, Bingjie

    2017-12-01

    By measuring the amplified spontaneous emission (ASE) noise of superluminescent light-emitting diodes, we propose and realize a practical quantum random number generator (QRNG). In the QRNG, after the detection and amplification of the ASE noise, the data acquisition and randomness extraction, which are integrated in a field-programmable gate array (FPGA), are both implemented in real time, and the final random bit sequences are delivered to a host computer with a real-time generation rate of 1.2 Gbps. Further, to achieve compactness, all the components of the QRNG are integrated on three independent printed circuit boards with a compact design, and the QRNG is packed in a small enclosure sized 140 mm × 120 mm × 25 mm. The final random bit sequences can pass all the NIST-STS and DIEHARD tests.

  17. Astronomical random numbers for quantum foundations experiments

    NASA Astrophysics Data System (ADS)

    Leung, Calvin; Brown, Amy; Nguyen, Hien; Friedman, Andrew S.; Kaiser, David I.; Gallicchio, Jason

    2018-04-01

    Photons from distant astronomical sources can be used as a classical source of randomness to improve fundamental tests of quantum nonlocality, wave-particle duality, and local realism through Bell's inequality and delayed-choice quantum eraser tests inspired by Wheeler's cosmic-scale Mach-Zehnder interferometer gedanken experiment. Such sources of random numbers may also be useful for information-theoretic applications such as key distribution for quantum cryptography. Building on the design of an astronomical random number generator developed for the recent cosmic Bell experiment [Handsteiner et al. Phys. Rev. Lett. 118, 060401 (2017), 10.1103/PhysRevLett.118.060401], in this paper we report on the design and characterization of a device that, with 20-nanosecond latency, outputs a bit based on whether the wavelength of an incoming photon is greater than or less than ≈700 nm. Using the one-meter telescope at the Jet Propulsion Laboratory Table Mountain Observatory, we generated random bits from astronomical photons in both color channels from 50 stars of varying color and magnitude, and from 12 quasars with redshifts up to z = 3.9. With stars, we achieved bit rates of ~1 × 10^6 Hz/m^2, limited by saturation of our single-photon detectors, and with quasars of magnitudes between 12.9 and 16, we achieved rates between ~10^2 and 2 × 10^3 Hz/m^2. For bright quasars, the resulting bitstreams exhibit sufficiently low amounts of statistical predictability as quantified by the mutual information. In addition, a sufficiently high fraction of bits generated are of true astronomical origin in order to address both the locality and freedom-of-choice loopholes when used to set the measurement settings in a test of the Bell-CHSH inequality.
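
    The bit-extraction rule is simple enough to state in a few lines; the sketch below, with simulated wavelengths standing in for real detections, also estimates the mutual information between consecutive bits, the predictability measure the abstract mentions:

        import numpy as np

        rng = np.random.default_rng(2)
        wavelengths = rng.normal(700, 80, 200_000)  # stand-in for detected photons (nm)
        bits = (wavelengths > 700).astype(int)      # redder than ~700 nm -> 1, bluer -> 0

        # Empirical mutual information (in bits) between consecutive output bits.
        x, y = bits[:-1], bits[1:]
        mi = 0.0
        for a in (0, 1):
            for b in (0, 1):
                p_ab = np.mean((x == a) & (y == b))
                p_a, p_b = np.mean(x == a), np.mean(y == b)
                if p_ab > 0:
                    mi += p_ab * np.log2(p_ab / (p_a * p_b))
        print(f"mutual information between successive bits: {mi:.2e} bits")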

  18. Design of high-throughput and low-power true random number generator utilizing perpendicularly magnetized voltage-controlled magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.

    2017-05-01

    A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications, where a stable bit is needed to store information, in this work the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of anisotropy to temporarily place the MTJ in an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage-controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by implementing a 64 × 64 MTJ array in CMOS circuits and executing operations in a parallel manner. Furthermore, the circuit consumes very little energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of voltage-controlled MTJ switching.

  19. Low power laser driver design in 28nm CMOS for on-chip and chip-to-chip optical interconnect

    NASA Astrophysics Data System (ADS)

    Belfiore, Guido; Szilagyi, Laszlo; Henker, Ronny; Ellinger, Frank

    2015-09-01

    This paper discusses the challenges and the trade-offs in the design of laser drivers for very-short-distance optical communications. A prototype integrated circuit is designed and fabricated in 28 nm super-low-power CMOS technology. The power consumption of the transmitter is 17.2 mW excluding the VCSEL, which in our test has a DC power consumption of 10 mW. The active area of the driver is only 0.0045 mm^2. The driver can achieve an error-free (BER < 10^-12) electrical data rate of 25 Gbit/s using a pseudo-random bit sequence of length 2^7 - 1. When the driver is connected to the VCSEL module, an open optical eye is reported at 15 Gbit/s. In the tested bias point the VCSEL module has a measured bandwidth of 10.7 GHz.

  20. Spin-transfer torque in multiferroic tunnel junctions with composite dielectric/ferroelectric barriers

    NASA Astrophysics Data System (ADS)

    Velev, Julian P.; Merodio, Pablo; Pollack, Cesar; Kalitsov, Alan; Chshiev, Mairbek; Kioussis, Nicholas

    2017-12-01

    Using model calculations, we demonstrate a very high level of control of the spin-transfer torque (STT) by electric field in multiferroic tunnel junctions with composite dielectric/ferroelectric barriers. We find that, for particular device parameters, toggling the polarization direction can switch the voltage-induced part of STT between a finite value and a value close to zero, i.e. quench and release the torque. Additionally, we demonstrate that under certain conditions the zero-voltage STT, i.e. the interlayer exchange coupling, can switch sign with polarization reversal, which is equivalent to reversing the magnetic ground state of the tunnel junction. This bias- and polarization-tunability of the STT could be exploited to engineer novel functionalities such as softening/hardening of the bit or increasing the signal-to-noise ratio in magnetic sensors, which can have important implications for magnetic random access memories or for combined memory and logic devices.

  1. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.

  2. False Operation of Static Random Access Memory Cells under Alternating Current Power Supply Voltage Variation

    NASA Astrophysics Data System (ADS)

    Sawada, Takuya; Takata, Hidehiro; Nii, Koji; Nagata, Makoto

    2013-04-01

    Static random access memory (SRAM) cores exhibit susceptibility to power supply voltage variation. False operation is investigated among SRAM cells under sinusoidal voltage variation on power lines introduced by direct RF power injection. A standard 16-kbyte SRAM core in a 90 nm, 1.5 V technology is diagnosed with built-in self-test and on-die noise monitor techniques. The bit error rate is shown to be highly sensitive to the frequency of the injected voltage variation, while it is not greatly influenced by differences in frequency and phase relative to the SRAM clocking. It is also observed that the distribution of false bits is substantially random across the cell array.

  3. Differential Fault Analysis on CLEFIA

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Wu, Wenling; Feng, Dengguo

    CLEFIA is a new 128-bit block cipher proposed recently by the SONY corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against differential fault attack is explored. Our attack adopts the byte-oriented model of random faults. By inducing one random byte fault in one round, four bytes of faults can be simultaneously obtained in the next round, which efficiently reduces the total number of fault inductions in the attack. After attacking the last several rounds' encryptions, the original secret key can be recovered based on some analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.

  4. Efficient quantum state transfer in an engineered chain of quantum bits

    NASA Astrophysics Data System (ADS)

    Sandberg, Martin; Knill, Emanuel; Kapit, Eliot; Vissers, Michael R.; Pappas, David P.

    2016-03-01

    We present a method of performing quantum state transfer in a chain of superconducting quantum bits. Our protocol is based on engineering the energy levels of the qubits in the chain and tuning them all simultaneously with an external flux bias. The system is designed to allow sequential adiabatic state transfers, resulting in on-demand quantum state transfer from one end of the chain to the other. Numerical simulations of the master equation using realistic parameters for capacitive nearest-neighbor coupling, energy relaxation, and dephasing show that fast, high-fidelity state transfer should be feasible using this method.

  5. On the design of henon and logistic map-based random number generator

    NASA Astrophysics Data System (ADS)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) method is one of the approaches to generating the key sequence. The randomness sources of TRNGs divide into three main groups, i.e., electrical-noise based, jitter based, and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed and then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems, as sketched below. A mathematical model is implemented for numerical simulations. We used a comparator process as the harvesting method to obtain the series of random bits. Without any post-processing, the proposed design generated random bit sequences with high entropy values that passed all NIST 800.22 statistical tests.
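
    A sketch of the 2D-plus-1D combination with our own parameter choices (the paper's coupling, constants, and comparator may differ): iterate a Hénon map and a logistic map side by side and harvest one bit per step by comparing their normalized outputs:

        def henon_logistic_bits(n, a=1.4, b=0.3, r=3.99):
            x, y = 0.1, 0.1    # Henon (2D) state
            z = 0.37           # logistic (1D) state
            bits = []
            for _ in range(n):
                x, y = 1 - a * x * x + y, b * x   # Henon map step
                z = r * z * (1 - z)               # logistic map step
                # Comparator harvesting: map x from roughly [-1.5, 1.5] to [0, 1]
                # and compare against z; real designs tune this for balance.
                bits.append(1 if (x + 1.5) / 3.0 > z else 0)
            return bits

        print(''.join(map(str, henon_logistic_bits(64))))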

  6. Physical key-protected one-time pad

    PubMed Central

    Horstmeyer, Roarke; Judkewitz, Benjamin; Vellekoop, Ivo M.; Assawaworrarit, Sid; Yang, Changhuei

    2013-01-01

    We describe an encrypted communication principle that forms a secure link between two parties without electronically saving either of their keys. Instead, random cryptographic bits are kept safe within the unique mesoscopic randomness of two volumetric scattering materials. We demonstrate how a shared set of patterned optical probes can generate 10 gigabits of statistically verified randomness between a pair of unique 2 mm^3 scattering objects. This shared randomness is used to facilitate information-theoretically secure communication following a modified one-time pad protocol. Benefits of volumetric physical storage over electronic memory include the inability to probe, duplicate or selectively reset any bits without fundamentally altering the entire key space. Our ability to securely couple the randomness contained within two unique physical objects can extend to strengthen hardware required by a variety of cryptographic protocols, which is currently a critically weak link in the security pipeline of our increasingly mobile communication culture. PMID:24345925

  7. Self-reverse-biased solar panel optical receiver for simultaneous visible light communication and energy harvesting.

    PubMed

    Shin, Won-Ho; Yang, Se-Hoon; Kwon, Do-Hoon; Han, Sang-Kook

    2016-10-31

    We propose a self-reverse-biased solar panel optical receiver for energy harvesting and visible light communication. Since the solar panel converts an optical component into an electrical component, it provides both energy harvesting and communication. The signal component can be separated from the direct current component, and these components are used for communication and energy harvesting, respectively. We employed a self-reverse-biased receiver circuit to improve the communication and energy harvesting performance. The reverse bias on the solar panel improves the responsivity and response time. The proposed system achieved 17.05 Mbps discrete multitone transmission with a bit error rate of 1.1 × 10^-3 and enhanced solar energy conversion efficiency.

  8. Experimental phase diagram of zero-bias conductance peaks in superconductor/semiconductor nanowire devices

    PubMed Central

    Chen, Jun; Yu, Peng; Stenger, John; Hocevar, Moïra; Car, Diana; Plissard, Sébastien R.; Bakkers, Erik P. A. M.; Stanescu, Tudor D.; Frolov, Sergey M.

    2017-01-01

    Topological superconductivity is an exotic state of matter characterized by spinless p-wave Cooper pairing of electrons and by Majorana zero modes at the edges. The first signature of topological superconductivity is a robust zero-bias peak in tunneling conductance. We perform tunneling experiments on semiconductor nanowires (InSb) coupled to superconductors (NbTiN) and establish the zero-bias peak phase in the space of gate voltage and external magnetic field. Our findings are consistent with calculations for a finite-length topological nanowire and provide means for Majorana manipulation as required for braiding and topological quantum bits. PMID:28913432

  9. Practical quantum key distribution protocol without monitoring signal disturbance.

    PubMed

    Sasaki, Toshihiko; Yamamoto, Yoshihisa; Koashi, Masato

    2014-05-22

    Quantum cryptography exploits the fundamental laws of quantum mechanics to provide a secure way to exchange private information. Such an exchange requires a common random bit sequence, called a key, to be shared secretly between the sender and the receiver. The basic idea behind quantum key distribution (QKD) has widely been understood as the property that any attempt to distinguish encoded quantum states causes a disturbance in the signal. As a result, implementation of a QKD protocol involves an estimation of the experimental parameters influenced by the eavesdropper's intervention, which is achieved by randomly sampling the signal. If the estimation of many parameters with high precision is required, the portion of the signal that is sacrificed increases, thus decreasing the efficiency of the protocol. Here we propose a QKD protocol based on an entirely different principle. The sender encodes a bit sequence onto non-orthogonal quantum states and the receiver randomly dictates how a single bit should be calculated from the sequence. The eavesdropper, who is unable to learn the whole of the sequence, cannot guess the bit value correctly. An achievable rate of secure key distribution is calculated by considering complementary choices between quantum measurements of two conjugate observables. We found that a practical implementation using a laser pulse train achieves a key rate comparable to a decoy-state QKD protocol, an often-used technique for lasers. It also has a better tolerance of bit errors and of finite-sized-key effects. We anticipate that this finding will give new insight into how the probabilistic nature of quantum mechanics can be related to secure communication, and will facilitate the simple and efficient use of conventional lasers for QKD.

  10. New adaptive color quantization method based on self-organizing maps.

    PubMed

    Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai

    2005-01-01

    Color quantization (CQ) is an image processing task popularly used to convert true color images to palettized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with the neuron-dependent frequency sensitive learning model, the global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde, Buzo, and Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with a much greater robustness against variations in network parameters than the state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging an existing encoder in an overall lossy compression scheme.

  11. Standard random number generation for MBASIC

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the formula a_{m+532} = a_{m+37} + a_m (mod 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
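
    The recurrence lends itself to a direct sketch: keep a 532-bit history, XOR the two taps, and pack nonoverlapping 28-bit words; the seeding and the final scaling to [0, 1) below are our simplifications of what an MBASIC random-number function would do:

        from collections import deque
        import random

        def tausworthe_numbers(n_words, p=532, q=37, seed=12345):
            random.seed(seed)
            # Seed the first p bits (the state must not be all zeros).
            state = deque((random.getrandbits(1) for _ in range(p)), maxlen=p)
            def next_bit():
                bit = state[q] ^ state[0]   # a_{m+532} = a_{m+37} + a_m (mod 2)
                state.append(bit)           # appending evicts a_m, advancing the window
                return bit
            words = []
            for _ in range(n_words):
                w = 0
                for _ in range(28):         # nonoverlapping adjacent 28-bit words
                    w = (w << 1) | next_bit()
                words.append(w / 2**28)     # scale to [0, 1)
            return words

        print(tausworthe_numbers(3))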

  12. Experimental demonstration of real-time adaptively modulated DDO-OFDM systems with a high spectral efficiency up to 5.76bit/s/Hz transmission over SMF links.

    PubMed

    Chen, Ming; He, Jing; Tang, Jin; Wu, Xian; Chen, Lin

    2014-07-28

    In this paper, an FPGA-based real-time adaptively modulated 256/64/16QAM-encoded baseband OFDM transceiver with a spectral efficiency of up to 5.76 bit/s/Hz is successfully developed and experimentally demonstrated in a simple intensity-modulated direct-detection optical communication system. Experimental results show that it is feasible to transmit an adaptively modulated real-time optical OFDM signal with a raw bit rate of 7.19 Gbps over 20 km and 50 km single-mode fibers (SMFs). A performance comparison between real-time and off-line digital signal processing is performed, and the results show a negligible power penalty. In addition, to obtain the best transmission performance, the direct-current (DC) bias voltage for the MZM and the launch power into the optical fiber links are explored in the real-time optical OFDM systems.

  13. A dynamically reconfigurable logic cell: from artificial neural networks to quantum-dot cellular automata

    NASA Astrophysics Data System (ADS)

    Naqvi, Syed Rameez; Akram, Tallha; Iqbal, Saba; Haider, Sajjad Ali; Kamran, Muhammad; Muhammad, Nazeer

    2018-02-01

    Considering the lack of optimization support for Quantum-dot Cellular Automata, we propose a dynamically reconfigurable logic cell capable of implementing various logic operations by means of artificial neural networks. The cell can be reconfigured to any 2-input combinational logic gate by altering the strength of connections, called weights and biases. We demonstrate how these cells may appositely be organized to perform multi-bit arithmetic and logic operations. The proposed work is important in that it gives a standard implementation of an 8-bit arithmetic and logic unit for quantum-dot cellular automata with minimal area and latency overhead. We also compare the proposed design with a few existing arithmetic and logic units, and show that it is more area efficient than any equivalent available in literature. Furthermore, the design is adaptable to 16, 32, and 64 bit architectures.

  14. Unified random access memory (URAM) by integration of a nanocrystal floating gate for nonvolatile memory and a partially depleted floating body for capacitorless 1T-DRAM

    NASA Astrophysics Data System (ADS)

    Ryu, Seong-Wan; Han, Jin-Woo; Kim, Chung-Jin; Kim, Sungho; Choi, Yang-Kyu

    2009-03-01

    This paper describes a unified memory (URAM) that utilizes a nanocrystal SOI MOSFET for multi-functional applications of both nonvolatile memory (NVM) and capacitorless 1T-DRAM. By using a discrete storage node (Ag nanocrystal) as the floating gate of the NVM, high defect immunity and 2-bit/cell operation were achieved. The embedded nanocrystal NVM also showed 1T-DRAM operation (program/erase time = 100 ns) characteristics, which were realized by storing holes in the floating body of the SOI MOSFET, without requiring an external capacitor. Three-bit/cell operation was accomplished for different applications - 2-bits for nonvolatility and 1-bit for fast operation.

  15. Investigation of system integration methods for bubble domain flight recorders

    NASA Technical Reports Server (NTRS)

    Chen, T. T.; Bohning, O. D.

    1975-01-01

    System integration methods for bubble domain flight recorders are investigated. Bubble memory module packaging and assembly, the control electronics design and construction, field coils, and permanent magnet bias structure design are studied. A small 60-kbit engineering model was built and tested to demonstrate the feasibility of the bubble recorder. Based on the various studies performed, a projection is made for a 50,000,000-bit prototype recorder. It is estimated that the recorder will occupy 190 cubic in., weigh 12 lb, and consume 12 W of power when all four of its tracks are operated in parallel at a 150 kHz data rate.

  16. Random numbers from vacuum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read into a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
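
    As a hedged illustration of LFSR-based post-processing (the paper's exact extractor may be structured differently), the sketch below XORs raw digitized bits with the output of a maximal-length 16-bit Fibonacci LFSR to flatten residual bias:

```python
# Hedged sketch of LFSR-based whitening, one common way a linear feedback
# shift register is used in randomness extraction. The taps below realize
# the classic maximal polynomial x^16 + x^14 + x^13 + x^11 + 1 and are
# illustrative; the paper's extractor may differ.
def lfsr16(state):
    """Infinite bit generator from a maximal-length 16-bit Fibonacci LFSR."""
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state & 1

def whiten(raw_bits, seed=0xACE1):
    """XOR raw digitized bits with the LFSR sequence to flatten bias."""
    return [r ^ l for r, l in zip(raw_bits, lfsr16(seed))]

print(whiten([1, 1, 1, 0, 1, 1, 0, 1]))  # toy raw data
```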

  17. Demonstration of the Potential of Magnetic Tunnel Junctions for a Universal RAM Technology

    NASA Astrophysics Data System (ADS)

    Gallagher, William J.

    2000-03-01

    Over the past four years, tunnel junctions with magnetic electrodes have emerged as promising devices for future magnetoresistive sensing and for information storage. This talk will review advances in these devices, focusing particularly on the use of magnetic tunnel junctions for magnetic random access memory (MRAM). Exchange-biased versions of magnetic tunnel junctions (MTJs) in particular will be shown to have useful properties for forming magnetic memory storage elements in a novel cross-point architecture. Exchange-biased MTJ elements have been made with areas as small as 0.1 square microns and have shown magnetoresistance values exceeding 40%. The potential of exchange-biased MTJs for MRAM has been most seriously explored in a demonstration experiment involving the integration of 0.25 micron CMOS technology with a special magnetic tunnel junction "back end." The magnetic back end is based upon multi-layer magnetic tunnel junction growth technology which was developed using research-scale equipment and one-inch substrates. For the demonstration, the CMOS wafers processed through two metal layers were cut into one-inch squares for deposition of bottom-pinned exchange-biased magnetic tunnel junctions. The samples were then processed through four additional lithographic levels to complete the circuits. The demonstration focused attention on a number of processing and device issues that were addressed successfully enough that key performance aspects of MTJ MRAM were demonstrated in 1-Kbit arrays, including reads and writes in less than 10 ns and nonvolatility. While other key issues remain to be addressed, these results suggest that MTJ MRAM might simultaneously provide much of the functionality now provided separately by SRAM, DRAM, and NVRAM.

  18. Source-Independent Quantum Random Number Generation

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bits. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bits/s.

  19. A Novel Bit-level Image Encryption Method Based on Chaotic Map and Dynamic Grouping

    NASA Astrophysics Data System (ADS)

    Zhang, Guo-Ji; Shen, Yan

    2012-10-01

    In this paper, a novel bit-level image encryption method based on dynamic grouping is proposed. In the proposed method, the plain-image is divided randomly into several groups, and then a permutation-diffusion process is carried out at the bit level. The keystream generated by the logistic map is related to the plain-image, which confuses the relationship between the plain-image and the cipher-image. Computer simulation results of statistical analysis, information entropy analysis, and sensitivity analysis show that the proposed encryption method is secure and reliable enough to be used for communication applications.
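
    A minimal sketch of a plain-image-dependent logistic-map keystream of the kind the abstract describes (the parameter choices and the hash-based seeding are ours, not the paper's):

```python
# Minimal sketch: logistic-map keystream whose initial condition depends
# on the plain-image, so the keystream tracks the image content.
import hashlib

def logistic_keystream(x0, n, r=3.99):
    """Iterate x -> r*x*(1-x) in the chaotic regime; one byte per iterate."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)  # quantize iterate to one byte
    return out

def image_dependent_seed(plain_bytes):
    """Derive x0 in (0, 1) from a hash of the plain-image."""
    h = int.from_bytes(hashlib.sha256(plain_bytes).digest()[:8], "big")
    return (h % 10**8) / 10**8 * 0.9 + 0.05   # keep x0 away from 0 and 1

seed = image_dependent_seed(b"example plain-image bytes")
print(logistic_keystream(seed, 8))
```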

  20. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    PubMed Central

    Sang, Jun; Ling, Shenggui; Alam, Mohammad S.

    2012-01-01

    In this paper, a double-random phase-encoding-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array; the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with the double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003

  1. Physically unclonable cryptographic primitives using self-assembled carbon nanotubes.

    PubMed

    Hu, Zhaoying; Comeras, Jose Miguel M Lobez; Park, Hongsik; Tang, Jianshi; Afzali, Ali; Tulevski, George S; Hannon, James B; Liehr, Michael; Han, Shu-Jen

    2016-06-01

    Information security underpins many aspects of modern society. However, silicon chips are vulnerable to hazards such as counterfeiting, tampering and information leakage through side-channel attacks (for example, by measuring power consumption, timing or electromagnetic radiation). Single-walled carbon nanotubes are a potential replacement for silicon as the channel material of transistors due to their superb electrical properties and intrinsic ultrathin body, but problems such as limited semiconducting purity and non-ideal assembly still need to be addressed before they can deliver high-performance electronics. Here, we show that by using these inherent imperfections, an unclonable electronic random structure can be constructed at low cost from carbon nanotubes. The nanotubes are self-assembled into patterned HfO2 trenches using ion-exchange chemistry, and the width of the trench is optimized to maximize the randomness of the nanotube placement. With this approach, two-dimensional (2D) random bit arrays are created that can offer ternary-bit architecture by determining the connection yield and switching type of the nanotube devices. As a result, our cryptographic keys provide a significantly higher level of security than conventional binary-bit architecture with the same key size.

  2. Physically unclonable cryptographic primitives using self-assembled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Hu, Zhaoying; Comeras, Jose Miguel M. Lobez; Park, Hongsik; Tang, Jianshi; Afzali, Ali; Tulevski, George S.; Hannon, James B.; Liehr, Michael; Han, Shu-Jen

    2016-06-01

    Information security underpins many aspects of modern society. However, silicon chips are vulnerable to hazards such as counterfeiting, tampering and information leakage through side-channel attacks (for example, by measuring power consumption, timing or electromagnetic radiation). Single-walled carbon nanotubes are a potential replacement for silicon as the channel material of transistors due to their superb electrical properties and intrinsic ultrathin body, but problems such as limited semiconducting purity and non-ideal assembly still need to be addressed before they can deliver high-performance electronics. Here, we show that by using these inherent imperfections, an unclonable electronic random structure can be constructed at low cost from carbon nanotubes. The nanotubes are self-assembled into patterned HfO2 trenches using ion-exchange chemistry, and the width of the trench is optimized to maximize the randomness of the nanotube placement. With this approach, two-dimensional (2D) random bit arrays are created that can offer ternary-bit architecture by determining the connection yield and switching type of the nanotube devices. As a result, our cryptographic keys provide a significantly higher level of security than conventional binary-bit architecture with the same key size.

  3. Improved classical and quantum random access codes

    NASA Astrophysics Data System (ADS)

    Liabøtrø, O.

    2017-05-01

    A (quantum) random access code ((Q)RAC) is a scheme that encodes n bits into m (qu)bits such that any of the n bits can be recovered with a worst-case probability p > 1/2. We generalize (Q)RACs to a scheme encoding n d-levels into m (quantum) d-levels such that any d-level can be recovered with the probability for every wrong outcome value being less than 1/d. We construct explicit solutions for all n ≤ (d^{2m} - 1)/(d - 1). For d = 2, the constructions coincide with those previously known. We show that the (Q)RACs are d-parity oblivious, generalizing ordinary parity obliviousness. We further investigate optimization of the success probabilities. For d = 2, we use the measure operators of the previously best-known solutions, but improve the encoding states to give a higher success probability. We conjecture that for maximal (n = 4^m - 1, m, p) QRACs, p = (1/2){1 + [(√3 + 1)^m - 1]^{-1}} is possible, and show that it is an upper bound for the measure operators that we use. We then compare (n, m, p_q) QRACs with classical (n, 2m, p_c) RACs. We can always find p_q ≥ p_c, but the classical code gives information about every input bit simultaneously, while the QRAC only gives information about a subset. For several different (n, 2, p) QRACs, we see the same trade-off, as the best p values are obtained when the number of bits that can be obtained simultaneously is as small as possible. The trade-off is connected to parity obliviousness, since high-certainty information about several bits can be used to calculate probabilities for parities of subsets.

  4. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
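
    A hedged sketch of the least-significant-bits concatenation idea: only the noisiest low-order bits of each raw sensor reading are kept and concatenated into output bytes. The read_sensor() function is a hypothetical stand-in for an on-board ADC read, simulated here:

```python
# Hedged sketch of LSB-concatenation entropy harvesting from a sensor.
import random

def read_sensor():
    """Hypothetical 12-bit ADC reading (simulated here with noise)."""
    return 2048 + random.randint(-40, 40)

def harvest_bits(n_bytes, lsb_per_sample=2):
    """Concatenate the low bits of successive readings into random bytes."""
    bits, out = [], bytearray()
    while len(out) < n_bytes:
        sample = read_sensor()
        for i in range(lsb_per_sample):
            bits.append((sample >> i) & 1)   # keep only the noisy low bits
        while len(bits) >= 8:
            byte, bits = bits[:8], bits[8:]
            out.append(int("".join(map(str, byte)), 2))
    return bytes(out)

print(harvest_bits(8).hex())
```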

  5. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.

  6. Random access codes and nonlocal resources

    NASA Astrophysics Data System (ADS)

    Chaturvedi, Anubhav; Pawlowski, Marcin; Horodecki, Karol

    2017-08-01

    This work explores the notion of inter-convertibility between a cryptographic primitive, the random access code (RAC), and bipartite no-signaling nonlocal resources. To this end we introduce two generalizations of the Popescu-Rohrlich box (PR) and investigate their relation with the corresponding RACs. The first generalization is based on the number of Alice's input bits; we refer to it as the B_n-box. We show that the no-signaling condition imposes an equivalence between the B_n-box and the (n → 1) RAC (encoding of n input bits to 1 bit of message). As an application we show that (n - 1) PRs supplemented with one bit of communication are necessary and sufficient to win a (n → 1) RAC with certainty. Furthermore, we present a signaling instance of a perfectly working (n → 1) RAC which cannot simulate the B_n-box, thus showing that it is weaker than its no-signaling counterpart. For the second generalization we replace Alice's input bits with dits (d-leveled classical systems); we call this the B_n^d-box. In this case the no-signaling condition is not enough to enforce an equivalence between the B_n^d-box and the (n → 1, d) RAC (encoding of n input dits to 1 dit of message); i.e., while the B_n^d-box can win a (n → 1, d) RAC with certainty, not all no-signaling instances of a (n → 1, d) RAC can simulate the B_n^d-box. We use resource inequalities to quantitatively capture these results.

  7. A novel ternary content addressable memory design based on resistive random access memory with high density and low search energy

    NASA Astrophysics Data System (ADS)

    Han, Runze; Shen, Wensheng; Huang, Peng; Zhou, Zheng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    A novel ternary content addressable memory (TCAM) design based on resistive random access memory (RRAM) is presented. Each TCAM cell consists of two parallel RRAMs to both store and search for ternary data. The cell size of the proposed design is 8F^2, enabling a ∼60× cell area reduction compared with the conventional static random access memory (SRAM) based implementation. Simulation results also show that the search delay and energy consumption of the proposed design for a 64-bit word search are 2 ps and 0.18 fJ/bit/search, respectively, at the 22 nm technology node, significant improvements over previous works. The desired characteristics of RRAM for implementation of a high-performance TCAM search chip are also discussed.

  8. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
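
    A hedged sketch of the fixed-rate block idea (not the paper's actual transform or embedded coder): each block stores one shared exponent plus fixed-width mantissas, so every block occupies the same number of bits and can be read or written independently, which is what gives block-granularity random access:

```python
# Hedged sketch of fixed-rate block compression: shared block exponent
# plus fixed-width mantissas. Block and mantissa sizes are toy values.
import math

BLOCK, MANT_BITS = 4, 12

def compress_block(vals):
    """Encode BLOCK floats as (exponent, mantissas); fixed size per block."""
    e = max((math.frexp(v)[1] for v in vals if v != 0.0), default=0)
    scale = 2.0 ** (MANT_BITS - e)
    mants = [max(-(1 << MANT_BITS), min((1 << MANT_BITS) - 1,
             int(round(v * scale)))) for v in vals]
    return e, mants

def decompress_block(e, mants):
    """Invert the fixed-point scaling to recover near-lossless values."""
    scale = 2.0 ** (MANT_BITS - e)
    return [m / scale for m in mants]

data = [3.14159, 2.71828, -1.41421, 0.57722]
e, mants = compress_block(data)
print(decompress_block(e, mants))   # near-lossless reconstruction
```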

  9. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design for achieving the required bit transition density in the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. The review contains a recommended circuit approach, specifies the pseudo-random (PN) sequence to be used, and details the properties of the sequence. Calculations showing the probability of failing to meet the required transition density are included. A computer simulation of the data stream and PN cover sequence is provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated, and the demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
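
    The cover-sequence idea can be illustrated as follows: XORing the data stream with a synchronized PN sequence guarantees transitions even through long constant runs, and the receiver XORs with the same sequence to recover the data. The 7-bit LFSR polynomial below is our choice, not the one the report specifies:

```python
# Illustrative sketch: PN cover sequence restoring bit transitions.
def pn_sequence(n, state=0b1111111):
    """7-stage maximal-length LFSR (x^7 + x^6 + 1), period 127."""
    out = []
    for _ in range(n):
        out.append(state & 1)
        bit = ((state >> 0) ^ (state >> 1)) & 1
        state = (state >> 1) | (bit << 6)
    return out

data = [0] * 16                       # worst case: no transitions at all
covered = [d ^ p for d, p in zip(data, pn_sequence(len(data)))]
print(covered)                        # transitions reappear in the covered stream
# The receiver XORs with the same synchronized PN sequence to recover data.
```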

  10. Multiple ECG Fiducial Points-Based Random Binary Sequence Generation for Securing Wireless Body Area Networks.

    PubMed

    Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif

    2017-05-01

    Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated, in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival times of the fiducial points, such as the P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on these arrival times and used as ECG features to generate random BSes with low latency. According to our analysis of real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with schemes that rely solely on IPIs to generate BSes, the MFBSG algorithm uses five feature values from one heartbeat cycle, and can be up to five times faster than the solely IPI-based methods, thus achieving the design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
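
    A hedged sketch of the MFBSG idea: several fiducial intervals per beat are each quantized and their low-order bits concatenated, so one heartbeat yields many bits instead of the few obtained from the IPI alone. The interval values below are simulated; a real implementation would detect the peaks via wavelet transforms:

```python
# Hedged sketch: bits from quantized fiducial intervals of one heartbeat.
import random

def beat_intervals():
    """Simulated RR, RQ, RS, RP, RT intervals in milliseconds."""
    return [800 + random.gauss(0, 30), 40 + random.gauss(0, 4),
            35 + random.gauss(0, 4), 120 + random.gauss(0, 8),
            300 + random.gauss(0, 10)]

def bits_from_beat(bits_per_interval=4):
    """Take the low-order bits of each quantized interval (the noisiest part)."""
    out = []
    for v in beat_intervals():
        q = int(v * 4)                      # 0.25 ms quantization step
        out += [(q >> i) & 1 for i in range(bits_per_interval)]
    return out

print(bits_from_beat())  # 20 bits per heartbeat instead of ~4 from the IPI alone
```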

  11. Analysis of space telescope data collection system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    An analysis of the expected performance of the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.

  12. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs.

    PubMed

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-12-26

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
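
    A quick numerical check of the noise claim (synthetic values; we assume N = 10 samplings, which matches the reported figures): averaging N samples of the same pixel output reduces random noise by roughly 1/sqrt(N):

```python
# Numerical check: multiple sampling reduces noise by ~1/sqrt(N).
import random, statistics

def adc_sample(true_mv=500.0, noise_uv=848.3):
    """One synthetic pixel readout with the reported single-sample noise."""
    return true_mv + random.gauss(0, noise_uv / 1000.0)

def averaged_readout(n_samples):
    return statistics.mean(adc_sample() for _ in range(n_samples))

single = statistics.stdev(adc_sample() for _ in range(20000))
multi = statistics.stdev(averaged_readout(10) for _ in range(20000))
print(f"single-sample noise ~{single*1000:.0f} uV, 10-sample ~{multi*1000:.0f} uV")
# Expect roughly 848 uV -> 268 uV, consistent with the reported 270.4 uV.
```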

  13. Anisotropy Induced Switching Field Distribution in High-Density Patterned Media

    NASA Astrophysics Data System (ADS)

    Talapatra, A.; Mohanty, J.

    We present here a micromagnetic study of the variation of the switching field distribution (SFD) in a high-density patterned media as a function of the magnetic anisotropy of the system. We consider the manifold effects of magnetic anisotropy in terms of its magnitude, tilt of the anisotropy axis, and random arrangements of magnetic islands with random anisotropy values. Our calculations show that a reduction in anisotropy causes a linear decrease in coercivity, because the anisotropy energy tries to align the spins along a preferred crystallographic direction. A tilt of the anisotropy axis results in a decrease in the squareness of the hysteresis loop and hence facilitates switching. Finally, experimental realities such as the lithographic distribution of magnetic islands, their orientation, and the creation of defects demand that the anisotropy distribution be random, with random repetitions. We explain that the range of anisotropy values and the number of bits with different anisotropy play a key role in the SFD, whereas the positions of the bits and their repetitions do not contribute considerably.

  14. Solution-Processed Carbon Nanotube True Random Number Generator.

    PubMed

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.

  15. Side Channel Attacks on STTRAM and Low Overhead Countermeasures

    DTIC Science & Technology

    2017-03-20

    introduce security vulnerabilities and expose the cache memory to side channel attacks. In this paper, we propose a side channel attack (SCA) model...where the adversary can monitor the supply current of the memory array to partially identify the sensitive cache data that is being read or written. We...propose solutions such as short retention STTRAM, obfuscation of SCA using 1-bit parity, multi-bit random write, and neutralizing the SCA using

  16. Asymmetric Memory Circuit Would Resist Soft Errors

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Perlman, Marvin

    1990-01-01

    Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets", due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.

  17. Choice of optical system is critical for the security of double random phase encryption systems

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Cassidy, Derek; Zhao, Liang; Ryle, James P.; Healy, John J.; Sheridan, John T.

    2017-06-01

    The linear canonical transform (LCT) is used in modeling coherent light-field propagation through first-order optical systems. Recently, a generic optical system, known as the quadratic phase encoding system (QPES), for encrypting a two-dimensional image has been reported. In such systems, two random phase keys and the individual LCT parameters (α, β, γ) serve as secret keys of the cryptosystem. It is important that such encryption systems also satisfy some dynamic security properties. We therefore examine such systems using two cryptographic evaluation methods, the avalanche effect and the bit independence criterion, which indicate the degree of security of cryptographic algorithms using QPES. We compared our simulation results with the conventional Fourier and Fresnel transform-based double random phase encryption (DRPE) systems. The results show that the LCT-based DRPE has excellent avalanche and bit-independence characteristics compared to the conventional Fourier- and Fresnel-based encryption systems.

  18. Acoustically assisted spin-transfer-torque switching of nanomagnets: An energy-efficient hybrid writing scheme for non-volatile memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biswas, Ayan K.; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha

    We show that the energy dissipated to write bits in spin-transfer-torque random access memory can be reduced by an order of magnitude if a surface acoustic wave (SAW) is launched underneath the magneto-tunneling junctions (MTJs) storing the bits. The SAW-generated strain rotates the magnetization of every MTJ's soft magnet from the easy towards the hard axis, whereupon passage of a small spin-polarized current through a target MTJ selectively switches it to the desired state with >99.99% probability at room temperature, thereby writing the bit. The other MTJs return to their original states at the completion of the SAW cycle.

  19. A Scheme for Obtaining Secure S-Boxes Based on Chaotic Baker's Map

    NASA Astrophysics Data System (ADS)

    Gondal, Muhammad Asif; Abdul Raheem; Hussain, Iqtadar

    2014-09-01

    In this paper, a method for obtaining cryptographically strong 8 × 8 substitution boxes (S-boxes) is presented. The method is based on the chaotic baker's map and a "mini version" of a new block cipher with a block size of 8 bits, and can be easily and efficiently performed on a computer. The cryptographic strength of some 8 × 8 S-boxes randomly produced by the method is analyzed. The results show that (1) all of them are bijective; (2) the nonlinearity of each of their output bits is usually about 100; (3) all of them approximately satisfy the strict avalanche criterion and the output-bit independence criterion; and (4) they all have an almost equiprobable input/output XOR distribution.
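
    For illustration only (our own toy construction, not the authors' baker's-map cipher): a bijective byte permutation can be derived by ranking chaotic-map iterates, and the bijectivity claim can then be verified directly:

```python
# Illustrative sketch: candidate 8x8 S-box from ranked chaotic iterates.
def chaotic_sbox(x0=0.41, r=3.9999):
    """Iterate a logistic map 256 times; rank the iterates to get a permutation."""
    xs, x = [], x0
    for _ in range(256):
        x = r * x * (1 - x)
        xs.append(x)
    # Sorting (value, index) pairs yields the indices in rank order,
    # i.e., a permutation of 0..255 (ties among floats are negligible).
    return [i for _, i in sorted(zip(xs, range(256)))]

s = chaotic_sbox()
print("bijective:", sorted(s) == list(range(256)))
print("first row:", s[:16])
```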

  20. Many-body localization beyond eigenstates in all dimensions

    NASA Astrophysics Data System (ADS)

    Chandran, A.; Pal, A.; Laumann, C. R.; Scardicchio, A.

    2016-10-01

    Isolated quantum systems with quenched randomness exhibit many-body localization (MBL), wherein they do not reach local thermal equilibrium even when highly excited above their ground states. It is widely believed that individual eigenstates capture this breakdown of thermalization at finite size. We show that this belief is false in general and that a MBL system can exhibit the eigenstate properties of a thermalizing system. We propose that localized approximately conserved operators (l*-bits) underlie localization in such systems. In dimensions d > 1, we further argue that the existing MBL phenomenology is unstable to boundary effects and gives way to l*-bits. Physical consequences of l*-bits include the possibility of an eigenstate phase transition within the MBL phase unrelated to the dynamical transition in d = 1, and thermal eigenstates at all parameters in d > 1. Near-term experiments in ultracold atomic systems and numerics can probe the dynamics generated by boundary layers and the emergence of l*-bits.

  1. A wide bandwidth CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K.; Wallace, R. W.; Robinson, C. R.

    1978-01-01

    A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage, and particularly to the problem of interfacing widely varying data rates. CCD shift register memories (8 Kbit) were used to construct a feasibility-model 128-Kbit buffer memory system. Serial data at rates between 150 kHz and 4.0 MHz can be stored in 4-Kbit, randomly accessible memory blocks. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic synchronization of the data input with the recirculating CCD memory block start address. System expansion to accommodate parallel inputs or a greater number of memory blocks can be performed in a modular fashion. Since the control logic does not grow proportionally with memory capacity, the power requirement per bit of storage can be reduced significantly in a larger system.

  2. Comparison and statistical analysis of four write stability metrics in bulk CMOS static random access memory cells

    NASA Astrophysics Data System (ADS)

    Qiu, Hao; Mizutani, Tomoko; Saraya, Takuya; Hiramoto, Toshiro

    2015-04-01

    The four commonly used metrics for write stability were measured and compared on the same set of 2048 (2k) six-transistor (6T) static random access memory (SRAM) cells fabricated in a 65 nm bulk technology. The preferred metric should be effective for yield estimation and help predict the edge of stability. Results demonstrate that all metrics share the same worst SRAM cell. On the other hand, compared to the butterfly curve, which shows non-normality, and the write N-curve, where no cell state flip happens, the bit-line and word-line margins have good normality as well as almost perfect correlation. As a result, both the bit-line and word-line methods prove to be the preferred write stability metrics.

  3. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    PubMed Central

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-01-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765

  4. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.

  5. Towards a heralded eigenstate-preserving measurement of multi-qubit parity in circuit QED

    NASA Astrophysics Data System (ADS)

    Huembeli, Patrick; Nigg, Simon E.

    2017-07-01

    Eigenstate-preserving multi-qubit parity measurements lie at the heart of stabilizer quantum error correction, which is a promising approach to mitigate the problem of decoherence in quantum computers. In this work we explore a high-fidelity, eigenstate-preserving parity readout for superconducting qubits dispersively coupled to a microwave resonator, where the parity bit is encoded in the amplitude of a coherent state of the resonator. Detecting photons emitted by the resonator via a current-biased Josephson junction yields information about the parity bit. We analyze theoretically the measurement back-action in the limit of a strongly coupled fast detector and show that in general such a parity measurement, while approximately quantum nondemolition, is not eigenstate preserving. To remediate this shortcoming we propose a simple dynamical decoupling technique during photon detection, which greatly reduces decoherence within a given parity subspace. Furthermore, by applying a sequence of fast displacement operations interleaved with the dynamical decoupling pulses, the natural bias of this binary detector can be efficiently suppressed. Finally, we introduce the concept of a heralded parity measurement, where a detector click guarantees successful multi-qubit parity detection even for finite detection efficiency.

  6. Exploration properties of biased evanescent random walkers on a one-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Esguerra, Jose Perico; Reyes, Jelian

    2017-08-01

    We investigate the combined effects of bias and evanescence on the characteristics of random walks on a one-dimensional lattice. We calculate the time-dependent return probability, eventual return probability, conditional mean return time, and the time-dependent mean number of visited sites of biased immortal and evanescent discrete-time random walkers on a one-dimensional lattice. We then extend the calculations to the case of a continuous-time step-coupled biased evanescent random walk on a one-dimensional lattice with an exponential waiting time distribution.
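
    A Monte Carlo sketch of such a walker (parameters are ours): at each step the walker first survives with probability q and then steps right with probability p; we estimate the eventual return probability and the mean number of distinct visited sites:

```python
# Monte Carlo sketch of a biased evanescent random walker on a 1D lattice.
import random

def walk(p=0.6, q=0.98, max_steps=10_000):
    """One realization; returns (ever returned to origin, #distinct sites)."""
    pos, visited, returned = 0, {0}, False
    for _ in range(max_steps):
        if random.random() > q:          # walker evanesces (dies)
            break
        pos += 1 if random.random() < p else -1
        visited.add(pos)
        if pos == 0:
            returned = True
    return returned, len(visited)

trials = [walk() for _ in range(20_000)]
print("eventual return prob ~", sum(r for r, _ in trials) / len(trials))
print("mean visited sites   ~", sum(v for _, v in trials) / len(trials))
```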

  7. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian random variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, given that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We consider two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the resulting bit strings. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond two, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significance level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit; hence, non-binary codes are essential to achieve acceptable performance.
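
    The two-bit quantization argument can be checked numerically. The sketch below generates correlated Gaussians at an SNR of -3 dB, quantizes each sample to a sign bit (MSB) and a magnitude bit (LSB), and estimates the mutual information between the 2-bit labels from the joint histogram; the magnitude threshold is our choice and is not optimized:

```python
# Monte Carlo estimate of I(A;B) for jointly quantized 2-bit labels.
import math, random
from collections import Counter

def quantize(x, t=0.8):
    """Two-bit label: sign (MSB) and magnitude-above-threshold (LSB)."""
    return (1 if x > 0 else 0, 1 if abs(x) > t else 0)

snr = 10 ** (-3 / 10)                      # -3 dB
sigma_n = math.sqrt(1 / snr)               # unit-variance signal at Alice
joint, N = Counter(), 200_000
for _ in range(N):
    a = random.gauss(0, 1)
    b = a + random.gauss(0, sigma_n)       # Bob sees a noisy copy
    joint[(quantize(a), quantize(b))] += 1

pa, pb = Counter(), Counter()
for (sa, sb), c in joint.items():
    pa[sa] += c
    pb[sb] += c
mi = sum(c / N * math.log2((c / N) / ((pa[sa] / N) * (pb[sb] / N)))
         for (sa, sb), c in joint.items())
print(f"I(A;B) with joint 2-bit labels ~ {mi:.3f} bits")  # close to ~0.22
```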

  8. Landsat-7 ETM+ on-orbit reflective-band radiometric characterization

    USGS Publications Warehouse

    Scaramuzza, P.L.; Markham, B.L.; Barsi, J.A.; Kaita, E.

    2004-01-01

    The Landsat-7 Enhanced Thematic Mapper Plus (ETM+) has been and continues to be radiometrically characterized using the Image Assessment System (IAS), a component of the Landsat-7 Ground System. Key radiometric properties analyzed include: overall, coherent, and impulse noise; bias stability; relative gain stability; and other artifacts. The overall instrument noise is characterized across the dynamic range of the instrument during solar diffuser deployments. Increases of less than 1% per year are observed in signal-independent (dark) noise levels, while signal-dependent noise is stable with time. Several coherent noise sources exist in ETM+ data, with scene-averaged magnitudes of up to 0.4 DN, and a noise component at 20 kHz whose magnitude varies across the scan and peaks at the image edges. Bit-flip noise does not exist on the ETM+. However, impulse noise due to charged-particle hits on the detector array has been discovered. The instrument bias is measured every scan line using a shutter. Most bands show less than 0.1 DN variation in bias across the instrument lifetime. The panchromatic band is the exception, where the variation approaches 2 DN and is related primarily to temperature. The relative gains of the detectors, i.e., each detector's gain relative to the band-average gain, have been stable to ±0.1% over the mission life. Two exceptions to this stability include band 2 detector 2, which dropped about 1% in gain about 3.5 years after launch and then stabilized, and band 7 detector 5, which has changed by several tenths of a percent several times since launch. Memory effect and scan-correlated shift, a hysteresis and a random change in bias between multiple states, respectively, both of which have been observed in previous Thematic Mapper sensors, have not been convincingly found in ETM+ data. Two artifacts, detector ringing and "oversaturation", affect a small amount of ETM+ data.

  9. Bit-level quantum color image encryption scheme with quantum cross-exchange operation and hyper-chaotic system

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian

    2018-06-01

    In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting a quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, a quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security, since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness, and greater unpredictability than low-dimensional chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.

  10. Toward Large-Graph Comparison Measures to Understand Internet Topology Dynamics

    DTIC Science & Technology

    2013-09-01

    continuously from randomly selected vantage points in these monitors to destination IP addresses. From each IPv4 /24 prefix on the Internet, a destination is...expected to be more similar. This was verified when the esd and vsd measures applied to this dataset gave a low reading. [5] An IPv4 address is a 32-bit...integer value. /24 is the prefix of the IPv4 network starting at a given address, having 24 bits allocated for the network prefix. [6] This utility

  11. Opportunistic Beamforming with Wireless Powered 1-bit Feedback Through Rectenna Array

    NASA Astrophysics Data System (ADS)

    Krikidis, Ioannis

    2015-11-01

    This letter deals with the opportunistic beamforming (OBF) scheme for multi-antenna downlink with spatial randomness. In contrast to conventional OBF, the terminals return only 1-bit feedback, which is powered by wireless power transfer through a rectenna array. We study two fundamental topologies for the combination of the rectenna elements; the direct-current combiner and the radio-frequency combiner. The beam outage probability is derived in closed form for both combination schemes, by using high order statistics and stochastic geometry.

  12. Quantum cryptography with entangled photons

    PubMed

    Jennewein; Simon; Weihs; Weinfurter; Zeilinger

    2000-05-15

    By realizing a quantum cryptography system based on polarization entangled photon pairs we establish highly secure keys, because a single photon source is approximated and the inherent randomness of quantum measurements is exploited. We implement a novel key distribution scheme using Wigner's inequality to test the security of the quantum channel, and, alternatively, realize a variant of the BB84 protocol. Our system has two completely independent users separated by 360 m, and generates raw keys at rates of 400-800 bits/s with bit error rates around 3%.

  13. Optical Security System Based on the Biometrics Using Holographic Storage Technique with a Simple Data Format

    NASA Astrophysics Data System (ADS)

    Jun, An Won

    2006-01-01

    We implement the first practical holographic security system using electrical biometrics that combines optical encryption and digital holographic memory technologies. Optical information for identification includes a picture of a face, a name, and a fingerprint, which have been spatially multiplexed by a random phase mask used as a decryption key. For decryption in our biometric security system, a bit-error-detection method that compares the digital bits of the live fingerprint with those of the fingerprint information extracted from the hologram is used.

  14. Comment on 'Two-way protocols for quantum cryptography with a nonmaximally entangled qubit pair'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin Sujuan; Gao Fei; Wen Qiaoyan

    2010-09-15

    Three protocols of quantum cryptography with a nonmaximally entangled qubit pair [Phys. Rev. A 80, 022323 (2009)] were recently proposed by Shimizu, Tamaki, and Fukasaka. The security of these protocols is based on the quantum-mechanical constraint for a state transformation between nonmaximally entangled states. However, we find that the second protocol is vulnerable to the correlation-elicitation attack. An eavesdropper can obtain the encoded bit M although she has no knowledge about the random bit R.

  15. Implications of scaling on static RAM bit cell stability and reliability

    NASA Astrophysics Data System (ADS)

    Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael

    1993-01-01

    In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long-term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high-density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of the analysis techniques presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft-error-rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. The results are used to provide comprehensive bit cell characterization, which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability, which can be accomplished during the early stages of product development.

  16. Plated wire random access memories

    NASA Technical Reports Server (NTRS)

    Gouldin, L. D.

    1975-01-01

    A program was conducted to construct 4096-word by 18-bit random access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.

  17. True randomness from an incoherent source

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2017-11-01

    Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
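
    A minimal sketch of Toeplitz-hashing extraction as described above (toy block sizes; real extractors use much larger blocks): an n-bit raw block is multiplied over GF(2) by a random m × n Toeplitz matrix defined by n + m - 1 seed bits, compressing it to m nearly uniform bits:

```python
# Minimal sketch of a Toeplitz-hashing randomness extractor over GF(2).
import random

def toeplitz_extract(raw, m, seed):
    """raw: list of n bits; seed: n+m-1 bits defining the Toeplitz matrix."""
    n = len(raw)
    assert len(seed) == n + m - 1
    out = []
    for i in range(m):
        # T[i][j] = seed[i - j + n - 1]; entries are constant along
        # diagonals, which is what makes the matrix Toeplitz.
        bit = 0
        for j in range(n):
            bit ^= seed[i - j + n - 1] & raw[j]
        out.append(bit)
    return out

random.seed(1)
raw = [int(random.random() < 0.7) for _ in range(64)]    # biased raw bits
seed = [random.getrandbits(1) for _ in range(64 + 16 - 1)]
print(toeplitz_extract(raw, 16, seed))                   # 16 extracted bits
```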

  18. Superconducting analog-to-digital converter with a triple-junction reversible flip-flop bidirectional counter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, G.S.

    1993-07-13

    A high-performance superconducting analog-to-digital converter is described, comprising: a bidirectional binary counter having n stages of triple-junction reversible flip-flops connected together in a cascade arrangement from the least significant bit (LSB) to the most significant bit (MSB), where n is the number of bits of the digital output, each triple-junction reversible flip-flop including first, second and third shunted Josephson tunnel junctions and a superconducting inductor connected in a bridge circuit, the Josephson junctions and the inductor forming upper and lower portions of the flip-flop, each reversible flip-flop being a bistable logic circuit in which the direction of the circulating current determines the state of the circuit; and means for applying an analog input current to the bidirectional counter; wherein the bidirectional counter algebraically counts incremental changes in the analog input current, increasing the binary count for positive incremental changes in the analog current and decreasing the binary count for negative incremental changes in the current, and wherein the counter does not require a gate bias, thus minimizing power dissipation.

  19. Single-Event Effect Performance of a Conductive-Bridge Memory EEPROM

    NASA Technical Reports Server (NTRS)

    Chen, Dakai; Wilcox, Edward; Berg, Melanie; Kim, Hak; Phan, Anthony; Figueiredo, Marco; Seidleck, Christina; LaBel, Kenneth

    2015-01-01

    We investigated the heavy ion single-event effect (SEE) susceptibility of the industry's first stand-alone memory based on conductive-bridge memory (CBRAM) technology. The device is available as an electrically erasable programmable read-only memory (EEPROM). We found that single-event functional interrupt (SEFI) is the dominant SEE type for each operational mode (standby, dynamic read, and dynamic write/read). SEFIs occurred even while the device was statically biased in standby mode. Worst-case SEFIs resulted in errors that filled the entire memory space. Power cycling did not always clear the errors, so the corrupted cells had to be reprogrammed in some cases. The device is also vulnerable to bit upsets during dynamic write/read tests, although the frequency of the upsets is relatively low. The linear energy transfer threshold for cell upset is between 10 and 20 MeV·cm^2/mg, with an upper-limit cross section of 1.6 × 10^-11 cm^2/bit (95% confidence level) at 10 MeV·cm^2/mg. In standby mode, the CBRAM array appears invulnerable to bit upsets.

  20. Fully digital programmable optical frequency comb generation and application.

    PubMed

    Yan, Xianglei; Zou, Xihua; Pan, Wei; Yan, Lianshan; Azaña, José

    2018-01-15

    We propose a fully digital programmable optical frequency comb (OFC) generation scheme based on binary phase-sampling modulation, wherein an optimized bit sequence is applied to phase modulate a narrow-linewidth light wave. Programming the bit sequence enables us to tune both the comb spacing and comb-line number (i.e., number of comb lines). The programmable OFCs are also characterized by ultra-flat spectral envelope, uniform temporal envelope, and stable bias-free setup. Target OFCs are digitally programmed to have 19, 39, 61, 81, 101, or 201 comb lines and to have a 100, 50, 20, 10, 5, or 1 MHz comb spacing. As a demonstration, a scanning-free temperature sensing system using a proposed OFC with 1001 comb lines was also implemented with a sensitivity of 0.89°C/MHz.

  1. Metasurfaced Reverberation Chamber.

    PubMed

    Sun, Hengyi; Li, Zhuo; Gu, Changqing; Xu, Qian; Chen, Xinlei; Sun, Yunhe; Lu, Shengchen; Martin, Ferran

    2018-01-25

    The concept of a metasurfaced reverberation chamber (RC) is introduced in this paper. It is shown that by coating the chamber wall with a rotating 1-bit random coding metasurface, it is possible to enlarge the test zone of the RC while maintaining field uniformity as good as that in a traditional RC with mechanical stirrers. A 1-bit random coding diffusion metasurface is designed to obtain all-direction backscattering under normal incidence. Three specific cases are studied for comparison: a (traditional) mechanical-stirrer RC, a mechanical-stirrer RC with a fixed diffusion metasurface, and an RC with a rotating diffusion metasurface. Simulation results show that the compact rotating diffusion metasurface can act as a stirrer with good stirring efficiency. By using such a rotating diffusion metasurface, the test region of the RC can be greatly extended.

  2. 50 Mbps free space direct detection laser diode optical communication system with Q = 4 PPM signaling

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Davidson, Frederic; Field, Christopher

    1990-01-01

    A 50 Mbps direct-detection optical communication system for use in an intersatellite link was constructed with an AlGaAs laser diode transmitter and a silicon avalanche photodiode detector. The system used a Q = 4 PPM format. The receiver consisted of a maximum-likelihood PPM detector and a timing recovery subsystem. The PPM slot clock was recovered at the receiver by using a transition detector followed by a PLL. The PPM word clock was recovered by using a second PLL whose input was derived from the presence of back-to-back PPM pulses in the received random PPM pulse sequences. The system achieved a bit error rate of 10⁻⁶ at fewer than 50 detected signal photons per information bit. The receiver was capable of acquiring and maintaining slot and word synchronization for received signal levels greater than 20 photons per information bit, at which the receiver bit error rate was about 0.01.
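
    A minimal model of Q = 4 PPM signaling with maximum-likelihood slot detection is sketched below, assuming ideal slot synchronization and Poisson photon counting; the mean photon numbers are illustrative, not the flight hardware's.

        import numpy as np

        rng = np.random.default_rng(1)
        Ks, Kb = 25.0, 0.5   # assumed signal/background photons per slot

        def ppm4_encode(bits):
            """Map bit pairs to slot indices 0..3 (2 bits per PPM word)."""
            return [2 * b0 + b1 for b0, b1 in zip(bits[0::2], bits[1::2])]

        def ml_detect(slot):
            """Count photons in the 4 slots; ML detection takes the largest."""
            counts = rng.poisson(Kb, 4)
            counts[slot] += rng.poisson(Ks)
            return int(np.argmax(counts))

        bits = rng.integers(0, 2, 10000)
        errors = sum(s != ml_detect(s) for s in ppm4_encode(bits))
        print("word error rate:", errors / (len(bits) // 2))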

  3. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  4. Method and apparatus for free-space quantum key distribution in daylight

    DOEpatents

    Hughes, Richard J.; Buttler, William T.; Lamoreaux, Steve K.; Morgan, George L.; Nordholt, Jane E.; Peterson, C. Glen; Kwiat, Paul G.

    2004-06-08

    A quantum cryptography apparatus securely generates a key to be used for secure transmission between a sender and a receiver connected by an atmospheric transmission link. A first laser outputs a timing bright light pulse; other lasers output polarized optical data pulses after having been enabled by a random bit generator. Output optics transmit output light from the lasers that is received by receiving optics. A first beam splitter receives light from the receiving optics, where a received timing bright light pulse is directed to a delay circuit for establishing a timing window for receiving light from the lasers and where an optical data pulse from one of the lasers has a probability of being either transmitted by the beam splitter or reflected by the beam splitter. A first polarizer receives transmitted optical data pulses to output one data bit value and a second polarizer receives reflected optical data pulses to output a second data bit value. A computer receives pulses representing receipt of the bright timing pulse and the first and second data bit values, where receipt of the first and second data bit values is indexed by the bright timing pulse.

  5. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    NASA Astrophysics Data System (ADS)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
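
    The noise model is simple to specify. Following the convention in which the bias η is the ratio of the Z-error probability to the combined X and Y probability (η = 1/2 recovers depolarizing noise, η → ∞ pure dephasing), a per-qubit error can be sampled as below; the convention and names are our reading of the abstract, not code from the paper.

        import random

        def sample_biased_pauli(p, eta):
            """Draw one Pauli error with total probability p and bias
            eta = P(Z) / (P(X) + P(Y)), assuming P(X) = P(Y)."""
            p_z = p * eta / (eta + 1.0)
            p_x = p / (2.0 * (eta + 1.0))
            r = random.random()
            if r < p_z:
                return "Z"
            if r < p_z + p_x:
                return "X"
            if r < p:
                return "Y"
            return "I"

        print([sample_biased_pauli(0.3, 10.0) for _ in range(10)])
        # mostly "I" and "Z" at a bias of 10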

  6. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used to operate the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods were of the same order of magnitude and agreed to within 50%. The integrated interconnect was also investigated in an optoelectronic processing architecture for a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
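
    The eye-diagram BER method the authors describe reduces to estimating the means and standard deviations of the two signal rails and converting the resulting Q factor into an error probability. A generic sketch of that standard Gaussian-noise calculation, with illustrative rail statistics:

        import math

        def ber_from_eye(mu1, mu0, sigma1, sigma0):
            """BER estimate from eye statistics under a Gaussian noise model:
            Q = (mu1 - mu0) / (sigma1 + sigma0), BER = 0.5*erfc(Q/sqrt(2))."""
            q = (mu1 - mu0) / (sigma1 + sigma0)
            return 0.5 * math.erfc(q / math.sqrt(2.0))

        # Illustrative one/zero rail statistics read off a digital scope:
        print(ber_from_eye(mu1=0.80, mu0=0.10, sigma1=0.06, sigma0=0.05))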

  7. A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling.

    PubMed

    Li, Bin-Bin; Wang, Ling

    2007-06-01

    This paper proposes a hybrid quantum-inspired genetic algorithm (HQGA) for the multiobjective flow shop scheduling problem (FSSP), which is a typical NP-hard combinatorial optimization problem with strong engineering backgrounds. On the one hand, a quantum-inspired GA (QGA) based on Q-bit representation is applied for exploration in the discrete 0-1 hyperspace by using the updating operator of the quantum gate and the genetic operators of Q-bits. Moreover, random-key representation is used to convert the Q-bit representation to a job permutation for evaluating the objective values of the schedule solution. On the other hand, a permutation-based GA (PGA) is applied both for performing exploration in the permutation-based scheduling space and for stressing exploitation of good schedule solutions. To evaluate solutions in a multiobjective sense, a randomly weighted linear-sum function is used in the QGA, and a nondominated sorting technique including classification of Pareto fronts and fitness assignment is applied in the PGA with regard to both proximity and diversity of solutions. To maintain the diversity of the population, two trimming techniques for the population are proposed. The proposed HQGA is tested on several multiobjective FSSPs. Simulation results and comparisons based on several performance metrics demonstrate the effectiveness of the proposed HQGA.
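
    Two ingredients of the HQGA lend themselves to a short sketch: observing a Q-bit string (each Q-bit collapses to 1 with probability |β|²) and random-key decoding, which turns a real-valued vector into a job permutation by ranking. The snippet below is a simplified illustration of those two steps only, not the full hybrid algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        def observe_qbits(beta_sq):
            """Collapse a Q-bit string: bit i is 1 with prob. |beta_i|^2."""
            return (rng.random(len(beta_sq)) < beta_sq).astype(int)

        def random_key_to_permutation(keys):
            """Random-key decoding: the ranks of the keys give the job order."""
            return np.argsort(keys)

        print(observe_qbits(np.full(5, 0.5)))            # e.g. [0 1 1 0 1]
        print(random_key_to_permutation(rng.random(5)))  # e.g. [3 0 4 1 2]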

  8. Software Obfuscation With Symmetric Cryptography

    DTIC Science & Technology

    2008-03-01

    [Excerpt from the report's index: a black-box analysis of y = a * b + c against random functions; Appendix C, a black-box analysis of Fibonacci against random functions; Figure 19, standard deviations of all Fibonacci output bits by metric; Figure 20, …] …caveat to encryption strength is that what may be strong presently may not always be strong; the Data Encryption Standard (DES) was once considered…

  9. Digital high speed programmable convolver

    NASA Astrophysics Data System (ADS)

    Rearick, T. C.

    1984-12-01

    A circuit module for rapidly calculating a discrete numerical convolution is described. A convolution such as finding the sum of the products of a 16-bit constant and a 16-bit variable is performed by a module which is programmable, so that the constant may be changed for a new problem. In addition, the module may be programmed to find the sum of the products of 4- and 8-bit constants and variables. RAMs (random-access memories) are loaded with the partial products of the selected constant and all possible variables. Then, when the actual variable is loaded, it acts as an address to find the correct partial product in the particular RAM. The partial products from all of the RAMs are shifted to the appropriate numerical power position (if necessary) and then added in adder elements.
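
    The lookup idea can be imitated directly in software: precompute the product of the fixed constant with every possible variable value, then let each incoming sample address its table. A sketch with small word widths (the real module slices 16-bit words and shifts partial products; sizes here are illustrative):

        def make_partial_product_ram(constant, var_bits=8):
            """Preload a RAM with constant * v for every possible variable v."""
            return [constant * v for v in range(1 << var_bits)]

        def convolve(samples, rams):
            """Sum of products by table lookup: each sample addresses its RAM."""
            return sum(ram[s] for ram, s in zip(rams, samples))

        constants = [3, 7, 11, 2]
        rams = [make_partial_product_ram(c) for c in constants]
        print(convolve([10, 20, 30, 40], rams))  # 3*10 + 7*20 + 11*30 + 2*40 = 580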

  10. Semi-counterfactual cryptography

    NASA Astrophysics Data System (ADS)

    Akshata Shenoy, H.; Srikanth, R.; Srinivas, T.

    2013-09-01

    In counterfactual quantum key distribution (QKD), two remote parties can securely share random polarization-encoded bits through the blocking rather than the transmission of particles. We propose a semi-counterfactual QKD, i.e., one where the secret bit is shared, and also encoded, based on the blocking or non-blocking of a particle. The scheme is thus semi-counterfactual and not based on polarization encoding. As with other counterfactual schemes and the Goldenberg-Vaidman protocol, but unlike BB84, the encoding states are orthogonal and security arises ultimately from single-particle non-locality. Unlike any of them, however, the secret bit generated is maximally indeterminate until the joint action of Alice and Bob. We prove the general security of the protocol, and study the most general photon-number-preserving incoherent attack in detail.

  11. Experimentally generated randomness certified by the impossibility of superluminal signals.

    PubMed

    Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K

    2018-04-01

    From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10⁻¹². These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.

  12. Image steganography based on 2^k correction and coherent bit length

    NASA Astrophysics Data System (ADS)

    Sun, Shuliang; Guo, Yongning

    2014-10-01

    In this paper, a novel algorithm is proposed. First, the edges of the cover image are detected with the Canny operator and the secret data are embedded in edge pixels. A sorting method is used to randomize the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2^k correction is applied to achieve better imperceptibility in the stego image. Experiments show that the proposed method outperforms LSB-3 and Jae-Gil Yu's in PSNR and capacity.
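
    The 2^k correction step works as follows: after forcing a pixel's k low bits to the secret bits, the pixel can drift from its original value by up to 2^k - 1; adding or subtracting 2^k (when the result stays in range) leaves the embedded bits untouched while pulling the stego value back toward the original. The sketch below is our reconstruction of that one step for 8-bit pixels, not the paper's full edge-based pipeline.

        def embed_with_2k_correction(pixel, secret, k):
            """Embed k secret bits in a pixel's k LSBs, then 2^k-correct."""
            step = 1 << k
            stego = (pixel & ~(step - 1)) | secret  # force the k LSBs
            # +/- 2^k keeps the k LSBs intact but can reduce the distortion.
            if stego - pixel > step // 2 and stego - step >= 0:
                stego -= step
            elif pixel - stego > step // 2 and stego + step <= 255:
                stego += step
            return stego

        # 130 -> 135 by plain LSB substitution; correction yields 127 instead:
        print(embed_with_2k_correction(130, 0b111, 3))  # 127, LSBs still 111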

  13. Driving Method for Compensating Reliability Problem of Hydrogenated Amorphous Silicon Thin Film Transistors and Image Sticking Phenomenon in Active Matrix Organic Light-Emitting Diode Displays

    NASA Astrophysics Data System (ADS)

    Shin, Min-Seok; Jo, Yun-Rae; Kwon, Oh-Kyong

    2011-03-01

    In this paper, we propose a driving method for compensating the electrical instability of hydrogenated amorphous silicon (a-Si:H) thin film transistors (TFTs) and the luminance degradation of organic light-emitting diode (OLED) devices for large active matrix OLED (AMOLED) displays. The proposed driving method senses the electrical characteristics of the a-Si:H TFTs and OLEDs using current integrators and compensates for them by an external compensation method. The threshold voltage shift is controlled using a negative bias voltage. After applying the proposed driving method, the measured error of the maximum emission current ranges from -1.23 to +1.59 least significant bits (LSBs) of a 10-bit gray scale under a threshold voltage shift ranging from -0.16 to 0.17 V.

  14. Design and implementation of Gm-APD array readout integrated circuit for infrared 3D imaging

    NASA Astrophysics Data System (ADS)

    Zheng, Li-xia; Yang, Jun-hao; Liu, Zhao; Dong, Huai-peng; Wu, Jin; Sun, Wei-feng

    2013-09-01

    A single-photon-detecting readout integrated circuit (ROIC) array capable of infrared 3D imaging by photon detection and time-of-flight measurement is presented in this paper. InGaAs avalanche photodiodes (APDs), dynamically biased in Geiger mode by gate-controlled active quenching circuits (AQCs), are used. The time of flight is measured by a highly accurate time-to-digital converter (TDC) integrated in the ROIC. For 3D imaging, a frame-rate control technique is applied to the pixel detection, so the APD of each pixel is controlled by an individual AQC that senses and quenches the avalanche current, providing a digital CMOS-compatible voltage pulse. After each first sense, the detector is reset to wait for the next frame operation. The counters use a two-segment coarse-fine architecture, where the coarse conversion is achieved by a 10-bit pseudo-random linear feedback shift register (LFSR) in each pixel and a 3-bit fine conversion is realized by a ring delay line shared by all pixels. The reference clock driving the LFSR counter can be generated within the ring-delay-line oscillator or provided by an external clock source. The circuit is designed and implemented in CSMC 0.5 μm standard CMOS technology, and the total chip area is around 2 mm × 2 mm for the 8×8-format ROIC with a 150 μm pixel pitch. Simulation results indicate that the time resolution of the proposed ROIC is better than 1 ns, and preliminary test results show that the circuit functions correctly.
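
    The coarse counter is pseudo-random rather than binary: a maximal-length 10-bit LFSR steps through 1023 distinct states, and the state latched at the stop event is decoded back to a tick count. A minimal Fibonacci LFSR sketch follows; the tap set [10, 7] is a standard maximal-length choice, assumed here because the abstract does not give the chip's actual taps.

        def lfsr10_states(seed=1):
            """10-bit maximal-length Fibonacci LFSR, taps [10, 7]."""
            state = seed & 0x3FF
            while True:
                yield state
                bit = (state ^ (state >> 3)) & 1   # feedback from taps 10, 7
                state = (state >> 1) | (bit << 9)

        # Decode table: latched LFSR state -> elapsed clock ticks.
        gen = lfsr10_states()
        decode = {next(gen): n for n in range(1023)}
        assert len(decode) == 1023                 # all 1023 states distinct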

  15. Neither fixed nor random: weighted least squares meta-regression.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2017-03-01

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis, and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
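
    Mechanically, the unrestricted WLS-MRA is ordinary weighted least squares with inverse-variance weights, with a multiplicative overdispersion factor estimated from the data rather than forced to one. A sketch of those mechanics (our reading of the estimator; variable names and the simulated data are illustrative):

        import numpy as np

        def wls_mra(effects, ses, moderator):
            """Unrestricted WLS meta-regression: weights 1/SE^2, with the
            multiplicative variance estimated from the weighted residuals."""
            X = np.column_stack([np.ones_like(effects), moderator])
            w = 1.0 / ses**2
            sw = np.sqrt(w)
            coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * effects, rcond=None)
            resid = sw * (effects - X @ coef)
            phi = resid @ resid / (len(effects) - X.shape[1])
            cov = phi * np.linalg.inv(X.T @ (w[:, None] * X))
            return coef, np.sqrt(np.diag(cov))

        rng = np.random.default_rng(2)
        ses = rng.uniform(0.05, 0.3, 40)             # per-study standard errors
        x = rng.normal(size=40)                      # a moderator variable
        y = 0.2 + 0.5 * x + rng.normal(scale=ses)    # simulated effect sizes
        print(wls_mra(y, ses, x))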

  16. Versatile optical system for static and dynamic thermomagnetic recording using a scanning laser microscope

    NASA Astrophysics Data System (ADS)

    Clegg, Warwick W.; Jenkins, David F. L.; Helian, Na; Windmill, James; Windmill, Robert

    2001-12-01

    Scanning laser microscopes (SLMs) have been used to characterise the magnetic domain properties of various magnetic and magneto-optical materials. The SLM in our laboratory has been designed to enable both static and dynamic read-write operations to be performed on stationary media. In a conventional (static) SLM, data bits are recorded thermo-magnetically by focusing a pulse of laser light onto the sample surface. If the laser beam has a Gaussian intensity distribution (TEM00) then so will the focused laser spot. The resultant temperature profile will largely mirror the intensity distribution of the focused spot, and in the region where the temperature is sufficiently high for switching to occur, in the presence of a bias field, a circular data bit will be recorded. However, in a real magneto-optical drive the bits are written onto non-stationary media, and the resultant bit will be non-circular. A versatile optical system has been developed to facilitate both recording and imaging of data bits. To simulate the action of a magneto-optical drive, the laser is pulsed via an acousto-optic modulator whilst being scanned across the sample using a galvanometer-mounted mirror, thus imitating a storage medium rotating above an MO head with high relative velocity between the beam and medium. Static recording is simply achieved by disabling the galvanometer scan mirror. Polar magneto-optic Kerr effect images are acquired using multiple-segment photo-detectors for diffraction-limited scanned-spot detection, with either specimen scanning for highest resolution or beam scanning for near real-time image acquisition. Results will be presented to illustrate the system's capabilities.

  17. Randomized clinical trials in dentistry: Risks of bias, risks of random errors, reporting quality, and methodologic quality over the years 1955–2013

    PubMed Central

    Armijo-Olivo, Susan; Cummings, Greta G.; Amin, Maryam; Flores-Mir, Carlos

    2017-01-01

    Objectives: To examine the risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions and the development of these aspects over time. Methods: We included 540 randomized clinical trials from 64 selected systematic reviews. We extracted, in duplicate, details from each of the selected randomized clinical trials with respect to publication and trial characteristics, reporting and methodologic characteristics, and Cochrane risk of bias domains. We analyzed data using logistic regression and Chi-square statistics. Results: Sequence generation was assessed to be inadequate (at unclear or high risk of bias) in 68% (n = 367) of the trials, while allocation concealment was inadequate in the majority of trials (n = 464; 85.9%). Blinding of participants and blinding of the outcome assessment were judged to be inadequate in 28.5% (n = 154) and 40.5% (n = 219) of the trials, respectively. A sample size calculation before the initiation of the study was not performed/reported in 79.1% (n = 427) of the trials, while the sample size was assessed as adequate in only 17.6% (n = 95) of the trials. Two thirds of the trials were not described as double blinded (n = 358; 66.3%), while the method of blinding was appropriate in 53% (n = 286) of the trials. We identified a significant decrease over time (1955–2013) in the proportion of trials assessed as having inadequately addressed methodological quality items (P < 0.05) in 30 out of the 40 quality criteria, or as being inadequate (at high or unclear risk of bias) in five domains of the Cochrane risk of bias tool: sequence generation, allocation concealment, incomplete outcome data, other sources of bias, and overall risk of bias. Conclusions: The risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions have improved over time; however, further efforts that contribute to the development of more stringent methodology and detailed reporting of trials are still needed. PMID:29272315

  18. Randomized clinical trials in dentistry: Risks of bias, risks of random errors, reporting quality, and methodologic quality over the years 1955-2013.

    PubMed

    Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G; Amin, Maryam; Flores-Mir, Carlos

    2017-01-01

    To examine the risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions and the development of these aspects over time. We included 540 randomized clinical trials from 64 selected systematic reviews. We extracted, in duplicate, details from each of the selected randomized clinical trials with respect to publication and trial characteristics, reporting and methodologic characteristics, and Cochrane risk of bias domains. We analyzed data using logistic regression and Chi-square statistics. Sequence generation was assessed to be inadequate (at unclear or high risk of bias) in 68% (n = 367) of the trials, while allocation concealment was inadequate in the majority of trials (n = 464; 85.9%). Blinding of participants and blinding of the outcome assessment were judged to be inadequate in 28.5% (n = 154) and 40.5% (n = 219) of the trials, respectively. A sample size calculation before the initiation of the study was not performed/reported in 79.1% (n = 427) of the trials, while the sample size was assessed as adequate in only 17.6% (n = 95) of the trials. Two thirds of the trials were not described as double blinded (n = 358; 66.3%), while the method of blinding was appropriate in 53% (n = 286) of the trials. We identified a significant decrease over time (1955-2013) in the proportion of trials assessed as having inadequately addressed methodological quality items (P < 0.05) in 30 out of the 40 quality criteria, or as being inadequate (at high or unclear risk of bias) in five domains of the Cochrane risk of bias tool: sequence generation, allocation concealment, incomplete outcome data, other sources of bias, and overall risk of bias. The risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions have improved over time; however, further efforts that contribute to the development of more stringent methodology and detailed reporting of trials are still needed.

  19. On VLSI Design of Rank-Order Filtering using DCRAM Architecture

    PubMed Central

    Lin, Meng-Chun; Dung, Lan-Rong

    2009-01-01

    This paper addresses the VLSI design of rank-order filtering (ROF) with a maskable memory for real-time speech and image processing applications. Based on a generic bit-sliced ROF algorithm, the proposed design uses a specially defined memory, called the dual-cell random-access memory (DCRAM), to realize the major operations of ROF: threshold decomposition and polarization. Using this memory-oriented architecture, the proposed ROF processor benefits from high flexibility, low cost, and high speed. The DCRAM can perform bit-sliced reads, partial writes, and pipelined processing. The bit-sliced read and partial write are driven by maskable registers. With recursive execution of the bit-sliced read and partial write, the DCRAM can effectively realize ROF in terms of cost and speed. The proposed design has been implemented using TSMC 0.18 μm 1P6M technology. As shown in the physical implementation, the core size is 356.1 × 427.7 μm², and the VLSI implementation of ROF can operate at 256 MHz with a 1.8 V supply. PMID:19865599
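
    Threshold decomposition and polarization amount to a short bit-serial routine: scan from the MSB down, count how many window samples are consistent with a 0 at the current bit, and keep only the samples that match the bit just decided. The sketch below is our reconstruction of the generic bit-sliced rank-order selection the hardware realizes, not the DCRAM circuit itself.

        def rank_order(window, rank, width=8):
            """Bit-sliced rank-order filter: return the rank-th smallest value
            (rank is 1-based), resolving one output bit per pass, MSB first."""
            candidates = list(window)
            result = 0
            for b in reversed(range(width)):
                zeros = [v for v in candidates if not (v >> b) & 1]
                if rank <= len(zeros):
                    candidates = zeros               # output bit b is 0
                else:
                    rank -= len(zeros)               # skip all smaller values
                    candidates = [v for v in candidates if (v >> b) & 1]
                    result |= 1 << b                 # output bit b is 1
            return result

        print(rank_order([13, 200, 7, 42, 42], rank=3))  # median of window: 42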

  20. Two-bit multi-level phase change random access memory with a triple phase change material stack structure

    NASA Astrophysics Data System (ADS)

    Gyanathan, Ashvini; Yeo, Yee-Chia

    2012-11-01

    This work demonstrates a novel two-bit multi-level device structure comprising three phase change material (PCM) layers separated by SiN thermal barrier layers. The triple PCM stack consisted of (from bottom to top) Ge2Sb2Te5 (GST), an ultrathin SiN barrier, nitrogen-doped GST, another ultrathin SiN barrier, and Ag0.5In0.5Sb3Te6. The PCM layers can selectively amorphize to form four different resistance levels ("00," "01," "10," and "11") using respective voltage pulses. Electrical characterization was extensively performed on these devices. Thermal analysis was also done to understand the physics behind the phase-changing characteristics of the two-bit memory devices. The melting and crystallization temperatures of the PCMs play important roles in the power consumption of the multi-level devices. The electrical resistivities and thermal conductivities of the PCMs and the SiN thermal barrier are also crucial factors contributing to the phase-changing behaviour of the PCMs in the two-bit multi-level PCRAM device.

  1. Communication channels secured from eavesdropping via transmission of photonic Bell states

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    1999-07-01

    This paper proposes a quantum communication scheme for sending a definite binary sequence while confirming the security of the transmission. The scheme is very suitable for sending a ciphertext in a secret-key cryptosystem so that we can detect any eavesdropper who attempts to decipher the key. Thus we can continue to use a secret key unless we detect eavesdropping and the security of a key that is used repeatedly can be enhanced to the level of one-time-pad cryptography. In our scheme, a pair of entangled photon twins is employed as a bit carrier which is encoded in a two-term superposition of four Bell states. Different bases are employed for encoding the binary sequence of a ciphertext and a random test bit. The photon twins are measured with a Bell state analyzer and any bit can be decoded from the resultant Bell state when the receiver is later notified of the coding basis through a classical channel. By opening the positions and the values of test bits, ciphertext can be read and eavesdropping is simultaneously detected.

  2. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.

  3. Conceptual design of a 10 to the 8th power bit magnetic bubble domain mass storage unit and fabrication, test and delivery of a feasibility model

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The conceptual design of a highly reliable 10⁸-bit bubble domain memory for the space program is described. The memory has random access to blocks of closed-loop shift registers, and utilizes self-contained bubble domain chips with on-chip decoding. Trade-off studies show that the highest reliability and lowest power dissipation are obtained when the memory is organized on a bit-per-chip basis. The final design has 800 bits/register, 128 registers/chip, 16 chips/plane, and 112 planes, of which only seven are activated at a time. A word has 64 data bits + 32 check bits, used in a 16-adjacent code to provide correction of any combination of errors in one plane. A 100 kHz maximum rotational frequency keeps power low (equal to or less than 25 watts) and also allows asynchronous operation. The data rate is 6.4 megabits/sec; access time is 200 msec to an 800-word block and an additional 4 msec (average) to a word. The fabrication and operation are also described for a 64-bit bubble domain memory chip designed to test the concept of on-chip magnetic decoding. Access to one of the chip's four shift registers for the read, write, and clear functions is by means of bubble domain decoders utilizing the interaction between a conductor line and a bubble.

  4. Efficacy trial of bioresonance in children with atopic dermatitis.

    PubMed

    Schöni, M H; Nikolaizik, W H; Schöni-Affolter, F

    1997-03-01

    Single case reports and uncontrolled studies claim significant improvements in patients with atopic diseases treated with bioresonance therapy, also called biophysical information therapy (BIT). To assess the efficacy of this alternative method of treatment, we performed a conventional double-blind parallel-group study in children hospitalized for long-lasting atopic dermatitis. Over a period of 1.5 years, 32 children with atopic dermatitis, age range 1.5-16.8 years and hospitalized for 4-6 weeks at the Alpine Children's Hospital Davos, Switzerland, were randomized according to sex, age and severity of the skin disease to receive conventional inpatient therapy and either a putatively active or a sham (placebo) BIT treatment. Short- and long-term outcomes within 1 year were assessed by skin symptom scores, sleep and itch scores, blood cell activation markers of allergy, and a questionnaire. Hospitalization and conventional therapy in a high-altitude climate resulted in immediate and sustained amelioration of the disease state in both the BIT-treated and sham-treated groups. BIT had no significant additive measurable effect on the outcome variables determined in this study. The statement by protagonists of this alternative form of therapy that BIT can considerably influence or even cure atopic dermatitis was not confirmed using, for the first time, a conventional double-blind study design. Considering the high costs and false promises caused by the promoters of this kind of therapy, it is concluded that BIT has no place in the treatment of children with atopic dermatitis.

  5. Various Attractors, Coexisting Attractors and Antimonotonicity in a Simple Fourth-Order Memristive Twin-T Oscillator

    NASA Astrophysics Data System (ADS)

    Zhou, Ling; Wang, Chunhua; Zhang, Xin; Yao, Wei

    By replacing the resistor in a Twin-T network with a generalized flux-controlled memristor, this paper proposes a simple fourth-order memristive Twin-T oscillator. Rich dynamical behaviors can be observed in this system. The most striking feature is that the system has various periodic orbits and various chaotic attractors generated by adjusting the parameter b. At the same time, coexisting attractors and antimonotonicity are also detected (notably, two full Feigenbaum remerging trees in series are observed in this autonomous chaotic system). Their dynamical features are analyzed by phase portraits, Lyapunov exponents, bifurcation diagrams and basins of attraction. Moreover, hardware experiments on a breadboard are carried out; the experimental measurements are in accordance with the simulation results. Finally, a multi-channel random bit generator is designed for encryption applications. Numerical results illustrate the usefulness of the random bit generator.

  6. Optical image encryption using fresnel zone plate mask based on fast walsh hadamard transform

    NASA Astrophysics Data System (ADS)

    Khurana, Mehak; Singh, Hukum

    2018-05-01

    A new symmetric encryption technique using a Fresnel Zone Plate (FZP) mask based on the Fast Walsh-Hadamard Transform (FWHT) is proposed for security enhancement. In this technique, the bits of the plain image are first randomized by shuffling. The resulting scrambled image is then masked with the FZP using symmetric encryption in the FWHT domain to obtain the final encrypted image. The FWHT is used in the cryptosystem to protect the image data from quantization error and to reconstruct the image perfectly. The FZP used in the proposed scheme increases the key space and makes the system robust to many traditional attacks. The effectiveness and robustness of the proposed cryptosystem have been analyzed on the basis of various parameters by simulation in MATLAB 8.1.0 (R2012b). The experimental results highlight the suitability of the proposed cryptosystem and indicate that the system is secure.
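
    The transform at the core of the scheme is standard; a self-contained in-place fast Walsh-Hadamard butterfly (the generic algorithm, unnormalized convention) is sketched below.

        def fwht(a):
            """In-place fast Walsh-Hadamard transform; len(a) a power of 2."""
            n, h = len(a), 1
            while h < n:
                for i in range(0, n, 2 * h):
                    for j in range(i, i + h):
                        x, y = a[j], a[j + h]
                        a[j], a[j + h] = x + y, x - y
                h *= 2
            return a

        data = [1, 0, 1, 0, 0, 1, 1, 0]
        spectrum = fwht(data[:])
        # Applying the transform twice returns n times the original sequence,
        # which is what makes perfect reconstruction possible:
        assert fwht(spectrum[:]) == [8 * v for v in data]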

  7. Stroop interference and food intake.

    PubMed

    Overduin, J; Jansen, A; Louwerse, E

    1995-11-01

    The Stroop task is aimed at assessing attentional bias. Words are displayed one by one on a computer screen and subjects are instructed to name the color in which every word is printed. The attentional bias is supposed to be reflected in the extent to which the word meanings interfere with the speed of color naming: The longer the color naming latency, the larger the attentional bias. Experiments using this task have demonstrated attentional bias for eating and body shape-related words in bulimic, anorexic, and restrained subjects. Explanations of these results have generally been formulated in terms of restricted food intake or emotional concerns about food and body shape-related themes. In contrast, in the present article it was proposed that Stroop interference might reflect a tendency either to withdraw or approach food or body shape-related stimuli. Fifty-one subjects (25 unrestrained, 26 restrained) were administered a Stroop task containing neutral, food, and body shape-related words. There were two conditions to which subjects were randomly allocated: the "appetizer" and "no-appetizer" condition. The appetizer was a bit of pudding to be ingested by the subject just before the Stroop task. Following the Stroop task an ice cream taste test was presented in which the subjects were allowed to eat as much as they liked. The amount of ice cream eaten was registered secretly. The results show that in unrestrained subjects Stroop interference for food words was found only in the appetizer condition. Restrained subjects, however, showed a permanent interference for food words. A significant correlation of .58 between Stroop food-word interference and ice cream intake was found only in unrestrained subjects. In restrained eaters the correlation was near 0. No effect of condition or restraint was found on Stroop body shape-word interference. The findings indicate that (1) ingestion of an appetizer seems to have evoked an attentional bias for food words in nonrestraints that correlated with food intake; (2) restrained eaters showed continuous attentional bias. This appears to support the urge-to-act explanation of Stroop interference. The lack of correlation between restraints' attentional bias and ad lib food intake could have been caused by inhibition of approach which is one of the characteristics of restrained eating: The present procedure seems not to have triggered disinhibited eating in these subjects. Among other things it is concluded that Stroop interference, as a measure of "craving" triggered by food cue, might be a useful aid in assessing the risk of relapse in treated binge eating patients.

  8. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging

    PubMed Central

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-01-01

    The programmable and digital metamaterials or metasurfaces presented recently have huge potentials in designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging in the microwave frequency. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce randomly programmable metasurface to transform the masks of modulators, in which their rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using the frequency agility, the proposed imaging system makes use of variable modulators under single frequency, which can avoid the object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate the random codes by computer to achieve different transmission patterns, which can support enough multiple modes to solve the inverse-scattering problem in the single-sensor imaging. Simple experimental results are presented in the microwave frequency, validating our new single-sensor and single-frequency imaging system. PMID:27025907

  10. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging.

    PubMed

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-03-30

    The programmable and digital metamaterials or metasurfaces presented recently have huge potentials in designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging in the microwave frequency. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce randomly programmable metasurface to transform the masks of modulators, in which their rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using the frequency agility, the proposed imaging system makes use of variable modulators under single frequency, which can avoid the object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate the random codes by computer to achieve different transmission patterns, which can support enough multiple modes to solve the inverse-scattering problem in the single-sensor imaging. Simple experimental results are presented in the microwave frequency, validating our new single-sensor and single-frequency imaging system.
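
    The single-sensor measurement model is linear: each coding pattern applied to the metasurface yields one scalar reading, and stacking enough independent patterns gives an invertible system for the scene. A noiseless least-squares toy version of that principle (illustrative sizes; the actual experiment solves a harder inverse-scattering problem):

        import numpy as np

        rng = np.random.default_rng(3)
        n_pixels, n_patterns = 64, 96    # 8x8 scene, 96 random masks

        scene = rng.random(n_pixels)                               # unknown scene
        masks = rng.integers(0, 4, (n_patterns, n_pixels)) / 3.0   # 2-bit codes
        readings = masks @ scene         # one sensor reading per mask

        recovered, *_ = np.linalg.lstsq(masks, readings, rcond=None)
        print(np.allclose(recovered, scene))  # True: overdetermined, noiseless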

  11. A Unified Steganalysis Framework

    DTIC Science & Technology

    2013-04-01

    …contains more than 1800 images of different scenes. In the experiments, we used four JPEG-based steganography techniques: Outguess [13], F5 [16], model… also compressed these images again since some of the steganography methods are double-compressing the images. Stego-images are generated by embedding… randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined…

  12. Penna Bit-String Model with Constant Population

    NASA Astrophysics Data System (ADS)

    de Oliveira, P. M. C.; de Oliveira, S. Moss; Sá Martins, J. S.

    We removed from the Penna model for biological aging any random killing Verhulst factor. Deaths are due only to genetic diseases and the population size is fixed, instead of fluctuating around some constant value. We show that these modifications give qualitatively the same results obtained in an earlier paper, where the random killings (used to avoid an exponential increase of the population) were applied only to newborns.
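
    A constant-population step in the spirit of the abstract can be sketched in a few lines: deaths come only from expressed deleterious mutations reaching the threshold, and each death is immediately replaced by a mutated copy of a surviving, reproduction-age individual. Genome length, threshold, mutation rate, and reproduction age below are illustrative, not the paper's values.

        import random

        L, T, M, R = 32, 3, 1, 8  # genome bits, threshold, mutations, repr. age

        def newborn(parent_genome):
            """Copy the parent's bit-string, adding M random bad mutations."""
            g = parent_genome
            for _ in range(M):
                g |= 1 << random.randrange(L)
            return g

        def year(pop):
            for ind in pop:
                ind["age"] += 1
            # Death only from genetics: T or more mutations expressed so far.
            alive = [i for i in pop
                     if bin(i["genome"] & ((1 << i["age"]) - 1)).count("1") < T]
            parents = [i for i in alive if i["age"] >= R] or alive
            # Constant population: every death is replaced by a newborn.
            while len(alive) < len(pop):
                p = random.choice(parents)
                alive.append({"genome": newborn(p["genome"]), "age": 0})
            return alive

        population = [{"genome": 0, "age": 0} for _ in range(1000)]
        for _ in range(50):
            population = year(population)
        print(sum(i["age"] == 0 for i in population), "births in the last year")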

  13. Nonvolatile GaAs Random-Access Memory

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.; Stadler, Henry L.; Wu, Jiin-Chuan

    1994-01-01

    Proposed random-access integrated-circuit electronic memory offers nonvolatile magnetic storage. Bits stored magnetically and read out with Hall-effect sensors. Advantages include short reading and writing times and high degree of immunity to both single-event upsets and permanent damage by ionizing radiation. Use of same basic material for both transistors and sensors simplifies fabrication process, with consequent benefits in increased yield and reduced cost.

  14. Modeling the Overalternating Bias with an Asymmetric Entropy Measure

    PubMed Central

    Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea

    2016-01-01

    Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criteria of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that can be ideal to account for such bias and to quantify subjective randomness. We fitted Marcellin's entropy and Renyi's entropy (a generalized form of uncertainty measure comprising many different kinds of entropies) to experimental data found in the literature with the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy compared to Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We concluded that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness that can be useful in psychological research about randomness perception. PMID:27458418

  15. Forward-biased nanophotonic detector for ultralow-energy dissipation receiver

    NASA Astrophysics Data System (ADS)

    Nozaki, Kengo; Matsuo, Shinji; Fujii, Takuro; Takeda, Koji; Shinya, Akihiko; Kuramochi, Eiichi; Notomi, Masaya

    2018-04-01

    Generally, reverse-biased photodetectors (PDs) are used for high-speed optical receivers. The forward-voltage region is utilized only in solar cells, and such photovoltaic operation does not normally combine high efficiency with high-speed operation. Here we report that photonic-crystal waveguide PDs enable forward-biased high-speed operation at 40 Gbit/s while keeping a high responsivity (0.88 A/W). To our knowledge, this is the first demonstration of forward-biased PDs with high responsivity. This achievement is attributed to the ultracompactness of our PD and the strong light confinement within the absorber and depleted regions, which enable efficient photocarrier generation and fast extraction. This result indicates that it is possible to construct a high-speed, ultracompact photoreceiver without an electrical amplifier or an external bias circuit. Since no electrical energy is required, we estimate that the energy consumed is just the optical energy of the injected signal pulse, about 1 fJ/bit. Hence, it will lead to an ultimately efficient and highly integrable optical-to-electrical converter on a chip, which will be a key ingredient for dense nanophotonic communication and processors.

  16. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers represent a fundamental ingredient for secure communications and numerical simulation, as well as for games and for Information Science in general. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extraction algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499

  17. Directional Bias and Pheromone for Discovery and Coverage on Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Berenhaut, Kenneth S.; Oehmen, Christopher S.

    2012-09-11

    Natural multi-agent systems often rely on “correlated random walks” (random walks that are biased toward a current heading) to distribute their agents over a space (e.g., for foraging, search, etc.). Our contribution involves creation of a new movement and pheromone model that applies the concept of heading bias in random walks to a multi-agent, digital-ants system designed for cyber-security monitoring. We examine the relative performance effects of both pheromone and heading bias on speed of discovery of a target and search-area coverage in a two-dimensional network layout. We found that heading bias was unexpectedly helpful in reducing search time and that it was more influential than pheromone for improving coverage. We conclude that while pheromone is very important for rapid discovery, heading bias can also greatly improve both performance metrics.
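
    A correlated random walk is straightforward to simulate: each step's heading is the previous heading plus a random turn, and the concentration of the turn distribution sets the strength of the bias. A sketch using von Mises turning angles (our choice of distribution; the report's exact model is not specified here):

        import math
        import random

        def correlated_walk(steps, kappa=4.0):
            """2-D correlated random walk: turning angles are von Mises
            distributed around zero, so larger kappa means a stronger bias
            toward the current heading; kappa = 0 gives uniform turns."""
            x = y = 0.0
            heading = random.uniform(-math.pi, math.pi)
            path = [(x, y)]
            for _ in range(steps):
                heading += random.vonmisesvariate(0.0, kappa)  # biased turn
                x, y = x + math.cos(heading), y + math.sin(heading)
                path.append((x, y))
            return path

        print(correlated_walk(5, kappa=8.0))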

  18. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    NASA Technical Reports Server (NTRS)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.

  19. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2⁸) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
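
    The innermost (n, n-16) code appends a 16-bit CRC to each frame. A bit-level sketch of the CCSDS-recommended CRC-16 (polynomial x^16 + x^12 + x^5 + 1, i.e., 0x1021, with an all-ones preset; our reading of the standard parameters) follows:

        def crc16_ccsds(data: bytes) -> int:
            """CCSDS frame CRC-16: poly 0x1021, init 0xFFFF, no final XOR."""
            crc = 0xFFFF
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
                    crc &= 0xFFFF
            return crc

        frame = b"CCSDS transfer frame payload"
        protected = frame + crc16_ccsds(frame).to_bytes(2, "big")
        # Receiver check: the CRC over (payload + appended CRC) must be zero.
        assert crc16_ccsds(protected) == 0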

  20. Quantum random access memory.

    PubMed

    Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo

    2008-04-25

    A random access memory (RAM) uses n bits to randomly address N = 2ⁿ distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially fewer gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
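
    The scaling claim can be illustrated classically: routing an n-bit address down a binary tree sets one switch per level, so a memory call touches n = log2(N) switches rather than all N decoder outputs. A toy routing count (illustrative and classical only; the quantum architecture routes superpositions):

        def route(address_bits):
            """Follow address bits down a heap-indexed binary tree; the nodes
            visited are the only switches that must be thrown."""
            node, path = 1, []
            for bit in address_bits:
                path.append(node)
                node = 2 * node + bit        # 0 = left child, 1 = right child
            return path

        addr = [1, 0, 1, 1, 0, 1, 0, 0]      # n = 8 address bits, N = 256 cells
        print(len(route(addr)), "switches thrown for", 2 ** len(addr), "cells")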

  1. Microcontroller for automation application

    NASA Technical Reports Server (NTRS)

    Cooper, H. W.

    1975-01-01

    A microcontroller currently being developed for automation applications is described. It is basically an 8-bit microcomputer with 40K bytes of random-access memory/read-only memory, and it can control a maximum of 12 devices through standard 15-line interface ports.

  2. Evolutionarily stable and convergent stable strategies in reaction-diffusion models for conditional dispersal.

    PubMed

    Lam, King-Yeung; Lou, Yuan

    2014-02-01

    We consider a mathematical model of two competing species for the evolution of conditional dispersal in a spatially varying, but temporally constant environment. Two species are different only in their dispersal strategies, which are a combination of random dispersal and biased movement upward along the resource gradient. In the absence of biased movement or advection, Hastings showed that the mutant can invade when rare if and only if it has smaller random dispersal rate than the resident. When there is a small amount of biased movement or advection, we show that there is a positive random dispersal rate that is both locally evolutionarily stable and convergent stable. Our analysis of the model suggests that a balanced combination of random and biased movement might be a better habitat selection strategy for populations.
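
    A standard form of the model class in question, reconstructed from the abstract (so the exact notation is an assumption): with resource density m(x), random dispersal rates μ, ν, and advection strengths α, β up the resource gradient, the two competitors u and v satisfy

        \begin{aligned}
        u_t &= \nabla \cdot (\mu \nabla u - \alpha u \nabla m) + u\,(m - u - v),\\
        v_t &= \nabla \cdot (\nu \nabla v - \beta v \nabla m) + v\,(m - u - v),
        \end{aligned}

    with no-flux boundary conditions on a bounded habitat; the mutant and resident differ only through their dispersal pairs (μ, α) and (ν, β).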

  3. Acquisition of shape information in working memory, as a function of viewing time and number of consecutive images: evidence for a succession of discrete storage classes.

    PubMed

    Ninio, J

    1998-07-01

    The capacity of visual working memory was investigated using abstract images that were slightly distorted N×N (generally N = 8) square lattices of black or white randomly selected elements. After viewing an image, or a sequence of images, the subjects viewed pairs of images containing the test image and a distractor image derived from the first one by changing the black or white value of q randomly selected elements. The number q was adjusted in each experiment to the difficulty of the task and the abilities of the subject. The fraction of recognition errors, given q and N, was used to evaluate the number M of bits memorized by the subject. For untrained subjects, this number M varied in a biphasic manner as a function of the time t of presentation of the test image: it was on average 13 bits for 1 s, 16 bits for 2 to 5 s, and 20 bits for 8 s. The slow pace of acquisition, from 1 to 8 s, seems due to encoding difficulties, and not to channel capacity limitations. Beyond 8 s, M(t), accurately determined for one subject, followed a square-root law, in agreement with 19th-century observations on the memorization of lists of digits. When two consecutive 8×8 images were viewed and tested in the same order, the number of memorized bits was downshifted by a nearly constant amount, independent of t and equal on average to 6-7 bits. Across the subjects, the shift was independent of M. When two consecutive test images were related, the recognition errors decreased for both images, whether the testing was performed in the presentation or the reverse order. Studies involving three subjects indicate that, when viewing m consecutive images, the average amount of information captured per image varies with m in a stepwise fashion. The first two step boundaries were around m = 3 and m = 9-12. The data are compatible with a model of organization of working memory in several successive layers containing increasing numbers of units; the more remote a unit, the lower the rate at which it may acquire encoded information. Copyright 1998 Elsevier Science B.V.

  4. Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling

    NASA Astrophysics Data System (ADS)

    Bierhorst, Peter; Shalm, Lynden K.; Mink, Alan; Jordan, Stephen; Liu, Yi-Kai; Rommal, Andrea; Glancy, Scott; Christensen, Bradley; Nam, Sae Woo; Knill, Emanuel

    Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment to obtain data containing randomness that cannot be predicted by any theory that does not also allow the sending of signals faster than the speed of light. To certify and quantify the randomness, we develop a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtain 256 new random bits, uniform to within 10^-3.

  5. Attentional bias modification training for insomnia: A double-blind placebo controlled randomized trial

    PubMed Central

    Lancee, Jaap; Yasiney, Samya L.; Brendel, Ruben S.; Boffo, Marilisa; Clarke, Patrick J. F.; Salemink, Elske

    2017-01-01

    Background Attentional bias toward sleep-related information is believed to play a key role in insomnia. If attentional bias is indeed of importance, changing this bias should then in turn have effects on insomnia complaints. In this double-blind placebo controlled randomized trial we investigated the efficacy of attentional bias modification training in the treatment of insomnia. Method We administered baseline, post-test, and one-week follow-up measurements of insomnia severity, sleep-related worry, depression, and anxiety. Participants meeting DSM-5 criteria for insomnia were randomized into an attentional bias training group (n = 67) or a placebo training group (n = 70). Both groups received eight training sessions over the course of two weeks. All participants kept a sleep diary for four consecutive weeks (one week before until one week after the training sessions). Results There was no additional benefit for the attentional bias training over the placebo training on sleep-related indices/outcome measures. Conclusions The absence of the effect may be explained by the fact that there was neither attentional bias at baseline nor any reduction in the bias after the training. Either way, this study gives no support for attentional bias modification training as a stand-alone intervention for ameliorating insomnia complaints. PMID:28423038

  6. Effect of using different cover image quality to obtain robust selective embedding in steganography

    NASA Astrophysics Data System (ADS)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e., other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting condition; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection scatters the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the image quality used for the cover images has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
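
    To make the block-selection idea concrete, the following is a minimal Python sketch of embedding in a higher bit-plane of high-entropy blocks only. The 8x8 block size, the entropy threshold, and the choice of the third-lowest bit-plane are illustrative assumptions, not the paper's exact parameters.

      import numpy as np

      def block_entropy(block):
          """Shannon entropy of an 8-bit block's gray-level histogram."""
          counts = np.bincount(block.ravel(), minlength=256)
          p = counts[counts > 0] / block.size
          return -np.sum(p * np.log2(p))

      def embed(cover, bits, block=8, plane=2, ent_min=4.0):
          """Embed bits into the chosen bit-plane of sufficiently busy blocks."""
          stego = cover.copy()
          it = iter(bits)
          mask = 255 ^ (1 << plane)                  # clears the target bit
          h, w = stego.shape
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  blk = stego[y:y+block, x:x+block]
                  if block_entropy(blk) < ent_min:
                      continue                       # skip flat blocks
                  for idx in np.ndindex(blk.shape):
                      b = next(it, None)
                      if b is None:
                          return stego               # message exhausted
                      blk[idx] = (blk[idx] & mask) | (b << plane)
          return stego

    Because only blocks passing the entropy test carry payload, the message ends up scattered over the textured regions of the cover, as the abstract describes.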

  7. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X.

    PubMed

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A; Sondhi, S L

    2017-12-13

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than the direct, many-body, exact diagonalization in the spin variables is able to access. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'. © 2017 The Author(s).

  8. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A.; Sondhi, S. L.

    2017-10-01

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than the direct, many-body, exact diagonalization in the spin variables is able to access. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.

  9. On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial

    ERIC Educational Resources Information Center

    Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean

    2017-01-01

    Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…

  10. Stochastic p -Bits for Invertible Logic

    NASA Astrophysics Data System (ADS)

    Camsari, Kerem Yunus; Faria, Rafatul; Sutton, Brian M.; Datta, Supriyo

    2017-07-01

    Conventional semiconductor-based logic and nanomagnet-based memory devices are built out of stable, deterministic units such as standard metal-oxide semiconductor transistors, or nanomagnets with energy barriers in excess of ≈40-60 kT. In this paper, we show that unstable, stochastic units, which we call "p-bits," can be interconnected to create robust correlations that implement precise Boolean functions with impressive accuracy, comparable to standard digital circuits. At the same time, they are invertible, a unique property that is absent in standard digital circuits. When operated in the direct mode, the input is clamped, and the network provides the correct output. In the inverted mode, the output is clamped, and the network fluctuates among all possible inputs that are consistent with that output. First, we present a detailed implementation of an invertible gate to bring out the key role of a single three-terminal transistorlike building block to enable the construction of correlated p-bit networks. The results for this specific, CMOS-assisted nanomagnet-based hardware implementation agree well with those from a universal model for p-bits, showing that p-bits need not be magnet based: any three-terminal tunable random bit generator should be suitable. We present a general algorithm for designing a Boltzmann machine (BM) with a symmetric connection matrix [J] (J_ij = J_ji) that implements a given truth table with p-bits. The [J] matrices are relatively sparse with a few unique weights for convenient hardware implementation. We then show how BM full adders can be interconnected in a partially directed manner (J_ij ≠ J_ji) to implement large logic operations such as 32-bit binary addition. Hundreds of stochastic p-bits get precisely correlated such that the correct answer out of 2^33 (≈8×10^9) possibilities can be extracted by looking at the statistical mode or majority vote of a number of time samples. With perfect directivity (J_ji = 0) a small number of samples is enough, while for less directed connections more samples are needed, but even in the former case logical invertibility is largely preserved. This combination of digital accuracy and logical invertibility is enabled by the hybrid design that uses bidirectional BM units to construct circuits with partially directed interunit connections. We establish this key result with extensive examples including a 4-bit multiplier which in inverted mode functions as a factorizer.
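
    The universal p-bit model mentioned above is easy to simulate. Below is a minimal Python sketch assuming the commonly quoted update rule, in which p-bit i outputs +1 with probability (1 + tanh(I_i))/2 for input I_i = I0*(sum_j J_ij m_j + h_i); the three-p-bit [J] and h shown here are illustrative values chosen so that the four valid rows of an AND truth table (C = A AND B, bipolar coding) are the low-energy states, not matrices taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      J = np.array([[0., -1., 2.],
                    [-1., 0., 2.],
                    [2., 2., 0.]])   # symmetric connection matrix [J]
      h = np.array([1., 1., -2.])    # local biases
      I0 = 2.0                       # interconnection strength

      m = rng.choice([-1.0, 1.0], size=3)
      counts = {}
      for step in range(20000):
          for i in rng.permutation(3):          # sequential, Gibbs-like updates
              I = I0 * (J[i] @ m + h[i])        # synaptic input to p-bit i
              m[i] = 1.0 if rng.uniform(-1, 1) < np.tanh(I) else -1.0
          if step >= 2000:                      # tally states after burn-in
              key = tuple(int(v) for v in m)
              counts[key] = counts.get(key, 0) + 1

      # The four most frequent (A, B, C) states should be the valid AND rows:
      # (-1,-1,-1), (-1,1,-1), (1,-1,-1), (1,1,1).
      print(sorted(counts.items(), key=lambda kv: -kv[1])[:4])

    Clamping C to +1 and re-running illustrates the inverted mode: the network then spends its time on the input pairs consistent with that output.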

  11. Weighted re-randomization tests for minimization with unbalanced allocation.

    PubMed

    Han, Baoguang; Yu, Menggang; McEntegart, Damian

    2013-01-01

    The re-randomization test has been considered a robust alternative to traditional population-model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and thus compromised in power. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.
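
    The logic of a fixed-entry-order re-randomization test is straightforward to sketch. In the toy Python version below, a plain 2:1 Bernoulli allocation stands in for minimization and no weighting is applied; the paper's contribution is precisely to add weights that correct the non-uniform re-allocation probabilities arising under minimization with unequal allocation.

      import numpy as np

      rng = np.random.default_rng(0)

      def allocate(n, p=2/3):
          """Placeholder for the allocation procedure (here 2:1 randomization)."""
          return rng.random(n) < p                  # True = treatment

      def rerandomization_pvalue(y, arm, n_rep=10000):
          """Monte Carlo p-value: re-run the allocation in fixed entry order."""
          observed = y[arm].mean() - y[~arm].mean()
          stats = np.empty(n_rep)
          for r in range(n_rep):
              a = allocate(len(y))
              stats[r] = y[a].mean() - y[~a].mean()
          return np.mean(np.abs(stats) >= abs(observed))   # two-sided

      y = rng.normal(size=90)             # outcomes under the null
      arm = allocate(90)                  # the trial's actual allocation
      print(rerandomization_pvalue(y, arm))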

  12. Improved LSB matching steganography with histogram characters reserved

    NASA Astrophysics Data System (ADS)

    Chen, Zhihong; Liu, Wenyao

    2008-03-01

    This letter builds on research into the LSB (least significant bit, i.e., the last bit of a binary pixel value) matching steganographic method and on steganalytic methods that target the histograms of cover images, and proposes a modification to LSB matching. In LSB matching, if the LSB of the next cover pixel matches the next bit of secret data, nothing is done; otherwise, one is added to or subtracted from the cover pixel value at random. In our improved method, a steganographic information table is defined that records the changes introduced by the embedded secret bits. Using this table, for the next pixel with the same value, the choice between adding and subtracting one is made dynamically so that the change to the cover image's histogram is minimized. The modified method therefore allows embedding the same payload as LSB matching but with improved steganographic security and less vulnerability to attacks. The experimental results of the new method show that the histograms maintain their attributes, such as peak values and alternating trends, to an acceptable degree, and that the method performs better than LSB matching with respect to histogram distortion and resistance against existing steganalysis.
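
    For reference, plain LSB matching (the ±1 embedding baseline) is only a few lines; the letter's improvement replaces the random ±1 choice with one steered by the table of past changes. A minimal Python sketch of the unmodified baseline:

      import numpy as np

      def lsb_match(cover, bits, seed=7):
          """Plain LSB matching: random +/-1 when the LSB disagrees."""
          rng = np.random.default_rng(seed)
          stego = cover.astype(np.int16).ravel().copy()
          for i, b in enumerate(bits):
              if (stego[i] & 1) != b:       # LSB mismatch: nudge the pixel
                  step = rng.choice([-1, 1])
                  if stego[i] == 0:
                      step = 1              # stay inside [0, 255]
                  elif stego[i] == 255:
                      step = -1
                  stego[i] += step
          return stego.reshape(cover.shape).astype(np.uint8)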

  13. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g., two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm with commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase differences for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic software designs. The proposed method provides a smart tool for realizing various functional devices and systems automatically.

  14. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful of codeword errors, if any, can be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the less studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order-modulation system.
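
    As an illustration of the error-free-simulation question, one standard exact construction (Clopper-Pearson, one of the methods such surveys typically cover) fits in a few lines of Python; the trial count in the example is illustrative.

      from scipy.stats import beta

      def clopper_pearson(k, n, conf=0.95):
          """Exact CI for an error probability from k errors in n trials."""
          a = (1 - conf) / 2
          lo = 0.0 if k == 0 else beta.ppf(a, k, n - k + 1)
          hi = 1.0 if k == n else beta.ppf(1 - a, k + 1, n - k)
          return lo, hi

      # An error-free run of 3 million codewords bounds the CWER below
      # roughly 1.2e-6 at 95% confidence:
      print(clopper_pearson(0, 3_000_000))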

  15. Si photonics technology for future optical interconnection

    NASA Astrophysics Data System (ADS)

    Zheng, Xuezhe; Krishnamoorthy, Ashok V.

    2011-12-01

    Scaling of computing systems requires ultra-efficient interconnects with large bandwidth density. Silicon photonics offers a disruptive solution with advantages in reach, energy efficiency, and bandwidth density. We review our progress in developing building blocks for ultra-efficient WDM silicon photonic links. Employing microsolder-based hybrid integration with low parasitics and high density, we independently optimize photonic devices on SOI platforms and VLSI circuits on more advanced bulk CMOS technology nodes. Progressively, we demonstrated single-channel hybrid silicon photonic transceivers at 5 Gbps and 10 Gbps, and an 80-Gbps arrayed WDM silicon photonic transceiver using reverse-biased depletion ring modulators and Ge waveguide photodetectors. Record-high energy efficiencies of less than 100 fJ/bit and 385 fJ/bit were achieved for the hybrid integrated transmitter and receiver, respectively. Waveguide-grating-based optical proximity couplers were developed with low loss and large optical bandwidth to enable multi-layer intra/inter-chip optical interconnects. Through thermal engineering of WDM devices by selective substrate removal, together with a WDM link using a synthetic wavelength comb, we significantly improved the device tuning efficiency and reduced the required tuning range. Using these innovative techniques, a two-orders-of-magnitude reduction in tuning power was achieved, and a tuning cost of only a few tens of fJ/bit is expected for high-data-rate WDM silicon photonic links.

  16. An algorithm to compute the sequency ordered Walsh transform

    NASA Technical Reports Server (NTRS)

    Larsen, H.

    1976-01-01

    A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminated Gray-code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse) while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering, computing logical autocorrelations, and selective bit reversing.
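
    The butterfly at the heart of both Manz's algorithm and this one is the fast Walsh-Hadamard recursion. The minimal Python sketch below computes the transform in natural (Hadamard) order; the decimation-in-time variant described above differs in the ordering of the butterflies and leaves the coefficients in bit-reversed sequency order instead.

      import numpy as np

      def fwht(x):
          """Fast Walsh-Hadamard transform; len(x) must be a power of 2."""
          x = np.asarray(x, dtype=float).copy()
          h = 1
          while h < len(x):
              for i in range(0, len(x), 2 * h):
                  for j in range(i, i + h):
                      a, b = x[j], x[j + h]
                      x[j], x[j + h] = a + b, a - b   # butterfly
              h *= 2
          return x

      x = np.array([1., 0., 1., 0., 0., 1., 1., 0.])
      print(fwht(fwht(x)) / len(x))   # the transform is its own inverse up to N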

  17. Low-noise delays from dynamic Brillouin gratings based on perfect Golomb coding of pump waves.

    PubMed

    Antman, Yair; Levanon, Nadav; Zadok, Avi

    2012-12-15

    A method for long variable all-optical delay is proposed and simulated, based on reflections from localized and stationary dynamic Brillouin gratings (DBGs). Inspired by radar methods, the DBGs are inscribed by two pumps that are comodulated by perfect Golomb codes, which reduce the off-peak reflectivity. Compared with random-bit-sequence coding, Golomb codes improve the optical signal-to-noise ratio (OSNR) of delayed waveforms by an order of magnitude. Simulations suggest a delay of 5 Gb/s data by 9 ns, or 45 bit durations, with an OSNR of 13 dB.

  18. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input/output (I/O) numerical processor capable of performing floating-point, multiple-precision, and other arithmetic functions at execution times at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16-bit microprocessor; a numerical coprocessor with eight 80-bit registers running at a 5-MHz clock rate; 18K of random-access memory (RAM); and 16K of electrically programmable read-only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high-order languages such as FORTRAN and PL/M-86.

  19. On estimation of secret message length in LSB steganography in spatial domain

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2004-06-01

    In this paper, we present a new method for estimating the secret message length of bit-streams embedded using Least Significant Bit (LSB) embedding at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant analysis of the estimator's accuracy using statistical image models.
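
    In the spirit of the weighted stego image idea, here is a minimal Python sketch: model the cover as the stego image with a fraction beta of LSBs flipped, and estimate beta by least squares against a local predictor. The four-neighbor mean predictor and the uniform weights are simplifying assumptions for illustration, not the paper's exact refinement.

      import numpy as np

      def ws_estimate(stego):
          """Rough message-length estimate for random-position LSB embedding."""
          s = stego.astype(float)
          sbar = s + 1.0 - 2.0 * (stego % 2)        # every LSB flipped
          pred = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                  np.roll(s, 1, 1) + np.roll(s, -1, 1)) / 4.0
          # Least-squares beta minimizing sum((s + beta*(sbar - s) - pred)**2);
          # since (s - sbar)**2 == 1 at every pixel, the solution simplifies to:
          beta = np.mean((s - pred) * (s - sbar))
          return 2.0 * beta    # rate-p embedding flips about p/2 of the LSBs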

  20. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs

    NASA Astrophysics Data System (ADS)

    Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.

    1982-12-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.

  1. Evaluation of Magnetoresistive RAM for Space Applications

    NASA Technical Reports Server (NTRS)

    Heidecker, Jason

    2014-01-01

    Magnetoresistive random-access memory (MRAM) is a non-volatile memory that exploits electronic spin, rather than charge, to store data. Instead of moving charge on and off a floating gate to alter the threshold voltage of a CMOS transistor (creating different bit states), MRAM uses magnetic fields to flip the polarization of a ferromagnetic material thus switching its resistance and bit state. These polarized states are immune to radiation-induced upset, thus making MRAM very attractive for space application. These magnetic memory elements also have infinite data retention and erase/program endurance. Presented here are results of reliability testing of two space-qualified MRAM products from Aeroflex and Honeywell.

  2. Thermal inclusions: how one spin can destroy a many-body localized phase

    NASA Astrophysics Data System (ADS)

    Ponte, Pedro; Laumann, C. R.; Huse, David A.; Chandran, A.

    2017-10-01

    Many-body localized (MBL) systems lie outside the framework of statistical mechanics, as they fail to equilibrate under their own quantum dynamics. Even basic features of MBL systems, such as their stability to thermal inclusions and the nature of the dynamical transition to thermalizing behaviour, remain poorly understood. We study a simple central spin model to address these questions: a two-level system interacting with strength J with N ≫ 1 localized bits subject to random fields. On increasing J, the system transitions from an MBL to a delocalized phase on the vanishing scale J_c(N) ~ 1/N, up to logarithmic corrections. In the transition region, the single-site eigenstate entanglement entropies exhibit bimodal distributions, so that localized bits are either 'on' (strongly entangled) or 'off' (weakly entangled) in eigenstates. The clusters of 'on' bits vary significantly between eigenstates of the same sample, which provides evidence for a heterogeneous discontinuous transition out of the localized phase in single-site observables. We obtain these results by perturbative mapping to bond percolation on the hypercube at small J and by numerical exact diagonalization of the full many-body system. Our results support the arguments that the MBL phase is unstable in systems with short-range interactions and quenched randomness in dimensions d that are high but finite. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.

  3. Biased three-intensity decoy-state scheme on the measurement-device-independent quantum key distribution using heralded single-photon sources.

    PubMed

    Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin

    2018-02-19

    At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources (WCS) and are limited in transmission distance under realistic experimental conditions, e.g., when finite-size-key effects are considered. Hence, in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources (HSPS) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes using WCS or HSPS, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.

  4. The persistence of the attentional bias to regularities in a changing environment.

    PubMed

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  5. Analysis of using interpulse intervals to generate 128-bit biometric random binary sequences for securing wireless body sensor networks.

    PubMed

    Zhang, Guang-He; Poon, Carmen C Y; Zhang, Yuan-Ting

    2012-01-01

    Wireless body sensor networks (WBSNs), a key building block for m-Health, operate under extremely stringent resource constraints, and thus lightweight security methods are preferred. To minimize resource consumption, utilizing information already available to a WBSN, particularly information common to different sensor nodes of a WBSN, for security purposes becomes an attractive solution. In this paper, we tested the randomness and distinctiveness of 128-bit biometric binary sequences (BSs) generated from the interpulse intervals (IPIs) of 20 healthy subjects as well as 30 patients suffering from myocardial infarction and 34 subjects with other cardiovascular diseases. The encoding time of a biometric BS on a WBSN node is on average 23 ms, and memory occupation is 204 bytes for any given IPI sequence. The results from five U.S. National Institute of Standards and Technology statistical tests suggest that random biometric BSs can be generated from both healthy subjects and cardiovascular patients and can potentially be used as authentication identifiers for securing WBSNs. Ultimately, it is preferred that these biometric BSs be used as encryption keys such that key distribution over the WBSN can be avoided.
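
    A common recipe in this literature for turning IPIs into a biometric BS (quantize each interval, Gray-code it, keep a few low-order bits, concatenate to 128 bits) can be sketched in Python as follows; the 4-bits-per-IPI choice and millisecond quantization are assumptions for illustration, not necessarily the encoding tested in the paper.

      def ipi_to_bits(ipis_ms, bits_per_ipi=4, n_bits=128):
          """Concatenate low-order Gray-coded bits of each interpulse interval."""
          out = []
          for ipi in ipis_ms:
              g = int(ipi) ^ (int(ipi) >> 1)        # binary-reflected Gray code
              for j in range(bits_per_ipi - 1, -1, -1):
                  out.append((g >> j) & 1)
                  if len(out) == n_bits:
                      return out
          return out    # fewer beats than needed: caller should keep collecting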

  6. Local randomness: Examples and application

    NASA Astrophysics Data System (ADS)

    Fu, Honghao; Miller, Carl A.

    2018-03-01

    When two players achieve a superclassical score at a nonlocal game, their outputs must contain intrinsic randomness. This fact has many useful implications for quantum cryptography. Recently it has been observed [C. Miller and Y. Shi, Quantum Inf. Computat. 17, 0595 (2017)] that such scores also imply the existence of local randomness—that is, randomness known to one player but not to the other. This has potential implications for cryptographic tasks between two cooperating but mistrustful players. In the current paper we bring this notion toward practical realization, by offering near-optimal bounds on local randomness for the CHSH game, and also proving the security of a cryptographic application of local randomness (single-bit certified deletion).

  7. Game, cloud architecture and outreach for The BIG Bell Test

    NASA Astrophysics Data System (ADS)

    Abellan, Carlos; Tura, Jordi; Garcia, Marta; Beduini, Federica; Hirschmann, Alina; Pruneri, Valerio; Acin, Antonio; Marti, Maria; Mitchell, Morgan

    The BIG Bell test uses the input from the Bellsters, self-selected human participants introducing zeros and ones through an online videogame, to perform a suite of quantum physics experiments. In this talk, we will explore the videogame, the data infrastructure and the outreach efforts of the BIG Bell test collaboration. First, we will discuss how the game was designed so as to eliminate possible feedback mechanisms that could influence people's behavior. Second, we will discuss the cloud architecture design for scalability as well as explain how we sent each individual bit from the users to the labs. Also, and using all the bits collected via the BIG Bell test interface, we will show a data analysis on human randomness, e.g. are younger Bellsters more random than older Bellsters? Finally, we will talk about the outreach and communication efforts of the BIG Bell test collaboration, exploring both the social media campaigns as well as the close interaction with teachers and educators to bring the project into classrooms.

  8. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
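
    A classic example of the "further distilled" step mentioned above is the von Neumann extractor, which turns independent but biased raw bits into unbiased ones by keeping only the unequal pairs. A minimal Python sketch:

      def von_neumann(raw_bits):
          """01 -> 0, 10 -> 1; discard 00 and 11 (assumes i.i.d. raw bits)."""
          out = []
          for a, b in zip(raw_bits[0::2], raw_bits[1::2]):
              if a != b:
                  out.append(a)
          return out

      print(von_neumann([1, 1, 0, 1, 1, 0, 0, 0, 1, 0]))   # -> [0, 1, 1]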

  9. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
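
    The fault model is easy to reproduce. The Python sketch below flips one uniformly chosen bit of an IEEE 754 double and reports the relative error, illustrating the observation that the damage is typically either small or unmistakably large.

      import random
      import struct

      def flip_random_bit(x, rng=random.Random(42)):
          """Flip one random bit in the 64-bit representation of a double."""
          (u,) = struct.unpack('<Q', struct.pack('<d', x))
          u ^= 1 << rng.randrange(64)
          (y,) = struct.unpack('<d', struct.pack('<Q', u))
          return y

      x = 0.7853981633974483
      y = flip_random_bit(x)
      print(x, y, abs(y - x) / abs(x))   # error is tiny or huge, rarely in between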

  10. Exchange bias mechanism in FM/FM/AF spin valve systems in the presence of random unidirectional anisotropy field at the AF interface: The role played by the interface roughness due to randomness

    NASA Astrophysics Data System (ADS)

    Yüksel, Yusuf

    2018-05-01

    We propose an atomistic model and present Monte Carlo simulation results regarding the influence of the FM/AF interface structure on the hysteresis mechanism and exchange bias behavior for a spin-valve-type FM/FM/AF magnetic junction. We simulate both perfectly flat and roughened interface structures with uncompensated interfacial AF moments. In order to simulate the rough-interface effect, we introduce the concept of a random exchange anisotropy field induced at the interface and acting on the interface AF spins. Our results show that different distributions of the random anisotropy field may lead to different exchange bias behavior.

  11. Wire Stripper Holds Insulation Debris

    NASA Technical Reports Server (NTRS)

    Cook, Allen D.; Morris, Henry S.; Bauer, Laverne

    1994-01-01

    Attachment to standard wire-stripping tool catches bits of insulation as they are removed from electrical wire and retains them for proper disposal. Prevents insulation particles from falling at random, contaminating electronic equipment and soiling workspace. Commercial tool modified by attaching small collection box to one of the jaws.

  12. Fast Clock Recovery for Digital Communications

    NASA Technical Reports Server (NTRS)

    Tell, R. G.

    1985-01-01

    Circuit extracts clock signal from random non-return-to-zero data stream, locking onto clock within one bit period at a 1-gigabit-per-second data rate. Circuit used for synchronization in optical-fiber communications. Derives speed from very short response time of gallium arsenide metal/semiconductor field-effect transistors (MESFET's).

  13. An Assessment of the Risk of Bias in Randomized Controlled Trial Reports Published in Prosthodontic and Implant Dentistry Journals.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-01-01

    The objective of this study was to assess the risk of bias of randomized controlled trials (RCTs) published in prosthodontic and implant dentistry journals. The last 30 issues of 9 journals in the field of prosthodontic and implant dentistry (Clinical Implant Dentistry and Related Research, Clinical Oral Implants Research, Implant Dentistry, International Journal of Oral & Maxillofacial Implants, International Journal of Periodontics and Restorative Dentistry, International Journal of Prosthodontics, Journal of Dentistry, Journal of Oral Rehabilitation, and Journal of Prosthetic Dentistry) were hand-searched for RCTs. Risk of bias was assessed using the Cochrane Collaboration's risk of bias tool and analyzed descriptively. From the 3,667 articles screened, a total of 147 RCTs were identified and included. The number of published RCTs increased with time. The overall distribution of a high risk of bias assessment varied across the domains of the Cochrane risk of bias tool: 8% for random sequence generation, 18% for allocation concealment, 41% for masking, 47% for blinding of outcome assessment, 7% for incomplete outcome data, 12% for selective reporting, and 41% for other biases. The distribution of high risk of bias for RCTs published in the selected prosthodontic and implant dentistry journals varied among journals and ranged from 8% to 47%, which can be considered as substantial.

  14. OFDM-based broadband underwater wireless optical communication system using a compact blue LED

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Kong, Meiwei; Lin, Aobo; Song, Yuhang; Yu, Xiangyu; Qu, Fengzhong; Han, Jun; Deng, Ning

    2016-06-01

    We propose and experimentally demonstrate an IM/DD-OFDM-based underwater wireless optical communication system. We investigate the dependence of its BER performance on the number of training symbols as well as on the LED's bias voltage and driving voltage. With a single compact blue LED and a low-cost PIN photodiode, we achieve net bit rates of 225.90 Mb/s at a BER of 1.54×10^-3 using 16-QAM and 231.95 Mb/s at a BER of 3.28×10^-3 using 32-QAM, respectively, over a 2-m air channel. Over a 2-m underwater channel, we achieve net bit rates of 161.36 Mb/s using 16-QAM, 156.31 Mb/s using 32-QAM, and 127.07 Mb/s using 64-QAM, respectively. The corresponding BERs are 2.5×10^-3, 7.42×10^-4, and 3.17×10^-3, respectively, which are all below the FEC threshold.

  15. Processing and Prolonged 500 C Testing of 4H-SiC JFET Integrated Circuits with Two Levels of Metal Interconnect

    NASA Technical Reports Server (NTRS)

    Spry, David J.; Neudeck, Philip G.; Chen, Liangyu; Lukco, Dorothy; Chang, Carl W.; Beheim, Glenn M.; Krasowski, Michael J.; Prokop, Norman F.

    2015-01-01

    Complex integrated circuit (IC) chips rely on more than one level of interconnect metallization for routing of electrical power and signals. This work reports the processing and testing of 4H-SiC junction field effect transistor (JFET) prototype ICs with two levels of metal interconnect capable of prolonged operation at 500 C. Packaged functional circuits including 3-and 11-stage ring oscillators, a 4-bit digital to analog converter, and a 4-bit address decoder and random access memory cell have been demonstrated at 500 C. A 3-stage oscillator functioned for over 3000 hours at 500 C in air ambient.

  16. Negotiation: How to Be Effective.

    PubMed

    Weiss, Arnold-Peter C

    2017-01-01

    The art of successful negotiation is not as random or difficult as it might seem at first glance. Most negotiations end up with both sides receiving something of value as well as giving up something valuable in return. It has been said that the best negotiated outcomes occur when both parties walk away a bit disappointed or just a little bit happy. The goal of this short primer is to give some hints as to how to get a slightly better deal than the other party most of the time. There are several points to remember to be able to achieve such an outcome frequently. Copyright © 2017 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  17. Time delay and distance measurement

    NASA Technical Reports Server (NTRS)

    Abshire, James B. (Inventor); Sun, Xiaoli (Inventor)

    2011-01-01

    A method for measuring time delay and distance may include providing an electromagnetic radiation carrier frequency and modulating one or more of amplitude, phase, frequency, polarization, and pointing angle of the carrier frequency with a return to zero (RZ) pseudo random noise (PN) code. The RZ PN code may have a constant bit period and a pulse duration that is less than the bit period. A receiver may detect the electromagnetic radiation and calculate the scattering profile versus time (or range) by computing a cross correlation function between the recorded received signal and a three-state RZ PN code kernel in the receiver. The method also may be used for pulse delay time (i.e., PPM) communications.
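
    The correlation step described above can be sketched with an ordinary maximal-length PN code. For simplicity, the Python sketch below uses an NRZ ±1 kernel rather than the patent's three-state RZ kernel, so it only illustrates the basic delay-recovery idea; the code length, noise level, and delay are illustrative.

      import numpy as np

      def mls(taps=(7, 6), n=7):
          """Maximal-length sequence from an LFSR (period 2**n - 1)."""
          state, out = 1, []
          for _ in range(2**n - 1):
              out.append(state & 1)
              fb = 0
              for t in taps:
                  fb ^= (state >> (t - 1)) & 1
              state = (state >> 1) | (fb << (n - 1))
          return np.array(out)

      code = 2.0 * mls() - 1.0            # map {0,1} -> {-1,+1}
      delay = 37                          # unknown round-trip delay (samples)
      rng = np.random.default_rng(3)
      rx = np.roll(code, delay) + 0.5 * rng.normal(size=code.size)
      corr = [np.dot(rx, np.roll(code, d)) for d in range(code.size)]
      print(int(np.argmax(corr)))         # recovers the 37-sample delay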

  18. Not all numbers are equal: preferences and biases among children and adults when generating random sequences.

    PubMed

    Towse, John N; Loetscher, Tobias; Brugger, Peter

    2014-01-01

    We investigate the number preferences of children and adults when generating random digit sequences. Previous research has shown convincingly that adults prefer smaller numbers when randomly choosing between responses 1-6. We analyze randomization choices made by both children and adults, considering a range of experimental studies and task configurations. Children - most of whom are between 8 and 11 years old - show a preference for relatively large numbers when choosing numbers 1-10. Adults show a preference for small numbers with the same response set. We report a modest association between children's age and numerical bias. However, children also exhibit a small-number bias with a smaller response set available, and they show a preference specifically for the numbers 1-3 across many datasets. We argue that number space demonstrates both continuities (numbers 1-3 have a distinct status) and change (a developmentally emerging bias toward the left side of representational space or lower numbers).

  19. "Fair Play": A Videogame Designed to Address Implicit Race Bias Through Active Perspective Taking.

    PubMed

    Gutierrez, Belinda; Kaatz, Anna; Chu, Sarah; Ramirez, Dennis; Samson-Samuel, Clem; Carnes, Molly

    2014-12-01

    Having diverse faculty in academic health centers will help diversify the healthcare workforce and reduce health disparities. Implicit race bias is one factor that contributes to the underrepresentation of Black faculty. We designed the videogame "Fair Play" in which players assume the role of a Black graduate student named Jamal Davis. As Jamal, players experience subtle race bias while completing "quests" to obtain a science degree. We hypothesized that participants randomly assigned to play the game would have greater empathy for Jamal and lower implicit race bias than participants randomized to read narrative text describing Jamal's experience. University of Wisconsin-Madison graduate students were recruited via e-mail and randomly assigned to play "Fair Play" or read narrative text through an online link. Upon completion, participants took an Implicit Association Test to measure implicit bias and answered survey questions assessing empathy toward Jamal and awareness of bias. As hypothesized, gameplayers showed the least implicit bias but only when they also showed high empathy for Jamal (P=0.013). Gameplayers did not show greater empathy than text readers, and women in the text condition reported the greatest empathy for Jamal (P=0.008). However, high empathy only predicted lower levels of implicit bias among those who actively took Jamal's perspective through gameplay (P=0.014). A videogame in which players experience subtle race bias as a Black graduate student has the potential to reduce implicit bias, possibly because of a game's ability to foster empathy through active perspective taking.

  20. New Quantum Key Distribution Scheme Based on Random Hybrid Quantum Channel with EPR Pairs and GHZ States

    NASA Astrophysics Data System (ADS)

    Yan, Xing-Yu; Gong, Li-Hua; Chen, Hua-Ying; Zhou, Nan-Run

    2018-05-01

    A theoretical quantum key distribution scheme based on a random hybrid quantum channel with EPR pairs and GHZ states is devised. In this scheme, EPR pairs and tripartite GHZ states are exploited to set up the random hybrid quantum channel. Only one photon in each entangled state needs to travel back and forth in the channel. The security of the quantum key distribution scheme is guaranteed by more than one round of eavesdropping-check procedures. It is of high capacity, since one particle can carry more than two bits of information via quantum dense coding.

  1. Critical issues regarding SEU in avionics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Normand, E.; McNulty, P.J.

    1993-01-01

    The energetic neutrons in the atmosphere cause microelectronics in avionics systems to malfunction through a mechanism called single-event upset (SEU), and single-event latchup is a potential threat. Data from military and experimental flights as well as laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant SEU rate at aircraft altitudes. Microelectronics in avionics systems have been demonstrated to be susceptible to SEU. Of all device types, RAMs are the most sensitive because they have the largest number of bits on a chip (e.g., an SRAM may have from 64K to 1M bits, a microprocessor 3K to 10K bits, and a logic device like an analog-to-digital converter, 12 bits). Avionics designers will need to take this susceptibility into account in current and future designs. A number of techniques are available for dealing with SEU: EDAC, redundancy, use of SEU-hard parts, reset and/or watchdog timer capability, etc. Specifications should be developed to guide avionics vendors in the analysis, prevention, and verification of neutron-induced SEU. Areas for additional research include better definition of the atmospheric neutrons and protons, development of better calculational models (e.g., those used for protons), and better characterization of neutron-induced latchup.

  2. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTRs) to transfer high-volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BERs) accompany the transfers. The facility processes about 10^11 bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTRs. When acquired, the 16 HDTRs offered state-of-the-art performance of 1×10^-6 BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data-randomizing circuitry to yield a BER of 2×10^-7 and (2) further modifying to include a bit error correction capability to attain a BER of 2×10^-9. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTRs.

  3. Autosophy: an alternative vision for satellite communication, compression, and archiving

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus; Holtz, Eric; Kalienky, Diana

    2006-08-01

    Satellite communication and archiving systems are now designed according to an outdated Shannon information theory in which all data are transmitted as meaningless bit streams. Video bit rates, for example, are determined by screen size, color resolution, and scanning rates. The video "content" is irrelevant, so that totally random images require the same bit rates as blank images. An alternative system design, based on the newer Autosophy information theory, is now evolving, which transmits data "content" or "meaning" in a universally compatible 64-bit format. This would allow mixing all multimedia transmissions in the Internet's packet stream. The new system design uses self-assembling data structures, which grow like data crystals or data trees in electronic memories, for both communication and archiving. The advantages for satellite communication and archiving may include: very high lossless image and video compression, unbreakable encryption, resistance to transmission errors, universally compatible data formats, self-organizing error-proof mass memories, immunity to the Internet's Quality of Service problems, and error-proof secure communication protocols. Legacy data transmission formats can be converted by simple software patches or integrated chipsets to be forwarded through any media - satellites, radio, Internet, cable - without needing to be reformatted. This may result in orders-of-magnitude improvements for all communication and archiving systems.

  4. Randomization in clinical trials in orthodontics: its significance in research design and methods to achieve it.

    PubMed

    Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore

    2011-12-01

    Randomization is a key step in reducing selection bias during the treatment allocation phase of randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation, allocation concealment, and the actual methodology for implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology, and presents the main randomization schemes applicable to clinical trials in orthodontics.

  5. KMCLib 1.1: Extended random number support and technical updates to the KMCLib general framework for kinetic Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-11-01

    We here present a revised version, v1.1, of the KMCLib general framework for kinetic Monte-Carlo (KMC) simulations. The generation of random numbers in KMCLib now relies on the C++11 standard library implementation, and support has been added for the user to choose from a set of C++11 implemented random number generators. The Mersenne-twister, the 24 and 48 bit RANLUX and a 'minimal-standard' PRNG are supported. We have also included the possibility to use true random numbers via the C++11 std::random_device generator. This release also includes technical updates to support the use of an extended range of operating systems and compilers.

  6. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness, the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high-speed on-the-fly processing without the need for extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in a 1.2 Mbit/s generation rate in our implementation.
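
    One simple way to turn detector click times into raw bits is to compare successive inter-click intervals, as in the Python sketch below; such a raw sequence would then be fed through a look-up-table extractor like the one the device uses. The comparison rule here is a generic illustration, not necessarily the authors' exact digitization.

      def clicks_to_raw_bits(click_times):
          """Compare successive inter-click intervals; discard ties."""
          gaps = [t2 - t1 for t1, t2 in zip(click_times, click_times[1:])]
          bits = []
          for d1, d2 in zip(gaps[0::2], gaps[1::2]):
              if d1 != d2:
                  bits.append(1 if d1 > d2 else 0)
          return bits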

  7. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from a specific population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due to not only innate human variability but also purely chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable is a factor associated with both the exposure of interest and the outcome of interest. A confounding variable (confounding factor or confounder) is a variable that correlates (positively or negatively) with both the exposure and outcome. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.

  8. Lessons Learned Through the Implementation of an eHealth Physical Activity Gaming Intervention with High School Youth.

    PubMed

    Pope, Lizzy; Garnett, Bernice; Dibble, Marguerite

    2018-04-01

    To encourage high school students to meet physical activity goals using a newly developed game, and to document the feasibility, benefits, and challenges of using an electronic gaming application to promote physical activity in high school students. Working with youth and game designers an electronic game, Camp Conquer, was developed to motivate high school students to meet physical activity goals. One-hundred-five high school students were recruited to participate in a 12-week pilot test of the game and randomly assigned to a Game Condition or Control Condition. Students in both conditions received a FitBit to track their activity, and participants in the Game Condition received access to Camp Conquer. Number of steps and active minutes each day were tracked for all participants. FitBit use, game logins, and qualitative feedback from researchers, school personnel, and participants were used to determine intervention engagement. The majority of study participants did not consistently wear their FitBit or engage with the gaming intervention. Numerous design challenges and barriers to successful implementation such as the randomized design, absence of a true school-based champion, ease of use, and game glitches were identified. Developing games is an exciting technique for motivating the completion of a variety of health behaviors. Although the present intervention was not successful in increasing physical activity in high school students, important lessons were learned regarding how to best structure a gaming intervention for the high school population.

  9. Nature of magnetization and lateral spin-orbit interaction in gated semiconductor nanowires.

    PubMed

    Karlsson, H; Yakimenko, I I; Berggren, K-F

    2018-05-31

    Semiconductor nanowires are interesting candidates for the realization of spintronics devices. In this paper we study electronic states and effects of lateral spin-orbit coupling (LSOC) in a one-dimensional asymmetrically biased nanowire using the Hartree-Fock method with Dirac interaction. We have shown that spin polarization can be triggered by LSOC at finite source-drain bias, as a result of numerical noise representing a random magnetic field due to wiring or a random background field such as the Earth's magnetic field, for instance. The electrons spontaneously arrange into spin rows in the wire due to electron interactions, leading to a finite spin polarization. The direction of polarization is, however, random at zero source-drain bias. We have found that LSOC has an effect on the orientation of spin rows only when a source-drain bias is applied.

  10. Nature of magnetization and lateral spin–orbit interaction in gated semiconductor nanowires

    NASA Astrophysics Data System (ADS)

    Karlsson, H.; Yakimenko, I. I.; Berggren, K.-F.

    2018-05-01

    Semiconductor nanowires are interesting candidates for the realization of spintronics devices. In this paper we study electronic states and effects of lateral spin–orbit coupling (LSOC) in a one-dimensional asymmetrically biased nanowire using the Hartree–Fock method with Dirac interaction. We have shown that spin polarization can be triggered by LSOC at finite source-drain bias, as a result of numerical noise representing a random magnetic field due to wiring or a random background field such as the Earth's magnetic field, for instance. The electrons spontaneously arrange into spin rows in the wire due to electron interactions, leading to a finite spin polarization. The direction of polarization is, however, random at zero source-drain bias. We have found that LSOC has an effect on the orientation of spin rows only when a source-drain bias is applied.

  11. A mathematical model of case-ascertainment bias: Applied to case-control studies nested within a randomized screening trial.

    PubMed

    Jansen, Rick J; Alexander, Bruce H; Hayes, Richard B; Miller, Anthony B; Wacholder, Sholom; Church, Timothy R

    2018-01-01

    When some individuals are screen-detected before the beginning of the study, but otherwise would have been diagnosed symptomatically during the study, this results in different case-ascertainment probabilities among screened and unscreened participants, referred to here as lead-time-biased case-ascertainment (LTBCA). In fact, this issue can arise even in risk-factor studies nested within a randomized screening trial; even though the screening intervention is randomly allocated to trial arms, there is no randomization to potential risk factors, and uptake of screening can differ by risk-factor strata. Under the assumptions that neither screening nor the risk factor affects underlying incidence and that no other forms of bias operate, we simulate and compare the underlying cumulative incidence and that observed in the study due to LTBCA. The example used is constructed from the randomized Prostate, Lung, Colorectal, and Ovarian cancer screening trial. The derived mathematical model is applied to simulate two nested studies to evaluate the potential for screening bias in observational lung cancer studies. Because of differential screening under plausible assumptions about preclinical incidence and duration, the simulations presented here show that LTBCA due to chest x-ray screening can increase the estimated risk of lung cancer due to smoking by between 1% and 50%. Traditional adjustment methods cannot account for this bias, as the influence screening has on observational study estimates involves events outside of the study observation window (enrollment and follow-up) that change eligibility for potential participants, thus biasing case ascertainment.

  12. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
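    As a rough sketch of the estimator discussed above (assuming a simple intercept-only weighted regression; the authors' exact formulation should be taken from the paper itself), the unrestricted weighted least squares average uses inverse-variance weights like a fixed-effect analysis, but lets the data determine a multiplicative rescaling of the standard error:

        # Sketch of an unrestricted WLS meta-analytic average: the point
        # estimate equals the fixed-effect (FE) average, while the standard
        # error is scaled by the weighted regression's root mean square error
        # instead of being restricted to 1 (when the RMSE is ~1, i.e. no
        # heterogeneity, this reduces to the fixed-effect result).
        import numpy as np

        def uwls(y, v):
            y, w = np.asarray(y, float), 1.0 / np.asarray(v, float)
            est = np.sum(w * y) / np.sum(w)           # same point estimate as FE
            fe_se = np.sqrt(1.0 / np.sum(w))          # fixed-effect standard error
            mse = np.sum(w * (y - est) ** 2) / (len(y) - 1)  # weighted MSE
            return est, fe_se * np.sqrt(mse)          # unrestricted standard error

        # toy log-odds-ratio data (illustrative numbers only)
        est, se = uwls(y=[0.20, 0.35, 0.10, 0.50], v=[0.010, 0.020, 0.015, 0.030])
        print(f"UWLS: {est:.3f} (95% CI {est - 1.96*se:.3f} to {est + 1.96*se:.3f})")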

  13. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and enables one to quickly estimate the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576

  14. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  15. Randomness Testing of the Advanced Encryption Standard Finalist Candidates

    DTIC Science & Technology

    2000-03-28

    [Garbled table fragment listing NIST statistical tests applied to the AES finalists: Random Excursions Variant, Rank, Serial, Spectral DFT, Lempel-Ziv Compression, Aperiodic Templates, Linear Complexity.] ...256 bits) for each of the algorithms, for a total of 80 different data sets. These data sets were selected based on the belief that they would be useful in evaluating the randomness of cryptographic algorithms. Table 2 lists the eight data types. For a description of the data types, see Appendix

  16. Adaptive box filters for removal of random noise from digital images

    USGS Publications Warehouse

    Eliason, E.M.; McEwen, A.S.

    1990-01-01

    We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (??) of those pixels within a local box surrounding each pixel, hence they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
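    A minimal sketch of the adaptive idea (illustrative parameters; the authors' implementation details are not given above): compute the local mean and standard deviation in a box around each pixel, flag pixels deviating from the local mean by more than k local standard deviations as bit errors, and replace only those:

        # Adaptive box filter sketch: local statistics via uniform (box)
        # filters; only outlier pixels are replaced, preserving fine detail.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_box_filter(img, box=5, k=3.0):
            img = img.astype(float)
            mean = uniform_filter(img, box)              # local mean
            sq_mean = uniform_filter(img ** 2, box)      # local mean of squares
            sigma = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
            outlier = np.abs(img - mean) > k * sigma     # probable bit errors
            return np.where(outlier, mean, img)

        noisy = np.random.default_rng(1).normal(100, 5, (64, 64))
        noisy[10, 10] = 255                              # inject a bit error
        clean = adaptive_box_filter(noisy)
        print(noisy[10, 10], "->", round(clean[10, 10], 1))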

  17. Optical bias selector based on a multilayer a-SiC:H optical filter

    NASA Astrophysics Data System (ADS)

    Vieira, M.; Vieira, M. A.; Louro, P.

    2017-08-01

    In this paper we present a MUX/DEMUX device based on a multilayer a-SiC:H optical filter that requires near-ultraviolet steady-state optical switches to select desired wavelengths in the visible range. Spectral response and transmittance measurements are presented and show the feasibility of tailoring the wavelength and bandwidth of a polychromatic mixture of different wavelengths. The selector filter is realized by using a two-terminal double pi'n/pin a-SiC:H photodetector. Five visible communication channels are transmitted together, each one with a specific bit sequence. The combined optical signal is analyzed by reading out the photocurrent, under near-UV front steady-state background. Data shows that 2^5 current levels are detected, corresponding to the thirty-two possible on/off states. The proximity of the magnitude of consecutive levels causes occasional errors in the decoded information. To minimize the errors, four parity bits are generated and stored along with the data word. The parity of the word is checked after reading the word to detect and correct the transmitted data. Results show that the background works as a selector in the visible range, shifting the sensor sensitivity and, together with the parity check bits, allowing the identification and decoding of the different input channels. A transmission capability of 60 kbps using the generated codeword was achieved. An optoelectronic model gives insight into the system physics.

  18. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods follow standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of temporal noise of photo- and videocameras based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated shot and dark temporal noises of cameras consistently in real time, using the modified ASNT method. Estimation was performed for the cameras: consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: on a standard computer, frames were registered and processed in only a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
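    The core two-frame idea lends itself to a compact sketch (a toy Poisson simulation, not the authors' ASNT implementation): subtracting two frames of the same scene cancels the fixed spatial pattern, so half the variance of the difference estimates the temporal noise, and for shot-noise-limited pixels that variance tracks the mean signal:

        # Two-frame temporal-noise estimate on synthetic shot-noise data:
        # var(frame1 - frame2) / 2 per signal bin should follow the mean
        # signal for a Poisson (shot-noise-limited) sensor.
        import numpy as np

        rng = np.random.default_rng(2)
        signal = rng.uniform(50, 1000, (512, 512))        # nonuniform target
        frames = [rng.poisson(signal) for _ in range(2)]  # two registered frames

        diff = frames[0].astype(float) - frames[1].astype(float)
        mean = 0.5 * (frames[0] + frames[1])
        for lo, hi in [(50, 200), (200, 500), (500, 1000)]:
            sel = (mean >= lo) & (mean < hi)
            var_t = diff[sel].var() / 2.0                 # temporal noise variance
            print(f"signal {lo:4d}-{hi:4d}: var {var_t:7.1f} vs mean {mean[sel].mean():7.1f}")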

  19. Randomization Methods in Emergency Setting Trials: A Descriptive Review

    ERIC Educational Resources Information Center

    Corbett, Mark Stephen; Moe-Byrne, Thirimon; Oddie, Sam; McGuire, William

    2016-01-01

    Background: Quasi-randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods: We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic…

  20. Quantum cryptography for secure free-space communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, R.J.; Buttler, W.T.; Kwiat, P.G.

    1999-03-01

    The secure distribution of the secret random bit sequences known as key material, is an essential precursor to their use for the encryption and decryption of confidential communications. Quantum cryptography is a new technique for secure key distribution with single-photon transmissions: Heisenberg's uncertainty principle ensures that an adversary can neither successfully tap the key transmissions, nor evade detection (eavesdropping raises the key error rate above a threshold value). The authors have developed experimental quantum cryptography systems based on the transmission of non-orthogonal photon polarization states to generate shared key material over line-of-sight optical links. Key material is built up using the transmission of a single photon per bit of an initial secret random sequence. A quantum-mechanically random subset of this sequence is identified, becoming the key material after a data reconciliation stage with the sender. The authors have developed and tested a free-space quantum key distribution (QKD) system over an outdoor optical path of ~1 km at Los Alamos National Laboratory under nighttime conditions. Results show that free-space QKD can provide secure real-time key distribution between parties who have a need to communicate secretly. Finally, they examine the feasibility of surface-to-satellite QKD.
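    The key-sifting step described here can be illustrated with a toy, idealized sketch (no losses, noise, or eavesdropper; BB84-style conjugate bases are an assumption, as the record does not name the exact protocol). Only the positions where sender and receiver happened to use the same basis, a quantum-mechanically random subset, survive into the raw key:

        # Idealized BB84-style sifting: bases are compared publicly, bit
        # values are not; matching-basis positions become the raw key.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 32
        bits = rng.integers(0, 2, n)            # Alice's initial random bits
        a_basis = rng.integers(0, 2, n)         # Alice's preparation bases
        b_basis = rng.integers(0, 2, n)         # Bob's measurement bases
        # mismatched bases give a 50/50 random outcome in this toy model
        measured = np.where(a_basis == b_basis, bits, rng.integers(0, 2, n))

        keep = a_basis == b_basis               # announced bases, not bit values
        alice_key, bob_key = bits[keep], measured[keep]
        print("sifted key length:", keep.sum(),
              " keys match:", np.array_equal(alice_key, bob_key))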

  1. Secure communications using quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, R.J.; Buttler, W.T.; Kwiat, P.G.

    1997-08-01

    The secure distribution of the secret random bit sequences known as "key" material, is an essential precursor to their use for the encryption and decryption of confidential communications. Quantum cryptography is an emerging technology for secure key distribution with single-photon transmissions: Heisenberg's uncertainty principle ensures that an adversary can neither successfully tap the key transmissions, nor evade detection (eavesdropping raises the key error rate above a threshold value). We have developed experimental quantum cryptography systems based on the transmission of non-orthogonal single-photon states to generate shared key material over multi-kilometer optical fiber paths and over line-of-sight links. In both cases, key material is built up using the transmission of a single photon per bit of an initial secret random sequence. A quantum-mechanically random subset of this sequence is identified, becoming the key material after a data reconciliation stage with the sender. In our optical fiber experiment we have performed quantum key distribution over 24 km of underground optical fiber using single-photon interference states, demonstrating that secure, real-time key generation over "open" multi-km node-to-node optical fiber communications links is possible. We have also constructed a quantum key distribution system for free-space, line-of-sight transmission using single-photon polarization states, which is currently undergoing laboratory testing. 7 figs.

  2. The Use of Propensity Scores and Observational Data to Estimate Randomized Controlled Trial Generalizability Bias

    PubMed Central

    Pressler, Taylor R.; Kaizar, Eloise E.

    2014-01-01

    While randomized controlled trials (RCT) are considered the “gold standard” for clinical studies, the use of exclusion criteria may impact the external validity of the results. It is unknown whether estimators of effect size are biased by excluding a portion of the target population from enrollment. We propose to use observational data to estimate the bias due to enrollment restrictions, which we term generalizability bias. In this paper we introduce a class of estimators for the generalizability bias and use simulation to study its properties in the presence of non-constant treatment effects. We find the surprising result that our estimators can be unbiased for the true generalizability bias even when all potentially confounding variables are not measured. In addition, our proposed doubly robust estimator performs well even for mis-specified models. PMID:23553373

  3. Randomized controlled trial of attention bias modification in a racially diverse, socially anxious, alcohol dependent sample.

    PubMed

    Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P

    2016-12-01

    Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Randomized Controlled Trial of Attention Bias Modification in a Racially Diverse, Socially Anxious, Alcohol Dependent Sample

    PubMed Central

    Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.

    2016-01-01

    Objective Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918

  5. “Fair Play”: A Videogame Designed to Address Implicit Race Bias Through Active Perspective Taking

    PubMed Central

    Kaatz, Anna; Chu, Sarah; Ramirez, Dennis; Samson-Samuel, Clem; Carnes, Molly

    2014-01-01

    Abstract Objective: Having diverse faculty in academic health centers will help diversify the healthcare workforce and reduce health disparities. Implicit race bias is one factor that contributes to the underrepresentation of Black faculty. We designed the videogame “Fair Play” in which players assume the role of a Black graduate student named Jamal Davis. As Jamal, players experience subtle race bias while completing “quests” to obtain a science degree. We hypothesized that participants randomly assigned to play the game would have greater empathy for Jamal and lower implicit race bias than participants randomized to read narrative text describing Jamal's experience. Materials and Methods: University of Wisconsin–Madison graduate students were recruited via e-mail and randomly assigned to play “Fair Play” or read narrative text through an online link. Upon completion, participants took an Implicit Association Test to measure implicit bias and answered survey questions assessing empathy toward Jamal and awareness of bias. Results: As hypothesized, gameplayers showed the least implicit bias but only when they also showed high empathy for Jamal (P=0.013). Gameplayers did not show greater empathy than text readers, and women in the text condition reported the greatest empathy for Jamal (P=0.008). However, high empathy only predicted lower levels of implicit bias among those who actively took Jamal's perspective through gameplay (P=0.014). Conclusions: A videogame in which players experience subtle race bias as a Black graduate student has the potential to reduce implicit bias, possibly because of a game's ability to foster empathy through active perspective taking. PMID:26192644

  6. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    NASA Astrophysics Data System (ADS)

    Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan

    2017-11-01

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.

  7. A new perspective on sexual mixing among men who have sex with men by body image.

    PubMed

    Leung, Ka-Kit; Wong, Horas T H; Naftalin, Claire M; Lee, Shui Shan

    2014-01-01

    "Casual sex" is seldom as non-selective and random as it may sound. During each sexual encounter, people consciously and unconsciously seek their casual sex partners according to different attributes. Influential to a sexual network, research focusing on quantifying the effects of physical appearance on sexual network has been sparse. We evaluated the application of Log odds score (LOD) to assess the mixing patterns of 326 men who have sex with men (MSM) in Hong Kong in their networking of casual sex partners by Body Image Type (BIT). This involved an analysis of 1,196 respondents-casual sex partner pairs. Seven BITs were used in the study: Bear, Chubby, Slender, Lean toned, Muscular, Average and Other. A hierarchical pattern was observed in the preference of MSM for casual sex partners by the latter's BIT. Overall, Muscular men were most preferred, followed by Lean toned while the least preferred was Slender, as illustrated by LOD going down along the hierarchy in the same direction. Marked avoidance was found between men who self-identified as Chubby and men of Other body type (within-group-LOD: 1.25-2.89; between-group-LOD: <-1). None of the respondents reported to have networked a man who self-identified as Average for casual sex. We have demonstrated the possibility of adopting a mathematical prototype to investigate the influence of BIT in a sexual network of MSM. Construction of matrix based on culture-specific BIT and cross-cultural comparisons would generate new knowledge on the mixing behaviors of MSM.

  8. Evaluating the Bias of Alternative Cost Progress Models: Tests Using Aerospace Industry Acquisition Programs

    DTIC Science & Technology

    1992-12-01

    [OCR-garbled fragment; it appears to compare prediction bias among the various models: the random walk, learning curve, fixed-variable and Bemis models.] Keywords: Functions, Production Rate Adjustment Model, Learning Curve Model, Random Walk Model, Bemis Model, Evaluating Model Bias, Cost Prediction Bias. ...of four cost progress models--a random walk model, the traditional learning curve model, a production rate model (fixed-variable model), and a model

  9. PUFKEY: A High-Security and High-Throughput Hardware True Random Number Generator for Sensor Networks

    PubMed Central

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-01-01

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks. PMID:26501283

  10. PUFKEY: a high-security and high-throughput hardware true random number generator for sensor networks.

    PubMed

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-10-16

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.

  11. Realistic noise-tolerant randomness amplification using finite number of devices.

    PubMed

    Brandão, Fernando G S L; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-21

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  12. Realistic noise-tolerant randomness amplification using finite number of devices

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  13. Realistic noise-tolerant randomness amplification using finite number of devices

    PubMed Central

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-01-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology. PMID:27098302

  14. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    PubMed

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
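    The mechanism can be reproduced in a few lines (toy parameters; a beta-binomial model stands in for the overdispersed binomial): the bias of the arcsine-transformed estimate grows with the intracluster correlation ρ, roughly linearly for small ρ, consistent with the simulations described above.

        # Simulate overdispersed binomial data via a beta-binomial model and
        # measure the arcsine-scale bias: mean of the transformed estimate
        # minus the transform of the true probability.
        import numpy as np

        rng = np.random.default_rng(5)
        p, n, reps = 0.2, 50, 200_000
        for rho in (0.0, 0.05, 0.1):
            if rho == 0:
                x = rng.binomial(n, p, reps)             # plain binomial
            else:
                a = p * (1 - rho) / rho                  # beta parameters giving
                b = (1 - p) * (1 - rho) / rho            # mean p, ICC rho
                x = rng.binomial(n, rng.beta(a, b, reps))
            bias = np.mean(np.arcsin(np.sqrt(x / n))) - np.arcsin(np.sqrt(p))
            print(f"rho={rho:4.2f}  arcsine-scale bias {bias:+.4f}")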

  15. The beneficial effects of a positive attention bias amongst children with a history of psychosocial deprivation.

    PubMed

    Troller-Renfree, Sonya; McLaughlin, Katie A; Sheridan, Margaret A; Nelson, Charles A; Zeanah, Charles H; Fox, Nathan A

    2017-01-01

    Children raised in institutions experience psychosocial deprivation that has detrimental influences on attention and mental health. The current study examined patterns of attention biases in children from institutions who were randomized at approximately 21.6 months to receive either a high-quality foster care intervention or care-as-usual. At age 12, children performed a dot-probe task and indices of attention bias were calculated. Additionally, children completed a social stress paradigm and cortisol reactivity was computed. Children randomized into foster care (N=40) exhibited an attention bias toward positive stimuli but not threat, whereas children who received care-as-usual (N=40) and a never-institutionalized comparison group (N=47) showed no bias. Stability of foster care placement was related to positive bias, while instability of foster care placement was related to threat bias. The magnitude of the positive bias was associated with fewer internalizing problems and better coping mechanisms. Within the foster care group, positive attention bias was related to less blunted cortisol reactivity. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Flip-chip bonded optoelectronic integration based on ultrathin silicon (UTSi) CMOS

    NASA Astrophysics Data System (ADS)

    Hong, Sunkwang; Ho, Tawei; Zhang, Liping; Sawchuk, Alexander A.

    2003-06-01

    We describe the design and test of flip-chip bonded optoelectronic CMOS devices based on Peregrine Semiconductor's 0.5 micron Ultra-Thin Silicon on sapphire (UTSi) technology. The UTSi process eliminates the substrate leakage that typically results in crosstalk and reduces parasitic capacitance to the substrate, providing many benefits compared to bulk silicon CMOS. The low-loss synthetic sapphire substrate is optically transparent and has a coefficient of thermal expansion suitable for flip-chip bonding of vertical cavity surface emitting lasers (VCSELs) and detectors. We have designed two different UTSi CMOS chips. One contains a flip-chip bonded 1 x 4 photodiode array, a receiver array, a double-edge-triggered D-flip-flop-based 2047-bit pseudo-random bit stream (PRBS) generator, and a quadrature-phase LC voltage-controlled oscillator (VCO). The other chip contains a flip-chip bonded 1 x 4 VCSEL array, a driver array based on high-speed low-voltage differential signaling (LVDS), and a fully balanced differential LC-VCO. Each VCSEL driver and receiver has individual input and bias voltage adjustments. Each UTSi chip is mounted on a different printed circuit board (PCB); the boards have holes of about 1 mm radius for optical output and input paths through the sapphire substrate. We discuss preliminary testing of these chips.
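    For illustration, a 2047-bit sequence of the kind such a PRBS generator produces can be sketched with an 11-stage linear feedback shift register (the chip's actual feedback polynomial is not stated above; x^11 + x^9 + 1, the common PRBS11 choice, is assumed here):

        # Fibonacci LFSR for the assumed PRBS11 polynomial x^11 + x^9 + 1:
        # a maximal-length sequence has period 2^11 - 1 = 2047 with exactly
        # 1024 ones per period.
        def prbs11(seed=0b11111111111):
            state, out = seed, []
            for _ in range(2**11 - 1):
                newbit = ((state >> 10) ^ (state >> 8)) & 1  # taps 11 and 9
                out.append(state & 1)                        # emit one bit
                state = ((state << 1) | newbit) & 0x7FF      # 11-bit register
            return out

        seq = prbs11()
        print(len(seq), sum(seq))    # 2047 bits, 1024 ones if maximal-length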

  17. All-digital phase-lock loops for noise-free signals

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.

    1973-01-01

    Bit synchronizers utilize all-digital phase-lock loops that are referenced to a high-frequency digital clock. The phase-lock loop of the first design acquires frequency within a nominal range and tracks phase; the second design is modified for random binary data by the addition of a simple transition detector; and the third design acquires frequency over a wide dynamic range.

  18. Error free physically unclonable function with programmed resistive random access memory using reliable resistance states by specific identification-generation method

    NASA Astrophysics Data System (ADS)

    Tseng, Po-Hao; Hsu, Kai-Chieh; Lin, Yu-Yu; Lee, Feng-Min; Lee, Ming-Hsiu; Lung, Hsiang-Lan; Hsieh, Kuang-Yeu; Chung Wang, Keh; Lu, Chih-Yuan

    2018-04-01

    A high performance physically unclonable function (PUF) implemented with WO3 resistive random access memory (ReRAM) is presented in this paper. This robust ReRAM-PUF eliminates the bit-flipping problem at very high temperature (up to 250 °C) owing to the plentiful read margin between the initial resistance state and the set resistance state. Ten-year retention is also promised at temperatures up to 210 °C. These two stable resistance states enable stable operation in automotive environments from -40 to 125 °C without the need for a temperature compensation circuit. High uniqueness of the PUF can be achieved by implementing a proposed identification (ID)-generation method. An optimized forming condition can move 50% of the cells to the low resistance state while the remaining 50% stay at the initial high resistance state. The inter- and intra-PUF evaluations with unlimited separation of Hamming distance (HD) are successfully demonstrated even under the corner condition. The number of reproductions was measured to exceed 10^7 with a 0% bit error rate (BER) at read voltages from 0.4 to 0.7 V.
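    The inter-/intra-PUF evaluation can be sketched on synthetic data (illustrative only; the paper reports 0% intra-device BER, a small nonzero read-noise rate is used below just to make the intra distribution visible). Ideal inter-device Hamming distance clusters near 50% of the response length, while intra-device distance from re-reads stays near 0%, giving the separation between the two distributions:

        # Inter- vs intra-PUF Hamming distance on synthetic responses.
        import numpy as np

        rng = np.random.default_rng(6)
        n_bits, n_dev, n_reads = 256, 20, 50
        devices = rng.integers(0, 2, (n_dev, n_bits))       # one ID per chip
        flips = rng.random((n_reads, n_bits)) < 0.005       # toy 0.5% read noise
        rereads = devices[0] ^ flips                        # re-reads of chip 0

        inter = [np.count_nonzero(devices[i] ^ devices[j]) / n_bits
                 for i in range(n_dev) for j in range(i + 1, n_dev)]
        intra = [np.count_nonzero(devices[0] ^ r) / n_bits for r in rereads]
        print(f"inter-PUF HD: mean {np.mean(inter):.3f}  min {min(inter):.3f}")
        print(f"intra-PUF HD: mean {np.mean(intra):.3f}  max {max(intra):.3f}")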

  19. Polarization chaos and random bit generation in nonlinear fiber optics induced by a time-delayed counter-propagating feedback loop.

    PubMed

    Morosi, J; Berti, N; Akrout, A; Picozzi, A; Guasoni, M; Fatome, J

    2018-01-22

    In this manuscript, we experimentally and numerically investigate the chaotic dynamics of the state of polarization in a nonlinear optical fiber due to the cross-interaction between an incident signal and its intense backward replica generated at the fiber end through an amplified reflective delayed loop. Thanks to the cross-polarization interaction between the two delayed counter-propagating waves, the output polarization exhibits fast temporal chaotic dynamics, which enables a powerful scrambling process with moving speeds up to 600 krad/s. The performance of this all-optical scrambler was then evaluated on a 10-Gbit/s On/Off Keying telecom signal, achieving error-free transmission. We also describe how these temporal chaotic polarization fluctuations can be exploited as an all-optical random number generator. To this aim, a billion-bit sequence was experimentally generated and successfully validated against the dieharder suite of statistical benchmarking tools. Our experimental analysis is supported by numerical simulations based on the resolution of counter-propagating coupled nonlinear propagation equations, which confirm the observed behaviors.

  20. Processing and Prolonged 500 C Testing of 4H-SiC JFET Integrated Circuits with Two Levels of Metal Interconnect

    NASA Technical Reports Server (NTRS)

    Spry, David J.; Neudeck, Philip G.; Chen, Liangyu; Lukco, Dorothy; Chang, Carl W.; Beheim, Glenn M.; Krasowski, Michael J.; Prokop, Norman F.

    2015-01-01

    Complex integrated circuit (IC) chips rely on more than one level of interconnect metallization for routing of electrical power and signals. This work reports the processing and testing of 4H-SiC junction field effect transistor (JFET) prototype IC's with two levels of metal interconnect capable of prolonged operation at 500 C. Packaged functional circuits including 3- and 11-stage ring oscillators, a 4-bit digital to analog converter, and a 4-bit address decoder and random access memory cell have been demonstrated at 500 C. A 3-stage oscillator functioned for over 3000 hours at 500 C in air ambient. Improved reproducibility remains to be accomplished.

  1. High density submicron magnetoresistive random access memory (invited)

    NASA Astrophysics Data System (ADS)

    Tehrani, S.; Chen, E.; Durlam, M.; DeHerrera, M.; Slaughter, J. M.; Shi, J.; Kerszykowski, G.

    1999-04-01

    Various giant magnetoresistance material structures were patterned and studied for their potential as memory elements. The preferred memory element, based on pseudo-spin valve structures, was designed with two magnetic stacks (NiFeCo/CoFe) of different thickness with Cu as an interlayer. The difference in thickness results in dissimilar switching fields due to the shape anisotropy at deep submicron dimensions. It was found that a lower switching current can be achieved when the bits have a word line that wraps around the bit 1.5 times. Submicron memory elements integrated with complementary metal-oxide-semiconductor (CMOS) transistors maintained their characteristics and no degradation to the CMOS devices was observed. Selectivity between memory elements in high-density arrays was demonstrated.

  2. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    NASA Astrophysics Data System (ADS)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited local storage is one reason to switch to cloud storage, and the confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are converted into 8 random images using the Least Significant Bit (LSB) algorithm, which embeds the result of the DES cipher; the images are later merged into one.
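    As an illustration of the LSB half of the scheme only (a sketch assuming an 8-bit grayscale cover image; the DES encryption is assumed to have happened elsewhere, and an arbitrary 8-byte block stands in for one DES cipher block):

        # LSB steganography sketch: each payload bit replaces the least
        # significant bit of one pixel; extraction reads the LSBs back.
        import numpy as np

        def lsb_embed(cover, payload):        # cover: uint8 array, payload: bytes
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = cover.flatten()            # flatten() copies, cover untouched
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
            return flat.reshape(cover.shape)

        def lsb_extract(stego, n_bytes):
            bits = stego.flatten()[:8 * n_bytes] & 1
            return np.packbits(bits).tobytes()

        cover = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)
        block = b"\x8a\x13\xf0\x42\x07\x99\xac\x5d"   # stand-in cipher block
        stego = lsb_embed(cover, block)
        assert lsb_extract(stego, len(block)) == block
        print("embedded and recovered one cipher block")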

  3. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques are presented for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error locator polynomial and solve for its roots.
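    To make the direct-from-syndrome idea concrete, here is a toy single-error corrector for a length-15 RS-type code over GF(16) (primitive polynomial x^4 + x + 1; an illustrative construction, not the decoder of the paper). With a single error of value e at position j, the syndromes are S1 = e·α^j and S2 = e·α^(2j), so location and value follow from two divisions, with no locator polynomial:

        # Build GF(16) antilog/log tables, then correct one symbol error
        # directly from the two syndromes.
        EXP, LOG = [0] * 30, [0] * 16
        x = 1
        for i in range(15):
            EXP[i] = EXP[i + 15] = x          # duplicated so LOG sums never wrap
            LOG[x] = i
            x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)  # multiply by alpha

        def gf_mul(a, b):
            return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

        def correct_single(r):                # r: 15 GF(16) symbols, <=1 error
            S1 = S2 = 0
            for j, c in enumerate(r):         # syndromes at alpha and alpha^2
                S1 ^= gf_mul(c, EXP[j % 15])
                S2 ^= gf_mul(c, EXP[(2 * j) % 15])
            if S1 == 0 and S2 == 0:
                return r                      # zero syndrome: no error seen
            j = (LOG[S2] - LOG[S1]) % 15      # error location from S2/S1
            r[j] ^= EXP[(LOG[S1] - j) % 15]   # error value S1 / alpha^j
            return r                          # (>=2 errors not handled here)

        word = [0] * 15                       # all-zero word is a codeword
        word[6] ^= 0b1011                     # inject a single symbol error
        print(correct_single(word) == [0] * 15)   # True: error corrected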

  4. Training Anxious Children to Disengage Attention from Threat: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Bar-Haim, Yair; Morag, Inbar; Glickman, Shlomit

    2011-01-01

    Background: Threat-related attention biases have been implicated in the etiology and maintenance of anxiety disorders. As a result, attention bias modification (ABM) protocols have been employed as treatments for anxious adults. However, they have yet to emerge for children. A randomized, double-blind placebo-controlled trial was conducted to…

  5. POLICY IMPLICATIONS OF ADJUSTING RANDOMIZED TRIAL DATA FOR ECONOMIC EVALUATIONS: A DEMONSTRATION FROM THE ASCUS-LSIL TRIAGE STUDY

    PubMed Central

    Campos, Nicole G.; Castle, Philip E.; Schiffman, Mark; Kim, Jane J.

    2013-01-01

    Background Although the randomized controlled trial (RCT) is widely considered the most reliable method for evaluation of health care interventions, challenges to both internal and external validity exist. Thus, the efficacy of an intervention in a trial setting does not necessarily represent the real-world performance that decision makers seek to inform comparative effectiveness studies and economic evaluations. Methods Using data from the ASCUS-LSIL Triage Study (ALTS), we performed a simplified economic evaluation of age-based management strategies to detect cervical intraepithelial neoplasia grade 3 (CIN3) among women who were referred to the study with low-grade squamous intraepithelial lesions (LSIL). We used data from the trial itself to adjust for 1) potential lead time bias and random error that led to variation in the observed prevalence of CIN3 by study arm, and 2) potential ascertainment bias among providers in the most aggressive management arm. Results We found that using unadjusted RCT data may result in counterintuitive cost-effectiveness results when random error and/or bias are present. Following adjustment, the rank order of management strategies changed for two of the three age groups we considered. Conclusion Decision analysts need to examine study design, available trial data and cost-effectiveness results closely in order to detect evidence of potential bias. Adjustment for random error and bias in RCTs may yield different policy conclusions relative to unadjusted trial data. PMID:22147881

  6. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss of power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.

  7. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering.

    PubMed

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-10-24

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. By contrast, polarization-insensitive diffusion-like scattering can also be successfully achieved by a digital metasurface encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.

  8. Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.

    1982-04-01

    This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.

  9. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering

    PubMed Central

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-01-01

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. By contrast, polarization-insensitive diffusion-like scattering can also be successfully achieved by a digital metasurface encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence. PMID:27775064

  10. MacWilliams Identity for M-Spotty Weight Enumerator

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Fujiwara, Eiji

    M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
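    For reference, the classical special case cited at the end of the abstract, the MacWilliams identity for the binary Hamming weight enumerator, reads as follows (standard textbook form; W_C is the weight enumerator of a binary linear code C of length n, wt(c) the Hamming weight, and C^⊥ the dual code):

        W_{C^{\perp}}(x, y) = \frac{1}{|C|}\, W_C(x + y,\; x - y),
        \qquad
        W_C(x, y) = \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\mathrm{wt}(c)}

    Over GF(q) the first argument generalizes to x + (q-1)y; the paper's m-spotty identity reduces to this Hamming-weight case when t = 1 or t = b.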

  11. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function, to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311

  12. Word timing recovery in direct detection optical PPM communication systems with avalanche photodiodes using a phase lock loop

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Davidson, Frederic M.

    1990-01-01

    A technique for word timing recovery in a direct-detection optical PPM communication system is described. It tracks back-to-back pulse pairs in the received random PPM data sequence with the use of a phase-locked loop. The experimental system consisted of an 833-nm AlGaAs laser diode transmitter and a silicon avalanche photodiode photodetector, and it used Q = 4 PPM signaling at a source data rate of 25 Mb/s. The mathematical model developed to describe system performance is shown to be in good agreement with the experimental measurements. Use of this recovered PPM word clock with a slot clock recovery system caused no measurable penalty in receiver sensitivity. The completely self-synchronized receiver was capable of acquiring and maintaining both slot and word synchronization for input optical signal levels as low as 20 average detected photons per information bit. The receiver achieved a bit error probability of 10^-6 at less than 60 average detected photons per information bit.

  13. A hybrid-type quantum random number generator

    NASA Astrophysics Data System (ADS)

    Hai-Qiang, Ma; Wu, Zhu; Ke-Jin, Wei; Rui-Xue, Li; Hong-Wei, Liu

    2016-05-01

    This paper proposes a well-performing hybrid-type truly quantum random number generator based on the time interval between two independent single-photon detection signals; it is practical and intuitive, and generates the initial random number sources from a combination of multiple existing random number sources. A time-to-amplitude converter and multichannel analyzer are used for qualitative analysis to demonstrate that each and every step is random. Furthermore, a carefully designed data acquisition system is used to obtain a high-quality random sequence. Our scheme is simple and shows that the random number bit rate can be dramatically increased to satisfy practical requirements. Project supported by the National Natural Science Foundation of China (Grant Nos. 61178010 and 11374042), the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China, and the Fundamental Research Funds for the Central Universities of China (Grant No. bupt2014TS01).
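
    A common way to turn photon timing into bits, shown here as a hedged sketch (the comparison rule is a standard choice, not necessarily this paper's exact acquisition logic): compare two successive inter-detection intervals. Because each pair of intervals is exchangeable, the comparison bit is unbiased by symmetry.

      import numpy as np

      rng = np.random.default_rng(42)
      intervals = rng.exponential(scale=1.0, size=100_000)  # Poissonian gaps
      t1, t2 = intervals[0::2], intervals[1::2]
      bits = (t1 > t2).astype(int)    # exchangeable pairs -> unbiased bits
      print(bits.mean())              # ~0.5; float ties are negligible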

  14. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    A true random number generator (TRNG) is a critical component in hardware security, which is increasingly important in the era of mobile computing and the Internet of Things. Here we demonstrate a TRNG that uses the intrinsic variation of memristors as a natural source of entropy, a property that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum-oxide-based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ˜30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.
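
    The pair-comparison and alternating-read ideas can be sketched directly (values are illustrative, not the measured device statistics): each cycle compares the two off-state resistances, and alternating which device maps to '1' cancels any fixed mismatch between the pair.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100_000
      # Off-state resistances with cycle-to-cycle variation plus a deliberate
      # mismatch between devices A and B that would bias a naive comparison.
      Ra = rng.lognormal(mean=np.log(1.00e6), sigma=0.3, size=n)
      Rb = rng.lognormal(mean=np.log(1.05e6), sigma=0.3, size=n)

      naive = (Ra > Rb).astype(int)            # biased by the mismatch
      alt = naive.copy()
      alt[1::2] = 1 - naive[1::2]              # swap device roles on odd reads
      print(naive.mean(), alt.mean())          # alt is much closer to 0.5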

  15. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

    A true random number generator (TRNG) is a critical component in hardware security, which is increasingly important in the era of mobile computing and the Internet of Things. Here we demonstrate a TRNG that uses the intrinsic variation of memristors as a natural source of entropy, a property that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum-oxide-based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ∼30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.

  16. Source-Device-Independent Ultrafast Quantum Random Number Generation.

    PubMed

    Marangon, Davide G; Vallone, Giuseppe; Villoresi, Paolo

    2017-02-10

    Secure random numbers are a fundamental element of many applications in science, statistics, cryptography and, more generally, in security protocols. We present a method that enables the generation of high-speed unpredictable random numbers from the quadratures of an electromagnetic field without any assumption on the input state. The method allows us to eliminate the numbers that can be predicted due to the presence of classical and quantum side information. In particular, we introduce a procedure to estimate a bound on the conditional min-entropy based on the entropic uncertainty principle for the position and momentum observables of infinite-dimensional quantum systems. With this method, we experimentally demonstrate the generation of secure true random bits at a rate greater than 1.7 Gbit/s.
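
    After the min-entropy bound is established, the remaining step in any such generator is classical randomness extraction. A minimal sketch of the standard tool (Toeplitz hashing; the dimensions are illustrative, not the paper's): compress n raw bits down to roughly n times the min-entropy rate using a seeded Toeplitz matrix.

      import numpy as np

      rng = np.random.default_rng(3)
      n, k = 256, 96                          # raw bits in, extracted bits out
      raw = rng.integers(0, 2, size=n)        # stand-in for raw quadrature bits
      diag = rng.integers(0, 2, size=n + k - 1)     # seed defining the matrix
      T = np.array([[diag[i - j + n - 1] for j in range(n)] for i in range(k)])
      secure = T @ raw % 2                    # k nearly uniform bits
      print(secure[:16])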

  17. Perceptions of Randomness: Why Three Heads Are Better than Four

    ERIC Educational Resources Information Center

    Hahn, Ulrike; Warren, Paul A.

    2009-01-01

    A long tradition of psychological research has lamented the systematic errors and biases in people's perception of the characteristics of sequences generated by a random mechanism such as a coin toss. It is proposed that once the likely nature of people's actual experience of such processes is taken into account, these "errors" and "biases"…

  18. Mapping ecological systems with a random forest model: tradeoffs between errors and bias

    Treesearch

    Emilie Grossmann; Janet Ohmann; James Kagan; Heather May; Matthew Gregory

    2010-01-01

    New methods for predictive vegetation mapping allow improved estimation of plant community composition across large regions. Random Forest (RF) models limit the over-fitting problems of other methods and are known for making accurate classification predictions from noisy, non-normal data, but they can be biased when plot samples are unbalanced. We developed two contrasting...

  19. The Evaluation of Bias of the Weighted Random Effects Model Estimators. Research Report. ETS RR-11-13

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner,…

  20. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications

    PubMed Central

    Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for the key sizes 233 and 163 bits. For the group operations, the finite-field arithmetic operations, e.g. multiplication, are designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs in a Xilinx Virtex-7 FPGA for Koblitz and random curves, respectively, and 0.81 μs in 65-nm ASIC technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature. PMID:28459831
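
    Point multiplication itself, the operation all of this accelerates, reduces to repeated doubling and adding along the bits of the scalar. A hedged sketch in affine coordinates over a tiny prime field (the paper works over NIST binary fields in Jacobian projective coordinates; this toy curve is for illustration only):

      # Toy curve y^2 = x^3 + 2x + 3 over GF(97).
      p, a = 97, 2

      def ec_add(P, Q):
          if P is None: return Q              # None is the point at infinity
          if Q is None: return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % p == 0:
              return None
          if P == Q:
              lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
          x3 = (lam * lam - x1 - x2) % p
          return (x3, (lam * (x1 - x3) - y1) % p)

      def ec_mul(k, P):                       # left-to-right double-and-add
          R = None
          for bit in bin(k)[2:]:
              R = ec_add(R, R)                # double
              if bit == '1':
                  R = ec_add(R, P)            # add
          return R

      G = (3, 6)      # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
      print(ec_mul(20, G))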

  1. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    PubMed

    Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for the key sizes 233 and 163 bits. For the group operations, the finite-field arithmetic operations, e.g. multiplication, are designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs in a Xilinx Virtex-7 FPGA for Koblitz and random curves, respectively, and 0.81 μs in 65-nm ASIC technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.

  2. Reader reaction on estimation of treatment effects in all-comers randomized clinical trials with a predictive marker.

    PubMed

    Korn, Edward L; Freidlin, Boris

    2017-06-01

    For a fallback randomized clinical trial design with a marker, Choai and Matsui (2015, Biometrics 71, 25-32) estimate the bias of the estimator of the treatment effect in the marker-positive subgroup conditional on the treatment effect not being statistically significant in the overall population. This is used to construct and examine conditionally bias-corrected estimators of the treatment effect for the marker-positive subgroup. We argue that it may not be appropriate to correct for conditional bias in this setting. Instead, we consider the unconditional bias of estimators of the treatment effect for marker-positive patients. © 2016, The International Biometric Society.

  3. Reducing bias in survival under non-random temporary emigration

    USGS Publications Warehouse

    Peñaloza, Claudia L.; Kendall, William L.; Langtimm, Catherine Ann

    2014-01-01

    Despite intensive monitoring, temporary emigration from the sampling area can induce bias severe enough for managers to discard life-history parameter estimates toward the terminus of the time series (terminal bias). Under random temporary emigration, unbiased parameters can be estimated with CJS models. However, unmodeled Markovian temporary emigration causes bias in parameter estimates, and an unobservable state is required to model this type of emigration. The robust design is most flexible when modeling temporary emigration, and partial solutions to mitigate bias have been identified; nonetheless, there are conditions where terminal bias prevails. Long-lived species with high adult survival and highly variable non-random temporary emigration present terminal bias in survival estimates, despite being modeled with the robust design and suggested constraints. Because this bias is due to uncertainty about the fate of individuals that are undetected toward the end of the time series, solutions should involve using additional information on the survival status or location of these individuals at that time. Using simulation, we evaluated the performance of models that jointly analyze robust design data and an additional source of ancillary data (predictive covariate on temporary emigration, telemetry, dead recovery, or auxiliary resightings) in reducing terminal bias in survival estimates. The auxiliary resighting and predictive covariate models reduced terminal bias the most. Additional telemetry data were effective at reducing terminal bias only when individuals were tracked for a minimum of two years. High adult survival of long-lived species made the joint model with recovery data ineffective at reducing terminal bias because of small-sample bias. The naïve constraint model (last and penultimate temporary emigration parameters made equal) was the least efficient, though still able to reduce terminal bias when compared to an unconstrained model. Joint analysis of several sources of data improved parameter estimates and reduced terminal bias. Efforts to incorporate or acquire such data should be considered by researchers and wildlife managers, especially in the years leading up to status assessments of species of interest. Simulation modeling is a very cost-effective method to explore the potential impacts of using different sources of data to produce high-quality demographic data to inform management.

  4. A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mo, C. D.

    1978-01-01

    An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder made one fewer bit error.
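
    For reference, a rate-1/2, constraint-length-7 convolutional encoder of the kind decoded here is only a few lines. A hedged sketch (the 171/133 octal generator pair is the common K = 7 choice in space links; the study's exact generators are an assumption on our part):

      G1, G2, K = 0o171, 0o133, 7

      def conv_encode(bits):
          state, out = 0, []
          for b in bits:
              state = ((state << 1) | b) & ((1 << K) - 1)   # shift register
              out += [bin(state & G1).count('1') % 2,       # parity of taps
                      bin(state & G2).count('1') % 2]
          return out

      print(conv_encode([1, 0, 1, 1, 0, 0, 0]))   # 2 output bits per input bit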

  5. Single-Word Multiple-Bit Upsets in Static Random Access Devices

    DTIC Science & Technology

    1998-01-15

    Criswell, T.L., P.R. Measel, and K.L. Walin, "Single Event Upset Testing with Relativistic Heavy Ions," IEEE Transactions on Nuclear Science, NS-31, 1559-1561, 1984.

  6. CCD image sensor induced error in PIV applications

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (˜0.1 pixels), the order of magnitude that other typical PIV errors, such as peak locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can then be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  7. Continuous chain bit with downhole cycling capability

    DOEpatents

    Ritter, Don F.; St. Clair, Jack A.; Togami, Henry K.

    1983-01-01

    A continuous chain bit for hard rock drilling is capable of downhole cycling. A drill head assembly moves axially relative to a support body while the chain on the head assembly is held in position so that the bodily movement of the chain cycles the chain to present new composite links for drilling. A pair of spring fingers on opposite sides of the chain hold the chain against movement. The chain is held in tension by a spring-biased tensioning bar. A head at the working end of the chain supports the working links. The chain is centered by a reversing pawl and piston actuated by the pressure of the drilling mud. Detent pins lock the head assembly with respect to the support body and are also operated by the drilling mud pressure. A restricted nozzle with a divergent outlet sprays drilling mud into the cavity to remove debris. Indication of the centered position of the chain is provided by noting a low pressure reading indicating proper alignment of drilling mud slots on the links with the corresponding feed branches.

  8. Attention bias modification augments cognitive-behavioral group therapy for social anxiety disorder: a randomized controlled trial.

    PubMed

    Lazarov, Amit; Marom, Sofi; Yahalom, Naomi; Pine, Daniel S; Hermesh, Haggai; Bar-Haim, Yair

    2017-12-20

    Cognitive-behavioral group therapy (CBGT) is a first-line treatment for social anxiety disorder (SAD). However, since many patients remain symptomatic post-treatment, there is a need for augmenting procedures. This randomized controlled trial (RCT) examined the potential augmentation effect of attention bias modification (ABM) for CBGT. Fifty patients with SAD from three therapy groups were randomized to receive an 18-week standard CBGT with either ABM designed to shift attention away from threat (CBGT + ABM), or a placebo protocol not designed to modify threat-related attention (CBGT + placebo). Therapy groups took place in a large mental health center. Clinician and self-report measures of social anxiety and depression were acquired pre-treatment, post-treatment, and at 3-month follow-up. Attention bias was assessed at pre- and post-treatment. Patients randomized to the CBGT + ABM group, relative to those randomized to the CBGT + placebo group, showed greater reductions in clinician-rated SAD symptoms post-treatment, with effects maintained at 3-month follow-up. Group differences were not evident for self-report or attention-bias measures, with similar reductions in both groups. Finally, reduction in attention bias did not mediate the association between group and reduction in Liebowitz Social Anxiety Scale Structured Interview (LSAS) scores. This is the first RCT to examine the possible augmenting effect of ABM added to group-based cognitive-behavioral therapy for adult SAD. Training patients' attention away from threat might augment the treatment response to standard CBGT in SAD, a possibility that could be further evaluated in large-scale RCTs.

  9. Criticality of Low-Energy Protons in Single-Event Effects Testing of Highly-Scaled Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Marshall, Paul W.; Rodbell, Kenneth P.; Gordon, Michael S.; LaBel, Kenneth A.; Schwank, James R.; Dodds, Nathaniel A.; Castaneda, Carlos M.; Berg, Melanie D.; Kim, Hak S.; et al.

    2014-01-01

    We report low-energy proton and low-energy alpha particle single-event effects (SEE) data on 32 nm silicon-on-insulator (SOI) complementary metal-oxide-semiconductor (CMOS) latches and static random access memory (SRAM) that demonstrate the criticality of using low-energy protons for SEE testing of highly scaled technologies. Low-energy protons produced a significantly higher fraction of multi-bit upsets relative to single-bit upsets when compared to similar alpha particle data. This difference highlights the importance of performing hardness-assurance testing with protons whose energy distribution includes components below 2 MeV. The importance of low-energy protons to system-level single-event performance depends on the technology under investigation as well as the target radiation environment.

  10. A Wearable Healthcare System With a 13.7 μA Noise Tolerant ECG Processor.

    PubMed

    Izumi, Shintaro; Yamashita, Ken; Nakano, Masanao; Kawaguchi, Hiroshi; Kimura, Hiromitsu; Marumoto, Kyoji; Fuchikami, Takaaki; Fujimori, Yoshikazu; Nakajima, Hiroshi; Shiga, Toshikazu; Yoshimoto, Masahiko

    2015-10-01

    To help prevent lifestyle diseases, wearable bio-signal monitoring systems for daily-life monitoring have attracted attention. Wearable systems have strict size and weight constraints, which impose significant limitations on the battery capacity and the signal-to-noise ratio of bio-signals. This report describes an electrocardiograph (ECG) processor for use in a wearable healthcare system. It comprises an analog front end, a 12-bit ADC, a robust instantaneous heart rate (IHR) monitor, a 32-bit Cortex-M0 core, and 64 KByte of ferroelectric random access memory (FeRAM). The IHR monitor uses a short-term autocorrelation (STAC) algorithm to improve heart-rate detection accuracy in noisy conditions. The ECG processor chip consumes 13.7 μA for a heart-rate logging application.
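
    The autocorrelation idea behind the IHR monitor fits in a few lines (an illustration of the principle only; the chip's actual STAC implementation is not described in this abstract): the lag of the autocorrelation peak, searched over a physiologic range of periods, gives the beat-to-beat interval even with substantial additive noise.

      import numpy as np

      fs = 250                                    # Hz, assumed sampling rate
      t = np.arange(0, 4, 1 / fs)
      ecg = np.sin(2 * np.pi * 1.2 * t) ** 63     # crude 72-bpm spiky waveform
      ecg += 0.2 * np.random.default_rng(0).normal(size=t.size)   # noise

      win = ecg - ecg.mean()
      ac = np.correlate(win, win, mode='full')[win.size - 1:]  # autocorrelation
      lo, hi = int(0.3 * fs), int(2.0 * fs)       # allow 30-200 bpm periods
      period = lo + np.argmax(ac[lo:hi])          # lag of the peak, in samples
      print(60 * fs / period, 'bpm')              # ~72 bpm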

  11. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    PubMed

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and the important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and a simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6, with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretation. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, and so elevate the plane of discussion regarding study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
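
    The draw-adjust-repeat loop described above is compact enough to sketch (the bias model and every parameter value here are illustrative, not those of the published analysis): each iteration draws bias parameters from their assigned distributions, adjusts the conventional estimate, and the spread of adjusted results yields the simulation interval.

      import numpy as np

      rng = np.random.default_rng(1)
      conv_rr = 2.6                              # conventional ratio estimate
      iters = 50_000

      # Assumed bias model: multiplicative confounding, lognormal on the
      # ratio scale.
      bias_rr = rng.lognormal(mean=np.log(1.6), sigma=0.35, size=iters)
      adjusted = conv_rr / bias_rr               # bias-adjusted estimates

      # Fold random error back in on the log scale (illustrative SE).
      adjusted = np.exp(np.log(adjusted) + rng.normal(0, 0.45, size=iters))

      lo, med, hi = np.percentile(adjusted, [2.5, 50, 97.5])
      print(f'median {med:.1f}, 95% simulation interval {lo:.1f}-{hi:.1f}')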

  12. Membrane distributed-reflector laser integrated with SiOx-based spot-size converter on Si substrate.

    PubMed

    Nishi, Hidetaka; Fujii, Takuro; Takeda, Koji; Hasebe, Koichi; Kakitsuka, Takaaki; Tsuchizawa, Tai; Yamamoto, Tsuyoshi; Yamada, Koji; Matsuo, Shinji

    2016-08-08

    We demonstrate the monolithic integration of a 50-μm-long-cavity membrane distributed-reflector laser with a spot-size converter, consisting of a tapered InP wire waveguide and an SiOx waveguide, on an SiO2/Si substrate. The device exhibits a modulation efficiency of 9.4 GHz/mA^0.5 with a 2.2-dB fiber coupling loss. We demonstrate 25.8-Gbit/s direct modulation with a bias current of 2.5 mA, resulting in a low energy cost of 132 fJ/bit.

  13. Ultralow drive voltage silicon traveling-wave modulator.

    PubMed

    Baehr-Jones, Tom; Ding, Ran; Liu, Yang; Ayazi, Ali; Pinguet, Thierry; Harris, Nicholas C; Streshinsky, Matt; Lee, Poshen; Zhang, Yi; Lim, Andy Eu-Jin; Liow, Tsung-Yang; Teo, Selin Hwee-Gee; Lo, Guo-Qiang; Hochberg, Michael

    2012-05-21

    There has been great interest in the silicon platform as a material system for integrated photonics. A key challenge is the development of a low-power, low drive voltage, broadband modulator. Drive voltages at or below 1 Vpp are desirable for compatibility with CMOS processes. Here we demonstrate a CMOS-compatible broadband traveling-wave modulator based on a reverse-biased pn junction. We demonstrate operation with a drive voltage of 0.63 Vpp at 20 Gb/s, a significant improvement in the state of the art, with an RF energy consumption of only 200 fJ/bit.

  14. Undesirable Choice Biases with Small Differences in the Spatial Structure of Chance Stimulus Sequences.

    PubMed

    Herrera, David; Treviño, Mario

    2015-01-01

    In two-alternative discrimination tasks, experimenters usually randomize the location of the rewarded stimulus so that systematic behavior with respect to irrelevant stimuli can only produce chance performance on the learning curves. One way to achieve this is to use random numbers derived from a discrete binomial distribution to create a 'full random training schedule' (FRS). When using FRS, however, sporadic but long laterally-biased training sequences occur by chance and such 'input biases' are thought to promote the generation of laterally-biased choices (i.e., 'output biases'). As an alternative, a 'Gellerman-like training schedule' (GLS) can be used. It removes most input biases by prohibiting the reward from appearing on the same location for more than three consecutive trials. The sequence of past rewards obtained from choosing a particular discriminative stimulus influences the probability of choosing that same stimulus on subsequent trials. Assuming that the long-term average ratio of choices matches the long-term average ratio of reinforcers, we hypothesized that a reduced amount of input biases in GLS compared to FRS should lead to a reduced production of output biases. We compared the choice patterns produced by a 'Rational Decision Maker' (RDM) in response to computer-generated FRS and GLS training sequences. To create a virtual RDM, we implemented an algorithm that generated choices based on past rewards. Our simulations revealed that, although the GLS presented fewer input biases than the FRS, the virtual RDM produced more output biases with GLS than with FRS under a variety of test conditions. Our results reveal that the statistical and temporal properties of training sequences interacted with the RDM to influence the production of output biases. Thus, discrete changes in the training paradigms did not translate linearly into modifications in the pattern of choices generated by a RDM. Virtual RDMs could be further employed to guide the selection of proper training schedules for perceptual decision-making studies.
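
    The two schedules are easy to contrast in simulation, as in this hedged sketch (our own minimal implementation, not the authors' code): FRS draws each rewarded side i.i.d., while the Gellerman-like schedule forces a switch after three consecutive trials on the same side, capping input-bias runs.

      import numpy as np

      rng = np.random.default_rng(5)

      def frs(n):                                # full random schedule
          return list(rng.integers(0, 2, size=n))

      def gls(n, max_run=3):                     # Gellerman-like schedule
          seq = []
          for _ in range(n):
              if len(seq) >= max_run and len(set(seq[-max_run:])) == 1:
                  seq.append(1 - seq[-1])        # forced switch caps the run
              else:
                  seq.append(int(rng.integers(0, 2)))
          return seq

      def longest_run(seq):
          run = best = 1
          for a, b in zip(seq[:-1], seq[1:]):
              run = run + 1 if a == b else 1
              best = max(best, run)
          return best

      print(longest_run(frs(10_000)), longest_run(gls(10_000)))  # e.g. 13 vs 3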

  15. The effects of noise due to random undetected tilts and paleosecular variation on regional paleomagnetic directions

    USGS Publications Warehouse

    Calderone, G.J.; Butler, R.F.

    1991-01-01

    Random tilting of a single paleomagnetic vector produces a distribution of vectors which is not rotationally symmetric about the original vector and therefore not Fisherian. Monte Carlo simulations were performed on two types of vector distributions: 1) distributions of vectors formed by perturbing a single original vector with a Fisher distribution of bedding poles (each defining a tilt correction) and 2) standard Fisher distributions. These simulations demonstrate that inclinations of vectors drawn from both distributions are biased toward shallow inclinations. The Fisher mean direction of the distribution of vectors formed by perturbing a single vector with random undetected tilts is biased toward shallow inclinations, but this bias is insignificant for angular dispersions of bedding poles less than 20°. -from Authors

  16. Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials

    PubMed Central

    Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.

    2013-01-01

    Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072

  17. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).

  18. Scale of reference bias and the evolution of health.

    PubMed

    Groot, Wim

    2003-09-01

    The analysis of subjective measures of well-being, such as self-reports by individuals about their health status, is frequently hampered by the problem of scale of reference bias. A particular form of scale of reference bias is age norming. In this study we corrected for scale of reference bias by allowing for individual-specific effects in an equation on subjective health. A random effects ordered response model was used to analyze scale of reference bias in self-reported health measures. The results indicate that if we do not control for unobservable individual-specific effects, the response to a subjective health state measure suffers from age norming. Age norming can be controlled for by a random effects estimation technique using longitudinal data. Further, estimates are presented of the rate of depreciation of health. Finally, simulations of life expectancy indicate that the estimated model provides a reasonably good fit to the true life expectancy.

  19. Balancing the popularity bias of object similarities for personalised recommendation

    NASA Astrophysics Data System (ADS)

    Hou, Lei; Pan, Xue; Liu, Kecheng

    2018-03-01

    Network-based similarity measures have found wide application in recommendation algorithms and have made significant contributions to uncovering users' potential interests. However, existing measures are generally biased in terms of popularity, in that popular objects tend to have more common neighbours with others and are thus considered more similar to others. Such popularity bias in similarity quantification results in biased recommendations, with either poor accuracy or poor diversity. Based on the bipartite network modelling of the user-object interactions, this paper first calculates the expected number of common neighbours of two objects with given popularities in random networks. A Balanced Common Neighbour similarity index is accordingly developed by removing the random-driven common neighbours, estimated as the expected number, from the total number. Recommendation experiments on three data sets show that balancing the popularity bias to a certain degree can significantly improve the recommendations' accuracy and diversity simultaneously.
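
    The correction at the heart of the index can be sketched under the usual random-bipartite-graph expectation (our simplification of the idea above): two objects with degrees ka and kb among m users share about ka*kb/m neighbours purely by chance, so that amount is subtracted from the raw common-neighbour count.

      import numpy as np

      rng = np.random.default_rng(2)
      m, n_obj = 400, 50                   # users x objects
      # Synthetic user-object matrix with heterogeneous object popularity.
      A = (rng.random((m, n_obj)) < rng.random(n_obj) * 0.2).astype(int)

      cn = A.T @ A                         # raw common-neighbour counts
      deg = A.sum(axis=0)                  # object popularities
      expected = np.outer(deg, deg) / m    # chance-level common neighbours
      balanced = cn - expected             # popularity-corrected similarity
      np.fill_diagonal(balanced, 0)
      print(balanced[:3, :3].round(2))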

  20. Student Sorting and Bias in Value Added Estimation: Selection on Observables and Unobservables. NBER Working Paper No. 14666

    ERIC Educational Resources Information Center

    Rothstein, Jesse

    2009-01-01

    Non-random assignment of students to teachers can bias value added estimates of teachers' causal effects. Rothstein (2008a, b) shows that typical value added models indicate large counter-factual effects of 5th grade teachers on students' 4th grade learning, indicating that classroom assignments are far from random. This paper quantifies the…

  1. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Bytes (+parity) to 1 KByte, 2 KByte, 4 KByte … 32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND flash memory which requires 8-bit correction in a 512-Byte-codeword ECC, a 17-times higher acceptable raw BER than with the conventional fixed 512-Byte-codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with a conventional ECC with a fixed large 32-KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Bytes (+parity), 1 KByte or 2 KByte is used, and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32-KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2-ms average latency of a 15,000-rpm HDD.
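
    The reason a longer codeword raises the acceptable raw BER is a binomial-tail effect, sketched below with illustrative (n, t) pairs rather than the paper's exact code strengths: an n-bit codeword with t-bit correction fails only when more than t bits are in error.

      from math import exp, lgamma, log

      def log_comb(n, k):
          return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

      def codeword_fail_prob(n_bits, t, raw_ber):
          # P(more than t bit errors), summed in log space to avoid overflow.
          ok = sum(exp(log_comb(n_bits, i) + i * log(raw_ber)
                       + (n_bits - i) * log(1 - raw_ber))
                   for i in range(t + 1))
          return max(0.0, 1.0 - ok)

      raw_ber = 1e-4
      for n_bytes, t in [(512, 8), (2048, 32), (32768, 512)]:
          print(n_bytes, 'B:', codeword_fail_prob(8 * n_bytes, t, raw_ber))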

  2. Trial Registration: Understanding and Preventing Reporting Bias in Social Work Research

    ERIC Educational Resources Information Center

    Harrison, Bronwyn A.; Mayo-Wilson, Evan

    2014-01-01

    Randomized controlled trials are considered the gold standard for evaluating social work interventions. However, published reports can systematically overestimate intervention effects when researchers selectively report large and significant findings. Publication bias and other types of reporting biases can be minimized through prospective trial…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hymel, Ross

    The Public Key (PK) FPGA software performs asymmetric authentication using the 163-bit Elliptic Curve Digital Signature Algorithm (ECDSA) on an embedded FPGA platform. A digital signature is created on user-supplied data, and communication with a host system is performed via a Serial Peripheral Interface (SPI) bus. The software includes all components necessary for signing, including a custom random number generator for key creation and SHA-256 for data hashing.

  4. Army Communicator. Volume 34, Number 2

    DTIC Science & Technology

    2009-01-01

    …tunneled into the NIPRNet traffic. The encryption hides the contents of the SIPRNet data through a process that randomizes the bit patterns… and technologies such as desktop applications, Virtual Private Network, Blackberry support, and the training and troubleshooting of complex computer… to your own Standing Operating Procedure and then contract for services off the backside to a local Strategic Entry Point or tunnel through…

  5. Applying Neural Networks in Optical Communication Systems: Possible Pitfalls

    NASA Astrophysics Data System (ADS)

    Eriksson, Tobias A.; Bulow, Henning; Leven, Andreas

    2017-12-01

    We investigate the risk of overestimating the performance gain when applying neural-network-based receivers in systems with pseudo-random bit sequences or with limited memory depths, resulting in repeated short patterns. We show that with such sequences a large artificial gain can be obtained, which comes from pattern prediction rather than from predicting or compensating the studied channel/phenomena.
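
    The pitfall is easy to demonstrate, since a PRBS obeys a short linear recurrence: its next bit is an exact function of a few previous bits, so a learner with enough memory can "gain" by predicting the sequence itself rather than the channel. A minimal sketch with PRBS7 (x^7 + x^6 + 1):

      def prbs7(n, state=0x7F):
          bits = []
          for _ in range(n):
              newbit = ((state >> 6) ^ (state >> 5)) & 1   # taps 7 and 6
              state = ((state << 1) | newbit) & 0x7F
              bits.append(newbit)
          return bits

      seq = prbs7(300)
      # Every bit is determined by the bits 7 and 6 positions earlier.
      assert all(seq[k] == seq[k - 7] ^ seq[k - 6] for k in range(7, len(seq)))
      print('PRBS7 is perfectly predictable from its own past')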

  6. Scanner Based Assessment in Exams Organized with Personalized Thesis Randomly Generated via Microsoft Word

    ERIC Educational Resources Information Center

    Teneqexhi, Romeo; Qirko, Margarita; Sharko, Genci; Vrapi, Fatmir; Kuneshka, Loreta

    2017-01-01

    Exam assessment is one of the most tedious tasks for university teachers all over the world. Multiple-choice tests make exam assessment a little easier, but the teacher cannot prepare more than 3-4 variants; in this case, the possibility of students cheating from one another becomes a risk to an "objective assessment outcome." On…

  7. Computational work and time on finite machines.

    NASA Technical Reports Server (NTRS)

    Savage, J. E.

    1972-01-01

    Measures of the computational work and computational delay required by machines to compute functions are given. Exchange inequalities are developed for random access, tape, and drum machines to show that product inequalities between storage and time, number of drum tracks and time, number of bits in an address and time, etc., must be satisfied to compute finite functions on bounded machines.

  8. Plated wire memory subsystem

    NASA Technical Reports Server (NTRS)

    Reynolds, L.; Tweed, H.

    1972-01-01

    The work performed entailed the design, development, construction and testing of a 4000-word by 18-bit random access, NDRO plated wire memory for use in conjunction with a spacecraft input/output unit and central processing unit. The primary design parameters, in order of importance, were high reliability, low power, volume and weight. A single memory unit, referred to as a qualification model, was delivered.

  9. The Effects of Cognitive Jamming on Wireless Sensor Networks Used for Geolocation

    DTIC Science & Technology

    2012-03-01

    …continuously sends out random bits to the channel without following any MAC-layer etiquette [31]. Normally, the underlying MAC protocol allows… …information is packaged and distributed on the network layer, only the physical measurements are considered. This protocol is used to detect faulty nodes…

  10. High Data Rate Quantum Cryptography

    NASA Astrophysics Data System (ADS)

    Kwiat, Paul; Christensen, Bradley; McCusker, Kevin; Kumor, Daniel; Gauthier, Daniel

    2015-05-01

    While quantum key distribution (QKD) systems are now commercially available, the data rate is a limiting factor for some desired applications (e.g., secure video transmission). Most QKD systems receive at most a single random bit per detection event, causing the data rate to be limited by the saturation of the single-photon detectors. Recent experiments have begun to explore using larger degrees of freedom, i.e., temporal or spatial qubits, to optimize the data rate. Here, we continue this exploration using entanglement in multiple degrees of freedom. That is, we use simultaneous temporal and polarization entanglement to reach up to 8.3 bits of randomness per coincident detection. Due to current technology, we are unable to fully secure the temporal degree of freedom against all possible future attacks; however, by assuming a technologically limited eavesdropper, we are able to obtain a 23.4 MB/s secure key rate across an optical table, after error reconciliation and privacy amplification. In this talk, we will describe our high-rate QKD experiment, with a short discussion of our work towards extending this system to ship-to-ship and ship-to-shore communication, aiming to secure the temporal degree of freedom and to implement a 30-km free-space link over a marine environment.

  11. Steganographic optical image encryption system based on reversible data hiding and double random phase encoding

    NASA Astrophysics Data System (ADS)

    Chuang, Cheng-Hung; Chen, Yen-Lin

    2013-02-01

    This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images using an encryption method for possible application in optical transmission systems. Steganographic optical image encryption based on the DRPE technique has been investigated as a way to hide secret data in encrypted images. However, the DRPE technique is vulnerable to attacks, and many of the data hiding methods in the DRPE system distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted image quality and performs a prior encryption process. Thus, the DRPE technique enables a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm encrypts the compressed binary bits and secret data for advanced security. Experimental results show that the proposed system achieves a high data embedding capacity and lossless reconstruction of the original images.

  12. Electric properties of a textured BiNaKTiO3 ceramic for energy harvesting system

    NASA Astrophysics Data System (ADS)

    Lim, D. H.; Song, T. K.; Lee, D. S.; Jeong, S. J.; Kim, Min-Soo; Song, Jae-Sung

    2012-01-01

    Piezoelectric ceramics with microstructural texturing were fabricated and evaluated to investigate their suitability for piezoelectric energy harvesting devices responding to external mechanical impact. The microstructural evolution and properties of a Bi0.5(Na0.425K0.075)TiO3 (BNKT) ceramic with plate-like Bi4Ti3O12 (BiT) were investigated. The plate-like BiT was used as a template to induce grain growth under a proper heat treatment. The textured BNKTs were fabricated and heated at 1150 °C for 10 h. They exhibited <001>-oriented large grains and improved ferroelectric properties. The textured microstructure was due to grain growth around the templates. When subjected to a low stress of 0.8 MPa, the textured BNKT produced a slightly larger voltage and power than the randomly oriented BNKT. Meanwhile, when high stresses over 2 MPa were applied, the voltage and power of the textured specimen were clearly larger than those of the randomly oriented specimen. The microstructure textured along the <100> direction may contribute to the improved power generation.

  13. Experimentally feasible quantum-key-distribution scheme using qubit-like qudits and its comparison with existing qubit- and qudit-based protocols

    NASA Astrophysics Data System (ADS)

    Chau, H. F.; Wang, Qinan; Wong, Cardythy

    2017-02-01

    Recently, Chau [Phys. Rev. A 92, 062324 (2015), 10.1103/PhysRevA.92.062324] introduced an experimentally feasible qudit-based quantum-key-distribution (QKD) scheme. In that scheme, one bit of information is phase encoded in the prepared state in a 2^n-dimensional Hilbert space in the form (|i⟩ ± |j⟩)/√2 with n ≥ 2. For each qudit prepared and measured in the same two-dimensional Hilbert subspace, one bit of raw secret key is obtained in the absence of transmission error. Here we show that by modifying the basis announcement procedure, the same experimental setup can generate n bits of raw key for each qudit prepared and measured in the same basis in the noiseless situation. The reason is that in addition to the phase information, each qudit also carries information on the Hilbert subspace used. The additional (n − 1) bits of raw key come from a clever utilization of this extra piece of information. We prove the unconditional security of this modified protocol and compare its performance with other existing provably secure qubit- and qudit-based protocols on the market in the one-way classical communication setting. Interestingly, we find that for the case of n = 2, the secret key rate of this modified protocol using a nondegenerate random quantum code to perform one-way entanglement distillation equals that of the six-state scheme.

  14. Comparison of reproduce signal and noise of conventional and keepered CoCrTa/Cr thin film media

    NASA Astrophysics Data System (ADS)

    Sin, Kyusik; Ding, Juren; Glijer, Pawel; Sivertsen, John M.; Judy, Jack H.; Zhu, Jian-Gang

    1994-05-01

    We studied keepered high coercivity CoCrTa/Cr thin film media with a Cr isolation layer between the CoCrTa storage and an overcoating of an isotropic NiFe soft magnetic layer. The influence of the thickness of the NiFe and Cr layers, and the effects of head bias current on the signal output and noise, were studied using a thin film head. The reproduced signal increased by 7.3 dB, but the signal-to-noise ratio decreased by 4 dB at a linear density of 2100 fr/mm (53.3 kfr/in.) with a 1000 Å thick NiFe keeper layer. The medium noise increased with increasing NiFe thickness and the signal output decreased with decreasing Cr thickness. A low output signal obtained with very thin Cr may be due to magnetic interactions between the keeper layer and magnetic media layer. It is observed that signal distortion and timing asymmetry of the output signals depend on the thickness of the keeper layer and the head bias current. The signal distortion increased and the timing asymmetry decreased as the head bias current was increased. These results may be associated with different permeability of the keeper under the poles of the thin film head due to the superposition of head bias and bit fields.

  15. Design, processing, and testing of LSI arrays for space station

    NASA Technical Reports Server (NTRS)

    Lile, W. R.; Hollingsworth, R. J.

    1972-01-01

    The design of an MOS 256-bit Random Access Memory (RAM) is discussed. Technological achievements comprise computer simulations that accurately predict performance; aluminum-gate COS/MOS devices, including a 256-bit RAM with current sensing; and a silicon-gate process that is being used in the construction of a 256-bit RAM with voltage sensing. The Si-gate process increases speed by reducing the overlap capacitance between gate and source-drain, thus reducing the crossover capacitance and allowing shorter interconnections. The design of a Si-gate RAM, which is pin-for-pin compatible with an RCA bulk silicon COS/MOS memory (type TA 5974), is discussed in full. The Integrated Circuit Tester (ICT) is limited to dc evaluation, but the diagnostics and data collection are under computer control. The Silicon-on-Sapphire Memory Evaluator (SOS-ME, previously called SOS Memory Exerciser) measures power supply drain and performs a minimum number of tests to establish operation of the memory devices. The Macrodata MD-100 is a microprogrammable tester capable of extensive testing at speeds up to 5 MHz. Beam-lead technology was successfully integrated with SOS technology to make a simple device with beam leads. This device and the scribing are discussed.

  16. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (e.g., 1000 bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to find wide use in speech recognition and scene analysis, in signal detection and verification, and in the adaptive control of automated equipment; in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The major design issues faced in building such memories were resolved. The design of a prototype memory with 256-bit addresses and from 8 to 128 K locations for 256-bit words is described. A key aspect of the design is the extensive use of dynamic RAM and other standard components.
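
    The read/write mechanics described above fit in a short sketch (illustrative sizes, far smaller than the prototype's 256-bit addresses): hard locations are random addresses; a write adds the word, as +/-1 counters, to every location within a Hamming radius of the target address; a read sums the counters of the activated locations and thresholds at zero.

      import numpy as np

      rng = np.random.default_rng(0)
      n_addr, n_loc, radius = 64, 2000, 26   # address bits, locations, radius

      hard_addr = rng.integers(0, 2, size=(n_loc, n_addr))
      counters = np.zeros((n_loc, n_addr), dtype=int)

      def activated(addr):
          return (hard_addr != addr).sum(axis=1) <= radius   # Hamming ball

      def write(addr, word):
          counters[activated(addr)] += 2 * word - 1   # +1 for 1s, -1 for 0s

      def read(addr):
          return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

      word = rng.integers(0, 2, size=n_addr)
      addr = rng.integers(0, 2, size=n_addr)
      write(addr, word)
      noisy = addr.copy(); noisy[:5] ^= 1     # probe with a nearby address
      print((read(noisy) == word).mean())     # recovers essentially all bits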

  17. Capacity-optimized mp2 audio watermarking

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Dittmann, Jana

    2003-06-01

    Today a number of audio watermarking algorithms have been proposed, some of them of a quality making them suitable for commercial applications. The focus of most of these algorithms is copyright protection; therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified that stress other parameters, such as complexity or payload. In this paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit: depending on the bit to embed, we change the scale factors by adding 1 where necessary until the group includes either more even or more uneven scale factors. An uneven group has a 1 embedded; an even group, a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing perceived quality. As an application example, we introduce a prototypic karaoke system displaying song lyrics embedded as a watermark.
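
    The embedding rule lends itself to a direct sketch (group selection and key-based masking omitted; the group values are toy numbers): increment scale factors one at a time until the group's parity majority encodes the desired bit, then read the bit back from that majority.

      def parity_majority(group):            # 1 if odd values are the majority
          return int(sum(v % 2 for v in group) * 2 > len(group))

      def is_tie(group):
          return sum(v % 2 for v in group) * 2 == len(group)

      def embed_bit(group, bit):
          group = list(group)
          i = 0
          while is_tie(group) or parity_majority(group) != bit:
              group[i] += 1                  # +1 flips one value's parity
              i = (i + 1) % len(group)
          return group

      g = [12, 7, 20, 33]                    # toy scale-factor group
      for b in (0, 1):
          assert parity_majority(embed_bit(g, b)) == b
      print('ok')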

  18. A 1024×768-12μm Digital ROIC for uncooled microbolometer FPAs

    NASA Astrophysics Data System (ADS)

    Eminoglu, Selim

    2017-02-01

    This paper reports the development of a new digital microbolometer Readout Integrated Circuit (D-ROIC), called MT10212BD. It has a format of 1024 × 768 (XGA) and a pixel pitch of 12 μm. MT10212BD is Mikro Tasarim's second 12 μm pitch microbolometer ROIC, developed specifically for surface-micromachined microbolometer detector arrays with small pixel pitch using high-TCR pixel materials, such as VOx and a-Si. MT10212BD has an all-digital system-on-chip architecture, which generates programmable timing and biasing and performs 14-bit analog-to-digital conversion (ADC). The signal processing chain in the ROIC is composed of pixel bias circuitry and an integrator-based programmable gain amplifier followed by column-parallel ADC circuitry. MT10212BD has a serial programming interface that can be used to configure the programmable ROIC features and to load the Non-Uniformity Correction (NUC) data into the ROIC. MT10212BD has a total of 8 high-speed serial digital video outputs, which can be programmed to operate in 2-, 4-, and 8-output modes and can support frame rates above 60 fps. The high-speed serial digital outputs support data rates as high as 400 Mbit/s when operated at a 50 MHz system clock frequency. An on-chip phase-locked-loop (PLL) based timing circuit generates the high-speed clocks used in the ROIC. The ROIC is designed to support pixel resistance values ranging from 30 kΩ to 90 kΩ, with a nominal value of 60 kΩ. The ROIC has a globally programmable gain in the column readout, which can be adjusted based on the detector resistance value.

  19. Participation Bias among Suicidal Adults in a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Stirman, Shannon Wiltsey; Brown, Gregory K.; Ghahramanlou-Holloway, Marjan; Fox, Allison J.; Chohan, Mariam Zahid; Beck, Aaron T.

    2011-01-01

    Although individuals who attempt suicide have poor compliance rates with treatment recommendations, the nature and degree of participation bias in clinical treatment research among these individuals is virtually unknown. The purpose of this study was to examine participation bias by comparing the demographic and diagnostic characteristics of adult…

  20. Propensity Score Matching: Retrospective Randomization?

    PubMed

    Jupiter, Daniel C

    Randomized controlled trials are viewed as the optimal study design. In this commentary, we explore the strength of this design and its complexity. We also discuss some situations in which these trials are not possible, or not ethical, or not economical. In such situations, specifically, in retrospective studies, we should make every effort to recapitulate the rigor and strength of the randomized trial. However, we could be faced with an inherent indication bias in such a setting. Thus, we consider the tools available to address that bias. Specifically, we examine matching and introduce and explore a new tool: propensity score matching. This tool allows us to group subjects according to their propensity to be in a particular treatment group and, in so doing, to account for the indication bias. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
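
    A minimal sketch of the idea, illustrative only, with simulated data and an assumed caliper width: estimate each subject's propensity for treatment with a logistic model, then greedily match treated subjects to the nearest-propensity controls.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 500
        x = rng.normal(size=(n, 3))                          # observed prognostic covariates
        logit = x @ np.array([0.8, -0.5, 0.3])
        treated = rng.random(n) < 1 / (1 + np.exp(-logit))   # indication bias: x drives treatment

        ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

        # greedy 1:1 nearest-neighbour matching on the propensity score
        controls = list(np.flatnonzero(~treated))
        pairs = []
        for i in np.flatnonzero(treated):
            if not controls:
                break
            j = min(controls, key=lambda c: abs(ps[i] - ps[c]))
            if abs(ps[i] - ps[j]) < 0.05:                    # caliper width (assumed)
                pairs.append((i, j))
                controls.remove(j)
        print(f"{len(pairs)} matched pairs from {treated.sum()} treated subjects")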

  1. Long-term reliable physically unclonable function based on oxide tunnel barrier breakdown on two-transistors two-magnetic-tunnel-junctions cell-based embedded spin transfer torque magnetoresistive random access memory

    NASA Astrophysics Data System (ADS)

    Takaya, Satoshi; Tanamoto, Tetsufumi; Noguchi, Hiroki; Ikegami, Kazutaka; Abe, Keiko; Fujita, Shinobu

    2017-04-01

    Among the diverse applications of spintronics, security for internet-of-things (IoT) devices is one of the most important. A physically unclonable function (PUF) with a spin device (spin transfer torque magnetoresistive random access memory, STT-MRAM) is presented. Oxide tunnel barrier breakdown is used to realize long-term stability for PUFs. A secure PUF has been confirmed by evaluating the Hamming distance of a 32-bit STT-MRAM-PUF fabricated using 65 nm CMOS technology.
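
    The Hamming-distance evaluation mentioned in the abstract can be sketched as follows, with simulated responses rather than the fabricated device's data: uniqueness asks that responses from different chips differ in about half their bits, while reliability asks that re-reads of the same chip match.

        import numpy as np

        rng = np.random.default_rng(7)
        n_chips, n_bits = 100, 32
        # stand-in for breakdown-based PUF responses: one stable random bit per cell
        responses = rng.integers(0, 2, size=(n_chips, n_bits), dtype=np.uint8)

        # inter-chip (uniqueness): pairwise Hamming distance should centre on n_bits/2
        dists = [np.count_nonzero(responses[i] ^ responses[j])
                 for i in range(n_chips) for j in range(i + 1, n_chips)]
        print(f"mean inter-chip HD: {np.mean(dists):.1f} of {n_bits} (ideal {n_bits // 2})")

        # intra-chip (reliability): oxide breakdown is permanent, so a re-read of
        # the same chip should reproduce the response exactly (HD = 0)
        print("intra-chip HD:", np.count_nonzero(responses[0] ^ responses[0].copy()))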

  2. Enhancement of Speed Margins for 16× Digital Versatile Disc-Random Access Memory

    NASA Astrophysics Data System (ADS)

    Watanabe, Koichi; Minemura, Hiroyuki; Miyamoto, Makoto; Iimura, Makoto

    2006-02-01

    We have evaluated the speed margins of write/read 16× digital versatile disc-random access memory (DVD-RAM) test discs using write strategies for 6-16× constant angular velocity (CAV) control. Our approach is to determine the writing parameters for the middle zones by interpolating over the zone numbers. Using this interpolation strategy, we successfully obtained overwrite jitter values of less than 8% and bit error rates of less than 10⁻⁵ in 6-16× DVD-RAM. Moreover, we confirmed that the speed margins were ±20% for 6-16× CAV.

  3. Randomized controlled trials in dentistry: common pitfalls and how to avoid them.

    PubMed

    Fleming, Padhraig S; Lynch, Christopher D; Pandis, Nikolaos

    2014-08-01

    Clinical trials are used to appraise the effectiveness of clinical interventions throughout medicine and dentistry. Randomized controlled trials (RCTs) are established as the optimal primary design and are published with increasing frequency within the biomedical sciences, including dentistry. This review outlines common pitfalls associated with the conduct of randomized controlled trials in dentistry. Common failings in RCT design leading to various types of bias, including selection, performance, detection and attrition bias, are discussed in this review. Moreover, methods of minimizing and eliminating bias are presented to ensure that maximal benefit is derived from RCTs within dentistry. Well-designed RCTs have both upstream and downstream uses, acting as a template for development and populating systematic reviews to permit more precise estimates of treatment efficacy and effectiveness. However, there is increasing awareness of waste in clinical research, whereby resource-intensive studies fail to provide a commensurate level of scientific evidence. Waste may stem either from inappropriate design or from inadequate reporting of RCTs; the importance of robust conduct of RCTs within dentistry is clear. Optimal reporting of randomized controlled trials within dentistry is necessary to ensure that trials are reliable and valid. Common shortcomings leading to important forms of bias are discussed, and approaches to minimizing these issues are outlined. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Optimization of a PCRAM Chip for high-speed read and highly reliable reset operations

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyun; Chen, Houpeng; Li, Xi; Wang, Qian; Fan, Xi; Hu, Jiajun; Lei, Yu; Zhang, Qi; Tian, Zhen; Song, Zhitang

    2016-10-01

    The widely used traditional Flash memory suffers from performance limits such as serious crosstalk problems and the increasing complexity of floating-gate scaling. Phase change random access memory (PCRAM) has become one of the most promising nonvolatile memories among the new memory techniques. In this paper, a 1M-bit PCRAM chip is designed based on the SMIC 40 nm CMOS technology. Focusing on read and write performance, two new circuits with high-speed read operation and highly reliable reset operation are proposed. The high-speed read circuit effectively reduces the read time from 74 ns to 40 ns. The double-mode reset circuit improves the chip yield. This 1M-bit PCRAM chip has been simulated in Cadence. After layout design is completed, the chip will be taped out for post-silicon testing.

  5. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors never exceed a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one in a billion, where worst-case analysis considers that no significant bit remains. We use PVS because such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
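
    To make the reasoning step concrete, here is one way such a bound can be assembled from Markov's inequality; the notation and the per-operation error bound are assumptions for illustration, not the report's exact development.

        % Let $E_n \ge 0$ denote the accumulated error after $n$ operations
        % and $t > 0$ a tolerance threshold. Markov's inequality gives
        \[
          \Pr(E_n \ge t) \le \frac{\mathbb{E}[E_n]}{t},
          \qquad\text{hence}\qquad
          \Pr(E_n < t) \ge 1 - \frac{\mathbb{E}[E_n]}{t}.
        \]
        % If each operation contributes at most $\varepsilon$ to the expected error,
        % then $\mathbb{E}[E_n] \le n\varepsilon$, and choosing $t = 2^{-k}$ bounds
        % the probability that fewer than $k$ leading bits stay significant.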

  6. Robust random telegraph conductivity noise in single crystals of the ferromagnetic insulating manganite La0.86Ca0.14MnO3

    NASA Astrophysics Data System (ADS)

    Przybytek, J.; Fink-Finowicki, J.; Puźniak, R.; Shames, A.; Markovich, V.; Mogilyansky, D.; Jung, G.

    2017-03-01

    Robust random telegraph conductivity fluctuations have been observed in La0.86Ca0.14MnO3 manganite single crystals. At room temperature, the spectra of conductivity fluctuations are featureless and follow a 1/f shape in the entire experimental frequency and bias range. Upon lowering the temperature, clear Lorentzian bias-dependent excess noise appears on the 1/f background and eventually dominates the spectral behavior. In the time domain, fully developed Lorentzian noise appears as pronounced two-level random telegraph noise with a thermally activated switching rate, which does not depend on bias current and applied magnetic field. The telegraph noise is very robust and persists over an exceptionally wide temperature range of more than 50 K. The amplitude of the telegraph noise decreases exponentially with increasing bias current in exactly the same manner as the sample resistance increases with the current, pointing to dynamic current redistribution between percolation paths, dominated by phase-separated clusters with different conductivities, as a possible origin of the two-level conductivity fluctuations.

  7. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062

  8. Estimating causal contrasts involving intermediate variables in the presence of selection bias.

    PubMed

    Valeri, Linda; Coull, Brent A

    2016-11-20

    An important goal across the biomedical and social sciences is the quantification of the role of intermediate factors in explaining how an exposure exerts an effect on an outcome. Selection bias has the potential to severely undermine the validity of inferences on direct and indirect causal effects in observational as well as in randomized studies. The phenomenon of selection may arise through several mechanisms, and we here focus on instances of missing data. We study the sign and magnitude of selection bias in the estimates of direct and indirect effects when data on any of the factors involved in the analysis are either missing at random or not missing at random. Under some simplifying assumptions, the bias formulae can lead to nonparametric sensitivity analyses. These sensitivity analyses can be applied to causal effects on the risk-difference and risk-ratio scales irrespective of the estimation approach employed. To incorporate parametric assumptions, we also develop a sensitivity analysis for selection bias in mediation analysis in the spirit of the expectation-maximization algorithm. The approaches are applied to data from a health disparities study investigating the role of stage at diagnosis on racial disparities in colorectal cancer survival. Copyright © 2016 John Wiley & Sons, Ltd.

  9. R. A. Fisher and his advocacy of randomization.

    PubMed

    Hall, Nancy S

    2007-01-01

    The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.

  10. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    PubMed

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction of publication bias in meta-analysis focuses mainly on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
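
    A hedged sketch of the truncated-normal formulation under a significance-cutoff selection mechanism, with simulated studies and maximum likelihood fitting; the cutoff and parameter values are illustrative assumptions, not the paper's estimators.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import truncnorm

        rng = np.random.default_rng(3)
        mu_true, sigma_true, cut = 0.2, 0.4, 0.1    # only effects above `cut` get published

        effects = rng.normal(mu_true, sigma_true, size=2000)
        published = effects[effects > cut]          # the truncated, published literature

        def negloglik(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            a = (cut - mu) / sigma                  # scipy's standardized lower bound
            return -truncnorm.logpdf(published, a, np.inf, loc=mu, scale=sigma).sum()

        fit = minimize(negloglik, x0=[published.mean(), np.log(published.std())])
        mu_hat = fit.x[0]
        print(f"naive mean {published.mean():.3f}, corrected mu {mu_hat:.3f}, true {mu_true}")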

  11. The Effect of Framing on Surrogate Optimism Bias: A Simulation Study

    PubMed Central

    Patel, Dev; Cohen, Elan D.; Barnato, Amber E.

    2016-01-01

    Purpose To explore the effect of emotion priming and physician communication behaviors on optimism bias. Materials and Methods We conducted a 5 × 2 between-subject randomized factorial experiment using a web-based interactive video designed to simulate a family meeting for a critically ill spouse/parent. Eligibility criteria included age ≥ 35 and self-identifying as the surrogate for a spouse/parent. The primary outcome was the surrogate's election of code status. We defined optimism bias as the surrogate's estimate of prognosis with CPR exceeding their recollection of the physician's estimate. Results 256/373 respondents (69%) logged in and were randomized, and 220 (86%) had non-missing data for prognosis. 67/220 (30%) overall, and 56/173 (32%) of those with an accurate recollection of the physician's estimate, had optimism bias. Optimism bias correlated with choosing CPR (p<.001). Emotion priming (p=.397), physician attention to emotion (p=.537), and framing of CPR as the social norm (p=.884) did not affect optimism bias. Framing the decision as the patient's vs. the surrogate's (25% vs. 36%, p=.066) and describing the alternative to CPR as "allow natural death" instead of "do not resuscitate" (25% vs. 37%, p=.035) decreased optimism bias. Conclusions Framing of the CPR choice during code status conversations may influence surrogates' optimism bias. PMID:26796950

  12. Effect of Alcohol on Risk of Coronary Heart Disease and Stroke: Causality, Bias, or a Bit of Both?

    PubMed Central

    Emberson, Jonathan R; Bennett, Derrick A

    2006-01-01

    Epidemiological studies of middle-aged populations generally find the relationship between alcohol intake and the risk of coronary heart disease (CHD) and stroke to be either U- or J-shaped. This review describes the extent that these relationships are likely to be causal, and the extent that they may be due to specific methodological weaknesses in epidemiological studies. The consistency in the vascular benefit associated with moderate drinking (compared with non-drinking) observed across different studies, together with the existence of credible biological pathways, strongly suggests that at least some of this benefit is real. However, because of biases introduced by: choice of reference categories; reverse causality bias; variations in alcohol intake over time; and confounding, some of it is likely to be an artefact. For heavy drinking, different study biases have the potential to act in opposing directions, and as such, the true effects of heavy drinking on vascular risk are uncertain. However, because of the known harmful effects of heavy drinking on non-vascular mortality, the problem is an academic one. Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake. In particular, it is likely that any promotion of alcohol for health reasons would do substantially more harm than good. PMID:17326330

  13. Correcting Biases in a lower resolution global circulation model with data assimilation

    NASA Astrophysics Data System (ADS)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. We also constrain the stream function along the coasts in order not to have currents perpendicular to the coast. The randomly generated stochastic forcings are then directly injected into the NEMO LIM model's equations in order to force the model at each timestep, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with observations of the sea surface height available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
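
    The stream-function construction guarantees non-divergent velocity perturbations by design; the following is a simplified stand-in for the procedure (Gaussian smoothing in place of the Diva analysis, a window taper in place of the coastal constraint), with grid size and spacing assumed.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(5)
        ny, nx, dx = 64, 64, 2.0e5     # grid and spacing in metres (illustrative)

        # random stream function, smoothed to impose a correlation length and
        # tapered to zero at the boundary so no flow crosses the "coast"
        psi = gaussian_filter(rng.normal(size=(ny, nx)), sigma=6)
        psi *= np.outer(np.hanning(ny), np.hanning(nx))

        # u = -dpsi/dy, v = dpsi/dx, so div(u, v) = 0 by construction
        dpsi_dy, dpsi_dx = np.gradient(psi, dx)
        u, v = -dpsi_dy, dpsi_dx

        div = np.gradient(u, dx, axis=1) + np.gradient(v, dx, axis=0)
        print("max |divergence|:", float(np.abs(div).max()))   # ~0 up to truncation error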

  14. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by...distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic...Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the

  15. Beam test results of the BTeV silicon pixel detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabriele Chiodini et al.

    2000-09-28

    The authors have described the results of the BTeV silicon pixel detector beam test. The pixel detectors under test used samples of the first two generations of Fermilab pixel readout chips, FPIX0 and FPIX1 (indium bump-bonded to ATLAS sensor prototypes). The spatial resolution achieved using analog charge information is excellent for a large range of track inclinations. The resolution is still very good using only 2-bit charge information. A relatively small dependence of the resolution on bias voltage is observed. The resolution is observed to depend dramatically on the discriminator threshold, and it deteriorates rapidly for thresholds above 4000 e⁻.

  16. MetaABC--an integrated metagenomics platform for data adjustment, binning and clustering.

    PubMed

    Su, Chien-Hao; Hsu, Ming-Tsung; Wang, Tse-Yi; Chiang, Sufeng; Cheng, Jen-Hao; Weng, Francis C; Kao, Cheng-Yan; Wang, Daryi; Tsai, Huai-Kuang

    2011-08-15

    MetaABC is a metagenomic platform that integrates several binning tools coupled with methods for removing artifacts, analyzing unassigned reads and controlling sampling biases. It allows users to arrive at a better interpretation via a series of distinct combinations of analysis tools. After execution, MetaABC provides outputs in various visual formats such as tables, pie and bar charts as well as clustering result diagrams. MetaABC source code and documentation are available at http://bits2.iis.sinica.edu.tw/MetaABC/. Contact: dywang@gate.sinica.edu.tw; hktsai@iis.sinica.edu.tw. Supplementary data are available at Bioinformatics online.

  17. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    PubMed

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive until the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure and, moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above-described selection bias under these additivity assumptions.

  18. Method and apparatus for in-situ characterization of energy storage and energy conversion devices

    DOEpatents

    Christophersen, Jon P [Idaho Falls, ID; Motloch, Chester G [Idaho Falls, ID; Morrison, John L [Butte, MT; Albrecht, Weston [Layton, UT

    2010-03-09

    Disclosed are methods and apparatuses for determining an impedance of an energy-output device using a random noise stimulus applied to the energy-output device. A random noise signal is generated and converted to a random noise stimulus as a current source correlated to the random noise signal. A bias-reduced response of the energy-output device to the random noise stimulus is generated by comparing a voltage at the energy-output device terminal to an average voltage signal. The random noise stimulus and bias-reduced response may be periodically sampled to generate a time-varying current stimulus and a time-varying voltage response, which may be correlated to generate an autocorrelated stimulus, an autocorrelated response, and a cross-correlated response. Finally, the autocorrelated stimulus, the autocorrelated response, and the cross-correlated response may be combined to determine at least one of impedance amplitude, impedance phase, and complex impedance.
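
    A sketch of the correlation-based impedance estimate on a synthetic cell (assumed R-C stand-in values; the patent's hardware details are not reproduced): drive a random current, remove the DC offset from the voltage, then form Z(f) as the ratio of cross- to auto-spectral density.

        import numpy as np
        from scipy.signal import csd, lfilter, welch

        rng = np.random.default_rng(9)
        fs, n = 1000.0, 200_000          # sample rate (Hz) and record length (assumed)
        dt = 1.0 / fs

        i_stim = rng.normal(size=n)      # random-noise current stimulus

        # stand-in cell: series resistance r0 plus an r1 || c1 branch (assumed values)
        r0, r1, c1 = 0.05, 0.02, 10.0
        a = 1.0 / (1.0 + dt / (r1 * c1))              # backward-Euler pole of the RC branch
        v = r0 * i_stim + lfilter([(dt / c1) * a], [1.0, -a], i_stim)
        v -= v.mean()                                 # bias-reduced response (remove offset)

        # impedance from auto- and cross-spectra: Z(f) = S_iv(f) / S_ii(f)
        f, s_ii = welch(i_stim, fs=fs, nperseg=4096)
        _, s_iv = csd(i_stim, v, fs=fs, nperseg=4096)
        z = s_iv / s_ii
        print(f"|Z| at {f[1]:.2f} Hz: {abs(z[1]):.3f} (compare r0+r1 = {r0 + r1:.3f})")
        print(f"|Z| at {f[-1]:.0f} Hz: {abs(z[-1]):.3f} (compare r0 = {r0:.2f})")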

  19. Enhancing Security of Double Random Phase Encoding Based on Random S-Box

    NASA Astrophysics Data System (ADS)

    Girija, R.; Singh, Hukum

    2018-06-01

    In this paper, we propose a novel asymmetric cryptosystem for double random phase encoding (DRPE) using a random S-Box. Because an S-Box used on its own is not reliable and DRPE by itself does not provide non-linearity, our system unites the effectiveness of an S-Box with an asymmetric DRPE system (through the Fourier transform). The uniqueness of the proposed cryptosystem lies in employing a highly sensitive dynamic S-Box in our DRPE system. The randomness and scalability achieved by the applied technique are additional features of the proposed solution. The strength of the random S-Box is investigated in terms of performance parameters such as non-linearity, the strict avalanche criterion, the bit independence criterion, linear and differential approximation probabilities, etc. S-Boxes convey non-linearity to cryptosystems, which is a significant parameter and essential for DRPE. The strength of the proposed cryptosystem has been analysed using various parameters such as MSE, PSNR, correlation coefficient analysis, noise analysis, SVD analysis, etc. Experimental results are presented in detail to show that the proposed cryptosystem is highly secure.
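
    Of the performance parameters listed, the strict avalanche criterion is easy to demonstrate on a random 8-bit S-Box; the following is an illustrative check, not the authors' test suite.

        import random

        rng = random.Random(42)
        sbox = list(range(256))
        rng.shuffle(sbox)            # a random bijective 8-bit S-Box

        # strict avalanche criterion: flipping one input bit should flip each
        # output bit with probability close to 1/2
        counts = [[0] * 8 for _ in range(8)]
        for x in range(256):
            for i in range(8):
                diff = sbox[x] ^ sbox[x ^ (1 << i)]   # output change for a 1-bit input flip
                for j in range(8):
                    counts[i][j] += (diff >> j) & 1

        rates = [c / 256 for row in counts for c in row]
        print(f"SAC flip rates: min {min(rates):.3f}, max {max(rates):.3f} (ideal 0.500)")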

  20. Fast physical-random number generation using laser diode's frequency noise: influence of frequency discriminator

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo

    2018-02-01

    Not so long ago, pseudo-random numbers generated by numerical formulae were considered adequate for encrypting important data files, because of the time needed to decode them. With today's ultra-high-speed processors, however, this is no longer true. So, in order to thwart ever more advanced attempts to breach a system's protections, cryptologists have devised a method that is considered virtually impossible to decode and that uses what is in effect a limitless supply of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photodetectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as the frequency discriminator; the examination pass rates were determined using the NIST FIPS 140-2 tests at each bit, and the random number generation (RNG) speed was noted.
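
    As a reference point, the FIPS 140-2 monobit test mentioned above reduces to a simple count over a 20,000-bit sample; this sketch substitutes an OS entropy source for the digitized laser-noise bits.

        import secrets

        def monobit_pass(bits):
            # FIPS 140-2 monobit test: over 20,000 bits, the number of ones
            # must satisfy 9725 < count < 10275
            assert len(bits) == 20_000
            return 9725 < sum(bits) < 10275

        # stand-in for bits digitized from laser frequency noise
        stream = [(byte >> i) & 1 for byte in secrets.token_bytes(2500) for i in range(8)]
        print("monobit test passed:", monobit_pass(stream))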

  1. Improvement of the software Bernese for SLR data processing in the Main Metrological Centre of the State Time and Frequency Service

    NASA Astrophysics Data System (ADS)

    Tsyba, E.; Kaufman, M.

    2015-08-01

    Preparatory work for resuming operational calculation of the Earth rotation parameters based on the results of satellite laser ranging data processing (LAGEOS 1, LAGEOS 2) was to be completed at the Main Metrological Centre of the State Time and Frequency Service (VNIIFTRI) in 2014. For this purpose, the BERNESE 5.2 software (Dach & Walser, 2014) was chosen as the base software; it has been used for many years at the Main Metrological Centre of the State Time and Frequency Service to process phase observations of GLONASS and GPS satellites. Although the announced description of the BERNESE 5.2 software declares the possibility of SLR data processing, this capability has not been fully implemented. In particular, there is no such essential element as a correction (as an input or resulting parameter) to the local time scale ("time bias"), etc. Therefore, additional program blocks have been developed and integrated into the BERNESE 5.2 software environment. The program blocks are written in the Perl and Matlab programming languages and can be used on both Windows and Linux, on 32-bit and 64-bit platforms.

  2. The 30-GHz monolithic receive module

    NASA Technical Reports Server (NTRS)

    Sokolov, V.; Geddes, J.; Bauhahn, P.

    1983-01-01

    Key requirements for a 30 GHz GaAs monolithic receive module for spaceborne communication antenna feed array applications include an overall receive-module noise figure of 5 dB, 30 dB of RF-to-IF gain with six levels of intermediate gain control, a five-bit phase shifter, and a maximum power consumption of 250 mW. The RF designs for each of the four submodules (low noise amplifier, gain control, phase shifter, and RF-to-IF submodule) are presented. Except for the phase shifter, high-frequency, low-noise FETs with sub-half-micron gate lengths are employed in the submodules. For the gain control, a two-stage dual-gate FET amplifier is used. The phase shifter is of the passive switched-line type and consists of 5 bits. It uses relatively large gate-width FETs (with zero drain-to-source bias) as the switching elements. A 20 GHz local oscillator buffer amplifier, a FET-compatible balanced mixer, and a 5-8 GHz IF amplifier constitute the RF/IF submodule. Phase shifter fabrication using ion implantation and a self-aligned gate technique is described. Preliminary RF results obtained on such phase shifters are included.

  3. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: An instrumental variables re-analysis of randomized clinical trials

    PubMed Central

    Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.

    2014-01-01

    Background Observational studies of Alcoholics Anonymous’ (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA’s impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504

  4. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: an instrumental variables re-analysis of randomized clinical trials.

    PubMed

    Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H

    2014-11-01

    Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
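
    The instrumental-variables logic of these two records can be sketched with simulated data and randomization as the instrument (all coefficients invented for illustration): naive regression absorbs the self-selection confounder, while two-stage least squares recovers the causal effect.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 2000
        z = rng.integers(0, 2, size=n)          # randomization: the instrument
        u = rng.normal(size=n)                  # unmeasured self-selection factor
        aa = 2.0 + 1.5 * z + u + rng.normal(size=n)          # AA attendance
        y = 10.0 + 0.4 * aa + 2.0 * u + rng.normal(size=n)   # days abstinent

        ols = np.linalg.lstsq(np.column_stack([np.ones(n), aa]), y, rcond=None)[0]

        # stage 1: predict attendance from the instrument alone
        Z = np.column_stack([np.ones(n), z])
        aa_hat = Z @ np.linalg.lstsq(Z, aa, rcond=None)[0]
        # stage 2: regress the outcome on the predicted attendance
        iv = np.linalg.lstsq(np.column_stack([np.ones(n), aa_hat]), y, rcond=None)[0]

        print(f"true effect 0.40, naive OLS {ols[1]:.2f} (biased), 2SLS {iv[1]:.2f}")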

  5. Considering the reversibility of passive and reactive transport problems: Are forward-in-time and backward-in-time models ever equivalent?

    NASA Astrophysics Data System (ADS)

    Engdahl, N.

    2017-12-01

    Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward-in-time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low-velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.

  6. Lifting the Curtain on the Wizard of Oz: Biased Voice-Based Impressions of Speaker Size

    ERIC Educational Resources Information Center

    Rendall, Drew; Vokey, John R.; Nemeth, Christie

    2007-01-01

    The consistent, but often wrong, impressions people form of the size of unseen speakers are not random but rather point to a consistent misattribution bias, one that the advertising, broadcasting, and entertainment industries also routinely exploit. The authors report 3 experiments examining the perceptual basis of this bias. The results indicate…

  7. Causal Inference and Omitted Variable Bias in Financial Aid Research: Assessing Solutions

    ERIC Educational Resources Information Center

    Riegg, Stephanie K.

    2008-01-01

    This article highlights the problem of omitted variable bias in research on the causal effect of financial aid on college-going. I first describe the problem of self-selection and the resulting bias from omitted variables. I then assess and explore the strengths and weaknesses of random assignment, multivariate regression, proxy variables, fixed…

  8. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple-cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
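
    For readers unfamiliar with the cluster update at the heart of this study, here is a minimal Wolff step for the simple-cubic Ising model (a toy lattice and plain Python random numbers, not the paper's generators or system sizes).

        import math
        import random

        rng = random.Random(0)
        L, K = 8, 0.2216546                  # lattice size and coupling K = J/kT
        N = L ** 3
        spins = [1] * N
        p_add = 1.0 - math.exp(-2.0 * K)     # Wolff bond-activation probability

        def neighbors(i):
            x, y, z = i % L, (i // L) % L, i // (L * L)
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                yield ((x+dx) % L) + ((y+dy) % L) * L + ((z+dz) % L) * L * L

        def wolff_step():
            seed = rng.randrange(N)
            s, stack, cluster = spins[seed], [seed], {seed}
            while stack:                     # grow the cluster of aligned spins
                for j in neighbors(stack.pop()):
                    if j not in cluster and spins[j] == s and rng.random() < p_add:
                        cluster.add(j)
                        stack.append(j)
            for i in cluster:                # flip the whole cluster at once
                spins[i] = -s

        for _ in range(200):
            wolff_step()
        print("magnetization per site:", sum(spins) / N)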

  9. Simultaneous classical communication and quantum key distribution using continuous variables*

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2016-10-01

    Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10⁻⁹ and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  10. A fast key generation method based on dynamic biometrics to secure wireless body sensor networks for p-health.

    PubMed

    Zhang, G H; Poon, Carmen C Y; Zhang, Y T

    2010-01-01

    Body sensor networks (BSNs) have emerged as a new technology for healthcare applications, but the security of communication in BSNs remains a formidable challenge yet to be resolved. The paper discusses the typical attacks faced by BSNs and proposes a fast biometric-based approach to generate keys for ensuring confidentiality and authentication in BSN communications. The approach was tested on 900 segments of electrocardiogram. Each segment was 4 seconds long and was used to generate a 128-bit key. The results of the study found that the entropy of 96% of the keys was above 0.95 and that 99% of the Hamming distances calculated between any two keys were above 50 bits. Based on the randomness and distinctiveness of these keys, it is concluded that the fast biometric-based approach has great potential to be used to secure communication in BSNs for health applications.

  11. Performance of correlation receivers in the presence of impulse noise.

    NASA Technical Reports Server (NTRS)

    Moore, J. D.; Houts, R. C.

    1972-01-01

    An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the cross-correlation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that orthogonal signaling is not equivalent to on-off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.

  12. An Overview of Randomization and Minimization Programs for Randomized Clinical Trials

    PubMed Central

    Saghaei, Mahmoud

    2011-01-01

    Randomization is an essential component of sound clinical trials, which prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A potential drawback of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which can invalidate the trial results or necessitate complex and usually unreliable secondary analyses to eradicate the source of the imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is therefore completely deterministic; that is, one can predict the allocation of the next subject by knowing the factor levels of previously enrolled subjects and the characteristics of the next subject. To eliminate this predictability, it is necessary to include some element of randomness in the minimization algorithm, as sketched below. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
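
    To make the contrast concrete, here is a biased-coin minimization sketch (the factor set, tallies, and the 0.8 probability are illustrative assumptions): each new subject is steered toward the arm that reduces marginal imbalance, but only with high probability, preserving some unpredictability.

        import random

        rng = random.Random(2024)
        factors = ("sex", "age_band", "site")
        counts = {arm: {f: {} for f in factors} for arm in ("A", "B")}

        def imbalance_if(arm, subject):
            # marginal imbalance across factor levels if `subject` joined `arm`
            other = "B" if arm == "A" else "A"
            return sum(counts[arm][f].get(subject[f], 0) + 1
                       - counts[other][f].get(subject[f], 0) for f in factors)

        def assign(subject):
            best = min(("A", "B"), key=lambda a: imbalance_if(a, subject))
            # the random element: follow the minimizing arm only 80% of the time
            arm = best if rng.random() < 0.8 else ("B" if best == "A" else "A")
            for f in factors:
                counts[arm][f][subject[f]] = counts[arm][f].get(subject[f], 0) + 1
            return arm

        subjects = [{"sex": rng.choice("MF"), "age_band": rng.choice("123"),
                     "site": rng.choice("XY")} for _ in range(40)]
        arms = [assign(s) for s in subjects]
        print("arm sizes:", arms.count("A"), arms.count("B"))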

  13. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the uncoded bit error rate and the ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that an increase in the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity offers better resistance to the channel fading caused by spatial correlation.
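
    Wilkinson's approximation amounts to matching the first two moments of the sum to a single log-normal; a self-contained sketch with assumed parameters (not the paper's channel model) is given below, checked against Monte Carlo.

        import numpy as np

        mu = np.zeros(3)                       # log-means of three sub-channel gains
        sd = np.full(3, 0.5)
        Sigma = 0.3 * np.outer(sd, sd)         # equicorrelated log-covariances (rho = 0.3)
        np.fill_diagonal(Sigma, sd ** 2)

        # exact first two moments of S = sum_k exp(Y_k), Y ~ N(mu, Sigma)
        m1 = np.exp(mu + np.diag(Sigma) / 2).sum()
        pair = (mu[:, None] + mu[None, :]
                + (np.diag(Sigma)[:, None] + np.diag(Sigma)[None, :]) / 2 + Sigma)
        m2 = np.exp(pair).sum()

        # match S to a single log-normal exp(Z), Z ~ N(mu_z, s2_z)
        s2_z = np.log(m2 / m1 ** 2)
        mu_z = np.log(m1) - s2_z / 2

        rng = np.random.default_rng(6)
        s = np.exp(rng.multivariate_normal(mu, Sigma, size=200_000)).sum(axis=1)
        print(f"matched mean {np.exp(mu_z + s2_z / 2):.3f} vs Monte Carlo {s.mean():.3f}")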

  14. Fault-Tolerant Sequencer Using FPGA-Based Logic Designs for Space Applications

    DTIC Science & Technology

    2013-12-01

    Prototype Board SBU single bit upset SDK software development kit SDRAM synchronous dynamic random-access memory SEB single-event burnout ...current VHDL VHSIC hardware description language VHSIC very-high-speed integrated circuits VLSI very-large- scale integration VQFP very...transient pulse, called a single-event transient (SET), or even cause permanent damage to the device in the form of a burnout or gate rupture. The SEE

  15. Practical Applicability of Exact and Approximate Forms of the Randomization Test for Two Independent Samples.

    DTIC Science & Technology

    1987-09-01

    long been recognized as powerful nonparametric statistical methods since the introduction of the principal ideas by R.A. Fisher in 1935.

  16. Locating Encrypted Data Hidden Among Non-Encrypted Data Using Statistical Tools

    DTIC Science & Technology

    2007-03-01

    length of a compressed sequence). If a bit sequence can be significantly compressed, then it is not random. Lempel-Ziv Compression Test: this test...communication, targeting, and a host of other tasks. This software will most assuredly contain classified data or algorithms requiring protection in...containing the classified data and algorithms. As the program is executed the soldier would have access to the common unclassified tasks; however, to

  17. Averaging of phase noise in PSK signals by an opto-electrical feed-forward circuit

    NASA Astrophysics Data System (ADS)

    Inoue, K.; Ohta, M.

    2013-10-01

    This paper proposes an opto-electrical feed-forward circuit that reduces phase noise in binary PSK signals by averaging the noise. Random and independent phase noise is averaged over several bit slots by externally modulating the phase-fluctuating PSK signal with a feed-forward signal obtained from signal processing of the outputs of delay interferometers. The simulation results demonstrate a reduction in the phase noise.

  18. Applications of Probabilistic Combiners on Linear Feedback Shift Register Sequences

    DTIC Science & Technology

    2016-12-01

    on the resulting output strings show a drastic increase in complexity, while simultaneously passing the stringent randomness tests required by the...a three-variable function. Our tests on the resulting output strings show a drastic increase in complexity, while simultaneously passing the...10001101 01000010 11101001 Decryption of a message that has been encrypted using bitwise XOR is quite simple. Since each bit is its own additive inverse
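
    The XOR property quoted in this record is easy to verify in a few lines (a generic sketch; the keystream here merely stands in for the combiner output discussed in the report).

        import secrets

        def xor_bytes(data: bytes, keystream: bytes) -> bytes:
            # each bit is its own additive inverse mod 2, so applying the same
            # keystream twice returns the original message
            return bytes(d ^ k for d, k in zip(data, keystream))

        message = b"ATTACK AT DAWN"
        keystream = secrets.token_bytes(len(message))
        ciphertext = xor_bytes(message, keystream)
        assert xor_bytes(ciphertext, keystream) == message
        print(ciphertext.hex())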

  19. Studies Of Single-Event-Upset Models

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.

    1988-01-01

    Report presents latest in series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, SEU response of low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) observed during irradiation by Br and O ions in ranges of 100 to 240 and 20 to 100 MeV, respectively. Experimental data complete verification of computer model used to simulate SEU in this circuit.

  20. A Cautious Note on Auxiliary Variables That Can Increase Bias in Missing Data Problems.

    PubMed

    Thoemmes, Felix; Rose, Norman

    2014-01-01

    The treatment of missing data in the social sciences has changed tremendously during the last decade. Modern missing data techniques such as multiple imputation and full-information maximum likelihood are used much more frequently. These methods assume that data are missing at random. One very common approach to increase the likelihood that missing at random is achieved consists of including many covariates as so-called auxiliary variables. These variables are either included based on data considerations or in an inclusive fashion; that is, taking all available auxiliary variables. In this article, we point out that there are some instances in which auxiliary variables exhibit the surprising property of increasing bias in missing data problems. In a series of focused simulation studies, we highlight some situations in which this type of biasing behavior can occur. We briefly discuss possible ways how one can avoid selecting bias-inducing covariates as auxiliary variables.

  1. A surface-bound molecule that undergoes optically biased Brownian rotation.

    PubMed

    Hutchison, James A; Uji-i, Hiroshi; Deres, Ania; Vosch, Tom; Rocha, Susana; Müller, Sibylle; Bastian, Andreas A; Enderlein, Jörg; Nourouzi, Hassan; Li, Chen; Herrmann, Andreas; Müllen, Klaus; De Schryver, Frans; Hofkens, Johan

    2014-02-01

    Developing molecular systems with functions analogous to those of macroscopic machine components, such as rotors, gyroscopes and valves, is a long-standing goal of nanotechnology. However, macroscopic analogies go only so far in predicting function in nanoscale environments, where friction dominates over inertia. In some instances, ratchet mechanisms have been used to bias the ever-present random, thermally driven (Brownian) motion and drive molecular diffusion in desired directions. Here, we visualize the motions of surface-bound molecular rotors using defocused fluorescence imaging, and observe the transition from hindered to free Brownian rotation by tuning medium viscosity. We show that the otherwise random rotations can be biased by the polarization of the excitation light field, even though the associated optical torque is insufficient to overcome thermal fluctuations. The biased rotation is attributed instead to a fluctuating-friction mechanism in which photoexcitation of the rotor strongly inhibits its diffusion rate.

  2. Explicit Bias Toward High-Income-Country Research: A Randomized, Blinded, Crossover Experiment Of English Clinicians.

    PubMed

    Harris, Matthew; Marti, Joachim; Watt, Hillary; Bhatti, Yasser; Macinko, James; Darzi, Ara W

    2017-11-01

    Unconscious bias may interfere with the interpretation of research from some settings, particularly from lower-income countries. Most studies of this phenomenon have relied on indirect outcomes such as article citation counts and publication rates; few have addressed or proven the effect of unconscious bias in evidence interpretation. In this randomized, blinded crossover experiment in a sample of 347 English clinicians, we demonstrate that changing the source of a research abstract from a low- to a high-income country significantly improves how it is viewed, all else being equal. Using fixed-effects models, we measured differences in ratings for strength of evidence, relevance, and likelihood of referral to a peer. Having a high-income-country source had a significant overall impact on respondents' ratings of relevance and recommendation to a peer. Unconscious bias can have far-reaching implications for the diffusion of knowledge and innovations from low-income countries.

  3. Online attentional bias modification training targeting anxiety and depression in unselected adolescents: Short- and long-term effects of a randomized controlled trial.

    PubMed

    de Voogd, E L; Wiers, R W; Prins, P J M; de Jong, P J; Boendermaker, W J; Zwitser, R J; Salemink, E

    2016-12-01

    Based on information processing models of anxiety and depression, we investigated the efficacy of multiple sessions of online attentional bias modification training to reduce attentional bias and symptoms of anxiety and depression, and to increase emotional resilience in youth. Unselected adolescents (N = 340, age: 11-18 years) were randomly allocated to eight sessions of a dot-probe or a visual search-based attentional training, or to one of two corresponding placebo control conditions. Cognitive and emotional measures were assessed pre- and post-training; emotional outcome measures were also assessed at three, six and twelve months follow-up. Only the visual search training enhanced attention for positive information, and this effect was stronger for participants who completed more training sessions. Symptoms of anxiety and depression reduced, whereas emotional resilience improved. However, these effects were not especially pronounced in the active conditions. Thus, this large-scale randomized controlled study provided no support for the efficacy of the current online attentional bias modification training as a preventive intervention to reduce symptoms of anxiety or depression or to increase emotional resilience in unselected adolescents. However, the absence of biased attention related to symptomatology at baseline, and the large drop-out rates at follow-up, preclude strong conclusions. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using the generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally symmetric state of an n-qubit system, the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  5. Evolutionary model with genetics, aging, and knowledge

    NASA Astrophysics Data System (ADS)

    Bustillos, Armando Ticona; de Oliveira, Paulo Murilo

    2004-02-01

    We represent a process of learning by using bit strings, where 1 bits represent the knowledge acquired by individuals. Two ways of learning are considered: individual learning by trial and error, and social learning by copying knowledge from other individuals or from parents in the case of species with parental care. The age-structured bit string allows us to study how knowledge is accumulated during life and its influence over the genetic pool of a population after many generations. We use the Penna model to represent the genetic inheritance of each individual. In order to study how the accumulated knowledge influences the survival process, we include it to help individuals to avoid the various death situations. Modifications in the Verhulst factor do not show any special feature due to its random nature. However, by adding years to life as a function of the accumulated knowledge, we observe an improvement of the survival rates while the genetic fitness of the population becomes worse. In this latter case, knowledge becomes more important in the last years of life where individuals are threatened by diseases. Effects of offspring overprotection and differences between individual and social learning can also be observed. Sexual selection as a function of knowledge shows some effects when fidelity is imposed.

  6. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, images denoised by conventional methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
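
    For concreteness, the classic second-moment correction for Rician bias is sketched below; this is the conventional formula that papers in this area improve upon, not the authors' new correction:

```python
import numpy as np

def rician_bias_correction(m, sigma):
    """Second-moment Rician correction: for magnitude M of true intensity A
    plus complex Gaussian noise, E[M^2] = A^2 + 2*sigma^2, so the true
    intensity is estimated as sqrt(max(M^2 - 2*sigma^2, 0))."""
    return np.sqrt(np.maximum(m**2 - 2.0 * sigma**2, 0.0))

print(rician_bias_correction(np.array([3.0, 1.0]), sigma=1.0))  # -> [~2.65, 0.0]
```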

  7. Science faculty's subtle gender biases favor male students.

    PubMed

    Moss-Racusin, Corinne A; Dovidio, John F; Brescoll, Victoria L; Graham, Mark J; Handelsman, Jo

    2012-10-09

    Despite efforts to recruit and retain more women, a stark gender disparity persists within academic science. Abundant research has demonstrated gender bias in many demographic groups, but has yet to experimentally investigate whether science faculty exhibit a bias against female students that could contribute to the gender disparity in academic science. In a randomized double-blind study (n = 127), science faculty from research-intensive universities rated the application materials of a student-who was randomly assigned either a male or female name-for a laboratory manager position. Faculty participants rated the male applicant as significantly more competent and hireable than the (identical) female applicant. These participants also selected a higher starting salary and offered more career mentoring to the male applicant. The gender of the faculty participants did not affect responses, such that female and male faculty were equally likely to exhibit bias against the female student. Mediation analyses indicated that the female student was less likely to be hired because she was viewed as less competent. We also assessed faculty participants' preexisting subtle bias against women using a standard instrument and found that preexisting subtle bias against women played a moderating role, such that subtle bias against women was associated with less support for the female student, but was unrelated to reactions to the male student. These results suggest that interventions addressing faculty gender bias might advance the goal of increasing the participation of women in science.

  8. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    NASA Astrophysics Data System (ADS)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed, but these solutions allow only partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7, to values of ≲0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.

  9. The effect of framing on surrogate optimism bias: A simulation study.

    PubMed

    Patel, Dev; Cohen, Elan D; Barnato, Amber E

    2016-04-01

    To explore the effect of emotion priming and physician communication behaviors on optimism bias, we conducted a 5 × 2 between-subject randomized factorial experiment using a Web-based interactive video designed to simulate a family meeting for a critically ill spouse/parent. Eligibility criteria included age of at least 35 years and self-identifying as the surrogate for a spouse/parent. The primary outcome was the surrogate's election of code status. We defined optimism bias as the surrogate's estimate of prognosis with cardiopulmonary resuscitation (CPR) exceeding their recollection of the physician's estimate. Of 373 respondents, 256 (69%) logged in and were randomized, and 220 (86%) had nonmissing data for prognosis. Sixty-seven (30%) of 220 overall, and 56 (32%) of 173 with an accurate recollection of the physician's estimate, had optimism bias. Optimism bias correlated with choosing CPR (P < .001). Emotion priming (P = .397), physician attention to emotion (P = .537), and framing of CPR as the social norm (P = .884) did not affect optimism bias. Framing the decision as the patient's vs the surrogate's (25% vs 36%, P = .066) and describing the alternative to CPR as "allow natural death" instead of "do not resuscitate" (25% vs 37%, P = .035) decreased optimism bias. Framing of the CPR choice during code status conversations may influence surrogates' optimism bias. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large-integer arithmetic with a modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication - the core of the procedure - which was previously believed to be slow and too costly in terms of computing resources. Our tests show a significant gain in generation speed, comparable with that of other fast, high-quality random number generators. An additional feature is the fast skipping of generator states, leading to a seeding scheme which guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming language: C++, C, Assembler.
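
    For reference, a minimal sketch of the classic Marsaglia-Zaman subtract-with-borrow recurrence underlying RANLUX (base 2^24, lags 24 and 10); the paper's contribution is the equivalent 576-bit linear congruential form, which is not shown here:

```python
class SubtractWithBorrow:
    """Subtract-with-borrow base generator of RANLUX:
    x_n = x_{n-10} - x_{n-24} - carry  (mod 2**24)."""
    R, S, BASE = 24, 10, 1 << 24

    def __init__(self, seed_words):
        assert len(seed_words) == self.R
        assert all(0 <= w < self.BASE for w in seed_words)
        self.x = list(seed_words)   # circular buffer of 24-bit words
        self.carry = 0
        self.i = 0

    def next24(self):
        # lagged terms read from the circular buffer
        y = (self.x[(self.i - self.S) % self.R]
             - self.x[(self.i - self.R) % self.R]
             - self.carry)
        self.carry = 1 if y < 0 else 0   # borrow propagates to the next step
        y %= self.BASE
        self.x[self.i % self.R] = y
        self.i += 1
        return y

g = SubtractWithBorrow(list(range(1, 25)))
print([g.next24() for _ in range(3)])
```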

  11. Evaluation of the Cochrane tool for assessing risk of bias in randomized clinical trials: overview of published comments and analysis of user practice in Cochrane and non-Cochrane reviews.

    PubMed

    Jørgensen, Lars; Paludan-Müller, Asger S; Laursen, David R T; Savović, Jelena; Boutron, Isabelle; Sterne, Jonathan A C; Higgins, Julian P T; Hróbjartsson, Asbjørn

    2016-05-10

    The Cochrane risk of bias tool for randomized clinical trials was introduced in 2008 and has frequently been commented on and used in systematic reviews. We wanted to evaluate the tool by reviewing published comments on its strengths and challenges and by describing and analysing how the tool is applied to both Cochrane and non-Cochrane systematic reviews. A review of published comments (searches in PubMed, The Cochrane Methodology Register and Google Scholar) and an observational study (100 Cochrane and 100 non-Cochrane reviews from 2014). Our review included 68 comments, 15 of which were categorised as major. The main strengths of the tool were considered to be its aim (to assess trial conduct and not reporting), its developmental basis (wide consultation, empirical and theoretical evidence) and its transparent procedures. The challenges of the tool were mainly considered to be its choice of core bias domains (e.g. not involving funding/conflicts of interest) and issues to do with implementation (i.e. modest inter-rater agreement) and terminology. Our observational study found that the tool was used in all Cochrane reviews (100/100) and was the preferred tool in non-Cochrane reviews (31/100). Both types of reviews frequently implemented the tool in non-recommended ways. Most Cochrane reviews planned to use risk of bias assessments as basis for sensitivity analyses (70 %), but only a minority conducted such analyses (19 %) because, in many cases, few trials were assessed as having "low" risk of bias for all standard domains (6 %). The judgement of at least one risk of bias domain as "unclear" was found in 89 % of included randomized clinical trials (1103/1242). The Cochrane tool has become the standard approach to assess risk of bias in randomized clinical trials but is frequently implemented in a non-recommended way. Based on published comments and how it is applied in practice in systematic reviews, the tool may be further improved by a revised structure and more focused guidance.

  12. The Risk of Bias in Randomized Trials in General Dentistry Journals.

    PubMed

    Hinton, Stephanie; Beyari, Mohammed M; Madden, Kim; Lamfon, Hanadi A

    2015-01-01

    The use of a randomized controlled trial (RCT) research design is considered the gold standard for conducting evidence-based clinical research. In the present study, we aimed to assess the quality of RCTs in dentistry and create a general foundation for evidence-based dentistry on which to perform subsequent RCTs. We conducted a systematic assessment of bias of RCTs in seven general dentistry journals published between January 2011 and March 2012. We extracted study characteristics in duplicate and assessed each trial's quality using the Cochrane Risk of Bias tool. We compared risk of bias across studies graphically. Among 1,755 studies across seven journals, we identified 67 RCTs. Many included studies were conducted in Europe (39%), with an average sample size of 358 participants. These studies included 52% female participants, and the maximum follow-up period was 13 years. Overall, we found a high percentage of unclear risk of bias among included RCTs, indicating poor quality of reporting within the included studies. An overall high proportion of trials with an "unclear risk of bias" suggests the need for better quality of reporting in dentistry. As such, key concepts in dental research and future trials should focus on high-quality reporting.

  13. Instruments for Assessing Risk of Bias and Other Methodological Criteria of Published Animal Studies: A Systematic Review

    PubMed Central

    Krauth, David; Woodruff, Tracey J.

    2013-01-01

    Background: Results from animal toxicology studies are critical to evaluating the potential harm from exposure to environmental chemicals or the safety of drugs prior to human testing. However, there is significant debate about how to evaluate the methodology and potential biases of the animal studies. There is no agreed-upon approach, and a systematic evaluation of current best practices is lacking. Objective: We performed a systematic review to identify and evaluate instruments for assessing the risk of bias and/or other methodological criteria of animal studies. Method: We searched Medline (January 1966–November 2011) to identify all relevant articles. We extracted data on risk of bias criteria (e.g., randomization, blinding, allocation concealment) and other study design features included in each assessment instrument. Discussion: Thirty distinct instruments were identified, with the total number of assessed risk of bias, methodological, and/or reporting criteria ranging from 2 to 25. The most common criteria assessed were randomization (25/30, 83%), investigator blinding (23/30, 77%), and sample size calculation (18/30, 60%). In general, authors failed to empirically justify why these or other criteria were included. Nearly all (28/30, 93%) of the instruments have not been rigorously tested for validity or reliability. Conclusion: Our review highlights a number of risk of bias assessment criteria that have been empirically tested for animal research, including randomization, concealment of allocation, blinding, and accounting for all animals. In addition, there is a need for empirically testing additional methodological criteria and assessing the validity and reliability of a standard risk of bias assessment instrument. Citation: Krauth D, Woodruff TJ, Bero L. 2013. Instruments for assessing risk of bias and other methodological criteria of published animal studies: a systematic review. Environ Health Perspect 121:985–992 (2013); http://dx.doi.org/10.1289/ehp.1206389 PMID:23771496

  14. Evaluation of bias and logistics in a survey of adults at increased risk for oral health decrements.

    PubMed

    Gilbert, G H; Duncan, R P; Kulley, A M; Coward, R T; Heft, M W

    1997-01-01

    Designing research to include sufficient respondents in groups at highest risk for oral health decrements can present unique challenges. Our purpose was to evaluate bias and logistics in this survey of adults at increased risk for oral health decrements. We used a telephone survey methodology that employed both listed numbers and random digit dialing to identify dentate persons 45 years old or older and to oversample blacks, poor persons, and residents of nonmetropolitan counties. At a second stage, a subsample of the respondents to the initial telephone screening was selected for further study, which consisted of a baseline in-person interview and a clinical examination. We assessed bias due to: (1) limiting the sample to households with telephones, (2) using predominantly listed numbers instead of random digit dialing, and (3) nonresponse at two stages of data collection. While this approach apparently created some biases in the sample, they were small in magnitude. Specifically, limiting the sample to households with telephones biased the sample overall toward more females, larger households, and fewer functionally impaired persons. Using predominantly listed numbers led to a modest bias toward selection of persons more likely to be younger, healthier, female, have had a recent dental visit, and reside in smaller households. Blacks who were selected randomly at a second stage were more likely to participate in baseline data gathering than their white counterparts. Comparisons of the data obtained in this survey with those from recent national surveys suggest that this methodology for sampling high-risk groups did not substantively bias the sample with respect to two important dental parameters, prevalence of edentulousness and dental care use, nor were conclusions about multivariate associations with dental care recency substantively affected. This method of sampling persons at high risk for oral health decrements resulted in only modest bias with respect to the population of interest.

  15. Estimating bias in causes of death ascertainment in the Finnish Randomized Study of Screening for Prostate Cancer.

    PubMed

    Kilpeläinen, Tuomas P; Mäkinen, Tuukka; Karhunen, Pekka J; Aro, Jussi; Lahtela, Jorma; Taari, Kimmo; Talala, Kirsi; Tammela, Teuvo L J; Auvinen, Anssi

    2016-12-01

    Precise cause of death (CoD) ascertainment is crucial in any cancer screening trial to avoid bias from misclassification due to excessive recording of diagnosed cancer as a CoD in death certificates instead of non-cancer disease that actually caused death. We estimated whether there was bias in CoD determination between screening (SA) and control arms (CA) in a population-based prostate cancer (PCa) screening trial. Our trial is the largest component of the European Randomized Study of Screening for Prostate Cancer with more than 80,000 men. Randomly selected deaths in men with PCa (N=442/2568 cases, 17.2%) were reviewed by an independent CoD committee. Median follow-up was 16.8 years in both arms. Overdiagnosis of PCa was present in the SA as the risk ratio for PCa incidence was 1.19 (95% confidence interval (CI) 1.14-1.24). The hazard ratio (HR) for PCa mortality was 0.94 (95%CI 0.82-1.08) in favor of the SA. Agreement with official CoD registry was 94.6% (κ=0.88) in the SA and 95.4% (κ=0.91) in the CA. Altogether 14 PCa deaths were estimated as false-positive in both arms and exclusion of these resulted in HR 0.92 (95% CI 0.80-1.06). A small differential misclassification bias in ascertainment of CoD was present, most likely due to attribution bias (overdiagnosis in the SA). Maximum precision in CoD ascertainment can only be achieved with independent review of all deaths in the diseased population. However, this is cumbersome and expensive and may provide little benefit compared to random sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Exploring activity-driven network with biased walks

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Wu, Ding Juan; Lv, Fang; Su, Meng Long

    We investigate the concurrent dynamics of biased random walks and the activity-driven network, where the preferential transition probability depends on an edge-weighting parameter. We obtain analytical expressions for the stationary distribution and the coverage function in both directed and undirected networks, all of which depend on this weight parameter. Because network weights play a significant role in the diffusion process, appropriately adjusting this parameter yields a more effective search strategy than the unbiased random walk, in directed and undirected networks alike.
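
    A minimal sketch of one step of such a weight-biased walk, assuming the common parameterization P(i → j) ∝ w_ij^α; the exponent alpha stands in for the paper's edge-weighting parameter, whose exact form the abstract does not give:

```python
import random

def biased_step(neighbors, weights, alpha, rng=random.Random(0)):
    """Choose the next node with probability proportional to w_ij ** alpha.
    alpha = 0 recovers the unbiased random walk."""
    probs = [w ** alpha for w in weights]
    return rng.choices(neighbors, weights=probs, k=1)[0]

# e.g. from a node with three neighbors and edge weights 1, 2, 4:
print(biased_step(["a", "b", "c"], [1, 2, 4], alpha=1.0))
```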

  17. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
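
    A toy sketch of the inverse-probability resampling idea behind the proposed corrections; the function name and API here are hypothetical, and the authors' actual implementations ship in the R package sambia:

```python
import numpy as np

def ip_oversample(X, y, p_select, rng=np.random.default_rng(0)):
    """Draw a bootstrap sample with weights 1/p_select so the resample
    mimics the unstratified target population (stochastic
    inverse-probability oversampling, in toy form)."""
    w = 1.0 / np.asarray(p_select, dtype=float)
    idx = rng.choice(len(y), size=len(y), replace=True, p=w / w.sum())
    return X[idx], y[idx]

# e.g. cases selected with probability 1.0, controls with probability 0.1:
X = np.arange(10).reshape(5, 2)
y = np.array([1, 1, 0, 0, 0])
p = np.where(y == 1, 1.0, 0.1)
Xr, yr = ip_oversample(X, y, p)
print(yr)   # controls dominate the resample, as they do in the population
```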

  18. Seven Deadly Sins in Trauma Outcomes Research: An Epidemiologic Post-Mortem for Major Causes of Bias

    PubMed Central

    del Junco, Deborah J.; Fox, Erin E.; Camp, Elizabeth A.; Rahbar, Mohammad H.; Holcomb, John B.

    2013-01-01

    Background Because randomized clinical trials (RCTs) in trauma outcomes research are expensive and complex, they have rarely been the basis for the clinical care of trauma patients. Most published findings are derived from retrospective and occasionally prospective observational studies that may be particularly susceptible to bias. The sources of bias include some common to other clinical domains, such as heterogeneous patient populations with competing and interdependent short- and long-term outcomes. Other sources of bias are unique to trauma, such as rapidly changing multi-system responses to injury that necessitate highly dynamic treatment regimes like blood product transfusion. The standard research design and analysis strategies applied in published observational studies are often inadequate to address these biases. Methods Drawing on recent experience in the design, data collection, monitoring and analysis of the 10-site observational PROMMTT study, seven common and sometimes overlapping biases are described through examples and resolution strategies. Results Sources of bias in trauma research include ignoring 1) variation in patients’ indications for treatment (indication bias), 2) the dependency of intervention delivery on patient survival (survival bias), 3) time-varying treatment, 4) time-dependent confounding, 5) non-uniform intervention effects over time, 6) non-random missing data mechanisms, and 7) imperfectly defined variables. This list is not exhaustive. Conclusion The mitigation strategies to overcome these threats to validity require epidemiologic and statistical vigilance. Minimizing the highlighted types of bias in trauma research will facilitate clinical translation of more accurate and reproducible findings and improve the evidence-base that clinicians apply in their care of injured patients. PMID:23778519

  19. Effect of PDC bit design and confining pressure on bit-balling tendencies while drilling shale using water base mud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hariharan, P.R.; Azar, J.J.

    1996-09-01

    A good majority of all oilwell drilling occurs in shale and other clay-bearing rocks. In light of the relatively few studies conducted, the problem of bit-balling in PDC bits while drilling shale has been addressed with the primary intention of attempting to quantify the degree of balling, as well as to investigate the influence of bit design and confining pressures. A series of full-scale laboratory drilling tests under simulated downhole conditions were conducted utilizing seven different PDC bits in Catoosa shale. Test results have indicated that the non-dimensional parameter R_d [(bit torque)·(weight-on-bit)/(bit diameter)] is a good indicator of the degree of bit-balling and that it correlated well with specific energy. Furthermore, test results have shown bit profile and bit-hydraulic design to be key parameters of bit design that dictate the tendency of balling in shales under a given set of operating conditions. A bladed bit was noticed to ball less compared to a ribbed or open-faced bit. Likewise, related to bit profile, test results have indicated that the parabolic profile has a lesser tendency to ball compared to round and flat profiles. The tendency of PDC bits to ball was noticed to increase with increasing confining pressures for the set of drilling conditions used.

  20. Proper nozzle location, bit profile, and cutter arrangement affect PDC-bit performance significantly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Gavito, D.; Azar, J.J.

    1994-09-01

    During the past 20 years, the drilling industry has looked to new technology to halt the exponentially increasing costs of drilling oil, gas, and geothermal wells. This technology includes bit design innovations to improve overall drilling performance and reduce drilling costs, among them the development of drag bits that use PDC cutters, also called PDC bits, to drill long, continuous intervals of soft to medium-hard formations more economically than conventional three-cone roller-cone bits. The cost advantage is the result of higher rates of penetration (ROPs) and longer bit life obtained with the PDC bits. An experimental study comparing the effects of polycrystalline-diamond-compact (PDC)-bit design features on the dynamic pressure distribution at the bit/rock interface was conducted on a full-scale drilling rig. Results showed that nozzle location, bit profile, and cutter arrangement are significant factors in PDC-bit performance.

  1. Random Assignment: Practical Considerations from Field Experiments.

    ERIC Educational Resources Information Center

    Dunford, Franklyn W.

    1990-01-01

    Seven qualitative issues associated with randomization that have the potential to weaken or destroy otherwise sound experimental designs are reviewed and illustrated via actual field experiments. Issue areas include ethics and legality, liability risks, manipulation of randomized outcomes, hidden bias, design intrusiveness, case flow, and…

  2. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
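
    A sketch of how a high-resolution gray level decomposes into coarse and fine 8-bit channel values under a two-channel weighting scheme like the VideoSwitcher's; the attenuation weight of 128 is illustrative, since the real ratio is fixed by the hardware resistor network:

```python
def split_gray_level(level, weight=128):
    """Split a high-resolution gray level into an 8-bit coarse value (blue,
    full weight) and an 8-bit fine value (red, attenuated by 1/weight), so
    the displayed luminance is proportional to weight*blue + red."""
    assert 0 <= level < 256 * weight   # ~15 bits of gray for weight=128
    blue, red = divmod(level, weight)  # coarse step, then fine remainder
    return blue, red

print(split_gray_level(10000))  # -> (78, 16): 78*128 + 16 == 10000
```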

  3. Acquisition and Retaining Granular Samples via a Rotating Coring Bit

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart

    2013-01-01

    This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated and a granular sample enters the bit while it is spinning, making the sample adhere to the internal wall of the bit, where it compacts itself. The bit can be specially designed to increase the effectiveness of regolith capture while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during rotation. The bit can be designed with an internal flute that directs the regolith upward inside the bit; the teeth and the flute can be implemented in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth: when turning one way, the teeth guide the regolith outward of the bit, and when turning in the opposite direction, the teeth guide the regolith inward into the bit's internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining a granular sample, and acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into soil contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit; even with the drill turned horizontally, the acquired soil remained inside the bit. The basic theory behind retaining unconsolidated mass by the centrifugal forces of the bit follows from noting that, for the sample to stay inside the bit, the frictional force must be greater than the weight of the sample. The bit can be designed with an internal sleeve to serve as a container for granular samples. This tube-shaped component can be extracted upon completion of the sampling, and the bottom can be capped by placing the bit onto a cork-like component; upon removal of the internal tube, the top section can be sealed. The novel features of this device are:
    • A mechanism for acquiring and retaining granular samples using a coring bit without a closed door.
    • An acquisition bit with internal structure, such as a waffle pattern for compartmentalizing or a helical internal flute, to propel the sample inside the bit and help acquire and retain granular samples.
    • A bit with an internal spiral into which the various particles wedge.
    • A design that provides a method of testing the frictional properties of the granular samples and potentially segregating particles based on size and density; a controlled acceleration or deceleration may be used to drop the least-frictional particles or to eventually shear the unconsolidated material near the bit center.
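
    A worked version of the retention condition just stated: wall friction μmω²r must exceed the sample weight mg, so the minimum spin rate is ω = √(g/(μr)). The friction coefficient and bit radius below are hypothetical, chosen only to show that the quoted 600-700 RPM comfortably satisfies the condition:

```python
import math

def min_rpm_to_retain(mu, radius_m, g=9.81):
    """Smallest spin rate at which friction (mu * m * omega^2 * r) supports
    the sample weight (m * g); the sample mass cancels out."""
    omega = math.sqrt(g / (mu * radius_m))   # rad/s
    return omega * 60.0 / (2.0 * math.pi)    # convert to RPM

# hypothetical values: friction coefficient 0.5, bit inner radius 2 cm
print(round(min_rpm_to_retain(0.5, 0.02)))   # ~299 RPM, below the 600-700 RPM used
```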

  4. Method of recording bioelectrical signals using a capacitive coupling

    NASA Astrophysics Data System (ADS)

    Simon, V. A.; Gerasimov, V. A.; Kostrin, D. K.; Selivanov, L. M.; Uhov, A. A.

    2017-11-01

    In this article, a technique for acquiring bioelectrical signals by means of capacitive sensors is described. A feedback loop for ultra-high-impedance biasing of the input instrumentation amplifier, which enables reception of the electrical cardiac signal (ECS) through a capacitive coupling, is proposed. Mains 50/60 Hz noise is suppressed by a narrow-band stop filter with independent tuning of the notch frequency and quality factor. The filter output is fed to a ΣΔ analog-to-digital converter (ADC), which acquires the filtered signal with 24-bit resolution. The signal processing board is connected through a universal serial bus interface to a personal computer, where the ECS is recorded and processed in digital form.

  5. SNR characteristics of 850-nm OEIC receiver with a silicon avalanche photodetector.

    PubMed

    Youn, Jin-Sung; Lee, Myung-Jae; Park, Kang-Yeob; Rücker, Holger; Choi, Woo-Young

    2014-01-13

    We investigate signal-to-noise ratio (SNR) characteristics of an 850-nm optoelectronic integrated circuit (OEIC) receiver fabricated with standard 0.25-µm SiGe bipolar complementary metal-oxide-semiconductor (BiCMOS) technology. The OEIC receiver is composed of a Si avalanche photodetector (APD) and BiCMOS analog circuits including a transimpedance amplifier with DC-balanced buffer, a tunable equalizer, a limiting amplifier, and an output buffer with 50-Ω loads. We measure the dependence of the APD's SNR characteristics on the reverse bias voltage, as well as the noise characteristics of the BiCMOS circuits. From these, we determine the SNR characteristics of the entire OEIC receiver, and finally, the results are verified with bit-error-rate measurements.

  6. Explicating the Conditions Under Which Multilevel Multiple Imputation Mitigates Bias Resulting from Random Coefficient-Dependent Missing Longitudinal Data.

    PubMed

    Gottfredson, Nisha C; Sterba, Sonya K; Jackson, Kristina M

    2017-01-01

    Random coefficient-dependent (RCD) missingness is a non-ignorable mechanism through which missing data can arise in longitudinal designs. RCD, for which we cannot test, is a problematic form of missingness that occurs if subject-specific random effects correlate with propensity for missingness or dropout. Particularly when covariate missingness is a problem, investigators typically handle missing longitudinal data by using single-level multiple imputation procedures implemented with long-format data, which ignores within-person dependency entirely, or implemented with wide-format (i.e., multivariate) data, which ignores some aspects of within-person dependency. When either of these standard approaches to handling missing longitudinal data is used, RCD missingness leads to parameter bias and incorrect inference. We explain why multilevel multiple imputation (MMI) should alleviate bias induced by an RCD missing data mechanism under conditions that contribute to stronger determinacy of random coefficients. We evaluate our hypothesis with a simulation study. Three design factors are considered: intraclass correlation (ICC; ranging from .25 to .75), number of waves (ranging from 4 to 8), and percent of missing data (ranging from 20 to 50%). We find that MMI greatly outperforms the single-level wide-format (multivariate) method for imputation under an RCD mechanism. For the MMI analyses, bias was most alleviated when the ICC was high, when there were more waves of data, and when there was less missing data. Practical recommendations for handling longitudinal missing data are suggested.

  7. SRAM As An Array Of Energetic-Ion Detectors

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Blaes, Brent R.; Lieneweg, Udo; Nixon, Robert H.

    1993-01-01

    Static random-access memory (SRAM) designed for use as array of energetic-ion detectors. Exploits well-known tendency of incident energetic ions to cause bit flips in cells of electronic memories. Design of ion-detector SRAM involves modifications of standard SRAM design to increase sensitivity to ions. Device fabricated by use of conventional complementary metal oxide/semiconductor (CMOS) processes. Potential uses include gas densimetry, position sensing, and measurement of cosmic-ray spectrum.

  8. Motor Learning Curve and Long-Term Effectiveness of Modified Constraint-Induced Movement Therapy in Children with Unilateral Cerebral Palsy: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Geerdink, Yvonne; Aarts, Pauline; Geurts, Alexander C.

    2013-01-01

    The goal of this study was to determine the progression of manual dexterity during 6 weeks (54 h) (modified) constraint-induced movement therapy ((m)CIMT) followed by 2 weeks (18 h) bimanual training (BiT) in children with unilateral spastic cerebral palsy (CP), to establish whether and when a maximal training effect was reached and which factors…

  9. Multi-wavelength access gate for WDM-formatted words in optical RAM row architectures

    NASA Astrophysics Data System (ADS)

    Fitsios, D.; Alexoudi, T.; Vagionas, C.; Miliou, A.; Kanellos, G. T.; Pleros, N.

    2013-03-01

    Optical RAM has emerged as a promising solution for overcoming the "Memory Wall" of electronics, indicating the use of light in RAM architectures as the approach towards enabling ps-regime memory access times. Taking a step further towards exploiting the unique wavelength properties of optical signals, we reveal new architectural perspectives in optical RAM structures by introducing WDM principles in the storage area. To this end, we demonstrate a novel SOA-based multi-wavelength Access Gate for utilization in a 4x4 WDM optical RAM bank architecture. The proposed multi-wavelength Access Gate can simultaneously control random access to a 4-bit optical word, exploiting Cross-Gain-Modulation (XGM) to process the 8 Bit and complementary Bit-bar channels encoded in 8 different wavelengths. It also suggests simpler optical RAM row architectures, allowing for the effective sharing of one multi-wavelength Access Gate per row, substituting the eight Access Gates required in conventional optical RAM architectures. The scheme is shown to support 10 Gbit/s operation for the incoming 4-bit data streams, with a power consumption of 15 mW/Gbit/s. All 8 wavelength channels demonstrate error-free operation with a power penalty lower than 3 dB for all channels, compared to back-to-back measurements. The proposed optical RAM architecture reveals that exploiting the WDM capabilities of optical components can lead to RAM bank implementations with smarter column/row encoders/decoders, increased circuit simplicity, and a reduced number of active elements and associated power consumption. Moreover, exploitation of the wavelength entity can release significant potential towards reconfigurable optical cache mapping schemes when using the wavelength dimension for memory addressing.

  10. Verification testing of the compression performance of the HEVC screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng

    2017-09-01

    This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.

  11. Analyzing mobile WiMAX base station deployment under different frequency planning strategies

    NASA Astrophysics Data System (ADS)

    Salman, M. K.; Ahmad, R. B.; Ali, Ziad G.; Aldhaibani, Jaafar A.; Fayadh, Rashid A.

    2015-05-01

    The frequency spectrum is a precious and scarce resource in the communication market. Therefore, different techniques are adopted to utilize the available spectrum when deploying WiMAX base stations (BS) in cellular networks. In this paper, several frequency planning techniques are illustrated, and a comprehensive comparative study between conventional frequency reuse of 1 (FR of 1) and fractional frequency reuse (FFR) is presented. These techniques are widely used in network deployment because they employ the universal frequency (all the available bandwidth) in base station installation/configuration. This paper presents a network model of 19 base stations for the comparison of the aforesaid frequency planning techniques. Users are randomly distributed among base stations; users' resource mapping and burst profile selection are based on the measured signal-to-interference-plus-noise ratio (SINR). Simulation results reveal that FFR has advantages over conventional FR of 1 on various metrics: 98% of downlink resources (slots) are exploited when FFR is applied, versus 81% under FR of 1, and the data rate increases to 10.6 Mbps under FFR, versus 7.98 Mbps under FR of 1. Spectral efficiency is higher under FR of 1 (1.072 bps/Hz) than under FFR (0.808 bps/Hz), since FR of 1 exploits all the bandwidth. Subcarrier efficiency shows how many data bits can be carried per subcarrier under different frequency planning techniques; the system can carry more data bits under FFR (2.40 bit/subcarrier) than under FR of 1 (1.998 bit/subcarrier). This study confirms that FFR can perform better than conventional frequency planning (FR of 1), which makes it a strong candidate for WiMAX BS deployment in cellular networks.

  12. Non-Hermitian localization in biological networks.

    PubMed

    Amir, Ariel; Hatano, Naomichi; Nelson, David R

    2016-04-01

    We explore the spectra and localization properties of the N-site banded one-dimensional non-Hermitian random matrices that arise naturally in sparse neural networks. Approximately equal numbers of random excitatory and inhibitory connections lead to spatially localized eigenfunctions and an intricate eigenvalue spectrum in the complex plane that controls the spontaneous activity and induced response. A finite fraction of the eigenvalues condense onto the real or imaginary axes. For large N, the spectrum has remarkable symmetries not only with respect to reflections across the real and imaginary axes but also with respect to 90° rotations, with an unusual anisotropic divergence in the localization length near the origin. When chains with periodic boundary conditions become directed, with a systematic directional bias superimposed on the randomness, a hole centered on the origin opens up in the density-of-states in the complex plane. All states are extended on the rim of this hole, while the localized eigenvalues outside the hole are unchanged. The bias-dependent shape of this hole tracks the bias-independent contours of constant localization length. We treat the large-N limit by a combination of direct numerical diagonalization and using transfer matrices, an approach that allows us to exploit an electrostatic analogy connecting the "charges" embodied in the eigenvalue distribution with the contours of constant localization length. We show that similar results are obtained for more realistic neural networks that obey "Dale's law" (each site is purely excitatory or inhibitory) and conclude with perturbation theory results that describe the limit of large directional bias, when all states are extended. Related problems arise in random ecological networks and in chains of artificial cells with randomly coupled gene expression patterns.
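
    A minimal numerical sketch in the spirit of the model described above: a banded matrix with a random ±w diagonal (excitatory or inhibitory sites) and directionally biased hopping e^{±g} on a periodic chain, i.e. the classic Hatano-Nelson construction; parameter values are illustrative:

```python
import numpy as np

def hatano_nelson_spectrum(n=400, g=0.5, w=1.0, seed=0):
    """Return the complex eigenvalues of a periodic chain with a random
    +/-w diagonal and asymmetric nearest-neighbor hopping exp(+/-g)."""
    rng = np.random.default_rng(seed)
    m = np.diag(rng.choice([-w, w], size=n)).astype(complex)
    for i in range(n):
        m[i, (i + 1) % n] = np.exp(g)    # rightward hopping, enhanced
        m[(i + 1) % n, i] = np.exp(-g)   # leftward hopping, suppressed
    return np.linalg.eigvals(m)

ev = hatano_nelson_spectrum()
print(ev.shape, ev.imag.max())  # a finite fraction of eigenvalues leaves the real axis
```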

  13. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
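
    A toy simulation of the attenuation effect described above, not the INTERPHONE sensitivity-analysis model itself: non-differential random error is added to a simulated exposure, and the crude odds ratio shrinks toward the null:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# toy cohort: normally distributed log phone use with a modest true effect
log_use = rng.normal(0.0, 1.0, n)
p_case = 1.0 / (1.0 + np.exp(-(-3.0 + 0.4 * log_use)))
case = rng.random(n) < p_case

# non-differential random recall error on the log scale
log_reported = log_use + rng.normal(0.0, 0.8, n)

def or_high_vs_low(x, case):
    """Crude odds ratio comparing above- vs below-median exposure."""
    hi = x > np.median(x)
    a, b = np.sum(hi & case), np.sum(hi & ~case)
    c, d = np.sum(~hi & case), np.sum(~hi & ~case)
    return (a * d) / (b * c)

print(or_high_vs_low(log_use, case), or_high_vs_low(log_reported, case))
# the odds ratio from reported use is attenuated toward 1: bias toward the null
```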

  14. A Concurrent Smalltalk Compiler for the Message-Driven Processor

    DTIC Science & Technology

    1988-05-01

    [The abstract of this record is OCR-garbled Lisp source rather than prose. The legible fragments define a macro `brange` that builds a bitmap with the bits from low-bit (inclusive) to high-bit (exclusive) set, with low-bit defaulting to zero; the remaining fragments are too corrupted to reconstruct.]

  15. New PDC bit design reduces vibrational problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mensa-Wilmot, G.; Alexander, W.L.

    1995-05-22

    A new polycrystalline diamond compact (PDC) bit design combines cutter layout, load balancing, unsymmetrical blades and gauge pads, and spiraled blades to reduce problematic vibrations without limiting drilling efficiency. Stabilization improves drilling efficiency and also improves dull characteristics for PDC bits. Some PDC bit designs mitigate one vibrational mode (such as bit whirl) through drilling parameter manipulation yet cause or excite another vibrational mode (such as slip-stick). An alternative vibration-reducing concept which places no limitations on the operational environment of a PDC bit has been developed to ensure optimization of the bit's available mechanical energy. The paper discusses bit stabilization, vibration reduction, vibration prevention, cutter arrangement, load balancing, blade layout, spiraled blades, and bit design.

  16. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    PubMed

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors, based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10^8 histories. For a smaller number of histories (1×10^8), the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central-axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
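
    The efficiency figures quoted above follow the standard strong-scaling definitions; a small sketch with hypothetical timings chosen to land near the quoted 83.9%:

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Strong-scaling metrics: speedup S = T1/Tn, efficiency E = S/n."""
    s = t_serial / t_parallel
    return s, s / n_procs

# hypothetical timings: 24 processors cutting a run from 4800 s to 238 s
print(speedup_and_efficiency(4800.0, 238.0, 24))  # -> (~20.2, ~0.84)
```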

  17. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
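
    For reference, the plain zero-inflated Poisson density that such models build on, without the paper's random effects or covariate-dependent variance structure:

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a structural
    zero; otherwise it is drawn from Poisson(lam)."""
    pois = math.exp(-lam) * lam**y / math.factorial(y)
    return pi * (1 if y == 0 else 0) + (1 - pi) * pois

print(zip_pmf(0, lam=2.0, pi=0.3))  # 0.3 + 0.7*e^-2 ≈ 0.395
```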

  18. The need for randomization in animal trials: an overview of systematic reviews.

    PubMed

    Hirst, Jennifer A; Howick, Jeremy; Aronson, Jeffrey K; Roberts, Nia; Perera, Rafael; Koshiaris, Constantinos; Heneghan, Carl

    2014-01-01

    Randomization, allocation concealment, and blind outcome assessment have been shown to reduce bias in human studies. Authors from the Collaborative Approach to Meta Analysis and Review of Animal Data from Experimental Studies (CAMARADES) collaboration recently found that these features protect against bias in animal stroke studies. We extended the scope of the work from CAMARADES to include investigations of treatments for any condition. We conducted an overview of systematic reviews. We searched Medline and Embase for systematic reviews of animal studies testing any intervention (against any control), and we included any disease area and outcome. We included reviews comparing randomized versus not randomized (but otherwise controlled), concealed versus unconcealed treatment allocation, or blinded versus unblinded outcome assessment. Thirty-one systematic reviews met our inclusion criteria: 20 investigated treatments for experimental stroke, 4 investigated treatments for spinal cord diseases, while 1 review each investigated treatments for bone cancer, intracerebral hemorrhage, glioma, multiple sclerosis, Parkinson's disease, and treatments used in emergency medicine. In our sample, 29% of studies reported randomization, 15% of studies reported allocation concealment, and 35% of studies reported blinded outcome assessment. We pooled the results in a meta-analysis, and in our primary analysis found that failure to randomize significantly increased effect sizes, whereas allocation concealment and blinding did not. In our secondary analyses we found that randomization, allocation concealment, and blinding reduced effect sizes, especially where outcomes were subjective. Our study demonstrates the need for randomization, allocation concealment, and blind outcome assessment in animal research across a wide range of outcomes and disease areas. Since human studies are often justified based on results from animal studies, our results suggest that unduly biased animal studies should not be allowed to constitute part of the rationale for human trials.

  1. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    PubMed

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and this may affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  2. An AES chip with DPA resistance using hardware-based random order execution

    NASA Astrophysics Data System (ADS)

    Bo, Yu; Xiangyu, Li; Cong, Chen; Yihe, Sun; Liji, Wu; Xiangmin, Zhang

    2012-06-01

    This paper presents an AES (advanced encryption standard) chip that combats differential power analysis (DPA) side-channel attack through hardware-based random order execution. Both decryption and encryption procedures of an AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plaintexts (with 128-bit secret keys) is 19 nJ. The core area is 0.43 mm2. A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that one byte of a secret key cannot be disclosed from our chip under random mode after 64000 power traces were used in the DPA attack. Compared with the corresponding fixed order execution, the hardware-based random order execution improves DPA resistance by at least 21 times.
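
    The byte-level independence being exploited is the property that steps like SubBytes touch each of the 16 state bytes independently, so their order can be shuffled on every run. A software analogy (ours; the chip does this in hardware, and the S-box table below is a placeholder):

```python
import random

SBOX = list(range(256))  # stand-in for the AES S-box table (placeholder values)

def sub_bytes_random_order(state):
    """Apply the byte substitution to all 16 state bytes in a random order.

    Because SubBytes is byte-wise independent, the 16 table lookups can be
    executed in any order without changing the result; randomizing the order
    per encryption decorrelates power traces from data values, which is the
    idea behind the chip's HMF-controlled random order execution.
    """
    order = list(range(16))
    random.shuffle(order)          # a fresh random execution schedule per run
    for i in order:
        state[i] = SBOX[state[i]]
    return state

state = [random.randrange(256) for _ in range(16)]
print(sub_bytes_random_order(state))
```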

  3. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (EBCD) and the lower boundary (best case) from Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
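
    The two boundary designs are simple enough to simulate directly. A minimal sketch (parameter choices ours: EBCD bias p = 2/3, big stick limit b = 3; the guesser always picks the currently under-represented arm):

```python
import random

def efron_bcd(n, p=2/3):
    """Efron's biased coin design: favor the under-represented arm with probability p."""
    seq, diff = [], 0                                 # diff = (#A - #B)
    for _ in range(n):
        if diff == 0:
            a = random.random() < 0.5                 # balanced: fair coin
        else:
            a = (random.random() < p) == (diff < 0)   # biased toward the lagging arm
        seq.append(a)
        diff += 1 if a else -1
    return seq

def big_stick(n, b=3):
    """Soares and Wu's big stick design: fair coin, forced only when |imbalance| hits b."""
    seq, diff = [], 0
    for _ in range(n):
        a = (diff < 0) if abs(diff) >= b else (random.random() < 0.5)
        seq.append(a)
        diff += 1 if a else -1
    return seq

def evaluate(design, n=100, reps=2000):
    """Monte Carlo estimates of mean maximum |imbalance| and correct-guess probability
    for a guesser who always picks the currently under-represented arm."""
    max_imb = cg = 0.0
    for _ in range(reps):
        diff, worst = 0, 0
        for a in design(n):
            cg += 0.5 if diff == 0 else ((diff < 0) == a)
            diff += 1 if a else -1
            worst = max(worst, abs(diff))
        max_imb += worst
    return max_imb / reps, cg / (reps * n)

for name, d in [("Efron BCD", efron_bcd), ("big stick", big_stick)]:
    imb, cg = evaluate(d)
    print(f"{name}: mean max |imbalance| = {imb:.2f}, correct-guess prob = {cg:.3f}")
```

    The trade-off the paper quantifies is visible here: the big stick design guesses closer to chance (more randomness) while still capping imbalance at b.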

  4. Critique of a Hughes shuttle Ku-band data sampler/bit synchronizer

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.

    1980-01-01

    An alternative bit synchronizer proposed for shuttle was analyzed in a noise-free environment by considering the basic operation of the loop via timing diagrams and by linearizing the bit synchronizer as an equivalent, continuous, phase-locked loop (PLL). The loop is composed of a high-frequency phase-frequency detector, which is capable of detecting both phase and frequency errors and is used to track the clock, and a bit transition detector, which attempts to track the transitions of the data bits. It was determined that the basic approach was a good design which, with proper implementation of the accumulator, up/down counter, and logic, should provide accurate mid-bit sampling with symmetric bits. However, when bit asymmetry occurs, the bit synchronizer can lock up with a large timing error, yet be quasi-stable (timing will not change unless the clock and bit sequence drift). This will result in incorrectly detecting some bits.

  5. Network sampling coverage II: The effect of non-random missing data on network measurement

    PubMed Central

    Smith, Jeffrey A.; Moody, James; Morgan, Jonathan

    2016-01-01

    Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the effect of measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more/less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures and behavioral homophily is more robust than degree-homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available java application) to gauge the bias in their own data. PMID:27867254

  6. Network sampling coverage II: The effect of non-random missing data on network measurement.

    PubMed

    Smith, Jeffrey A; Moody, James; Morgan, Jonathan

    2017-01-01

    Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the effect of measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more/less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures and behavioral homophily is more robust than degree-homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available java application) to gauge the bias in their own data.

  7. Probability theory plus noise: Replies to Crupi and Tentori (2016) and to Nilsson, Juslin, and Winman (2016).

    PubMed

    Costello, Fintan; Watts, Paul

    2016-01-01

    A standard assumption in much of current psychology is that people do not reason about probability using the rules of probability theory but instead use various heuristics or "rules of thumb," which can produce systematic reasoning biases. In Costello and Watts (2014), we showed that a number of these biases can be explained by a model where people reason according to probability theory but are subject to random noise. More importantly, that model also predicted agreement with probability theory for certain expressions that cancel the effects of random noise: Experimental results strongly confirmed this prediction, showing that probabilistic reasoning is simultaneously systematically biased and "surprisingly rational." In their commentaries on that paper, both Crupi and Tentori (2016) and Nilsson, Juslin, and Winman (2016) point to various experimental results that, they suggest, our model cannot explain. In this reply, we show that our probability theory plus noise model can in fact explain every one of the results identified by these authors. This gives a degree of additional support to the view that people's probability judgments embody the rational rules of probability theory and that biases in those judgments can be explained as simply effects of random noise. (c) 2015 APA, all rights reserved.
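
    The core of the model is easy to simulate. A minimal sketch, assuming the simplest version of the noise model (each stored sample is read with flip probability d): individual judged probabilities are biased toward 0.5, yet the combination P(A) + P(B) - P(A and B) - P(A or B) stays at 0 because the noise terms cancel exactly:

```python
import random

def judged(true_flags, d=0.1):
    """Estimate a probability by counting noisy reads of stored samples:
    each sample's flag is misread (flipped) with probability d."""
    reads = [f != (random.random() < d) for f in true_flags]
    return sum(reads) / len(reads)

random.seed(1)
N = 100_000
A = [random.random() < 0.2 for _ in range(N)]   # true P(A) = 0.2
B = [random.random() < 0.5 for _ in range(N)]   # true P(B) = 0.5

pA   = judged(A)
pB   = judged(B)
pAB  = judged([a and b for a, b in zip(A, B)])
pAoB = judged([a or b for a, b in zip(A, B)])

print(f"judged P(A) = {pA:.3f}  (true 0.2, pulled toward 0.5)")
print(f"judged P(B) = {pB:.3f}  (true 0.5, unaffected)")
# The noise cancels in this identity, so it should be ~0 despite the biases:
print(f"P(A)+P(B)-P(A&B)-P(AvB) = {pA + pB - pAB - pAoB:+.4f}")
```

    The cancellation is exact in expectation: each judged term equals the true probability plus d(1 - 2p), and since P(A) + P(B) = P(A and B) + P(A or B), the four d-terms sum to zero.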

  8. Application of holographic optical techniques to bulk memory.

    NASA Technical Reports Server (NTRS)

    Anderson, L. K.

    1971-01-01

    Current efforts to exploit the spatial redundancy and built-in imaging of holographic optical techniques to provide high information densities without critical alignment and tight mechanical tolerances are reviewed. Read-write-erase in situ operation is possible but is presently impractical because of limitations in available recording media. As these are overcome, it should prove feasible to build holographic bulk memories with mechanically replaceable hologram plates featuring very fast (less than 2 microsec) random access to large (greater than 100 million bit) data blocks and very high throughput (greater than 500 Mbit/sec). Using volume holographic storage it may eventually be possible to realize random-access mass memories which require no mechanical motion and yet provide very high capacity.

  9. Acanthopria and Mimopriella parasitoid wasps (Diapriidae) attack Cyphomyrmex fungus-growing ants (Formicidae, Attini)

    NASA Astrophysics Data System (ADS)

    Fernández-Marín, Hermógenes; Zimmerman, Jess K.; Wcislo, William T.

    2006-01-01

    New World diapriine wasps are abundant and diverse, but the biology of most species is unknown. We provide the first description of the biology of diapriine wasps, Acanthopria spp. and Mimopriella sp., which attack the larvae of Cyphomyrmex fungus-growing ants. In Puerto Rico, the koinobiont parasitoids Acanthopria attack Cyphomyrmex minutus, while in Panama at least four morphospecies of Acanthopria and one of Mimopriella attack Cyphomyrmex rimosus. Of the total larvae per colony, 0-100% were parasitized, and 27-70% of the colonies per population were parasitized. Parasitism rate and colony size were negatively correlated for C. rimosus but not for C. minutus. Worker ants grasped at, bit, and in some cases, killed adult wasps that emerged in artificial nests or tried to enter natural nests. Parasitoid secondary sex ratios were female-biased for eclosing wasps, while field collections showed a male-biased sex ratio. Based on their abundance and success in attacking host ants, these minute wasps present excellent opportunities to explore how natural enemies impact ant colony demography and population biology.

  10. Non-volatile spin bistability based on ferromagnet-semiconductor quantum dot hybrid nanostructure

    NASA Astrophysics Data System (ADS)

    Semenov, Yuriy; Enaya, Hani; Zavada, John; Kim, Ki Wook

    2008-03-01

    Electrical manipulation of a memory cell based on a bistability effect in a nanostructure consisting of a semiconductor quantum dot (QD) adjoined on opposite sides by a dielectric ferromagnetic layer (DFL) and a reservoir of itinerant holes is investigated theoretically. The operating principle is based on the interplay between the exchange field of the holes Bh, acting on the magnetization vector M of the DFL perpendicular to the structure plane, and the anisotropy field Ba, which aligns M along the plane. At low hole population of the QD (Bh < Ba), the subsequent M rotation will decrease the hole energy in the QD; hence the high hole population state is sustained (second stable state "1") under a fixed electro-chemical potential set by the reservoir even after bias is removed. The analysis of bit retention time of the proposed memory demonstrates the feasibility of the device with lateral QD size of at least 30 nm under room temperature operation. Another advantage is the extremely small dissipative energy for Write/Erase operations.

  11. Bit-1 is an essential regulator of myogenic differentiation

    PubMed Central

    Griffiths, Genevieve S.; Doe, Jinger; Jijiwa, Mayumi; Van Ry, Pam; Cruz, Vivian; de la Vega, Michelle; Ramos, Joe W.; Burkin, Dean J.; Matter, Michelle L.

    2015-01-01

    Muscle differentiation requires a complex signaling cascade that leads to the production of multinucleated myofibers. Genes regulating the intrinsic mitochondrial apoptotic pathway also function in controlling cell differentiation. How such signaling pathways are regulated during differentiation is not fully understood. Bit-1 (also known as PTRH2) mutations in humans cause infantile-onset multisystem disease with muscle weakness. We demonstrate here that Bit-1 controls skeletal myogenesis through a caspase-mediated signaling pathway. Bit-1-null mice exhibit a myopathy with hypotrophic myofibers. Bit-1-null myoblasts prematurely express muscle-specific proteins. Similarly, knockdown of Bit-1 expression in C2C12 myoblasts promotes early differentiation, whereas overexpression delays differentiation. In wild-type mice, Bit-1 levels increase during differentiation. Bit-1-null myoblasts exhibited increased levels of caspase 9 and caspase 3 without increased apoptosis. Bit-1 re-expression partially rescued differentiation. In Bit-1-null muscle, Bcl-2 levels are reduced, suggesting that Bcl-2-mediated inhibition of caspase 9 and caspase 3 is decreased. Bcl-2 re-expression rescued Bit-1-mediated early differentiation in Bit-1-null myoblasts and C2C12 cells with knockdown of Bit-1 expression. These results support an unanticipated yet essential role for Bit-1 in controlling myogenesis through regulation of Bcl-2. PMID:25770104

  12. Workshop on Incomplete Network Data Held at Sandia National Labs – Livermore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soundarajan, Sucheta; Wendt, Jeremy D.

    2016-06-01

    While network analysis is applied in a broad variety of scientific fields (including physics, computer science, biology, and the social sciences), how networks are constructed and the resulting bias and incompleteness have drawn more limited attention. For example, in biology, gene networks are typically developed via experiment -- many actual interactions are likely yet to be discovered. In addition to this incompleteness, the data-collection processes can introduce significant bias into the observed network datasets. For instance, if you observe part of the World Wide Web network through a classic random walk, then high degree nodes are more likely to be found than if you had selected nodes at random. Unfortunately, such incomplete and biasing data collection methods must often be used.
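
    The degree bias of random-walk sampling is easy to reproduce. A minimal sketch, assuming the third-party networkx package is available:

```python
import random
from collections import Counter
import networkx as nx  # third-party; assumed installed

random.seed(0)
G = nx.barabasi_albert_graph(1000, 2)     # heavy-tailed degree distribution

# Sample nodes via a classic random walk
visits, node = Counter(), random.choice(list(G))
for _ in range(20_000):
    node = random.choice(list(G[node]))   # step to a uniformly random neighbor
    visits[node] += 1

walked = set(visits)
uniform = set(random.sample(list(G), len(walked)))  # same-size uniform sample

deg = dict(G.degree())
avg = lambda nodes: sum(deg[v] for v in nodes) / len(nodes)
print(f"mean degree, whole graph:        {avg(G):.2f}")
print(f"mean degree, random-walk sample: {avg(walked):.2f}  (inflated)")
print(f"mean degree, uniform sample:     {avg(uniform):.2f}")
```

    A random walk enters a node in proportion to its degree, so the walked sample systematically over-represents hubs, which is exactly the bias described above.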

  13. Immediate movement history influences reach-to-grasp action selection in children and adults.

    PubMed

    Kent, Samuel W; Wilson, Andrew D; Plumb, Mandy S; Williams, Justin H G; Mon-Williams, Mark

    2009-01-01

    Action selection is subject to many biases. Immediate movement history is one such bias seen in young infants. Is this bias strong enough to affect adult behavior? Adult participants reached and grasped a cylinder positioned to require either pronation or supination of the hand. Successive cylinder positions changed either randomly or systematically between trials. Random positioning led to optimized economy of movement. In contrast, systematic changes in position biased action selection toward previously selected actions at the expense of movement economy. Thus, one switches to a new movement only when the savings outweigh the costs of the switch. Immediate movement history had an even larger influence on children aged 7-15 years. This suggests that switching costs are greater in children, which is consistent with their reduced grasping experience. The presence of this effect in adults suggests that immediate movement history exerts a more widespread and pervasive influence on patterns of action selection than researchers had previously recognized.

  14. [Periodontal treatment for cardiovascular risk factors: a systematic review].

    PubMed

    Deng, Linkai; Li, Chunjie; Li, Qian; Zhang, Yukui; Zhao, Hongwei

    2013-10-01

    To evaluate the efficacy of periodontal treatment for the management of cardiovascular risk factors. Eligible studies in Cochrane Controlled Trials Register/CENTRAL, PubMed, EMBASE, and China Biology Medicine disc (CBMdisc) were searched until October 13, 2011. References of the included studies were hand searched. Two reviewers assessed the risk of bias and extracted the data of the included studies in duplicate. Meta-analysis was conducted with Revman 5.1. Six randomized controlled trials involving 682 participants were included. One trial had a low risk of bias, one had a moderate risk, and the remaining four had a high risk. Meta-analysis showed that periodontal treatment has no significant effect on C-reactive protein, total cholesterol, low-density lipoprotein cholesterol, and triglycerides (P > 0.05). However, the treatment had a significant effect on high-density lipoprotein cholesterol [MD = 0.05, 95% CI (0.00, 0.09), P = 0.04]. Periodontal treatment improves high-density lipoprotein cholesterol, although more randomized controlled trials are needed to verify its effectiveness.

  15. Wide brick tunnel randomization - an unequal allocation procedure that limits the imbalance in treatment totals.

    PubMed

    Kuznetsova, Olga M; Tymofyeyev, Yevgen

    2014-04-30

    In open-label studies, partial predictability of permuted block randomization provides potential for selection bias. To lessen the selection bias in two-arm studies with equal allocation, a number of allocation procedures were developed that limit the imbalance in treatment totals at a pre-specified level but do not require exact balance at the ends of the blocks. In studies with unequal allocation, however, the task of designing a randomization procedure that sets a pre-specified limit on imbalance in group totals is not resolved. Existing allocation procedures either do not preserve the allocation ratio at every allocation or do not include all allocation sequences that comply with the pre-specified imbalance threshold. Kuznetsova and Tymofyeyev described the brick tunnel randomization for studies with unequal allocation that preserves the allocation ratio at every step and, in the two-arm case, includes all sequences that satisfy the smallest possible imbalance threshold. This article introduces wide brick tunnel randomization for studies with unequal allocation that allows all allocation sequences with imbalance not exceeding any pre-specified threshold while preserving the allocation ratio at every step. In open-label studies, allowing a larger imbalance in treatment totals lowers the selection bias caused by the predictability of treatment assignments. The applications of the technique in two-arm and multi-arm open-label studies with unequal allocation are described. Copyright © 2013 John Wiley & Sons, Ltd.
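
    A highly simplified sketch of the design goal (ours, not the authors' wide brick tunnel algorithm itself): draw each assignment with probabilities proportional to the target ratio, restricted to the arms that keep every arm's running total within a tolerance b of its expected total.

```python
import random

def constrained_unequal(n, ratio=(2, 1), b=2.0):
    """Imbalance-limited unequal allocation (simplified illustration, NOT the
    exact wide brick tunnel procedure): at each step, only arms that keep all
    deviations from the target totals within b are allowed, and the choice
    among allowed arms is weighted by the target allocation ratio.
    b must be large enough for the given ratio or no arm may qualify."""
    total = sum(ratio)
    counts = [0] * len(ratio)
    seq = []
    for step in range(1, n + 1):
        allowed = [arm for arm in range(len(ratio))
                   if all(abs((counts[j] + (j == arm)) - step * ratio[j] / total) <= b
                          for j in range(len(ratio)))]
        arm = random.choices(allowed, weights=[ratio[a] for a in allowed])[0]
        counts[arm] += 1
        seq.append(arm)
    return seq, counts

seq, counts = constrained_unequal(90, ratio=(2, 1), b=2.0)
print(counts)   # stays close to the 2:1 target, never drifting past the tolerance
```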

  16. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team

    We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  17. Cavity QED with hybrid nanocircuits: from atomic-like physics to condensed matter phenomena

    NASA Astrophysics Data System (ADS)

    Cottet, Audrey; Dartiailh, Matthieu C.; Desjardins, Matthieu M.; Cubaynes, Tino; Contamin, Lauriane C.; Delbecq, Matthieu; Viennot, Jérémie J.; Bruhat, Laure E.; Douçot, Benoit; Kontos, Takis

    2017-11-01

    Circuit QED techniques have been instrumental in manipulating and probing with exquisite sensitivity the quantum state of superconducting quantum bits coupled to microwave cavities. Recently, it has become possible to fabricate new devices in which the superconducting quantum bits are replaced by hybrid mesoscopic circuits combining nanoconductors and metallic reservoirs. This mesoscopic QED provides a new experimental playground to study the light-matter interaction in electronic circuits. Here, we present the experimental state of the art of mesoscopic QED and its theoretical description. A first class of experiments focuses on the artificial atom limit, where some quasiparticles are trapped in nanocircuit bound states. In this limit, the circuit QED techniques can be used to manipulate and probe electronic degrees of freedom such as confined charges, spins, or Andreev pairs. A second class of experiments uses cavity photons to reveal the dynamics of electron tunneling between a nanoconductor and fermionic reservoirs. For instance, the Kondo effect, the charge relaxation caused by grounded metallic contacts, and the photo-emission caused by voltage-biased reservoirs have been studied. The tunnel coupling between nanoconductors and fermionic reservoirs also enables one to obtain split Cooper pairs or Majorana bound states. Cavity photons represent a qualitatively new tool to study these exotic condensed matter states.

  18. Cavity QED with hybrid nanocircuits: from atomic-like physics to condensed matter phenomena.

    PubMed

    Cottet, Audrey; Dartiailh, Matthieu C; Desjardins, Matthieu M; Cubaynes, Tino; Contamin, Lauriane C; Delbecq, Matthieu; Viennot, Jérémie J; Bruhat, Laure E; Douçot, Benoit; Kontos, Takis

    2017-11-01

    Circuit QED techniques have been instrumental in manipulating and probing with exquisite sensitivity the quantum state of superconducting quantum bits coupled to microwave cavities. Recently, it has become possible to fabricate new devices in which the superconducting quantum bits are replaced by hybrid mesoscopic circuits combining nanoconductors and metallic reservoirs. This mesoscopic QED provides a new experimental playground to study the light-matter interaction in electronic circuits. Here, we present the experimental state of the art of mesoscopic QED and its theoretical description. A first class of experiments focuses on the artificial atom limit, where some quasiparticles are trapped in nanocircuit bound states. In this limit, the circuit QED techniques can be used to manipulate and probe electronic degrees of freedom such as confined charges, spins, or Andreev pairs. A second class of experiments uses cavity photons to reveal the dynamics of electron tunneling between a nanoconductor and fermionic reservoirs. For instance, the Kondo effect, the charge relaxation caused by grounded metallic contacts, and the photo-emission caused by voltage-biased reservoirs have been studied. The tunnel coupling between nanoconductors and fermionic reservoirs also enables one to obtain split Cooper pairs or Majorana bound states. Cavity photons represent a qualitatively new tool to study these exotic condensed matter states.

  19. Identifying Items to Assess Methodological Quality in Physical Therapy Trials: A Factor Analysis

    PubMed Central

    Cummings, Greta G.; Fuentes, Jorge; Saltaji, Humam; Ha, Christine; Chisholm, Annabritt; Pasichnyk, Dion; Rogers, Todd

    2014-01-01

    Background: Numerous tools and individual items have been proposed to assess the methodological quality of randomized controlled trials (RCTs). The frequency of use of these items varies according to health area, which suggests a lack of agreement regarding their relevance to trial quality or risk of bias. Objective: The objectives of this study were: (1) to identify the underlying component structure of items and (2) to determine relevant items to evaluate the quality and risk of bias of trials in physical therapy by using an exploratory factor analysis (EFA). Design: A methodological research design was used, and an EFA was performed. Methods: Randomized controlled trials used for this study were randomly selected from searches of the Cochrane Database of Systematic Reviews. Two reviewers used 45 items gathered from 7 different quality tools to assess the methodological quality of the RCTs. An exploratory factor analysis was conducted using the principal axis factoring (PAF) method followed by varimax rotation. Results: Principal axis factoring identified 34 items loaded on 9 common factors: (1) selection bias; (2) performance and detection bias; (3) eligibility, intervention details, and description of outcome measures; (4) psychometric properties of the main outcome; (5) contamination and adherence to treatment; (6) attrition bias; (7) data analysis; (8) sample size; and (9) control and placebo adequacy. Limitation: Because of the exploratory nature of the results, a confirmatory factor analysis is needed to validate this model. Conclusions: To the authors' knowledge, this is the first factor analysis to explore the underlying component items used to evaluate the methodological quality or risk of bias of RCTs in physical therapy. The items and factors represent a starting point for evaluating the methodological quality and risk of bias in physical therapy trials. Empirical evidence of the association among these items with treatment effects and a confirmatory factor analysis of these results are needed to validate these items. PMID:24786942

  20. Identifying items to assess methodological quality in physical therapy trials: a factor analysis.

    PubMed

    Armijo-Olivo, Susan; Cummings, Greta G; Fuentes, Jorge; Saltaji, Humam; Ha, Christine; Chisholm, Annabritt; Pasichnyk, Dion; Rogers, Todd

    2014-09-01

    Numerous tools and individual items have been proposed to assess the methodological quality of randomized controlled trials (RCTs). The frequency of use of these items varies according to health area, which suggests a lack of agreement regarding their relevance to trial quality or risk of bias. The objectives of this study were: (1) to identify the underlying component structure of items and (2) to determine relevant items to evaluate the quality and risk of bias of trials in physical therapy by using an exploratory factor analysis (EFA). A methodological research design was used, and an EFA was performed. Randomized controlled trials used for this study were randomly selected from searches of the Cochrane Database of Systematic Reviews. Two reviewers used 45 items gathered from 7 different quality tools to assess the methodological quality of the RCTs. An exploratory factor analysis was conducted using the principal axis factoring (PAF) method followed by varimax rotation. Principal axis factoring identified 34 items loaded on 9 common factors: (1) selection bias; (2) performance and detection bias; (3) eligibility, intervention details, and description of outcome measures; (4) psychometric properties of the main outcome; (5) contamination and adherence to treatment; (6) attrition bias; (7) data analysis; (8) sample size; and (9) control and placebo adequacy. Because of the exploratory nature of the results, a confirmatory factor analysis is needed to validate this model. To the authors' knowledge, this is the first factor analysis to explore the underlying component items used to evaluate the methodological quality or risk of bias of RCTs in physical therapy. The items and factors represent a starting point for evaluating the methodological quality and risk of bias in physical therapy trials. Empirical evidence of the association among these items with treatment effects and a confirmatory factor analysis of these results are needed to validate these items. © 2014 American Physical Therapy Association.
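
    A sketch of this analysis pipeline in Python, assuming the third-party factor_analyzer package (whose FactorAnalyzer supports principal-axis extraction and varimax rotation); the data, the nine-factor choice, and the 0.4 loading threshold here are all ours, for illustration only:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party package; assumed installed

# Hypothetical data: rows = trials, columns = 45 binary quality items
rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(200, 45)).astype(float)

# Principal axis factoring followed by varimax rotation, as in the study
fa = FactorAnalyzer(n_factors=9, method="principal", rotation="varimax")
fa.fit(items)

loadings = fa.loadings_                       # 45 x 9 rotated loading matrix
for f in range(9):                            # items "loading" on each factor
    kept = np.where(np.abs(loadings[:, f]) >= 0.4)[0]
    print(f"factor {f + 1}: items {kept.tolist()}")
```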

  1. Reliability and Validity Assessment of a Linear Position Transducer

    PubMed Central

    Garnacho-Castaño, Manuel V.; López-Lastra, Silvia; Maté-Muñoz, José L.

    2015-01-01

    The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly AV, AP) make this device a useful tool for monitoring resistance training. Key points: This study determined the validity and reliability of peak velocity, average velocity, peak power and average power measurements made using a linear position transducer. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and power. PMID:25729300
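
    "Systematic bias ± random error" figures of this kind are commonly computed Bland-Altman style: the bias is the mean of the paired differences and the random error is 1.96 times their standard deviation. A minimal sketch with made-up numbers (chosen only to echo the reported scale, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical paired power readings (W): criterion device vs. device under test
criterion = rng.normal(500, 80, 32)
tested = criterion + rng.normal(-19.2, 34.5, 32)   # arbitrary offset and spread

diff = tested - criterion
bias = diff.mean()                       # systematic bias: mean difference
random_error = 1.96 * diff.std(ddof=1)   # random error: half-width of the
                                         # 95% limits of agreement
print(f"systematic bias: {bias:+.2f} W")
print(f"random error:   ±{random_error:.2f} W")
```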

  2. The effects of Cognitive Bias Modification training and oxytocin administration on trust in maternal support: study protocol for a randomized controlled trial.

    PubMed

    Verhees, Martine W F T; Ceulemans, Eva; Bakermans-Kranenburg, Marian J; van IJzendoorn, Marinus H; de Winter, Simon; Bosmans, Guy

    2017-07-14

    Lack of trust in parental support is a transdiagnostic risk factor for the development of psychological problems throughout the lifespan. Research suggests that children's cognitive attachment representations and related information processing biases could be an important target for interventions aiming to build trust in the parent-child relationship. A paradigm that can alter these biases and increase trust is that of Cognitive Bias Modification (CBM), during which a target processing bias is systematically trained. Trust-related CBM training effects could possibly be enhanced by oxytocin, a neuropeptide that has been proposed to play an important role in social information processing and social relationships. The present article describes the study protocol for a double-blind randomized controlled trial (RCT) aimed at testing the individual and combined effects of CBM training and oxytocin administration on trust in maternal support. One hundred children (aged 8-12 years) are randomly assigned to one of four intervention conditions. Participants inhale a nasal spray that either contains oxytocin (OT) or a placebo. Additionally, they receive either a CBM training aimed at positively modifying trust-related information processing bias or a neutral placebo training aimed to have no trust-related effects. Main and interaction effects of the interventions are assessed on three levels of trust-related outcome measures: trust-related interpretation bias; self-reported trust; and mother-child interactional behavior. Importantly, side-effects of a single administration of OT in middle childhood are monitored closely to provide further information on the safety of OT administration in this age group. The present RCT is the first study to combine CBM training with oxytocin to test for individual and combined effects on trust in mother. If effective, CBM training and oxytocin could be easily applicable and nonintrusive additions to interventions that target trust in the context of the parent-child relationship. ClinicalTrials.gov, ID: NCT02737254 . Registered on 23 March 2016.

  3. Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.

    PubMed

    Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt

    2009-05-30

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32kHz per channel with 18-bits of A/D resolution are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
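
    The quoted volume follows from a one-line arithmetic check:

```python
# Back-of-the-envelope check of the quoted raw data volume
channels = 320          # hybrid micro- and macroelectrode arrays
rate_hz = 32_000        # samples per second per channel
bytes_per_sample = 4

bytes_per_day = channels * rate_hz * bytes_per_sample * 86_400
print(f"{bytes_per_day / 1e12:.2f} TB/day")   # ~3.5 TB/day, i.e. ~3 terabytes
```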

  4. Experimental demonstration of the optical multi-mesh hypercube: scaleable interconnection network for multiprocessors and multicomputers.

    PubMed

    Louri, A; Furlonge, S; Neocleous, C

    1996-12-10

    A prototype of a novel topology for scaleable optical interconnection networks called the optical multi-mesh hypercube (OMMH) is experimentally demonstrated at up to a 150-Mbit/s data rate (2^7 - 1 nonreturn-to-zero pseudo-random data pattern) at a bit error rate of 10^-13 per link by the use of commercially available devices. OMMH is a scaleable network [Appl. Opt. 33, 7558 (1994); J. Lightwave Technol. 12, 704 (1994)] architecture that combines the positive features of the hypercube (small diameter, connectivity, symmetry, simple routing, and fault tolerance) and the mesh (constant node degree and size scaleability). The optical implementation method is divided into two levels: high-density local connections for the hypercube modules, and high-bit-rate, low-density, long connections for the mesh links connecting the hypercube modules. Free-space imaging systems utilizing vertical-cavity surface-emitting laser (VCSEL) arrays, lenslet arrays, space-invariant holographic techniques, and photodiode arrays are demonstrated for the local connections. Optobus fiber interconnects from Motorola are used for the long-distance connections. The OMMH was optimized to operate at the data rate of Motorola's Optobus (10-bit-wide, VCSEL-based bidirectional data interconnects at 150 Mbits/s). Difficulties encountered included the varying fan-out efficiencies of the different orders of the hologram, misalignment sensitivity of the free-space links, low power (1 mW) of the individual VCSELs, and noise.
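
    The 2^7 - 1 test pattern is the standard PRBS7 sequence, produced by a 7-stage linear-feedback shift register with polynomial x^7 + x^6 + 1. A minimal generator:

```python
def prbs7(seed=0x7F):
    """One full period (2**7 - 1 = 127 bits) of the PRBS7 test sequence,
    from the standard polynomial x^7 + x^6 + 1 (feedback = stage 7 XOR stage 6)."""
    state, bits = seed, []
    for _ in range(127):
        bits.append(state & 1)
        new = ((state >> 6) ^ (state >> 5)) & 1   # XOR of stages 7 and 6
        state = ((state << 1) | new) & 0x7F
    assert state == seed   # maximal-length: the register cycles with period 127
    return bits

seq = prbs7()
print(len(seq), "bits per period,", sum(seq), "ones")   # 127 bits, 64 ones
```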

  5. Large-scale Electrophysiology: Acquisition, Compression, Encryption, and Storage of Big Data

    PubMed Central

    Brinkmann, Benjamin H.; Bower, Mark R.; Stengel, Keith A.; Worrell, Gregory A.; Stead, Matt

    2009-01-01

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18-bits of A/D resolution are capable of resolving extracellular voltages spanning single neuron action potentials, high frequency oscillations, and high amplitude ultraslow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information. PMID:19427545

  6. Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.

    PubMed

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2011-12-01

    The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
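
    A sketch of the score fusion idea with made-up codes: fractional Hamming distance over mutually non-fragile bits, plus a coincidence measure of the fragile-bit maps standing in for the paper's fragile bit distance (the exact FBD definition and the 50/50 fusion weights below are ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def hamming_distance(code1, code2, usable):
    """Fractional Hamming distance over bits that are non-fragile in both codes."""
    return np.count_nonzero((code1 ^ code2) & usable) / np.count_nonzero(usable)

def fragile_bit_distance(frag1, frag2):
    """Stand-in coincidence measure of two fragile-bit patterns: the fraction
    of fragile locations that do NOT coincide (0 = identical patterns)."""
    both = np.count_nonzero(frag1 & frag2)
    either = np.count_nonzero(frag1 | frag2)
    return 1.0 - both / either if either else 0.0

# Hypothetical 2048-bit iris codes with fragile-bit maps
a = rng.integers(0, 2, 2048, dtype=np.uint8)
b = rng.integers(0, 2, 2048, dtype=np.uint8)
frag_a = rng.random(2048) < 0.2
frag_b = rng.random(2048) < 0.2

hd = hamming_distance(a, b, ~frag_a & ~frag_b)
fbd = fragile_bit_distance(frag_a, frag_b)
score = 0.5 * hd + 0.5 * fbd      # illustrative weighted score fusion
print(f"HD={hd:.3f}  FBD={fbd:.3f}  fused={score:.3f}")
```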

  7. Clearing out a maze: A model of chemotactic motion in porous media

    NASA Astrophysics Data System (ADS)

    Schilling, Tanja; Voigtmann, Thomas

    2017-12-01

    We study the anomalous dynamics of a biased "hungry" (or "greedy") random walk on a percolating cluster. The model mimics chemotaxis in a porous medium: In close resemblance to the 1980s arcade game PAC-MAN®, the hungry random walker consumes food, which is initially distributed in the maze, and biases its movement towards food-filled sites. We observe that the mean-squared displacement of the process follows a power law with an exponent that is different from previously known exponents describing passive or active microswimmer dynamics. The change in dynamics is well described by a dynamical exponent that depends continuously on the propensity to move towards food. It results in slower differential growth when compared to the unbiased random walk.
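
    A toy version of such a walker (ours: it walks on a full 2D lattice rather than a percolating cluster, and a single run is noisy, but it exhibits the "hungry" bias):

```python
import random

random.seed(42)
p_food, bias = 0.6, 0.9          # food density; propensity to move toward food
site = {(0, 0): False}           # lazily decided food map on an unbounded lattice

def has_food(s):
    if s not in site:
        site[s] = random.random() < p_food
    return site[s]

pos = (0, 0)
for t in range(1, 10_001):
    x, y = pos
    nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    fed = [n for n in nbrs if has_food(n)]
    if fed and random.random() < bias:
        pos = random.choice(fed)   # hungry step: prefer a food-filled neighbor
    else:
        pos = random.choice(nbrs)  # ordinary random-walk step
    site[pos] = False              # the walker consumes the food where it lands
    if t in (100, 1_000, 10_000):
        print(f"t={t:>6}  squared displacement = {pos[0]**2 + pos[1]**2}")
```

    Averaging the squared displacement over many runs and fitting its growth against t would recover the kind of bias-dependent dynamical exponent the paper reports.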

  8. Different hunting strategies select for different weights in red deer.

    PubMed

    Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; San Miguel, Alfonso

    2005-09-22

    Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data.

  9. Depth from Edge and Intensity Based Stereo.

    DTIC Science & Technology

    1982-09-01

    a Mars Viking vehicle, and a random dotted coffee jar. Assessment of the algorithm is a bit difficult: it uses a fairly simple control structure with...correspondences. This use of an evaluation function estimator allowed the introduction of the extensive pruning of a branch and bound algorithm. Even with it...Figure 3-6). This is the edge reversal constraint, and was integral to the pruning . As it happens, this same constraint is the key to the use of the

  10. Information Encoding on a Pseudo Random Noise Radar Waveform

    DTIC Science & Technology

    2013-03-01

    [Extraction residue from the report's list of figures and figure axes; the recoverable items are: a quadrature mirror filter bank (QMFB) tree diagram; a QMFB layer-3 contour plot for a 7-bit Barker code binary phase shift keying test signal; a block diagram of the FFT accumulation method (FAM) time-smoothing estimate of the spectral correlation; and Figure 2.2, "Effectiveness of correlation for SNR = -10 dB," showing correlator output for a WGN pulse in an AWGN channel.]

  11. Effects of social approval bias on self-reported fruit and vegetable consumption: a randomized controlled trial.

    PubMed

    Miller, Tracy M; Abdel-Maksoud, Madiha F; Crane, Lori A; Marcus, Al C; Byers, Tim E

    2008-06-27

    Self-reports of dietary intake in the context of nutrition intervention research can be biased by the tendency of respondents to answer in a manner consistent with expected norms (social approval bias). The objective of this study was to assess the potential influence of social approval bias on self-reports of fruit and vegetable intake obtained using both food frequency questionnaire (FFQ) and 24-hour recall methods. A randomized blinded trial compared reported fruit and vegetable intake among subjects exposed to a potentially biasing prompt to that from control subjects. Subjects included 163 women residing in Colorado between 35 and 65 years of age who were randomly selected and recruited by telephone to complete what they were told would be a future telephone survey about health. Half of the subjects, selected at random, then received a letter prior to the interview describing this as a study of fruit and vegetable intake. The letter included a brief statement of the benefits of fruits and vegetables, a 5-A-Day sticker, and a 5-a-Day refrigerator magnet. The remainder received the same letter, but describing the study purpose only as a more general nutrition survey, with neither the fruit and vegetable message nor the 5-A-Day materials. Subjects were then interviewed on the telephone within 10 days following the letters using an eight-item FFQ and a limited 24-hour recall to estimate fruit and vegetable intake. All interviewers were blinded to the treatment condition. By the FFQ method, subjects who viewed the potentially biasing prompts reported consuming more fruits and vegetables than did control subjects (5.2 vs. 3.7 servings per day, p < 0.001). By the 24-hour recall method, 61% of the intervention group but only 32% of the control group reported eating fruits and vegetables on 3 or more occasions the prior day (p = 0.002). These associations were independent of age, race/ethnicity, education level, self-perceived health status, and time since last medical check-up. Self-reports of fruit and vegetable intake using either a food frequency questionnaire or a limited 24-hour recall are both susceptible to substantial social approval bias. Valid assessments of intervention effects in nutritional intervention trials may require objective measures of dietary change.

  12. Modeling a space-based quantum link that includes an adaptive optics system

    NASA Astrophysics Data System (ADS)

    Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.

    2017-10-01

    Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses are comprised of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings which can then be converted to keys for encryption systems. When these keys are incorporated along with symmetric encryption techniques such as a one-time pad, then this method of key generation and encryption is resistant to future advances in quantum computing which will significantly degrade the effectiveness of current asymmetric key sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments, and finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground and the effects of turbulence on the transmitted optical pulse is described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the completed low-earth orbit to ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system as well as the future work necessary to show this impact is described.
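
    Once both parties hold the same shared random bit string, the one-time pad itself is just a bitwise XOR. A minimal sketch (here secrets.token_bytes stands in for QKD-derived key material):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Bitwise XOR of equal-length byte strings."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # stand-in for QKD-generated shared bits

ciphertext = xor(message, key)      # encrypt: XOR with the one-time pad
recovered  = xor(ciphertext, key)   # decrypt: XOR with the same key restores it
assert recovered == message
print(ciphertext.hex())
```

    The security rests on the key being truly random, as long as the message, and never reused, which is why a high-rate source of fresh shared random bits matters.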

  13. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with from 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10^-6 for throughputs from 1/15 up to 4 bits/s/Hz.
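
    To make the "parallel concatenation plus random interleaver" structure concrete, here is a textbook rate-1/3 turbo encoder built from the classic 4-state (1, 5/7) recursive systematic constituent code; this illustrates the encoder structure only, not the specific constituent codes designed in the article:

```python
import random

def rsc_parity(bits):
    """Parity stream of a 4-state recursive systematic convolutional code with
    feedback polynomial 1+D+D^2 (octal 7) and feedforward 1+D^2 (octal 5)."""
    s1 = s2 = 0                  # s1 = a_{t-1}, s2 = a_{t-2}
    out = []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback: a_t = u_t + a_{t-1} + a_{t-2}
        out.append(a ^ s2)       # parity:   p_t = a_t + a_{t-2}
        s1, s2 = a, s1
    return out

def turbo_encode(bits, interleaver):
    """Rate-1/3 parallel concatenation: systematic bits plus two parity streams,
    the second computed on a randomly interleaved copy of the input."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in interleaver])
    return [b for triple in zip(bits, p1, p2) for b in triple]

random.seed(0)
K = 16
msg = [random.randint(0, 1) for _ in range(K)]
interleaver = random.sample(range(K), K)        # the random interleaver
codeword = turbo_encode(msg, interleaver)
print(len(codeword) / K, "coded bits per information bit")   # 3.0 -> rate 1/3
```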

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strait, R.S.; Pearson, P.K.; Sengupta, S.K.

    A password system comprises a set of codewords spaced apart from one another by a Hamming distance (HD) that exceeds twice the variability that can be projected for a series of biometric measurements for a particular individual and that is less than the HD that can be encountered between two individuals. To enroll an individual, a biometric measurement is taken and exclusive-ORed with a random codeword to produce a reference value. To verify the individual later, a biometric measurement is taken and exclusive-ORed with the reference value to reproduce the original random codeword or its approximation. If the reproduced value is not a codeword, the nearest codeword to it is found, and the bits that were corrected to produce the codeword are also toggled in the biometric measurement taken and the codeword generated during enrollment. The correction scheme can be implemented by any conventional error correction code such as a Reed-Muller code R(m,n). In the implementation using a hand geometry device, an R(2,5) code has been used in this invention. The codeword and biometric measurement can then be used to see if the individual is an authorized user. Conventional Diffie-Hellman public key encryption schemes and hashing procedures can then be used to secure the communications lines carrying the biometric information and to secure the database of authorized users.
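
    The enroll/verify XOR round trip is easy to sketch. A toy version, with a bitwise repetition code standing in for the patent's Reed-Muller R(2,5) code, and made-up bit vectors:

```python
import secrets

def encode(msg_bits, rep=5):
    """Toy stand-in for the Reed-Muller code: repeat each bit `rep` times,
    which corrects up to (rep-1)//2 errors per bit via majority vote."""
    return [b for b in msg_bits for _ in range(rep)]

def decode(bits, rep=5):
    """Majority-vote decoding: snap each block to its nearest codeword bit."""
    return [int(sum(bits[i:i + rep]) > rep // 2) for i in range(0, len(bits), rep)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# --- enrollment ---
biometric = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0]
codeword = encode([secrets.randbelow(2) for _ in range(4)])   # random codeword
reference = xor(biometric, codeword)   # stored; reveals neither input alone

# --- verification with a noisy re-measurement (two flipped bits) ---
noisy = biometric[:]
noisy[3] ^= 1
noisy[12] ^= 1
candidate = xor(noisy, reference)          # = codeword XOR measurement noise
recovered = encode(decode(candidate))      # error-correct to the nearest codeword
print("accepted:", recovered == codeword)  # True: noise within the code's budget
```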

  15. Behavioral Intervention Materials Compendium. OPRE Report 2018-08

    ERIC Educational Resources Information Center

    Anzelone, Caitlin, Ed.; Dechausay, Nadine, Ed.; Alemany, Xavier, Ed.

    2018-01-01

    The Behavioral Interventions to Advance Self-Sufficiency (BIAS) project conducted 15 randomized controlled trials of behavioral interventions across eight states, in the domains of work support, child support, and child care. BIAS used a systematic approach called "behavioral diagnosis and design" to develop the interventions and their…

  16. BIT BY BIT: A Game Simulating Natural Language Processing in Computers

    ERIC Educational Resources Information Center

    Kato, Taichi; Arakawa, Chuichi

    2008-01-01

    BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…

  17. A focus group study to understand biases and confounders in a cluster randomized controlled trial on low back pain in primary care in Norway.

    PubMed

    Werner, Erik L; Løchting, Ida; Storheim, Kjersti; Grotle, Margreth

    2018-05-22

    Cluster randomized controlled trials are often used in research in primary care but create challenges regarding biases and confounders. We recently presented a study on low back pain from primary care in Norway with equal effects in the intervention and the control group. In order to understand the specific mechanisms that may produce biases in a cluster randomized trial, we conducted a focus group study among the participating health care providers. The aim of this study was to understand how the participating providers themselves influenced the study and thereby possibly the results of the cluster randomized controlled trial. The providers were invited to share their experiences from their participation in the COPE study, from recruitment of patients to accomplishment of either the intervention or control consultations. Six clinicians from the intervention group and four from the control group took part in the focus group interviews. The group discussions focused on feasibility of the study in primary care and particularly on identifying potential biases and confounders in the study. The audio-recorded interviews were transcribed verbatim and analyzed according to a systematic text condensation. The themes for the analysis emerged from the group discussions. A personal interest in back pain, logistic factors at the clinics, and an assessment of the patients' capacity to accomplish the study prior to their recruitment were reported. The providers were allowed to give additional therapy alongside the intervention, and some of this therapy turned out to run counter to the messages of the intervention. The providers seemed to select different items from the educational package according to personal beliefs and their perception of the patients' acceptance. The study disclosed several potential biases in the COPE study which may have impacted the study results. Awareness of these is highly important when planning and conducting a cluster randomized controlled trial. Procedures in the recruitment of both providers and patients seem to be key factors, and the providers should be aware of their role in a scientific study in order to standardize the provision of the intervention.

  18. Extending Landauer's bound from bit erasure to arbitrary computation

    NASA Astrophysics Data System (ADS)

    Wolpert, David

    The minimal thermodynamic work required to erase a bit, known as Landauer's bound, has been extensively investigated both theoretically and experimentally. However, when viewed as a computation that maps inputs to outputs, bit erasure has a very special property: the output does not depend on the input. Existing analyses of the thermodynamics of bit erasure implicitly exploit this property, and thus cannot be directly extended to analyze the computation of arbitrary input-output maps. Here we show how to extend these earlier analyses of bit erasure to analyze the thermodynamics of arbitrary computations. Doing this establishes a formal connection between the thermodynamics of computers and much of theoretical computer science. We use this extension to analyze the thermodynamics of the canonical "general purpose computer" considered in computer science theory: a universal Turing machine (UTM). We consider a UTM which maps input programs to output strings, where inputs are drawn from an ensemble of random binary sequences, and prove: i) The minimal work needed by a UTM to run some particular input program X and produce output Y is the Kolmogorov complexity of Y minus the log of the "algorithmic probability" of Y. This minimal amount of thermodynamic work has a finite upper bound, which is independent of the output Y, depending only on the details of the UTM. ii) The expected work needed by a UTM to compute some given output Y is infinite. As a corollary, the overall expected work to run a UTM is infinite. iii) The expected work needed by an arbitrary Turing machine T (not necessarily universal) to compute some given output Y can either be infinite or finite, depending on Y and the details of T. To derive these results we must combine ideas from nonequilibrium statistical physics with fundamental results from computer science, such as Levin's coding theorem and other theorems about universal computation. I would like to acknowledge the Santa Fe Institute, Grant No. TWCF0079/AB47 from the Templeton World Charity Foundation, Grant No. FQXi-RHl3-1349 from the FQXi foundation, and Grant No. CHE-1648973 from the U.S. National Science Foundation.
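
    One consistent transcription of result (i) in symbols (sign and unit conventions are ours; we read "the log of the algorithmic probability" as the code length -log2 m_U(Y), which is what makes the stated Y-independent upper bound follow from Levin's coding theorem):

```latex
% Result (i), transcribed in our notation, in units of k_B T \ln 2
\[
W_{\min}(X \to Y) \;=\; k_B T \ln 2 \,\Bigl[\, K_U(Y) \;-\; \bigl(-\log_2 m_U(Y)\bigr) \Bigr],
\qquad
m_U(Y) \;=\; \sum_{x \,:\, U(x) = Y} 2^{-\ell(x)},
\]
% Levin's coding theorem gives -log_2 m_U(Y) = K_U(Y) + O(1), so the bracketed
% quantity is bounded above by a constant depending only on the UTM U, not on Y.
```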

  19. Perceptibility curve test for digital radiographs before and after correction for attenuation and correction for attenuation and visual response.

    PubMed

    Li, G; Welander, U; Yoshiura, K; Shi, X-Q; McDavid, W D

    2003-11-01

    Two digital image processing methods, correction for X-ray attenuation and correction for attenuation and visual response, have been developed. The aim of the present study was to compare digital radiographs before and after correction for attenuation and correction for attenuation and visual response by means of a perceptibility curve test. Radiographs were exposed of an aluminium test object containing holes ranging from 0.03 mm to 0.30 mm with increments of 0.03 mm. Fourteen radiographs were exposed with the Dixi system (Planmeca Oy, Helsinki, Finland) and twelve radiographs were exposed with the F1 iOX system (Fimet Oy, Monninkylä, Finland) from low to high exposures covering the full exposure ranges of the systems. Radiographs obtained from the Dixi and F1 iOX systems were 12 bit and 8 bit images, respectively. Original radiographs were then processed for correction for attenuation and correction for attenuation and visual response. Thus, two series of radiographs were created. Ten viewers evaluated all the radiographs in the same random order under the same viewing conditions. The object detail having the lowest perceptible contrast was recorded for each observer. Perceptibility curves were plotted according to the mean of observer data. The perceptibility curves for processed radiographs obtained with the F1 iOX system are higher than those for originals in the exposure range up to the peak, where the curves are basically the same. For radiographs exposed with the Dixi system, perceptibility curves for processed radiographs are higher than those for originals for all exposures. Perceptibility curves show that for 8 bit radiographs obtained from the F1 iOX system, the contrast threshold was increased in processed radiographs up to the peak, while for 12 bit radiographs obtained with the Dixi system, the contrast threshold was increased in processed radiographs for all exposures. When comparisons were made between radiographs corrected for attenuation and corrected for attenuation and visual response, basically no differences were found. Radiographs processed for correction for attenuation and correction for attenuation and visual response may improve perception, especially for 12 bit originals.

  20. A comparison of the test-negative and the traditional case-control study designs for estimation of influenza vaccine effectiveness under nonrandom vaccination.

    PubMed

    Shi, Meng; An, Qian; Ainslie, Kylie E C; Haber, Michael; Orenstein, Walter A

    2017-12-08

    As annual influenza vaccination is recommended for all U.S. persons aged 6 months or older, it is unethical to conduct randomized clinical trials to estimate influenza vaccine effectiveness (VE). Observational studies are being increasingly used to estimate VE. We developed a probability model for comparing the bias and the precision of VE estimates from two case-control designs: the traditional case-control (TCC) design and the test-negative (TN) design. In both study designs, acute respiratory illness (ARI) patients seeking medical care who test positive for influenza infection are considered cases. In the TN design, ARI patients seeking medical care who test negative serve as controls, while in the TCC design, controls are randomly selected individuals from the community who did not contract an ARI. Our model assigns each study participant a covariate corresponding to the person's health status. The probabilities of vaccination and of contracting influenza and non-influenza ARI depend on health status. Hence, our model allows non-random vaccination and confounding. In addition, the probability of seeking care for ARI may depend on vaccination and health status. We consider two outcomes of interest: symptomatic influenza (SI) and medically-attended influenza (MAI). If vaccination does not affect the probability of non-influenza ARI, then VE estimates from TN studies usually have smaller bias than estimates from TCC studies. We also found that if vaccinated influenza ARI patients are less likely to seek medical care than unvaccinated patients because the vaccine reduces symptom severity, then estimates of VE from both types of studies may be severely biased when the outcome of interest is SI. The bias is not present when the outcome of interest is MAI. The TN design produces valid estimates of VE if (a) vaccination does not affect the probabilities of non-influenza ARI and of seeking care for influenza ARI, and (b) the confounding effects resulting from non-random vaccination are similar for influenza and non-influenza ARI. Since the bias of VE estimates depends on the outcome against which the vaccine is supposed to protect, it is important to specify the outcome of interest when evaluating the bias.
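
    In both designs the usual estimator is VE = 1 − OR, where OR is the odds ratio of vaccination among cases versus controls. A minimal sketch with invented counts (all numbers below are hypothetical, chosen only to show the mechanics):

    ```python
    # Sketch: VE = 1 - OR from 2x2 vaccination-by-case-status counts in the
    # two case-control designs compared above. All counts are hypothetical.

    def vaccine_effectiveness(case_vacc, case_unvacc, ctrl_vacc, ctrl_unvacc):
        odds_ratio = (case_vacc / case_unvacc) / (ctrl_vacc / ctrl_unvacc)
        return 1.0 - odds_ratio

    # Test-negative (TN) design: controls are influenza-negative ARI patients.
    ve_tn = vaccine_effectiveness(40, 160, 300, 500)
    # Traditional case-control (TCC): controls are community members without ARI.
    ve_tcc = vaccine_effectiveness(40, 160, 450, 550)
    print(f"VE (TN):  {ve_tn:.1%}")
    print(f"VE (TCC): {ve_tcc:.1%}")
    ```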

  1. Towards Terabit Memories

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Memories have been the major yardstick for the continuing validity of Moore's law. In single-transistor-per-bit dynamic random-access memories (DRAM), the number of bits per chip pretty much gives us the number of transistors. For decades, DRAMs have offered the largest storage capacity per chip. However, DRAM does not scale any longer, in either density or voltage, severely limiting its power efficiency to 10 fJ/b. A differential DRAM would gain four times in density and eight times in energy. Static CMOS RAM (SRAM), with its six transistors per cell, is gaining in reputation because it scales well in cell size and operating voltage, so that its fundamental advantages of speed, non-destructive read-out and low-power standby could lead to just 2.5 electrons/bit in standby and to a dynamic power efficiency of 2 aJ/b. With a projected 2020 density of 16 Gb/cm², SRAM would be as dense as normal DRAM and vastly better in power efficiency, which would mean a major change in the architecture and market scenario for DRAM versus SRAM. Non-volatile Flash memories have seen two quantum jumps in density well beyond the roadmap: multi-bit storage per transistor and high-density TSV (through-silicon via) technology. The number of electrons required per bit on the storage gate has been reduced since the first realization in 1996 by more than an order of magnitude, to 400 electrons/bit in 2010 for a complexity of 32 Gbit per chip at the 32 nm node. Chip stacking of eight chips with TSV has produced a 32 GByte solid-state drive (SSD). A stack of 32 chips with 2 b/cell at the 16 nm node will reach a density of 2.5 Terabit/cm². Non-volatile memory with a density of 10 × 10 nm²/bit is the target for widespread development. Phase-change memory (PCM) and resistive memory (RRAM) lead in cell density, and they will reach 20 Gb/cm² in 2D and higher with 3D chip stacking. This is still almost an order of magnitude less than Flash. However, their read-out speed is ~10 times faster, with as yet little data on their energy/b. As a read-out memory with unparalleled retention and lifetime, the ROM with electron-beam direct-write lithography (Chap. 8) should be considered for its projected 2D density of 250 Gb/cm² and a very small read energy of 0.1 μW/Gb/s. The lithography write speed of 10 ms/Terabit makes this ROM a serious contender for the optimum in non-volatile, tamper-proof storage.

  2. Bit selection using field drilling data and mathematical investigation

    NASA Astrophysics Data System (ADS)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    A drilling process cannot be completed without a drill bit, so bit selection is an important task in the drilling optimization process. Selecting a bit is also an important issue in planning and designing a well, simply because the drill bit accounts for a considerable share of total drilling cost. To perform this task, a back-propagation ANN model is developed, trained on data from several wells using drilling bit records from offset wells. In this project, two ANN models are developed: one to predict the IADC bit code and one to predict the ROP. Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. The result is two models that give the predicted ROP values and the predicted IADC bit code values.
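
    The staged pipeline can be sketched with off-the-shelf networks. The following uses scikit-learn in place of the authors' unspecified back-propagation model; the synthetic features, code values, and hyperparameters are all hypothetical:

    ```python
    # Hypothetical sketch of the three-stage scheme described above.
    import numpy as np
    from sklearn.neural_network import MLPClassifier, MLPRegressor

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 4))              # stand-in field data (4 logs)
    iadc = rng.integers(0, 5, size=n)        # stand-in IADC bit codes
    rop = X @ np.array([1.0, 0.5, -0.3, 0.2]) + iadc  # stand-in ROP values

    # Stage 1: predict the IADC bit code from the field data.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    iadc_pred = clf.fit(X, iadc).predict(X)

    # Stage 2: predict ROP using the Stage-1 code as an extra feature.
    X2 = np.column_stack([X, iadc_pred])
    reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    rop_pred = reg.fit(X2, rop).predict(X2)

    # Stage 3: feed predicted ROP back to re-predict the IADC code.
    X3 = np.column_stack([X, rop_pred])
    clf2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf2.fit(X3, iadc)
    print("stage-3 training accuracy:", clf2.score(X3, iadc))
    ```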

  3. Investigation of PDC bit failure based on stick-slip vibration analysis of drilling string system plus drill bit

    NASA Astrophysics Data System (ADS)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. The study of PDC bit failure based on stick-slip vibration analysis is therefore crucial to prolonging the service life of PDC bits and improving ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drilling string system plus the PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and the cutting behavior of the PDC bit are innovatively introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools because of the more severe stick-slip vibration. Moreover, reducing WOB (weight on bit) and increasing driving torque can effectively mitigate the stick-slip vibration of the PDC bit, so PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed using an engineering example. It can be concluded that torsional impact mitigates stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.

  4. Surprisingly rational: probability theory plus noise explains biases in judgment.

    PubMed

    Costello, Fintan; Watts, Paul

    2014-07-01

    The systematic biases seen in people's probability judgments are typically taken as evidence that people do not use the rules of probability theory when reasoning about probability but instead use heuristics, which sometimes yield reasonable judgments and sometimes yield systematic biases. This view has had a major impact in economics, law, medicine, and other fields; indeed, the idea that people cannot reason with probabilities has become a truism. We present a simple alternative to this view, where people reason about probability according to probability theory but are subject to random variation or noise in the reasoning process. In this account the effect of noise is canceled for some probabilistic expressions. Analyzing data from 2 experiments, we find that, for these expressions, people's probability judgments are strikingly close to those required by probability theory. For other expressions, this account produces systematic deviations in probability estimates. These deviations explain 4 reliable biases in human probabilistic reasoning (conservatism, subadditivity, conjunction, and disjunction fallacies). These results suggest that people's probability judgments embody the rules of probability theory and that biases in those judgments are due to the effects of random noise. (c) 2014 APA, all rights reserved.
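
    The core mechanism, as we read it, can be simulated in a few lines: probability judgments built from samples that are each misread with small probability d have expectation (1 − 2d)p + d, which compresses estimates toward 0.5 (conservatism). The parameters below are ours:

    ```python
    # Toy simulation of the probability-theory-plus-noise account.
    import numpy as np

    rng = np.random.default_rng(42)
    d, n_samples, n_trials = 0.1, 100, 10_000   # flip rate, samples, repeats

    for p_true in (0.1, 0.5, 0.9):
        events = rng.random((n_trials, n_samples)) < p_true
        flips = rng.random((n_trials, n_samples)) < d      # random read errors
        judged = np.mean(events ^ flips, axis=1)           # noisy frequency estimate
        print(f"p = {p_true:.1f}: mean judgment = {judged.mean():.3f} "
              f"(theory: {(1 - 2 * d) * p_true + d:.3f})")
    ```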

  5. Growing cell-phone population and noncoverage bias in traditional random digit dial telephone health surveys.

    PubMed

    Lee, Sunghee; Brick, J Michael; Brown, E Richard; Grant, David

    2010-08-01

    Examine the effect of including cell-phone numbers in a traditional landline random digit dial (RDD) telephone survey. The 2007 California Health Interview Survey (CHIS). CHIS 2007 is an RDD telephone survey supplementing a landline sample in California with a sample of cell-only (CO) adults. We examined the degree of bias due to exclusion of CO populations and compared a series of demographic and health-related characteristics by telephone usage. When adjusted for noncoverage in the landline sample through weighting, the potential noncoverage bias due to excluding CO adults in landline telephone surveys is diminished. Both CO adults and adults who have both landline and cell phones but mostly use cell phones appear different from other telephone usage groups. Controlling for demographic differences did not attenuate the significant distinctiveness of cell-mostly adults. While careful weighting can mitigate noncoverage bias in landline telephone surveys, the rapid growth of cell-phone population and their distinctive characteristics suggest it is important to include a cell-phone sample. Moreover, the threat of noncoverage bias in telephone health survey estimates could mislead policy makers with possibly serious consequences for their ability to address important health policy issues.

  6. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel characterized by its bit-error rate is analyzed. In this scheme, the inner code C₁ is an (n₁, m₁l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C₂ is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m₁. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10⁻¹ to 10⁻².
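
    To make the structure concrete, here is an errors-only sketch of the outer-code success probability: if the inner decoder leaves a per-symbol error probability p_s and the outer code corrects up to t symbol errors out of n₂, correct decoding is at least as likely as seeing at most t symbol errors. The paper's analysis also tracks erasures and the interleaving; this simplification and the example parameters are ours:

    ```python
    # Errors-only lower bound on correct decoding of the outer code.
    from math import comb

    def p_correct(n2: int, t: int, p_s: float) -> float:
        """P(at most t of the n2 outer symbols are in error)."""
        return sum(comb(n2, i) * p_s**i * (1 - p_s)**(n2 - i)
                   for i in range(t + 1))

    # Hypothetical outer code of length 255 correcting 16 symbol errors,
    # with a 1% residual symbol error rate after inner decoding.
    print(p_correct(255, 16, p_s=0.01))
    ```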

  7. Simultaneous classical communication and quantum key distribution using continuous variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Bing

    Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10⁻⁹ and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  8. Uncertain decision tree inductive inference

    NASA Astrophysics Data System (ADS)

    Zarban, L.; Jafari, S.; Fakhrahmad, S. M.

    2011-10-01

    Induction is the process of reasoning in which general rules are formulated based on limited observations of recurring phenomenal patterns. Decision tree learning is one of the most widely used and practical inductive methods, which represents the results in a tree scheme. Various decision tree algorithms have already been proposed such as CLS, ID3, Assistant C4.5, REPTree and Random Tree. These algorithms suffer from some major shortcomings. In this article, after discussing the main limitations of the existing methods, we introduce a new decision tree induction algorithm, which overcomes all the problems existing in its counterparts. The new method uses bit strings and maintains important information on them. This use of bit strings and logical operation on them causes high speed during the induction process. Therefore, it has several important features: it deals with inconsistencies in data, avoids overfitting and handles uncertainty. We also illustrate more advantages and the new features of the proposed method. The experimental results show the effectiveness of the method in comparison with other methods existing in the literature.

  9. Multiparty Quantum Direct Secret Sharing of Classical Information with Bell States and Bell Measurements

    NASA Astrophysics Data System (ADS)

    Song, Yun; Li, Yongming; Wang, Wenhua

    2018-02-01

    This paper proposed a new and efficient multiparty quantum direct secret sharing (QDSS) by using swapping quantum entanglement of Bell states. In the proposed scheme, the quantum correlation between the possible measurement results of the members (except dealer) and the original local unitary operation encoded by the dealer was presented. All agents only need to perform Bell measurements to share dealer's secret by recovering dealer's operation without performing any unitary operation. Our scheme has several advantages. The dealer is not required to retain any photons, and can further share a predetermined key instead of a random key to the agents. It has high capacity as two bits of secret messages can be transmitted by an EPR pair and the intrinsic efficiency approaches 100%, because no classical bit needs to be transmitted except those for detection. Without inserting any checking sets for detecting the eavesdropping, the scheme can resist not only the existing attacks, but also the cheating attack from the dishonest agent.

  10. Simultaneous classical communication and quantum key distribution using continuous variables

    DOE PAGES

    Qi, Bing

    2016-10-26

    Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10⁻⁹ and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  11. Steganalysis based on JPEG compatibility

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression under a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend against using images that were originally stored in the JPEG format as cover images for spatial-domain steganography.
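
    The compatibility test can be sketched as follows: an 8×8 block that truly came out of a JPEG decompressor must be reproducible by re-quantizing its DCT with the right quantization matrix. This is a simplified version; the paper's decision rule and rounding analysis are more careful, and the tolerance here is our choice:

    ```python
    # Simplified JPEG-compatibility check for one 8x8 pixel block.
    import numpy as np
    from scipy.fft import dctn, idctn

    def jpeg_compatible(block: np.ndarray, Q: np.ndarray, tol: float = 1.0) -> bool:
        coeffs = dctn(block.astype(float) - 128.0, norm="ortho")  # level shift + DCT
        quantized = np.round(coeffs / Q)                          # JPEG quantization
        recon = idctn(quantized * Q, norm="ortho") + 128.0        # decompress again
        return bool(np.max(np.abs(np.round(recon) - block)) <= tol)

    # A block straight out of a JPEG decoder passes this test; flipping the
    # LSB of a single pixel typically makes it fail for common Q matrices.
    ```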

  12. Characterization of impulse noise and analysis of its effect upon correlation receivers

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Moore, J. D.

    1971-01-01

    A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy.

  13. Performance of an optical relay satellite using Reed-Solomon coding over a cascaded optical PPM and BPSK channel

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Naderi, F.

    1982-01-01

    The nature of the optical/microwave interface aboard the relay satellite is considered. To allow maximum system flexibility without overburdening either the optical or the RF channel, demodulating the optical signal on board the relay satellite while leaving the optical channel decoding to the ground station is examined. The occurrence of erasures in the optical channel is treated. A hard decision on an erasure (i.e., the relay selecting a symbol at random when an erasure occurs) seriously degrades the performance of the overall system. Coding the erasure occurrences at the relay and transmitting this information via an extra bit to the ground station, where it can be used by the decoder, is suggested. Many examples with varying bit/photon energy efficiency and for the noisy and noiseless optical channel are considered. It is shown that coding the erasure occurrences dramatically improves the performance of the cascaded channel relative to the case of a hard decision on the erasure by the relay.

  14. The effect of an intervention to break the gender bias habit for faculty at one institution: a cluster randomized, controlled trial.

    PubMed

    Carnes, Molly; Devine, Patricia G; Baier Manwell, Linda; Byars-Winston, Angela; Fine, Eve; Ford, Cecilia E; Forscher, Patrick; Isaac, Carol; Kaatz, Anna; Magua, Wairimu; Palta, Mari; Sheridan, Jennifer

    2015-02-01

    Despite sincere commitment to egalitarian, meritocratic principles, subtle gender bias persists, constraining women's opportunities for academic advancement. The authors implemented a pair-matched, single-blind, cluster randomized, controlled study of a gender-bias-habit-changing intervention at a large public university. Participants were faculty in 92 departments or divisions at the University of Wisconsin-Madison. Between September 2010 and March 2012, experimental departments were offered a gender-bias-habit-changing intervention as a 2.5-hour workshop. Surveys measured gender bias awareness; motivation, self-efficacy, and outcome expectations to reduce bias; and gender equity action. A timed word categorization task measured implicit gender/leadership bias. Faculty completed a work-life survey before and after all experimental departments received the intervention. Control departments were offered workshops after data were collected. Linear mixed-effects models showed significantly greater changes post intervention for faculty in experimental versus control departments on several outcome measures, including self-efficacy to engage in gender-equity-promoting behaviors (P = .013). When ≥ 25% of a department's faculty attended the workshop (26 of 46 departments), significant increases in self-reported action to promote gender equity occurred at three months (P = .007). Post intervention, faculty in experimental departments expressed greater perceptions of fit (P = .024), valuing of their research (P = .019), and comfort in raising personal and professional conflicts (P = .025). An intervention that facilitates intentional behavioral change can help faculty break the gender bias habit and change department climate in ways that should support the career advancement of women in academic medicine, science, and engineering.

  15. Secure distance ranging by electronic means

    DOEpatents

    Gritton, Dale G.

    1992-01-01

    A system for secure distance ranging between a reader 11 and a tag 12 wherein the distance between the two is determined by the time it takes to propagate a signal from the reader to the tag and for a responsive signal to return, and in which such time is random and unpredictable, except to the reader, even though the distance between the reader and tag remains the same. A random number (19) is sent from the reader and encrypted (26) by the tag into a number having 16 segments of 4 bits each (28). A first tag signal (31) is sent after such encryption. In response, a random-width start pulse (13) is generated by the reader. When received in the tag, the width of the start pulse is measured (41) in the tag and a segment of the encrypted number is selected (42) in accordance with such width. A second tag pulse is generated at a time T after the start pulse arrives at the tag, the time T being dependent on the length of a variable time delay t_v which is determined by the value of the bits in the selected segment of the encrypted number. At the reader, the total time from the beginning of the start pulse to the receipt of the second tag signal is measured (36, 37). The value of t_v (21, 22, 23, 34) is known at the reader, and the time T is subtracted (46) from the total time to find the actual propagation time t_p for signals to travel between the reader 11 and tag 12. The propagation time is then converted into distance (46).
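
    A toy simulation of the exchange may clarify the timing arithmetic. The "encryption" below is a stand-in hash, and the pulse-width coding, time unit, and distance are hypothetical; only the subtract-the-known-delay step mirrors the patent's idea:

    ```python
    # Toy model of the secure-ranging exchange described above.
    import hashlib
    import random

    C = 3e8                                 # signal speed, m/s

    def encrypt_segments(r: int) -> list:
        """Stand-in for the tag's encryption: 16 four-bit segments."""
        digest = hashlib.sha256(r.to_bytes(8, "big")).digest()
        value = int.from_bytes(digest[:8], "big")
        return [(value >> (4 * i)) & 0xF for i in range(16)]

    distance = 1234.5                       # true reader-tag distance, m
    t_p = distance / C                      # one-way propagation time

    r = random.getrandbits(64)              # random number sent to the tag
    segments = encrypt_segments(r)          # known to both reader and tag

    width = random.randrange(16)            # random-width start pulse picks a segment
    t_v = segments[width] * 1e-7            # tag's variable delay from that segment

    total = 2 * t_p + t_v                   # reader: start pulse -> second tag signal
    estimate = (total - t_v) / 2 * C        # reader knows t_v, recovers the distance
    print(f"estimated distance: {estimate:.1f} m")
    ```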

  16. Boring apparatus capable of boring straight holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, C.R.

    The invention relates to a rock boring assembly for producing a straight hole, for use in a drill string above a pilot boring bit of predetermined diameter smaller than the desired final hole size. The boring assembly comprises a small conical boring bit and a larger conical boring bit, mounted on the lower and upper ends of an elongated spacer, respectively, the major effective cutting diameter of each conical boring bit being at least 10% greater than the minor effective cutting diameter of the respective bit. The spacer has a cross-section resistant to bending and spaces the conical boring bits apart a distance at least 5 times the major cutting diameter of the small conical boring bit, thereby spacing the pivot points provided by the two conical boring bits to limit bodily angular deflection of the assembly and providing a substantial moment arm to resist lateral forces applied to the assembly by the pilot bit and drill string. The spacing between the conical bits is less than about 20 times the major cutting diameter of the lower conical boring bit, to enable the spacer to act as a bend-resistant beam and resist angular deflection of the axis of either conical boring bit relative to the other when it receives uneven lateral force due to non-uniformity of cutting conditions about the circumference of the bit. Advantageously, the boring bits are also self-advancing and feature skewed rollers. 7 claims.

  17. Toward a Better Understanding of the Relationship between Belief in the Paranormal and Statistical Bias: The Potential Role of Schizotypy

    PubMed Central

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2016-01-01

    The present paper examined relationships between schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experience; O-LIFE scale brief), belief in the paranormal (assessed via the Revised Paranormal Belief Scale; RPBS) and proneness to statistical bias (i.e., perception of randomness and susceptibility to conjunction fallacy). Participants were 254 volunteers recruited via convenience sampling. Probabilistic reasoning problems appeared framed within both standard and paranormal contexts. Analysis revealed positive correlations between the Unusual Experience (UnExp) subscale of O-LIFE and paranormal belief measures [RPBS full scale, traditional paranormal beliefs (TPB) and new age philosophy]. Performance on standard problems correlated negatively with UnExp and belief in the paranormal (particularly the TPB dimension of the RPBS). Consideration of specific problem types revealed that perception of randomness associated more strongly with belief in the paranormal than conjunction; both problem types related similarly to UnExp. Structural equation modeling specified that belief in the paranormal mediated the indirect relationship between UnExp and statistical bias. For problems presented in a paranormal context a framing effect occurred. Whilst UnExp correlated positively with conjunction proneness (controlling for perception of randomness), there was no association between UnExp and perception of randomness (controlling for conjunction). PMID:27471481

  18. Toward a Better Understanding of the Relationship between Belief in the Paranormal and Statistical Bias: The Potential Role of Schizotypy.

    PubMed

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2016-01-01

    The present paper examined relationships between schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experience; O-LIFE scale brief), belief in the paranormal (assessed via the Revised Paranormal Belief Scale; RPBS) and proneness to statistical bias (i.e., perception of randomness and susceptibility to conjunction fallacy). Participants were 254 volunteers recruited via convenience sampling. Probabilistic reasoning problems appeared framed within both standard and paranormal contexts. Analysis revealed positive correlations between the Unusual Experience (UnExp) subscale of O-LIFE and paranormal belief measures [RPBS full scale, traditional paranormal beliefs (TPB) and new age philosophy]. Performance on standard problems correlated negatively with UnExp and belief in the paranormal (particularly the TPB dimension of the RPBS). Consideration of specific problem types revealed that perception of randomness associated more strongly with belief in the paranormal than conjunction; both problem types related similarly to UnExp. Structural equation modeling specified that belief in the paranormal mediated the indirect relationship between UnExp and statistical bias. For problems presented in a paranormal context a framing effect occurred. Whilst UnExp correlated positively with conjunction proneness (controlling for perception of randomness), there was no association between UnExp and perception of randomness (controlling for conjunction).

  19. Randomized controlled trials of simulation-based interventions in Emergency Medicine: a methodological review.

    PubMed

    Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri

    2018-04-01

    The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in the design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing SBME interventions in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in 6 general and internal medicine journals and in the top 10 EM journals. The Cochrane Collaboration risk of bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated by the Medical Education Research Study Quality Instrument (MERSQI). Report selection and data extraction were done by 2 independent researchers. Of 1394 RCTs screened, 68 trials assessed an SBME intervention, representing one quarter of our sample. Cardiopulmonary resuscitation (CPR) was the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66% and 49% of trials. Blinding of participants and assessors was performed correctly in 19% and 68%. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4/18. Only 4% of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.

  20. Evolutionary Trends and the Salience Bias (with Apologies to Oil Tankers, Karl Marx, and Others).

    ERIC Educational Resources Information Center

    McShea, Daniel W.

    1994-01-01

    Examines evolutionary trends, specifically trends in size, complexity, and fitness. Notes that documentation of these trends consists of either long lists of cases, or descriptions of a small number of salient cases. Proposes the use of random samples to avoid this "saliency bias." (SR)

  1. What Constitutes Commercial Bias Compared with the Personal Opinion of Experts?

    ERIC Educational Resources Information Center

    Cornish, Jean K.; Leist, James C.

    2006-01-01

    Introduction: The presence of commercial messages in continuing medical education (CME) is an ongoing cause of concern. This study identifies actions perceived by CME participants to convey commercial bias from CME faculty. Methods: A questionnaire listing actions associated with CME activities was distributed to 230 randomly selected participants…

  2. Anti-Fat Bias by Professors Teaching Physical Education Majors

    ERIC Educational Resources Information Center

    Fontana, Fabio; Furtado, Ovande, Jr.; Mazzardo, Oldemar, Jr.; Hong, Deockki; de Campos, Wagner

    2017-01-01

    Anti-fat bias by professors in physical education departments may interfere with the training provided to pre-service teachers. The purpose of this study was to evaluate the attitudes of professors in physical education departments toward obese individuals. Professors from randomly selected institutions across all four US regions participated in…

  3. Examinations of Home Economics Textbooks for Sex Bias.

    ERIC Educational Resources Information Center

    Weis, Susan F.

    1979-01-01

    Four analyses were conducted on a sample of 100 randomly selected, secondary home economics textbooks published between 1964 and 1974. Results indicated that the contents presented sex bias in language usage, in pictures portraying male and female role environments, and in role behaviors and expectations emphasized. (Author/JH)

  4. Stoppage in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Grønborg, Therese K.; Hansen, Stefan N.; Nielsen, Svend V.; Skytthe, Axel; Parner, Erik T.

    2015-01-01

    Stoppage refers to changes in reproductive behavior following the birth of a child with a severe disease. The presence of stoppage can bias estimates of sibling recurrence risk if not properly addressed. If stoppage occurs non-randomly (differential stoppage), it is possibly an additional source of bias in sibling recurrence risk estimation. This…

  5. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    PubMed

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing mechanism, this study suggests a two-stage method to deal with both missing mechanisms simultaneously, combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the values of the dependent variable were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values by using the completed data; then a sample selection model was used to correct for NMAR in the multiple imputation, and a multivariable model of influencing factors was established. Based on 1 000 resampling runs, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with non-response bias and selection bias in the dependent variable of survey data.
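
    For the imputation stage, predictive mean matching can be sketched in a few lines. This is a single-imputation simplification of the study's multiple-imputation cycle, with our own donor count and linear predictor:

    ```python
    # Minimal predictive mean matching (PMM) sketch for MAR values in y.
    import numpy as np

    def pmm_impute(y, X, k=5, rng=None):
        """Fill NaNs in y with observed values drawn from the k donors whose
        predicted means are closest to each missing case's predicted mean."""
        rng = rng or np.random.default_rng()
        y = np.asarray(y, dtype=float)
        obs = ~np.isnan(y)
        Xd = np.column_stack([np.ones(len(y)), X])            # design with intercept
        beta, *_ = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)
        pred = Xd @ beta                                      # predicted means
        out = y.copy()
        for i in np.flatnonzero(~obs):
            donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]
            out[i] = rng.choice(y[obs][donors])               # borrow observed value
        return out

    y = np.array([1.0, 2.0, np.nan, 4.0, np.nan, 6.0])
    print(pmm_impute(y, X=np.arange(6.0), k=2, rng=np.random.default_rng(0)))
    ```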

  6. Effects of Bias Modification Training in Binge Eating Disorder.

    PubMed

    Schmitz, Florian; Svaldi, Jennifer

    2017-09-01

    Food-related attentional biases have been identified as maintaining factors in binge eating disorder (BED) as they can trigger a binge episode. Bias modification training may reduce symptoms, as it has been shown to be successful in other appetitive disorders. The aim of this study was to assess and modify food-related biases in BED. It was tested whether biases could be increased and decreased by means of a modified dot-probe paradigm, how long such bias modification persisted, and whether this affected subjective food craving. Participants were randomly assigned to a bias enhancement (attend to food stimulus) group or to a bias reduction (avoid food stimulus) group. Food-related attentional bias was found to be successfully reduced in the bias-reduction group, and effects persisted briefly. Additionally, subjective craving for food was influenced by the intervention, and possible mechanisms are discussed. Given these promising initial results, future research should investigate boundary conditions of the experimental intervention to understand how it could complement treatment of BED. Copyright © 2017. Published by Elsevier Ltd.

  7. Statistical characterization of fluctuations of a laser beam transmitted through a random air-water interface: new results from a laboratory experiment

    NASA Astrophysics Data System (ADS)

    Majumdar, Arun K.; Land, Phillip; Siegenthaler, John

    2014-10-01

    New results for characterizing laser intensity fluctuation statistics of a laser beam transmitted through a random air-water interface relevant to underwater communications are presented. A laboratory watertank experiment is described to investigate the beam wandering effects of the transmitted beam. Preliminary results from the experiment provide information about histograms of the probability density functions of intensity fluctuations for different wind speeds measured by a CMOS camera for the transmitted beam. Angular displacements of the centroids of the fluctuating laser beam generates the beam wander effects. This research develops a probabilistic model for optical propagation at the random air-water interface for a transmission case under different wind speed conditions. Preliminary results for bit-error-rate (BER) estimates as a function of fade margin for an on-off keying (OOK) optical communication through the air-water interface are presented for a communication system where a random air-water interface is a part of the communication channel.

  8. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
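
    The core of the algorithm is simple enough to sketch: keep only the mantissa bits needed for the requested number of significant digits, and alternate zeroing ("shave") and oneing ("set") the rest so the quantization errors roughly cancel in the mean. The bits-per-digit budget below is a simple estimate of, not a quote from, the exact rule used in NCO:

    ```python
    # Bit Grooming sketch for 1-D float32 data.
    import numpy as np

    def bit_groom(a: np.ndarray, nsd: int = 3) -> np.ndarray:
        keep = int(np.ceil(nsd * np.log2(10))) + 2      # mantissa bits to preserve
        drop = max(23 - keep, 0)                        # float32: 23 mantissa bits
        bits = np.ascontiguousarray(a, dtype=np.float32).view(np.uint32)
        mask = np.uint32((1 << drop) - 1)
        shaved = bits & ~mask                           # trailing bits -> 0
        setted = bits | mask                            # trailing bits -> 1
        out = np.where(np.arange(bits.size) % 2 == 0, shaved, setted)
        return out.view(np.float32)

    x = np.linspace(0.0, 1.0, 8, dtype=np.float32)
    print(bit_groom(x, nsd=3))   # groomed values then compress better losslessly
    ```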

  9. Evidence-based medicine, the research-practice gap, and biases in medical and surgical decision making in dermatology.

    PubMed

    Eaglstein, William H

    2010-10-01

    The objectives of this article are to promote a better understanding of a group of biases that influence therapeutic decision making by physicians/dermatologists and to raise the awareness that these biases contribute to a research-practice gap that has an impact on physicians and treatment solutions. The literature included a wide range of peer-reviewed articles dealing with biases in decision making, evidence-based medicine, randomized controlled clinical trials, and the research-practice gap. Bias against new therapies, bias in favor of indirect harm or omission, and bias against change when multiple new choices are offered may unconsciously affect therapeutic decision making. Although there is no comprehensive understanding or theory as to how choices are made by physicians, recognition of certain cognition patterns and their associated biases will help narrow the research-practice gap and optimize decision making regarding therapeutic choices.

  10. A Compression Algorithm for Field Programmable Gate Arrays in the Space Environment

    DTIC Science & Technology

    2011-12-01

    P = Bit₃₄ ⊕ ⋯ ⊕ Bit₁ ⊕ Bit₀. (V.3) Equation (V.3) is implemented with a string of XOR gates and Bit Basher blocks, as shown in Figure 31. As discussed in [5], the string of Bit Basher blocks is used to separate each 35-bit value into 35 one-bit values, and the string of XOR gates is used to

  11. Analog Nonvolatile Computer Memory Circuits

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd

    2007-01-01

    In nonvolatile random-access memory (RAM) circuits of a proposed type, digital data would be stored in analog form in ferroelectric field-effect transistors (FFETs). This type of memory circuit would offer advantages over prior volatile and nonvolatile types: In a conventional complementary metal oxide/semiconductor static RAM, six transistors must be used to store one bit, and storage is volatile in that data are lost when power is turned off. In a conventional dynamic RAM, three transistors must be used to store one bit, and the stored bit must be refreshed every few milliseconds. In contrast, in a RAM according to the proposal, data would be retained when power was turned off, each memory cell would contain only two FFETs, and the cell could store multiple bits (the exact number of bits depending on the specific design). Conventional flash memory circuits afford nonvolatile storage, but they operate at reading and writing times of the order of thousands of conventional computer memory reading and writing times and, hence, are suitable for use only as off-line storage devices. In addition, flash memories cease to function after limited numbers of writing cycles. The proposed memory circuits would not be subject to either of these limitations. Prior developmental nonvolatile ferroelectric memories are limited to one bit per cell, whereas, as stated above, the proposed memories would not be so limited. The design of a memory circuit according to the proposal must reflect the fact that FFET storage is only partly nonvolatile, in that the signal stored in an FFET decays gradually over time. (Retention times of some advanced FFETs exceed ten years.) Instead of storing a single bit of data as either a positively or negatively saturated state in a ferroelectric device, each memory cell according to the proposal would store two values. The two FFETs in each cell would be denoted the storage FFET and the control FFET. The storage FFET would store an analog signal value, between the positive and negative FFET saturation values. This signal value would represent a numerical value of interest corresponding to multiple bits: for example, if the memory circuit were designed to distinguish among 16 different analog values, then each cell could store 4 bits. Simultaneously with writing the signal value in the storage FFET, a negative saturation signal value would be stored in the control FFET. The decay of this control-FFET signal from the saturation value would serve as a model of the decay, for use in regenerating the numerical value of interest from its decaying analog signal value. The memory circuit would include addressing, reading, and writing circuitry that would have features in common with the corresponding parts of other memory circuits, but would also have several distinctive features. The writing circuitry would include a digital-to-analog converter (DAC); the reading circuitry would include an analog-to-digital converter (ADC). For writing a numerical value of interest in a given cell, that cell would be addressed, the saturation value would be written in the control FFET in that cell, and the non-saturation analog value representing the numerical value of interest would be generated by use of the DAC and stored in the storage FFET in that cell. 
For reading the numerical value of interest stored in a given cell, the cell would be addressed, the ADC would convert the decaying control and storage analog signal values to digital values, and an associated fast digital processing circuit would regenerate the numerical value from digital values.
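
    A toy read-path model may make the two-FFET scheme concrete. The exponential decay law, time constant, and 16-level quantization below are our assumptions; the essential point taken from the proposal is only that the control FFET's known starting value lets the common decay be divided out:

    ```python
    # Toy model of the proposed two-FFET analog memory cell.
    import numpy as np

    SAT, LEVELS = 1.0, 16                   # saturation signal; 16 levels = 4 bits

    def write(value: int):
        storage = (value + 0.5) / LEVELS * 2 - 1    # map 0..15 into (-1, 1)
        control = -SAT                              # control written at saturation
        return storage, control

    def decay(signal: float, t: float, tau: float = 3.15e8) -> float:
        return signal * np.exp(-t / tau)            # ~10-year retention scale

    def read(storage_t: float, control_t: float) -> int:
        g = control_t / -SAT                        # estimated decay factor
        regenerated = storage_t / g                 # undo the common decay
        return int(np.clip((regenerated + 1) / 2 * LEVELS, 0, LEVELS - 1))

    s0, c0 = write(11)
    t = 1e8                                          # ~3 years, in seconds
    print(read(decay(s0, t), decay(c0, t)))          # recovers 11
    ```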

  12. Drag bit construction

    DOEpatents

    Hood, M.

    1986-02-11

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face. 4 figs.

  13. Drag bit construction

    DOEpatents

    Hood, Michael

    1986-01-01

    A mounting movable with respect to an adjacent hard face has a projecting drag bit adapted to engage the hard face. The drag bit is disposed for movement relative to the mounting by encounter of the drag bit with the hard face. That relative movement regulates a valve in a water passageway, preferably extending through the drag bit, to play a stream of water in the area of contact of the drag bit and the hard face and to prevent such water play when the drag bit is out of contact with the hard face.

  14. Error vector magnitude based parameter estimation for digital filter back-propagation mitigating SOA distortions in 16-QAM.

    PubMed

    Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A

    2013-08-26

    We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) penalty achieved with this method is negligible compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types over 80 km propagation of a 16-QAM signal at 22 Gbaud.
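
    EVM itself is a one-liner, and coarse estimation can be a plain grid search over the candidate parameter. The toy phase-rotation "distortion" below is a stand-in, not the paper's DFBP equations; only the use of EVM as the figure of merit mirrors the method:

    ```python
    # EVM-guided coarse parameter search on a toy nonlinearity.
    import numpy as np

    def evm(rx: np.ndarray, ref: np.ndarray) -> float:
        return np.sqrt(np.mean(np.abs(rx - ref) ** 2) / np.mean(np.abs(ref) ** 2))

    rng = np.random.default_rng(1)
    sym = (rng.integers(0, 4, 1000) * 2 - 3) + 1j * (rng.integers(0, 4, 1000) * 2 - 3)
    alpha_true = 0.05
    rx = sym * np.exp(1j * alpha_true * np.abs(sym) ** 2)   # toy SOA-like rotation

    grid = np.linspace(0.0, 0.1, 21)                        # coarse candidates
    scores = [evm(rx * np.exp(-1j * a * np.abs(rx) ** 2), sym) for a in grid]
    print("estimated alpha:", grid[int(np.argmin(scores))]) # ~0.05
    ```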

  15. PDC-bit performance under simulated borehole conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, E.E.; Azar, J.J.

    1993-09-01

    Laboratory drilling tests were used to investigate the effects of pressure on polycrystalline-diamond-compact (PDC) drill-bit performance. Catoosa shale core samples were drilled with PDC and roller-cone bits at up to 1,750-psi confining pressure. All tests were conducted in a controlled environment with a full-scale laboratory drilling system. Test results indicate that, under similar operating conditions, increases in confining pressure reduce PDC-bit performance as much as or more than conventional-rock-bit performance. Specific energy calculations indicate that a combination of rock strength, chip hold-down, and bit balling may have reduced performance. Quantifying the degree to which pressure reduces PDC-bit performance will help researchers interpret test results and improve bit designs and will help drilling engineers run PDC bits more effectively in the field.

  16. Smart built-in test

    NASA Technical Reports Server (NTRS)

    Richards, Dale W.

    1990-01-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  17. Smart built-in test

    NASA Astrophysics Data System (ADS)

    Richards, Dale W.

    1990-03-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  18. Simplified biased random walk model for RecA-protein-mediated homology recognition offers rapid and accurate self-assembly of long linear arrays of binding sites

    NASA Astrophysics Data System (ADS)

    Kates-Harbeck, Julian; Tilloy, Antoine; Prentiss, Mara

    2013-07-01

    Inspired by RecA-protein-based homology recognition, we consider the pairing of two long linear arrays of binding sites. We propose a fully reversible, physically realizable biased random walk model for rapid and accurate self-assembly due to the spontaneous pairing of matching binding sites, where the statistics of the searched sample are included. In the model, there are two bound conformations, and the free energy for each conformation is a weakly nonlinear function of the number of contiguous matched bound sites.
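
    An illustrative Metropolis version of such a walk fits in a few lines; the free-energy form and parameters below are our own, chosen only so that extending the matched region is (weakly nonlinearly) downhill:

    ```python
    # Reversible biased random walk in the number n of matched bound sites.
    import numpy as np

    def F(n: int, a: float = 0.4, b: float = 1.5) -> float:
        return -a * n + b * np.sqrt(n)       # weakly nonlinear free energy, in kT

    rng = np.random.default_rng(7)
    n, n_max = 1, 500
    for _ in range(20_000):
        trial = n + rng.choice((-1, 1))      # extend or retract by one site
        if 1 <= trial <= n_max and rng.random() < np.exp(F(n) - F(trial)):
            n = trial                        # Metropolis acceptance
    print("matched sites after walk:", n)    # drifts toward full pairing
    ```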

  19. Optimization of the random multilayer structure to break the random-alloy limit of thermal conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Gu, Chongjie; Ruan, Xiulin, E-mail: ruan@purdue.edu

    2015-02-16

    A low lattice thermal conductivity (κ) is desired for thermoelectrics, and a highly anisotropic κ is essential for applications such as magnetic layers for heat-assisted magnetic recording, where a high cross-plane (perpendicular to layer) κ is needed to ensure fast writing while a low in-plane κ is required to avoid interaction between adjacent bits of data. In this work, we conduct molecular dynamics simulations to investigate the κ of superlattice (SL), random multilayer (RML) and alloy, and reveal that RML can have 1–2 orders of magnitude higher anisotropy in κ than SL and alloy. We systematically explore how the κ of SL, RML, and alloy changes relative to each other for different bond strength, interface roughness, atomic mass, and structure size, which provides guidance for choosing materials and structural parameters to build RMLs with optimal performance for specific applications.

  20. Carbon nanomaterials for non-volatile memories

    NASA Astrophysics Data System (ADS)

    Ahn, Ethan C.; Wong, H.-S. Philip; Pop, Eric

    2018-03-01

    Carbon can create various low-dimensional nanostructures with remarkable electronic, optical, mechanical and thermal properties. These features make carbon nanomaterials especially interesting for next-generation memory and storage devices, such as resistive random access memory, phase-change memory, spin-transfer-torque magnetic random access memory and ferroelectric random access memory. Non-volatile memories greatly benefit from the use of carbon nanomaterials in terms of bit density and energy efficiency. In this Review, we discuss sp2-hybridized carbon-based low-dimensional nanostructures, such as fullerene, carbon nanotubes and graphene, in the context of non-volatile memory devices and architectures. Applications of carbon nanomaterials as memory electrodes, interfacial engineering layers, resistive-switching media, and scalable, high-performance memory selectors are investigated. Finally, we compare the different memory technologies in terms of writing energy and time, and highlight major challenges in the manufacturing, integration and understanding of the physical mechanisms and material properties.

  1. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in the measurement of a risk factor bias the estimated association with a disease or a disease marker towards the null. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies, with emphasis on the selection of individuals for a repeated measurement, the assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is continuous. We also describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model, assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. We also supply programs for estimating the number of individuals needed in the reliability study and for choosing its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
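
    A minimal sketch of the classical slope correction under these assumptions -- duplicate measurements in the reliability study and the standard attenuation relation, observed slope = true slope x reliability ratio -- with simulated numbers that are illustrative rather than taken from the article:

        import numpy as np

        rng = np.random.default_rng(0)

        # Main study: true risk factor x, error-prone measurement w, outcome y.
        n = 2000
        x = rng.normal(0.0, 1.0, n)            # true exposure (e.g. insulin level)
        w = x + rng.normal(0.0, 0.8, n)        # single noisy measurement
        y = 0.5 * x + rng.normal(0.0, 1.0, n)  # continuous dependent variable

        # The naive slope regressed on w is attenuated toward zero.
        naive = np.polyfit(w, y, 1)[0]

        # Reliability study: duplicate measurements on a subsample identify
        # the error variance, since Var(w1 - w2) = 2 * sigma_e^2.
        m = 200
        xr = rng.normal(0.0, 1.0, m)
        w1 = xr + rng.normal(0.0, 0.8, m)
        w2 = xr + rng.normal(0.0, 0.8, m)
        err_var = np.var(w1 - w2, ddof=1) / 2.0
        lam = 1.0 - err_var / np.var(w, ddof=1)   # reliability ratio lambda

        print(f"naive slope     {naive:.3f}")
        print(f"corrected slope {naive / lam:.3f}  (true value 0.5)")

    The duplicates yield the reliability ratio, and dividing the observed slope by it undoes the dilution.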

  2. Effect of standardized training on the reliability of the Cochrane risk of bias assessment tool: a prospective study.

    PubMed

    da Costa, Bruno R; Beckett, Brooke; Diaz, Alison; Resta, Nina M; Johnston, Bradley C; Egger, Matthias; Jüni, Peter; Armijo-Olivo, Susan

    2017-03-03

    The Cochrane risk of bias tool is commonly criticized for having a low reliability. We aimed to investigate whether training of raters, with objective and standardized instructions on how to assess risk of bias, can improve the reliability of the Cochrane risk of bias tool. In this pilot study, four raters inexperienced in risk of bias assessment were randomly allocated to minimal or intensive standardized training for risk of bias assessment of randomized trials of physical therapy treatments for patients with knee osteoarthritis pain. Two raters were experienced risk of bias assessors who served as reference. The primary outcome of our study was between-group reliability, defined as the agreement of the risk of bias assessments of inexperienced raters with the reference assessments of experienced raters. Consensus-based assessments were used for this purpose. The secondary outcome was within-group reliability, defined as the agreement of assessments within pairs of inexperienced raters. We calculated the chance-corrected weighted Kappa to quantify agreement within and between groups of raters for each of the domains of the risk of bias tool. A total of 56 trials were included in our analysis. The Kappa for the agreement of inexperienced raters with reference across items of the risk of bias tool ranged from 0.10 to 0.81 for the minimal training group and from 0.41 to 0.90 for the standardized training group. The Kappa values for the agreement within pairs of inexperienced raters across the items of the risk of bias tool ranged from 0 to 0.38 for the minimal training group and from 0.93 to 1 for the standardized training group. Between-group differences in Kappa for the agreement of inexperienced raters with reference always favored the standardized training group and were most pronounced for incomplete outcome data (difference in Kappa 0.52, p < 0.001) and allocation concealment (difference in Kappa 0.30, p = 0.004). Intensive, standardized training on risk of bias assessment may significantly improve the reliability of the Cochrane risk of bias tool.
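
    The agreement statistic itself is standard; a sketch of the chance-corrected weighted Kappa with linear weights on a three-level (low/unclear/high) risk-of-bias scale -- the ratings below are hypothetical, invented for illustration:

        import numpy as np

        def weighted_kappa(r1, r2, categories):
            """Chance-corrected weighted Kappa for two raters with linear
            disagreement weights (the standard formula, not the study's
            specific software)."""
            k = len(categories)
            idx = {c: i for i, c in enumerate(categories)}
            # Observed joint rating matrix, as proportions.
            O = np.zeros((k, k))
            for a, b in zip(r1, r2):
                O[idx[a], idx[b]] += 1
            O /= O.sum()
            # Expected matrix under chance (independent marginals).
            E = np.outer(O.sum(axis=1), O.sum(axis=0))
            # Disagreement weights grow linearly with category distance.
            i, j = np.indices((k, k))
            W = np.abs(i - j) / (k - 1)
            return 1.0 - (W * O).sum() / (W * E).sum()

        rater1 = ["low", "low", "high", "unclear", "low",
                  "high", "low", "unclear", "high", "low"]
        rater2 = ["low", "unclear", "high", "unclear", "low",
                  "high", "low", "low", "high", "low"]
        print(f"weighted Kappa = {weighted_kappa(rater1, rater2, ['low', 'unclear', 'high']):.2f}")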

  3. Hey! A Flea Bit Me!

    MedlinePlus

  4. Hey! A Louse Bit Me!

    MedlinePlus

  5. Graphics-Printing Program For The HP Paintjet Printer

    NASA Technical Reports Server (NTRS)

    Atkins, Victor R.

    1993-01-01

    IMPRINT is a utility program developed to print graphics specified in raster files on the Hewlett-Packard Paintjet(TM) color printer. It reads bit-mapped images from files on a UNIX-based graphics workstation and prints three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are printed in continuous tone or, in the case of low resolution, in random gray scale. In the case of color images, IMPRINT also prints by use of a default palette of solid colors. The program is written in the C language.

  6. A special attack on the multiparty quantum secret sharing of secure direct communication using single photons

    NASA Astrophysics Data System (ADS)

    Qin, Su-Juan; Gao, Fei; Wen, Qiao-Yan; Zhu, Fu-Chen

    2008-11-01

    The security of a multiparty quantum secret sharing protocol [L.F. Han, Y.M. Liu, J. Liu, Z.J. Zhang, Opt. Commun. 281 (2008) 2690] is reexamined. It is shown that any one dishonest participant can obtain all the transmitted secret bits by a special attack, in which the controlled-(-iσ_y) gate is employed to invalidate the role of the random phase shift operation. Furthermore, a possible way to resist this attack is discussed.

  7. No Bit Left Behind: The Limits of Heap Data Compression

    DTIC Science & Technology

    2008-06-01

    Lempel-Ziv compression is non-lossy, in other words, the original data can be fully recovered by decompression. Unlike the data representations for most...of the other models, Lempel-Ziv compressed data does not permit random access, let alone in-place update. To compute this model as accurately as...of the collection, we print the size of the full stream, i.e., all live data in the heap. We then apply Lempel-Ziv compression to the stream
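
    The measurement can be illustrated with zlib, whose DEFLATE format is an LZ77 (Lempel-Ziv) derivative; the mock object stream below is an assumption standing in for a real heap snapshot:

        import zlib
        import random

        # Serialize a mock "heap stream": many small objects with repetitive
        # structure, the kind of redundancy Lempel-Ziv coding exploits.
        random.seed(0)
        objects = [("node", i, i % 7, "payload" * 3) for i in range(10000)]
        stream = repr(objects).encode()

        packed = zlib.compress(stream, 9)   # DEFLATE = LZ77 + Huffman coding
        print(f"live data  : {len(stream):>8} bytes")
        print(f"compressed : {len(packed):>8} bytes")
        print(f"ratio      : {len(stream) / len(packed):.1f}x")
        # Non-lossy: the stream is fully recovered by decompression, but the
        # compressed form allows neither random access nor in-place update.
        assert zlib.decompress(packed) == stream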

  8. Chess games: a model for RNA based computation.

    PubMed

    Cukras, A R; Faulhammer, D; Lipton, R J; Landweber, L F

    1999-10-01

    Here we develop the theory of RNA computing and a method for solving the 'knight problem' as an instance of a satisfiability (SAT) problem. Using only biological molecules and enzymes as tools, we developed an algorithm for solving the knight problem (3 x 3 chess board) using a 10-bit combinatorial pool and sequential RNase H digestions. The results of preliminary experiments presented here reveal that the protocol recovers far more correct solutions than expected at random, but the persistence of errors still presents the greatest challenge.
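
    For comparison, the same search can be brute-forced in silicon; the sketch below enumerates all 2^9 candidate boards (one bit per square, whereas the RNA pool used 10 bits) and keeps those in which no knight attacks an occupied square:

        from itertools import product

        # 3 x 3 board, squares numbered 0..8 row-major.  attacks[i] lists the
        # squares a knight on square i attacks (the centre attacks nothing).
        attacks = {0: (5, 7), 1: (6, 8), 2: (3, 7), 3: (2, 8), 4: (),
                   5: (0, 6), 6: (1, 5), 7: (0, 2), 8: (1, 3)}

        def legal(board):
            """board is a 9-bit tuple; True iff no two knights attack."""
            return all(not (board[i] and board[j])
                       for i in range(9) for j in attacks[i])

        # Exhaustive screening of the combinatorial pool -- the electronic
        # analogue of digesting every illegal RNA strand at once.
        solutions = [b for b in product((0, 1), repeat=9) if legal(b)]
        print(f"{len(solutions)} legal placements out of {2**9}")
        print("maximum knights:", max(sum(b) for b in solutions))

    On a 3 x 3 board the centre square is unreachable by knight moves, so the eight border squares form a single attack cycle; the enumeration reports 94 legal placements with at most five knights.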

  9. Structured codebook design in CELP

    NASA Technical Reports Server (NTRS)

    Leblanc, W. P.; Mahmoud, S. A.

    1990-01-01

    Code-Excited Linear Prediction (CELP) is a popular analysis-by-synthesis technique for quantizing speech at bit rates from 4 to 6 kbps. Codebook design techniques to date have been largely based either on random (often Gaussian) codebooks or on known binary or ternary codes that efficiently map the space of (assumed white) excitation codevectors. It has been shown that by introducing symmetries into the codebook, a good complexity reduction can be realized with only a marginal decrease in performance. Codebook design algorithms are considered for a wide range of structured codebooks.
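
    A minimal sketch of the analysis-by-synthesis loop at the core of CELP, assuming a random Gaussian codebook, a toy two-tap all-pole synthesis filter, and a per-codevector optimal gain (an illustration of the search, not any standardized codec):

        import numpy as np

        def synth(excitation, lpc):
            """All-pole LPC synthesis filter 1/A(z), direct-form recursion."""
            out = np.zeros(len(excitation))
            for n in range(len(excitation)):
                acc = excitation[n]
                for k, a in enumerate(lpc):
                    if n - k - 1 >= 0:
                        acc -= a * out[n - k - 1]
                out[n] = acc
            return out

        rng = np.random.default_rng(3)
        lpc = np.array([-0.9, 0.4])               # toy quantized LPC coefficients
        codebook = rng.normal(size=(128, 40))     # 128 Gaussian codevectors
        target = synth(rng.normal(size=40), lpc)  # frame of "speech" to match

        best = (np.inf, -1, 0.0)
        for i, c in enumerate(codebook):
            y = synth(c, lpc)                     # synthesize the candidate
            g = (target @ y) / (y @ y)            # optimal gain for this entry
            err = np.sum((target - g * y) ** 2)
            if err < best[0]:
                best = (err, i, g)
        print(f"codevector {best[1]}, gain {best[2]:+.2f}, error {best[0]:.3f}")

    Structured codebooks impose symmetries on the codevectors precisely so that most of the work in this inner loop can be shared between entries.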

  10. Routing, Disjoint Paths, and Classification

    DTIC Science & Technology

    2006-08-01

    For a particular balanced cut (S, S̄), we let a vector H1 … H2KN record the entire history of random unordered pairs of bits.

  11. Plated wire memory subsystem

    NASA Technical Reports Server (NTRS)

    Carpenter, K. H.

    1974-01-01

    The design, construction, and test history of a 4096-word by 18-bit random-access NDRO plated wire memory for use in conjunction with a spacecraft input/output and central processing unit is reported. A technical and functional description is given, along with diagrams illustrating the layout and system operation. Test data are presented on the procedures and results of system-level and memory-stack testing and hybrid-circuit screening. A comparison of the most significant physical and performance characteristics of the memory unit versus the specified requirements is also included.

  12. Antioxidant supplements and mortality.

    PubMed

    Bjelakovic, Goran; Nikolova, Dimitrinka; Gluud, Christian

    2014-01-01

    Oxidative damage to cells and tissues is considered involved in the aging process and in the development of chronic diseases in humans, including cancer and cardiovascular diseases, the leading causes of death in high-income countries. This has stimulated interest in the preventive potential of antioxidant supplements. Today, more than half of adults in high-income countries ingest antioxidant supplements hoping to improve their health, oppose unhealthy behaviors, and counteract the ravages of aging. Older observational studies and some randomized clinical trials with high risks of systematic errors ('bias') have suggested that antioxidant supplements may improve health and prolong life. A number of randomized clinical trials with adequate methodologies observed neutral or negative results of antioxidant supplements. Recently completed large randomized clinical trials with low risks of bias and systematic reviews of randomized clinical trials taking systematic errors ('bias') and risks of random errors ('play of chance') into account have shown that antioxidant supplements do not seem to prevent cancer, cardiovascular diseases, or death. Moreover, beta-carotene, vitamin A, and vitamin E may increase mortality. Some recent large observational studies now support these findings. According to recent dietary guidelines, there is no evidence to support the use of antioxidant supplements in the primary prevention of chronic diseases or mortality. Antioxidant supplements do not possess preventive effects and may be harmful with unwanted consequences to our health, especially in well-nourished populations. The optimal source of antioxidants seems to come from our diet, not from antioxidant supplements in pills or tablets.

  13. Different hunting strategies select for different weights in red deer

    PubMed Central

    Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; Miguel, Alfonso San

    2005-01-01

    Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data. PMID:17148205

  14. A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro

    2006-04-01

    A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in the 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64-Kbit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multi-phase pseudo-asynchronous operation with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. The chip operates at 33 MHz in an 8-LUT cascade at 122 mW. Benchmark results show that it achieves performance comparable to that of field-programmable gate arrays (FPGAs).

  15. Wideband propagation measurements at 30.3 GHz through a pecan orchard in Texas

    NASA Astrophysics Data System (ADS)

    Papazian, Peter B.; Jones, David L.; Espeland, Richard H.

    1992-09-01

    Wideband propagation measurements were made in a pecan orchard in Texas during April and August of 1990 to examine the propagation characteristics of millimeter-wave signals through vegetation. Measurements were made on tree-obstructed paths with and without leaves. The study presents narrowband attenuation data at 9.6 and 28.8 GHz as well as wideband impulse response measurements at 30.3 GHz. The wideband probe (Violette et al., 1983) provides the amplitude and delay of reflected and scattered signals as well as the bit-error rate. This is accomplished using a 500-Mbit/s pseudo-random code to BPSK-modulate a 28.8 GHz carrier. The channel impulse response is then extracted by cross-correlating the received pseudo-random sequence with a locally generated replica.
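
    The extraction step can be sketched directly, assuming a 127-chip maximal-length sequence (Fibonacci LFSR for the primitive trinomial x^7 + x + 1), a toy three-path channel, and circular cross-correlation; all parameters are illustrative rather than those of the 30.3 GHz probe:

        import numpy as np

        def lfsr_msequence(nbits=7, taps=(6, 5)):
            """Maximal-length PN sequence as +1/-1 BPSK chips."""
            state = [1] * nbits
            seq = []
            for _ in range(2**nbits - 1):
                seq.append(1 - 2 * state[-1])      # output bit -> BPSK chip
                fb = 0
                for t in taps:
                    fb ^= state[t]
                state = [fb] + state[:-1]
            return np.array(seq, dtype=float)

        pn = lfsr_msequence()                                 # 127-chip m-sequence
        channel = np.zeros(127)
        channel[[0, 9, 23]] = [1.0, 0.5, 0.25]                # three multipath taps

        # Received signal: circular convolution with the channel, plus noise;
        # correlating against the local replica recovers the impulse response,
        # because an m-sequence has an impulse-like periodic autocorrelation.
        rx = np.real(np.fft.ifft(np.fft.fft(pn) * np.fft.fft(channel)))
        rx += np.random.default_rng(7).normal(0, 0.1, rx.size)
        xcorr = np.real(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(pn)))) / pn.size
        print("estimated path delays:", np.nonzero(xcorr > 0.15)[0])   # ~ [0, 9, 23]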

  16. Soft errors in commercial off-the-shelf static random access memories

    NASA Astrophysics Data System (ADS)

    Dilillo, L.; Tsiligiannis, G.; Gupta, V.; Bosser, A.; Saigne, F.; Wrobel, F.

    2017-01-01

    This article reviews state-of-the-art techniques for evaluating the effect of radiation on static random access memory (SRAM). We detail irradiation test techniques and results from irradiation experiments with several types of particles. Two commercial SRAMs, in the 90 and 65 nm technology nodes, were considered as case studies. Besides the basic static and dynamic test modes, advanced stimuli for the irradiation tests were introduced, as well as statistical post-processing techniques allowing for a deeper analysis of the correlations between bit-flip cross-sections and the design/architectural characteristics of the memory device. Further insight is provided on the response of irradiated stacked-layer devices and on the use of characterized SRAM devices as particle detectors.
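
    The basic quantity extracted from such tests is the per-bit upset cross-section, sigma = N_flips / (fluence x N_bits); a sketch with hypothetical counts (the formula is standard, the numbers are not from the article):

        import math

        def bit_flip_cross_section(n_flips, fluence_cm2, n_bits):
            """Per-bit upset cross-section with an approximate 95% interval.

            sigma = N_flips / (fluence * N_bits)   [cm^2 per bit]
            The sqrt(N) normal approximation to the Poisson count is
            reasonable once N_flips is a few tens or more.
            """
            sigma = n_flips / (fluence_cm2 * n_bits)
            half = 1.96 * math.sqrt(n_flips) / (fluence_cm2 * n_bits)
            return sigma, half

        # Hypothetical static-mode run on a 16-Mbit SRAM under heavy ions.
        sigma, half = bit_flip_cross_section(n_flips=430, fluence_cm2=1e7,
                                             n_bits=16 * 2**20)
        print(f"sigma = {sigma:.3e} +/- {half:.1e} cm^2/bit")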

  17. Cover estimation and payload location using Markov random fields

    NASA Astrophysics Data System (ADS)

    Quach, Tu-Thach

    2014-02-01

    Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random fields to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
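
    A toy version of the pipeline, assuming a synthetic smooth cover, LSB-replacement embedding, and a neighbour-average cover estimator as a crude stand-in for the paper's Markov-random-field model; note that only embedded pixels whose LSB actually changed are detectable in principle:

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic smooth cover and LSB replacement at known payload sites.
        x, y = np.meshgrid(np.arange(64), np.arange(64))
        cover = ((x + y) // 2).astype(np.int32)
        stego = cover.copy()
        sites = rng.choice(64 * 64, size=400, replace=False)
        stego.flat[sites] = (stego.flat[sites] & ~1) | rng.integers(0, 2, sites.size)

        # Crude cover estimate: mean of the 8 neighbours of each pixel.
        pad = np.pad(stego, 1, mode="edge")
        est = sum(pad[r:r + 64, c:c + 64]
                  for r in (0, 1, 2) for c in (0, 1, 2) if (r, c) != (1, 1)) / 8.0

        # Flag pixels whose stego LSB disagrees with the estimated cover LSB.
        suspects = np.flatnonzero((stego & 1) != (np.rint(est).astype(np.int32) & 1))
        hits = np.intersect1d(suspects, sites).size
        print(f"{hits}/{sites.size} embedded sites flagged, "
              f"{suspects.size - hits} false alarms")

    Roughly half the embedded sites flip their LSB and are flagged; the better the cover estimator, the fewer false alarms, which is the motivation for the MRF model.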

  18. Optical mass memories

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1976-01-01

    Optical and magnetic variants in the design of trillion-bit read/write memories are compared and tabulated. Components and materials suitable for a random access read/write nonmoving memory system are examined, with preference given to holography and photoplastic materials. Advantages and deficiencies of photoplastics are reviewed. Holographic page composer design, essential features of an optical memory with no moving parts, fiche-oriented random access memory design, and materials suitable for an efficient photoplastic fiche are considered. The optical variants offer advantages in lower volume and weight at data transfer rates near 1 Mbit/sec, but power drain is of the same order as for the magnetic variants (tape memory, disk memory). The mechanical properties of photoplastic film materials still leave much to be desired.

  19. PDC bits: What's needed to meet tomorrow's challenge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, T.M.; Sinor, L.A.

    1994-12-31

    When polycrystalline diamond compact (PDC) bits were introduced in the mid-1970s they showed tantalizingly high penetration rates in laboratory drilling tests. Single-cutter tests indicated that they had the potential to drill very hard rocks. Unfortunately, 20 years later we're still striving to reach the potential that these bits seem to have. Many problems have been overcome, and PDC bits have offered capabilities not possible with roller cone bits. PDC bits provide the most economical bit choice in many areas, but their limited durability has hampered their application in many other areas.

  20. Why Are People Bad at Detecting Randomness? A Statistical Argument

    ERIC Educational Resources Information Center

    Williams, Joseph J.; Griffiths, Thomas L.

    2013-01-01

    Errors in detecting randomness are often explained in terms of biases and misconceptions. We propose and provide evidence for an account that characterizes the contribution of the inherent statistical difficulty of the task. Our account is based on a Bayesian statistical analysis, focusing on the fact that a random process is a special case of…
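
    The statistical point can be made concrete with a Bayes factor: a fair coin is a special case of a biased coin, so even ideally random data can favor randomness only weakly. The sketch below assumes a Beta(1,1) prior on the unknown bias, an illustrative choice rather than the authors' exact model:

        import math

        def log_marginal_random(seq):
            """log P(seq | fair coin): every bit has probability 1/2."""
            return len(seq) * math.log(0.5)

        def log_marginal_biased(seq, a=1.0, b=1.0):
            """log P(seq | biased coin) with the bias integrated out under a
            Beta(a, b) prior: the Beta-Bernoulli marginal likelihood."""
            h = sum(seq)
            t = len(seq) - h
            return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                    + math.lgamma(a + h) + math.lgamma(b + t)
                    - math.lgamma(a + b + h + t))

        for bits in ([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0],
                     [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]):
            bf = log_marginal_random(bits) - log_marginal_biased(bits)
            print(bits, f"log BF (random vs. biased) = {bf:+.2f}")

    The balanced sequence earns only a small log Bayes factor in favor of randomness, while the heavily skewed one is decisively attributed to bias -- detecting randomness is inherently the harder direction.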
