Sample records for generated random numbers

  1. Self-correcting random number generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Pooser, Raphael C.

    2016-09-06

    A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.

  2. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
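
    The distillation step mentioned above ("further distilled using randomness extraction algorithms") can be illustrated with the classic von Neumann extractor, a generic de-biasing procedure and not necessarily the one used by these authors; the sketch below assumes independent, identically biased raw bits.

      import random

      def von_neumann_extract(bits):
          """Map bit pairs 01 -> 0 and 10 -> 1, discarding 00 and 11.
          The output is unbiased provided the input bits are i.i.d."""
          out = []
          for a, b in zip(bits[0::2], bits[1::2]):
              if a != b:
                  out.append(a)
          return out

      # Even a heavily biased raw stream yields a balanced extracted stream.
      raw = [1 if random.random() < 0.8 else 0 for _ in range(100000)]
      extracted = von_neumann_extract(raw)
      print(len(extracted), sum(extracted) / len(extracted))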

  3. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity of computer realizations, and fast speed of random number generations. This pseudo-random number generator system can be used as ideal synchronous and self-synchronizing stream cipher systems for secure communications.
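
    A toy version of the idea, with a ring of one-way coupled logistic maps standing in for the lattice actually used in the paper (whose maps and parameters may differ), could look as follows.

      def cml_prng(n_bits, size=8, eps=0.95, seed=0.123456789):
          """Toy one-way coupled map lattice:
          x_i(t+1) = (1 - eps) * f(x_i(t)) + eps * f(x_{i-1}(t)),
          with f the fully chaotic logistic map and periodic boundary conditions.
          Output bits are obtained by thresholding the last site at 0.5."""
          f = lambda x: 4.0 * x * (1.0 - x)
          x = [(seed + 0.017 * i) % 1.0 for i in range(size)]
          bits = []
          for _ in range(n_bits):
              x = [(1.0 - eps) * f(x[i]) + eps * f(x[i - 1]) for i in range(size)]
              bits.append(1 if x[-1] > 0.5 else 0)
          return bits

      print(sum(cml_prng(100000)) / 100000.0)   # fraction of ones, roughly 0.5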

  4. Generation of physical random numbers by using homodyne detection

    NASA Astrophysics Data System (ADS)

    Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro

    2016-10-01

    Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
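
    The Toeplitz-matrix hashing step described above can be sketched as a matrix-vector multiplication over GF(2); the block lengths below (256 raw bits compressed to 128 output bits) are illustrative choices, not the values used in the experiment.

      import random

      def toeplitz_hash(raw_bits, m, seed_bits):
          """Multiply an n-bit raw vector by a random m x n binary Toeplitz matrix
          over GF(2). The matrix is fully determined by n + m - 1 seed bits:
          T[i][j] = seed_bits[i - j + n - 1]."""
          n = len(raw_bits)
          assert len(seed_bits) == n + m - 1
          out = []
          for i in range(m):
              acc = 0
              for j in range(n):
                  acc ^= seed_bits[i - j + n - 1] & raw_bits[j]
              out.append(acc)
          return out

      raw = [random.getrandbits(1) for _ in range(256)]
      seed = [random.getrandbits(1) for _ in range(256 + 128 - 1)]
      print(len(toeplitz_hash(raw, 128, seed)))   # 128 extracted bits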

  5. Analysis of Uniform Random Numbers Generated by RANDU and URN Using Ten Different Seeds.

    DTIC Science & Technology

    The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds are... The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds. (Author)
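
    For reference, RANDU is the multiplicative congruential generator x_{k+1} = 65539 x_k mod 2^31; the sketch below reproduces it and notes the lattice defect that makes it a classic cautionary example (URN's definition is machine-specific and is not reproduced here).

      def randu(seed, n):
          """RANDU: x_{k+1} = 65539 * x_k (mod 2**31), values scaled to (0, 1).
          Its notorious weakness is that consecutive triples obey
          x_{k+2} = 6*x_{k+1} - 9*x_k (mod 1) and fall on only 15 planes in 3D."""
          x = seed
          out = []
          for _ in range(n):
              x = (65539 * x) % (2 ** 31)
              out.append(x / 2.0 ** 31)
          return out

      print(randu(1, 6))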

  6. Quantum random number generator

    DOEpatents

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

  7. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary: Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of functions for generating pseudo-random numbers provided in Mathematica.
    Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the used source of randomness and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters, Vol. 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
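
    The package's core task, turning a stream of (quantum) random numbers into random density matrices, can be pictured with the standard Ginibre-ensemble construction below; this is a generic illustration in Python, not the TRQS implementation, and Python's own PRNG stands in for the quantum source.

      import math, random

      def random_density_matrix(d, rand=random.random):
          """Ginibre-ensemble construction: rho = G G^dagger / Tr(G G^dagger),
          where G has i.i.d. complex Gaussian entries obtained from uniform
          random numbers via the Box-Muller transform."""
          def gauss():
              u1, u2 = 1.0 - rand(), rand()
              return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
          G = [[complex(gauss(), gauss()) for _ in range(d)] for _ in range(d)]
          rho = [[sum(G[i][k] * G[j][k].conjugate() for k in range(d))
                  for j in range(d)] for i in range(d)]
          tr = sum(rho[i][i].real for i in range(d))
          return [[rho[i][j] / tr for j in range(d)] for i in range(d)]

      rho = random_density_matrix(4)
      print(sum(rho[i][i].real for i in range(4)))   # trace is 1 by construction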

  8. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

    Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.

  9. Using Computer-Generated Random Numbers to Calculate the Lifetime of a Comet.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1991-01-01

    An educational technique to calculate the lifetime of a comet using software-generated random numbers is introduced to undergraduate physics and astronomy students. Discussed are the generation and eligibility of the required random numbers, background literature related to the problem, and the solution to the problem using random numbers.…

  10. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
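
    As a concrete example of the kind of statistical check applied in NIST Special Publication 800-22 (and, in stronger form, in TestU01), the frequency (monobit) test is sketched below; a sequence passes when the p-value is at least 0.01.

      import math, random

      def monobit_frequency_test(bits):
          """Frequency (monobit) test of NIST SP 800-22: convert bits to +/-1,
          sum them, and compare the normalised magnitude to a half-normal law.
          Returns the p-value."""
          n = len(bits)
          s = sum(2 * b - 1 for b in bits)
          s_obs = abs(s) / math.sqrt(n)
          return math.erfc(s_obs / math.sqrt(2.0))

      bits = [random.getrandbits(1) for _ in range(10 ** 6)]
      print(monobit_frequency_test(bits))   # should exceed 0.01 for good data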

  11. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications that range from simulations and gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.

  12. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits an ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrates the maturity and overall understanding of the technology. PMID:26057576

  13. Quantum random number generation

    DOE PAGES

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; ...

    2016-06-28

    Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.

  14. Quantum random number generation for loophole-free Bell tests

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan; Abellan, Carlos; Amaya, Waldimar

    2015-05-01

    We describe the generation of quantum random numbers at multi-Gbps rates, combined with real-time randomness extraction, to give very high purity random numbers based on quantum events at most tens of ns in the past. The system satisfies the stringent requirements of quantum non-locality tests that aim to close the timing loophole. We describe the generation mechanism using spontaneous-emission-driven phase diffusion in a semiconductor laser, digitization, and extraction by parity calculation using multi-GHz logic chips. We pay special attention to experimental proof of the quality of the random numbers and analysis of the randomness extraction. In contrast to widely-used models of randomness generators in the computer science literature, we argue that randomness generation by spontaneous emission can be extracted from a single source.
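
    The parity-calculation extraction mentioned above can be pictured as XOR-ing a chosen bit from k consecutive digitizer samples into one output bit; the sketch below is schematic, and the block size k is an illustrative choice rather than the experiment's.

      from functools import reduce
      from operator import xor

      def parity_extract(samples, k=8, bit=0):
          """Combine k consecutive digitized samples into one output bit by
          XOR-ing the selected bit of each sample; XOR-ing many weakly random
          bits suppresses bias and short-range correlations."""
          out = []
          for i in range(0, len(samples) - k + 1, k):
              out.append(reduce(xor, ((s >> bit) & 1 for s in samples[i:i + k])))
          return out

      print(parity_extract([37, 12, 255, 3, 90, 66, 128, 17] * 4))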

  15. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
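
    A Lehmer (multiplicative congruential) generator of the kind described, with a prime modulus and a primitive-root multiplier, is sketched below using the widely known Park-Miller parameters; the Sigma 5 program in the record uses its own machine-dependent prime and root.

      def lehmer(seed, n, m=2 ** 31 - 1, a=16807):
          """Multiplicative congruential generator x_{k+1} = a * x_k (mod m), with
          m prime and a a primitive root modulo m, returning values in (0, 1)."""
          x = seed % m
          out = []
          for _ in range(n):
              x = (a * x) % m
              out.append(x / m)
          return out

      print(lehmer(12345, 5))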

  16. Secure uniform random-number extraction via incoherent strategies

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Zhu, Huangjun

    2018-01-01

    To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by the third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of the measurement on the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.
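
    For reference, the relative entropy of coherence that sets the maximum secure extraction rate is defined (with respect to the computational basis) by the standard expression

      C_r(\rho) = S\big(\Delta(\rho)\big) - S(\rho), \qquad
      \Delta(\rho) = \sum_i \langle i|\rho|i\rangle \, |i\rangle\langle i|, \qquad
      S(\sigma) = -\operatorname{Tr}[\sigma \log \sigma],

    where Δ is the completely dephasing map in the computational basis and S is the von Neumann entropy.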

  17. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.

  18. Generating and using truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2012-01-01

    The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summary: Program title: TRQS Catalogue identifier: AEKA_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7924 No. of bytes in distributed program, including test data, etc.: 88 651 Distribution format: tar.gz Programming language: Mathematica, C Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and any computer supporting a recent version of Mathematica Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit) RAM: Case dependent Classification: 4.15 Nature of problem: Generation of random density matrices. Solution method: Use of a physical quantum random number generator. Running time: Generating 100 random numbers takes about 1 second, generating 1000 random density matrices takes more than a minute.

  19. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
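
    The Fourier-synthesis generator can be sketched as follows: a flat amplitude spectrum is combined with independent, uniformly random phases and inverted with an FFT, which yields an approximately Gaussian white sequence. This is a generic illustration using NumPy, not the report's code.

      import numpy as np

      def fourier_synthesis_noise(n, seed=0):
          """Build a length-n sequence from a prescribed (flat) amplitude spectrum
          and uniformly random phases; the inverse real FFT gives approximately
          Gaussian white noise, normalised here to zero mean and unit variance."""
          rng = np.random.default_rng(seed)
          phases = rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1)
          spectrum = np.exp(1j * phases)
          spectrum[0] = 0.0                      # remove the DC component
          x = np.fft.irfft(spectrum, n)
          return x / x.std()

      x = fourier_synthesis_noise(601)
      print(round(x.mean(), 3), round(x.std(), 3))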

  20. Problems with the random number generator RANF implemented on the CDC cyber 205

    NASA Astrophysics Data System (ADS)

    Kalle, Claus; Wansleben, Stephan

    1984-10-01

    We show that using RANF may lead to wrong results when lattice models are simulated by Monte Carlo methods. We present a shift-register sequence random number generator which generates two random numbers per cycle on a two pipe CDC Cyber 205.

  1. Fast physical-random number generation using laser diode's frequency noise: influence of frequency discriminator

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo

    2018-02-01

    Not so long ago, pseudo-random numbers generated by numerical formulae were considered adequate for encrypting important data files, because of the time needed to decode them. With today's ultra-high-speed processors, however, this is no longer true. So, in order to thwart ever-more advanced attempts to breach our systems' protections, cryptologists have devised a method that is considered virtually impossible to decode and uses an effectively limitless supply of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photodetectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as the frequency discriminator, the examination pass rates were determined using the NIST FIPS 140-2 test at each bit, and the Random Number Generation (RNG) speed was noted.

  2. Towards a high-speed quantum random number generator

    NASA Astrophysics Data System (ADS)

    Stucki, Damien; Burri, Samuel; Charbon, Edoardo; Chunnilall, Christopher; Meneghetti, Alessio; Regazzoni, Francesco

    2013-10-01

    Randomness is of fundamental importance in various fields, such as cryptography, numerical simulations, or the gaming industry. Quantum physics, which is fundamentally probabilistic, is the best option for a physical random number generator. In this article, we will present the work carried out in various projects in the context of the development of a commercial and certified high speed random number generator.

  3. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with a remarkable data output rate. In this paper, the principles of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698

  4. Unbiased All-Optical Random-Number Generator

    NASA Astrophysics Data System (ADS)

    Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja

    2017-10-01

    The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.

  5. Leveraging Random Number Generation for Mastery of Learning in Teaching Quantitative Research Courses via an E-Learning Method

    ERIC Educational Resources Information Center

    Boonsathorn, Wasita; Charoen, Danuvasin; Dryver, Arthur L.

    2014-01-01

    E-Learning brings access to a powerful but often overlooked teaching tool: random number generation. Using random number generation, a practically infinite number of quantitative problem-solution sets can be created. In addition, within the e-learning context, in the spirit of the mastery of learning, it is possible to assign online quantitative…

  6. Generating random numbers by means of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi

    2018-07-01

    To introduce the randomness of a physical process to students, a chaotic pendulum experiment was opened at East China University of Science and Technology (ECUST) at the undergraduate level in the physics department. It was shown that chaotic motion could be initiated by adjusting the operation of the chaotic pendulum. By using the data on the angular displacements of the chaotic motion, random binary numerical arrays can be generated. To check the randomness of the generated numerical arrays, the NIST Special Publication 800-20 method was adopted. As a result, it was found that all the random arrays generated by the chaotic motion could pass the validity criteria, and some of them were even better in quality than pseudo-random numbers generated by a computer. Through the experiments, it is demonstrated that a chaotic pendulum can be used as an efficient mechanical facility for generating random numbers, and can be applied in teaching random motion to students.

  7. Doing better by getting worse: posthypnotic amnesia improves random number generation.

    PubMed

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible and low dissociative and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation.

  8. Doing Better by Getting Worse: Posthypnotic Amnesia Improves Random Number Generation

    PubMed Central

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible and low dissociative and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation. PMID:22195022

  9. Self-balanced real-time photonic scheme for ultrafast random number generation

    NASA Astrophysics Data System (ADS)

    Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang

    2018-06-01

    We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), the balanced photo-detection technology is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefitting from ultrafast photo-detection, our method can efficiently eliminate the generation rate bottleneck from electronic ADCs which are required in nearly all the available fast physical random number generators. A proof-of-principle experiment demonstrates that using our approach 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally here is being limited by the bandwidth of the chaotic source. The method described has the potential to attain a real-time rate of 100 Gb/s.

  10. Recommendations and illustrations for the evaluation of photonic random number generators

    NASA Astrophysics Data System (ADS)

    Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi

    2017-09-01

    The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
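
    In the spirit of the recommended entropy-rate estimation, the sketch below computes a simple block-entropy (pattern-entropy) rate for a binary stream, h ≈ H(d+1) − H(d); the full Cohen-Procaccia estimate h(ε, τ) additionally coarse-grains an analog signal at resolution ε and sampling interval τ, which is omitted here.

      import math, random
      from collections import Counter

      def block_entropy(bits, d):
          """Shannon entropy (in bits) of the length-d patterns in a bit sequence."""
          counts = Counter(tuple(bits[i:i + d]) for i in range(len(bits) - d + 1))
          total = sum(counts.values())
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      def entropy_rate(bits, d=8):
          """Estimate the entropy rate as H(d+1) - H(d); an ideal random bit
          stream approaches 1 bit per sample."""
          return block_entropy(bits, d + 1) - block_entropy(bits, d)

      bits = [random.getrandbits(1) for _ in range(200000)]
      print(entropy_rate(bits))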

  11. Quantum random number generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu

    Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.

  12. PUFKEY: A High-Security and High-Throughput Hardware True Random Number Generator for Sensor Networks

    PubMed Central

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-01-01

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks. PMID:26501283

  13. PUFKEY: a high-security and high-throughput hardware true random number generator for sensor networks.

    PubMed

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-10-16

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.

  14. Autocorrelation peaks in congruential pseudorandom number generators

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Merrick, R. B.

    1976-01-01

    The complete correlation structure of several congruential pseudorandom number generators (PRNG) of the same type and small cycle length was studied to deal with the problem of congruential PRNGs almost repeating themselves at intervals smaller than their cycle lengths during simulation of bandpass-filtered normal random noise. Maximum-period multiplicative and mixed congruential generators were studied, with inferences drawn from examination of several tractable members of a class of random number generators with moduli from 2^5 to 2^9. High correlation is shown to exist in mixed and multiplicative congruential random number generators and prime-moduli Lehmer generators for shifts that are a fraction of their cycle length. The random noise sequences in question are required when simulating electrical noise, air turbulence, or time variation of wind parameters.

  15. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
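
    The least-significant-bits concatenation idea can be sketched as below: keep a few low-order bits of each raw sensor reading, pack them into bytes, and estimate the Shannon entropy per byte. The sensor model and bit counts here are illustrative assumptions, not the paper's measured configuration.

      import math, random
      from collections import Counter

      def harvest_lsb_bytes(readings, lsb_count=2):
          """Concatenate the lowest lsb_count bits of each integer reading and
          pack the resulting bitstream into bytes."""
          bits = [(r >> k) & 1 for r in readings for k in range(lsb_count)]
          return [int("".join(map(str, bits[i:i + 8])), 2)
                  for i in range(0, len(bits) - 7, 8)]

      def shannon_entropy_per_byte(data):
          counts = Counter(data)
          n = len(data)
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      # Stand-in for noisy ADC readings from a temperature or humidity sensor.
      readings = [random.randrange(0, 4096) for _ in range(40000)]
      print(shannon_entropy_per_byte(harvest_lsb_bytes(readings)))   # near 8 bits/byte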

  16. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.

  17. Source-Independent Quantum Random Number Generation

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bit/s.

  18. Generation of pseudo-random numbers

    NASA Technical Reports Server (NTRS)

    Howell, L. W.; Rheinfurth, M. H.

    1982-01-01

    Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
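
    One of the standard techniques surveyed in such reports is inverse-transform sampling; the sketch below uses it to turn uniform random numbers into exponentially distributed ones.

      import math, random

      def exponential_sample(rate):
          """Inverse-transform sampling: if U ~ Uniform(0, 1), then
          X = -ln(1 - U) / rate is exponentially distributed with the given rate."""
          return -math.log(1.0 - random.random()) / rate

      samples = [exponential_sample(2.0) for _ in range(100000)]
      print(sum(samples) / len(samples))   # sample mean near 1 / rate = 0.5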

  19. The correlation structure of several popular pseudorandom number generators

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Merrick, R.; Martin, C. F.

    1973-01-01

    One of the desirable properties of a pseudorandom number generator is that the sequence of numbers it generates should have very low autocorrelation for all shifts except for zero shift and those that are multiples of its cycle length. Due to the simple methods of constructing random numbers, the ideal is often not quite fulfilled. A simple method of examining any random generator for previously unsuspected regularities is discussed. Once they are discovered, it is often easy to derive the mathematical relationships which describe the regular behavior. As examples, it is shown that high correlation exists in mixed and multiplicative congruential random number generators and prime-moduli Lehmer generators for shifts that are a fraction of their cycle lengths.
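
    A simple version of the examination described, scanning a small congruential generator for correlation at non-zero shifts, is sketched below; the tiny modulus is chosen only to make regularities easy to see.

      def autocorrelation(x, lag):
          """Sample autocorrelation at a given lag; for a good generator it should
          be near zero at every shift that is not a multiple of the cycle length."""
          n = len(x)
          mean = sum(x) / n
          var = sum((v - mean) ** 2 for v in x) / n
          cov = sum((x[i] - mean) * (x[i + lag] - mean)
                    for i in range(n - lag)) / (n - lag)
          return cov / var

      # Small multiplicative congruential generator: x_{k+1} = 5 * x_k (mod 2**9).
      m, a, x, seq = 2 ** 9, 5, 1, []
      for _ in range(2000):
          x = (a * x) % m
          seq.append(x / m)
      print(max(abs(autocorrelation(seq, k)) for k in range(1, 100)))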

  20. Social Noise: Generating Random Numbers from Twitter Streams

    NASA Astrophysics Data System (ADS)

    Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús

    2015-12-01

    Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.) it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (like noise in semiconductors, quantum phenomenon, etc.) play this role in state of the art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
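
    One way to picture a social TRNG of this kind, though not necessarily the construction used in the paper, is to hash each public message together with its arrival timestamp and keep a few bits of the digest; the function below is a hypothetical sketch, and its output quality ultimately depends on how unpredictable the stream is to an adversary.

      import hashlib

      def bits_from_messages(messages, bits_per_message=8):
          """Hypothetical sketch: hash each (timestamp, text) pair with SHA-256
          and keep bits_per_message bits of the digest as candidate random bits."""
          out = []
          for ts, text in messages:
              digest = hashlib.sha256(f"{ts}|{text}".encode("utf-8")).digest()
              for k in range(bits_per_message):
                  out.append((digest[k // 8] >> (k % 8)) & 1)
          return out

      print(bits_from_messages([(1449000000.123, "hello"), (1449000000.456, "world")]))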

  1. Compact Quantum Random Number Generator with Silicon Nanocrystals Light Emitting Device Coupled to a Silicon Photomultiplier

    NASA Astrophysics Data System (ADS)

    Bisadi, Zahra; Acerbi, Fabio; Fontana, Giorgio; Zorzi, Nicola; Piemonte, Claudio; Pucker, Georg; Pavesi, Lorenzo

    2018-02-01

    A small-sized photonic quantum random number generator that is easy to implement in small electronic devices for secure data encryption and other applications is in high demand nowadays. Here, we propose a compact configuration with a Silicon nanocrystals large-area light emitting device (LED) coupled to a Silicon photomultiplier to generate random numbers. The random number generation methodology is based on the photon arrival time and is robust against the non-idealities of the detector and the source of quantum entropy. The raw data show a high quality of randomness and pass all the statistical tests in the National Institute of Standards and Technology (NIST) suite without a post-processing algorithm. The highest bit rate is 0.5 Mbps with an efficiency of 4 bits per detected photon.

  2. On the limiting characteristics of quantum random number generators at various clusterings of photocounts

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2017-03-01

    Various methods for the clustering of photocounts constituting a sequence of random numbers are considered. It is shown that the clustering of photocounts resulting in the Fermi-Dirac distribution makes it possible to achieve the theoretical limit of the random number generation rate.

  3. A hybrid-type quantum random number generator

    NASA Astrophysics Data System (ADS)

    Hai-Qiang, Ma; Wu, Zhu; Ke-Jin, Wei; Rui-Xue, Li; Hong-Wei, Liu

    2016-05-01

    This paper proposes a well-performing hybrid-type truly quantum random number generator based on the time interval between two independent single-photon detection signals, which is practical and intuitive, and generates the initial random number sources from a combination of multiple existing random number sources. A time-to-amplitude converter and multichannel analyzer are used for qualitative analysis to demonstrate that each and every step is random. Furthermore, a carefully designed data acquisition system is used to obtain a high-quality random sequence. Our scheme is simple and proves that the random number bit rate can be dramatically increased to satisfy practical requirements. Project supported by the National Natural Science Foundation of China (Grant Nos. 61178010 and 11374042), the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China, and the Fundamental Research Funds for the Central Universities of China (Grant No. bupt2014TS01).

  4. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.
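
    The comparison-based scheme can be pictured with the toy simulation below, in which two devices are re-programmed each cycle, their off-state resistances fluctuate from cycle to cycle, and the sign of the comparison gives one bit; the alternating read swaps the roles of the devices on odd cycles to cancel a fixed offset. The resistance value and spread are invented for illustration and are not the paper's measured parameters.

      import random

      def paired_memristor_bits(n, r_off=1.0e6, spread=0.2):
          """Toy model: draw the off-state resistances of devices A and B with
          cycle-to-cycle variation, compare them for one bit per cycle, and flip
          the comparison on odd cycles (alternating read) to remove fixed bias."""
          bits = []
          for i in range(n):
              r_a = random.gauss(r_off, spread * r_off)
              r_b = random.gauss(r_off, spread * r_off)
              bit = 1 if r_a > r_b else 0
              if i % 2 == 1:
                  bit ^= 1
              bits.append(bit)
          return bits

      b = paired_memristor_bits(100000)
      print(sum(b) / len(b))   # close to 0.5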

  5. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

    True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.

  6. DNA-based random number generation in security circuitry.

    PubMed

    Gearheart, Christy M; Arazi, Benjamin; Rouchka, Eric C

    2010-06-01

    DNA-based circuit design is an area of research in which traditional silicon-based technologies are replaced by naturally occurring phenomena taken from biochemistry and molecular biology. This research focuses on further developing DNA-based methodologies to mimic digital data manipulation. While exhibiting fundamental principles, this work was done in conjunction with the vision that DNA-based circuitry, when the technology matures, will form the basis for a tamper-proof security module, revolutionizing the meaning and concept of tamper-proofing and possibly preventing it altogether based on accurate scientific observations. A paramount part of such a solution would be self-generation of random numbers. A novel prototype schema employs solid phase synthesis of oligonucleotides for random construction of DNA sequences; temporary storage and retrieval is achieved through plasmid vectors. A discussion of how to evaluate sequence randomness is included, as well as how these techniques are applied to a simulation of the random number generation circuitry. Simulation results show generated sequences successfully pass three selected NIST random number generation tests specified for security applications.

  7. Experimentally generated randomness certified by the impossibility of superluminal signals.

    PubMed

    Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K

    2018-04-01

    From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.

  8. Random numbers certified by Bell's theorem.

    PubMed

    Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C

    2010-04-15

    Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory.

  9. Realization of a Quantum Random Generator Certified with the Kochen-Specker Theorem

    NASA Astrophysics Data System (ADS)

    Kulikov, Anatoly; Jerger, Markus; Potočnik, Anton; Wallraff, Andreas; Fedorov, Arkady

    2017-12-01

    Random numbers are required for a variety of applications from secure communications to Monte Carlo simulation. Yet randomness is an asymptotic property, and no output string generated by a physical device can be strictly proven to be random. We report an experimental realization of a quantum random number generator (QRNG) with randomness certified by quantum contextuality and the Kochen-Specker theorem. The certification is not performed in a device-independent way but through a rigorous theoretical proof of each outcome being value indefinite even in the presence of experimental imperfections. The analysis of the generated data confirms the incomputable nature of our QRNG.

  10. Realization of a Quantum Random Generator Certified with the Kochen-Specker Theorem.

    PubMed

    Kulikov, Anatoly; Jerger, Markus; Potočnik, Anton; Wallraff, Andreas; Fedorov, Arkady

    2017-12-15

    Random numbers are required for a variety of applications from secure communications to Monte Carlo simulation. Yet randomness is an asymptotic property, and no output string generated by a physical device can be strictly proven to be random. We report an experimental realization of a quantum random number generator (QRNG) with randomness certified by quantum contextuality and the Kochen-Specker theorem. The certification is not performed in a device-independent way but through a rigorous theoretical proof of each outcome being value indefinite even in the presence of experimental imperfections. The analysis of the generated data confirms the incomputable nature of our QRNG.

  11. KMCLib 1.1: Extended random number support and technical updates to the KMCLib general framework for kinetic Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-11-01

    We here present a revised version, v1.1, of the KMCLib general framework for kinetic Monte-Carlo (KMC) simulations. The generation of random numbers in KMCLib now relies on the C++11 standard library implementation, and support has been added for the user to choose from a set of C++11 implemented random number generators. The Mersenne-twister, the 24 and 48 bit RANLUX and a 'minimal-standard' PRNG are supported. We have also included the possibility to use true random numbers via the C++11 std::random_device generator. This release also includes technical updates to support the use of an extended range of operating systems and compilers.
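
    The distinction the record draws, between seedable deterministic engines such as the Mersenne Twister and a true entropy source such as std::random_device, can be illustrated outside of KMCLib as well. The sketch below is a Python analogue only, not KMCLib's C++ API: random.Random plays the role of an mt19937-style engine, and the secrets module stands in for an operating-system entropy source.

        import random
        import secrets

        # Seeded, deterministic PRNG (Python's Mersenne Twister, the same
        # family as the mt19937 engine mentioned above); a fixed seed
        # reproduces the stream exactly.
        prng = random.Random(2015)
        pseudo_sample = [prng.random() for _ in range(3)]

        # OS entropy source, similar in spirit to C++11's std::random_device:
        # not seedable and not reproducible.
        true_bits = secrets.randbits(64)

        print(pseudo_sample, format(true_bits, "016x"))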

  12. Implementation of a quantum random number generator based on the optimal clustering of photocounts

    NASA Astrophysics Data System (ADS)

    Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.

    2017-10-01

    To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable process of measurements of a system from which an initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented with the use of the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach the Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extraction of a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach a yield rate of 64 Mbit/s for the output random sequence.

  13. A Micro-Computer Model for Army Air Defense Training.

    DTIC Science & Technology

    1985-03-01

    generator. The period is 32763 numbers generated before a repetitive sequence is encountered on the development system. Chi-squared tests for frequency... Tests for periodicity... positions in the test array. This was done with several different random number seeds. In each case 32763 random numbers were generated before a...

  14. Superparamagnetic perpendicular magnetic tunnel junctions for true random number generators

    NASA Astrophysics Data System (ADS)

    Parks, Bradley; Bapna, Mukund; Igbokwe, Julianne; Almasi, Hamid; Wang, Weigang; Majetich, Sara A.

    2018-05-01

    Superparamagnetic perpendicular magnetic tunnel junctions are fabricated and analyzed for use in random number generators. Time-resolved resistance measurements are used as streams of bits in statistical tests for randomness. Voltage control of the thermal stability enables tuning the average speed of random bit generation up to 70 kHz in a 60 nm diameter device. In its most efficient operating mode, the device generates random bits at an energy cost of 600 fJ/bit. A narrow range of magnetic field tunes the probability of a given state from 0 to 1, offering a means of probabilistic computing.

  15. Pseudo-random properties of a linear congruential generator investigated by b-adic diaphony

    NASA Astrophysics Data System (ADS)

    Stoev, Peter; Stoilova, Stanislava

    2017-12-01

    In the proposed paper we continue the study of the diaphony, defined in the b-adic number system, and we extend it in different directions. We investigate this diaphony as a tool for estimating the pseudorandom properties of some of the most used random number generators. This is done by evaluating the distribution of specially constructed two-dimensional nets based on the obtained random numbers. The aim is to see how suitable the generated numbers are for calculations in some numerical methods (Monte Carlo, etc.).

  16. Random Number Generation and Executive Functions in Parkinson's Disease: An Event-Related Brain Potential Study.

    PubMed

    Münte, Thomas F; Joppich, Gregor; Däuper, Jan; Schrader, Christoph; Dengler, Reinhard; Heldmann, Marcus

    2015-01-01

    The generation of random sequences is considered to tax executive functions and has previously been reported to be impaired in Parkinson's disease (PD). The aim of this study was to assess the neurophysiological markers of random number generation in PD. Event-related potentials (ERPs) were recorded in 12 PD patients and 12 age-matched normal controls (NC) while they engaged either in random number generation (RNG), by pressing the number keys on a computer keyboard in a random sequence, or in ordered number generation (ONG), necessitating key presses in the canonical order. Key presses were paced by an external auditory stimulus at a rate of 1 tone every 1800 ms. As a secondary task, subjects had to monitor the tone sequence for a particular target tone, in response to which the number "0" key had to be pressed. This target tone occurred randomly and infrequently, thus creating a secondary oddball task. Behaviorally, PD patients showed an increased tendency to count in steps of one as well as a tendency towards repetition avoidance. Electrophysiologically, the amplitude of the P3 component of the ERP to the target tone of the secondary task was reduced during RNG in PD but not in NC. The behavioral findings indicate less random behavior in PD, while the ERP findings suggest that this impairment comes about because attentional resources are depleted in PD.

  17. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers represent a fundamental ingredient for secure communications and numerical simulation, as well as for games and, more generally, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. The optical propagation in strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499

  18. Response Rates in Random-Digit-Dialed Telephone Surveys: Estimation vs. Measurement.

    ERIC Educational Resources Information Center

    Franz, Jennifer D.

    The efficacy of the random digit dialing method in telephone surveys was examined. Random digit dialing (RDD) generates a pure random sample and provides the advantage of including unlisted phone numbers, as well as numbers which are too new to be listed. Its disadvantage is that it generates a major proportion of nonworking and business…

  19. Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao

    2016-07-15

    We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is featured for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
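
    Toeplitz-matrix hashing, the extractor named in this record, compresses n raw bits into m < n nearly uniform output bits using a binary Toeplitz matrix defined by n + m - 1 seed bits. The sketch below is a minimal software illustration of that operation, not the authors' pipelined FPGA implementation; the block sizes and the seed shown here are arbitrary choices.

        import numpy as np
        from scipy.linalg import toeplitz

        def toeplitz_extract(raw_bits, m, seed_bits):
            """Multiply a raw bit vector by a binary Toeplitz matrix over GF(2).

            raw_bits  : length-n sequence of 0/1 raw bits
            m         : number of extracted output bits (m < n)
            seed_bits : length (m + n - 1) sequence of 0/1 seed bits
            """
            n = len(raw_bits)
            assert len(seed_bits) == m + n - 1
            first_col = seed_bits[:m]                                    # column 0 of the matrix
            first_row = np.concatenate(([seed_bits[0]], seed_bits[m:]))  # row 0 of the matrix
            T = toeplitz(first_col, first_row)                           # m x n matrix of 0/1 entries
            return T.dot(raw_bits) % 2                                   # matrix-vector product mod 2

        rng = np.random.default_rng(1)
        raw = rng.integers(0, 2, size=1024)              # stand-in for digitized raw phase noise
        seed = rng.integers(0, 2, size=512 + 1024 - 1)   # public, reusable extractor seed
        extracted = toeplitz_extract(raw, 512, seed)     # 512 conditioned output bits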

  20. Anonymous authenticated communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Cheryl L; Schroeppel, Richard C; Snyder, Lillian A

    2007-06-19

    A method of performing electronic communications between members of a group wherein the communications are authenticated as being from a member of the group and have not been altered, comprising: generating a plurality of random numbers; distributing in a digital medium the plurality of random numbers to the members of the group; publishing a hash value of contents of the digital medium; distributing to the members of the group public-key-encrypted messages each containing a same token comprising a random number; and encrypting a message with a key generated from the token and the plurality of random numbers.

  1. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.

  2. Source-Device-Independent Ultrafast Quantum Random Number Generation.

    PubMed

    Marangon, Davide G; Vallone, Giuseppe; Villoresi, Paolo

    2017-02-10

    Secure random numbers are a fundamental element of many applications in science, statistics, cryptography and, more generally, in security protocols. We present a method that enables the generation of high-speed unpredictable random numbers from the quadratures of an electromagnetic field without any assumption on the input state. The method allows us to eliminate the numbers that can be predicted due to the presence of classical and quantum side information. In particular, we introduce a procedure to estimate a bound on the conditional min-entropy based on the entropic uncertainty principle for position and momentum observables of infinite dimensional quantum systems. By the above method, we experimentally demonstrated the generation of secure true random bits at a rate greater than 1.7 Gbit/s.

  3. Practical quantum random number generator based on measuring the shot noise of vacuum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yong; Zou Hongxin; Tian Liang

    2010-06-15

    The shot noise of vacuum states is a kind of quantum noise and is totally random. In this paper a nondeterministic random number generation scheme based on measuring the shot noise of vacuum states is presented and experimentally demonstrated. We use a homodyne detector to measure the shot noise of vacuum states. Considering that the frequency bandwidth of our detector is limited, we derive the optimal sampling rate so that sampling points have the least correlation with each other. We also choose a method to extract random numbers from sampling values, and prove that the influence of classical noise can be avoided with this method so that the detector does not have to be shot-noise limited. The random numbers generated with this scheme have passed ent and diehard tests.

  4. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential applications of the neutron monitor hardware as random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. Distributions under consideration pass all available normality tests. Inverse transform sampling is suggested to use as a source of the uniform random numbers. Variational Monte Carlo method for quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one minute resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into the faster algorithmic random number generator or create a buffer.
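
    The processing chain described here (spline fit of the raw counts, subtraction of the fit, scaling of the residuals to zero mean and unit variance, and a further transform to uniform variates) can be sketched as follows. This is an illustration of the described recipe on synthetic Poisson counts, not the authors' code; the smoothing heuristic and the simulated data are assumptions.

        import numpy as np
        from scipy.interpolate import UnivariateSpline
        from scipy.stats import norm

        def counts_to_standard_normal(t, counts, smoothing):
            """Remove the slow trend with a smoothing spline and standardize
            the residuals to approximately zero mean and unit variance."""
            trend = UnivariateSpline(t, counts, s=smoothing)
            residuals = counts - trend(t)
            return (residuals - residuals.mean()) / residuals.std()

        # Synthetic stand-in for one day of minute-resolution counts with a
        # slowly drifting rate; real input would be the monitor data tables.
        rng = np.random.default_rng(0)
        t = np.arange(1440.0)
        counts = rng.poisson(100.0 + 0.05 * t).astype(float)

        # Poisson noise variance is roughly the mean count, so target a
        # residual sum of squares of about n * mean for the spline fit.
        z = counts_to_standard_normal(t, counts, smoothing=t.size * counts.mean())

        # The normal CDF maps the standardized residuals to approximately
        # uniform variates (the inverse-transform relation between normal
        # and uniform distributions mentioned in the abstract).
        u = norm.cdf(z)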

  5. Truly random number generation: an example

    NASA Astrophysics Data System (ADS)

    Frauchiger, Daniela; Renner, Renato

    2013-10-01

    Randomness is crucial for a variety of applications, ranging from gambling to computer simulations, and from cryptography to statistics. However, many of the currently used methods for generating randomness do not meet the criteria that are necessary for these applications to work properly and safely. A common problem is that a sequence of numbers may look random but nevertheless not be truly random. In fact, the sequence may pass all standard statistical tests and yet be perfectly predictable. This renders it useless for many applications. For example, in cryptography, the predictability of a "randomly" chosen password is obviously undesirable. Here, we review a recently developed approach to generating true, and hence unpredictable, randomness.

  6. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
    Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
    Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
    RAM: 512 MB to 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory)
    Classification: 4.13, 6.5.
    Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
    Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
    Running time: The tests provided take a few minutes to run.

  7. An investigation of the uniform random number generator

    NASA Technical Reports Server (NTRS)

    Temple, E. C.

    1982-01-01

    Most random number generators that are in use today are of the congruential form X(i+1) = (A X(i) + C) mod M, where A, C, and M are nonnegative integers. If C = 0, the generator is called the multiplicative type, and those for which C ≠ 0 are called mixed congruential generators. It is easy to see that congruential generators will repeat a sequence of numbers after a maximum of M values have been generated. The number of numbers that a procedure generates before restarting the sequence is called the length or the period of the generator. Generally, it is desirable to make the period as long as possible. A detailed discussion of congruential generators is given. Also, several promising procedures that differ from the multiplicative and mixed procedure are discussed.
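
    For reference, the congruential recurrence X(i+1) = (A X(i) + C) mod M fits in a few lines of code. The sketch below uses Park and Miller's "minimal standard" constants purely as an example of a multiplicative (C = 0) generator; it is illustrative and not tied to the report.

        def lcg(seed, a, c, m):
            """Congruential generator X(i+1) = (a*X(i) + c) mod m.
            c == 0 gives a multiplicative generator, c != 0 a mixed one;
            the period can never exceed m."""
            x = seed
            while True:
                x = (a * x + c) % m
                yield x

        # Multiplicative example with the Park-Miller "minimal standard" constants.
        gen = lcg(seed=12345, a=16807, c=0, m=2**31 - 1)
        uniforms = [next(gen) / (2**31 - 1) for _ in range(5)]  # crude floats in (0, 1)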

  8. Experimental study of a quantum random-number generator based on two independent lasers

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Xu, Feihu

    2017-12-01

    A quantum random-number generator (QRNG) can produce true randomness by utilizing the inherent probabilistic nature of quantum mechanics. Recently, the spontaneous-emission quantum phase noise of the laser has been widely deployed for quantum random-number generation, due to its high rate, its low cost, and the feasibility of chip-scale integration. Here, we perform a comprehensive experimental study of a phase-noise-based QRNG with two independent lasers, each of which operates in either continuous-wave (CW) or pulsed mode. We implement the QRNG by operating the two lasers in three configurations, namely, CW + CW, CW + pulsed, and pulsed + pulsed, and demonstrate their trade-offs, strengths, and weaknesses.

  9. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large integer arithmetic with a modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication, the core of the procedure. This was previously believed to be slow and to have too high a cost in terms of computing resources. Our tests show a significant gain in generation speed which is comparable with other fast, high quality random number generators. An additional feature is the fast skipping of generator states, leading to a seeding scheme which guarantees the uniqueness of random number sequences.
    Licensing provisions: GPLv3
    Programming language: C++, C, Assembler

  10. Solution-Processed Carbon Nanotube True Random Number Generator.

    PubMed

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.

  11. Probabilistic generation of random networks taking into account information on motifs occurrence.

    PubMed

    Bois, Frederic Y; Gayraud, Ghislaine

    2015-01-01

    Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli.

  12. Probabilistic Generation of Random Networks Taking into Account Information on Motifs Occurrence

    PubMed Central

    Bois, Frederic Y.

    2015-01-01

    Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli. PMID:25493547

  13. Accelerating Pseudo-Random Number Generator for MCNP on GPU

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu

    2010-09-01

    Pseudo-random number generators (PRNGs) are intensively used in many stochastic algorithms in particle simulations, artificial neural networks and other scientific computation. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jumping, and sufficient speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPU) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 times are achieved compared with 4 to 6 core CPUs, and more than 679.18 million double precision random numbers can be generated per second on the GPU.

  14. Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling

    NASA Astrophysics Data System (ADS)

    Bierhorst, Peter; Shalm, Lynden K.; Mink, Alan; Jordan, Stephen; Liu, Yi-Kai; Rommal, Andrea; Glancy, Scott; Christensen, Bradley; Nam, Sae Woo; Knill, Emanuel

    Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment to obtain data containing randomness that cannot be predicted by any theory that does not also allow the sending of signals faster than the speed of light. To certify and quantify the randomness, we develop a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtain 256 new random bits, uniform to within 10^-3.

  15. Random ambience using high fidelity images

    NASA Astrophysics Data System (ADS)

    Abu, Nur Azman; Sahib, Shahrin

    2011-06-01

    Most secure communication nowadays mandates true random keys as input. These operations are mostly designed and taken care of by the developers of the cryptosystem. Due to the nature of confidential crypto development today, pseudorandom keys are typically designed and still preferred by the developers of the cryptosystem. However, these pseudorandom keys are predictable, periodic and repeatable, and hence they carry minimal entropy. True random keys are believed to be generated only via hardware random number generators. Careful statistical analysis is still required to have any confidence that the process and apparatus generate numbers that are sufficiently random to suit cryptographic use. In this research, each moment in life is considered unique in itself, and the random key is unique to the moment at which the user generates it for practical secure communication. An ambience of high-fidelity digital images is tested for randomness according to the NIST Statistical Test Suite. A recommendation on generating simple random cryptographic keys live at 4 megabits per second is reported.

  16. 25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...

  17. 25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...

  18. 25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...

  19. Scope of Various Random Number Generators in Ant System Approach for TSP

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2007-01-01

    Several quasi- and pseudo-random number generators are tested with a heuristic based on an ant system approach for the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. This is intended to seek an answer to the controversial performance ranking of the generators in a probabilistic/statistical sense.

  20. Random numbers from vacuum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
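
    The record mentions a fast extraction scheme built around a linear feedback shift register (LFSR). The snippet below is a generic software illustration of one way an LFSR can act as a linear conditioner: each raw bit is XORed into the feedback of a 16-bit Fibonacci LFSR with maximal-length taps, and one conditioned bit is emitted per input bit. It is not the circuit used in the paper; the register width, taps, and initial state are arbitrary choices.

        def lfsr_whiten(raw_bits, state=0xACE1, taps=(15, 13, 12, 10)):
            """XOR each raw bit into the feedback of a 16-bit Fibonacci LFSR
            (taps correspond to the maximal-length polynomial
            x^16 + x^14 + x^13 + x^11 + 1) and emit the new low bit."""
            out = []
            for b in raw_bits:
                feedback = b
                for t in taps:
                    feedback ^= (state >> t) & 1          # XOR of the tapped register bits
                state = ((state << 1) | feedback) & 0xFFFF  # shift left, insert feedback bit
                out.append(state & 1)
            return out

        raw = [1, 0, 1, 1, 0, 0, 1, 0] * 4                # stand-in for digitized vacuum noise
        conditioned = lfsr_whiten(raw)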

  1. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  2. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  3. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  4. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  5. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  6. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
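
    A standard way to generate such pairs, consistent with what the abstract describes but not necessarily identical to the report's FORTRAN routine, is to combine two independent standard normal variates z1 and z2 as x = mu1 + sigma1*z1 and y = mu2 + sigma2*(rho*z1 + sqrt(1 - rho^2)*z2):

        import math
        import random

        def bivariate_normal_pair(mu1, mu2, sigma1, sigma2, rho, rng=random):
            """One (x, y) sample with the requested means, standard deviations
            and correlation coefficient rho, built from two independent
            standard normal variates."""
            z1 = rng.gauss(0.0, 1.0)
            z2 = rng.gauss(0.0, 1.0)
            x = mu1 + sigma1 * z1
            y = mu2 + sigma2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
            return x, y

        pairs = [bivariate_normal_pair(0.0, 5.0, 1.0, 2.0, rho=0.8) for _ in range(10000)]

    As the abstract notes, the accuracy of the resulting correlation then depends only on the quality of the underlying normal (and ultimately uniform) generator and on floating-point arithmetic.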

  7. Direct generation of all-optical random numbers from optical pulse amplitude chaos.

    PubMed

    Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong

    2012-02-13

    We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from pulse amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can experience a quasi-periodic route to chaos. Such chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. In this method, we require neither a sampling procedure nor externally triggered clocks, but directly quantize the chaotic pulse stream into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse amplitude chaos has no periodicity and possesses a highly symmetric distribution of amplitude. Thus, in theory, the obtained random number sequence without post-processing has high-quality randomness, verified by industry-standard statistical tests.

  8. A Comparison of One Time Pad Random Key Generation using Linear Congruential Generator and Quadratic Congruential Generator

    NASA Astrophysics Data System (ADS)

    Apdilah, D.; Harahap, M. K.; Khairina, N.; Husein, A. M.; Harahap, M.

    2018-04-01

    The One Time Pad algorithm always requires pairing a key with the plaintext. If the key is shorter than the plaintext, the key is repeated until its length matches the length of the plaintext. In this research, we use a Linear Congruential Generator and a Quadratic Congruential Generator for generating random numbers. The One Time Pad uses a random number as a key for the encryption and decryption process; key generation starts from the first letter of the plaintext. We compare these two algorithms in terms of encryption speed, and the result is that the combination of OTP with LCG is faster than the combination of OTP with QCG.
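
    The pairing of a congruential keystream with an XOR-based One Time Pad can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the LCG constants are arbitrary, and, strictly speaking, a pad derived from an LCG is a stream cipher rather than a true one-time pad, since the keystream is predictable from the seed.

        def lcg_keystream(seed, n, a=1103515245, c=12345, m=2**31):
            """n key bytes from a linear congruential generator
            (constants here are illustrative only)."""
            x, key = seed, bytearray()
            for _ in range(n):
                x = (a * x + c) % m
                key.append((x >> 16) & 0xFF)   # use a middle byte; the low bits of an LCG are weak
            return bytes(key)

        def otp_xor(data: bytes, key: bytes) -> bytes:
            """XOR the data with an equal-length key; the same call encrypts and decrypts."""
            return bytes(d ^ k for d, k in zip(data, key))

        plaintext = b"ATTACK AT DAWN"
        key = lcg_keystream(seed=42, n=len(plaintext))
        ciphertext = otp_xor(plaintext, key)
        assert otp_xor(ciphertext, key) == plaintext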

  9. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  10. Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm

    NASA Astrophysics Data System (ADS)

    Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.

    2018-03-01

    Prime numbers appeal to researchers because of their complexity, and many algorithms, ranging from simple to computationally complex, can be used to generate them. The Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can be used to generate prime numbers from randomly generated or sequential numbers. The testing in this study aims to find out which algorithm is better suited for large primes in terms of time complexity. The tests are assisted by applications designed in the Java language with code optimization and maximum memory usage, so that the testing processes can run simultaneously and the results obtained can be objective.
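
    For concreteness, minimal versions of the two sieves being compared are sketched below (textbook formulations, not the Java implementations used in the study); both return every prime up to n, and their agreement can be checked directly.

        def sieve_of_eratosthenes(n):
            """Return all primes <= n by repeatedly crossing out multiples."""
            is_prime = [True] * (n + 1)
            is_prime[0] = is_prime[1] = False
            for p in range(2, int(n ** 0.5) + 1):
                if is_prime[p]:
                    for multiple in range(p * p, n + 1, p):
                        is_prime[multiple] = False
            return [i for i, flag in enumerate(is_prime) if flag]

        def sieve_of_sundaram(n):
            """Return all primes <= n; Sundaram's sieve marks numbers of the
            form i + j + 2*i*j, and each unmarked k yields the odd prime 2*k + 1."""
            k = (n - 1) // 2
            marked = [False] * (k + 1)
            for i in range(1, k + 1):
                j = i
                while i + j + 2 * i * j <= k:
                    marked[i + j + 2 * i * j] = True
                    j += 1
            primes = [2] if n >= 2 else []
            primes += [2 * m + 1 for m in range(1, k + 1) if not marked[m]]
            return primes

        assert sieve_of_eratosthenes(100) == sieve_of_sundaram(100)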

  11. Digital-Analog Hybrid Scheme and Its Application to Chaotic Random Number Generators

    NASA Astrophysics Data System (ADS)

    Yuan, Zeshi; Li, Hongtao; Miao, Yunchi; Hu, Wen; Zhu, Xiaohua

    2017-12-01

    Practical random number generation (RNG) circuits are typically achieved with analog devices or digital approaches. Digital-based techniques, which use field programmable gate array (FPGA) and graphics processing units (GPU) etc. usually have better performances than analog methods as they are programmable, efficient and robust. However, digital realizations suffer from the effect of finite precision. Accordingly, the generated random numbers (RNs) are actually periodic instead of being real random. To tackle this limitation, in this paper we propose a novel digital-analog hybrid scheme that employs the digital unit as the main body, and minimum analog devices to generate physical RNs. Moreover, the possibility of realizing the proposed scheme with only one memory element is discussed. Without loss of generality, we use the capacitor and the memristor along with FPGA to construct the proposed hybrid system, and a chaotic true random number generator (TRNG) circuit is realized, producing physical RNs at a throughput of Gbit/s scale. These RNs successfully pass all the tests in the NIST SP800-22 package, confirming the significance of the scheme in practical applications. In addition, the use of this new scheme is not restricted to RNGs, and it also provides a strategy to solve the effect of finite precision in other digital systems.

  12. Megahertz-Rate Semi-Device-Independent Quantum Random Number Generators Based on Unambiguous State Discrimination

    NASA Astrophysics Data System (ADS)

    Brask, Jonatan Bohr; Martin, Anthony; Esposito, William; Houlmann, Raphael; Bowles, Joseph; Zbinden, Hugo; Brunner, Nicolas

    2017-05-01

    An approach to quantum random number generation based on unambiguous quantum state discrimination is developed. We consider a prepare-and-measure protocol, where two nonorthogonal quantum states can be prepared, and a measurement device aims at unambiguously discriminating between them. Because the states are nonorthogonal, this necessarily leads to a minimal rate of inconclusive events whose occurrence must be genuinely random and which provide the randomness source that we exploit. Our protocol is semi-device-independent in the sense that the output entropy can be lower bounded based on experimental data and a few general assumptions about the setup alone. It is also practically relevant, which we demonstrate by realizing a simple optical implementation, achieving rates of 16.5 Mbits /s . Combining ease of implementation, a high rate, and a real-time entropy estimation, our protocol represents a promising approach intermediate between fully device-independent protocols and commercial quantum random number generators.

  13. Quantum random number generator based on quantum nature of vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.

    2017-11-01

    Quantum random number generator (QRNG) allows obtaining true random bit sequences. In QRNG based on quantum nature of vacuum, optical beam splitter with two inputs and two outputs is normally used. We compare mathematical descriptions of spatial beam splitter and fiber Y-splitter in the quantum model for QRNG, based on homodyne detection. These descriptions were identical, that allows to use fiber Y-splitters in practical QRNG schemes, simplifying the setup. Also we receive relations between the input radiation and the resulting differential current in homodyne detector. We experimentally demonstrate possibility of true random bits generation by using QRNG based on homodyne detection with Y-splitter.

  14. 640-Gbit/s fast physical random number generation using a broadband chaotic semiconductor laser

    NASA Astrophysics Data System (ADS)

    Zhang, Limeng; Pan, Biwei; Chen, Guangcan; Guo, Lu; Lu, Dan; Zhao, Lingjuan; Wang, Wei

    2017-04-01

    An ultra-fast physical random number generator is demonstrated utilizing a photonic integrated device based broadband chaotic source with a simple post data processing method. The compact chaotic source is implemented by using a monolithic integrated dual-mode amplified feedback laser (AFL) with self-injection, where a robust chaotic signal with RF frequency coverage of above 50 GHz and flatness of ±3.6 dB is generated. By retaining the 4 least significant bits (LSBs) from the 8-bit digitization of the chaotic waveform, random sequences with a bit rate of up to 640 Gbit/s (160 GS/s × 4 bits) are realized. The generated random bits have passed each of the fifteen NIST statistical tests (NIST SP800-22), indicating their randomness for practical applications.
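
    The post-processing step named here, keeping the 4 least significant bits of each 8-bit sample, is simple enough to state in code. The sketch below is a generic illustration on synthetic samples; the real input would be the ADC samples of the chaotic waveform.

        import numpy as np

        def keep_lsbs(samples, n_lsb=4):
            """Retain the n_lsb least significant bits of each 8-bit sample and
            concatenate them into a bit stream (most significant kept bit first)."""
            bits = []
            for s in np.asarray(samples, dtype=np.uint8):
                for k in range(n_lsb - 1, -1, -1):
                    bits.append((int(s) >> k) & 1)
            return bits

        adc_samples = np.random.default_rng(0).integers(0, 256, size=8, dtype=np.uint8)
        bitstream = keep_lsbs(adc_samples)   # 4 bits per sample -> 32 bits here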

  15. Not all numbers are equal: preferences and biases among children and adults when generating random sequences.

    PubMed

    Towse, John N; Loetscher, Tobias; Brugger, Peter

    2014-01-01

    We investigate the number preferences of children and adults when generating random digit sequences. Previous research has shown convincingly that adults prefer smaller numbers when randomly choosing between responses 1-6. We analyze randomization choices made by both children and adults, considering a range of experimental studies and task configurations. Children, most of whom are between 8 and 11 years old, show a preference for relatively large numbers when choosing numbers 1-10. Adults show a preference for small numbers with the same response set. We report a modest association between children's age and numerical bias. However, children also exhibit a small number bias with a smaller response set available, and they show a preference specifically for the numbers 1-3 across many datasets. We argue that number space demonstrates both continuities (numbers 1-3 have a distinct status) and change (a developmentally emerging bias toward the left side of representational space or lower numbers).

  16. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.

  17. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items. The algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
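
    A minimal Monte Carlo randomizer in the spirit of the first technique mentioned, repeatedly sampling a sequence and restarting when the no-immediate-repetition constraint cannot be satisfied, might look as follows. It is a simplified sketch, not the authors' algorithm or their transition-table method, and the item counts shown are arbitrary.

        import random

        def randomize_no_repeats(counts, rng=random, max_tries=10000):
            """Build a randomized sequence honouring the requested item counts
            while forbidding immediate repetitions, restarting whenever the
            construction paints itself into a corner."""
            items = list(counts)
            for _ in range(max_tries):
                remaining = dict(counts)
                seq, prev, ok = [], None, True
                while any(remaining.values()):
                    choices = [i for i in items if remaining[i] > 0 and i != prev]
                    if not choices:
                        ok = False
                        break
                    weights = [remaining[i] for i in choices]     # favour over-represented items
                    prev = rng.choices(choices, weights=weights)[0]
                    remaining[prev] -= 1
                    seq.append(prev)
                if ok:
                    return seq
            raise RuntimeError("no valid sequence found; counts may be infeasible")

        print(randomize_no_repeats({"A": 5, "B": 5, "C": 3}))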

  18. On the design of henon and logistic map-based random number generator

    NASA Astrophysics Data System (ADS)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) method is one of the approaches to generating the key sequence. The randomness sources of TRNGs are divided into three main groups: electrical noise based, jitter based, and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete time chaotic system is proposed, which is then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as a harvester method to obtain the series of random bits. Without any post processing, the proposed design generated random bit sequences with high entropy value and passed all NIST 800.22 statistical tests.
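
    As a purely numerical illustration of the idea of combining a 2D and a 1D chaotic map with a comparator-style harvester, consider the sketch below. It is not the proposed LabVIEW design: the maps, parameters, comparator rule, and absence of any conditioning are all simplifying assumptions, and the resulting bits would still need statistical testing (and likely debiasing) before any cryptographic use.

        def henon_logistic_bits(n_bits, x=0.1, y=0.3, z=0.4,
                                a=1.4, b=0.3, r=3.99):
            """Iterate a Henon map (x, y) and a logistic map (z) in lockstep and
            emit one bit per step by comparing the Henon x state against the
            logistic state rescaled to a similar range."""
            bits = []
            for _ in range(n_bits):
                x, y = 1.0 - a * x * x + y, b * x            # Henon map update
                z = r * z * (1.0 - z)                        # logistic map update
                bits.append(1 if x > 2.0 * z - 1.0 else 0)   # comparator-style harvester
            return bits

        stream = henon_logistic_bits(128)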

  19. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-01

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^-5. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.

  20. Quantum random bit generation using energy fluctuations in stimulated Raman scattering.

    PubMed

    Bustard, Philip J; England, Duncan G; Nunn, Josh; Moffatt, Doug; Spanner, Michael; Lausten, Rune; Sussman, Benjamin J

    2013-12-02

    Random number sequences are a critical resource in modern information processing systems, with applications in cryptography, numerical simulation, and data sampling. We introduce a quantum random number generator based on the measurement of pulse energy quantum fluctuations in Stokes light generated by spontaneously-initiated stimulated Raman scattering. Bright Stokes pulse energy fluctuations up to five times the mean energy are measured with fast photodiodes and converted to unbiased random binary strings. Since the pulse energy is a continuous variable, multiple bits can be extracted from a single measurement. Our approach can be generalized to a wide range of Raman active materials; here we demonstrate a prototype using the optical phonon line in bulk diamond.

  1. Security of practical private randomness generation

    NASA Astrophysics Data System (ADS)

    Pironio, Stefano; Massar, Serge

    2013-01-01

    Measurements on entangled quantum systems necessarily yield outcomes that are intrinsically unpredictable if they violate a Bell inequality. This property can be used to generate certified randomness in a device-independent way, i.e., without making detailed assumptions about the internal working of the quantum devices used to generate the random numbers. Furthermore these numbers are also private; i.e., they appear random not only to the user but also to any adversary that might possess a perfect description of the devices. Since this process requires a small initial random seed to sample the behavior of the quantum devices and to extract uniform randomness from the raw outputs of the devices, one usually speaks of device-independent randomness expansion. The purpose of this paper is twofold. First, we point out that in most real, practical situations, where the concept of device independence is used as a protection against unintentional flaws or failures of the quantum apparatuses, it is sufficient to show that the generated string is random with respect to an adversary that holds only classical side information; i.e., proving randomness against quantum side information is not necessary. Furthermore, the initial random seed does not need to be private with respect to the adversary, provided that it is generated in a way that is independent from the measured systems. The devices, however, will generate cryptographically secure randomness that cannot be predicted by the adversary, and thus one can, given access to free public randomness, talk about private randomness generation. The theoretical tools to quantify the generated randomness according to these criteria were already introduced in S. Pironio et al. [Nature (London) 464, 1021 (2010); doi:10.1038/nature09008], but the final results were improperly formulated. The second aim of this paper is to correct this inaccurate formulation and therefore lay out a precise theoretical framework for practical device-independent randomness generation.

  2. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.

  3. Influence of finger and mouth action observation on random number generation: an instance of embodied cognition for abstract concepts.

    PubMed

    Grade, Stéphane; Badets, Arnaud; Pesenti, Mauro

    2017-05-01

    Numerical magnitude and specific grasping action processing have been shown to interfere with each other because some aspects of numerical meaning may be grounded in sensorimotor transformation mechanisms linked to finger grip control. However, how specific these interactions are to grasping actions is still unknown. The present study tested the specificity of the number-grip relationship by investigating how the observation of different closing-opening stimuli that might or not refer to prehension-releasing actions was able to influence a random number generation task. Participants had to randomly produce numbers after they observed action stimuli representing either closure or aperture of the fingers, the hand or the mouth, or a colour change used as a control condition. Random number generation was influenced by the prior presentation of finger grip actions, whereby observing a closing finger grip led participants to produce small rather than large numbers, whereas observing an opening finger grip led them to produce large rather than small numbers. Hand actions had reduced or no influence on number production; mouth action influence was restricted to opening, with an overproduction of large numbers. Finally, colour changes did not influence number generation. These results show that some characteristics of observed finger, hand and mouth grip actions automatically prime number magnitude, with the strongest effect for finger grasping. The findings are discussed in terms of the functional and neural mechanisms shared between hand actions and number processing, but also between hand and mouth actions. The present study provides converging evidence that part of number semantics is grounded in sensory-motor mechanisms.

  4. Toward DNA-based Security Circuitry: First Step - Random Number Generation.

    PubMed

    Bogard, Christy M; Arazi, Benjamin; Rouchka, Eric C

    2008-08-10

    DNA-based circuit design is an area of research in which traditional silicon-based technologies are replaced by naturally occurring phenomena taken from biochemistry and molecular biology. Our team investigates the implications of DNA-based circuit design for security applications. As an initial step, we develop random number generation circuitry. A novel prototype schema employs solid-phase synthesis of oligonucleotides for random construction of DNA sequences. Temporary storage and retrieval is achieved through plasmid vectors.

  5. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
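
    The same kinds of schedule values can be produced outside Excel; the Python sketch below (illustrative, not the article's macros) generates variable-ratio requirements that average a target ratio and random-interval values with a constant probability of reinforcement per unit time (exponentially distributed intervals).

        # Hedged sketch in Python, not the article's Excel macros: generate
        # variable-ratio and random-interval schedule values with the properties
        # described above.  Function names and parameters are illustrative.
        import random

        def variable_ratio_values(mean_ratio, n):
            """n response requirements, uniform on 1..(2*mean_ratio - 1),
            so their mean is approximately mean_ratio."""
            return [random.randint(1, 2 * mean_ratio - 1) for _ in range(n)]

        def random_interval_values(mean_seconds, n):
            """n inter-reinforcement intervals with a constant hazard rate."""
            return [random.expovariate(1.0 / mean_seconds) for _ in range(n)]

        if __name__ == "__main__":
            print(variable_ratio_values(10, 5))
            print([round(t, 1) for t in random_interval_values(30.0, 5)])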

  6. Efficient and robust quantum random number generation by photon number detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Applegate, M. J.; Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE; Thomas, O.

    2015-08-17

    We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution, to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate, which we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99%, corresponding to 1.97 bits per detected photon number and yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential for multi-Gbit/s bit rates.
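
    A rough way to see where roughly two bits per detection can come from is to compute the Shannon entropy of the detected photon-number distribution as the mean photon number is varied; the sketch below uses a truncated Poisson model as an illustrative stand-in for the calibrated detector response described in the paper.

        # Hedged sketch: estimate extractable randomness per pulse as the Shannon
        # entropy of the detected photon-number distribution, modeled here as a
        # Poisson distribution with counts >= 4 lumped together.  The model and
        # the scan over mean photon number are illustrative only.
        import math

        def detected_distribution(mean_photons, cutoff=4):
            probs = [math.exp(-mean_photons) * mean_photons ** k / math.factorial(k)
                     for k in range(cutoff)]
            probs.append(1.0 - sum(probs))   # outcome "cutoff or more photons"
            return probs

        def entropy_bits(probs):
            return -sum(p * math.log2(p) for p in probs if p > 0)

        if __name__ == "__main__":
            for mu in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
                h = entropy_bits(detected_distribution(mu))
                print(f"mean photon number {mu:.1f}: {h:.3f} bits per pulse")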

  7. True randomness from an incoherent source

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2017-11-01

    Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
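
    The Toeplitz-hashing step mentioned above can be written compactly; the sketch below is a generic binary Toeplitz extractor over GF(2) with illustrative sizes and a software-generated seed and raw data, not the dimensions or implementation used in the experiment.

        # Hedged sketch of a generic binary Toeplitz-hashing extractor over GF(2).
        # Sizes, raw data and seed are illustrative stand-ins.
        import random

        def toeplitz_extract(raw_bits, out_len, seed_bits):
            """Multiply raw_bits by the Toeplitz matrix T with T[i][j] =
            seed_bits[i - j + n - 1]; seed_bits must have length n + out_len - 1."""
            n = len(raw_bits)
            assert len(seed_bits) == n + out_len - 1
            out = []
            for i in range(out_len):
                acc = 0
                for j in range(n):
                    acc ^= seed_bits[i - j + n - 1] & raw_bits[j]
                out.append(acc)
            return out

        if __name__ == "__main__":
            raw = [random.getrandbits(1) for _ in range(256)]          # stand-in raw data
            seed = [random.getrandbits(1) for _ in range(256 + 64 - 1)]
            print(toeplitz_extract(raw, 64, seed))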

  8. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole.

    PubMed

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-05

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, conducted without the independent identical distribution assumption, we consider the worst-case scenario in which the adversary launches the most powerful quantum attacks. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.

  9. Scope of Various Random Number Generators in ant System Approach for TSP

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2007-01-01

    Several quasi- and pseudo-random number generators are tested on a heuristic based on an ant system approach to the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. It mainly seeks an answer to the controversial issue "which generator is the best in terms of quality of the result (accuracy) as well as cost of producing the result (time/computational complexity) in a probabilistic/statistical sense."

  10. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC. To reproduce the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is a field programmable gate array that randomly outputs 256 channels of simulation signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter in ring oscillators. The experimental results show that the random numbers are uniformly distributed in a histogram and that the whole system has high reliability.

  11. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Kun; Wang, Xu; Li, Feng

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC. To reproduce the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is a field programmable gate array that randomly outputs 256 channels of simulation signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter in ring oscillators. The experimental results show that the random numbers are uniformly distributed in a histogram and that the whole system has high reliability.

  12. Pseudo-Random Number Generation in Children with High-Functioning Autism and Asperger's Disorder: Further Evidence for a Dissociation in Executive Functioning?

    ERIC Educational Resources Information Center

    Rinehart, Nicole J.; Bradshaw, John L.; Moss, Simon A.; Brereton, Avril V.; Tonge, Bruce J.

    2006-01-01

    The repetitive, stereotyped and obsessive behaviours, which are core diagnostic features of autism, are thought to be underpinned by executive dysfunction. This study examined executive impairment in individuals with autism and Asperger's disorder using a verbal equivalent of an established pseudo-random number generating task. Different patterns…

  13. Compact quantum random number generator based on superluminescent light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Wei, Shihai; Yang, Jie; Fan, Fan; Huang, Wei; Li, Dashuang; Xu, Bingjie

    2017-12-01

    By measuring the amplified spontaneous emission (ASE) noise of superluminescent light-emitting diodes, we propose and realize a practical quantum random number generator (QRNG). In the QRNG, after detection and amplification of the ASE noise, data acquisition and randomness extraction are both implemented in real time in a field programmable gate array (FPGA), and the final random bit sequences are delivered to a host computer with a real-time generation rate of 1.2 Gbps. Further, to achieve compactness, all the components of the QRNG are integrated on three independent printed circuit boards with a compact design, and the QRNG is packed in a small enclosure sized 140 mm × 120 mm × 25 mm. The final random bit sequences pass all the NIST-STS and DIEHARD tests.

  14. Random number generation in bilingual Balinese and German students: preliminary findings from an exploratory cross-cultural study.

    PubMed

    Strenge, Hans; Lesmana, Cokorda Bagus Jaya; Suryani, Luh Ketut

    2009-08-01

    Verbal random number generation is a procedurally simple task for assessing executive function and appears ideally suited for use in diverse settings in cross-cultural research. The objective of this study was to examine ethnic group differences between young adults in Bali (Indonesia) and Kiel (Germany): 50 bilingual healthy students, 30 Balinese and 20 Germans, attempted to generate a random sequence of the digits 1 to 9. Balinese participants performed the randomization in Balinese (native language L1) and Indonesian (first foreign language L2), and German participants in German (L1) and English (L2). 10 of 30 Balinese (33%), but no Germans, were unable to inhibit habitual counting in more than half of the responses. The Balinese produced significantly more nonrandom responses than the Germans, with higher rates of counting and significantly less frequent occurrence of the digits 2 and 3 in L1 compared with L2. Repetition and cycling behavior did not differ between the four languages. The findings highlight the importance of taking into account culture-bound psychosocial factors for Balinese individuals when administering and interpreting a random number generation test.
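
    Indices of the kind reported in such studies (counting tendencies, immediate repetitions, digit frequencies) can be computed from a produced digit sequence as in the simplified sketch below; these definitions are illustrative and not necessarily the exact measures used by the authors.

        # Hedged sketch of simple descriptive indices for a produced digit
        # sequence (digits 1-9): rate of counting steps (+/-1 successions),
        # rate of immediate repetitions, and per-digit frequencies.
        from collections import Counter

        def rng_task_indices(seq):
            pairs = list(zip(seq, seq[1:]))
            counting = sum(abs(b - a) == 1 for a, b in pairs) / len(pairs)
            repetition = sum(a == b for a, b in pairs) / len(pairs)
            return {"counting_rate": counting,
                    "repetition_rate": repetition,
                    "digit_frequencies": dict(sorted(Counter(seq).items()))}

        if __name__ == "__main__":
            example = [1, 2, 3, 7, 5, 5, 9, 8, 2, 4, 6, 1, 3]
            print(rng_task_indices(example))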

  15. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286

  16. Random number generators tested on quantum Monte Carlo simulations.

    PubMed

    Hongo, Kenta; Maezono, Ryo; Miura, Kenichi

    2010-08-01

    We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical problem: ground-state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne twister generator (MT19937) are tested and compared with the RANLUX generator at five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are shown to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, at lower computational cost for generating the sequence. We also tested the notoriously flawed linear congruential generator (LCG) implementation RANDU for comparison.

  17. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  18. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    PubMed Central

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information. PMID:26823196

  19. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption.

    PubMed

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-29

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  20. N-state random switching based on quantum tunnelling

    NASA Astrophysics Data System (ADS)

    Bernardo Gavito, Ramón; Jiménez Urbanos, Fernando; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J.; Woodhead, Christopher S.; Missous, Mohamed; Roedig, Utz; Young, Robert J.

    2017-08-01

    In this work, we show how the hysteretic behaviour of resonant tunnelling diodes (RTDs) can be exploited for new functionalities. In particular, the RTDs exhibit a stochastic 2-state switching mechanism that could be useful for random number generation and cryptographic applications. This behaviour can be scaled to N-bit switching by connecting several RTDs in series. The InGaAs/AlAs RTDs used in our experiments display very sharp negative differential resistance (NDR) peaks at room temperature, which show hysteresis cycles that, rather than having a fixed switching threshold, show a probability distribution about a central value. We propose to use this intrinsic uncertainty, emerging from the quantum nature of the RTDs, as a source of randomness. We show that a combination of two RTDs in series results in devices with three-state outputs and discuss the possibility of scaling to N-state devices by subsequent series connections of RTDs, which we demonstrate up to the 4-state case. We suggest that the intrinsic uncertainty in the conduction paths of resonant tunnelling diodes can serve as a source of randomness that can be integrated into current electronics to produce on-chip true random number generators. The N-shaped I-V characteristic of RTDs results in a two-level random voltage output when driven with current pulse trains. Electrical characterisation and randomness testing of the devices were conducted in order to determine the validity of the true randomness assumption. Based on the results obtained for the single RTD case, we suggest the possibility of using multi-well devices to generate N-state random switching devices for use in random number generation or multi-valued logic devices.

  1. Standard random number generation for MBASIC

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a(m+532) = a(m+37) + a(m) (mod 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
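
    A direct software rendering of this recurrence is straightforward; the sketch below implements a(m+532) = a(m+37) + a(m) (mod 2) with an arbitrary nonzero seed and packs the bit stream into nonoverlapping 28-bit words, without attempting to reproduce the original MBASIC implementation.

        # Hedged sketch of the linear recurrence modulo 2 stated above, packed
        # into nonoverlapping 28-bit words.  The seed is arbitrary.
        import random

        def tausworthe_words(n_words, seed_bits=None):
            state = (list(seed_bits) if seed_bits is not None
                     else [random.getrandbits(1) for _ in range(532)])
            assert len(state) == 532 and any(state)
            words, bits = [], []
            while len(words) < n_words:
                new_bit = state[37] ^ state[0]     # a(m+37) + a(m) mod 2
                state = state[1:] + [new_bit]      # new bit becomes a(m+532)
                bits.append(new_bit)
                if len(bits) == 28:
                    words.append(int("".join(map(str, bits)), 2))
                    bits = []
            return words

        if __name__ == "__main__":
            print(tausworthe_words(5))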

  2. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices.

  3. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

    We discuss here the relative merits of these numbers (the golden ratio and pi) as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive for using a random sequence is to solve real-world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against sequences generated by a popular pseudo-random generator, viz. the Matlab rand, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.

  4. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
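
    The transformation idea can be illustrated by inverse-transform sampling, shown below for two of the distributions listed (exponential and Weibull) with illustrative parameters; the report's actual load-history construction is not reproduced.

        # Hedged sketch: map uniform random numbers into non-Gaussian load values
        # by inverse-transform sampling.  Parameters are illustrative.
        import math
        import random

        def exponential_sample(mean):
            u = random.random()
            return -mean * math.log(1.0 - u)

        def weibull_sample(scale, shape):
            u = random.random()
            return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

        if __name__ == "__main__":
            loads = [weibull_sample(scale=100.0, shape=1.5) for _ in range(10)]
            print([round(x, 1) for x in loads])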

  5. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way to sample large (prime) knots, which can be useful in various applications.

  6. Single-electron random-number generator (RNG) for highly secure ubiquitous computing applications

    NASA Astrophysics Data System (ADS)

    Uchida, Ken; Tanamoto, Tetsufumi; Fujita, Shinobu

    2007-11-01

    Since the security of all modern cryptographic techniques relies on unpredictable and irreproducible digital keys generated by random-number generators (RNGs), the realization of high-quality RNG is essential for secure communications. In this report, a new RNG, which utilizes single-electron phenomena, is proposed. A room-temperature operating silicon single-electron transistor (SET) having nearby an electron pocket is used as a high-quality, ultra-small RNG. In the proposed RNG, stochastic single-electron capture/emission processes to/from the electron pocket are detected with high sensitivity by the SET, and result in giant random telegraphic signals (GRTS) on the SET current. It is experimentally demonstrated that the single-electron RNG generates extremely high-quality random digital sequences at room temperature, in spite of its simple configuration. Because of its small-size and low-power properties, the single-electron RNG is promising as a key nanoelectronic device for future ubiquitous computing systems with highly secure mobile communication capabilities.

  7. Beyond Moore's law: towards competitive quantum devices

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2015-05-01

    A century after the invention of quantum theory and fifty years after Bell's inequality, we see the first quantum devices emerge as products that aim to be competitive with the best classical computing devices. While a universal quantum computer of non-trivial size is still out of reach, there exist a number of commercial and experimental devices: quantum random number generators, quantum simulators and quantum annealers. In this colloquium I will present some of these devices and the validation tests we performed on them. Quantum random number generators use the inherent randomness in quantum measurements to produce true random numbers, unlike classical pseudorandom number generators, which are inherently deterministic. Optical lattice emulators use ultracold atomic gases in optical lattices to mimic typical models of condensed matter physics. In my talk I will focus especially on the devices built by the Canadian company D-Wave Systems, which are special-purpose quantum simulators for solving hard classical optimization problems. I will review the controversy around the quantum nature of these devices and compare them to state-of-the-art classical algorithms. I will conclude with an outlook towards universal quantum computing and the question: which important problems that are intractable even for post-exa-scale classical computers could we expect to solve once we have a universal quantum computer?

  8. Random sampling of constrained phylogenies: conducting phylogenetic analyses when the phylogeny is partially known.

    PubMed

    Housworth, E A; Martins, E P

    2001-01-01

    Statistical randomization tests in evolutionary biology often require a set of random, computer-generated trees. For example, earlier studies have shown how large numbers of computer-generated trees can be used to conduct phylogenetic comparative analyses even when the phylogeny is uncertain or unknown. These methods were limited, however, in that (in the absence of molecular sequence or other data) they allowed users to assume that no phylogenetic information was available or that all possible trees were known. Intermediate situations where only a taxonomy or other limited phylogenetic information (e.g., polytomies) are available are technically more difficult. The current study describes a procedure for generating random samples of phylogenies while incorporating limited phylogenetic information (e.g., four taxa belong together in a subclade). The procedure can be used to conduct comparative analyses when the phylogeny is only partially resolved or can be used in other randomization tests in which large numbers of possible phylogenies are needed.

  9. Generating Random Samples of a Given Size Using Social Security Numbers.

    ERIC Educational Resources Information Center

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  10. Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode.

    PubMed

    Abellán, C; Amaya, W; Jofre, M; Curty, M; Acín, A; Capmany, J; Pruneri, V; Mitchell, M W

    2014-01-27

    We demonstrate a high bit-rate quantum random number generator by interferometric detection of phase diffusion in a gain-switched DFB laser diode. Gain switching at few-GHz frequencies produces a train of bright pulses with nearly equal amplitudes and random phases. An unbalanced Mach-Zehnder interferometer is used to interfere subsequent pulses and thereby generate strong random-amplitude pulses, which are detected and digitized to produce a high-rate random bit string. Using established models of semiconductor laser field dynamics, we predict a regime of high visibility interference and nearly complete vacuum-fluctuation-induced phase diffusion between pulses. These are confirmed by measurement of pulse power statistics at the output of the interferometer. Using a 5.825 GHz excitation rate and 14-bit digitization, we observe 43 Gbps quantum randomness generation.

  11. Simulations Using Random-Generated DNA and RNA Sequences

    ERIC Educational Resources Information Center

    Bryce, C. F. A.

    1977-01-01

    Using a very simple computer program written in BASIC, a very large number of random-generated DNA or RNA sequences are obtained. Students use these sequences to predict complementary sequences and translational products, evaluate base compositions, determine frequencies of particular triplet codons, and suggest possible secondary structures.…

  12. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)^2). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  13. FPGA and USB based control board for quantum random number generator

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Wan, Xu; Zhang, Hong-Fei; Gao, Yuan; Chen, Teng-Yun; Liang, Hao

    2009-09-01

    The design and implementation of an FPGA- and USB-based control board for quantum experiments are discussed. The use of a quantum true random number generator, the control logic in the FPGA, and communication with a computer through the USB protocol are described. Programmable controlled signal input and output ports are implemented, and error detection for the data frame header and frame length is designed. This board has been used successfully in our decoy-state based quantum key distribution (QKD) system.

  14. Random motor generation in a finger tapping task: influence of spatial contingency and of cortical and subcortical hemispheric brain lesions

    PubMed Central

    Annoni, J.; Pegna, A.

    1997-01-01

    OBJECTIVE—To test the hypothesis that, during random motor generation, the spatial contingencies inherent to the task would induce additional preferences in normal subjects, shifting their performances farther from randomness. By contrast, perceptual or executive dysfunction could alter these task related biases in patients with brain damage.
METHODS—Two groups of patients, with right and left focal brain lesions, as well as 25 right handed subjects matched for age and handedness were asked to execute a random choice motor task—namely, to generate a random series of 180 button presses from a set of 10 keys placed vertically in front of them.
RESULTS—In the control group, as in the left brain lesion group, motor generation was subject to deviations from theoretical expected randomness, similar to those when numbers are generated mentally, as immediate repetitions (successive presses on the same key) are avoided. However, the distribution of button presses was also contingent on the topographic disposition of the keys: the central keys were chosen more often than those placed at extreme positions. Small distances were favoured, particularly with the left hand. These patterns were influenced by implicit strategies and task related contingencies.
 By contrast, right brain lesion patients with frontal involvement tended to show a more square distribution of key presses—that is, the number of key presses tended to be more equally distributed. The strategies were also altered by brain lesions: the number of immediate repetitions was more frequent when the lesion involved the right frontal areas yielding a random generation nearer to expected theoretical randomness. The frequency of adjacent key presses was increased by right anterior and left posterior cortical as well as by right subcortical lesions, but decreased by left subcortical lesions.
CONCLUSIONS—Depending on the side of the lesion and the degree of cortical-subcortical involvement, the deficits take on a different aspect, and direct repetitions and adjacent key presses show different patterns of alteration. Motor random generation is therefore a complex task which seems to necessitate the participation of numerous cerebral structures, among which those situated in the right frontal, left posterior, and subcortical regions have a predominant role.

 PMID:9408109

  15. Novel pseudo-random number generator based on quantum random walks.

    PubMed

    Yang, Yu-Guang; Zhao, Qian-Qian

    2016-02-04

    In this paper, we investigate the potential application of quantum computation for constructing pseudo-random number generators (PRNGs) and construct a novel PRNG based on quantum random walks (QRWs), a famous quantum computation model. The PRNG relies merely on the equations used in the QRWs, so the generation algorithm is simple and the computation speed is fast. The proposed PRNG is subjected to statistical test suites such as NIST and passes them successfully. Compared with a representative PRNG based on quantum chaotic maps (QCM), the present QRWs-based PRNG has advantages such as better statistical complexity and recurrence. For example, the normalized Shannon entropy and the statistical complexity of the QRWs-based PRNG are 0.999699456771172 and 1.799961178212329e-04, respectively, for 8-bit words over a 16 Mbit sequence. By contrast, the corresponding values of the QCM-based PRNG are 0.999448131481064 and 3.701210794388818e-04, respectively. Thus the statistical complexity and the normalized entropy of the QRWs-based PRNG are closer to 0 and 1, respectively, than those of the QCM-based PRNG as the number of words of the analyzed sequence increases. This provides a new clue for constructing PRNGs and also extends the applications of quantum computation.

  16. Novel pseudo-random number generator based on quantum random walks

    PubMed Central

    Yang, Yu-Guang; Zhao, Qian-Qian

    2016-01-01

    In this paper, we investigate the potential application of quantum computation for constructing pseudo-random number generators (PRNGs) and construct a novel PRNG based on quantum random walks (QRWs), a famous quantum computation model. The PRNG relies merely on the equations used in the QRWs, so the generation algorithm is simple and the computation speed is fast. The proposed PRNG is subjected to statistical test suites such as NIST and passes them successfully. Compared with a representative PRNG based on quantum chaotic maps (QCM), the present QRWs-based PRNG has advantages such as better statistical complexity and recurrence. For example, the normalized Shannon entropy and the statistical complexity of the QRWs-based PRNG are 0.999699456771172 and 1.799961178212329e-04, respectively, for 8-bit words over a 16 Mbit sequence. By contrast, the corresponding values of the QCM-based PRNG are 0.999448131481064 and 3.701210794388818e-04, respectively. Thus the statistical complexity and the normalized entropy of the QRWs-based PRNG are closer to 0 and 1, respectively, than those of the QCM-based PRNG as the number of words of the analyzed sequence increases. This provides a new clue for constructing PRNGs and also extends the applications of quantum computation. PMID:26842402

  17. Multi-peak structure of generation spectrum of random distributed feedback fiber Raman lasers.

    PubMed

    Vatnik, I D; Zlobina, E A; Kablukov, S I; Babin, S A

    2017-02-06

    We study spectral features of the generation of a random distributed feedback fiber Raman laser arising from the two-peak shape of the Raman gain spectral profile of germanosilicate fibers. We demonstrate that the number of peaks can be calculated using a power balance model that considers different subcomponents within each Stokes component.

  18. The Effects of Two Generative Activities on Learner Comprehension of Part-Whole Meaning of Rational Numbers Using Virtual Manipulatives

    ERIC Educational Resources Information Center

    Trespalacios, Jesus

    2010-01-01

    This study investigated the effects of two generative learning activities on students' academic achievement of the part-whole meaning of rational numbers while using virtual manipulatives. Third-grade students were divided randomly in two groups to evaluate the effects of two generative learning activities: answering-questions and…

  19. Statistical auditing and randomness test of lotto k/N-type games

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
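
    For reference, the standard sampling-without-replacement (hypergeometric) model gives simple closed forms for the per-number indicator variables, which is the kind of benchmark such an audit compares against. The sketch below states them and checks one by simulation; the simulated draws are only a stand-in for real historical results, and the exact expressions used in the paper may be written differently.

        # Hedged sketch of the hypergeometric benchmark for a lotto k/N draw:
        # for the indicator "number i was drawn", E = k/N, Var = (k/N)(1 - k/N),
        # and Cov = -k(N - k) / (N^2 (N - 1)) for i != j.
        import random

        def theoretical_moments(k, N):
            p = k / N
            return {"mean": p,
                    "variance": p * (1 - p),
                    "covariance": -k * (N - k) / (N ** 2 * (N - 1))}

        def empirical_frequency(k, N, number, draws):
            hits = sum(number in random.sample(range(1, N + 1), k)
                       for _ in range(draws))
            return hits / draws

        if __name__ == "__main__":
            k, N = 6, 49
            print(theoretical_moments(k, N))
            print(round(empirical_frequency(k, N, number=13, draws=20000), 4))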

  20. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.

  1. Chaotic oscillation and random-number generation based on nanoscale optical-energy transfer.

    PubMed

    Naruse, Makoto; Kim, Song-Ju; Aono, Masashi; Hori, Hirokazu; Ohtsu, Motoichi

    2014-08-12

    By using nanoscale energy-transfer dynamics and density matrix formalism, we demonstrate theoretically and numerically that chaotic oscillation and random-number generation occur in a nanoscale system. The physical system consists of a pair of quantum dots (QDs), with one QD smaller than the other, between which energy transfers via optical near-field interactions. When the system is pumped by continuous-wave radiation and incorporates a timing delay between two energy transfers within the system, it emits optical pulses. We refer to such QD pairs as nano-optical pulsers (NOPs). Irradiating an NOP with external periodic optical pulses causes the oscillating frequency of the NOP to synchronize with the external stimulus. We find that chaotic oscillation occurs in the NOP population when they are connected by an external time delay. Moreover, by evaluating the time-domain signals by statistical-test suites, we confirm that the signals are sufficiently random to qualify the system as a random-number generator (RNG). This study reveals that even relatively simple nanodevices that interact locally with each other through optical energy transfer at scales far below the wavelength of irradiating light can exhibit complex oscillatory dynamics. These findings are significant for applications such as ultrasmall RNGs.

  2. Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.

    ERIC Educational Resources Information Center

    Oulman, Charles S.; Lee, Motoko Y.

    Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…

  3. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.

  4. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    PubMed Central

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456

  5. Random Number Generation for High Performance Computing

    DTIC Science & Technology

    2015-01-01

    …number streams, a quality metric for the parallel random number streams.

  6. Bird's-eye view on noise-based logic.

    PubMed

    Kish, Laszlo B; Granqvist, Claes G; Horvath, Tamas; Klappenecker, Andreas; Wen, He; Bezrukov, Sergey M

    2014-01-01

    Noise-based logic is a practically deterministic logic scheme inspired by the randomness of neural spikes and uses a system of uncorrelated stochastic processes and their superposition to represent the logic state. We briefly discuss various questions such as (i) What does practical determinism mean? (ii) Is noise-based logic a Turing machine? (iii) Is there hope to beat (the dreams of) quantum computation by a classical physical noise-based processor, and what are the minimum hardware requirements for that? Finally, (iv) we address the problem of random number generators and show that the common belief that quantum number generators are superior to classical (thermal) noise-based generators is nothing but a myth.

  7. Bird's-eye view on noise-based logic

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Granqvist, Claes G.; Horvath, Tamas; Klappenecker, Andreas; Wen, He; Bezrukov, Sergey M.

    2014-09-01

    Noise-based logic is a practically deterministic logic scheme inspired by the randomness of neural spikes and uses a system of uncorrelated stochastic processes and their superposition to represent the logic state. We briefly discuss various questions such as (i) What does practical determinism mean? (ii) Is noise-based logic a Turing machine? (iii) Is there hope to beat (the dreams of) quantum computation by a classical physical noise-based processor, and what are the minimum hardware requirements for that? Finally, (iv) we address the problem of random number generators and show that the common belief that quantum number generators are superior to classical (thermal) noise-based generators is nothing but a myth.

  8. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  9. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  10. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  11. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  12. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  13. Device-independent randomness generation from several Bell estimators

    NASA Astrophysics Data System (ADS)

    Nieto-Silleras, Olmo; Bamps, Cédric; Silman, Jonathan; Pironio, Stefano

    2018-02-01

    Device-independent randomness generation and quantum key distribution protocols rely on a fundamental relation between the non-locality of quantum theory and its random character. This relation is usually expressed in terms of a trade-off between the probability of guessing correctly the outcomes of measurements performed on quantum systems and the amount of violation of a given Bell inequality. However, a more accurate assessment of the randomness produced in Bell experiments can be obtained if the value of several Bell expressions is simultaneously taken into account, or if the full set of probabilities characterizing the behavior of the device is considered. We introduce protocols for device-independent randomness generation secure against classical side information, that rely on the estimation of an arbitrary number of Bell expressions or even directly on the experimental frequencies of measurement outcomes. Asymptotically, this results in an optimal generation of randomness from experimental data (as measured by the min-entropy), without having to assume beforehand that the devices violate a specific Bell inequality.
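
    The min-entropy figure of merit mentioned above is simply H_min = -log2(p_guess); the tiny sketch below evaluates it for a few illustrative guessing probabilities (in the protocols, p_guess would be bounded from the observed Bell statistics rather than assumed).

        # Hedged illustration of the min-entropy measure: H_min = -log2(p_guess).
        # The p_guess values are illustrative placeholders.
        import math

        def min_entropy_bits(p_guess):
            return -math.log2(p_guess)

        if __name__ == "__main__":
            for p in (1.0, 0.75, 0.5):
                print(f"p_guess = {p:.2f} -> H_min = {min_entropy_bits(p):.3f} bits per outcome")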

  14. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  15. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  16. Design of high-throughput and low-power true random number generator utilizing perpendicularly magnetized voltage-controlled magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.

    2017-05-01

    A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications, where a stable bit is needed to store information, in this work the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of the anisotropy to temporarily drive the MTJ into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is chosen randomly by thermal fluctuation. The voltage-controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by integrating a 64 × 64 MTJ array into CMOS circuits and executing operations in parallel. Furthermore, the circuit consumes very little energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of voltage-controlled MTJ switching.

  17. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
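
    The central idea of the record above — iterating the Bernoulli (doubling) map exactly on numbers of the form a + b·sqrt(2) with rational coefficients, so that no floating-point round-off contaminates the orbit — can be sketched in a few lines. The choice of sqrt(2), the default seed sqrt(2) − 1 and the comparison routine below are illustrative; the published generator works with quadratic algebraic integers and a specific seed-selection scheme that this sketch does not reproduce.

      from fractions import Fraction

      def ge_half_doubled(a, b):
          """Exactly decide whether 2*(a + b*sqrt(2)) >= 1 for rational a, b."""
          c = 2 * b                      # compare c*sqrt(2) against t = 1 - 2a
          t = 1 - 2 * a
          if c >= 0:
              return t <= 0 or 2 * c * c >= t * t
          return t < 0 and 2 * c * c <= t * t

      def bernoulli_bits(n, a=Fraction(-1), b=Fraction(1)):
          """n bits from the true orbit of x -> 2x mod 1 with x0 = a + b*sqrt(2) (default sqrt(2) - 1)."""
          bits = []
          for _ in range(n):
              bit = 1 if ge_half_doubled(a, b) else 0
              # x <- 2x - bit stays of the form a + b*sqrt(2); coefficient sizes grow, but exactly
              a, b = 2 * a - bit, 2 * b
              bits.append(bit)
          return bits

      print("".join(map(str, bernoulli_bits(32))))   # the binary expansion of sqrt(2) - 1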

  18. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. The FPGA presents both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates the four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
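
    For reference, an additive lagged Fibonacci generator of the kind evaluated in the record above follows the recurrence x[n] = (x[n-s] + x[n-r]) mod 2^m. The lags (55, 24), word size and seeding in the sketch below are common textbook choices, not necessarily the parameters of the cited FPGA implementation.

      import random

      class ALFG:
          """Additive lagged Fibonacci generator: x[n] = (x[n-s] + x[n-r]) mod 2**m."""
          def __init__(self, r=55, s=24, m=32, seed=123456789):
              init = random.Random(seed)                    # only used to fill the lag table
              self.r, self.s, self.mask = r, s, (1 << m) - 1
              # the lag table needs r seed words, at least one of them odd
              self.state = [init.getrandbits(m) | (1 if i == 0 else 0) for i in range(r)]
              self.i = 0
          def next(self):
              r, s, n = self.r, self.s, self.i
              x = (self.state[(n - s) % r] + self.state[(n - r) % r]) & self.mask
              self.state[n % r] = x                          # overwrite the oldest lag entry
              self.i += 1
              return x

      g = ALFG()
      print([g.next() for _ in range(5)])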

  19. Statistical complexity measure of pseudorandom bit generators

    NASA Astrophysics Data System (ADS)

    González, C. M.; Larrondo, H. A.; Rosso, O. A.

    2005-08-01

    Pseudorandom number generators (PRNGs) are extensively used in Monte Carlo simulations, gambling machines and cryptography as substitutes for ideal random number generators (RNGs). Each application imposes different statistical requirements on PRNGs. As L’Ecuyer clearly states, “the main goal for Monte Carlo methods is to reproduce the statistical properties on which these methods are based, whereas for gambling machines and cryptology, observing the sequence of output values for some time should provide no practical advantage for predicting the forthcoming numbers better than by just guessing at random”. In accordance with these different applications, several statistical test suites have been developed to analyze the sequences generated by PRNGs. In a recent paper a new statistical complexity measure [Phys. Lett. A 311 (2003) 126] was defined. Here we propose this measure as a randomness quantifier of PRNGs. The test is applied to three well-known and widely tested PRNGs available in the literature, all of them based on mathematical algorithms. Another PRNG, based on the 3D Lorenz chaotic dynamical system, is also analyzed. PRNGs based on chaos may be considered as models for physical noise sources, and important new results on them have recently been reported. All the design steps of this PRNG are described, and each stage increases the PRNG randomness using different strategies. It is shown that the MPR statistical complexity measure is capable of quantifying this randomness improvement. The PRNG based on the chaotic 3D Lorenz dynamical system is also evaluated using traditional digital signal processing tools for comparison.
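
    One common form of the MPR (Martin-Plastino-Rosso) statistical complexity referenced above is C = Q_J[P, P_e] · H[P], where H is the normalized Shannon entropy of a probability distribution P extracted from the sequence and Q_J is the Jensen-Shannon divergence between P and the uniform distribution P_e, normalized by its maximum value. The sketch below estimates P from a simple histogram of overlapping 8-bit patterns; the published work uses more refined distributions, so this is only an illustration of the formula, not the authors' exact procedure.

      import math
      import random
      from collections import Counter

      def shannon(p):
          return -sum(x * math.log(x) for x in p if x > 0)

      def mpr_complexity(bits, k=8):
          n_sym = 2 ** k
          # empirical distribution of overlapping k-bit patterns
          counts = Counter(tuple(bits[i:i + k]) for i in range(len(bits) - k + 1))
          total = sum(counts.values())
          p = [c / total for c in counts.values()] + [0.0] * (n_sym - len(counts))
          pe = [1.0 / n_sym] * n_sym
          h_norm = shannon(p) / math.log(n_sym)
          def js(p, q):
              m = [(a + b) / 2 for a, b in zip(p, q)]
              return shannon(m) - (shannon(p) + shannon(q)) / 2
          # normalize the JS divergence by its maximum, attained when P is a delta distribution
          delta = [1.0] + [0.0] * (n_sym - 1)
          q_j = js(p, pe) / js(delta, pe)
          return q_j * h_norm      # an ideal RNG gives H close to 1 and Q_J close to 0, so C close to 0

      rng = random.Random(0)
      print(mpr_complexity([rng.getrandbits(1) for _ in range(200000)]))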

  20. Noise generator for tinnitus treatment based on look-up tables

    NASA Astrophysics Data System (ADS)

    Uriz, Alejandro J.; Agüero, Pablo; Tulli, Juan C.; Castiñeira Moreira, Jorge; González, Esteban; Hidalgo, Roberto; Casadei, Manuel

    2016-04-01

    Treatment of tinnitus by means of masking sounds allows a significant improvement in the quality of life of individuals who suffer from this condition. In view of that, it is possible to develop noise synthesizers based on random number generators in the digital signal processors (DSPs) that are used in almost all digital hearing aid devices. DSP architectures have limitations when implementing a pseudo-random number generator; because of this, the noise statistics may not be as good as expected. In this paper, a technique to generate additive white Gaussian noise (AWGN) or other types of filtered noise using coefficients stored in the program memory of the DSP is proposed. An implementation of the technique is also carried out on a dsPIC from Microchip®. Objective experiments and experimental measurements are performed to analyze the proposed technique.
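
    A rough sketch of the look-up-table idea: Gaussian (or filtered) noise samples are precomputed offline and stored in a table, and the runtime "generator" only reads the table with a cheap index update, which suits DSPs with limited arithmetic for pseudo-random generation. The table size, index stride and scaling below are illustrative assumptions, not the coefficients of the cited dsPIC implementation.

      import math, random

      def build_awgn_table(size=4096, sigma=1.0, seed=7):
          """Offline step: any good generator may be used to fill the program-memory table."""
          rng = random.Random(seed)
          return [rng.gauss(0.0, sigma) for _ in range(size)]

      class TableNoise:
          """Runtime noise source: table reads with a stride coprime to the table length."""
          def __init__(self, table, stride=1031):
              self.table, self.stride, self.idx = table, stride, 0
          def sample(self):
              self.idx = (self.idx + self.stride) % len(self.table)
              return self.table[self.idx]

      noise = TableNoise(build_awgn_table())
      samples = [noise.sample() for _ in range(10000)]
      print(sum(samples) / len(samples),                                   # mean, near 0
            math.sqrt(sum(s * s for s in samples) / len(samples)))         # RMS, near sigma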

  1. Clustering, randomness and regularity in cloud fields. I - Theoretical considerations. II - Cumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.

    1992-01-01

    The current controversy regarding regularity vs. clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models and that pseudorandom-number-generated models bias the observed nearest-neighbor statistics towards regularity. The interpretation of these nearest-neighbor statistics is discussed for many cases of superpositions of clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
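
    The nearest-neighbor cumulative distribution statistic used in this record can be reproduced for a homogeneous Poisson point process, whose nearest-neighbor distance CDF is F(r) = 1 − exp(−λπr²). The sketch below compares an empirical CDF from a simulated Poisson field with this expression; the intensity and window size are arbitrary, and edge corrections (which matter in a careful analysis) are ignored.

      import math, random

      def poisson_field(lam, side, rng):
          """Homogeneous Poisson process: Poisson-distributed count, uniform positions."""
          mean = lam * side * side
          n, p, floor = 0, 1.0, math.exp(-mean)
          while True:                       # draw N ~ Poisson(mean) by multiplication of uniforms
              p *= rng.random()
              if p <= floor:
                  break
              n += 1
          return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

      def nn_distances(pts):
          return [min(math.hypot(x - u, y - v) for j, (u, v) in enumerate(pts) if j != i)
                  for i, (x, y) in enumerate(pts)]

      rng = random.Random(3)
      lam, side = 2.0, 10.0                 # about 200 points in a 10 x 10 window
      d = nn_distances(poisson_field(lam, side, rng))
      for r in (0.1, 0.2, 0.3, 0.4):
          empirical = sum(1 for x in d if x <= r) / len(d)
          theory = 1 - math.exp(-lam * math.pi * r * r)
          print(f"r={r:.1f}  empirical F(r)={empirical:.3f}  Poisson theory={theory:.3f}")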

  2. The Evolution of Random Number Generation in MUVES

    DTIC Science & Technology

    2017-01-01

    ... MUVES, including the mathematical basis and statistical justification for algorithms used in the code. The working code provided produces results identical to the current... questionable numerical and statistical properties. The development of the modern system is traced through software change requests, resulting in a random number

  3. Evaluation of Inventory Reduction Strategies: Balad Air Base Case Study

    DTIC Science & Technology

    2012-03-01

    produced by conducting individual simulations using a unique random seed generated by the default AnyLogic© random number generator. The ... develops an agent-based simulation model of the sustainment supply chain supporting Balad AB during its closure using the software AnyLogic®. The ... research. The goal of USAF Stockage Policy is to maximize customer support while minimizing inventory costs (DAF, 2011:1). USAF stocking decisions

  4. Autonomous Byte Stream Randomizer

    NASA Technical Reports Server (NTRS)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, data transmission has to be efficient, without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed into one. Reconstruction requires possession of the key used for randomizing the bytes, which leads to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability, possessing the ability to generate the same cryptographically secure sequence on different machines and time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
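
    The essential mechanism described above — a keyed, in-place Fisher-Yates permutation of a byte stream that can only be undone by regenerating the same seeded index sequence — can be sketched as follows. Python's random module is used purely as a stand-in; a real implementation would use a cryptographically secure generator, as the record specifies.

      import random

      def swap_indices(n, key):
          """Regenerate the Fisher-Yates swap sequence from the shared key (stand-in PRNG)."""
          rng = random.Random(key)
          return [rng.randint(0, i) for i in range(n - 1, 0, -1)]

      def randomize(data: bytes, key) -> bytes:
          buf = bytearray(data)
          for i, j in zip(range(len(buf) - 1, 0, -1), swap_indices(len(buf), key)):
              buf[i], buf[j] = buf[j], buf[i]              # unbiased in-place Fisher-Yates swap
          return bytes(buf)

      def reconstruct(data: bytes, key) -> bytes:
          buf = bytearray(data)
          swaps = list(zip(range(len(buf) - 1, 0, -1), swap_indices(len(buf), key)))
          for i, j in reversed(swaps):                     # undo the swaps in reverse order
              buf[i], buf[j] = buf[j], buf[i]
          return bytes(buf)

      msg = b"net-centric byte stream"
      scrambled = randomize(msg, key=0xC0FFEE)
      assert reconstruct(scrambled, key=0xC0FFEE) == msg
      print(scrambled)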

  5. Random phase encoding for optical security

    NASA Astrophysics Data System (ADS)

    Wang, RuiKang K.; Watson, Ian A.; Chatwin, Christopher R.

    1996-09-01

    A new optical encoding method for security applications is proposed. The encoded image (encrypted into the security products) is merely a random phase image statistically and randomly generated by a random number generator using a computer, which contains no information from the reference pattern (stored for verification) or the frequency plane filter (a phase-only function for decoding). The phase function in the frequency plane is obtained using a modified phase retrieval algorithm. The proposed method uses two phase-only functions (images) at both the input and frequency planes of the optical processor leading to maximum optical efficiency. Computer simulation shows that the proposed method is robust for optical security applications.

  6. The link between mental rotation ability and basic numerical representations

    PubMed Central

    Thompson, Jacqueline M.; Nuerk, Hans-Christoph; Moeller, Korbinian; Cohen Kadosh, Roi

    2013-01-01

    Mental rotation and number representation have both been studied widely, but although mental rotation has been linked to higher-level mathematical skills, to date it has not been shown whether mental rotation ability is linked to the most basic mental representation and processing of numbers. To investigate the possible connection between mental rotation abilities and numerical representation, 43 participants completed four tasks: 1) a standard pen-and-paper mental rotation task; 2) a multi-digit number magnitude comparison task assessing the compatibility effect, which indicates separate processing of decade and unit digits; 3) a number-line mapping task, which measures precision of number magnitude representation; and 4) a random number generation task, which yields measures both of executive control and of spatial number representations. Results show that mental rotation ability correlated significantly with both size of the compatibility effect and with number mapping accuracy, but not with any measures from the random number generation task. Together, these results suggest that higher mental rotation abilities are linked to more developed number representation, and also provide further evidence for the connection between spatial and numerical abilities. PMID:23933002

  7. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of our developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and the Miller-Park algorithm through employing specific standard tests. The results of this comparison show that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
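
    One of the building blocks named in this record, the Xorshift PRNG, is simple enough to state in a few lines. The sketch below is Marsaglia's 32-bit xorshift with the common (13, 17, 5) shift triple; the cited GPU generator combines it with a middle-square step and a chaotic map, which are not reproduced here.

      def xorshift32(seed):
          """Marsaglia xorshift32: period 2**32 - 1 for any non-zero seed."""
          x = seed & 0xFFFFFFFF
          assert x != 0, "seed must be non-zero"
          while True:
              x ^= (x << 13) & 0xFFFFFFFF
              x ^= x >> 17
              x ^= (x << 5) & 0xFFFFFFFF
              yield x

      gen = xorshift32(2463534242)
      print([next(gen) for _ in range(4)])
      print(next(gen) / 2**32)            # map a raw word to a uniform float in [0, 1)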

  8. Type I and Type II Error Rates and Overall Accuracy of the Revised Parallel Analysis Method for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo

    2015-01-01

    Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…

  9. Seminar on Understanding Digital Control and Analysis in Vibration Test Systems, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A number of techniques for dealing with important technical aspects of the random vibration control problem are described. These include the generation of pseudo-random and true random noise, the control spectrum estimation problem, the accuracy/speed tradeoff, and control correction strategies. System hardware, the operator-system interface, safety features, and operational capabilities of sophisticated digital random vibration control systems are also discussed.

  10. Small Private Key MQPKS on an Embedded Microprocessor

    PubMed Central

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-01-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared with previous results in CHES2012. PMID:24651722

  11. Small private key MQPKS on an embedded microprocessor.

    PubMed

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-03-19

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared with previous results in CHES2012.

  12. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
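
    As the report notes, the uniform and normal generators can be combined with the distribution routines to draw from other distributions. A standard way to do this for the Weibull distribution (one of those listed) is inverse-transform sampling, sketched below; the shape and scale values are arbitrary examples.

      import math, random

      def weibull_inverse_transform(n, shape, scale, seed=42):
          """Draw Weibull(shape, scale) variates as X = scale * (-ln U)**(1/shape), U ~ Uniform(0,1)."""
          rng = random.Random(seed)
          return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape) for _ in range(n)]

      x = weibull_inverse_transform(100000, shape=1.5, scale=2.0)
      mean_est = sum(x) / len(x)
      mean_exact = 2.0 * math.gamma(1 + 1 / 1.5)      # Weibull mean: scale * Gamma(1 + 1/shape)
      print(mean_est, mean_exact)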

  13. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  14. Modeling 2D and 3D diffusion.

    PubMed

    Saxton, Michael J

    2007-01-01

    Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
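
    A minimal Monte Carlo sketch of the obstructed random walks discussed in this review: a tracer performs a nearest-neighbour walk on a 2D lattice in which a fraction of sites is blocked, and the mean-square displacement is averaged over walkers. The obstacle fraction, lattice size and step count are arbitrary illustrative values, and no event-driven speed-ups are attempted.

      import random

      def obstructed_msd(n_walkers=500, n_steps=1000, size=200, obstacle_frac=0.3, seed=1):
          rng = random.Random(seed)
          blocked = {(x, y) for x in range(size) for y in range(size)
                     if rng.random() < obstacle_frac}
          moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
          msd = 0.0
          for _ in range(n_walkers):
              while True:                                   # start each tracer on a free site
                  x0, y0 = rng.randrange(size), rng.randrange(size)
                  if (x0, y0) not in blocked:
                      break
              x, y = x0, y0
              for _ in range(n_steps):
                  dx, dy = rng.choice(moves)
                  if ((x + dx) % size, (y + dy) % size) not in blocked:
                      x, y = x + dx, y + dy                 # unwrapped position; obstacles are periodic
                  # moves into blocked sites are simply rejected
              msd += (x - x0) ** 2 + (y - y0) ** 2
          return msd / n_walkers

      # for an unobstructed lattice walk the MSD after n_steps would be about n_steps
      print(obstructed_msd())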

  15. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

    The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration consisting of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples, provided that they are illuminated by polychromatic light.

  16. A random spatial network model based on elementary postulates

    USGS Publications Warehouse

    Karlinger, Michael R.; Troutman, Brent M.

    1989-01-01

    A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.

  17. Heterogeneous Suppression of Sequential Effects in Random Sequence Generation, but Not in Operant Learning.

    PubMed

    Shteingart, Hanan; Loewenstein, Yonatan

    2016-01-01

    There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonously with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonous decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogenous across the population. These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
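
    The logistic-regression characterization of sequential effects used in this paper can be sketched as follows: each binary response is regressed on the k previous responses, and the fitted weights estimate how strongly each lag influences the next choice. The synthetic "participant" below, who tends to alternate, and the choice of k = 5 lags are illustrative assumptions; scikit-learn is assumed to be available.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def simulate_participant(n=5000, p_repeat=0.35, seed=0):
          """Synthetic random-generation-task data with a mild tendency to alternate."""
          rng = np.random.default_rng(seed)
          seq = [int(rng.integers(0, 2))]
          for _ in range(n - 1):
              seq.append(seq[-1] if rng.random() < p_repeat else 1 - seq[-1])
          return np.array(seq)

      def sequential_effects(seq, k=5):
          """Fit P(choice_t = 1 | choices at lags 1..k) and return the per-lag weights."""
          X = np.column_stack([seq[k - lag:len(seq) - lag] for lag in range(1, k + 1)])
          y = seq[k:]
          model = LogisticRegression().fit(X, y)
          return model.coef_[0]      # a negative weight at lag 1 reflects an alternation bias

      print(sequential_effects(simulate_participant()))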

  18. A New Model that Generates Lotka's Law.

    ERIC Educational Resources Information Center

    Huber, John C.

    2002-01-01

    Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)

  19. Response Surface Analysis of Stochastic Network Performance

    DTIC Science & Technology

    1988-12-01

    ... ELSE GO TO 50 END IF GO TO 50 100 END ...

    * RANDOM NUMBER GENERATOR
      FUNCTION RANDOM(IX)
      INTEGER A,P,IX,B15,B16,XHI,XALO,LEFTLO,FHI,K
      DATA A/16807/,B15/32768/,B16/65536/,P/2147483647/
      XHI=IX/B16
      XALO=(IX-XHI*B16)*A
      LEFTLO=XALO/B16
      FHI=XHI*A+LEFTLO
      K=FHI/B15
      IX=(((XALO-LEFTLO*B16)-P)+(FHI-K*B15)*B16)+K
      IF(IX.LT.0) IX=IX+P
      RANDOM=FLOAT(IX)*4.656612875E-10
      RETURN
      END

    * NETWORK ENTRY and
    * PATHSET AND CUTSET GENERATION SUBROUTINE
      SUBROUTINE ...
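
    For readers without a Fortran toolchain, the integer generator in the excerpt above (Lehmer's multiplicative congruential generator with a = 16807 and m = 2^31 - 1, implemented with Schrage-style splitting via B15 and B16 to avoid overflow) can be transcribed directly; in Python the modular product fits in arbitrary-precision integers, so the splitting is unnecessary.

      def lehmer_minimal_standard(seed):
          """x_{n+1} = 16807 * x_n mod (2**31 - 1), the generator in the Fortran excerpt above."""
          a, m = 16807, 2**31 - 1
          x = seed % m
          assert x != 0, "seed must not be congruent to 0 modulo 2**31 - 1"
          while True:
              x = (a * x) % m
              yield x

      g = lehmer_minimal_standard(1)
      ints = [next(g) for _ in range(10000)]
      print(ints[9999])                       # widely quoted Park-Miller check value: 1043618065
      print(ints[0] * 4.656612875e-10)        # the Fortran listing's scaling to a float in (0, 1)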

  20. Astronomical random numbers for quantum foundations experiments

    NASA Astrophysics Data System (ADS)

    Leung, Calvin; Brown, Amy; Nguyen, Hien; Friedman, Andrew S.; Kaiser, David I.; Gallicchio, Jason

    2018-04-01

    Photons from distant astronomical sources can be used as a classical source of randomness to improve fundamental tests of quantum nonlocality, wave-particle duality, and local realism through Bell's inequality and delayed-choice quantum eraser tests inspired by Wheeler's cosmic-scale Mach-Zehnder interferometer gedanken experiment. Such sources of random numbers may also be useful for information-theoretic applications such as key distribution for quantum cryptography. Building on the design of an astronomical random number generator developed for the recent cosmic Bell experiment [Handsteiner et al. Phys. Rev. Lett. 118, 060401 (2017), 10.1103/PhysRevLett.118.060401], in this paper we report on the design and characterization of a device that, with 20-nanosecond latency, outputs a bit based on whether the wavelength of an incoming photon is greater than or less than ≈700 nm. Using the one-meter telescope at the Jet Propulsion Laboratory Table Mountain Observatory, we generated random bits from astronomical photons in both color channels from 50 stars of varying color and magnitude, and from 12 quasars with redshifts up to z = 3.9. With stars, we achieved bit rates of ~1 × 10^6 Hz/m^2, limited by saturation of our single-photon detectors, and with quasars of magnitudes between 12.9 and 16, we achieved rates between ~10^2 and 2 × 10^3 Hz/m^2. For bright quasars, the resulting bitstreams exhibit sufficiently low amounts of statistical predictability as quantified by the mutual information. In addition, a sufficiently high fraction of bits generated are of true astronomical origin in order to address both the locality and freedom-of-choice loopholes when used to set the measurement settings in a test of the Bell-CHSH inequality.

  1. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography.

    PubMed

    Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang

    2010-08-16

    Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: a high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information-theoretic analysis, we demonstrate security against a powerful computational eavesdropper capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.

  2. Some design issues of strata-matched non-randomized studies with survival outcomes.

    PubMed

    Mazumdar, Madhu; Tu, Donsheng; Zhou, Xi Kathy

    2006-12-15

    Non-randomized studies for the evaluation of a medical intervention are useful for quantitative hypothesis generation before the initiation of a randomized trial and also when randomized clinical trials are difficult to conduct. A strata-matched non-randomized design is often utilized where subjects treated by a test intervention are matched to a fixed number of subjects treated by a standard intervention within covariate based strata. In this paper, we consider the issue of sample size calculation for this design. Based on the asymptotic formula for the power of a stratified log-rank test, we derive a formula to calculate the minimum number of subjects in the test intervention group that is required to detect a given relative risk between the test and standard interventions. When this minimum number of subjects in the test intervention group is available, an equation is also derived to find the multiple that determines the number of subjects in the standard intervention group within each stratum. The methodology developed is applied to two illustrative examples in gastric cancer and sarcoma.

  3. Developmental Changes in the Effect of Active Left and Right Head Rotation on Random Number Generation.

    PubMed

    Sosson, Charlotte; Georges, Carrie; Guillaume, Mathieu; Schuller, Anne-Marie; Schiltz, Christine

    2018-01-01

    Numbers are thought to be spatially organized along a left-to-right horizontal axis with small/large numbers on its left/right respectively. Behavioral evidence for this mental number line (MNL) comes from studies showing that the reallocation of spatial attention by active left/right head rotation facilitated the generation of small/large numbers respectively. While spatial biases in random number generation (RNG) during active movement are well established in adults, comparable evidence in children is lacking and it remains unclear whether and how children's access to the MNL is affected by active head rotation. To get a better understanding of the development of embodied number processing, we investigated the effect of active head rotation on the mean of generated numbers as well as the mean difference between each number and its immediately preceding response (the first order difference; FOD) not only in adults (n = 24), but also in 7- to 11-year-old elementary school children (n = 70). Since the sign and absolute value of FODs carry distinct information regarding spatial attention shifts along the MNL, namely their direction (left/right) and size (narrow/wide) respectively, we additionally assessed the influence of rotation on the total of negative and positive FODs regardless of their numerical values as well as on their absolute values. In line with previous studies, adults produced on average smaller numbers and generated smaller mean FODs during left than right rotation. More concretely, they produced more negative/positive FODs during left/right rotation respectively and the size of negative FODs was larger (in terms of absolute value) during left than right rotation. Importantly, as opposed to adults, no significant differences in RNG between left and right head rotations were observed in children. Potential explanations for such age-related changes in the effect of active head rotation on RNG are discussed. Altogether, the present study confirms that numerical processing is spatially grounded in adults and suggests that its embodied aspect undergoes significant developmental changes.
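
    The descriptive statistics used in this study — the mean of the generated numbers, the first-order differences (FODs), the counts of negative and positive FODs, and their mean absolute sizes — are straightforward to compute from a response sequence, as in the sketch below (the example sequence is invented).

      def fod_summary(responses):
          """Summarize a random-number-generation sequence in terms of first order differences."""
          fods = [b - a for a, b in zip(responses, responses[1:])]
          neg = [d for d in fods if d < 0]
          pos = [d for d in fods if d > 0]
          return {
              "mean_response": sum(responses) / len(responses),
              "mean_fod": sum(fods) / len(fods),
              "n_negative_fods": len(neg),
              "n_positive_fods": len(pos),
              "mean_abs_negative_fod": sum(-d for d in neg) / len(neg) if neg else 0.0,
              "mean_abs_positive_fod": sum(pos) / len(pos) if pos else 0.0,
          }

      print(fod_summary([3, 7, 2, 9, 9, 4, 6, 1, 8, 5]))   # invented example sequence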

  4. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    ERIC Educational Resources Information Center

    Bancroft, Stacie L.; Bourret, Jason C.

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…

  5. Random Number Generation in Autism.

    ERIC Educational Resources Information Center

    Williams, Mark A.; Moss, Simon A.; Bradshaw, John L.; Rinehart, Nicole J.

    2002-01-01

    This study explored the ability of 14 individuals with autism to generate a unique series of digits. Individuals with autism were more likely to repeat previous digits than comparison individuals, suggesting they may exhibit a shortfall in response inhibition. Results support the executive dysfunction theory of autism. (Contains references.)…

  6. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  7. Impact of number of realizations on the suitability of simulated weather data for hydrologic and environmental applications

    USDA-ARS?s Scientific Manuscript database

    Stochastic weather generators are widely used in hydrological, environmental, and agricultural applications to simulate and forecast weather time series. However, such stochastic processes usually produce random outputs hence the question on how representative the generated data are if obtained fro...

  8. Moving along the Mental Number Line: Interactions between Whole-Body Motion and Numerical Cognition

    ERIC Educational Resources Information Center

    Hartmann, Matthias; Grabherr, Luzia; Mast, Fred W.

    2012-01-01

    Active head turns to the left and right have recently been shown to influence numerical cognition by shifting attention along the mental number line. In the present study, we found that passive whole-body motion influences numerical cognition. In a random-number generation task (Experiment 1), leftward and downward displacement of participants…

  9. On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, A. K.

    1973-01-01

    Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.

  10. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language for improving code portability. Several pseudo-random number generators have been also integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for studying of higher particle energies with the use of more accurate physical models, and improve statistics as more particles tracks can be simulated in low response time.

  11. DNA based random key generation and management for OTP encryption.

    PubMed

    Zhang, Yunpeng; Liu, Xin; Sun, Manhui

    2017-09-01

    One-time pad (OTP) is a principle of key generation applied to the stream ciphering method which offers total privacy. The OTP encryption scheme has proved to be unbreakable in theory, but difficult to realize in practical applications. Because OTP encryption specifically requires the absolute randomness of the key, its development has suffered from dense constraints. DNA cryptography is a new and promising technology in the field of information security. The storage capabilities of DNA chromosomes can be used as one-time pad structures, with pseudo-random number generation and indexing used to encrypt the plaintext messages. In this paper, we present a feasible solution to the OTP symmetric key generation and transmission problem with DNA at the molecular level. Through recombinant DNA technology, by using restriction enzymes known only to the sender and receiver to combine the secure key represented by a DNA sequence with the T vector, we generate the DNA bio-hiding secure key and then place the recombinant plasmid in implanted bacteria for secure key transmission. The designed bio-experiments and simulation results show that the security of the transmission of the key is further improved and the environmental requirements of key transmission are reduced. Analysis has demonstrated that the proposed DNA-based random key generation and management solutions are marked by high security and usability. Published by Elsevier B.V.

  12. FPGA Implementation of Metastability-Based True Random Number Generator

    NASA Astrophysics Data System (ADS)

    Hata, Hisashi; Ichikawa, Shuichi

    True random number generators (TRNGs) are important as a basis for computer security. Though some TRNGs are composed of analog circuits, the use of digital circuits is desired for the application of TRNGs to logic LSIs. Some digital TRNGs utilize jitter in free-running ring oscillators as a source of entropy, which consumes large power. Another type of TRNG exploits the metastability of a latch to generate entropy. Although this kind of TRNG has mostly been implemented with full-custom LSI technology, this study presents an implementation based on common FPGA technology. Our TRNG is comprised of logic gates only, and can be integrated in any kind of logic LSI. The RS latch in our TRNG is implemented as a hard macro to guarantee the quality of randomness by minimizing the signal skew and load imbalance of internal nodes. To improve the quality and throughput, the outputs of 64-256 latches are XOR'ed. The derived design was verified on a Xilinx Virtex-4 FPGA (XC4VFX20), and passed the NIST statistical test suite without post-processing. Our TRNG with 256 latches occupies 580 slices, while achieving 12.5 Mbps throughput.
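
    The XOR post-processing step mentioned at the end of this record can be motivated with a short simulation: XOR-ing many independent but individually biased bit sources drives the combined bias toward zero (the piling-up effect). The per-latch bias below is an invented value, not a measurement of the cited RS-latch design.

      import random
      from functools import reduce

      def xor_combined_bias(n_latches, per_latch_p1=0.55, n_samples=50000, seed=9):
          """Empirical P(bit = 1) after XOR-ing n_latches biased sources."""
          rng = random.Random(seed)
          ones = 0
          for _ in range(n_samples):
              bits = (1 if rng.random() < per_latch_p1 else 0 for _ in range(n_latches))
              ones += reduce(lambda a, b: a ^ b, bits)
          return ones / n_samples

      for n in (1, 4, 16, 64):
          # piling-up lemma: P(1) = 0.5 - 0.5 * (1 - 2p)**n, so the bias shrinks geometrically with n
          print(n, xor_combined_bias(n), 0.5 - 0.5 * (1 - 2 * 0.55) ** n)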

  13. Generating Random Numbers by Means of Nonlinear Dynamic Systems

    ERIC Educational Resources Information Center

    Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi

    2018-01-01

    To introduce the randomness of a physical process to students, a chaotic pendulum experiment was opened in East China University of Science and Technology (ECUST) on the undergraduate level in the physics department. It was shown chaotic motion could be initiated through adjusting the operation of a chaotic pendulum. By using the data of the…

  14. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738

  15. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    PubMed

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, the establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for the standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for the generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include the determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point counting, the determination of the extent of tissue shrinkage related to histological embedding of samples, and the generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.

  16. Random sampling of elementary flux modes in large-scale metabolic networks.

    PubMed

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.

  17. Hurwitz numbers and products of random matrices

    NASA Astrophysics Data System (ADS)

    Orlov, A. Yu.

    2017-09-01

    We study multimatrix models, which may be viewed as integrals of products of tau functions depending on the eigenvalues of products of random matrices. We consider tau functions of the two-component Kadomtsev-Petviashvili (KP) hierarchy (semi-infinite relativistic Toda lattice) and of the B-type KP (BKP) hierarchy introduced by Kac and van de Leur. Such integrals are sometimes tau functions themselves. We consider models that generate Hurwitz numbers HE,F, where E is the Euler characteristic of the base surface and F is the number of branch points. We show that in the case where the integrands contain the product of n > 2 matrices, the integral generates Hurwitz numbers with E ≤ 2 and F ≤ n+2. Both the numbers E and F depend both on n and on the order of the factors in the matrix product. The Euler characteristic E can be either an even or an odd number, i.e., it can match both orientable and nonorientable (Klein) base surfaces depending on the presence of the tau function of the BKP hierarchy in the integrand. We study two cases, the products of complex and the products of unitary matrices.

  18. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) The random number generator shall be linked to the computer system and shall directly relay the... information shall be generated by the computer system. (2) This documentation shall be restricted to... to the computer system shall be adequately restricted (i.e., passwords are changed at least quarterly...

  19. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
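
    Two of the staples summarized in this chapter can be condensed into a few lines: plain Monte Carlo integration with pseudo-random samples, and Metropolis sampling of a Boltzmann-type distribution. The integrand, the double-well potential, the temperature and the step size below are arbitrary illustrative choices, not examples taken from the book.

      import math, random

      rng = random.Random(2024)

      def unit_ball_volume(d, n=200000):
          """Plain Monte Carlo: estimate the volume of the d-dimensional unit ball by hit counting."""
          hits = sum(1 for _ in range(n)
                     if sum(rng.uniform(-1, 1) ** 2 for _ in range(d)) <= 1.0)
          return (2.0 ** d) * hits / n

      # exact volume is pi**(d/2) / Gamma(d/2 + 1)
      print(unit_ball_volume(5), math.pi ** 2.5 / math.gamma(2.5 + 1))

      def metropolis(n=100000, step=0.5, temperature=0.3):
          """Metropolis sampling of p(x) proportional to exp(-(x**2 - 1)**2 / T) (a double well)."""
          energy = lambda x: (x * x - 1.0) ** 2
          x, samples = 0.0, []
          for _ in range(n):
              trial = x + rng.uniform(-step, step)
              de = energy(trial) - energy(x)
              if de <= 0 or rng.random() < math.exp(-de / temperature):
                  x = trial                    # accept with probability min(1, exp(-dE/T))
              samples.append(x)
          return samples

      s = metropolis()
      print(sum(s) / len(s))                   # symmetric double well: the mean should be near 0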

  20. Gambling with Superconducting Fluctuations

    NASA Astrophysics Data System (ADS)

    Foltyn, Marek; Zgirski, Maciej

    2015-08-01

    Josephson junctions and superconducting nanowires, when biased close to superconducting critical current, can switch to a nonzero voltage state by thermal or quantum fluctuations. The process is understood as an escape of a Brownian particle from a metastable state. Since this effect is fully stochastic, we propose to use it for generating random numbers. We present protocol for obtaining random numbers and test the experimentally harvested data for their fidelity. Our work is prerequisite for using the Josephson junction as a tool for stochastic (probabilistic) determination of physical parameters such as magnetic flux, temperature, and current.

  1. Vision system and three-dimensional modeling techniques for quantification of the morphology of irregular particles

    NASA Astrophysics Data System (ADS)

    Smith, Lyndon N.; Smith, Melvyn L.

    2000-10-01

    Particulate materials undergo processing in many industries, and there are therefore significant commercial motivations for attaining improvements in the flow and packing behavior of powders. This can be achieved by modeling the effects of particle size, friction, and, most importantly, particle shape or morphology. The method presented here for simulating powders employs a random number generator to construct a model of a random particle by combining a sphere with a number of smaller spheres. The resulting 3D model particle has a nodular type of morphology, which is similar to that exhibited by the atomized powders used in the bulk of powder metallurgy (PM) manufacture. The irregularity of the model particles is dependent upon vision system data gathered from microscopic analysis of real powder particles. A methodology is proposed whereby randomly generated model particles of various sizes and irregularities can be combined in a random packing simulation. The proposed Monte Carlo technique would allow incorporation of the effects of gravity, wall friction, and inter-particle friction. The improvements in simulation realism that this method is expected to provide would prove useful for controlling powder production, and for predicting die fill behavior during the production of PM parts.

  2. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhagwat, Nikhil V.

    2005-01-01

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near optimal solutions (optimal in some cases). Besides, the execution time is quite short as compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer times for execution. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples can be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of some significance to industries wherein the cost associated with the creation of a new station is not the same. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs or a combination of those can be effectively used as alternate objective functions.

  3. Developmental Changes in the Effect of Active Left and Right Head Rotation on Random Number Generation

    PubMed Central

    Sosson, Charlotte; Georges, Carrie; Guillaume, Mathieu; Schuller, Anne-Marie; Schiltz, Christine

    2018-01-01

    Numbers are thought to be spatially organized along a left-to-right horizontal axis with small/large numbers on its left/right respectively. Behavioral evidence for this mental number line (MNL) comes from studies showing that the reallocation of spatial attention by active left/right head rotation facilitated the generation of small/large numbers respectively. While spatial biases in random number generation (RNG) during active movement are well established in adults, comparable evidence in children is lacking and it remains unclear whether and how children’s access to the MNL is affected by active head rotation. To get a better understanding of the development of embodied number processing, we investigated the effect of active head rotation on the mean of generated numbers as well as the mean difference between each number and its immediately preceding response (the first order difference; FOD) not only in adults (n = 24), but also in 7- to 11-year-old elementary school children (n = 70). Since the sign and absolute value of FODs carry distinct information regarding spatial attention shifts along the MNL, namely their direction (left/right) and size (narrow/wide) respectively, we additionally assessed the influence of rotation on the total of negative and positive FODs regardless of their numerical values as well as on their absolute values. In line with previous studies, adults produced on average smaller numbers and generated smaller mean FODs during left than right rotation. More concretely, they produced more negative/positive FODs during left/right rotation respectively and the size of negative FODs was larger (in terms of absolute value) during left than right rotation. Importantly, as opposed to adults, no significant differences in RNG between left and right head rotations were observed in children. Potential explanations for such age-related changes in the effect of active head rotation on RNG are discussed. Altogether, the present study confirms that numerical processing is spatially grounded in adults and suggests that its embodied aspect undergoes significant developmental changes. PMID:29541048

  4. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is a Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors, which are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying them by a weight matrix found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
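
    A minimal sketch of the codebook construction described above, assuming Python with NumPy and SciPy; the Laplacian parameter, the vector size and the perceptual weight matrix are placeholders rather than the values used by the authors.

      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(0)
      lam, n_vectors, dim = 8.0, 256, 16            # hypothetical parameters

      # Laplacian deviates obtained from a uniform random number generator
      u = rng.uniform(-0.5, 0.5, size=(n_vectors, dim))
      laplacian = -lam * np.sign(u) * np.log(1.0 - 2.0 * np.abs(u))

      # Condition each vector perceptually: weight its DCT coefficients
      weights = 1.0 / (1.0 + np.arange(dim))        # placeholder HVS-like weights
      codebook = idct(dct(laplacian, norm="ortho") * weights, norm="ortho")
      print(codebook.shape)                          # (256, 16) codebook vectors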

  5. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    PubMed

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and exclude the vast number of non-informative SNPs; however, this is too time-consuming and therefore unfavorable for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for high-dimensional GWA data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, avoids the very high computational cost of an exhaustive search for an optimal mtry, and maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate better random forests with higher accuracy and a lower error bound than those produced by Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and merit further biological investigation.
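
    A minimal sketch of the stratified feature-subspace selection (not the authors' implementation), assuming Python/NumPy, a precomputed informativeness score per SNP, and arbitrary group and subspace sizes.

      import numpy as np

      rng = np.random.default_rng(1)
      n_snps, n_groups, per_group = 10_000, 5, 20        # hypothetical sizes
      informativeness = rng.random(n_snps)               # placeholder scores

      # Equal-width discretization of informativeness into groups
      edges = np.linspace(informativeness.min(), informativeness.max(), n_groups + 1)
      group = np.clip(np.digitize(informativeness, edges[1:-1]), 0, n_groups - 1)

      # Feature subspace: the same number of SNPs drawn from every group
      subspace = np.concatenate([
          rng.choice(np.flatnonzero(group == g), size=per_group, replace=False)
          for g in range(n_groups)
      ])
      print(subspace.shape)        # 100 SNP indices used to grow one decision tree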

  6. A homodyne detector integrated onto a photonic chip for measuring quantum states and generating random numbers

    NASA Astrophysics Data System (ADS)

    Raffaelli, Francesco; Ferranti, Giacomo; Mahler, Dylan H.; Sibson, Philip; Kennard, Jake E.; Santamato, Alberto; Sinclair, Gary; Bonneau, Damien; Thompson, Mark G.; Matthews, Jonathan C. F.

    2018-04-01

    Optical homodyne detection has found use as a characterisation tool in a range of quantum technologies. So far implementations have been limited to bulk optics. Here we present the optical integration of a homodyne detector onto a silicon photonics chip. The resulting device operates at high speed, up to 150 MHz; it is compact and operates with low noise, quantified by an 11 dB clearance between shot noise and electronic noise. We perform on-chip quantum tomography of coherent states with the detector and show that it meets the requirements for characterising more general quantum states of light. We also show that the detector is able to produce quantum random numbers at a rate of 1.2 Gbps, by measuring the vacuum state of the electromagnetic field and applying off-line post processing. The produced random numbers pass all the statistical tests provided by the NIST test suite.

  7. The Coalescent Process in Models with Selection

    PubMed Central

    Kaplan, N. L.; Darden, T.; Hudson, R. R.

    1988-01-01

    Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685

  8. On the conservative nature of intragenic recombination

    PubMed Central

    Drummond, D. Allan; Silberg, Jonathan J.; Meyer, Michelle M.; Wilke, Claus O.; Arnold, Frances H.

    2005-01-01

    Intragenic recombination rapidly creates protein sequence diversity compared with random mutation, but little is known about the relative effects of recombination and mutation on protein function. Here, we compare recombination of the distantly related β-lactamases PSE-4 and TEM-1 to mutation of PSE-4. We show that, among β-lactamase variants containing the same number of amino acid substitutions, variants created by recombination retain function with a significantly higher probability than those generated by random mutagenesis. We present a simple model that accurately captures the differing effects of mutation and recombination in real and simulated proteins with only four parameters: (i) the amino acid sequence distance between parents, (ii) the number of substitutions, (iii) the average probability that random substitutions will preserve function, and (iv) the average probability that substitutions generated by recombination will preserve function. Our results expose a fundamental functional enrichment in regions of protein sequence space accessible by recombination and provide a framework for evaluating whether the relative rates of mutation and recombination observed in nature reflect the underlying imbalance in their effects on protein function. PMID:15809422

  9. On the conservative nature of intragenic recombination.

    PubMed

    Drummond, D Allan; Silberg, Jonathan J; Meyer, Michelle M; Wilke, Claus O; Arnold, Frances H

    2005-04-12

    Intragenic recombination rapidly creates protein sequence diversity compared with random mutation, but little is known about the relative effects of recombination and mutation on protein function. Here, we compare recombination of the distantly related beta-lactamases PSE-4 and TEM-1 to mutation of PSE-4. We show that, among beta-lactamase variants containing the same number of amino acid substitutions, variants created by recombination retain function with a significantly higher probability than those generated by random mutagenesis. We present a simple model that accurately captures the differing effects of mutation and recombination in real and simulated proteins with only four parameters: (i) the amino acid sequence distance between parents, (ii) the number of substitutions, (iii) the average probability that random substitutions will preserve function, and (iv) the average probability that substitutions generated by recombination will preserve function. Our results expose a fundamental functional enrichment in regions of protein sequence space accessible by recombination and provide a framework for evaluating whether the relative rates of mutation and recombination observed in nature reflect the underlying imbalance in their effects on protein function.

  10. Concatenated shift registers generating maximally spaced phase shifts of PN-sequences

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.; Welch, L. R.

    1977-01-01

    A large class of linearly concatenated shift registers is shown to generate approximately maximally spaced phase shifts of pn-sequences, for use in pseudorandom number generation. A constructive method is presented for finding members of this class, for almost all degrees for which primitive trinomials exist. The sequences which result are not normally characterized by trinomial recursions, which is desirable since trinomial sequences can have some undesirable randomness properties.
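
    For illustration only (the report's constructions concatenate such registers), a minimal Python sketch of a single trinomial-driven linear feedback shift register, with a check that the generated pn-sequence has the maximal period.

      def pn_sequence(n, seed=(1, 0, 0, 0, 0, 0, 0)):
          """Trinomial recurrence s[k+7] = s[k+6] XOR s[k]; its characteristic
          trinomial x^7 + x^6 + 1 is primitive, so any nonzero seed gives the
          maximal period 2**7 - 1 = 127."""
          s = list(seed)
          out = []
          for _ in range(n):
              out.append(s[0])
              s.append(s[6] ^ s[0])
              s.pop(0)
          return out

      bits = pn_sequence(254)
      assert bits[:127] == bits[127:]          # period 127 confirmed
      print("".join(map(str, bits[:32])))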

  11. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    PubMed

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
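
    A minimal sketch of the underlying systematic random sampling step (an equidistant grid with a single random offset, restricted to a region of interest), assuming Python; the circular region and spacing below are placeholders, not RandomSpot's actual interface.

      import random

      def srs_points(width, height, spacing, inside, rng=random.Random(42)):
          """Equidistant grid shifted by one uniform random offset; keep only
          points for which inside(x, y) is True (the region of interest)."""
          ox, oy = rng.uniform(0, spacing), rng.uniform(0, spacing)
          pts, y = [], oy
          while y < height:
              x = ox
              while x < width:
                  if inside(x, y):
                      pts.append((x, y))
                  x += spacing
              y += spacing
          return pts

      # Example ROI: a circle inside a 1000 x 800 virtual-slide region
      roi = lambda x, y: (x - 500) ** 2 + (y - 400) ** 2 < 300 ** 2
      print(len(srs_points(1000, 800, spacing=50, inside=roi)), "sample points")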

  12. 25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the application under the rules of the game. For example, if a bingo game with 75 objects with numbers... test potency and degree of serial correlation (outcomes must be independent from the previous game...) General requirements. (1) Software that calls an RNG to derive game outcome events must immediately use...

  13. 25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the application under the rules of the game. For example, if a bingo game with 75 objects with numbers... test potency and degree of serial correlation (outcomes must be independent from the previous game...) General requirements. (1) Software that calls an RNG to derive game outcome events must immediately use...

  14. Data-driven probability concentration and sampling on manifold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2016-09-15

    A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.

  15. Color image encryption based on gyrator transform and Arnold transform

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Gao, Bo

    2013-06-01

    A color image encryption scheme using gyrator transform and Arnold transform is proposed, which has two security levels. In the first level, the color image is separated into three components: red, green and blue, which are normalized and scrambled using the Arnold transform. The green component is combined with the first random phase mask and transformed to an interim using the gyrator transform. The first random phase mask is generated with the sum of the blue component and a logistic map. Similarly, the red component is combined with the second random phase mask and transformed to three-channel-related data. The second random phase mask is generated with the sum of the phase of the interim and an asymmetrical tent map. In the second level, the three-channel-related data are scrambled again and combined with the third random phase mask generated with the sum of the previous chaotic maps, and then encrypted into a gray scale ciphertext. The encryption result has stationary white noise distribution and camouflage property to some extent. In the process of encryption and decryption, the rotation angle of gyrator transform, the iterative numbers of Arnold transform, the parameters of the chaotic map and generated accompanied phase function serve as encryption keys, and hence enhance the security of the system. Simulation results and security analysis are presented to confirm the security, validity and feasibility of the proposed scheme.
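
    A minimal sketch of the Arnold-transform scrambling used in the first security level, assuming Python/NumPy and a square component; the gyrator transform and the chaotic phase masks are not reproduced here.

      import numpy as np

      def arnold_scramble(img, iterations):
          """Each iteration permutes pixels by the area-preserving cat map:
          out[x, y] = img[(x + y) % N, (x + 2y) % N]; the iteration count can
          serve as part of the key because the map is invertible."""
          n = img.shape[0]
          x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          out = img
          for _ in range(iterations):
              out = out[(x + y) % n, (x + 2 * y) % n]
          return out

      img = np.arange(16, dtype=float).reshape(4, 4)      # toy 4 x 4 component
      print(arnold_scramble(img, iterations=3))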

  16. Statistical evaluation of PACSTAT random number generation capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, G.F.; Toland, M.R.; Harty, H.

    1988-05-01

    This report summarizes the work performed in verifying the general purpose Monte Carlo driver-program PACSTAT. The main objective of the work was to verify the performance of PACSTAT's random number generation capabilities. Secondary objectives were to document (using controlled configuration management procedures) changes made in PACSTAT at Pacific Northwest Laboratory, and to assure that PACSTAT input and output files satisfy quality assurance traceability constraints. Upon receipt of the PRIME version of the PACSTAT code from the Basalt Waste Isolation Project, Pacific Northwest Laboratory staff converted the code to run on Digital Equipment Corporation (DEC) VAXs. The modifications to PACSTAT were implemented using the WITNESS configuration management system, with the modifications themselves intended to make the code as portable as possible. Certain modifications were made to make the PACSTAT input and output files conform to quality assurance traceability constraints. 10 refs., 17 figs., 6 tabs.

  17. Cascading failures in ac electricity grids.

    PubMed

    Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan

    2016-09-01

    Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.

  18. Statistical aspects of evolution under natural selection, with implications for the advantage of sexual reproduction.

    PubMed

    Crouch, Daniel J M

    2017-10-27

    The prevalence of sexual reproduction remains mysterious, as it poses clear evolutionary drawbacks compared to reproducing asexually. Several possible explanations exist, with one of the most likely being that finite population size causes linkage disequilibria to be randomly generated, impeding the progress of natural selection, and that these are eroded by recombination via sexual reproduction. Previous investigations have either analysed this phenomenon in detail for small numbers of loci, or performed population simulations for many loci. Here we present a quantitative genetic model for fitness, based on the Price Equation, in order to examine the theoretical consequences of randomly generated linkage disequilibria when there are many loci. In addition, most previous work has been concerned with the long-term consequences of deleterious linkage disequilibria for population fitness. The expected change in mean fitness between consecutive generations, a measure of short-term evolutionary success, is shown under random environmental influences to be related to the autocovariance in mean fitness between the generations, capturing the effects of stochastic forces such as genetic drift. Interaction between genetic drift and natural selection, due to randomly generated linkage disequilibria, is demonstrated to be one possible source of mean fitness autocovariance. This suggests a possible role for sexual reproduction in reducing the negative effects of genetic drift, thereby improving the short-term efficacy of natural selection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. On predicting receptivity to surface roughness in a compressible infinite swept wing boundary layer

    NASA Astrophysics Data System (ADS)

    Thomas, Christian; Mughal, Shahid; Ashworth, Richard

    2017-03-01

    The receptivity of crossflow disturbances on an infinite swept wing is investigated using solutions of the adjoint linearised Navier-Stokes equations. The adjoint based method for predicting the magnitude of stationary disturbances generated by randomly distributed surface roughness is described, with the analysis extended to include both surface curvature and compressible flow effects. Receptivity is predicted for a broad spectrum of spanwise wavenumbers, variable freestream Reynolds numbers, and subsonic Mach numbers. Curvature is found to play a significant role in the receptivity calculations, while compressible flow effects are only found to marginally affect the initial size of the crossflow instability. A Monte Carlo type analysis is undertaken to establish the mean amplitude and variance of crossflow disturbances generated by the randomly distributed surface roughness. Mean amplitudes are determined for a range of flow parameters that are maximised for roughness distributions containing a broad spectrum of roughness wavelengths, including those that are most effective in generating stationary crossflow disturbances. A control mechanism is then developed where the short scale roughness wavelengths are damped, leading to significant reductions in the receptivity amplitude.

  20. Sample design effects in landscape genetics

    USGS Publications Warehouse

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and number of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  1. A dose optimization method for electron radiotherapy using randomized aperture beams

    NASA Astrophysics Data System (ADS)

    Engel, Konrad; Gauer, Tobias

    2009-09-01

    The present paper describes the entire optimization process of creating a radiotherapy treatment plan for advanced electron irradiation. Special emphasis is devoted to the selection of beam incidence angles and beam energies as well as to the choice of appropriate subfields generated by a refined version of intensity segmentation and a novel random aperture approach. The algorithms have been implemented in a stand-alone programme using dose calculations from a commercial treatment planning system. For this study, the treatment planning system Pinnacle from Philips has been used and connected to the optimization programme using an ASCII interface. Dose calculations in Pinnacle were performed by Monte Carlo simulations for a remote-controlled electron multileaf collimator (MLC) from Euromechanics. As a result, treatment plans for breast cancer patients could be significantly improved when using randomly generated aperture beams. The combination of beams generated through segmentation and randomization achieved the best results in terms of target coverage and sparing of critical organs. The treatment plans could be further improved by use of a field reduction algorithm. Without a relevant loss in dose distribution, the total number of MLC fields and monitor units could be reduced by up to 20%. In conclusion, using randomized aperture beams is a promising new approach in radiotherapy and exhibits potential for further improvements in dose optimization through a combination of randomized electron and photon aperture beams.

  2. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    PubMed

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
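
    The paper itself uses MPI and the SPRNG library; as a simplified analogue of the key idea (an equal share of the photons and an uncorrelated random number stream per worker), here is a hedged Python/NumPy sketch using spawned seed sequences instead of SPRNG.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def simulate_photons(args):
          seed, n_photons = args
          rng = np.random.default_rng(seed)       # uncorrelated stream per worker
          # Toy "transport": count photons passing a random attenuation test
          return int(np.sum(rng.random(n_photons) < 0.3))

      if __name__ == "__main__":
          n_workers, total_photons = 4, 1_000_000
          seeds = np.random.SeedSequence(2002).spawn(n_workers)  # independent streams
          share = total_photons // n_workers                     # equal partitioning
          with ProcessPoolExecutor(n_workers) as pool:
              counts = list(pool.map(simulate_photons, [(s, share) for s in seeds]))
          print(sum(counts), "of", total_photons, "photons 'detected'")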

  3. The relationships of 'ecstasy' (MDMA) and cannabis use to impaired executive inhibition and access to semantic long-term memory.

    PubMed

    Murphy, Philip N; Erwin, Philip G; Maciver, Linda; Fisk, John E; Larkin, Derek; Wareing, Michelle; Montgomery, Catharine; Hilton, Joanne; Tames, Frank J; Bradley, Belinda; Yanulevitch, Kate; Ralley, Richard

    2011-10-01

    This study aimed to examine the relationship between the consumption of ecstasy (3,4-methylenedioxymethamphetamine (MDMA)) and cannabis, and performance on the random letter generation task which generates dependent variables drawing upon executive inhibition and access to semantic long-term memory (LTM). The participant group was a between-participant independent variable with users of both ecstasy and cannabis (E/C group, n = 15), users of cannabis but not ecstasy (CA group, n = 13) and controls with no exposure to these drugs (CO group, n = 12). Dependent variables measured violations of randomness: number of repeat sequences, number of alphabetical sequences (both drawing upon inhibition) and redundancy (drawing upon access to semantic LTM). E/C participants showed significantly higher redundancy than CO participants but did not differ from CA participants. There were no significant effects for the other dependent variables. A regression model comprising intelligence measures and estimates of ecstasy and cannabis consumption predicted redundancy scores, but only cannabis consumption contributed significantly to this prediction. Impaired access to semantic LTM may be related to cannabis consumption, although the involvement of ecstasy and other stimulant drugs cannot be excluded here. Executive inhibitory functioning, as measured by the random letter generation task, is unrelated to ecstasy and cannabis consumption. Copyright © 2011 John Wiley & Sons, Ltd.
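
    A minimal sketch of how the three dependent variables might be scored from a generated letter sequence, assuming Python; the scoring rules (e.g., what counts as an alphabetical sequence) are simplified placeholders, not the published protocol.

      import math
      from collections import Counter
      from string import ascii_uppercase as ALPHABET

      def rng_task_scores(letters):
          pairs = list(zip(letters, letters[1:]))
          repeats = sum(a == b for a, b in pairs)                     # e.g. "B B"
          alpha = sum(ALPHABET.index(b) - ALPHABET.index(a) == 1
                      for a, b in pairs)                              # e.g. "C D"
          # Redundancy: departure of the letter frequencies from equiprobability
          counts = Counter(letters)
          h = -sum(c / len(letters) * math.log2(c / len(letters))
                   for c in counts.values())
          redundancy = 100 * (1 - h / math.log2(len(ALPHABET)))
          return repeats, alpha, redundancy

      print(rng_task_scores("QWERTYABCDQQZXAB"))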

  4. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.

  5. Knowledge Representation for Decision Making Agents

    DTIC Science & Technology

    2013-07-15

    knowledge map. This knowledge map is a dictionary data structure called tmap in the code. It represents a network of locations with a number [0,1] ... fillRandom(): informed initial tmap distribution (randomly generated per node) with belief one. • initialBelief = 3 uses fillCenter(): normal ... triggered on AllMyFMsHaveBeenInitialized. 2. Executes main.py • Initializes knowledge map labeled tmap. • Calls initialize search() – resets distanceTot and

  6. CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties

    DTIC Science & Technology

    2017-03-01

    inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby making it suitable for kernel function implementation. By varying bias ... cost function/constraint variables are generated based on the inverse transform of the CDF. In Fig. 5, F^-1(u) for a uniformly distributed random number u in [0, 1] ... extracts random samples of x varying with the CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate the inverse

  7. Randomization in clinical trials: stratification or minimization? The HERMES free simulation software.

    PubMed

    Fron Chabouis, Hélène; Chabouis, Francis; Gillaizeau, Florence; Durieux, Pierre; Chatellier, Gilles; Ruse, N Dorin; Attal, Jean-Pierre

    2014-01-01

    Operative clinical trials are often small and open-label. Randomization is therefore very important. Stratification and minimization are two randomization options in such trials. The first aim of this study was to compare stratification and minimization in terms of predictability and balance in order to help investigators choose the most appropriate allocation method. Our second aim was to evaluate the influence of various parameters on the performance of these techniques. The created software generated patients according to chosen trial parameters (e.g., number of important prognostic factors, number of operators or centers, etc.) and computed predictability and balance indicators for several stratification and minimization methods over a given number of simulations. Block size and proportion of random allocations could be chosen. A reference trial was chosen (50 patients, 1 prognostic factor, and 2 operators) and eight other trials derived from this reference trial were modeled. Predictability and balance indicators were calculated from 10,000 simulations per trial. Minimization performed better with complex trials (e.g., smaller sample size, increasing number of prognostic factors, and operators); stratification imbalance increased when the number of strata increased. An inverse correlation between imbalance and predictability was observed. A compromise between predictability and imbalance still has to be found by the investigator but our software (HERMES) gives concrete reasons for choosing between stratification and minimization; it can be downloaded free of charge. This software will help investigators choose the appropriate randomization method in future two-arm trials.
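
    A minimal sketch of a minimization step of the kind the simulations compare (Pocock-Simon style), assuming Python; the imbalance measure and the proportion of random allocations are simplified placeholders rather than HERMES's exact options.

      import random

      def minimization_assign(patient, allocated, arms=("A", "B"),
                              p_random=0.2, rng=random.Random(7)):
          """Allocate to the arm that minimizes prognostic-factor imbalance,
          with a proportion p_random of purely random allocations."""
          if rng.random() < p_random or not allocated:
              return rng.choice(arms)
          def imbalance(arm):
              total = 0
              for factor, level in patient.items():
                  counts = {a: sum(1 for q, x in allocated
                                   if x == a and q[factor] == level) for a in arms}
                  counts[arm] += 1                  # as if the patient went here
                  total += max(counts.values()) - min(counts.values())
              return total
          return min(arms, key=imbalance)

      allocated = []                                 # (patient, arm) history
      for patient in [{"centre": 1, "operator": "X"}, {"centre": 2, "operator": "X"},
                      {"centre": 1, "operator": "Y"}, {"centre": 1, "operator": "X"}]:
          allocated.append((patient, minimization_assign(patient, allocated)))
      print(allocated)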

  8. What's in a title? An assessment of whether randomized controlled trial in a title means that it is one.

    PubMed

    Koletsi, Despina; Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore

    2012-06-01

    In this study, we aimed to investigate whether studies published in orthodontic journals and titled as randomized clinical trials are truly randomized clinical trials. A second objective was to explore the association of journal type and other publication characteristics on correct classification. American Journal of Orthodontics and Dentofacial Orthopedics, European Journal of Orthodontics, Angle Orthodontist, Journal of Orthodontics, Orthodontics and Craniofacial Research, World Journal of Orthodontics, Australian Orthodontic Journal, and Journal of Orofacial Orthopedics were hand searched for clinical trials labeled in the title as randomized from 1979 to July 2011. The data were analyzed by using descriptive statistics, and univariable and multivariable examinations of statistical associations via ordinal logistic regression modeling (proportional odds model). One hundred twelve trials were identified. Of the included trials, 33 (29.5%) were randomized clinical trials, 52 (46.4%) had an unclear status, and 27 (24.1%) were not randomized clinical trials. In the multivariable analysis among the included journal types, year of publication, number of authors, multicenter trial, and involvement of statistician were significant predictors of correctly classifying a study as a randomized clinical trial vs unclear and not a randomized clinical trial. From 112 clinical trials in the orthodontic literature labeled as randomized clinical trials, only 29.5% were identified as randomized clinical trials based on clear descriptions of appropriate random number generation and allocation concealment. The type of journal, involvement of a statistician, multicenter trials, greater numbers of authors, and publication year were associated with correct clinical trial classification. This study indicates the need of clear and accurate reporting of clinical trials and the need for educating investigators on randomized clinical trial methodology. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  9. Monte Carlo, Probability, Algebra, and Pi.

    ERIC Educational Resources Information Center

    Hinders, Duane C.

    1981-01-01

    The uses of random number generators are illustrated in three ways: (1) the solution of a probability problem using a coin; (2) the solution of a system of simultaneous linear equations using a die; and (3) the approximation of pi using darts. (MP)
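
    The dart approximation of pi mentioned in (3) is the classic quarter-circle experiment; a minimal Python sketch, with the number of darts chosen arbitrarily.

      import random

      def estimate_pi(n_darts, rng=random.Random(314)):
          hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                     for _ in range(n_darts))
          return 4.0 * hits / n_darts    # quarter-circle area / unit-square area

      print(estimate_pi(100_000))         # roughly 3.14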

  10. Interpretation of sucrose gradient sedimentation pattern of deoxyribonucleic acid fragments resulting from random breaks.

    PubMed

    Litwin, S; Shahn, E; Kozinski, A W

    1969-07-01

    Mass distribution in a sucrose gradient of deoxyribonucleic acid (DNA) fragments arising as a result of random breaks is predicted by analytical means from which computer evaluations are plotted. The analytical results are compared with the results of verifying experiments: (i) a Monte Carlo computer experiment in which simulated molecules of DNA were individuals of unit length subjected to random "breaks" applied by a random number generator, and (ii) an in vitro experiment in which molecules of T4 DNA, highly labeled with (32)P, were stored in liquid nitrogen for variable periods of time during which a precisely known number of (32)P atoms decayed, causing single-stranded breaks. The distribution of sizes of the resulting fragments was measured in an alkaline sucrose gradient. The profiles obtained in this fashion were compared with the mathematical predictions. Both experiments agree with the analytical approach and thus permit the use of the graphs obtained from the latter as a means of determining the average number of random breaks in DNA from distributions obtained experimentally in a sucrose gradient. An example of the application of this procedure to a previously unresolved problem is provided in the case of DNA from ultraviolet-irradiated phage which undergoes a dose-dependent intracellular breakdown. The relationship between the number of lethal hits and the number of single-stranded breaks was not previously established. A comparison of the calculated number of nicks per strand of DNA with the known dose in phage-lethal hits reveals a relationship closely approximating one lethal hit to one single-stranded break.
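
    A minimal sketch of the Monte Carlo part of the comparison (unit-length molecules broken at positions drawn from a random number generator, then binned by mass), assuming Python/NumPy; the molecule count, break count and binning are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      n_molecules, breaks_per_strand = 10_000, 3

      fragments = []
      for _ in range(n_molecules):
          cuts = np.sort(rng.random(breaks_per_strand))      # random break points
          edges = np.concatenate(([0.0], cuts, [1.0]))
          fragments.extend(np.diff(edges))                    # fragment lengths

      # Mass (not number) distribution versus fragment size, as on a gradient
      sizes = np.asarray(fragments)
      hist, _ = np.histogram(sizes, bins=20, range=(0, 1), weights=sizes)
      print(np.round(hist / hist.sum(), 3))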

  11. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
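
    A minimal sketch of the postprocessing described (periodic sampling, self-differencing, keeping a few least-significant bits), assuming Python/NumPy and a synthetic noise waveform standing in for the chaotic laser intensity.

      import numpy as np

      rng = np.random.default_rng(5)
      samples = rng.normal(size=100_000)           # stand-in for chaotic intensity

      # 8-bit digitization, self-differencing at a fixed lag, 5 LSBs kept
      digitized = np.clip(((samples + 4) / 8 * 256).astype(np.int64), 0, 255)
      lag, bits_per_sample = 3, 5
      diff = (digitized[lag:] - digitized[:-lag]) & 0xFF      # self-differencing
      random_bits = ((diff[:, None] >> np.arange(bits_per_sample)) & 1).ravel()
      print(random_bits[:40], random_bits.mean())   # mean should be close to 0.5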

  12. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
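
    A minimal Monte Carlo sketch of the quantity being tabulated (the magnitude of a Delta v vector with a deterministic bias plus an isotropic random component), assuming Python/NumPy; the bias and standard deviation values are arbitrary placeholders.

      import numpy as np

      rng = np.random.default_rng(1982)
      n_trials = 200_000
      bias = np.array([5.0, 0.0, 0.0])   # deterministic Delta v (placeholder units)
      sigma = 2.0                        # per-axis random Delta v standard deviation

      dv = bias + sigma * rng.standard_normal((n_trials, 3))
      magnitude = np.linalg.norm(dv, axis=1)
      print("mean |Delta v| =", magnitude.mean())
      print("95th percentile =", np.percentile(magnitude, 95))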

  13. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, which are stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.

  14. Impact of Probiotics on Necrotizing Enterocolitis

    PubMed Central

    Underwood, Mark A.

    2016-01-01

    A large number of randomized placebo-controlled clinical trials and cohort studies have demonstrated a decrease in the incidence of necrotizing enterocolitis with administration of probiotic microbes. These studies have prompted many neonatologists to adopt routine prophylactic administration of probiotics while others await more definitive studies and/or probiotic products with demonstrated purity and stable numbers of live organisms. Cross-contamination and inadequate sample size limit the value of further traditional placebo-controlled randomized controlled trials. Key areas for future research include mechanisms of protection, optimum probiotic species or strains (or combinations thereof) and duration of treatment, interactions between diet and the administered probiotic, and the influence of genetic polymorphisms in the mother and infant on probiotic response. Next generation probiotics selected based on bacterial genetics rather than ease of production and large cluster-randomized clinical trials hold great promise for NEC prevention. PMID:27836423

  15. Counting of oligomers in sequences generated by markov chains for DNA motif discovery.

    PubMed

    Shan, Gao; Zheng, Wei-Mou

    2009-02-01

    By means of the technique of the imbedded Markov chain, an efficient algorithm is proposed to exactly calculate the first and second moments of word counts and the probability for a word to occur at least once in random texts generated by a Markov chain. A generating function is introduced directly from the imbedded Markov chain to derive asymptotic approximations for the problem. Two Z-scores, one based on the number of sequences with hits and the other on the total number of word hits in a set of sequences, are examined for the discovery of motifs on a set of promoter sequences extracted from the A. thaliana genome. Source code is available at http://www.itp.ac.cn/zheng/oligo.c.

  16. Graphic Simulations of the Poisson Process.

    DTIC Science & Technology

    1982-10-01

    [Table-of-contents fragments: Random Numbers and Transformations; The Random Number Generator; Poisson Processes User Guide.] ... again. In the superimposed mode, two Poisson processes are active, each with a different rate parameter (call them Type I and Type II with respective ... occur. The value 'p' is generated by the following equation, where L1 and L2 are the rates of the two Poisson processes: p = L1 / (L1 + L2). The value
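
    Although the record above is truncated, the superimposed mode it describes is straightforward to sketch: with rates L1 and L2 the merged process has rate L1 + L2, and each event is of Type I with probability p = L1 / (L1 + L2). A minimal Python illustration (the rates and time horizon are arbitrary):

      import random

      def superimposed_poisson(l1, l2, t_end, rng=random.Random(11)):
          p = l1 / (l1 + l2)                   # probability an event is Type I
          t, events = 0.0, []
          while True:
              t += rng.expovariate(l1 + l2)    # merged process has rate L1 + L2
              if t > t_end:
                  return events
              events.append((t, "I" if rng.random() < p else "II"))

      events = superimposed_poisson(l1=2.0, l2=0.5, t_end=10.0)
      print(len(events), "events;", sum(e[1] == "I" for e in events), "of Type I")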

  17. Long period pseudo random number sequence generator

    NASA Technical Reports Server (NTRS)

    Wang, Charles C. (Inventor)

    1989-01-01

    A circuit for generating a sequence of pseudo random numbers, (A sub K). There is an exponentiator in GF(2 sup m) for the normal basis representation of elements in a finite field GF(2 sup m), each represented by m binary digits, having two inputs and an output from which the sequence (A sub K) of pseudo random numbers is taken. One of the two inputs is connected to receive the outputs (E sub K) of a maximal length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A sub O) in GF(2 sup m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs, and the delay circuit input is connected to the output of the exponentiator. Thus, after the exponentiator initially receives the primitive element (A sub O) in GF(2 sup m) through the switch, the switch can be switched to cause the exponentiator to receive as its input a delayed output A(K-1) from the exponentiator, thereby generating (A sub K) continuously at the output of the exponentiator. The exponentiator in GF(2 sup m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to perform the function U(sub i) = A(sup 2(sup i)) (for n(sub i) = 1) or 1 (for n(sub i) = 0).

  18. Designing Hyperchaotic Cat Maps With Any Desired Number of Positive Lyapunov Exponents.

    PubMed

    Hua, Zhongyun; Yi, Shuang; Zhou, Yicong; Li, Chengqing; Wu, Yue

    2018-02-01

    Generating chaotic maps with the dynamics expected by users is a challenging topic. Utilizing the inherent relation between the Lyapunov exponents (LEs) of the Cat map and its associated Cat matrix, this paper proposes a simple but efficient method to construct an n-dimensional (n-D) hyperchaotic Cat map (HCM) with any desired number of positive LEs. The method first generates two basic n-D Cat matrices iteratively and then constructs the final n-D Cat matrix by performing a similarity transformation on one basic n-D Cat matrix by the other. Given any number of positive LEs, it can generate an n-D HCM with the desired hyperchaotic complexity. Two illustrative examples of n-D HCMs were constructed to show the effectiveness of the proposed method, and to verify the inherent relation between the LEs and the Cat matrix. Theoretical analysis proves that the parameter space of the generated HCM is very large. Performance evaluations show that, compared with existing methods, the proposed method can construct n-D HCMs with lower computational complexity, and their outputs demonstrate strong randomness and complex ergodicity.

  19. Efficient quantum pseudorandomness with simple graph states

    NASA Astrophysics Data System (ADS)

    Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian

    2018-02-01

    Measurement based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement based quantum computing, closely related to the brickwork state.

  20. An analysis of the metabolic theory of the origin of the genetic code

    NASA Technical Reports Server (NTRS)

    Amirnovin, R.; Bada, J. L. (Principal Investigator)

    1997-01-01

    A computer program was used to test Wong's coevolution theory of the genetic code. The codon correlations between the codons of biosynthetically related amino acids in the universal genetic code and in randomly generated genetic codes were compared. It was determined that many codon correlations are also present within random genetic codes and that among the random codes there are always several which have many more correlations than that found in the universal code. Although the number of correlations depends on the choice of biosynthetically related amino acids, the probability of choosing a random genetic code with the same or greater number of codon correlations as the universal genetic code was found to vary from 0.1% to 34% (with respect to a fairly complete listing of related amino acids). Thus, Wong's theory that the genetic code arose by coevolution with the biosynthetic pathways of amino acids, based on codon correlations between biosynthetically related amino acids, is statistical in nature.

  1. Scaling of flow distance in random self-similar channel networks

    USGS Publications Warehouse

    Troutman, B.M.

    2005-01-01

    Natural river channel networks have been shown in empirical studies to exhibit power-law scaling behavior characteristic of self-similar and self-affine structures. Of particular interest is to describe how the distribution of distance to the outlet changes as a function of network size. In this paper, networks are modeled as random self-similar rooted tree graphs and scaling of distance to the root is studied using methods in stochastic branching theory. In particular, the asymptotic expectation of the width function (number of nodes as a function of distance to the outlet) is derived under conditions on the replacement generators. It is demonstrated further that the branching number describing rate of growth of node distance to the outlet is identical to the length ratio under a Horton-Strahler ordering scheme as order gets large, again under certain restrictions on the generators. These results are discussed in relation to drainage basin allometry and an application to an actual drainage network is presented. © World Scientific Publishing Company.

  2. Statistical analysis of relationship between negative-bias temperature instability and random telegraph noise in small p-channel metal-oxide-semiconductor field-effect transistors

    NASA Astrophysics Data System (ADS)

    Tega, Naoki; Miki, Hiroshi; Mine, Toshiyuki; Ohmori, Kenji; Yamada, Keisaku

    2014-03-01

    It is demonstrated from a statistical perspective that the generation of random telegraph noise (RTN) changes before and after the application of negative-bias temperature instability (NBTI) stress. The NBTI stress generates a large number of permanent interface traps and, at the same time, a large number of RTN traps causing temporary RTN and one-time RTN. The interface trap and the RTN trap show different features in the recovery process. That is, a re-passivation of interface states is the minor cause of the recovery after the NBTI stress, and in contrast, rapid disappearance of the temporary RTN and the one-time RTN is the main cause of the recovery. The RTN traps are less likely to become permanent. This two-trap model, comprising the interface trap and the RTN trap, simply explains NBTI degradation and recovery in scaled p-channel metal-oxide-semiconductor field-effect transistors.

  3. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
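
    As a hedged illustration of the basic task (drawing random positions from a 3D density building block), here is a generic rejection-sampling sketch in Python/NumPy; SKIRT's own generators are component-specific and generally far more efficient.

      import numpy as np

      def sample_positions(density, n, box=1.0, rho_max=1.0,
                           rng=np.random.default_rng(3)):
          """Draw n positions with probability proportional to density(x, y, z)."""
          out = []
          while len(out) < n:
              p = rng.uniform(-box, box, size=3)
              if rng.uniform(0, rho_max) < density(*p):   # accept w.p. rho/rho_max
                  out.append(p)
          return np.array(out)

      # Toy building block: a Gaussian-like spherical density with peak value 1
      rho = lambda x, y, z: np.exp(-(x**2 + y**2 + z**2) / 0.1)
      print(sample_positions(rho, 5))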

  4. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limit the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein-DNA interfaces.
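
    A minimal sketch of drawing a uniform ~1% random reference set from a very large sequence collection in a single streaming pass, assuming Python and a hypothetical record iterator; this shows the sampling idea only, not FastRNABindR's pipeline.

      import random

      def subsample_records(records, fraction=0.01, rng=random.Random(100)):
          """Keep each record independently with the given probability, so the
          expected sample size is fraction * N without knowing N in advance."""
          for record in records:
              if rng.random() < fraction:
                  yield record

      # Hypothetical stand-in for streaming tens of millions of UniRef100 records
      all_records = (f"seq_{i}" for i in range(1_000_000))
      sample = list(subsample_records(all_records))
      print(len(sample), "records kept (about 1%)")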

  5. Caustics, counting maps and semi-classical asymptotics

    NASA Astrophysics Data System (ADS)

    Ercolani, N. M.

    2011-02-01

    This paper develops a deeper understanding of the structure and combinatorial significance of the partition function for Hermitian random matrices. The coefficients of the large N expansion of the logarithm of this partition function, also known as the genus expansion (and its derivatives), are generating functions for a variety of graphical enumeration problems. The main results are to prove that these generating functions are, in fact, specific rational functions of a distinguished irrational (algebraic) function, z0(t). This distinguished function is itself the generating function for the Catalan numbers (or generalized Catalan numbers, depending on the choice of weight of the parameter t). It is also a solution of the inviscid Burgers equation for certain initial data. The shock formation, or caustic, of the Burgers characteristic solution is directly related to the poles of the rational forms of the generating functions. As an intriguing application, one gains new insights into the relation between certain derivatives of the genus expansion, in a double-scaling limit, and the asymptotic expansion of the first Painlevé transcendent. This provides a precise expression of the Painlevé asymptotic coefficients directly in terms of the coefficients of the partial fractions expansion of the rational form of the generating functions established in this paper. Moreover, these insights point towards a more general program relating the first Painlevé hierarchy to the higher order structure of the double-scaling limit through the specific rational structure of generating functions in the genus expansion. The paper closes with a discussion of the relation of this work to recent developments in understanding the asymptotics of graphical enumeration. As a by-product, these results also yield new information about the asymptotics of recurrence coefficients for orthogonal polynomials with respect to exponential weights, the calculation of correlation functions for certain tied random walks on a 1D lattice, and the large time asymptotics of random matrix partition functions.
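
    For reference, in the standard normalization the generating function of the Catalan numbers mentioned above is the algebraic function below; whether the paper's z0(t) uses exactly this normalization depends on the chosen weight.

      z(t) \;=\; \sum_{n\ge 0} C_n\, t^{n} \;=\; \frac{1-\sqrt{1-4t}}{2t},
      \qquad C_n=\frac{1}{n+1}\binom{2n}{n},
      \qquad z(t) = 1 + t\,z(t)^{2}.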

  6. ARTS: automated randomization of multiple traits for study design.

    PubMed

    Maienschein-Cline, Mark; Lei, Zhengdeng; Gardeux, Vincent; Abbasi, Taimur; Machado, Roberto F; Gordeuk, Victor; Desai, Ankit A; Saraf, Santosh; Bahroos, Neil; Lussier, Yves

    2014-06-01

    Collecting data from large studies on high-throughput platforms, such as microarray or next-generation sequencing, typically requires processing samples in batches. There are often systematic but unpredictable biases from batch-to-batch, so proper randomization of biologically relevant traits across batches is crucial for distinguishing true biological differences from experimental artifacts. When a large number of traits are biologically relevant, as is common for clinical studies of patients with varying sex, age, genotype and medical background, proper randomization can be extremely difficult to prepare by hand, especially because traits may affect biological inferences, such as differential expression, in a combinatorial manner. Here we present ARTS (automated randomization of multiple traits for study design), which aids researchers in study design by automatically optimizing batch assignment for any number of samples, any number of traits and any batch size. ARTS is implemented in Perl and is available at github.com/mmaiensc/ARTS. ARTS is also available in the Galaxy Tool Shed, and can be used at the Galaxy installation hosted by the UIC Center for Research Informatics (CRI) at galaxy.cri.uic.edu. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
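
    The core idea, repeated random assignment scored for balance across all traits at once, fits in a few lines. The following sketch is illustrative only: it is not the ARTS implementation, and the squared-deviation balance score, the number of restarts, and the assumption that every sample has a value for every trait are choices made here for brevity.

      import random
      from collections import Counter

      def assign_batches(samples, traits, batch_size, n_tries=1000, seed=0):
          """Randomly assign samples to batches and keep the assignment whose
          trait levels are spread most evenly across batches.

          samples : list of sample ids
          traits  : dict of sample id -> dict of trait name -> level
          """
          rng = random.Random(seed)
          n_batches = -(-len(samples) // batch_size)        # ceiling division
          trait_names = {t for s in samples for t in traits[s]}
          best, best_score = None, float("inf")
          for _ in range(n_tries):
              order = samples[:]
              rng.shuffle(order)
              batches = [order[i::n_batches] for i in range(n_batches)]
              score = 0.0                                    # total squared imbalance
              for t in trait_names:
                  overall = Counter(traits[s][t] for s in samples)
                  for b in batches:
                      counts = Counter(traits[s][t] for s in b)
                      for level, n in overall.items():
                          expected = n * len(b) / len(samples)
                          score += (counts.get(level, 0) - expected) ** 2
              if score < best_score:
                  best, best_score = batches, score
          return best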

  7. A novel microseeding method for the crystallization of membrane proteins in lipidic cubic phase.

    PubMed

    Kolek, Stefan Andrew; Bräuning, Bastian; Stewart, Patrick Douglas Shaw

    2016-04-01

    Random microseed matrix screening (rMMS), in which seed crystals are added to random crystallization screens, is an important breakthrough in soluble protein crystallization that increases the number of crystallization hits that are available for optimization. This greatly increases the number of soluble protein structures generated every year by typical structural biology laboratories. Inspired by this success, rMMS has been adapted to the crystallization of membrane proteins, making LCP seed stock by scaling up LCP crystallization conditions without changing the physical and chemical parameters that are critical for crystallization. Seed crystals are grown directly in LCP and, as with conventional rMMS, a seeding experiment is combined with an additive experiment. The new method was used with the bacterial integral membrane protein OmpF, and it was found that it increased the number of crystallization hits by almost an order of magnitude: without microseeding one new hit was found, whereas with LCP-rMMS eight new hits were found. It is anticipated that this new method will lead to better diffracting crystals of membrane proteins. A method of generating seed gradients, which allows the LCP seed stock to be diluted and the number of crystals in each LCP bolus to be reduced, if required for optimization, is also demonstrated.

  8. Generation of random microstructures and prediction of sound velocity and absorption for open foams with spherical pores.

    PubMed

    Zieliński, Tomasz G

    2015-04-01

    This paper proposes and discusses an approach for designing and inspecting the morphology of sound-absorbing foams, using a relatively simple technique for randomly generating periodic microstructures representative of open-cell foams with spherical pores. The design is controlled by a few parameters: the total open porosity, the average pore size, and the standard deviation of pore size. These design parameters are set exactly and independently; however, setting the standard deviation of pore sizes requires a certain number of pores in the representative volume element (RVE), and this number is a parameter of the procedure. Another pore-structure parameter that may be indirectly affected is the average size of the windows linking the pores; it is only weakly controlled, through the maximal pore-penetration factor, and it also depends on the porosity and pore size. The proposed methodology for testing microstructure designs of sound-absorbing porous media applies multi-scale modeling in which important transport parameters, responsible for sound propagation in a porous medium, are calculated from the generated RVE in order to estimate the sound velocity and absorption of the designed material.

  9. IDGenerator: unique identifier generator for epidemiologic or clinical studies.

    PubMed

    Olden, Matthias; Holle, Rolf; Heid, Iris M; Stark, Klaus

    2016-09-15

    Creating study identifiers and assigning them to study participants is an important feature in epidemiologic studies, ensuring the consistency and privacy of the study data. The numbering system for identifiers needs to be random within certain number constraints, to carry extensions coding for organizational information, or to contain multiple layers of numbers per participant to diversify data access. Available software can generate globally-unique identifiers, but identifier-creating tools meeting the special needs of epidemiological studies are lacking. We have thus set out to develop a software program to generate IDs for epidemiological or clinical studies. Our software IDGenerator creates unique identifiers that not only carry a random identifier for a study participant, but also support the creation of structured IDs, where organizational information is coded into the ID directly. This may include study center (for multicenter-studies), study track (for studies with diversified study programs), or study visit (baseline, follow-up, regularly repeated visits). Our software can be used to add a check digit to the ID to minimize data entry errors. It facilitates the generation of IDs in batches and the creation of layered IDs (personal data ID, study data ID, temporary ID, external data ID) to ensure a high standard of data privacy. The software is supported by a user-friendly graphic interface that enables the generation of IDs in both standard text and barcode 128B format. Our software IDGenerator can create identifiers meeting the specific needs for epidemiologic or clinical studies to facilitate study organization and data privacy. IDGenerator is freeware under the GNU General Public License version 3; a Windows port and the source code can be downloaded at the Open Science Framework website: https://osf.io/urs2g/ .
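
    As an illustration of the kind of structured, error-tolerant identifier described above (not IDGenerator's actual format), the sketch below concatenates a center code, a visit code, and a zero-padded random body, then appends a Luhn-style mod-10 check digit so that single-digit entry errors are caught; all field widths and codes are made up.

      import random

      def check_digit(digits):
          """Luhn-style weighted mod-10 checksum over a string of digits."""
          total = 0
          for i, ch in enumerate(reversed(digits)):
              d = int(ch) * (2 if i % 2 == 0 else 1)
              total += d - 9 if d > 9 else d
          return str((10 - total % 10) % 10)

      def make_id(center, visit, rng, width=6):
          """Build an ID such as '03-B1-004217-5': center, visit, random body, check digit."""
          body = f"{rng.randrange(10**width):0{width}d}"
          digits = f"{center:02d}" + body          # numeric part covered by the checksum
          return f"{center:02d}-{visit}-{body}-{check_digit(digits)}"

      rng = random.Random(2016)
      print(make_id(center=3, visit="B1", rng=rng))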

  10. Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.

    PubMed

    Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk

    2017-01-01

    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1+1) EA and (1+[Formula: see text]) EA algorithms for the TSP in a smoothed complexity setting, and derive the lower bounds of the expected fitness gain for a specified number of generations.
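
    The fixed-budget perspective is easy to reproduce empirically. The sketch below is a generic Randomized Local Search on a random Euclidean TSP instance (2-opt neighbourhood, tour length as fitness); the instance size, budget, and acceptance rule are illustrative choices, not the exact algorithms analyzed in the paper.

      import math
      import random

      def tour_length(tour, pts):
          return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                     for i in range(len(tour)))

      def rls_tsp(pts, budget, seed=0):
          """RLS: apply one random 2-opt move per generation and accept it if the
          tour does not get longer; return the best length after each generation."""
          rng = random.Random(seed)
          n = len(pts)
          tour = list(range(n))
          rng.shuffle(tour)
          best = tour_length(tour, pts)
          history = []
          for _ in range(budget):
              i, j = sorted(rng.sample(range(n), 2))
              cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt segment reversal
              c = tour_length(cand, pts)
              if c <= best:
                  tour, best = cand, c
              history.append(best)
          return history

      rng = random.Random(1)
      pts = [(rng.random(), rng.random()) for _ in range(50)]
      print(rls_tsp(pts, budget=2000)[-1])   # best tour length reached within the fixed budget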

  11. Time Delay Measurements of Key Generation Process on Smart Cards

    DTIC Science & Technology

    2015-03-01

    random number generator is available (Chatterjee & Gupta, 2009). The ECC algorithm will grow in usage as information becomes more and more secure. Figure...Worldwide Mobile Enterprise Security Software 2012–2016 Forecast and Analysis), mobile identity and access management is expected to grow by 27.6 percent...iPad, tablets) as well as 80000 BlackBerry phones. The mobility plan itself will be deployed in three phases over 2014, with the first phase

  12. Uncrackable code for nuclear weapons

    ScienceCinema

    Hart, Mark

    2018-05-11

    Mark Hart, a scientist and engineer in Lawrence Livermore National Laboratory's (LLNL) Defense Technologies Division, has developed a new approach for ensuring nuclear weapons and their components can't fall prey to unauthorized use. The beauty of his approach: Let the weapon protect itself. "Using the random process of nuclear radioactive decay is the gold standard of random number generators," said Mark Hart. "You’d have a better chance of winning both Mega Millions and Powerball on the same day than getting control of IUC-protected components."

  13. Uncrackable code for nuclear weapons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Mark

    Mark Hart, a scientist and engineer in Lawrence Livermore National Laboratory's (LLNL) Defense Technologies Division, has developed a new approach for ensuring nuclear weapons and their components can't fall prey to unauthorized use. The beauty of his approach: Let the weapon protect itself. "Using the random process of nuclear radioactive decay is the gold standard of random number generators," said Mark Hart. "You’d have a better chance of winning both Mega Millions and Powerball on the same day than getting control of IUC-protected components."

  14. Intercellular Variability in Protein Levels from Stochastic Expression and Noisy Cell Cycle Processes

    PubMed Central

    Soltani, Mohammad; Vargas-Garcia, Cesar A.; Antunes, Duarte; Singh, Abhyudai

    2016-01-01

    Inside individual cells, expression of genes is inherently stochastic and manifests as cell-to-cell variability or noise in protein copy numbers. Since protein half-lives can be comparable to the cell-cycle length, randomness in cell-division times generates additional intercellular variability in protein levels. Moreover, as many mRNA/protein species are expressed at low-copy numbers, errors incurred in partitioning of molecules between two daughter cells are significant. We derive analytical formulas for the total noise in protein levels when the cell-cycle duration follows a general class of probability distributions. Using a novel hybrid approach, the total noise is decomposed into components arising from (i) stochastic expression, (ii) partitioning errors at the time of cell division, and (iii) random cell-division events. These formulas reveal that random cell-division times not only generate additional extrinsic noise, but also critically affect the mean protein copy numbers and intrinsic noise components. Counterintuitively, in some parameter regimes, noise in protein levels can decrease as cell-division times become more stochastic. Computations are extended to consider genome duplication, where the transcription rate is increased at a random point in the cell cycle. We systematically investigate how the timing of genome duplication influences different protein noise components. Intriguingly, results show that the noise contribution from stochastic expression is minimized at an optimal genome-duplication time. Our theoretical results motivate new experimental methods for decomposing protein noise levels from synchronized and asynchronized single-cell expression data. Characterizing the contributions of individual noise mechanisms will lead to precise estimates of gene expression parameters and techniques for altering stochasticity to change the phenotype of individual cells. PMID:27536771

  15. The Use of Monte Carlo Techniques to Teach Probability.

    ERIC Educational Resources Information Center

    Newell, G. J.; MacFarlane, J. D.

    1985-01-01

    Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…

  16. Computer programs and documentation

    NASA Technical Reports Server (NTRS)

    Speed, F. M.; Broadwater, S. L.

    1971-01-01

    Various statistical tests that were used to check out random number generators are described. A total of twelve different tests were considered, and from these, six were chosen to be used. The frequency test, max t test, run test, lag product test, gap test, and the matrix test are included.
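
    Two of the listed checks are simple enough to sketch directly from their textbook definitions (this is an illustration, not the report's FORTRAN code): a chi-square frequency test on binned uniform deviates, and a count of runs above/below the median compared with its approximate expectation.

      import random
      import statistics

      def frequency_test(samples, bins=10):
          """Chi-square statistic for uniformity of samples in [0, 1)."""
          counts = [0] * bins
          for u in samples:
              counts[min(int(u * bins), bins - 1)] += 1
          expected = len(samples) / bins
          return sum((c - expected) ** 2 / expected for c in counts)

      def runs_test(samples):
          """Observed number of runs above/below the median, and the approximate
          expectation n/2 + 1 for an independent sequence."""
          med = statistics.median(samples)
          signs = [u > med for u in samples if u != med]
          runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
          return runs, len(signs) / 2 + 1

      rng = random.Random(0)
      u = [rng.random() for _ in range(10000)]
      print(frequency_test(u))   # compare against the chi-square value for 9 d.o.f.
      print(runs_test(u))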

  17. Fast Magnetic Micropropellers with Random Shapes

    PubMed Central

    2015-01-01

    Studying propulsion mechanisms in low Reynolds number fluid has implications for many fields, ranging from the biology of motile microorganisms and the physics of active matter to micromixing in catalysis and micro- and nanorobotics. The propulsion of magnetic micropropellers can be characterized by a dimensionless speed, which solely depends on the propeller geometry for a given axis of rotation. However, this dependence has so far been only investigated for helical propeller shapes, which were assumed to be optimal. In order to explore a larger variety of shapes, we experimentally studied the propulsion properties of randomly shaped magnetic micropropellers. Surprisingly, we found that their dimensionless speeds are high on average, comparable to previously reported nanofabricated helical micropropellers. The highest dimensionless speed we observed is higher than that of any previously reported propeller moving in a low Reynolds number fluid, proving that physical random shape generation can be a viable optimization strategy. PMID:26383225

  18. Sunspot random walk and 22-year variation

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua

    2012-01-01

    We examine two stochastic models for consistency with observed long-term secular trends in sunspot number and a faint, but semi-persistent, 22-yr signal: (1) a null hypothesis, a simple one-parameter random-walk model of sunspot-number cycle-to-cycle change, and, (2) an alternative hypothesis, a two-parameter random-walk model with an imposed 22-yr alternating amplitude. The observed secular trend in sunspots, seen from solar cycle 5 to 23, would not be an unlikely result of the accumulation of multiple random-walk steps. Statistical tests show that a 22-yr signal can be resolved in historical sunspot data; that is, the probability is low that it would be realized from random data. On the other hand, the 22-yr signal has a small amplitude compared to random variation, and so it has a relatively small effect on sunspot predictions. Many published predictions for cycle 24 sunspots fall within the dispersion of previous cycle-to-cycle sunspot differences. The probability is low that the Sun will, with the accumulation of random steps over the next few cycles, walk down to a Dalton-like minimum. Our models support published interpretations of sunspot secular variation and 22-yr variation resulting from cycle-to-cycle accumulation of dynamo-generated magnetic energy.
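
    Both hypotheses amount to one-line simulations. The sketch below generates cycle-to-cycle amplitudes for the null one-parameter random walk and for the two-parameter walk with an imposed 22-yr (every-other-cycle) alternation; the step size, starting amplitude, and alternation amplitude are placeholders rather than the paper's fitted values.

      import random

      def sunspot_walk(n_cycles, step_sd, alt_amp=0.0, start=100.0, seed=0):
          """Random walk of sunspot-cycle amplitude; alt_amp = 0 gives the null
          model, a nonzero value adds an alternating 22-yr component to each step."""
          rng = random.Random(seed)
          amps = [start]
          for k in range(1, n_cycles):
              step = rng.gauss(0.0, step_sd) + alt_amp * (-1) ** k
              amps.append(max(0.0, amps[-1] + step))   # amplitudes cannot go negative
          return amps

      print(sunspot_walk(19, step_sd=30.0))                 # null hypothesis, ~cycles 5-23
      print(sunspot_walk(19, step_sd=30.0, alt_amp=10.0))   # with 22-yr alternation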

  19. Combined rule extraction and feature elimination in supervised classification.

    PubMed

    Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E

    2012-09-01

    There are a vast number of biology related research problems involving a combination of multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources. Thus it will be beneficial to have a good algorithm to simultaneously extract rules and select features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.

  20. Identification of cancer-specific motifs in mimotope profiles of serum antibody repertoire.

    PubMed

    Gerasimov, Ekaterina; Zelikovsky, Alex; Măndoiu, Ion; Ionov, Yurij

    2017-06-07

    For fighting cancer, earlier detection is crucial. Circulating auto-antibodies produced by the patient's own immune system after exposure to cancer proteins are promising bio-markers for the early detection of cancer. Since an antibody recognizes not the whole antigen but 4-7 critical amino acids within the antigenic determinant (epitope), the whole proteome can be represented by a random peptide phage display library. This opens the possibility of developing an early cancer detection test based on a set of peptide sequences identified by comparing cancer patients' and healthy donors' global peptide profiles of antibody specificities. Due to the enormously large number of peptide sequences contained in global peptide profiles generated by next-generation sequencing, a large number of cancer and control sera are required to identify cancer-specific peptides with a high degree of statistical significance. To decrease the number of peptides in profiles generated by next-generation sequencing without losing cancer-specific sequences, we generated the profiles from a phage library enriched by panning on a pool of cancer sera. To further decrease the complexity of the profiles, we used computational methods to transform the list of peptides constituting the mimotope profiles into a list of motifs formed by similar peptide sequences. We have shown that the amino-acid order in mimotope motifs is meaningful, since the motifs contain significantly more peptides than motifs found among peptides whose amino acids are randomly permuted. The single-sample motifs also differ significantly from motifs in peptides drawn from multiple samples. Finally, multiple cancer-specific motifs have been identified.

  1. Generating moment matching scenarios using optimization techniques

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2013-05-16

    An optimization-based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
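
    In generic form (not necessarily the paper's exact notation), the scenario-generation task asks for nonnegative weights and points that reproduce a prescribed set of integrals of basis functions against the underlying distribution:

      % Find scenarios x_1,...,x_N in Xi and weights w_i >= 0 such that the chosen
      % basis functions p_k (e.g. monomials) are matched in expectation:
      \[
        \sum_{i=1}^{N} w_i\, p_k(x_i) \;=\; \int_{\Xi} p_k(x)\, \mathrm{d}\mu(x),
        \qquad k = 1, \dots, K, \qquad w_i \ge 0 .
      \]
      % Tchakaloff-type theorems guarantee that such a representation exists with a
      % number of points N comparable to the number K of matched functions.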

  2. A self-organizing learning account of number-form synaesthesia.

    PubMed

    Makioka, Shogo

    2009-09-01

    Some people automatically and involuntarily "see" mental images of numbers in spatial arrays when they think of numbers. This phenomenon, called number forms, shares three key characteristics with other types of synaesthesia: within-individual consistency, between-individual variety, and a mixture of regularity and randomness. A theoretical framework called SOLA (self-organizing learning account of number forms) is proposed, which explains the generation process of number forms and the origin of these three characteristics. The simulations replicated the qualitative properties of the shapes of number forms: that numbers are aligned in order of size, that discontinuities usually occur at carry points, and that continuous lines tend to have many bends.

  3. Next-generation narrow band imaging system for colonic polyp detection: a prospective multicenter randomized trial.

    PubMed

    Horimatsu, Takahiro; Sano, Yasushi; Tanaka, Shinji; Kawamura, Takuji; Saito, Shoichi; Iwatate, Mineo; Oka, Shiro; Uno, Koji; Yoshimura, Kenichi; Ishikawa, Hideki; Muto, Manabu; Tajiri, Hisao

    2015-07-01

    Previous studies have yielded conflicting results on the colonic polyp detection rate with narrow-band imaging (NBI) compared with white-light imaging (WLI). We compared the mean number of colonic polyps detected per patient for NBI versus WLI using a next-generation NBI system (EVIS LUCERA ELITE; Olympus Medical Systems) used with standard-definition (SD) colonoscopy and wide-angle (WA) colonoscopy. This was a 2 × 2 factorial, prospective, multicenter randomized controlled trial conducted at five academic centers in Japan. Patients were allocated to one of four groups: (1) WLI with SD colonoscopy (H260AZI), (2) NBI with SD colonoscopy (H260AZI), (3) WLI with WA colonoscopy (CF-HQ290), and (4) NBI with WA colonoscopy (CF-HQ290). The mean numbers of polyps detected per patient were compared between the four groups: WLI with/without WA colonoscopy and NBI with/without WA colonoscopy. Of the 454 patients recruited, 431 patients were enrolled. The total numbers of polyps detected by WLI with SD, NBI with SD, WLI with WA, and NBI with WA were 164, 176, 188, and 241, respectively. The mean number of polyps detected per patient was significantly higher in the NBI group than in the WLI group (2.01 vs 1.56; P = 0.032). The rate was not higher in the WA group than in the SD group (1.97 vs 1.61; P = 0.089). Although WA colonoscopy did not improve polyp detection, next-generation NBI colonoscopy represents a significant improvement in the detection of colonic polyps.

  4. Coloring geographical threshold graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Percus, Allon; Muller, Tobias

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ ln ln n / ln n = 1 + o(1). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)², and specify the constant C.

  5. 25 CFR 543.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... software upgrades, data storage media replacement, etc.). The information recorded must be used when...., draw objects and back-up draw objects); and (ii) Random number generator software. (Additional information technology security standards can be found in § 543.16 of this part.) (2) The game software...

  6. 25 CFR 543.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... software upgrades, data storage media replacement, etc.). The information recorded must be used when...., draw objects and back-up draw objects); and (ii) Random number generator software. (Additional information technology security standards can be found in § 543.16 of this part.) (2) The game software...

  7. Simulations of Probabilities for Quantum Computing

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without the use of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities, and their link to quantum computations, are discussed.

  8. Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis

    PubMed Central

    Hensen, B.; Kalb, N.; Blok, M. S.; Dréau, A. E.; Reiserer, A.; Vermeulen, R. F. L.; Schouten, R. N.; Markham, M.; Twitchen, D. J.; Goodenough, K.; Elkouss, D.; Wehner, S.; Taminiau, T. H.; Hanson, R.

    2016-01-01

    The recently reported violation of a Bell inequality using entangled electronic spins in diamonds (Hensen et al., Nature 526, 682–686) provided the first loophole-free evidence against local-realist theories of nature. Here we report on data from a second Bell experiment using the same experimental setup with minor modifications. We find a violation of the CHSH-Bell inequality of 2.35 ± 0.18, in agreement with the first run, yielding an overall value of S = 2.38 ± 0.14. We calculate the resulting P-values of the second experiment and of the combined Bell tests. We provide an additional analysis of the distribution of settings choices recorded during the two tests, finding that the observed distributions are consistent with uniform settings for both tests. Finally, we analytically study the effect of particular models of random number generator (RNG) imperfection on our hypothesis test. We find that the winning probability per trial in the CHSH game can be bounded knowing only the mean of the RNG bias. This implies that our experimental result is robust for any model underlying the estimated average RNG bias, for random bits produced up to 690 ns too early by the random number generator. PMID:27509823

  9. Pseudorandom Number Generators for Mobile Devices: An Examination and Attempt to Improve Randomness

    DTIC Science & Technology

    2013-09-01


  10. Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2.

    PubMed

    Alharbi, Abdullah; Armstrong, Darren; Alharbi, Somayah; Shahrjerdi, Davood

    2017-12-26

    Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark-Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.

  11. Secure distance ranging by electronic means

    DOEpatents

    Gritton, Dale G.

    1992-01-01

    A system for secure distance ranging between a reader 11 and a tag 12 wherein the distance between the two is determined by the time it takes to propagate a signal from the reader to the tag and for a responsive signal to return, and in which such time is random and unpredictable, except to the reader, even though the distance between the reader and tag remains the same. A random number (19) is sent from the reader and encrypted (26) by the tag into a number having 16 segments of 4 bits each (28). A first tag signal (31) is sent after such encryption. In response, a random width start pulse (13) is generated by the reader. When received in the tag, the width of the start pulse is measured (41) in the tag and a segment of the encrypted number is selected (42) in accordance with such width. A second tag pulse is generated at a time T after the start pulse arrives at the tag, the time T being dependent on the length of a variable time delay t_v which is determined by the value of the bits in the selected segment of the encrypted number. At the reader, the total time from the beginning of the start pulse to the receipt of the second tag signal is measured (36, 37). The value of t_v (21, 22, 23, 34) is known at the reader and the time T is subtracted (46) from the total time to find the actual propagation t_p for signals to travel between the reader 11 and tag 12. The propagation time is then converted into distance (46).

  12. Finite-Difference Modeling of Seismic Wave Scattering in 3D Heterogeneous Media: Generation of Tangential Motion from an Explosion Source

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.

    2015-12-01

    One challenging task in explosion seismology is development of physical models for explaining the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography are necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained, however, small-scale heterogeneity is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first is done in the spatial domain by essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second method consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method is capable of generating distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is only present in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0 - 10 Hz). This occurs partially through S-wave refraction, but mostly by P-S and Rg-S waves conversions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
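
    The second (wavenumber-domain) approach is easy to illustrate in one dimension. The sketch below is a toy analogue of that kind of spectral filtering, not the authors' 3D implementation: white noise is shaped by a Gaussian wavenumber filter, giving a random velocity perturbation with a Gaussian-type autocorrelation whose width is set by the correlation length (the exact length convention is a choice made here).

      import numpy as np

      def gaussian_random_medium(n, dx, corr_len, sigma, seed=0):
          """1D random velocity perturbation: filter white noise in the wavenumber
          domain with a Gaussian-shaped filter and rescale to the requested
          standard deviation."""
          rng = np.random.default_rng(seed)
          noise = rng.standard_normal(n)
          k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)        # angular wavenumber
          filt = np.exp(-(k * corr_len) ** 2 / 4.0)       # Gaussian low-pass filter
          field = np.fft.irfft(np.fft.rfft(noise) * filt, n)
          return sigma * field / field.std()

      dv = gaussian_random_medium(n=4096, dx=10.0, corr_len=200.0, sigma=0.05)
      print(dv.std())   # e.g. a 5% fractional velocity perturbation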

  13. Anosov C-systems and random number generators

    NASA Astrophysics Data System (ADS)

    Savvidy, G. K.

    2016-08-01

    We further develop our previous proposal to use hyperbolic Anosov C-systems to generate pseudorandom numbers and to use them for efficient Monte Carlo calculations in high energy particle physics. All trajectories of hyperbolic dynamical systems are exponentially unstable, and C-systems therefore have mixing of all orders, a countable Lebesgue spectrum, and a positive Kolmogorov entropy. These exceptional ergodic properties follow from the C-condition introduced by Anosov. This condition defines a rich class of dynamical systems forming an open set in the space of all dynamical systems. An important property of C-systems is that they have a countable set of everywhere dense periodic trajectories and their density increases exponentially with entropy. Of special interest are the C-systems defined on higher-dimensional tori. Such C-systems are excellent candidates for generating pseudorandom numbers that can be used in Monte Carlo calculations. An efficient algorithm was recently constructed that allows generating long C-system trajectories very rapidly. These trajectories have good statistical properties and can be used for calculations in quantum chromodynamics and in high energy particle physics.

  14. Comparison of Password Generator between Coupled Linear Congruential Generator (CLCG) and Linear Congruential Generator (LCG)

    NASA Astrophysics Data System (ADS)

    Imamah; Djunaidy, A.; Rachmad, A.; Damayanti, F.

    2018-01-01

    Passwords are needed to access computing services. A text password is a combination of characters, numbers, and symbols. One issue is that users often choose guessable passwords, e.g., a date of birth, the name of a pet, or an anniversary date. To address this issue, we proposed a password generator using the Coupled Linear Congruential Generator (CLCG) method. CLCG is intended to overcome the weaknesses of the Linear Congruential Generator (LCG). In this research, we compare the quality of random passwords generated by CLCG with those generated by the LCG method. The results show that the highest password strength is obtained by CLCG, with a score of 77.4%. We also show that the LCG formulation carries over to CLCG.
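
    For reference, a linear congruential generator and one common way of coupling two of them are shown below. The abstract does not specify the exact coupling used in the paper, so the XOR combination, the constants, and the password mapping here are purely illustrative; note also that LCG-based generators are predictable and therefore unsuitable for real password generation.

      def lcg(seed, a=1103515245, c=12345, m=2**31):
          """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
          x = seed
          while True:
              x = (a * x + c) % m
              yield x

      def coupled_lcg(seed1, seed2):
          """Illustrative coupling: XOR the outputs of two LCGs with different
          parameters, which disturbs the lattice structure of a single LCG."""
          g1 = lcg(seed1)
          g2 = lcg(seed2, a=1664525, c=1013904223, m=2**32)
          while True:
              yield next(g1) ^ next(g2)

      CHARSET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%"

      def password(gen, length=12):
          # modulo mapping introduces a small bias; acceptable for a sketch
          return "".join(CHARSET[next(gen) % len(CHARSET)] for _ in range(length))

      print(password(coupled_lcg(2018, 97)))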

  15. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  16. Generation of wideband chaos with suppressed time-delay signature by delayed self-interference.

    PubMed

    Wang, Anbang; Yang, Yibiao; Wang, Bingjie; Zhang, Beibei; Li, Lei; Wang, Yuncai

    2013-04-08

    We demonstrate experimentally and numerically a method that uses the incoherent delayed self-interference (DSI) of chaotic light from a semiconductor laser with optical feedback to generate a wideband chaotic signal. The results show that DSI can eliminate the domination of the laser relaxation oscillation present in the chaotic laser light and therefore flatten and widen the power spectrum. Furthermore, DSI suppresses the time-delay signature induced by external cavity modes and improves the symmetry of the probability distribution by more than an order of magnitude. We also show experimentally that the DSI signal is beneficial for random number generation.

  17. NEMAR plotting computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program which generates CalComp plots of trajectory parameters is examined. The trajectory parameters are calculated and placed on a data file by the Near Earth Mission Analysis Routine computer program. The plot program accesses the data file and generates the plots as defined by inputs to the plot program. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included. Although this plot program utilizes a random access data file, a data file of the same type and formatted in 102 numbers per record could be generated by any computer program and used by this plot program.

  18. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_{λ } ≥ 7.0 and M_{λ } ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_{σ } ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_{λ } ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_{λ } ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_{λ } ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_{λ } ≥ 7.0 in each catalog and found the statistical range of β values. The observed value of β = 0.83 for the CMT catalog corresponds to a p value of p=0.004 leading us to conclude that the interevent natural times in the CMT catalog are not random. For the time series analysis, we calculated the autocorrelation function for the sequence of natural time intervals between large global earthquakes and again compared with data from 1.5 × 10^4 synthetic catalogs of random data. In this case, the spread of autocorrelation values was much larger, so we concluded that this approach is insensitive to deviations from random behavior.
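
    The natural-time bookkeeping is easy to reproduce on a synthetic catalog. The sketch below draws Gutenberg-Richter magnitudes (b = 1) above the completeness level, counts smaller events between successive M >= 7.0 events, and estimates the Weibull shape β of those interevent counts with a rough moment-based rule of thumb rather than the paper's fitting procedure; the catalog size is arbitrary.

      import math
      import random
      import statistics

      def gr_catalog(n_events, m_min=5.1, b=1.0, seed=0):
          """Synthetic Gutenberg-Richter magnitudes: exponential above m_min."""
          rng = random.Random(seed)
          beta = b * math.log(10)
          return [m_min + rng.expovariate(beta) for _ in range(n_events)]

      def interevent_counts(mags, m_large=7.0):
          """Natural-time intervals: number of smaller events between successive
          events of magnitude >= m_large."""
          counts, n = [], 0
          for m in mags:
              if m >= m_large:
                  counts.append(n)
                  n = 0
              else:
                  n += 1
          return counts[1:]        # drop the partial count before the first large event

      mags = gr_catalog(200000)
      counts = interevent_counts(mags)
      cv = statistics.stdev(counts) / statistics.mean(counts)
      beta_hat = cv ** -1.086      # rough moment-based Weibull shape estimate
      print(len(counts), round(beta_hat, 2))   # a random catalog should give beta close to 1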

  19. Random technique to encode complex valued holograms with on axis reconstruction onto phase-only displays.

    PubMed

    Luis Martínez Fuentes, Jose; Moreno, Ignacio

    2018-03-05

    A new technique for encoding the amplitude and phase of diffracted fields in digital holography is proposed. It is based on a random spatial multiplexing of two phase-only diffractive patterns. The first one is the phase information of the intended pattern, while the second one is a diverging optical element whose purpose is the control of the amplitude. A random number determines the choice between these two diffractive patterns at each pixel, and the amplitude information of the desired field governs its discrimination threshold. This proposed technique is computationally fast and does not require iterative methods, and the complex field reconstruction appears on axis. We experimentally demonstrate this new encoding technique with holograms implemented onto a flicker-free phase-only spatial light modulator (SLM), which allows the axial generation of such holograms. The experimental verification includes the phase measurement of generated patterns with a phase-shifting polarization interferometer implemented in the same experimental setup.
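
    The encoding rule itself takes only a few lines. The sketch below is a toy version of the idea (the target field, the quadratic stand-in for the diverging element, and the normalization are simplifications, not the experimental parameters): at each pixel a uniform random number is compared with the normalized amplitude, and the pixel takes either the signal phase or the diverging-element phase.

      import numpy as np

      def encode_random_multiplex(field, diverging_phase, seed=0):
          """Phase-only hologram by random spatial multiplexing: keep the signal
          phase with probability equal to the normalized amplitude, otherwise use
          the diverging-element phase (which sends light away from the axis)."""
          rng = np.random.default_rng(seed)
          amp = np.abs(field) / np.abs(field).max()
          choose_signal = rng.random(field.shape) < amp      # amplitude sets the threshold
          return np.where(choose_signal, np.angle(field), diverging_phase)

      # Toy target: tilted plane wave with a Gaussian amplitude envelope
      y, x = np.mgrid[0:256, 0:256]
      r2 = (x - 128) ** 2 + (y - 128) ** 2
      field = np.exp(2j * np.pi * x / 16) * np.exp(-r2 / 2000)
      lens_phase = np.mod(np.pi * r2 / 900.0, 2 * np.pi) - np.pi   # quadratic, lens-like phase
      hologram = encode_random_multiplex(field, lens_phase)
      print(hologram.shape, float(hologram.min()), float(hologram.max()))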

  20. Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Two types of rolling-element bearings, representing radially loaded and thrust-loaded bearings, were used for this study. Three hundred forty (340) virtual bearing sets totaling 31400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5321 bearings. A simple algebraic relation was established for the upper and lower L10 life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo-generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and the bearing data correlated. There was excellent agreement between the percentage of individual components that failed in the Monte Carlo simulation and that predicted.
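
    The Monte Carlo procedure is conceptually simple: draw individual lives from a two-parameter Weibull distribution, assemble virtual bearing sets, and look at the scatter of the per-set L10 estimates around the population value. The sketch below keeps only that skeleton; the Weibull slope of 1.5 comes from the text, while the characteristic life, set size, and the crude rank-based L10 estimator are arbitrary choices.

      import math
      import random
      import statistics

      def l10_estimate(lives):
          """Crude rank-based L10 estimate: the life at the 10th-percentile rank."""
          s = sorted(lives)
          return s[max(0, int(round(0.10 * len(s))) - 1)]

      def monte_carlo_l10(n_sets=340, set_size=30, slope=1.5, eta=100.0, seed=0):
          """Draw virtual bearing sets with Weibull-distributed lives and return the
          scatter of their L10 estimates relative to the population L10."""
          rng = random.Random(seed)
          l10_pop = eta * (-math.log(0.9)) ** (1.0 / slope)   # life exceeded by 90% of bearings
          ratios = []
          for _ in range(n_sets):
              lives = [rng.weibullvariate(eta, slope) for _ in range(set_size)]
              ratios.append(l10_estimate(lives) / l10_pop)
          return statistics.median(ratios), min(ratios), max(ratios)

      print(monte_carlo_l10())   # median ratio roughly near 1, with wide scatter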

  1. Disassortativity of random critical branching trees

    NASA Astrophysics Data System (ADS)

    Kim, J. S.; Kahng, B.; Kim, D.

    2009-06-01

    Random critical branching trees (CBTs) are generated by the multiplicative branching process, where the branching number is determined stochastically, independently of the degree of the ancestor. Here we show analytically that despite this stochastic independence, a degree-degree correlation (DDC) exists in the CBT and is disassortative. Moreover, the skeletons of fractal networks, the maximum spanning trees formed by edge betweenness centrality, behave similarly to the CBT with respect to the DDC. This analytic solution and observation support the argument that fractal scaling in complex networks originates from disassortativity in the DDC.

  2. Occupational Opportunities in Nebraska: 1974 Report.

    ERIC Educational Resources Information Center

    Nebraska Occupational Needs Research Coordinating Unit, Lincoln.

    The seventh annual study of occupational opportunities in Nebraska reflects State manpower needs and trends as revealed by employer listings (number of persons employed, job duties of persons employed, and a projection of future needs). A 5 percent random sample was generated from the revised master list of employers for each of the six State…

  3. ENZVU--An Enzyme Kinetics Computer Simulation Based upon a Conceptual Model of Enzyme Action.

    ERIC Educational Resources Information Center

    Graham, Ian

    1985-01-01

    Discusses a simulation on enzyme kinetics based upon the ability of computers to generate random numbers. The program includes: (1) enzyme catalysis in a restricted two-dimensional grid; (2) visual representation of catalysis; and (3) storage and manipulation of data. Suggested applications and conclusions are also discussed. (DH)

  4. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…

  5. Latin and Magic Squares

    ERIC Educational Resources Information Center

    Emanouilidis, Emanuel

    2005-01-01

    Latin squares have existed for hundreds of years but it wasn't until rather recently that Latin squares were used in other areas such as statistics, graph theory, coding theory and the generation of random numbers as well as in the design and analysis of experiments. This note describes Latin and diagonal Latin squares, a method of constructing…

  6. Latin and Cross Latin Squares

    ERIC Educational Resources Information Center

    Emanouilidis, Emanuel

    2008-01-01

    Latin squares were first introduced and studied by the famous mathematician Leonhard Euler in the 1700s. Through the years, Latin squares have been used in areas such as statistics, graph theory, coding theory, the generation of random numbers as well as in the design and analysis of experiments. Recently, with the international popularity of…

  7. SimEngine v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Hai D.

    2017-03-02

    SimEngine provides the core functionalities and components that are key to the development of discrete event simulation tools. These include events, activities, event queues, random number generators, and basic result tracking classes. SimEngine was designed for high performance, integrates seamlessly into any Microsoft .Net development environment, and provides a flexible API for simulation developers.

  8. The statistics of laser returns from cube-corner arrays on satellite

    NASA Technical Reports Server (NTRS)

    Lehr, C. G.

    1973-01-01

    A method first presented by Goodman is used to derive an equation for the statistical effects associated with laser returns from satellites having retroreflecting arrays of cube corners. The effect of the distribution on the returns of a satellite-tracking system is illustrated by a computation based on randomly generated numbers.

  9. Conflicts in Science the Classroom: Documentation and Management through Phenomenological Methodology

    ERIC Educational Resources Information Center

    Oloruntegbe, K. O.; Omoniyi, A. O.; Omoniyi, M. B. I.; Ojelade, I. A.

    2011-01-01

    The study investigated the nature of conflicts that are generated in the science classroom. Twenty video-recorded lessons taught by 10 randomly selected pre-service science teachers in teaching practice in a few Nigerian secondary schools were analyzed. Beside the expected goal attainment of the lessons a number of negative conflicts were…

  10. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by semi-randomly varying routing policies for different packets

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-11-23

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Nodes vary a choice of routing policy for routing data in the network in a semi-random manner, so that similarly situated packets are not always routed along the same path. Semi-random variation of the routing policy tends to avoid certain local hot spots of network activity, which might otherwise arise using more consistent routing determinations. Preferably, the originating node chooses a routing policy for a packet, and all intermediate nodes in the path route the packet according to that policy. Policies may be rotated on a round-robin basis, selected by generating a random number, or otherwise varied.

  11. Effects of inbreeding on economic traits of channel catfish.

    PubMed

    Bondari, K; Dunham, R A

    1987-05-01

    Inbred channel catfish (Ictalurus punctatus) were produced from two generations of full-sib matings to study the effect of inbreeding on reproduction, growth and survival. A randomly mated control line was propagated from the same base population to be used for the evaluation of the inbred fish. First generation inbred (I1) and control (C1) lines comprised five full-sib families each. Second generation inbred (I2) and control (C2) lines were produced by mating each male catfish from the I1 or C1 line to two females in sequence, one from the I1 and one from the C1 line. The design also produced two reciprocal outcross lines to be compared to their contemporary inbred and control lines. The coefficient of inbreeding for the inbred line increased from 0.25 in generation 1 to 0.375 in generation 2. The inbreeding coefficient was zero for all other lines. The resulting fish were performance-tested in two locations, Tifton, Georgia, and Auburn, Alabama, and no genotype-environment interactions occurred. Results indicated that one generation of inbreeding increased the number of days required for eggs to hatch by 21%, but did not significantly influence spawn weight or hatchability score. However, inbred females produced more eggs/kg body weight than control females. Two generations of full-sib mating in Georgia did not depress weight when expressed as a deviation from the random controls, but weight was depressed 13-16% when expressed as a deviation from the half-sib outcrosses. Second generation inbreds produced in Alabama exhibited a 19% depression for growth rate when compared to either random or half-sib outcross controls. Survival rates at various age intervals were not decreased by inbreeding. The amount of inbreeding depression varied among families and between sexes.

  12. Exact Markov chains versus diffusion theory for haploid random mating.

    PubMed

    Tyvand, Peder A; Thorvaldsen, Steinar

    2010-05-01

    Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
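
    A minimal neutral simulation of the two models makes the comparison concrete. The sketch below tracks the frequency of one allele at a single biallelic locus with no selection or mutation; the population size, starting frequency, and run lengths are arbitrary, and roughly n_genes Moran birth-death events correspond to one full population turnover.

      import random

      def wright_fisher(n_genes, p0, generations, seed=0):
          """Neutral Wright-Fisher model: each generation, all n_genes gene copies
          are resampled binomially from the current allele frequency."""
          rng = random.Random(seed)
          count = round(p0 * n_genes)
          traj = [count / n_genes]
          for _ in range(generations):
              count = sum(rng.random() < count / n_genes for _ in range(n_genes))
              traj.append(count / n_genes)
          return traj

      def moran(n_genes, p0, steps, seed=0):
          """Neutral Moran model: per step, one randomly chosen copy reproduces
          and one randomly chosen copy dies (overlapping generations)."""
          rng = random.Random(seed)
          count = round(p0 * n_genes)
          traj = [count / n_genes]
          for _ in range(steps):
              birth = rng.random() < count / n_genes     # reproducing copy carries the allele?
              death = rng.random() < count / n_genes     # dying copy carries the allele?
              count += int(birth) - int(death)
              traj.append(count / n_genes)
          return traj

      print(wright_fisher(100, 0.5, 50)[-1], moran(100, 0.5, 50 * 100)[-1])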

  13. Models for the hotspot distribution

    NASA Technical Reports Server (NTRS)

    Jurdy, Donna M.; Stefanick, Michael

    1990-01-01

    Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.

  14. NHash: Randomized N-Gram Hashing for Distributed Generation of Validatable Unique Study Identifiers in Multicenter Research.

    PubMed

    Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D; Cui, Licong

    2015-11-10

    A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (or validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. This paper introduces a novel method called "Randomized N-gram Hashing (NHash)," for generating unique study identifiers in a distributed and validatable fashion, in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk of identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. NHash consists of 2 phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has 3 components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output for Step 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk). In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher. The result of the encryption, concatenated with the random number r, is the final NHash study identifier. We performed experiments using a large synthesized dataset comparing NHash with random strings, and demonstrated a negligible probability of collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute for Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy patients (SUDEP). The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash.
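
    Phase 1 and the shift-cipher step translate almost line for line into code. The sketch below follows the description in the abstract but is not the CSR implementation: the number of fields, the n-gram lengths, the alphabet used by the shift cipher, and the use of a single random number r for all fields (the description draws a separate r_i per function) are simplifications made here.

      import random
      import string

      ALPHABET = string.ascii_uppercase + string.digits    # symbols handled by the shift cipher

      def ngram_hash(r, n, m):
          """f(r, n, m): the n-gram of input string m starting at position r mod |m|."""
          s = r % len(m)
          return (m + m)[s:s + n]             # wrap around so the n-gram always has length n

      def shift_cipher(text, shift):
          return "".join(ALPHABET[(ALPHABET.index(c) + shift) % len(ALPHABET)]
                         for c in text if c in ALPHABET)

      def nhash(fields, n_lengths, shift=7, seed=None):
          """Phase 1: concatenate n-gram hashes of the input fields;
          Phase 2: apply a shift cipher and append the random number r."""
          rng = random.Random(seed)
          r = rng.randrange(10**6)
          intermediate = "".join(ngram_hash(r, n, m.upper())
                                 for n, m in zip(n_lengths, fields))
          return shift_cipher(intermediate, shift) + f"-{r:06d}"

      # Illustrative call with three made-up input fields:
      print(nhash(["SITE07", "PROTOCOL-A", "2015-11-10"], n_lengths=[3, 4, 3], seed=1))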

  15. Benefits of Reiki therapy for a severely neutropenic patient with associated influences on a true random number generator.

    PubMed

    Morse, Melvin L; Beem, Lance W

    2011-12-01

    Reiki therapy is documented for relief of pain and stress. Energetic healing has been documented to alter biologic markers of illness such as hematocrit. True random number generators are reported to be affected by energy healers and spiritually oriented conscious awareness. The patient was a then 54-year-old severely ill man who had hepatitis C types 1 and 2 and who did not improve with conventional therapy. He also suffered from obesity, the metabolic syndrome, asthma, and hypertension. He was treated with experimental high-dose interferon/ribavirin therapy with resultant profound anemia and neutropenia. Energetic healing and Reiki therapy were administered initially to enhance the patient's sense of well-being and to relieve anxiety. Possible effects on the patient's absolute neutrophil count and hematocrit were incidentally noted. Reiki therapy was then initiated at times of profound neutropenia to assess its possible effect on the patient's absolute neutrophil count (ANC). Reiki and other energetic healing sessions were monitored with a true random number generator (RNG). Statistically significant relationships were documented between Reiki therapy, a quieting of the electronically created white noise of the RNG during healing sessions, and improvement in the patient's ANC. The immediate clinical result was that the patient could tolerate the high-dose interferon regimen without missing doses because of absolute neutropenia. The patient was initially a late responder to interferon and had been given a 5% chance of clearing the virus. He remains clear of the virus 1 year after treatment. The association between changes in the RNG, Reiki therapy, and a patient's ANC is, to the authors' knowledge, the first in the medical literature. Future studies assessing the effects of energetic healing on specific biologic markers of disease are anticipated. Concurrent use of a true RNG may prove to correlate with the effectiveness of energetic therapy.

  16. Benefits of Reiki Therapy for a Severely Neutropenic Patient with Associated Influences on a True Random Number Generator

    PubMed Central

    Beem, Lance W.

    2011-01-01

    Background: Reiki therapy is documented for relief of pain and stress. Energetic healing has been documented to alter biologic markers of illness such as hematocrit. True random number generators are reported to be affected by energy healers and spiritually oriented conscious awareness. Methods: The patient was a then 54-year-old severely ill man who had hepatitis C types 1 and 2 and who did not improve with conventional therapy. He also suffered from obesity, the metabolic syndrome, asthma, and hypertension. He was treated with experimental high-dose interferon/ribavirin therapy with resultant profound anemia and neutropenia. Energetic healing and Reiki therapy were administered initially to enhance the patient's sense of well-being and to relieve anxiety. Possible effects on the patient's absolute neutrophil count and hematocrit were incidentally noted. Reiki therapy was then initiated at times of profound neutropenia to assess its possible effect on the patient's absolute neutrophil count (ANC). Reiki and other energetic healing sessions were monitored with a true random number generator (RNG). Results: Statistically significant relationships were documented between Reiki therapy, a quieting of the electronically created white noise of the RNG during healing sessions, and improvement in the patient's ANC. The immediate clinical result was that the patient could tolerate the high-dose interferon regimen without missing doses because of absolute neutropenia. The patient was initially a late responder to interferon and had been given a 5% chance of clearing the virus. He remains clear of the virus 1 year after treatment. Conclusions: The association between changes in the RNG, Reiki therapy, and a patient's ANC is, to the authors' knowledge, the first in the medical literature. Future studies assessing the effects of energetic healing on specific biologic markers of disease are anticipated. Concurrent use of a true RNG may prove to correlate with the effectiveness of energetic therapy. PMID:22132706

  17. Polarization chaos and random bit generation in nonlinear fiber optics induced by a time-delayed counter-propagating feedback loop.

    PubMed

    Morosi, J; Berti, N; Akrout, A; Picozzi, A; Guasoni, M; Fatome, J

    2018-01-22

    In this manuscript, we experimentally and numerically investigate the chaotic dynamics of the state of polarization in a nonlinear optical fiber due to the cross-interaction between an incident signal and its intense backward replica generated at the fiber end through an amplified reflective delayed loop. Thanks to the cross-polarization interaction between the two delayed counter-propagating waves, the output polarization exhibits fast temporal chaotic dynamics, which enable a powerful scrambling process with scrambling speeds of up to 600 krad/s. The performance of this all-optical scrambler was then evaluated on a 10-Gbit/s On/Off Keying telecom signal, achieving error-free transmission. We also describe how these temporal and chaotic polarization fluctuations can be exploited as an all-optical random number generator. To this end, a billion-bit sequence was experimentally generated and successfully tested against the dieharder statistical benchmark suite. Our experimental analyses are supported by numerical simulations based on the resolution of counter-propagating coupled nonlinear propagation equations that confirm the observed behaviors.

  18. Dynamic defense and network randomization for computer systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Adrian R.; Stout, William M. S.; Hamlet, Jason R.

    The various technologies presented herein relate to determining a network attack is taking place, and further to adjust one or more network parameters such that the network becomes dynamically configured. A plurality of machine learning algorithms are configured to recognize an active attack pattern. Notification of the attack can be generated, and knowledge gained from the detected attack pattern can be utilized to improve the knowledge of the algorithms to detect a subsequent attack vector(s). Further, network settings and application communications can be dynamically randomized, wherein artificial diversity converts control systems into moving targets that help mitigate the early reconnaissance stages of an attack. An attack(s) based upon a known static address(es) of a critical infrastructure network device(s) can be mitigated by the dynamic randomization. Network parameters that can be randomized include IP addresses, application port numbers, paths data packets navigate through the network, application randomization, etc.

  19. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
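    The hit-and-run step described above can be illustrated with a toy two-variable problem. In the sketch below the near-optimal region is defined by a hypothetical objective tolerance, box bounds stand in for the linear constraints, and the slice-sampling step along the line is replaced by simple rejection of infeasible candidates; it is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-convex problem: minimize f over the box [-2, 2]^2.
f = lambda x: (x[0]**2 - 1.0)**2 + x[1]**2          # optimal value 0 at (+/-1, 0)
f_opt, tolerance = 0.0, 0.5                          # near-optimal: f(x) <= f_opt + tolerance
lo, hi = -2.0, 2.0

def in_region(x):
    return np.all(x >= lo) and np.all(x <= hi) and f(x) <= f_opt + tolerance

def hit_and_run(x, n_alternatives=1000, max_step=4.0):
    """Generate feasible near-optimal alternatives by random-direction line moves."""
    out = []
    while len(out) < n_alternatives:
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                       # random direction on the unit sphere
        t = rng.uniform(-max_step, max_step)         # random run length along the line
        candidate = x + t * d
        if in_region(candidate):                     # keep only hits inside the region
            x = candidate
            out.append(x.copy())
    return np.array(out)

alternatives = hit_and_run(np.array([1.0, 0.0]))
print(alternatives.shape, alternatives.min(axis=0), alternatives.max(axis=0))
```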

  20. Microcanonical model for interface formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucklidge, A.; Zaleski, S.

    1988-04-01

    We describe a new cellular automaton model which allows us to simulate separation of phases. The model is an extension of existing cellular automata for the Ising model, such as Q2R. It conserves particle number and presents the qualitative features of spinodal decomposition. The dynamics is deterministic and does not require random number generators. The spins exchange energy with small local reservoirs or demons. The rate of relaxation to equilibrium is investigated, and the results are compared to the Lifshitz-Slyozov theory.

  1. Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network

    NASA Technical Reports Server (NTRS)

    Kuhn, D. Richard; Kacker, Raghu; Lei, Yu

    2010-01-01

    This study compared random and t-way combinatorial inputs of a network simulator, to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or to determine modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
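    The notion of t-way coverage can be made concrete with a few lines of code. The sketch below, for a hypothetical simulator with 6 parameters and 4 values per parameter, counts how many distinct 2-way value combinations a purely random test suite happens to cover; a covering array generated by a combinatorial tool would reach 100% coverage with far fewer tests.

```python
import itertools
import random

random.seed(0)

# Hypothetical simulator configuration: 6 parameters, each with 4 possible values.
n_params, n_values, t = 6, 4, 2

def covered_tuples(tests, t):
    """Set of t-way (parameter indices, values) combinations covered by a test suite."""
    cov = set()
    for test in tests:
        for idx in itertools.combinations(range(len(test)), t):
            cov.add((idx, tuple(test[i] for i in idx)))
    return cov

total = len(list(itertools.combinations(range(n_params), t))) * n_values**t

for n_tests in (10, 30, 100):
    suite = [[random.randrange(n_values) for _ in range(n_params)] for _ in range(n_tests)]
    frac = len(covered_tuples(suite, t)) / total
    print(f"{n_tests:4d} random tests cover {frac:.1%} of all {t}-way combinations")
```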

  2. Ratio index variables or ANCOVA? Fisher's cats revisited.

    PubMed

    Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S

    2010-01-01

    Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.

  3. Random Sequence for Optimal Low-Power Laser Generated Ultrasound

    NASA Astrophysics Data System (ADS)

    Vangi, D.; Virga, A.; Gulino, M. S.

    2017-08-01

    Low-power laser-generated ultrasounds have lately been gaining importance in the research world, thanks to the possibility of investigating a mechanical component's structural integrity through a non-contact, non-destructive testing (NDT) procedure. The ultrasounds are, however, very low in amplitude, making it necessary to use pre-processing and post-processing operations on the signals to detect them. The cross-correlation technique is used in this work, meaning that a random signal must be used as laser input. For this purpose, a highly random and simple-to-create code called the T sequence, capable of enhancing the ultrasound detectability, is introduced (not previously available in the state of the art). Several important parameters which characterize the T sequence can influence the process: the number of pulses Npulses, the pulse duration δ, and the distance between pulses dpulses. A finite element (FE) model of a 3 mm steel disk has been initially developed to analytically study the longitudinal ultrasound generation mechanism and the obtainable outputs. Later, experimental tests have shown that the T sequence is highly flexible for ultrasound detection purposes, making it optimal to use high Npulses and δ but low dpulses. In the end, apart from describing all phenomena that arise in the low-power laser generation process, the results of this study are also important for setting up an effective NDT procedure using this technology.
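    The cross-correlation detection principle used above can be illustrated independently of the laser hardware. In the sketch below, a sparse random binary pulse train (a stand-in for the T sequence, whose exact construction is not reproduced here) is delayed, heavily attenuated, and buried in noise, and the delay is recovered from the cross-correlation peak.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random binary excitation code (a stand-in for the paper's T sequence).
n = 4000
code = (rng.random(n) < 0.2).astype(float)           # random pulse positions

# Received signal: delayed, heavily attenuated echo of the code buried in noise.
delay, attenuation = 250, 0.1
received = np.zeros(n)
received[delay:] = attenuation * code[:-delay]
received += rng.normal(0.0, 0.3, n)                  # noise well above the echo amplitude

# Cross-correlate with the known code; the peak lag recovers the propagation delay.
xcorr = np.correlate(received, code, mode="full")
lags = np.arange(-n + 1, n)
print("estimated delay:", lags[np.argmax(xcorr)])    # expected: 250
```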

  4. Human cerebral potentials evoked by moving dynamic random dot stereograms.

    PubMed

    Herpers, M J; Caberg, H B; Mol, J M

    1981-07-01

    In 11 normal healthy human subjects an evoked potential was elicited by moving dynamic random dot stereograms. The random dots were generated by a minicomputer. An average of each of 8 EEG channels of the subjects tested was made. The maximum of the cerebral evoked potentials thus found was localized in the central and parietal region. No response earlier than 130-150 msec after the stimulus could be demonstrated. The influence of fixation, the number of dots provided, an interocular interstimulus interval in the presentation of the dots, and lens accommodation movements on the evoked stereoptic potentials was investigated and discussed. An interocular interstimulus interval (left eye leading) in the presentation of the dots caused an increase in latency of the response much longer than the imposed interstimulus interval itself. It was shown that no accommodation was needed to perceive the depth impression, and to evoke the cerebral response with random dot stereograms. There are indications of an asymmetry between the two hemispheres in the handling of depth perception after 250 msec. The potential distribution of the evoked potentials strongly suggests that they are not generated in the occipital region.

  5. The Use and Validation of Qualitative Methods Used in Program Evaluation.

    ERIC Educational Resources Information Center

    Plucker, Frank E.

    When conducting a two-year college program review, there are several advantages to supplementing the standard quantitative research approach with qualitative measures. Qualitative research does not depend on a large number of random samples, it uses a flexible design which can be refined as the research is executed, and it generates findings in a…

  6. [Working memory and executive control: inhibitory processes in updating and random generation tasks].

    PubMed

    Macizo, Pedro; Bajo, Teresa; Soriano, Maria Felipa

    2006-02-01

    Working Memory (WM) span predicts subjects' performance in executive control tasks and, in addition, it has been related to the capacity to inhibit irrelevant information. In this paper we investigate the role of WM span in two executive tasks, focusing our attention on the inhibitory components of both tasks. High and low span participants recalled target words while rejecting irrelevant items at the same time (Experiment 1) and they generated random numbers (Experiment 2). Results showed a clear relation between WM span and performance in both tasks. In addition, analyses of intrusion errors (Experiment 1) and stereotyped responses (Experiment 2) indicated that high span individuals were able to efficiently use the inhibitory component implied in both tasks. The pattern of data provides support for the relation between WM span and executive control tasks through an inhibitory mechanism.

  7. Dynamic analyses, FPGA implementation and engineering applications of multi-butterfly chaotic attractors generated from generalised Sprott C system

    NASA Astrophysics Data System (ADS)

    Lai, Qiang; Zhao, Xiao-Wen; Rajagopal, Karthikeyan; Xu, Guanghui; Akgul, Akif; Guleryuz, Emre

    2018-01-01

    This paper considers the generation of multi-butterfly chaotic attractors from a generalised Sprott C system with multiple non-hyperbolic equilibria. The system is constructed by introducing an additional variable whose derivative has a switching function to the Sprott C system. It is numerically found that the system creates two-, three-, four-, five-butterfly attractors and any other multi-butterfly attractors. First, the dynamic analyses of multi-butterfly chaotic attractors are presented. Secondly, the field programmable gate array implementation, electronic circuit realisation and random number generator are done with the multi-butterfly chaotic attractors.

  8. Methodological reporting of randomized clinical trials in respiratory research in 2010.

    PubMed

    Lu, Yi; Yao, Qiuju; Gu, Jie; Shen, Ce

    2013-09-01

    Although randomized controlled trials (RCTs) are considered the highest level of evidence, they are also subject to bias, due to a lack of adequately reported randomization, and therefore the reporting should be as explicit as possible for readers to determine the significance of the contents. We evaluated the methodological quality of RCTs in respiratory research in high ranking clinical journals, published in 2010. We assessed the methodological quality, including generation of the allocation sequence, allocation concealment, double-blinding, sample-size calculation, intention-to-treat analysis, flow diagrams, number of medical centers involved, diseases, funding sources, types of interventions, trial registration, number of times the papers have been cited, journal impact factor, journal type, and journal endorsement of the CONSORT (Consolidated Standards of Reporting Trials) rules, in RCTs published in 12 top ranking clinical respiratory journals and 5 top ranking general medical journals. We included 176 trials, of which 93 (53%) reported adequate generation of the allocation sequence, 66 (38%) reported adequate allocation concealment, 79 (45%) were double-blind, 123 (70%) reported adequate sample-size calculation, 88 (50%) reported intention-to-treat analysis, and 122 (69%) included a flow diagram. Multivariate logistic regression analysis revealed that journal impact factor ≥ 5 was the only variable that significantly influenced adequate allocation sequence generation. Trial registration and journal impact factor ≥ 5 significantly influenced adequate allocation concealment. Medical interventions, trial registration, and journal endorsement of the CONSORT statement influenced adequate double-blinding. Publication in one of the general medical journals influenced adequate sample-size calculation. The methodological quality of RCTs in respiratory research needs improvement. Stricter enforcement of the CONSORT statement should enhance the quality of RCTs.

  9. Seating type and cognitive performance after 3 hours travel by high-speed boat in sea states 2-3.

    PubMed

    McMorris, Terry; Myers, Stephen; Dobbins, Trevor; Hall, Ben; Dyson, Rosemary

    2009-01-01

    Transit in high-speed marine craft subjects occupants to a rough ride as the boat impacts the waves. This induces high levels of physical stress, which may inhibit cognitive performance during military operations and life-saving activities. Land-based research suggests that suspension seats reduce vibration and, therefore, stress. We hypothesized that subjects using suspension seats would demonstrate better cognitive performance, lower perceptions of exertion, fatigue, and sleepiness, and lower salivary concentrations of cortisol than those using fixed seats. Subjects, naval personnel, were divided into fixed (N = 6) and suspension seat (N = 6) groups. Subjects undertook forward and backward number recall and random number generation tests pre- and post-transit (3 h in sea states 2-3). Salivary cortisol concentrations were sampled pre- (1100 h) and post-transit (1700 h) and at the same times on a control day. Post-transit perceptions of exertion, fatigue, and sleepiness were measured subjectively. The suspension seat group demonstrated better performance post-transit than the fixed seat group for forward number recall and showed a significant pre- to post-transit improvement in backward number recall. The suspension seat group reported less fatigue and sleepiness. The suspension seat group had significantly higher salivary cortisol concentrations than the fixed seat group post-transit. Regression analyses found a quadratic correlation between delta cortisol concentrations and delta random number generation scores (R2 = 0.68). Results show that the use of suspension seats during transit in high-speed marine craft may be advantageous with regard to cognitive performance.

  10. Inference from clustering with application to gene-expression microarrays.

    PubMed

    Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M

    2002-01-01

    There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
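    The mean-plus-noise evaluation loop described above can be sketched as follows, using scikit-learn's KMeans as the algorithm under test; the model means, noise levels, and replication counts are illustrative, not those used by the toolbox.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two hypothetical "expression profile" means; the model could instead be seeded from real data.
means = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
n_per_class = 50

def clustering_error(sigma, n_replicates=20):
    """Average number of points clustered inconsistently with the generating processes."""
    errors = []
    for _ in range(n_replicates):
        X = np.vstack([m + rng.normal(0.0, sigma, (n_per_class, means.shape[1])) for m in means])
        truth = np.repeat(np.arange(len(means)), n_per_class)
        labels = KMeans(n_clusters=len(means), n_init=10, random_state=0).fit_predict(X)
        # Cluster labels are arbitrary, so score under the better of the two relabelings.
        errors.append(min(np.sum(labels != truth), np.sum(labels != 1 - truth)))
    return np.mean(errors)

for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: mean clustering error = {clustering_error(sigma):.1f} / {2 * n_per_class}")
```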

  11. Super-resolving random-Gaussian apodized photon sieve.

    PubMed

    Sabatyan, Arash; Roshaninejad, Parisa

    2012-09-10

    A novel apodized photon sieve is presented in which random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The random distribution in dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities on the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima have evidently been suppressed, transmission has increased enormously, and the central maxima width is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.

  12. CNV-RF Is a Random Forest-Based Copy Number Variation Detection Method Using Next-Generation Sequencing.

    PubMed

    Onsongo, Getiria; Baughn, Linda B; Bower, Matthew; Henzler, Christine; Schomaker, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat

    2016-11-01

    Simultaneous detection of small copy number variations (CNVs) (<0.5 kb) and single-nucleotide variants in clinically significant genes is of great interest for clinical laboratories. The analytical variability in next-generation sequencing (NGS) and artifacts in coverage data because of issues with mappability along with lack of robust bioinformatics tools for CNV detection have limited the utility of targeted NGS data to identify CNVs. We describe the development and implementation of a bioinformatics algorithm, copy number variation-random forest (CNV-RF), that incorporates a machine learning component to identify CNVs from targeted NGS data. Using CNV-RF, we identified 12 of 13 deletions in samples with known CNVs, two cases with duplications, and identified novel deletions in 22 additional cases. Furthermore, no CNVs were identified among 60 genes in 14 cases with normal copy number and no CNVs were identified in another 104 patients with clinical suspicion of CNVs. All positive deletions and duplications were confirmed using a quantitative PCR method. CNV-RF also detected heterozygous deletions and duplications with a specificity of 50% across 4813 genes. The ability of CNV-RF to detect clinically relevant CNVs with a high degree of sensitivity along with confirmation using a low-cost quantitative PCR method provides a framework for providing comprehensive NGS-based CNV/single-nucleotide variant detection in a clinical molecular diagnostics laboratory. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  13. Randomly picked cosmid clones overlap the pyrB and oriC gap in the physical map of the E. coli chromosome.

    PubMed Central

    Knott, V; Rees, D J; Cheng, Z; Brownlee, G G

    1988-01-01

    Sets of overlapping cosmid clones generated by random sampling and fingerprinting methods complement data at pyrB (96.5') and oriC (84') in the published physical map of E. coli. A new cloning strategy using sheared DNA, and a low copy, inducible cosmid vector were used in order to reduce bias in libraries, in conjunction with micro-methods for preparing cosmid DNA from a large number of clones. Our results are relevant to the design of the best approach to the physical mapping of large genomes. PMID:2834694

  14. Human choice among five alternatives when reinforcers decay.

    PubMed

    Rothstein, Jacob B; Jensen, Greg; Neuringer, Allen

    2008-06-01

    Human participants played a computer game in which choices among five alternatives were concurrently reinforced according to dependent random-ratio schedules. "Dependent" indicates that choices to any of the wedges activated the random-number generators governing reinforcers on all five alternatives. Two conditions were compared. In the hold condition, once scheduled, a reinforcer - worth a constant five points - remained available until it was collected. In the decay condition, point values decreased with intervening responses, i.e., rapid collection was differentially reinforced. Slopes of matching functions were higher in the decay than hold condition. However inter-subject variability was high in both conditions.

  15. Simulation of Stochastic Processes by Coupled ODE-PDE

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A document discusses the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE (ordinary differential equations-partial differential equations) systems due to failure of the Lipschitz condition, as a new phenomenon. It is possible to exploit the special properties of ordinary differential equations (represented by an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of joint probability distribution) in order to simulate stochastic processes with the prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.

  16. Reduced randomness in quantum cryptography with sequences of qubits encoded in the same basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamoureux, L.-P.; Cerf, N. J.; Bechmann-Pasquinucci, H.

    2006-03-15

    We consider the cloning of sequences of qubits prepared in the states used in the BB84 or six-state quantum cryptography protocol, and show that the single-qubit fidelity is unaffected even if entire sequences of qubits are prepared in the same basis. This result is only valid provided that the sequences are much shorter than the total key. It is of great importance for practical quantum cryptosystems because it reduces the need for high-speed random number generation without impairing the security against finite-size cloning attacks.

  17. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation.

    PubMed

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-12-23

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based brachial plexus (BP) autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases were taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy. In this study, the optimal number of selected atlases used was six, but for definitive conclusions about the optimal number of atlases and to improve the autosegmentation accuracy for clinical use, more atlases need to be included.

  18. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the parameters of the Poisson distribution obtained from the experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution that were estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
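    A minimal version of this check, assuming synthetic counts in place of the plate-count data, is sketched below: the Poisson parameter is estimated by its maximum-likelihood estimator (the sample mean) and a likelihood-ratio (G) goodness-of-fit test is run against the fitted Poisson distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical single-cell counts after 10-fold dilution (target mean ~1 cell per aliquot).
counts = rng.poisson(lam=1.0, size=100)              # stand-in for the experimental data

# Maximum-likelihood estimate of the Poisson parameter is the sample mean.
lam_hat = counts.mean()

# Likelihood-ratio (G) goodness-of-fit test against Poisson(lam_hat), pooling the tail.
k_max = 4
observed = np.array([np.sum(counts == k) for k in range(k_max)] + [np.sum(counts >= k_max)])
probs = np.append(stats.poisson.pmf(np.arange(k_max), lam_hat),
                  stats.poisson.sf(k_max - 1, lam_hat))
expected = probs * counts.size

mask = observed > 0
G = 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))
dof = len(observed) - 1 - 1                          # categories - 1 - one fitted parameter
p_value = stats.chi2.sf(G, dof)
print(f"lambda_hat={lam_hat:.2f}, G={G:.2f}, p={p_value:.3f}")
```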

  19. Restorations in abrasion/erosion cervical lesions: 8-year results of a triple blind randomized controlled trial.

    PubMed

    Dall'Orologio, Giovanni Dondi; Lorenzi, Roberta

    2014-10-01

    A within-subject equivalence randomized controlled trial was organized to evaluate the clinical long-term success of a new 2-step etch & rinse adhesive and a new nano-filled ormocer. 50 subjects, 21 males and 29 females aged between 21 and 65, were randomized to receive 150 restorations, 100 with the new restorative material, 50 with the composite as control, placed in non-carious cervical lesions with the same bonding system. The main outcome measure was the cause of failure at 8 years. Randomization was generated from a random number table, with allocation concealment by opaque, sequentially numbered, sealed and stapled envelopes. Subjects, examiner, and analyst were blinded to group assignment. Two interim analyses were performed. Data were analyzed by ANOVA and Cox test (P < 0.05). After 8 years, 40 subjects and 120 teeth were included in the analysis of the primary outcome. There were eight failures in the experimental group and four failures in the control group. The cumulative loss rate was 7% for both restorative materials, with an annual failure rate lower than 1%, without any statistically significant difference. There were two key elements of failure: the presence of sclerotic dentin and the relationship between lesion and gingival margin.

  20. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process will induce instability in the iterative inversion regardless of whether it uses a robust limited-memory BFGS (L-BFGS) algorithm. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.

  1. Publication bias in animal research presented at the 2008 Society of Critical Care Medicine Conference.

    PubMed

    Conradi, Una; Joffe, Ari R

    2017-07-07

    To determine a direct measure of publication bias by determining subsequent full-paper publication (P) of studies reported in animal research abstracts presented at an international conference (A). We selected 100 random (using a random-number generator) A from the 2008 Society of Critical Care Medicine Conference. Using a data collection form and study manual, we recorded methodology and result variables from A. We searched PubMed and EMBASE to June 2015, and DOAJ and Google Scholar to May 2017 to screen for subsequent P. Methodology and result variables were recorded from P to determine changes in reporting from A. Predictors of P were examined using Fisher's Exact Test. 62% (95% CI 52-71%) of studies described in A were subsequently P after a median 19 [IQR 9-33.3] months from conference presentation. Reporting of studies in A was of low quality: randomized 27% (the method of randomization and allocation concealment not described), blinded 0%, sample-size calculation stated 0%, specifying the primary outcome 26%, numbers given with denominators 6%, and stating number of animals used 47%. Only being an orally presented (vs. poster presented) A (14/16 vs. 48/84, p = 0.025) predicted P. Reporting of studies in P was of poor quality: randomized 39% (the method of randomization and allocation concealment not described), likely blinded 6%, primary outcome specified 5%, sample size calculation stated 0%, numbers given with denominators 34%, and number of animals used stated 56%. Changes in reporting from A to P occurred: from non-randomized to randomized 19%, from non-blinded to blinded 6%, from negative to positive outcomes 8%, from having to not having a stated primary outcome 16%, and from non-statistically to statistically significant findings 37%. Post-hoc, using publication data, P was predicted by having positive outcomes (published 62/62, unpublished 33/38; p = 0.003), or statistically significant results (published 58/62, unpublished 20/38; p < 0.001). Only 62% (95% CI 52-71%) of animal research A are subsequently P; this was predicted by oral presentation of the A, finally having positive outcomes, and finally having statistically significant results. Publication bias is prevalent in critical care animal research.

  2. Embedded random matrix ensembles from nuclear structure and their recent applications

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.; Chavda, N. D.

    Embedded random matrix ensembles generated by random interactions (of low body rank and usually two-body) in the presence of a one-body mean field, introduced in nuclear structure physics, are now established to be indispensable in describing statistical properties of a large number of isolated finite quantum many-particle systems. Lie algebra symmetries of the interactions, as identified from nuclear shell model and the interacting boson model, led to the introduction of a variety of embedded ensembles (EEs). These ensembles with a mean field and chaos generating two-body interaction generate in three different stages, delocalization of wave functions in the Fock space of the mean-field basis states. The last stage corresponds to what one may call thermalization and complex nuclei, as seen from many shell model calculations, lie in this region. Besides briefly describing them, their recent applications to nuclear structure are presented and they are (i) nuclear level densities with interactions; (ii) orbit occupancies; (iii) neutrinoless double beta decay nuclear transition matrix elements as transition strengths. In addition, their applications are also presented briefly that go beyond nuclear structure and they are (i) fidelity, decoherence, entanglement and thermalization in isolated finite quantum systems with interactions; (ii) quantum transport in disordered networks connected by many-body interactions with centrosymmetry; (iii) semicircle to Gaussian transition in eigenvalue densities with k-body random interactions and its relation to the Sachdev-Ye-Kitaev (SYK) model for majorana fermions.

  3. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    NASA Astrophysics Data System (ADS)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids with random numbers has many weaknesses. The GenClust algorithm, which combines genetic algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster; 50% of its chromosomes are obtained through deterministic calculations and 50% from the generation of random numbers. This study modifies the GenClust algorithm so that the chromosomes used are 100% obtained through deterministic calculations. The results are performance comparisons, expressed as mean square error, of centroid determination for the K-Means method using the GenClust method, the modified GenClust method, and classic K-Means.
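    The effect of centroid seeding can be illustrated as follows. The deterministic seeding shown is a simple farthest-point heuristic standing in for GenClust's chromosome calculation (which is not reproduced here); it is compared against purely random seeding by the resulting mean square error.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.5, random_state=0)
k = 4

# Random seeding: k centroids drawn uniformly at random from the data.
rng = np.random.default_rng(1)
random_init = X[rng.choice(len(X), size=k, replace=False)]

# "Deterministic" seeding (illustrative stand-in, not GenClust itself): take the point
# farthest from the data mean, then greedily add the point farthest from all chosen centroids.
det_init = [X[np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1))]]
for _ in range(k - 1):
    d = np.min(np.linalg.norm(X[:, None, :] - np.array(det_init)[None, :, :], axis=2), axis=1)
    det_init.append(X[np.argmax(d)])
det_init = np.array(det_init)

for name, init in [("random", random_init), ("deterministic", det_init)]:
    km = KMeans(n_clusters=k, init=init, n_init=1, max_iter=300).fit(X)
    print(f"{name:13s} seeding: mean squared error = {km.inertia_ / len(X):.3f}")
```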

  4. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    PubMed

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical programming language and the Python program HeatMapWrapper [ https://doi.org/10.5281/zenodo.495163 ] for heat map generation.

  5. Configurable Cellular Automata for Pseudorandom Number Generation

    NASA Astrophysics Data System (ADS)

    Quieta, Marie Therese; Guan, Sheng-Uei

    This paper proposes a generalized structure of cellular automata (CA) — the configurable cellular automata (CoCA). With selected properties from programmable CA (PCA) and controllable CA (CCA), a new approach to cellular automata is developed. In CoCA, the cells are dynamically reconfigured at run-time via a control CA. Reconfiguration of a cell simply means varying the properties of that cell with time. Some examples of properties to be reconfigured are rule selection, boundary condition, and radius. While the objective of this paper is to propose CoCA as a new CA method, the main focus is to design a CoCA that can function as a good pseudorandom number generator (PRNG). As a PRNG, CoCA can be a suitable candidate as it can pass 17 out of 18 Diehard tests with 31 cells. CoCA PRNG's performance based on Diehard test is considered superior over other CA PRNG works. Moreover, CoCA opens new rooms for research not only in the field of random number generation, but in modeling complex systems as well.
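    A far simpler relative of CoCA, a fixed-rule elementary cellular automaton used as a bit source, illustrates how a CA register can emit pseudorandom bits; the configurable control CA, rule switching, and boundary reconfiguration of the paper are not modeled here.

```python
import numpy as np

def ca_prng_bits(n_bits, n_cells=31, rule=30, seed=1):
    """Pseudorandom bits from an elementary cellular automaton (centre-cell tap).

    Much simpler than the configurable CA in the paper: a single fixed rule,
    periodic boundaries, and one output bit per time step.
    """
    rule_table = [(rule >> i) & 1 for i in range(8)]
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n_cells)
    out = []
    for _ in range(n_bits):
        out.append(int(state[n_cells // 2]))          # tap the centre cell
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = (left << 2) | (state << 1) | right      # 3-cell neighbourhood code 0..7
        state = np.array([rule_table[i] for i in idx])
    return out

bits = ca_prng_bits(1000)
print("fraction of ones:", sum(bits) / len(bits))     # should be near 0.5
```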

  6. Seven lessons from manyfield inflation in random potentials

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David

    2018-01-01

    We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaption of the `transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, for an explicit study of models with up to 100 interacting fields supporting a period of `approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the `generic predictions' of single-field inflation can be emergent features of complex inflation models.

  7. Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data

    USGS Publications Warehouse

    Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.

    2001-01-01

    Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used is calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
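    The subsampling idea can be sketched without SAS or ARC/INFO: draw candidate locations from the error distribution of an estimated telemetry fix, look up the habitat type of each draw in a raster, and tally the proportions. The grid, error model, and habitat codes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 100 x 100 habitat raster with three habitat types (0=marsh, 1=forest, 2=field).
habitat = rng.integers(0, 3, size=(100, 100))

def habitat_use(est_xy, error_sd, n_sub=500):
    """Proportion of each habitat type in a subsample drawn from the location error distribution."""
    # Draw candidate true locations from a circular-normal error model around the estimate.
    pts = rng.normal(loc=est_xy, scale=error_sd, size=(n_sub, 2))
    rows = np.clip(pts[:, 0].astype(int), 0, habitat.shape[0] - 1)
    cols = np.clip(pts[:, 1].astype(int), 0, habitat.shape[1] - 1)
    types = habitat[rows, cols]
    return np.bincount(types, minlength=3) / n_sub

# One estimated radiotelemetry location with a 5-cell standard error.
print(habitat_use(est_xy=np.array([42.0, 57.0]), error_sd=5.0))
```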

  8. Identification of cultivars and validation of genetic relationships in Mangifera indica L. using RAPD markers.

    PubMed

    Schnell, R J; Ronning, C M; Knight, R J

    1995-02-01

    Twenty-five accessions of mango were examined for random amplified polymorphic DNA (RAPD) genetic markers with 80 10-mer random primers. Of the 80 primers screened, 33 did not amplify, 19 were monomorphic, and 28 gave reproducible, polymorphic DNA amplification patterns. Eleven primers were selected from the 28 for the study. The number of bands generated was primer- and genotype-dependent, and ranged from 1 to 10. No primer gave unique banding patterns for each of the 25 accessions; however, ten different combinations of 2 primer banding patterns produced unique fingerprints for each accession. A maternal half-sib (MHS) family was included among the 25 accessions to see if genetic relationships could be detected. RAPD data were used to generate simple matching coefficients, which were analyzed phenetically and by means of principal coordinate analysis (PCA). The MHS clustered together in both the phenetic and the PCA while the randomly selected accessions were scattered with no apparent pattern. The uses of RAPD analysis for Mangifera germ plasm classification and clonal identification are discussed.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hymel, Ross

    The Public Key (PK) FPGA software performs asymmetric authentication using the 163-bit Elliptic Curve Digital Signature Algorithm (ECDSA) on an embedded FPGA platform. A digital signature is created on user-supplied data, and communication with a host system is performed via a Serial Peripheral Interface (SPI) bus. Software includes all components necessary for signing, including custom random number generator for key creation and SHA-256 for data hashing.

  10. Use of Tabu Search in a Solver to Map Complex Networks onto Emulab Testbeds

    DTIC Science & Technology

    2007-03-01

    ... performance of assign. Option: -s <seed>, Random Number Generator Seed (varies); -P, Prune Unusable Pclasses (n/a); -H <float>, Branching ...

  11. High-Performance Single-Photon Sources via Spatial Multiplexing

    DTIC Science & Technology

    2014-01-01

    ... ingredient for tasks such as quantum cryptography, quantum repeaters, quantum teleportation, quantum computing, and truly random number generation. Recently ... Single-photon sources are desired for many potential quantum information applications. One common method to produce single photons is based on a "heralding" ...

  12. PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

    In this paper, the effect of tuning the control parameters of the Lozi chaotic map, employed as a chaotic pseudo-random number generator for the particle swarm optimization algorithm, is investigated. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
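    For reference, the Lozi map is a two-dimensional piecewise-linear chaotic map, x_{k+1} = 1 - a|x_k| + y_k, y_{k+1} = b*x_k, commonly run with a = 1.7, b = 0.5. The sketch below iterates the map and rescales the iterates into [0, 1) so they can stand in for uniform pseudo-random numbers in a PSO; the min-max rescaling is one simple choice and not necessarily the transformation used in the cited tuning study.

```python
import numpy as np

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1, burn_in=1000):
    """Chaotic Lozi-map iterates: x_{k+1} = 1 - a*|x_k| + y_k, y_{k+1} = b*x_k."""
    x, y = x0, y0
    xs = np.empty(n + burn_in)
    for k in range(n + burn_in):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs[k] = x
    return xs[burn_in:]

def lozi_uniform(n, **kwargs):
    """Map the chaotic iterates into [0, 1) by min-max scaling (a simple, assumed choice)."""
    xs = lozi_sequence(n, **kwargs)
    return (xs - xs.min()) / (xs.max() - xs.min() + 1e-12)

u = lozi_uniform(10_000)
print("mean:", round(u.mean(), 3), " all in [0,1):", bool((u >= 0).all() and (u < 1).all()))
```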

  13. Deficits in inhibitory control and conflict resolution on cognitive and motor tasks in Parkinson's disease.

    PubMed

    Obeso, Ignacio; Wilkinson, Leonora; Casabona, Enrique; Bringas, Maria Luisa; Álvarez, Mario; Álvarez, Lázaro; Pavón, Nancy; Rodríguez-Oroz, Maria-Cruz; Macías, Raúl; Obeso, Jose A; Jahanshahi, Marjan

    2011-07-01

    Recent imaging studies in healthy controls with a conditional stop signal reaction time (RT) task have implicated the subthalamic nucleus (STN) in response inhibition and the pre-supplementary motor area (pre-SMA) in conflict resolution. Parkinson's disease (PD) is characterized by striatal dopamine deficiency and overactivity of the STN and underactivation of the pre-SMA during movement. We used the conditional stop signal RT task to investigate whether PD produced similar or dissociable effects on response initiation, response inhibition and response initiation under conflict. In addition, we also examined inhibition of prepotent responses on three cognitive tasks: the Stroop, random number generation and Hayling sentence completion. PD patients were impaired on the conditional stop signal reaction time task, with response initiation both in situations with or without conflict and response inhibition all being significantly delayed, and had significantly greater difficulty in suppressing prepotent or habitual responses on the Stroop, Hayling and random number generation tasks relative to controls. These results demonstrate the existence of a generalized inhibitory deficit in PD, which suggest that PD is a disorder of inhibition as well as activation and that in situations of conflict, executive control over responses is compromised.

  14. High speed true random number generator with a new structure of coarse-tuning PDL in FPGA

    NASA Astrophysics Data System (ADS)

    Fang, Hongzhen; Wang, Pengjun; Cheng, Xu; Zhou, Keji

    2018-03-01

    A metastability-based TRNG (true random number generator) is presented in this paper, and implemented in FPGA. The metastable state of a D flip-flop is tunable through a two-stage PDL (programmable delay line). With the proposed coarse-tuning PDL structure, the TRNG core does not require extra placement and routing to ensure its entropy. Furthermore, the core needs fewer stages of coarse-tuning PDL at higher operating frequency, and thus saves more resources in FPGA. The designed TRNG achieves 25 Mbps @ 100 MHz throughput after proper post-processing, which is several times higher than previous FPGA-based TRNGs. Moreover, the robustness of the system is enhanced with the adoption of a feedback system. The quality of the designed TRNG is verified with the NIST (National Institute of Standards and Technology) statistical test suite and is also accepted by class P1 of the AIS-20/31 test suite. Project supported by the S&T Plan of Zhejiang Provincial Science and Technology Department (No. 2016C31078), the National Natural Science Foundation of China (Nos. 61574041, 61474068, 61234002), and the K.C. Wong Magna Fund in Ningbo University, China.

  15. Selecting the best design for nonstandard toxicology experiments.

    PubMed

    Webb, Jennifer M; Smucker, Byran J; Bailer, A John

    2014-10-01

    Although many experiments in environmental toxicology use standard statistical experimental designs, there are situations that arise where no such standard design is natural or applicable because of logistical constraints. For example, the layout of a laboratory may suggest that each shelf serve as a block, with the number of experimental units per shelf either greater than or less than the number of treatments in a way that precludes the use of a typical block design. In such cases, an effective and powerful alternative is to employ optimal experimental design principles, a strategy that produces designs with precise statistical estimates. Here, a D-optimal design was generated for an experiment in environmental toxicology that has 2 factors, 16 treatments, and constraints similar to those described above. After initial consideration of a randomized complete block design and an intuitive cyclic design, it was decided to compare a D-optimal design and a slightly more complicated version of the cyclic design. Simulations were conducted generating random responses under a variety of scenarios that reflect conditions motivated by a similar toxicology study, and the designs were evaluated via D-efficiency as well as by a power analysis. The cyclic design performed well compared to the D-optimal design. © 2014 SETAC.

  16. Female reproductive success variation in a Pseudotsuga menziesii seed orchard as revealed by pedigree reconstruction from a bulk seed collection.

    PubMed

    El-Kassaby, Yousry A; Funda, Tomas; Lai, Ben S K

    2010-01-01

    The impact of female reproductive success on the mating system, gene flow, and genetic diversity of the filial generation was studied using a random sample of 801 bulk seeds from a 49-clone Pseudotsuga menziesii seed orchard. We used microsatellite DNA fingerprinting and pedigree reconstruction to assign each seed's maternal and paternal parents and directly estimated clonal reproductive success, selfing rate, and the proportion of seed sired by outside pollen sources. Unlike most family-array mating system and gene flow studies conducted on natural and experimental populations, which use an equal number of seeds per maternal genotype and thus generate unbiased inferences only on male reproductive success, the random sample we used was representative of the entire seed crop and therefore provided a unique opportunity to draw unbiased inferences on both female and male reproductive success variation. Selfing rate and the number of seeds sired by outside pollen sources were found to be a function of female fertility variation. This variation also substantially and negatively affected female effective population size. Additionally, the results provided convincing evidence that the use of clone size as a proxy for fertility is questionable and requires further consideration.

  17. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.

  18. NHash: Randomized N-Gram Hashing for Distributed Generation of Validatable Unique Study Identifiers in Multicenter Research

    PubMed Central

    Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D

    2015-01-01

    Background A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (or validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. Objective This paper introduces a novel method called "Randomized N-gram Hashing (NHash)" for generating unique study identifiers in a distributed and validatable fashion in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk of identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. Methods NHash consists of 2 phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f_1, f_2, ..., f_k. The input for each function f_i has 3 components: a random number r, an integer n, and input data m. The result, f_i(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output of Phase 1 is the concatenation of the sequence f_1(r_1, n_1, m_1), f_2(r_2, n_2, m_2), ..., f_k(r_k, n_k, m_k). In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher. The result of the encryption, concatenated with the random number r, is the final NHash study identifier. Results We performed experiments using a large synthesized dataset comparing NHash with random strings, and demonstrated a negligible probability of collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute for Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy patients (SUDEP). Conclusions The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash. PMID:26554419
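
    The Methods description above maps naturally onto a few lines of code. The sketch below is an illustrative toy version of the scheme, not the CSR production implementation: the alphabet, shift value, field choices, the use of a single random number r for all n-grams, and the wrap-around n-gram extraction are all assumptions made for this demo.

```python
import random
import string

# Toy NHash-style identifier: extract random n-grams from input fields,
# concatenate them, apply a shift cipher, and append the random number so
# the identifier can be regenerated and validated.
ALPHABET = string.ascii_uppercase + string.digits

def ngram(r, n, m):
    """Return the n-gram of string m starting at position r mod |m| (wrapping)."""
    s = r % len(m)
    return (m + m)[s:s + n]          # wrap around the end of the string

def nhash(fields, n=3, shift=7, seed=None):
    rng = random.Random(seed)
    r = rng.randrange(10**6)
    intermediate = "".join(ngram(r, n, f.upper()) for f in fields)
    # Phase 2: simple shift cipher over the chosen alphabet.
    enc = "".join(ALPHABET[(ALPHABET.index(c) + shift) % len(ALPHABET)]
                  for c in intermediate if c in ALPHABET)
    return f"{enc}-{r}"

print(nhash(["SITE03", "PROTOCOL7", "SUBJECT0042"], seed=1))
```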

  19. Effect of Oral Carbohydrate Intake on Labor Progress: Randomized Controlled Trial

    PubMed Central

    Rahmani, R; Khakbazan, Z; Yavari, P; Granmayeh, M; Yavari, L

    2012-01-01

    Background Little is known about the biochemical changes that occur in women during labor and their effects on maternal and neonatal health. This study aims to explore the effectiveness of oral carbohydrate intake during labor on the duration of the active phase and other maternal and neonatal outcomes. Methods: A parallel prospective randomized controlled trial, conducted at the University Affiliated Teaching Hospital in Gonabad. Totally, 190 women were randomly assigned to an intervention (N=87) or control (N=90) group. Inclusion criteria were low-risk women with singleton cephalic presentation and cervical dilatation of 3–4 cm. Randomization was performed daily using a random number generator; odd numbers were assigned to the intervention group and even numbers to the control group. The intervention was based on the participant's preference among: 3 medium dates plus 110 ml water; 3 dates plus 110 ml light tea without sugar; or 110 ml orange juice. The protocol was run only once, but women ate and drank gradually before the second stage of labor. The control group fasted, as is routine practice. Neither participants nor caregivers or staff could be blinded to group allocation. The duration of the active phase of labor was assessed as the primary outcome measure. Results: There was a significant difference in the length of the second stage of labor (P < .05). The effect size for this variable was 0.48. There were no significant differences in other maternal and neonatal outcomes. Conclusions: Oral intake of carbohydrate was an effective method for shortening the duration of the second stage of labor in low-risk women. PMID:23304677

  20. A nanodiamond-tapered fiber system with high single-mode coupling efficiency.

    PubMed

    Schröder, Tim; Fujiwara, Masazumi; Noda, Tetsuya; Zhao, Hong-Quan; Benson, Oliver; Takeuchi, Shigeki

    2012-05-07

    We present a fiber-coupled diamond-based single photon system. Single nanodiamonds containing nitrogen vacancy defect centers are deposited on a tapered fiber of 273 nanometer in diameter providing a record-high number of 689,000 single photons per second from a defect center in a single-mode fiber. The system can be cooled to cryogenic temperatures and coupled evanescently to other nanophotonic structures, such as microresonators. The system is suitable for integrated quantum transmission experiments, two-photon interference, quantum-random-number generation and nano-magnetometry.

  1. Extracting Information about the Rotator Cuff from Magnetic Resonance Images Using Deterministic and Random Techniques

    PubMed Central

    De Los Ríos, F. A.; Paluszny, M.

    2015-01-01

    We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information from the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower than that of deterministic and other standard statistical techniques. PMID:25650281

  2. RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis.

    PubMed

    Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab

    2012-01-01

    RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate DNA sequences, or the user can upload sequences of interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts the DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages), and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available. http://www.cemb.edu.pk/sw.html RDNAnalyzer - Random DNA Analyser, GUI - Graphical user interface, XAML - Extensible Application Markup Language.

  3. Random-access algorithms for multiuser computer communication networks. Doctoral thesis, 1 September 1986-31 August 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papantoni-Kazakos, P.; Paterakis, M.

    1988-07-01

    For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels. The noise effect appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window Random-Access Algorithms (RAAs) are considered next. These algorithms constitute an important subclass of Multiple-Access Algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.

  4. Ground States of Random Spanning Trees on a D-Wave 2X

    NASA Astrophysics Data System (ADS)

    Hall, J. S.; Hobl, L.; Novotny, M. A.; Michielsen, Kristel

    The performances of two D-Wave 2 machines (476 and 496 qubits) and of a 1097-qubit D-Wave 2X were investigated. Each chip has a Chimera interaction graph G. Problem input consists of values for the fields h_j and for the two-qubit interactions J_{i,j} of an Ising spin-glass problem formulated on G. Output is returned in terms of a spin configuration {s_j}, with s_j = ±1. We generated random spanning trees (RSTs) uniformly distributed over all spanning trees of G. On the 476-qubit D-Wave 2, RSTs were generated on the full chip with J_{i,j} = -1 and h_j = 0 and solved one thousand times. The distribution of solution energies and the average magnetization of each qubit were determined. On both the 476- and 1097-qubit machines, four identical spanning trees were generated on each quadrant of the chip. The statistical independence of these regions was investigated. In another study, on the D-Wave 2X, one hundred RSTs with random J_{i,j} ∈ {-1, 1} and h_j = 0 were generated on the full chip. Each RST problem was solved one hundred times and the number of times the ground-state energy was found was recorded. This procedure was repeated for square subgraphs, with dimensions ranging from 7×7 to 11×11. Supported in part by NSF Grants DGE-0947419 and DMR-1206233. D-Wave time provided by D-Wave Systems and by the USRA Quantum Artificial Intelligence Laboratory Research Opportunity.
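
    Spanning trees uniformly distributed over all spanning trees of a graph, as used above, can be sampled with Wilson's loop-erased random-walk algorithm. The sketch below is a generic implementation on a small grid graph, standing in for the Chimera topology; the abstract does not state which sampling algorithm was actually used.

```python
import random

# Wilson's algorithm: random walks with loop erasure yield a spanning tree
# drawn uniformly from all spanning trees of the graph.
def wilson_rst(nodes, neighbors, rng=random):
    nodes = list(nodes)
    root = rng.choice(nodes)
    in_tree = {root}
    parent = {}
    for start in nodes:
        u = start
        while u not in in_tree:              # random walk, remembering last exit
            parent[u] = rng.choice(neighbors[u])
            u = parent[u]
        u = start
        while u not in in_tree:              # commit the loop-erased path
            in_tree.add(u)
            u = parent[u]
    return [(u, parent[u]) for u in nodes if u != root]

# 4x4 grid graph as a stand-in for a hardware interaction graph.
N = 4
nodes = [(i, j) for i in range(N) for j in range(N)]
nbrs = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < N and 0 <= j + dj < N] for i, j in nodes}
tree = wilson_rst(nodes, nbrs)
print(len(tree))   # a spanning tree of an n-node graph has n - 1 edges
```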

  5. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even the primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  6. Effect of a cash transfer programme for schooling on prevalence of HIV and herpes simplex type 2 in Malawi: a cluster randomised trial.

    PubMed

    Baird, Sarah J; Garfein, Richard S; McIntosh, Craig T; Ozler, Berk

    2012-04-07

    Lack of education and an economic dependence on men are often suggested as important risk factors for HIV infection in women. We assessed the efficacy of a cash transfer programme to reduce the risk of sexually transmitted infections in young women. In this cluster randomised trial, never-married women aged 13-22 years were recruited from 176 enumeration areas in the Zomba district of Malawi and randomly assigned with computer-generated random numbers by enumeration area (1:1) to receive cash payments (intervention group) or nothing (control group). Intervention enumeration areas were further randomly assigned with computer-generated random numbers to conditional (school attendance required to receive payment) and unconditional (no requirements to receive payment) groups. Participants in both intervention groups were randomly assigned by a lottery to receive monthly payments ranging from US$1 to $5, while their parents were independently assigned with computer-generated random numbers to receive $4-10. Behavioural risk assessments were done at baseline and 12 months; serology was tested at 18 months. Participants were not masked to treatment status but counsellors doing the serologic testing were. The primary outcomes were prevalence of HIV and herpes simplex virus 2 (HSV-2) at 18 months and were assessed by intention-to-treat analyses. The trial is registered, number NCT01333826. 88 enumeration areas were assigned to receive the intervention and 88 as controls. For the 1289 individuals enrolled in school at baseline with complete interview and biomarker data, weighted HIV prevalence at 18 month follow-up was 1·2% (seven of 490 participants) in the combined intervention group versus 3·0% (17 of 799 participants) in the control group (adjusted odds ratio [OR] 0·36, 95% CI 0·14-0·91); weighted HSV-2 prevalence was 0·7% (five of 488 participants) versus 3·0% (27 of 796 participants; adjusted OR 0·24, 0·09-0·65). In the intervention group, we noted no difference between conditional versus unconditional intervention groups for weighted HIV prevalence (3/235 [1%] vs 4/255 [2%]) or weighted HSV-2 prevalence (4/233 [1%] vs 1/255 [<1%]). For individuals who had already dropped out of school at baseline, we detected no significant difference between intervention and control groups for weighted HIV prevalence (23/210 [10%] vs 17/207 [8%]) or weighted HSV-2 prevalence (17/211 [8%] vs 17/208 [8%]). Cash transfer programmes can reduce HIV and HSV-2 infections in adolescent schoolgirls in low-income settings. Structural interventions that do not directly target sexual behaviour change can be important components of HIV prevention strategies. Global Development Network, Bill & Melinda Gates Foundation, National Bureau of Economic Research Africa Project, World Bank's Research Support Budget, and several World Bank trust funds (Gender Action Plan, Knowledge for Change Program, and Spanish Impact Evaluation fund). Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Complex network structure of musical compositions: Algorithmic generation of appealing music

    NASA Astrophysics Data System (ADS)

    Liu, Xiao Fan; Tse, Chi K.; Small, Michael

    2010-01-01

    In this paper we construct networks for music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurring connections. We analyze classical music from Bach, Mozart, Chopin, as well as other types of music such as Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. We conjecture that preserving the universal network properties is a necessary step in artificial composition of music. Power-law exponents of node degree, node strength and/or edge weight distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be composed artificially using a controlled random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. By generating a large number of compositions, we find that this algorithm generates music which has the necessary qualities to be subjectively judged as appealing.
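
    To make the composition step concrete, the following sketch performs a weighted random walk over a tiny, hand-made note-transition network. The network, weights, and note names are illustrative placeholders; the paper builds its networks from real compositions and uses a richer set of selection rules.

```python
import random

# Weighted random walk over a note-transition network: the next note is
# chosen with probability proportional to the outgoing edge weight.
transitions = {
    "C4": {"E4": 3, "G4": 2, "C4": 1},
    "E4": {"G4": 3, "C4": 2},
    "G4": {"C4": 4, "E4": 1, "C5": 1},
    "C5": {"G4": 2, "E4": 1},
}

def compose(length=16, seed=None):
    rng = random.Random(seed)
    melody = [rng.choice(list(transitions))]          # random starting note
    for _ in range(length - 1):
        nxt = transitions[melody[-1]]
        melody.append(rng.choices(list(nxt), weights=list(nxt.values()), k=1)[0])
    return melody

print(" ".join(compose(seed=7)))
```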

  8. A fast ergodic algorithm for generating ensembles of equilateral random polygons

    NASA Astrophysics Data System (ADS)

    Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.

    2009-03-01

    Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a substantial improvement over existing generation methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.

  9. Parameterized reduced order models from a single mesh using hyper-dual numbers

    NASA Astrophysics Data System (ADS)

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to only generate a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which largely are neglected in the literature, are accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based off of a single mesh is expected to reduce dramatically the necessary time to analyze multiple realizations of a component's possible geometry.
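
    As a concrete illustration of the derivative machinery mentioned above, the snippet below implements a bare-bones hyper-dual number type and uses it to obtain first and second derivatives of a scalar function exactly, with no finite-difference step-size error. It is a generic sketch of the arithmetic, not the authors' implementation, and the example function is arbitrary.

```python
# Hyper-dual numbers: a + b*e1 + c*e2 + d*e1e2 with e1^2 = e2^2 = 0 but
# e1*e2 != 0. Evaluating f(x + e1 + e2) returns f, f' and f'' exactly.
class HyperDual:
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

    __rmul__ = __mul__

def f(x):                  # example function f(x) = x^3 + 2x
    return x * x * x + 2 * x

y = f(HyperDual(2.0, 1.0, 1.0, 0.0))   # unit perturbations along e1 and e2
print(y.a, y.b, y.d)                   # f(2)=12, f'(2)=14, f''(2)=12
```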

  10. Randomized trial for answers to clinical questions: evaluating a pre-appraised versus a MEDLINE search protocol.

    PubMed

    Patel, Manesh R; Schardt, Connie M; Sanders, Linda L; Keitz, Sheri A

    2006-10-01

    The paper compares the speed, validity, and applicability of two different protocols for searching the primary medical literature. A randomized trial involving medicine residents was performed. An inpatient general medicine rotation was used. Thirty-two internal medicine residents were block randomized into four groups of eight. Success rate of each search protocol was measured by perceived search time, number of questions answered, and proportion of articles that were applicable and valid. Residents randomized to the MEDLINE-first (protocol A) group searched 120 questions, and residents randomized to the MEDLINE-last (protocol B) searched 133 questions. In protocol A, 104 answers (86.7%) and, in protocol B, 117 answers (88%) were found to clinical questions. In protocol A, residents reported that 26 (25.2%) of the answers were obtained quickly or rated as "fast" (<5 minutes) as opposed to 55 (51.9%) in protocol B, (P = 0.0004). A subset of questions and articles (n = 79) were reviewed by faculty who found that both protocols identified similar numbers of answer articles that addressed the questions and were felt to be valid using critical appraisal criteria. For resident-generated clinical questions, both protocols produced a similarly high percentage of applicable and valid articles. The MEDLINE-last search protocol was perceived to be faster. However, in the MEDLINE-last protocol, a significant portion of questions (23%) still required searching MEDLINE to find an answer.

  11. Home-based neurologic music therapy for arm hemiparesis following stroke: results from a pilot, feasibility randomized controlled trial.

    PubMed

    Street, Alexander J; Magee, Wendy L; Bateman, Andrew; Parker, Michael; Odell-Miller, Helen; Fachner, Jorg

    2018-01-01

    To assess the feasibility of a randomized controlled trial to evaluate music therapy as a home-based intervention for arm hemiparesis in stroke. A pilot feasibility randomized controlled trial, with cross-over design. Randomization was performed by a statistician using computer-generated random numbers concealed in opaque envelopes. Participants' homes across Cambridgeshire, UK. Eleven people with stroke and arm hemiparesis, 3-60 months post stroke, following discharge from community rehabilitation. Each participant engaged in therapeutic instrumental music performance in 12 individual clinical contacts, twice weekly for six weeks. Feasibility was estimated by recruitment from three community stroke teams over a 12-month period, attrition rates, completion of treatment and successful data collection. Structured interviews were conducted pre and post intervention to establish participant tolerance and preference. Action Research Arm Test and Nine-hole Peg Test data were collected at weeks 1, 6, 9, 15 and 18, pre and post intervention by a blinded assessor. A total of 11 of 14 invited participants were recruited (intervention n = 6, waitlist n = 5). In total, 10 completed treatment and data collection. It cannot be concluded whether a larger trial would be feasible, as data regarding the number of eligible patients screened were unavailable. Adherence to treatment, retention and interview responses might suggest that the intervention was motivating for participants. ClinicalTrials.gov identifier NCT 02310438.

  12. n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates nD Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
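
    For orientation, the snippet below iterates the classical 2D Arnold cat map, the simplest member of the family generalized here, and checks that its matrix has determinant 1 (the property that makes Cat matrices volume-preserving and invertible modulo N). It is a textbook illustration, not the paper's Laplace-expansion construction of nD matrices.

```python
import numpy as np

# Classical 2D cat map matrix; det(A) = 1.
A = np.array([[1, 1],
              [1, 2]])
print(round(np.linalg.det(A)))             # -> 1

def cat_map_sequence(x0, y0, n_iter=10, modulus=256):
    """Iterate the integer cat map on a modulus x modulus lattice."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_iter):
        x, y = (x + y) % modulus, (x + 2 * y) % modulus
        pts.append((x, y))
    return pts

print(cat_map_sequence(7, 31, n_iter=5))
```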

  13. Recombination of polynucleotide sequences using random or defined primers

    DOEpatents

    Arnold, Frances H.; Shao, Zhixin; Affholter, Joseph A.; Zhao, Huimin H; Giver, Lorraine J.

    2000-01-01

    A method for in vitro mutagenesis and recombination of polynucleotide sequences based on polymerase-catalyzed extension of primer oligonucleotides is disclosed. The method involves priming template polynucleotide(s) with random-sequences or defined-sequence primers to generate a pool of short DNA fragments with a low level of point mutations. The DNA fragments are subjected to denaturization followed by annealing and further enzyme-catalyzed DNA polymerization. This procedure is repeated a sufficient number of times to produce full-length genes which comprise mutants of the original template polynucleotides. These genes can be further amplified by the polymerase chain reaction and cloned into a vector for expression of the encoded proteins.

  14. Recombination of polynucleotide sequences using random or defined primers

    DOEpatents

    Arnold, Frances H.; Shao, Zhixin; Affholter, Joseph A.; Zhao, Huimin; Giver, Lorraine J.

    2001-01-01

    A method for in vitro mutagenesis and recombination of polynucleotide sequences based on polymerase-catalyzed extension of primer oligonucleotides is disclosed. The method involves priming template polynucleotide(s) with random-sequences or defined-sequence primers to generate a pool of short DNA fragments with a low level of point mutations. The DNA fragments are subjected to denaturization followed by annealing and further enzyme-catalyzed DNA polymerization. This procedure is repeated a sufficient number of times to produce full-length genes which comprise mutants of the original template polynucleotides. These genes can be further amplified by the polymerase chain reaction and cloned into a vector for expression of the encoded proteins.

  15. Testing, Selection, and Implementation of Random Number Generators

    DTIC Science & Technology

    2008-07-01

    Complexity and Lempel-Ziv Compression tests. This causes concern for cryptographic use but is not relevant for our applications. In fact, the features of... Linear Complexity, Lempel-Ziv Compression, and Matrix Rank test failures excluded. The Mersenne Twister is widely accepted by the community; in fact...

  16. Nonquadratic Variation of the Blum Blum Shub Pseudorandom Number Generator

    DTIC Science & Technology

    2016-09-01

    Cryptography is essential for secure online communications. Many different types of ciphers are implemented in modern-day cryptography, but they all have one common factor: all ciphers require a source of randomness, which makes them unpredictable. One such source of this...
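
    For context, the sketch below is the textbook (quadratic) Blum Blum Shub generator: x_{i+1} = x_i^2 mod M with M = p*q and p ≡ q ≡ 3 (mod 4), outputting the least-significant bit of each state. The small primes are for illustration only, since real use requires large secret primes, and the thesis's nonquadratic variation is not reproduced here.

```python
# Textbook Blum Blum Shub bit generator (toy parameters).
def bbs_bits(seed, p=10007, q=10039, n_bits=16):
    assert p % 4 == 3 and q % 4 == 3
    m = p * q
    x = (seed * seed) % m          # make the seed a quadratic residue mod m
    bits = []
    for _ in range(n_bits):
        x = (x * x) % m
        bits.append(x & 1)         # output the least-significant bit
    return bits

print(bbs_bits(seed=123456))
```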

  17. Jet meandering by a foil pitching in quiescent fluid

    NASA Astrophysics Data System (ADS)

    Shinde, Sachin Y.; Arakeri, Jaywant H.

    2013-04-01

    The flow produced by a rigid symmetric NACA0015 airfoil purely pitching at a fixed location in quiescent fluid (the limiting case of infinite Strouhal number) is studied using visualizations and particle image velocimetry. A weak jet is generated whose inclination changes continually with time. This meandering is observed to be random and independent of the initial conditions, over a wide range of pitching parameters.

  18. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
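
    A minimal sketch of the surrogate-modelling workflow described above: a random forest is trained on parameter vectors from a mixed low-/high-resolution ensemble, with resolution encoded as an extra feature, and then queried at high resolution. The toy data-generating function, feature count, and sample sizes are assumptions for the demo, not the climate-model ensemble itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_lo, n_hi = 400, 40                       # many cheap low-res runs, few high-res
X_lo = rng.uniform(size=(n_lo, 5))
X_hi = rng.uniform(size=(n_hi, 5))

def toy_model(X, high_res):
    bias = 0.3 if high_res else 0.0        # resolution-dependent offset
    return X[:, 0] * 2 + np.sin(3 * X[:, 1]) + bias + rng.normal(0, 0.05, len(X))

# Resolution enters as an extra feature so one forest covers both ensembles.
X = np.vstack([np.column_stack([X_lo, np.zeros(n_lo)]),
               np.column_stack([X_hi, np.ones(n_hi)])])
y = np.concatenate([toy_model(X_lo, False), toy_model(X_hi, True)])

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
X_new = np.column_stack([rng.uniform(size=(5, 5)), np.ones(5)])   # high-res queries
print(forest.predict(X_new))
```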

  19. Requirements for a loophole-free photonic Bell test using imperfect setting generators

    NASA Astrophysics Data System (ADS)

    Kofler, Johannes; Giustina, Marissa; Larsson, Jan-Åke; Mitchell, Morgan W.

    2016-03-01

    Experimental violations of Bell inequalities are in general vulnerable to so-called loopholes. In this work, we analyze the characteristics of a loophole-free Bell test with photons, closing simultaneously the locality, freedom-of-choice, fair-sampling (i.e., detection), coincidence-time, and memory loopholes. We pay special attention to the effect of excess predictability in the setting choices due to nonideal random-number generators. We discuss necessary adaptations of the Clauser-Horne and Eberhard inequality when using such imperfect devices and—using Hoeffding's inequality and Doob's optional stopping theorem—the statistical analysis in such Bell tests.

  20. Image encryption using random sequence generated from generalized information domain

    NASA Astrophysics Data System (ADS)

    Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu

    2016-05-01

    A novel image encryption method based on the random sequence generated from the generalized information domain and permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image while random sequences are treated as keystreams. A new factor called drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method an approximately one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio and extensive analysis demonstrates that the new encryption scheme has superior security.
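
    The permutation-diffusion architecture mentioned above can be illustrated with a few lines of NumPy: a keyed random sequence is argsorted into a P-box that shuffles pixel positions, and a keystream is XOR-ed with the shuffled pixels. The seeded NumPy generator stands in for the paper's generalized-information-domain sequence, so this is a generic sketch rather than the proposed scheme itself.

```python
import numpy as np

def encrypt(img, key=2024):
    rng = np.random.default_rng(key)
    flat = img.flatten()
    perm = np.argsort(rng.random(flat.size))                    # P-box (permutation)
    keystream = rng.integers(0, 256, flat.size, dtype=np.uint8)
    return (flat[perm] ^ keystream).reshape(img.shape)          # then diffusion

def decrypt(cipher, key=2024):
    rng = np.random.default_rng(key)                            # re-derive from key
    perm = np.argsort(rng.random(cipher.size))
    keystream = rng.integers(0, 256, cipher.size, dtype=np.uint8)
    flat = cipher.flatten() ^ keystream
    plain = np.empty_like(flat)
    plain[perm] = flat                                          # undo the permutation
    return plain.reshape(cipher.shape)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(decrypt(cipher), img)
print(cipher)
```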

  1. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning

    PubMed Central

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, the differential evolution algorithm has four tuning parameters, namely population size, differential weight, crossover rate, and generation number. These tuning parameters must be set, together with a user-defined weighting between path cost and computational cost. However, the optimum settings of these tuning parameters vary according to the application. Instead of trial and error, this paper presents a method for optimizing the tuning parameters of the differential evolution algorithm for UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover rate, and generation number. The developed algorithm enables the user to simply define the desired weighting between path and computational cost and converges within the minimum number of generations required for that user requirement. In conclusion, the proposed tuning-parameter optimization for differential evolution in UAV path planning improves both the final output path and the computational cost. PMID:26943630
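
    The following sketch is a bare-bones DE/rand/1/bin loop exposing the four tuning parameters discussed above (population size NP, differential weight F, crossover rate CR, and generation count). The sphere function stands in for a UAV path cost; the paper's path/computation-cost weighting and its parameter-optimization layer are not modelled here.

```python
import random

def differential_evolution(cost, dim, bounds, NP=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(NP)]
    for _ in range(generations):
        for i in range(NP):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)                 # guarantee one mutated gene
            trial = [a[j] + F * (b[j] - c[j]) if (rng.random() < CR or j == j_rand)
                     else pop[i][j] for j in range(dim)]
            if cost(trial) < cost(pop[i]):              # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

sphere = lambda x: sum(v * v for v in x)                # stand-in objective
best = differential_evolution(sphere, dim=4, bounds=(-5, 5))
print(best, sphere(best))
```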

  2. Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, V. A.

    2008-01-01

    There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origins of downward propagating leaders and a lognormal distribution to generate the corresponding return-stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for N years with an assumed ground flash density, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
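
    A compact Monte Carlo sketch of the attachment model described above: leader origins are drawn uniformly over a square region, return-stroke peak currents from a lognormal distribution, and a leader is counted against an object of interest when the object falls within the leader's striking distance. The striking-distance law r = 10*I**0.65 (metres, kA) and the lognormal parameters are commonly used assumptions rather than values taken from this paper, and height effects and competition with ground attachment are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
n_flashes = 100_000
side = 1000.0                                    # m, simulated ground area
xy = rng.uniform(0, side, size=(n_flashes, 2))   # leader origins (plan view)
peak_kA = rng.lognormal(mean=np.log(31.0), sigma=0.66, size=n_flashes)
striking_dist = 10.0 * peak_kA**0.65             # assumed electrogeometric law

tower = np.array([500.0, 500.0])                 # "exposed" object of interest
horiz = np.linalg.norm(xy - tower, axis=1)
hits = horiz <= striking_dist                    # object within striking distance
print(f"fraction of flashes attaching near the object: {hits.mean():.4f}")
print(f"median peak current of attaching strokes: {np.median(peak_kA[hits]):.1f} kA")
```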

  3. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficient entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms the RNC which uses the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  4. Datasets for supplier selection and order allocation with green criteria, all-unit quantity discounts and varying number of suppliers.

    PubMed

    Hamdan, Sadeque; Cheaitou, Ali

    2017-08-01

    This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discount and varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly with varying problem sizes and then used in the optimization problem to obtain the corresponding optimal outputs and computation times.

  5. Effects of ion channel noise on neural circuits: an application to the respiratory pattern generator to investigate breathing variability.

    PubMed

    Yu, Haitao; Dhingra, Rishi R; Dick, Thomas E; Galán, Roberto F

    2017-01-01

    Neural activity generally displays irregular firing patterns even in circuits with apparently regular outputs, such as motor pattern generators, in which the output frequency fluctuates randomly around a mean value. This "circuit noise" is inherited from the random firing of single neurons, which emerges from stochastic ion channel gating (channel noise), spontaneous neurotransmitter release, and its diffusion and binding to synaptic receptors. Here we demonstrate how to expand conductance-based network models that are originally deterministic to include realistic, physiological noise, focusing on stochastic ion channel gating. We illustrate this procedure with a well-established conductance-based model of the respiratory pattern generator, which allows us to investigate how channel noise affects neural dynamics at the circuit level and, in particular, to understand the relationship between the respiratory pattern and its breath-to-breath variability. We show that as the channel number increases, the duration of inspiration and expiration varies, and so does the coefficient of variation of the breath-to-breath interval, which attains a minimum when the mean duration of expiration slightly exceeds that of inspiration. For small channel numbers, the variability of the expiratory phase dominates over that of the inspiratory phase, and vice versa for large channel numbers. Among the four different cell types in the respiratory pattern generator, pacemaker cells exhibit the highest sensitivity to channel noise. The model shows that suppressing input from the pons leads to longer inspiratory phases, a reduction in breathing frequency, and larger breath-to-breath variability, whereas enhanced input from the raphe nucleus increases breathing frequency without changing its pattern. A major source of noise in neuronal circuits is the "flickering" of ion currents passing through the neurons' membranes (channel noise), which cannot be suppressed experimentally. Computational simulations are therefore the best way to investigate the effects of this physiological noise by manipulating its level at will. We investigate the role of noise in the respiratory pattern generator and show that endogenous, breath-to-breath variability is tightly linked to the respiratory pattern. Copyright © 2017 the American Physiological Society.

  6. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    PubMed

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic property of the optoelectronic random bit generator (RBG) based on laser chaos is experimentally analyzed from two aspects: the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, and they can pass the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Both results give mathematically provable evidence that the ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
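
    The averages discussed above can be checked numerically with a short script: random mixed states are sampled from the induced measure by partial-tracing Haar-random bipartite pure states, and the relative entropy of coherence C(rho) = S(diag(rho)) - S(rho) is averaged. The dimensions and sample sizes below are small illustrative choices, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mixed_state(d, d_env):
    """Partial trace (over the environment) of a Haar-random bipartite pure state."""
    psi = rng.normal(size=(d, d_env)) + 1j * rng.normal(size=(d, d_env))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T                     # d x d reduced density matrix

def entropy(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def rel_entropy_coherence(rho):
    return entropy(np.real(np.diag(rho))) - entropy(np.linalg.eigvalsh(rho))

for d in (2, 4, 8):
    samples = [rel_entropy_coherence(random_mixed_state(d, d)) for _ in range(2000)]
    print(d, round(float(np.mean(samples)), 4))
```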

  8. Application of Stochastic Labeling with Random-Sequence Barcodes for Simultaneous Quantification and Sequencing of Environmental 16S rRNA Genes.

    PubMed

    Hoshino, Tatsuhiko; Inagaki, Fumio

    2017-01-01

    Next-generation sequencing (NGS) is a powerful tool for analyzing environmental DNA and provides a comprehensive molecular view of microbial communities. For obtaining the copy number of particular sequences in the NGS library, however, additional quantitative analysis such as quantitative PCR (qPCR) or digital PCR (dPCR) is required. Furthermore, the number of sequences in a sequence library does not always reflect the original copy number of a target gene because of biases caused by PCR amplification, making it difficult to convert the proportion of particular sequences in the NGS library to the copy number using the mass of input DNA. To address this issue, we applied a stochastic labeling approach with random-tag sequences and developed an NGS-based quantification protocol, which enables simultaneous sequencing and quantification of the targeted DNA. This quantitative sequencing (qSeq) is initiated from single-primer extension (SPE) using a primer with a random tag adjacent to the 5' end of the target-specific sequence. During SPE, each DNA molecule is stochastically labeled with the random tag. Subsequently, first-round PCR is conducted, specifically targeting the SPE product, followed by second-round PCR to index for NGS. The number of random tags is determined only during the SPE step and is therefore not affected by the two rounds of PCR that may introduce amplification biases. In the case of 16S rRNA genes, after NGS sequencing and taxonomic classification, the absolute copy number of a target phylotype's 16S rRNA gene can be estimated by Poisson statistics by counting the random tags incorporated at the end of the sequence. To test the feasibility of this approach, the 16S rRNA gene of Sulfolobus tokodaii was subjected to qSeq, which resulted in accurate quantification of 5.0 × 10^3 to 5.0 × 10^4 copies of the 16S rRNA gene. Furthermore, qSeq was applied to mock microbial communities and environmental samples, and the results were comparable to those obtained using digital PCR and relative abundance based on a standard sequence library. We demonstrated that the qSeq protocol proposed here is advantageous for providing less-biased absolute copy numbers of each target DNA along with NGS sequencing at one time. With this new experimental scheme in microbial ecology, microbial community compositions can be explored in a more quantitative manner, thus expanding our knowledge of microbial ecosystems in natural environments.
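
    The Poisson estimation step can be illustrated with simple occupancy statistics: if m molecules each pick one of N tags uniformly at random, the expected number of distinct tags is N(1 - exp(-m/N)), so m can be estimated from the observed distinct-tag count k as m_hat = -N*ln(1 - k/N). The tag-space size and molecule count below are illustrative, not the published assay's parameters.

```python
import math
import random

def estimate_molecules(k_distinct, n_tags):
    """Occupancy/Poisson estimate of the number of labeled molecules."""
    return -n_tags * math.log(1.0 - k_distinct / n_tags)

rng = random.Random(42)
n_tags, true_m = 65_536, 20_000
tags = {rng.randrange(n_tags) for _ in range(true_m)}   # stochastic labeling
print("distinct tags:", len(tags))
print("estimated molecules:", round(estimate_molecules(len(tags), n_tags)))
print("true molecules:", true_m)
```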

  9. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  10. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  11. Population Fluctuation and Altitudinal Distribution of Tetraleurodes perseae (Nakahara) (Hemiptera: Aleyrodidae) in Avocado (Lauraceae) in Morelos, Mexico.

    PubMed

    García-Palacios, Daniel; Bautista-Martínez, Néstor; Lagunes-Tejeda, Ángel; Carrillo-Sánchez, José Luis; Nieto-Ángel, Daniel; García-Gutiérrez, Cipriano

    2016-01-01

    Although whiteflies Tetraleurodes perseae (Nakahara) (Hemiptera: Aleyrodidae) are considered a secondary pest of avocado crops, their presence and the damages that they cause can decrease crop vigor and affect production. The objective of the present work was to determine the population fluctuation and altitudinal distribution of the T. perseae Nakahara whitefly in avocado trees, as well as to determine the number of possible generations in one year. The study was done in three orchards in Morelos state, located at different altitudes, from February 2014 to April 2015. Samplings were done every 21 days from 10 randomly chosen trees in each orchard. The samples were taken randomly from the middle stratus (1.6 m in height) of each tree; in buds or young leaves for the number of adults and leaves only for nymphs. Additionally, two yellow traps (7 × 14 cm) with glue were placed in each tree for adult samplings. Data were collected regarding vegetative budding, rainfall, relative humidity, and temperature. T. perseae was present in all three sampled orchards, with a greater presence in the lowest orchard, during the whole study period. In the orchard with the lowest altitudinal gradient (1,736 masl), 11 whitefly generations developed; 10 generations developed in the medium gradient orchard (1,934 masl); and 8 generations developed in the highest orchard (2,230 masl). The adults showed a positive relationship with regard to vegetative buds, while the nymphs had a negative relationship with regard to relative humidity. The rest of the parameters showed diverse effects on the species depending on the altitude of the orchard. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America.

  12. Population Fluctuation and Altitudinal Distribution of Tetraleurodes perseae (Nakahara) (Hemiptera: Aleyrodidae) in Avocado (Lauraceae) in Morelos, Mexico

    PubMed Central

    García-Palacios, Daniel; Bautista-Martínez, Néstor; Lagunes-Tejeda, Ángel; Carrillo-Sánchez, José Luis; Nieto-Ángel, Daniel; García-Gutiérrez, Cipriano

    2016-01-01

    Although whiteflies Tetraleurodes perseae (Nakahara) (Hemiptera: Aleyrodidae) are considered a secondary pest of avocado crops, their presence and the damages that they cause can decrease crop vigor and affect production. The objective of the present work was to determine the population fluctuation and altitudinal distribution of the T. perseae Nakahara whitefly in avocado trees, as well as to determine the number of possible generations in one year. The study was done in three orchards in Morelos state, located at different altitudes, from February 2014 to April 2015. Samplings were done every 21 days from 10 randomly chosen trees in each orchard. The samples were taken randomly from the middle stratus (1.6 m in height) of each tree; in buds or young leaves for the number of adults and leaves only for nymphs. Additionally, two yellow traps (7 × 14 cm) with glue were placed in each tree for adult samplings. Data were collected regarding vegetative budding, rainfall, relative humidity, and temperature. T. perseae was present in all three sampled orchards, with a greater presence in the lowest orchard, during the whole study period. In the orchard with the lowest altitudinal gradient (1,736 masl), 11 whitefly generations developed; 10 generations developed in the medium gradient orchard (1,934 masl); and 8 generations developed in the highest orchard (2,230 masl). The adults showed a positive relationship with regard to vegetative buds, while the nymphs had a negative relationship with regard to relative humidity. The rest of the parameters showed diverse effects on the species depending on the altitude of the orchard. PMID:27658809

  13. Dynamic Loads Generation for Multi-Point Vibration Excitation Problems

    NASA Technical Reports Server (NTRS)

    Shen, Lawrence

    2011-01-01

    A random-force method has been developed to predict dynamic loads produced by rocket-engine random vibrations for new rocket-engine designs. The method develops random forces at multiple excitation points based on random vibration environments scaled from accelerometer data obtained during hot-fire tests of existing rocket engines. This random-force method applies random forces to the model and creates expected dynamic response in a manner that simulates the way the operating engine applies self-generated random vibration forces (random pressure acting on an area) with the resulting responses that we measure with accelerometers. This innovation includes the methodology (implementation sequence), the computer code, two methods to generate the random-force vibration spectra, and two methods to reduce some of the inherent conservatism in the dynamic loads. This methodology would be implemented to generate the random-force spectra at excitation nodes without requiring the use of artificial boundary conditions in a finite element model. More accurate random dynamic loads than those predicted by current industry methods can then be generated using the random force spectra. The scaling method used to develop the initial power spectral density (PSD) environments for deriving the random forces for the rocket engine case is based on the Barrett Criteria developed at Marshall Space Flight Center in 1963. This invention approach can be applied in the aerospace, automotive, and other industries to obtain reliable dynamic loads and responses from a finite element model for any structure subject to multipoint random vibration excitations.

  14. Research on Aero-Thermodynamic Distortion Induced Structural Dynamic Response of Multistage Compressor Blading

    DTIC Science & Technology

    1992-03-01

    [OCR-garbled report front matter; only the following items are recoverable.] Subject terms: unsteady aerodynamics, flow-induced vibrations, flat plate. Appendix X: Prediction of Turbulence Generated Random Vibrational Response of Turbomachinery Blading. Appendix XI: Viscous Oscillating … (title truncated). Abstract fragment: "… failure is fatigue caused by vibrations at levels exceeding material endurance limits. These vibrations occur when a periodic forcing function, with …"

  15. DARPA Quantum Network Testbed

    DTIC Science & Technology

    2007-07-01

    [Report excerpt; table-of-contents and bullet fragments.] Sections include "End-to-End Security with Photonic Switching" and "Year 3 – Adding a Link that Implements Entanglement-Based QKD". Recoverable items: entangled photon pairs at 1550 nm; built a high-speed (~10 MHz) physical random number generator and integrated it into Bob; descriptions of each kind of photonic setup in the Quantum Network, which over time will grow to include the weak-coherent link and the entangled link (text truncated).

  16. Spin Glass Patch Planting

    NASA Technical Reports Server (NTRS)

    Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.

    2016-01-01

    In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking tests for future generation quantum annealing machines, classical and quantum mechanical optimization algorithms.

  17. Direct Synthesis of Microwave Waveforms for Quantum Computing

    NASA Astrophysics Data System (ADS)

    Raftery, James; Vrajitoarea, Andrei; Zhang, Gengyan; Leng, Zhaoqi; Srinivasan, Srikanth; Houck, Andrew

    Current state-of-the-art quantum computing experiments in the microwave regime use control pulses generated by modulating microwave tones with baseband signals generated by an arbitrary waveform generator (AWG). Recent advances in digital-to-analog conversion technology have made it possible to directly synthesize arbitrary microwave pulses with sampling rates of 65 gigasamples per second (GSa/s) or higher. These new ultra-wide-bandwidth AWGs could dramatically simplify the classical control chain for quantum computing experiments, presenting potential cost savings and reducing the number of components that need to be carefully calibrated. Here we use a Keysight M8195A AWG to study the viability of such a simplified scheme, demonstrating randomized benchmarking of a superconducting qubit with high fidelity.

  18. The effect of working memory load on semantic illusions: what the phonological loop and central executive have to contribute.

    PubMed

    Büttner, Anke Caroline

    2012-01-01

    When asked how many animals of each kind Moses took on the Ark, most people respond with "two" despite the substituted name (Moses for Noah) in the question. Possible explanations for semantic illusions appear to be related to processing limitations such as those of working memory. Indeed, individual working memory capacity has an impact upon how sentences containing substitutions are processed. This experiment examined further the role of working memory in the occurrence of semantic illusions using a dual-task working memory load approach. Participants verified statements while engaging in either articulatory suppression or random number generation. Secondary task type had a significant effect on semantic illusion rate, but only when comparing the control condition to the two dual-task conditions. Furthermore, secondary task performance in the random number generation condition declined, suggesting a tradeoff between tasks. Response time analyses also showed a different pattern of processing across the conditions. The findings suggest that the phonological loop plays a role in representing semantic illusion sentences coherently and in monitoring for details, while the role of the central executive is to assist gist-processing of sentences. This usually efficient strategy leads to error in the case of semantic illusions.

  19. Certified randomness in quantum physics.

    PubMed

    Acín, Antonio; Masanes, Lluis

    2016-12-07

    The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.

  20. Entropy generation, particle creation, and quantum field theory in a cosmological spacetime: When do number and entropy increase\\?

    NASA Astrophysics Data System (ADS)

    Kandrup, Henry E.

    1988-06-01

    This paper reexamines the statistical quantum field theory of a free, minimally coupled, real scalar field Φ in a statically bounded, classical Friedmann cosmology, where the time-dependent scale factor Ω(t) tends to constant values Ω1 and Ω2 for t < t1 and t > t2. The principal objective is to investigate the intuition that ``entropy'' S correlates with average particle number ⟨N⟩, so that increases in ⟨N⟩ induced by parametric amplification manifest a one-to-one connection with increases in S. The definition of particle number Nk becomes unambiguous for t < t1 and t > t2, and the change ⟨N(t2)⟩ − ⟨N(t1)⟩ is guaranteed generically to be positive only for special initial data which, in a number representation, are characterized by ``random phases'' in the sense that any relative phase for the projection of ρ(t1) into two different number eigenstates is ``random'' or ``unobservable physically,'' and averaged over in a density matrix. More importantly for the notion of entropy, random-phase initial data also guarantee an increase in the spread of P({k, Nk}), so that, e.g., the sum of the variances Δ²N±k(t2) exceeds the initial Δ²N±k(t1). It is this increasing spread in P, rather than the growth in average numbers per se, which suggests that, for initial data manifesting random phases, SN(t2) > SN(t1), a result established rigorously in the limits of strong and weak particle creation.

  1. Photonic generation of polarization-resolved wideband chaos with time-delay concealment in three-cascaded vertical-cavity surface-emitting lasers.

    PubMed

    Liu, Huijie; Li, Nianqiang; Zhao, Qingchun

    2015-05-10

    Optical chaos generated by chaotic lasers has been widely used in several important applications, such as chaos-based communications and high-speed random-number generators. However, these applications are susceptible to degradation by the presence of time-delay (TD) signature identified from the chaotic output. Here we propose to achieve the concealment of TD signature, along with the enhancement of chaos bandwidth, in three-cascaded vertical-cavity surface-emitting lasers (VCSELs). The cascaded system is composed of an external-cavity master VCSEL, a solitary intermediate VCSEL, and a solitary slave VCSEL. Through mapping the evolutions of TD signature and chaos bandwidth in the parameter space of the injection strength and frequency detuning, photonic generation of polarization-resolved wideband chaos with TD concealment is numerically demonstrated for wide regions of the injection parameters.

  2. A Bayesian comparative effectiveness trial in action: developing a platform for multisite study adaptive randomization.

    PubMed

    Brown, Alexandra R; Gajewski, Byron J; Aaronson, Lauren S; Mudaranthakam, Dinesh Pal; Hunt, Suzanne L; Berry, Scott M; Quintana, Melanie; Pasnoor, Mamatha; Dimachkie, Mazen M; Jawdat, Omar; Herbelin, Laura; Barohn, Richard J

    2016-08-31

    In the last few decades, the number of trials using Bayesian methods has grown rapidly. Publications prior to 1990 included only three clinical trials that used Bayesian methods, but that number quickly jumped to 19 in the 1990s and to 99 from 2000 to 2012. While this literature provides many examples of Bayesian Adaptive Designs (BAD), none of the papers that are available walks the reader through the detailed process of conducting a BAD. This paper fills that gap by describing the BAD process used for one comparative effectiveness trial (Patient Assisted Intervention for Neuropathy: Comparison of Treatment in Real Life Situations) that can be generalized for use by others. A BAD was chosen with efficiency in mind. Response-adaptive randomization allows the potential for substantially smaller sample sizes, and can provide faster conclusions about which treatment or treatments are most effective. An Internet-based electronic data capture tool, which features a randomization module, facilitated data capture across study sites and an in-house computation software program was developed to implement the response-adaptive randomization. A process for adapting randomization with minimal interruption to study sites was developed. A new randomization table can be generated quickly and can be seamlessly integrated in the data capture tool with minimal interruption to study sites. This manuscript is the first to detail the technical process used to evaluate a multisite comparative effectiveness trial using adaptive randomization. An important opportunity for the application of Bayesian trials is in comparative effectiveness trials. The specific case study presented in this paper can be used as a model for conducting future clinical trials using a combination of statistical software and a web-based application. ClinicalTrials.gov Identifier: NCT02260388 , registered on 6 October 2014.
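
    The record does not spell out the allocation rule, so the following is only a minimal sketch of Bayesian response-adaptive randomization with Beta posteriors (Thompson-sampling-style allocation); the arm counts, block size, and function names are hypothetical, not the trial's actual engine.

```python
import numpy as np

rng = np.random.default_rng(42)

def update_allocation(successes, failures, n_draws=10000):
    """Estimate allocation probabilities for each arm from Beta posteriors.

    successes, failures: counts per arm (Beta(1, 1) priors assumed).
    Returns the probability that each arm is best, usable as the weights
    for the next randomization table.
    """
    draws = rng.beta(1 + np.asarray(successes)[:, None],
                     1 + np.asarray(failures)[:, None],
                     size=(len(successes), n_draws))
    best = np.argmax(draws, axis=0)
    return np.bincount(best, minlength=len(successes)) / n_draws

def new_randomization_table(weights, block_size=20):
    """Generate the next block of treatment assignments from the weights."""
    return rng.choice(len(weights), size=block_size, p=weights)

# Example: arm 1 looks better at an interim look, so it receives more weight.
weights = update_allocation(successes=[8, 14], failures=[12, 6])
print(weights, new_randomization_table(weights))
```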

  3. A random forest algorithm for nowcasting of intense precipitation events

    NASA Astrophysics Data System (ADS)

    Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh

    2017-09-01

    Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random-forest-based machine learning algorithm is tested for nowcasting of convective rain with a ground-based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption, whereas the upper frequency band relates to oxygen absorption; together they provide information on the temperature and humidity of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min and 60 min performs quite well (probability of detection of all types of weather condition ∼90%) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources, and it can be utilized in other forecasting problems.
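
    As an illustration of the workflow described (14 brightness-temperature inputs, 10-fold cross-validation, variable importance), here is a hedged scikit-learn sketch; the data are synthetic placeholders, and class weighting stands in for the SMOTE balancing used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical data: rows are observations, columns are the 14 radiometer
# brightness temperatures (22-31 GHz and 51-58 GHz channels); y marks whether
# convective rain began within the alarm-generation window (e.g. 30 min).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 14))
y = (rng.random(1000) < 0.1).astype(int)   # imbalanced, as in real nowcasting data

# The paper balances classes with SMOTE; class_weight="balanced" is a rough
# stand-in that avoids the extra imbalanced-learn dependency.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="recall")  # ~probability of detection
print("10-fold recall:", scores.mean())

# Variable importance, used in the paper to identify boundary-layer channels.
clf.fit(X, y)
print(clf.feature_importances_)
```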

  4. Programmable random interval generator

    NASA Technical Reports Server (NTRS)

    Lindsey, R. S., Jr.

    1973-01-01

    Random pulse generator can supply constant-amplitude randomly distributed pulses with average rate ranging from a few counts per second to more than one million counts per second. Generator requires no high-voltage power supply or any special thermal cooling apparatus. Device is uniquely versatile and provides wide dynamic range of operation.

  5. Analysis of Document Authentication Technique using Soft Magnetic Fibers

    NASA Astrophysics Data System (ADS)

    Aoki, Ayumi; Ikeda, Takashi; Yamada, Tsutomu; Takemura, Yasushi; Matsumoto, Tsutomu

    An artifact-metric system using magnetic fibers can be applied to the authentication of stock certificates, bills, passports, plastic cards, and other documents. Security of the system rests on the difficulty of copying it. This authentication system is based on randomly dispersed magnetic fibers embedded in documents. In this paper, a theoretical analysis was performed in order to evaluate this system. The positions of the magnetic fibers were determined by a conventional random number generator function. By measuring output waveforms with a magnetoresistance (MR) sensor, a false match rate (FMR) could be calculated. The density of the magnetic fibers and the dimensions of the MR sensor were optimized.

  6. Home-based neurologic music therapy for arm hemiparesis following stroke: results from a pilot, feasibility randomized controlled trial

    PubMed Central

    Street, Alexander J; Magee, Wendy L; Bateman, Andrew; Parker, Michael; Odell-Miller, Helen; Fachner, Jorg

    2017-01-01

    Objective: To assess the feasibility of a randomized controlled trial to evaluate music therapy as a home-based intervention for arm hemiparesis in stroke. Design: A pilot feasibility randomized controlled trial, with cross-over design. Randomization by statistician using computer-generated, random numbers concealed in opaque envelopes. Setting: Participants’ homes across Cambridgeshire, UK. Subjects: Eleven people with stroke and arm hemiparesis, 3–60 months post stroke, following discharge from community rehabilitation. Interventions: Each participant engaged in therapeutic instrumental music performance in 12 individual clinical contacts, twice weekly for six weeks. Main measures: Feasibility was estimated by recruitment from three community stroke teams over a 12-month period, attrition rates, completion of treatment and successful data collection. Structured interviews were conducted pre and post intervention to establish participant tolerance and preference. Action Research Arm Test and Nine-hole Peg Test data were collected at weeks 1, 6, 9, 15 and 18, pre and post intervention by a blinded assessor. Results: A total of 11 of 14 invited participants were recruited (intervention n = 6, waitlist n = 5). In total, 10 completed treatment and data collection. Conclusion: It cannot be concluded whether a larger trial would be feasible due to unavailable data regarding a number of eligible patients screened. Adherence to treatment, retention and interview responses might suggest that the intervention was motivating for participants. Trial registration: ClinicalTrials.gov identifier NCT 02310438. PMID:28643570

  7. Large-deviation theory for diluted Wishart random matrices

    NASA Astrophysics Data System (ADS)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economy. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ exp(−N Ψ_x(k)) follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.

  8. Analysis of the expected density of internal equilibria in random evolutionary multi-player multi-strategy games.

    PubMed

    Duong, Manh Hong; Han, The Anh

    2016-12-01

    In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. The study of this paper reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, they behave asymptotically as [Formula: see text] as d is sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations for our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.

  9. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
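
    The bi-random construction described here (a normal variable whose mean is itself normally distributed) is easy to sample; the following sketch uses hypothetical parameter values and is only an illustration, not the ECCP solver itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_birandom_normal(mu0, tau, sigma, size):
    """Draw from a bi-random normal variable.

    The mean itself is random: mu ~ N(mu0, tau^2), and the observation is
    X | mu ~ N(mu, sigma^2). Marginally, X ~ N(mu0, tau^2 + sigma^2).
    """
    mu = rng.normal(mu0, tau, size)
    return rng.normal(mu, sigma)

# Example: a CO2-allocation capacity with an uncertain mean (units hypothetical).
samples = sample_birandom_normal(mu0=100.0, tau=5.0, sigma=10.0, size=100000)
print(samples.mean(), samples.std())   # close to 100 and sqrt(25 + 100) ≈ 11.2
```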

  10. A multi-center randomized controlled trial to compare a self-ligating bracket with a conventional bracket in a UK population: Part 1: Treatment efficiency.

    PubMed

    O'Dywer, Lian; Littlewood, Simon J; Rahman, Shahla; Spencer, R James; Barber, Sophy K; Russell, Joanne S

    2016-01-01

    To use a two-arm parallel trial to compare treatment efficiency between a self-ligating and a conventional preadjusted edgewise appliance system. A prospective multi-center randomized controlled clinical trial was conducted in three hospital orthodontic departments. Subjects were randomly allocated to receive treatment with either a self-ligating (3M SmartClip) or conventional (3M Victory) preadjusted edgewise appliance bracket system using a computer-generated random sequence concealed in opaque envelopes, with stratification for operator and center. Two operators followed a standardized protocol regarding bracket bonding procedure and archwire sequence. Efficiency of each ligation system was assessed by comparing the duration of treatment (months), total number of appointments (scheduled and emergency visits), and number of bracket bond failures. One hundred thirty-eight subjects (mean age 14 years 11 months) were enrolled in the study, of which 135 subjects (97.8%) completed treatment. The mean treatment time and number of visits were 25.12 months and 19.97 visits in the SmartClip group and 25.80 months and 20.37 visits in the Victory group. The overall bond failure rate was 6.6% for the SmartClip and 7.2% for Victory, with a similar debond distribution between the two appliances. No significant differences were found between the bracket systems in any of the outcome measures. No serious harm was observed from either bracket system. There was no clinically significant difference in treatment efficiency between treatment with a self-ligating bracket system and a conventional ligation system.
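
    The trial describes computer-generated, concealed randomization stratified by operator and center; the sketch below shows one common way (permuted blocks per stratum) such a list might be generated. Block size, labels, and seeds are illustrative assumptions, not the trial's actual procedure.

```python
import random

def blocked_sequence(n_blocks, block=("SmartClip", "SmartClip", "Victory", "Victory"),
                     seed=0):
    """Return a randomized allocation list built from shuffled permuted blocks."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        b = list(block)
        rng.shuffle(b)
        sequence.extend(b)
    return sequence

# One concealed list per stratum (operator x center), as in the trial design.
strata = [(op, centre) for op in ("operator1", "operator2")
          for centre in ("centreA", "centreB", "centreC")]
allocation_lists = {s: blocked_sequence(10, seed=1000 + i) for i, s in enumerate(strata)}
print(allocation_lists[("operator1", "centreA")][:8])
```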

  11. Randomizer for High Data Rates

    NASA Technical Reports Server (NTRS)

    Garon, Howard; Sank, Victor J.

    2018-01-01

    NASA as well as a number of other space agencies now recognize that the current recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 Maximal Length Sequence (MLS) are required in order to fully cover a channel access data unit (CADU), spectral problems in the form of elevated spurious discretes (spurs) appear. Originally the randomizer was called a bit transition generator (BTG) precisely because it was thought that its primary value was to ensure sufficient bit transitions to allow the bit/symbol synchronizer to lock and remain locked. NASA has shown that the old BTG concept is a limited view of the real value of the randomizer sequence and that the randomizer also aids in signal acquisition as well as minimizing the potential for false decoder lock. Under the guidelines considered here, there are multiple maximal-length sequences over GF(2) that appear attractive in this application. Although there may be mitigating reasons why another MLS sequence could be selected, one sequence in particular possesses a combination of desired properties which sets it apart from the others.
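
    For readers unfamiliar with maximal-length sequences, the sketch below implements a generic Fibonacci LFSR randomizer. The tap set (x^8 + x^6 + x^5 + x^4 + 1) is a commonly tabulated maximal-length choice used purely for illustration; it is not necessarily the CCSDS PN8 polynomial or the candidate sequence discussed in this record.

```python
def lfsr_sequence(taps=(8, 6, 5, 4), seed=0xFF, nbits=None):
    """Pseudo-random bit stream from a Fibonacci LFSR.

    taps are the exponents of the feedback polynomial (here x^8+x^6+x^5+x^4+1,
    a commonly tabulated maximal-length choice, used only for illustration).
    """
    n = max(taps)
    state = seed & ((1 << n) - 1)          # any nonzero start state works
    length = nbits if nbits is not None else (1 << n) - 1
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1   # XOR of the tapped stages
        state = (state >> 1) | (fb << (n - 1))
    return out

bits = lfsr_sequence()
print(len(bits), sum(bits))                # a maximal 8-bit register cycles through 255 states

# Randomizing a frame is a bitwise XOR with the sequence (derandomizing is the same XOR).
frame = [1, 0, 1, 1, 0, 0, 1, 0]
randomized = [b ^ p for b, p in zip(frame, bits)]
```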

  12. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ˜5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, allows to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the sequential code. Program summary 1Title of program:clonEasy Catalogue identifier:ADYD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0 Program obtainable from:CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable:Any computer with a Unix style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler Operating systems under which the program has been tested:Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1) Compilers:GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux) Programming language used:Linux shell (bash) script, FORTRAN 77 No. of bits in a word:32 No. of lines in distributed program, including test data, etc.:1916 No. of bytes in distributed program, including test data, etc.:18 202 Distribution format:tar.gz Nature of the physical problem:There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time. Method of solution:The presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds. 
Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connexion bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others in which it is operable: Any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, on a parallel simulation. The values are found using the formula S_J = (a^J S_0) MOD m, which gives the random value that will be generated after J iterations of the MLCG. Restrictions on the complexity of the program: The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated a distance smaller than 2^31, when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period. Typical running time: The execution time depends on the parameters of the used MLCG and the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
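
    The seed-spacing idea behind seedsMLCG reduces to modular exponentiation: for an MLCG S_{k+1} = (a·S_k) MOD m, the state after J steps is S_J = (a^J·S_0) MOD m, so widely separated seeds can be computed directly. Below is a minimal Python sketch (not the FORTRAN tool itself), using the multiplier and modulus commonly attributed to the first RANECU MLCG; treat the constants as illustrative.

```python
def mlcg_skip(seed, a, m, j):
    """State of the MLCG s -> (a*s) mod m after j steps: S_j = (a**j * seed) mod m."""
    return (pow(a, j, m) * seed) % m

def disjoint_seeds(seed, a, m, n_cpus, spacing):
    """Seeds that start each CPU 'spacing' draws apart in the same cycle."""
    return [mlcg_skip(seed, a, m, i * spacing) for i in range(n_cpus)]

# First MLCG used by RANECU (L'Ecuyer): a = 40014, m = 2**31 - 85.
a, m = 40014, 2147483563
seeds = disjoint_seeds(seed=12345, a=a, m=m, n_cpus=4, spacing=10**12)
print(seeds)
```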

  13. RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis

    PubMed Central

    Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab

    2012-01-01

    RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate the DNA sequence, or the user can upload sequences of their own interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts the DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information such as the total number of nucleotide bases and ATGC base contents along with their respective percentages, and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available. Availability http://www.cemb.edu.pk/sw.html Abbreviations RDNAnalyzer - Random DNA Analyser, GUI - Graphical user interface, XAML - Extensible Application Markup Language. PMID:23055611
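
    The routine operations listed (random sequence generation, reverse complement, base content) are straightforward; the sketch below is an illustrative Python analogue, not RDNAnalyzer's own C# code.

```python
import random
from collections import Counter

COMPLEMENT = str.maketrans("ATGC", "TACG")

def random_dna(length, seed=None):
    """Generate a random DNA sequence of the given length."""
    rng = random.Random(seed)
    return "".join(rng.choice("ATGC") for _ in range(length))

def reverse_complement(seq):
    """Complement each base and reverse the sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def base_content(seq):
    """Counts and percentages of A, T, G, C in the sequence."""
    counts = Counter(seq)
    return {b: (counts[b], 100.0 * counts[b] / len(seq)) for b in "ATGC"}

seq = random_dna(60, seed=7)
print(seq)
print(reverse_complement(seq))
print(base_content(seq))
```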

  14. Sustained State-Independent Quantum Contextual Correlations from a Single Ion

    NASA Astrophysics Data System (ADS)

    Leupold, F. M.; Malinowski, M.; Zhang, C.; Negnevitsky, V.; Alonso, J.; Home, J. P.; Cabello, A.

    2018-05-01

    We use a single trapped-ion qutrit to demonstrate the quantum-state-independent violation of noncontextuality inequalities using a sequence of randomly chosen quantum nondemolition projective measurements. We concatenate 53 × 10^6 sequential measurements of 13 observables, and unambiguously violate an optimal noncontextual bound. We use the same data set to characterize imperfections including signaling and repeatability of the measurements. The experimental sequence was generated in real time with a quantum random number generator integrated into our control system to select the subsequent observable with a latency below 50 μs, which can be used to constrain contextual hidden-variable models that might describe our results. The state-recycling experimental procedure is resilient to noise and independent of the qutrit state, substantiating the fact that the contextual nature of quantum physics is connected to measurements and not necessarily to designated states. The use of extended sequences of quantum nondemolition measurements finds applications in the fields of sensing and quantum information.

  15. Protein Charge and Mass Contribute to the Spatio-temporal Dynamics of Protein-Protein Interactions in a Minimal Proteome

    PubMed Central

    Xu, Yu; Wang, Hong; Nussinov, Ruth; Ma, Buyong

    2013-01-01

    We constructed and simulated a ‘minimal proteome’ model using Langevin dynamics. It contains 206 essential protein types which were compiled from the literature. For comparison, we generated six proteomes with randomized concentrations. We found that the net charges and molecular weights of the proteins in the minimal genome are not random. The net charge of a protein decreases linearly with molecular weight, with small proteins being mostly positively charged and large proteins negatively charged. The protein copy numbers in the minimal genome have the tendency to maximize the number of protein-protein interactions in the network. Negatively charged proteins which tend to have larger sizes can provide large collision cross-section allowing them to interact with other proteins; on the other hand, the smaller positively charged proteins could have higher diffusion speed and are more likely to collide with other proteins. Proteomes with random charge/mass populations form less stable clusters than those with experimental protein copy numbers. Our study suggests that ‘proper’ populations of negatively and positively charged proteins are important for maintaining a protein-protein interaction network in a proteome. It is interesting to note that the minimal genome model based on the charge and mass of E. Coli may have a larger protein-protein interaction network than that based on the lower organism M. pneumoniae. PMID:23420643

  16. Dynamic Simulation of Random Packing of Polydispersive Fine Particles

    NASA Astrophysics Data System (ADS)

    Ferraz, Carlos Handrey Araujo; Marques, Samuel Apolinário

    2018-02-01

    In this paper, we perform molecular dynamic (MD) simulations to study the two-dimensional packing process of both monosized and random size particles with radii ranging from 1.0 to 7.0 μm. The initial positions as well as the radii of five thousand fine particles were defined inside a rectangular box by using a random number generator. Both the translational and rotational movements of each particle were considered in the simulations. In order to deal with interacting fine particles, we take into account both the contact forces and the long-range dispersive forces. We account for normal and static/sliding tangential friction forces between particles and between particle and wall by means of a linear model approach, while the long-range dispersive forces are computed by using a Lennard-Jones-like potential. The packing processes were studied assuming different long-range interaction strengths. We carry out statistical calculations of the different quantities studied such as packing density, mean coordination number, kinetic energy, and radial distribution function as the system evolves over time. We find that the long-range dispersive forces can strongly influence the packing process dynamics as they might form large particle clusters, depending on the intensity of the long-range interaction strength.

  17. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. Peers therefore incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with Gauss-Jordan elimination by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay. PMID:23940530

  18. An investigation of error correcting techniques for OMV and AXAF

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and number of uncorrectable errors were calculated for each data set before testing.
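
    A sketch of the kind of error-pattern generator described (Poisson-distributed gaps between error events, Gaussian burst lengths) is given below; the parameters and the XOR-injection step are illustrative assumptions, not the original test-apparatus code.

```python
import numpy as np

rng = np.random.default_rng(255)

def burst_error_pattern(n_bits, mean_gap=500.0, burst_mean=8.0, burst_sd=3.0):
    """0/1 error mask: Gaussian-length bursts separated by Poisson-distributed gaps."""
    errors = np.zeros(n_bits, dtype=np.uint8)
    pos = 0
    while pos < n_bits:
        pos += rng.poisson(mean_gap)                     # gap until the next burst
        burst = max(1, int(round(rng.normal(burst_mean, burst_sd))))
        errors[pos:pos + burst] = 1                      # slicing clips at the end
        pos += burst
    return errors

pattern = burst_error_pattern(255 * 8)                   # one 255-byte Reed-Solomon block
data = np.frombuffer(rng.bytes(255), dtype=np.uint8)
corrupted = data ^ np.packbits(pattern)                  # inject the known error pattern
print("errored bits:", int(pattern.sum()))
```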

  19. Study of Randomness in AES Ciphertexts Produced by Randomly Generated S-Boxes and S-Boxes with Various Modulus and Additive Constant Polynomials

    NASA Astrophysics Data System (ADS)

    Das, Suman; Sadique Uz Zaman, J. K. M.; Ghosh, Ranjan

    2016-06-01

    In Advanced Encryption Standard (AES), the standard S-Box is conventionally generated by using a particular irreducible polynomial {11B} in GF(2^8) as the modulus and a particular additive constant polynomial {63} in GF(2), though it can be generated by many other polynomials. In this paper, it has been shown that it is possible to generate secured AES S-Boxes by using some other selected modulus and additive polynomials and also can be generated randomly, using a PRNG like BBS. A comparative study has been made on the randomness of corresponding AES ciphertexts generated, using these S-Boxes, by the NIST Test Suite coded for this paper. It has been found that besides using the standard one, other moduli and additive constants are also able to generate equally or better random ciphertexts; the same is true for random S-Boxes also. As these new types of S-Boxes are user-defined, hence unknown, they are able to prevent linear and differential cryptanalysis. Moreover, they act as additional key-inputs to AES, thus increasing the key-space.
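
    The construction the paper varies can be sketched as follows: take the multiplicative inverse in GF(2^8) under a chosen irreducible modulus, then apply the standard AES affine map with a chosen additive constant. With {11B} and {63} this reproduces the usual S-Box; other irreducible moduli and constants give alternative S-Boxes (the BBS-generated random S-Boxes are not sketched here). The code below is an illustrative reimplementation under these assumptions, not the paper's own.

```python
def gf_mul(a, b, modulus=0x11B):
    """Multiply two bytes in GF(2^8), reducing by the chosen irreducible modulus."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= modulus
    return p

def gf_inv(x, modulus=0x11B):
    """Multiplicative inverse in GF(2^8) (0 maps to 0), found by brute force."""
    if x == 0:
        return 0
    return next(y for y in range(1, 256) if gf_mul(x, y, modulus) == 1)

def sbox(modulus=0x11B, constant=0x63):
    """Inverse in GF(2^8) followed by the standard AES affine map plus 'constant'."""
    table = []
    for x in range(256):
        v = gf_inv(x, modulus)
        y = constant
        for i in range(8):
            bit = ((v >> i) ^ (v >> ((i + 4) % 8)) ^ (v >> ((i + 5) % 8))
                   ^ (v >> ((i + 6) % 8)) ^ (v >> ((i + 7) % 8))) & 1
            y ^= bit << i
        table.append(y)
    return table

standard = sbox()
print(hex(standard[0x00]), hex(standard[0x01]))   # 0x63 0x7c for the standard choice
alternative = sbox(modulus=0x11D, constant=0x5A)  # another irreducible modulus and constant
```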

  20. On grey levels in random CAPTCHA generation

    NASA Astrophysics Data System (ADS)

    Newton, Fraser; Kouritzin, Michael A.

    2011-06-01

    A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
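
    A toy version of the background-texture idea (evolving i.i.d. noise into a correlated grey-level field by Gibbs sampling) is sketched below; the energy function, parameters, and field size are illustrative assumptions, and the character rendering and parameter-estimation steps of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def gibbs_grey_field(shape=(40, 100), levels=4, beta=1.2, sweeps=10):
    """Evolve i.i.d. noise into a spatially correlated grey-level field.

    Each site is resampled from its conditional distribution given its four
    neighbours: P(level) proportional to exp(-beta * sum |level - neighbour|).
    """
    field = rng.integers(levels, size=shape)
    h, w = shape
    lv = np.arange(levels)
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = (field[(i - 1) % h, j], field[(i + 1) % h, j],
                        field[i, (j - 1) % w], field[i, (j + 1) % w])
                energy = sum(np.abs(lv - n) for n in nbrs)
                p = np.exp(-beta * energy)
                field[i, j] = rng.choice(lv, p=p / p.sum())
    return field

background = gibbs_grey_field()
print(background.shape, np.bincount(background.ravel()))  # grey-level histogram
```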

  1. When Gravity Fails: Local Search Topology

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Cheeseman, Peter; Stutz, John; Lau, Sonie (Technical Monitor)

    1997-01-01

    Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e. global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.

  2. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm.

    PubMed

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely random as it is the case of RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.

  3. Physics Flash December 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kippen, Karen Elizabeth

    This is the December 2016 issue of Physics Flash, the newsletter of the Physics Division of Los Alamos National Laboratory (LANL). In this issue, the following topics are covered: Novel liquid helium technique to aid highly sensitive search for a neutron electrical dipole moment; Silverleaf: Prototype Red Sage experiments performed at Q-site; John L. Kline named 2016 APS Fellow; Physics students in the news; First Entropy Engine quantum random number generator hits the market; and celebrating service.

  4. Three-observer Bell inequality violation on a two-qubit entangled state

    NASA Astrophysics Data System (ADS)

    Schiavon, Matteo; Calderaro, Luca; Pittaluga, Mirko; Vallone, Giuseppe; Villoresi, Paolo

    2017-03-01

    Bipartite Bell inequalities can simultaneously be violated by two different pairs of observers when weak measurements and signalling is employed. Here, we experimentally demonstrate the violation of two simultaneous CHSH inequalities by exploiting a two-photon polarisation maximally entangled state. Our results demonstrate that large double violation is experimentally achievable. Our demonstration may have impact for Quantum Key Distribution or certification of Quantum Random Number generators based on weak measurements.

  5. Immortality of cancers

    PubMed Central

    Duesberg, Peter; McCormack, Amanda

    2013-01-01

    Immortality is a common characteristic of cancers, but its origin and purpose are still unclear. Here we advance a karyotypic theory of immortality based on the theory that carcinogenesis is a form of speciation. Accordingly, cancers are generated from normal cells by random karyotypic rearrangements and selection for cancer-specific reproductive autonomy. Since such rearrangements unbalance long-established mitosis genes, cancer karyotypes vary spontaneously but are stabilized perpetually by clonal selections for autonomy. To test this theory we have analyzed neoplastic clones, presumably immortalized by transfection with overexpressed telomerase or with SV40 tumor virus, for the predicted clonal yet flexible karyotypes. The following results were obtained: (1) All immortal tumorigenic lines from cells transfected with overexpressed telomerase had clonal and flexible karyotypes; (2) Searching for the origin of such karyotypes, we found spontaneously increasing, random aneuploidy in human fibroblasts early after transfection with overexpressed telomerase; (3) Late after transfection, new immortal tumorigenic clones with new clonal and flexible karyotypes were found; (4) Testing immortality of one clone during 848 unselected generations showed the chromosome number was stable, but the copy numbers of 36% of chromosomes drifted ± 1; (5) Independent immortal tumorigenic clones with individual, flexible karyotypes arose after individual latencies; (6) Immortal tumorigenic clones with new flexible karyotypes also arose late from cells of a telomerase-deficient mouse rendered aneuploid by SV40 virus. Because immortality and tumorigenicity: (1) correlated exactly with individual clonal but flexible karyotypes; (2) originated simultaneously with such karyotypes; and (3) arose in the absence of telomerase, we conclude that clonal and flexible karyotypes generate the immortality of cancers. PMID:23388461

  6. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
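
    The package computes this distribution exactly in R; as a quick illustration of the quantity involved, the Python sketch below estimates the distribution of distances between successes in fixed-length Bernoulli sequences by simulation (parameters are hypothetical).

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(11)

def gap_distribution(n=1000, p=0.01, trials=2000):
    """Empirical distribution of distances between consecutive successes
    in Bernoulli(p) sequences of fixed length n."""
    gaps = Counter()
    for _ in range(trials):
        hits = np.flatnonzero(rng.random(n) < p)   # positions of successes
        gaps.update(np.diff(hits).tolist())
    total = sum(gaps.values())
    return {d: c / total for d, c in sorted(gaps.items())}

dist = gap_distribution()
short = sum(prob for d, prob in dist.items() if d <= 10)
print("P(gap <= 10) ≈", round(short, 4))   # basis for an over-representation test
```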

  7. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
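
    A minimal sketch of the simulation loop described (independent random variables per trial, repeated many times, histograms approximating the outcome densities) is given below; the distributions and economic parameters are hypothetical stand-ins, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1971)

def one_program(wells=20, p_success=0.15, price=3.0, dev_cost=40.0, well_cost=1.5):
    """One exploratory drilling program: returns (net return, unit production cost)."""
    successes = rng.binomial(wells, p_success)
    # Lognormal reservoir sizes for each discovery (independently distributed, as assumed).
    reserves = rng.lognormal(mean=3.0, sigma=1.0, size=successes).sum()
    cost = wells * well_cost + successes * dev_cost
    revenue = price * reserves
    unit_cost = cost / reserves if reserves > 0 else np.inf
    return revenue - cost, unit_cost

results = np.array([one_program() for _ in range(10000)])
net_return, unit_cost = results[:, 0], results[:, 1]
print("P(net return > 0) ≈", (net_return > 0).mean())
hist, edges = np.histogram(net_return, bins=50)   # approximates its probability density
```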

  8. Predicting protein functions from redundancies in large-scale protein interaction networks

    NASA Technical Reports Server (NTRS)

    Samanta, Manoj Pratim; Liang, Shoudan

    2003-01-01

    Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share significantly larger number of common interaction partners than random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
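
    The core statistic, how unlikely it is for two proteins to share the observed number of interaction partners by chance, can be sketched with a hypergeometric tail probability; the exact formula and thresholds used in the paper may differ.

```python
from scipy.stats import hypergeom

def shared_partner_pvalue(n_proteins, deg_a, deg_b, shared):
    """P(two proteins with deg_a and deg_b partners share >= 'shared' of them
    by chance), under a hypergeometric null on a network of n_proteins."""
    # Drawing deg_b partners out of n_proteins, of which deg_a are "marked".
    return hypergeom.sf(shared - 1, n_proteins, deg_a, deg_b)

# Example: in a 6000-protein network, two proteins with 30 and 25 partners
# sharing 6 of them is very unlikely to happen at random.
print(shared_partner_pvalue(6000, 30, 25, 6))
```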

  9. Influence of radiation on metastability-based TRNG

    NASA Astrophysics Data System (ADS)

    Wieczorek, Piotr Z.; Wieczorek, Zbigniew

    2017-08-01

    This paper presents a True Random Number Generator (TRNG) based on flip-flops with violated timing constraints. The proposed circuit has been implemented in a Xilinx Spartan 6 device. The TRNG circuit utilizes the metastability phenomenon as a source of randomness, so the paper discusses the influence of timing constraints on how close the flip-flop operates to metastability. The metastable range of operation enhances the influence of noise on flip-flop behavior; therefore, the influence of an external stochastic source on the flip-flop operation is also investigated. For this purpose a radioactive source was used. According to the results shown in the paper, radiation increases the unpredictability of the metastable process of the flip-flops operating as the randomness source in the TRNG. The statistical properties of the TRNG operating under increased radiation were verified with the NIST battery of statistical tests.

  10. Constructing high complexity synthetic libraries of long ORFs using in vitro selection

    NASA Technical Reports Server (NTRS)

    Cho, G.; Keefe, A. D.; Liu, R.; Wilson, D. S.; Szostak, J. W.

    2000-01-01

    We present a method that can significantly increase the complexity of protein libraries used for in vitro or in vivo protein selection experiments. Protein libraries are often encoded by chemically synthesized DNA, in which part of the open reading frame is randomized. There are, however, major obstacles associated with the chemical synthesis of long open reading frames, especially those containing random segments. Insertions and deletions that occur during chemical synthesis cause frameshifts, and stop codons in the random region will cause premature termination. These problems can together greatly reduce the number of full-length synthetic genes in the library. We describe a strategy in which smaller segments of the synthetic open reading frame are selected in vitro using mRNA display for the absence of frameshifts and stop codons. These smaller segments are then ligated together to form combinatorial libraries of long uninterrupted open reading frames. This process can increase the number of full-length open reading frames in libraries by up to two orders of magnitude, resulting in protein libraries with complexities of greater than 10^13. We have used this methodology to generate three types of displayed protein library: a completely random sequence library, a library of concatemerized oligopeptide cassettes with a propensity for forming amphipathic alpha-helical or beta-strand structures, and a library based on one of the most common enzymatic scaffolds, the alpha/beta (TIM) barrel. Copyright 2000 Academic Press.

  11. Analysis of time series for postal shipments in Regional VII East Java Indonesia

    NASA Astrophysics Data System (ADS)

    Kusrini, DE; Ulama, B. S. S.; Aridinanti, L.

    2018-03-01

    Changes in the number of goods delivered through PT. Pos Regional VII East Java Indonesia indicate that the upward and downward trends in document and non-document deliveries at PT. Pos Regional VII East Java Indonesia are strongly influenced by external conditions, so that predicting the number of document and non-document shipments requires a model that can accommodate this. Based on the time series plots, the monthly data fluctuate over 2013-2016, so the modelling is done using ARIMA or seasonal ARIMA and the best model is selected based on the smallest AIC value. The analysis of the number of shipments for each product sent through the Sub-Regional Postal Office VII East Java indicates that 5 of the 26 post offices in the region are covered. The largest number of shipments is found in the PPB (Paket Pos Biasa, regular/non-document package) and SKH (Surat Kilat Khusus, special express mail/document) products. The time series models obtained are largely random walk models, meaning that future shipment numbers are influenced by random effects that are difficult to predict. Some are AR and MA models, except for Express shipment products destined for the Malang post office, which follow a seasonal ARIMA model at lags 6 and 12.
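
    An AIC-based selection over ARIMA and seasonal ARIMA candidates, as described, can be sketched with statsmodels; the monthly series below is a synthetic placeholder and the candidate orders are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly shipment counts, Jan 2013 - Dec 2016.
rng = np.random.default_rng(2016)
y = pd.Series(1000 + np.cumsum(rng.normal(0, 30, 48)),
              index=pd.date_range("2013-01", periods=48, freq="MS"))

candidates = [((p, d, q), (P, D, Q, 12))
              for p in (0, 1) for d in (0, 1) for q in (0, 1)
              for P in (0, 1) for D in (0, 1) for Q in (0, 1)]

best = None
for order, seasonal in candidates:
    try:
        res = SARIMAX(y, order=order, seasonal_order=seasonal).fit(disp=False)
    except Exception:
        continue                       # skip candidates that fail to estimate
    if best is None or res.aic < best[0]:
        best = (res.aic, order, seasonal)

print("best by AIC:", best)
```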

  12. Spectral statistics and scattering resonances of complex primes arrays

    NASA Astrophysics Data System (ADS)

    Wang, Ren; Pinheiro, Felipe A.; Dal Negro, Luca

    2018-01-01

    We introduce a class of aperiodic arrays of electric dipoles generated from the distribution of prime numbers in complex quadratic fields (Eisenstein and Gaussian primes) as well as quaternion primes (Hurwitz and Lipschitz primes), and study the nature of their scattering resonances using the vectorial Green's matrix method. In these systems we demonstrate several distinctive spectral properties, such as the absence of level repulsion in the strongly scattering regime, critical statistics of level spacings, and the existence of critical modes, which are extended fractal modes with long lifetimes not supported by either random or periodic systems. Moreover, we show that one can predict important physical properties, such as the existence of spectral gaps, by analyzing the eigenvalue distribution of the Green's matrix of the arrays in the complex plane. Our results unveil the importance of aperiodic correlations in prime number arrays for the engineering of gapped photonic media that support far richer mode localization and spectral properties compared to usual periodic and random media.

  13. Initial experience with a novel pre-sign-out quality assurance tool for review of random surgical pathology diagnoses in a subspecialty-based university practice.

    PubMed

    Owens, Scott R; Wiehagen, Luke T; Kelly, Susan M; Piccoli, Anthony L; Lassige, Karen; Yousem, Samuel A; Dhir, Rajiv; Parwani, Anil V

    2010-09-01

    We recently implemented a novel pre-sign-out quality assurance tool in our subspecialty-based surgical pathology practice at the University of Pittsburgh Medical Center. It randomly selects an adjustable percentage of cases for review by a second pathologist at the time the originating pathologist's electronic signature is entered and requires that the review be completed within 24 hours, before release of the final report. The tool replaced a retrospective audit system and it has been in successful use since January 2009. We report our initial experience for the first 14 months of its service. During this time, the disagreement numbers and levels were similar to those identified using the retrospective system, case turnaround time was not significantly affected, and the number of case amendments generated decreased. The tool is a useful quality assurance instrument and its prospective nature allows for the potential prevention of some serious errors.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.

    The Futility package contains the following: 1) Definition of the size of integers and real numbers; 2) A generic unit test harness; 3) Definitions for some basic extensions to the Fortran language: arbitrary length strings, a parameter list construct, exception handlers, command line processor, timers; 4) Geometry definitions: point, line, plane, box, cylinder, polyhedron; 5) File wrapper functions: standard Fortran input/output files, Fortran binary files, HDF5 files; 6) Parallel wrapper functions: MPI and OpenMP abstraction layers, partitioning algorithms; 7) Math utilities: BLAS, Matrix and Vector definitions, Linear Solver methods and wrappers for other TPLs (PETSc, MKL, etc.), preconditioner classes; 8) Misc: random number generator, water saturation properties, sorting algorithms.

  15. Experimental nonlocality-based randomness generation with nonprojective measurements

    NASA Astrophysics Data System (ADS)

    Gómez, S.; Mattar, A.; Gómez, E. S.; Cavalcanti, D.; Farías, O. Jiménez; Acín, A.; Lima, G.

    2018-04-01

    We report on an optical setup generating more than one bit of randomness from one entangled bit (i.e., a maximally entangled state of two qubits). The amount of randomness is certified through the observation of Bell nonlocal correlations. To attain this result we implemented a high-purity entanglement source and a nonprojective three-outcome measurement. Our implementation achieves a gain of 27% of randomness as compared with the standard methods using projective measurements. Additionally, we estimate the amount of randomness certified in a one-sided device-independent scenario, through the observation of Einstein-Podolsky-Rosen steering. Our results prove that nonprojective quantum measurements allow extending the limits for nonlocality-based certified randomness generation using current technology.

  16. The cosmic microwave background radiation power spectrum as a random bit generator for symmetric- and asymmetric-key cryptography.

    PubMed

    Lee, Jeffrey S; Cleaver, Gerald B

    2017-10-01

    In this note, the Cosmic Microwave Background (CMB) Radiation is shown to be capable of functioning as a Random Bit Generator, and constitutes an effectively infinite supply of truly random one-time pad values of arbitrary length. It is further argued that the CMB power spectrum potentially conforms to the FIPS 140-2 standard. Additionally, its applicability to the generation of a (n × n) random key matrix for a Vernam cipher is established.
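
    For context, the Vernam cipher mentioned above reduces to a bitwise XOR of the message with a truly random key of equal length. The sketch below is a generic illustration in Python, with the secrets module standing in for CMB-derived random bytes.

      import secrets

      def vernam_encrypt(message: bytes, key: bytes) -> bytes:
          """XOR one-time pad; decryption is the same operation with the same key."""
          if len(key) != len(message):
              raise ValueError("one-time pad key must match the message length")
          return bytes(m ^ k for m, k in zip(message, key))

      plaintext = b"launch at dawn"
      pad = secrets.token_bytes(len(plaintext))   # stand-in for CMB-derived random bytes
      ciphertext = vernam_encrypt(plaintext, pad)
      assert vernam_encrypt(ciphertext, pad) == plaintext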

  17. A new simple technique for improving the random properties of chaos-based cryptosystems

    NASA Astrophysics Data System (ADS)

    Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.

    2018-03-01

    A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique manages to improve the randomness properties of the generated keystream by preventing the system from falling into short-period cycles due to digitization. In order to test this technique, a stream cipher based on a Skew Tent Map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to the randomness of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have proved that our method can considerably improve the randomness of the generated keystreams. In order to incorporate our randomness-enhancement technique, only 41 extra slices have been needed, proving that, besides being effective, this method is also efficient in terms of area and hardware resources.
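
    To make the chaotic keystream concrete, the sketch below iterates a skew tent map and quantizes its state to bits. The break-point parameter, the quantization rule and the floating-point arithmetic are illustrative assumptions, and the sketch deliberately omits the digitization and period-extension issues that the paper addresses.

      def skew_tent(x, p):
          """One iteration of the skew tent map on [0, 1] with break point p."""
          return x / p if x < p else (1.0 - x) / (1.0 - p)

      def keystream(seed, p, nbits):
          """Generate nbits by iterating the map and keeping one bit per iteration."""
          x, bits = seed, []
          for _ in range(nbits):
              x = skew_tent(x, p)
              bits.append(1 if x >= 0.5 else 0)   # crude quantization of the state
          return bits

      print(keystream(seed=0.1234567, p=0.37, nbits=32))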

  18. Method and apparatus for in-situ characterization of energy storage and energy conversion devices

    DOEpatents

    Christophersen, Jon P [Idaho Falls, ID; Motloch, Chester G [Idaho Falls, ID; Morrison, John L [Butte, MT; Albrecht, Weston [Layton, UT

    2010-03-09

    Disclosed are methods and apparatuses for determining an impedance of an energy-output device using a random noise stimulus applied to the energy-output device. A random noise signal is generated and converted to a random noise stimulus as a current source correlated to the random noise signal. A bias-reduced response of the energy-output device to the random noise stimulus is generated by comparing a voltage at the energy-output device terminal to an average voltage signal. The random noise stimulus and bias-reduced response may be periodically sampled to generate a time-varying current stimulus and a time-varying voltage response, which may be correlated to generate an autocorrelated stimulus, an autocorrelated response, and a cross-correlated response. Finally, the autocorrelated stimulus, the autocorrelated response, and the cross-correlated response may be combined to determine at least one of impedance amplitude, impedance phase, and complex impedance.
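
    The correlation-based processing in the claims can be illustrated compactly: with a random current stimulus i(t) and a measured voltage response v(t), the impedance spectrum follows from the ratio of the cross-spectral density to the stimulus auto-spectral density. The Python sketch below is a generic Welch-estimate illustration of that idea, not the patented apparatus; the device model (a resistor in series with a parallel RC branch) and all parameter values are assumptions.

      import numpy as np
      from scipy.signal import csd, welch, lfilter

      fs = 1000.0                       # sampling rate, Hz (assumed)
      rng = np.random.default_rng(1)
      i_t = rng.normal(size=20000)      # random noise current stimulus

      # Assumed device under test: series R plus a parallel RC branch, crudely discretized.
      R0, R1, C1 = 0.05, 0.02, 10.0
      alpha = np.exp(-1.0 / (R1 * C1 * fs))
      v_rc = lfilter([(1 - alpha) * R1], [1, -alpha], i_t)        # RC branch voltage
      v_t = R0 * i_t + v_rc + 1e-4 * rng.normal(size=i_t.size)    # add measurement noise

      f, S_iv = csd(i_t, v_t, fs=fs, nperseg=2048)   # cross-spectral density
      _, S_ii = welch(i_t, fs=fs, nperseg=2048)      # stimulus auto-spectral density
      Z = S_iv / S_ii                                # complex impedance estimate
      print("estimated |Z| at the lowest frequency bins:", np.abs(Z[:3]))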

  19. Matched-filter algorithm for subpixel spectral detection in hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Borough, Howard C.

    1991-11-01

    Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers a significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects which only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5- and 7-band imagery has demonstrated significant utility in the past. Hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data was modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations and viewing geometry to represent a nadir view from 10,000 ft. altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data to provide random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detections for pixels filled greater than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands in the range from 101 to 12 bands was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
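
    The spectral matched filter mentioned above is commonly written as a background-whitened projection onto the target spectrum. The sketch below is a generic Python version of that detector under an assumed data layout (pixels by bands) and a made-up target spectrum, not the author's implementation.

      import numpy as np

      def matched_filter_scores(cube, target):
          """Spectral matched filter.

          cube:   array of shape (num_pixels, num_bands) of spectra
          target: array of shape (num_bands,) with the reference (e.g. paint) spectrum
          Returns one detection score per pixel; large scores indicate target presence.
          """
          mu = cube.mean(axis=0)                       # background mean spectrum
          cov = np.cov(cube, rowvar=False)             # background covariance
          cov_inv = np.linalg.pinv(cov)                # pseudo-inverse for stability
          d = target - mu
          w = cov_inv @ d / np.sqrt(d @ cov_inv @ d)   # whitened, normalized filter
          return (cube - mu) @ w

      # Tiny synthetic example: 16x21 pixels, 101 bands, a few pixels 30% filled.
      rng = np.random.default_rng(7)
      background = rng.normal(1.0, 0.05, size=(16 * 21, 101))
      paint = np.linspace(0.2, 1.5, 101)               # assumed target spectrum
      cube = background.copy()
      cube[:5] = 0.7 * cube[:5] + 0.3 * paint          # 30%-filled subpixel targets
      print("top scores:", np.sort(matched_filter_scores(cube, paint))[-5:])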

  20. Load-Dependent Interference of Deep Brain Stimulation of the Subthalamic Nucleus with Switching from Automatic to Controlled Processing During Random Number Generation in Parkinson's Disease.

    PubMed

    Williams, Isobel Anne; Wilkinson, Leonora; Limousin, Patricia; Jahanshahi, Marjan

    2015-01-01

    Deep brain stimulation of the subthalamic nucleus (STN DBS) ameliorates the motor symptoms of Parkinson's disease (PD). However, some aspects of executive control are impaired with STN DBS. We tested the prediction that (i) STN DBS interferes with switching from automatic to controlled processing during fast-paced random number generation (RNG) (ii) STN DBS-induced cognitive control changes are load-dependent. Fifteen PD patients with bilateral STN DBS performed paced-RNG, under three levels of cognitive load synchronised with a pacing stimulus presented at 1, 0.5 and 0.33 Hz (faster rates require greater cognitive control), with DBS on or off. Measures of output randomness were calculated. Countscore 1 (CS1) indicates habitual counting in steps of one (CS1). Countscore 2 (CS2) indicates a more controlled strategy of counting in twos. The fastest rate was associated with an increased CS1 score with STN DBS on compared to off. At the slowest rate, patients had higher CS2 scores with DBS off than on, such that the differences between CS1 and CS2 scores disappeared. We provide evidence for a load-dependent effect of STN DBS on paced RNG in PD. Patients could switch to more controlled RNG strategies during conditions of low cognitive load at slower rates only when the STN stimulators were off, but when STN stimulation was on, they engaged in more automatic habitual counting under increased cognitive load. These findings are consistent with the proposal that the STN implements a switch signal from the medial frontal cortex which enables a shift from automatic to controlled processing.

  1. Load-Dependent Interference of Deep Brain Stimulation of the Subthalamic Nucleus with Switching from Automatic to Controlled Processing During Random Number Generation in Parkinson’s Disease

    PubMed Central

    Williams, Isobel Anne; Wilkinson, Leonora; Limousin, Patricia; Jahanshahi, Marjan

    2015-01-01

    Background: Deep brain stimulation of the subthalamic nucleus (STN DBS) ameliorates the motor symptoms of Parkinson’s disease (PD). However, some aspects of executive control are impaired with STN DBS. Objective: We tested the prediction that (i) STN DBS interferes with switching from automatic to controlled processing during fast-paced random number generation (RNG) (ii) STN DBS-induced cognitive control changes are load-dependent. Methods: Fifteen PD patients with bilateral STN DBS performed paced-RNG, under three levels of cognitive load synchronised with a pacing stimulus presented at 1, 0.5 and 0.33 Hz (faster rates require greater cognitive control), with DBS on or off. Measures of output randomness were calculated. Countscore 1 (CS1) indicates habitual counting in steps of one (CS1). Countscore 2 (CS2) indicates a more controlled strategy of counting in twos. Results: The fastest rate was associated with an increased CS1 score with STN DBS on compared to off. At the slowest rate, patients had higher CS2 scores with DBS off than on, such that the differences between CS1 and CS2 scores disappeared. Conclusions: We provide evidence for a load-dependent effect of STN DBS on paced RNG in PD. Patients could switch to more controlled RNG strategies during conditions of low cognitive load at slower rates only when the STN stimulators were off, but when STN stimulation was on, they engaged in more automatic habitual counting under increased cognitive load. These findings are consistent with the proposal that the STN implements a switch signal from the medial frontal cortex which enables a shift from automatic to controlled processing. PMID:25720447

  2. Assessing age-dependent susceptibility to measles in Japan.

    PubMed

    Kinoshita, Ryo; Nishiura, Hiroshi

    2017-06-05

    Routine vaccination against measles in Japan started in 1978. Whereas measles elimination was verified in 2015, multiple chains of measles transmission were observed in 2016. We aimed to reconstruct the age-dependent susceptibility to measles in Japan so that future vaccination strategies can be elucidated. An epidemiological model was used to quantify the age-dependent immune fraction using datasets of vaccination coverage and seroepidemiological survey. The second dose was interpreted in two different scenarios, i.e., booster and random shots. The effective reproduction number, the average number of secondary cases generated by a single infected individual, and the age at infection were explored using the age-dependent transmission model and the next generation matrix. While the herd immunity threshold of measles likely ranges from 90% to 95%, assuming that the basic reproductive number ranges from 10 to 20, the estimated immune fraction in Japan was below those thresholds in 2016, despite the fact that the estimates were above 80% for all ages. If the second dose completely acted as the booster shot, a proportion immune above 90% was achieved only among those aged 5 years or below in 2016. Alternatively, if the second dose was randomly distributed regardless of primary vaccination status, a proportion immune over 90% was achieved among those aged below 25 years. The effective reproduction number was estimated to range from 1.50 to 3.01 and from 1.50 to 3.00, respectively, for scenarios 1 and 2 in 2016; if the current vaccination schedule were continued, the reproduction number is projected to range from 1.50 to 3.01 and 1.39 to 2.78, respectively, in 2025. Japan continues to be prone to imported cases of measles. Supplementary vaccination among adults aged 20-49 years would be effective if the chains of transmission continue to be observed in that age group. Copyright © 2017 Elsevier Ltd. All rights reserved.
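
    In the next-generation-matrix formalism used above, the effective reproduction number is the dominant eigenvalue of the transmission matrix after each age group is discounted by its immune fraction. The sketch below shows that calculation on made-up numbers; the matrix and immune fractions are assumptions, not the study's estimates.

      import numpy as np

      def effective_reproduction_number(next_gen_matrix, immune_fraction):
          """Dominant eigenvalue of the next-generation matrix after discounting
          each age group by its immune fraction."""
          susceptible = 1.0 - np.asarray(immune_fraction)
          scaled = np.diag(susceptible) @ next_gen_matrix
          return np.max(np.abs(np.linalg.eigvals(scaled)))

      # Illustrative 3-age-group example (children, young adults, older adults).
      K = np.array([[8.0, 3.0, 1.0],      # assumed next-generation matrix,
                    [3.0, 5.0, 2.0],      # scaled so that its dominant
                    [1.0, 2.0, 3.0]])     # eigenvalue plays the role of R0
      immune = [0.95, 0.85, 0.90]         # assumed age-specific immune fractions
      print("R_eff =", round(effective_reproduction_number(K, immune), 2))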

  3. Semantic, perceptual and number space: relations between category width and spatial processing.

    PubMed

    Brugger, Peter; Loetscher, Tobias; Graves, Roger E; Knoch, Daria

    2007-05-17

    Coarse semantic encoding and broad categorization behavior are the hallmarks of the right cerebral hemisphere's contribution to language processing. We correlated 40 healthy subjects' breadth of categorization as assessed with Pettigrew's category width scale with lateral asymmetries in perceptual and representational space. Specifically, we hypothesized broader category width to be associated with larger leftward spatial biases. For the 20 men, but not the 20 women, this hypothesis was confirmed both in a lateralized tachistoscopic task with chimeric faces and a random digit generation task; the higher a male participant's score on category width, the more pronounced were his left-visual field bias in the judgement of chimeric faces and his small-number preference in digit generation ("small" is to the left of "large" in number space). Subjects' category width was unrelated to lateral displacements in a blindfolded tactile-motor rod centering task. These findings indicate that visual-spatial functions of the right hemisphere should not be considered independent of the same hemisphere's contribution to language. Linguistic and spatial cognition may be more tightly interwoven than is currently assumed.

  4. Driven Boson Sampling.

    PubMed

    Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine

    2017-01-13

    Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.

  5. Linear Optical Quantum Metrology with Single Photons: Exploiting Spontaneously Generated Entanglement to Beat the Shot-Noise Limit

    NASA Astrophysics Data System (ADS)

    Motes, Keith R.; Olson, Jonathan P.; Rabeaux, Evan J.; Dowling, Jonathan P.; Olson, S. Jay; Rohde, Peter P.

    2015-05-01

    Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place—typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer—fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection—is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

  6. Extended-area nanostructuring of TiO2 with femtosecond laser pulses at 400 nm using a line focus.

    PubMed

    Das, Susanta Kumar; Dasari, Kiran; Rosenfeld, Arkadi; Grunwald, Ruediger

    2010-04-16

    An efficient way to generate nanoscale laser-induced periodic surface structures (LIPSS) in rutile-type TiO(2) with frequency-converted femtosecond laser pulses at wavelengths around 400 nm is reported. Extended-area structuring on fixed and moving substrates was obtained by exploiting the line focus of a cylindrical lens. Under defined conditions with respect to pulse number, pulse energy and scanning velocity, two types of ripple-like LIPSS with high and low spatial frequencies (HSFL, LSFL) with periods in the range of 90 nm and 340 nm, respectively, were formed. In particular, lower numbers of high energetic pulses favour the generation of LSFL whereas higher numbers of lower energetic pulses enable the preferential creation of HSFL. Theoretical calculations on the basis of the Drude model support the assumption that refractive index changes by photo-excited carriers are a major mechanism responsible for LSFL. Furthermore, the appearance of random substructures as small as 30 nm superimposing low spatial frequency ripples is demonstrated and their possible origin is discussed.

  7. Linear optical quantum metrology with single photons: exploiting spontaneously generated entanglement to beat the shot-noise limit.

    PubMed

    Motes, Keith R; Olson, Jonathan P; Rabeaux, Evan J; Dowling, Jonathan P; Olson, S Jay; Rohde, Peter P

    2015-05-01

    Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place--typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer--fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection--is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

  8. A model for bacterial colonization of sinking aggregates.

    PubMed

    Bearon, R N

    2007-01-01

    Sinking aggregates provide important nutrient-rich environments for marine bacteria. Quantifying the rate at which motile bacteria colonize such aggregations is important in understanding the microbial loop in the pelagic food web. In this paper, a simple analytical model is presented to predict the rate at which bacteria undergoing a random walk encounter a sinking aggregate. The model incorporates the flow field generated by the sinking aggregate, the swimming behavior of the bacteria, and the interaction of the flow with the swimming behavior. An expression for the encounter rate is computed in the limit of large Péclet number when the random walk can be approximated by a diffusion process. Comparison with an individual-based numerical simulation is also given.

  9. Robust PRNG based on homogeneously distributed chaotic dynamics

    NASA Astrophysics Data System (ADS)

    Garasym, Oleg; Lozi, René; Taralova, Ina

    2016-02-01

    This paper is devoted to the design of a new chaotic Pseudo Random Number Generator (CPRNG). Exploring several network topologies of coupled 1-D chaotic maps, we focus first on two-dimensional networks. Two topologically coupled maps are studied: TTL rc non-alternate and TTL SC alternate. The primary idea of the novel maps is an original coupling of the tent and logistic maps designed to achieve excellent random properties and a homogeneous (uniform) density in the phase plane, thus guaranteeing maximum security when used for chaos-based cryptography. To this aim, two new nonlinear CPRNGs, MTTL 2 sc and NTTL 2, are proposed. The maps successfully passed numerous statistical, graphical and numerical tests, thanks to the proposed ring coupling and injection mechanisms.
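
    The exact TTL maps of the paper are not reproduced here, but the general idea of ring-coupling a tent map and a logistic map and injecting each state into the other can be sketched as below; the coupling constant, the injection rule and the output are illustrative assumptions only.

      def tent(x):
          return 2 * x if x < 0.5 else 2 * (1 - x)

      def logistic(x):
          return 4 * x * (1 - x)

      def cprng(x, y, n, eps=0.35):
          """Two ring-coupled 1-D chaotic maps; one output sample per iteration.
          The cross-injection of each state into the other map spreads the iterates
          over the phase plane (illustrative coupling, not the paper's TTL maps)."""
          out = []
          for _ in range(n):
              x, y = ((1 - eps) * tent(x) + eps * logistic(y)) % 1.0, \
                     ((1 - eps) * logistic(y) + eps * tent(x)) % 1.0
              out.append(x)
          return out

      print(cprng(0.123, 0.456, 5))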

  10. The conflict between randomized clinical trials and the therapeutic obligation.

    PubMed

    Gifford, F

    1986-11-01

    The central dilemma concerning randomized clinical trials (RCTs) arises out of some simple facts about causal methodology (RCTs are the best way to generate the reliable causal knowledge necessary for optimally-informed action) and a prima facie plausible principle concerning how physicians should treat their patients (always do what it is most reasonable to believe will be best for the patient). A number of arguments related to this in the literature are considered. Attempts to avoid the dilemma fail. Appeals to informed consent and mechanisms for minimizing the resulting harm are important for policy, but informed consent is problematic and mechanisms for minimization of harm do not address the dilemma. Appeals to some sort of contract model of justification are promising and illuminating.

  11. Parts on Demand: Evaluation of Approaches to Achieve Flexible Manufacturing Systems for Navy Partson Demand. Volume 1

    DTIC Science & Technology

    1984-02-01

    measurable impact if changed. The following items were included in the sample: * Mark Zero Items - Low demand insurance items which represent about three...R&D efforts reviewed. The resulting assessment highlighted the generic enabling technologies and cross-cutting R&D projects required to focus current...supplied by spot buys, and which may generate Navy Inventory Control Numbers (NICN). Random samples of data were extracted from the Master Data File (MDF

  12. Hard x ray imaging graphics development and literature search

    NASA Technical Reports Server (NTRS)

    Emslie, A. Gordon

    1991-01-01

    This report presents work performed between June 1990 and June 1991 and has the following objectives: (1) a comprehensive literature search of imaging technology and coded aperture imaging as well as relevant topics relating to solar flares; (2) an analysis of random number generators; and (3) programming simulation models of hard x ray telescopes. All programs are compatible with the NASA/MSFC Space Science Laboratory VAX Cluster and are written in VAX FORTRAN and VAX IDL (Interactive Data Language).

  13. A Human-Automation Interface Model to Guide Automation of System Functions: A Way to Achieve Manning Goals in New Systems

    DTIC Science & Technology

    2006-06-01

    levels of automation applied as per Figure 13. ...models generated for this thesis were set to run for 60 minutes. To run the simulation for the set time, the analyst provides a random number seed to...1984). The IMPRINT workload value of 60 has been used by a consensus of workload modeling SMEs to represent the ‘high’ threshold, while the

  14. Chemical reaction networks as a model to describe UVC- and radiolytically-induced reactions of simple compounds.

    PubMed

    Dondi, Daniele; Merli, Daniele; Albini, Angelo; Zeffiro, Alberto; Serpone, Nick

    2012-05-01

    When a chemical system is submitted to high energy sources (UV, ionizing radiation, plasma sparks, etc.), as is expected to be the case of prebiotic chemistry studies, a plethora of reactive intermediates could form. If oxygen is present in excess, carbon dioxide and water are the major products. More interesting is the case of reducing conditions where synthetic pathways are also possible. This article examines the theoretical modeling of such systems with random-generated chemical networks. Four types of random-generated chemical networks were considered that originated from a combination of two connection topologies (viz., Poisson and scale-free) with reversible and irreversible chemical reactions. The results were analyzed taking into account the number of the most abundant products required for reaching 50% of the total number of moles of compounds at equilibrium, as this may be related to an actual problem of complex mixture analysis. The model accounts for multi-component reaction systems with no a priori knowledge of reacting species and the intermediates involved if system components are sufficiently interconnected. The approach taken is relevant to an earlier study on reactions that may have occurred in prebiotic systems where only a few compounds were detected. A validation of the model was attained on the basis of results of UVC and radiolytic reactions of prebiotic mixtures of low molecular weight compounds likely present on the primeval Earth.

  15. Association between severe dorsolateral prefrontal dysfunction during random number generation and earlier onset in schizophrenia.

    PubMed

    Koike, Shinsuke; Takizawa, Ryu; Nishimura, Yukika; Marumo, Kohei; Kinou, Masaru; Kawakubo, Yuki; Rogers, Mark A; Kasai, Kiyoto

    2011-08-01

    Schizophrenia involves impairment in attention, working memory and executive processes associated with prefrontal cortical function, an essential contributor of social functioning. Age at onset is a major factor for predicting social outcome in schizophrenia. In clinical settings, we need an objective assessment tool for evaluating prefrontal function and social outcome. Participants included 22 right-handed patients with schizophrenia and 40 gender- and age-matched healthy controls. We used a 52-channel near-infrared spectroscopy (NIRS) instrument to measure oxygenated haemoglobin ([oxy-Hb]) changes over the prefrontal cortex during a random number generation (RNG) task. In healthy controls, we found significant [oxy-Hb] increase in the bilateral dorsolateral (DLPFC; BA9 and BA46) and ventrolateral prefrontal cortex (VLPFC; BA44, 45 and 47). The patients with schizophrenia showed significantly smaller activation than the healthy controls in the same approximate regions. In the patient group, a smaller [oxy-Hb] increase in the right DLPFC region (BA9) was significantly correlated with earlier age at onset. NIRS can detect prefrontal cortical dysfunction associated with an executive task, which was coupled with earlier age at onset in schizophrenia. Multichannel NIRS, a non-invasive and user-friendly instrument, may be useful in evaluating cognitive function and social outcome in clinical settings in psychiatry. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Terahertz imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Chan, Wai Lam

    Most existing terahertz imaging systems are generally limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to the powerful CS algorithms which enable the reconstruction of N-by-N pixel images with much fewer than N^2 measurements. The first CS Fourier imaging approach successfully reconstructs a 64x64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels which defines the image in the Fourier plane. Only about 12% of the pixels are required for reassembling the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition with a reduced number of measurements, the single-pixel system can further cut down acquisition time by electrical or optical spatial modulation of random patterns. In order to switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate, and is independently controlled by applying an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency. The second-generation spatial terahertz modulator, also based on metamaterials with a higher resolution (32x32), is under development. An FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
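
    As a generic illustration of the single-pixel idea (random masks, far fewer measurements than pixels, sparse reconstruction), the Python sketch below recovers a sparse image from random mask measurements with a plain ISTA iteration. It assumes sparsity directly in the pixel basis and has nothing to do with the metamaterial hardware described in the thesis.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 32 * 32                      # number of image pixels
      m = 300                          # number of single-pixel measurements (m << n)

      # Sparse test image: a handful of bright pixels on a dark background.
      x_true = np.zeros(n)
      x_true[rng.choice(n, size=15, replace=False)] = 1.0

      # Random +/-1 mask patterns (in hardware, differencing two complementary
      # 0/1 masks gives the same +/-1 measurements).
      Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
      y = Phi @ x_true

      # ISTA for min 0.5*||y - Phi x||^2 + lam*||x||_1
      L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
      lam = 0.02
      x = np.zeros(n)
      for _ in range(2000):
          g = x + (Phi.T @ (y - Phi @ x)) / L
          x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

      print("relative recovery error:",
            np.linalg.norm(x - x_true) / np.linalg.norm(x_true))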

  17. Complex architecture of primes and natural numbers.

    PubMed

    García-Pérez, Guillermo; Serrano, M Ángeles; Boguñá, Marián

    2014-08-01

    Natural numbers can be divided in two nonoverlapping infinite sets, primes and composites, with composites factorizing into primes. Despite their apparent simplicity, the elucidation of the architecture of natural numbers with primes as building blocks remains elusive. Here, we propose a new approach to decoding the architecture of natural numbers based on complex networks and stochastic processes theory. We introduce a parameter-free non-Markovian dynamical model that naturally generates random primes and their relation with composite numbers with remarkable accuracy. Our model satisfies the prime number theorem as an emerging property and a refined version of Cramér's conjecture about the statistics of gaps between consecutive primes that seems closer to reality than the original Cramér's version. Regarding composites, the model helps us to derive the prime factors counting function, giving the probability of distinct prime factors for any integer. Probabilistic models like ours can help to get deeper insights about primes and the complex architecture of natural numbers.

  18. A new complexity measure for time series analysis and classification

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth

    2013-07-01

    Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition and authorship attribution, etc. Different complexity measures proposed in the literature, like Shannon entropy, Relative entropy, Lempel-Ziv, Kolmogorov and Algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure ETC and define it as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence to a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even with relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
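
    The ETC measure can be prototyped directly from its definition: repeatedly replace the most frequent pair of adjacent symbols with a new symbol and count the passes needed until the sequence becomes constant or of length one. The sketch below is a straightforward reading of that definition, not the authors' reference implementation.

      from collections import Counter

      def etc(seq):
          """Effort To Compress: number of NSRPS iterations needed to turn the
          sequence into a constant (or single-symbol) sequence."""
          seq = list(seq)
          next_symbol = max(seq, default=0) + 1     # fresh symbol not used so far
          steps = 0
          while len(seq) > 1 and len(set(seq)) > 1:
              pairs = Counter(zip(seq, seq[1:]))
              target = pairs.most_common(1)[0][0]   # most frequent adjacent pair
              out, i = [], 0
              while i < len(seq):
                  if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                      out.append(next_symbol)       # substitute the pair
                      i += 2
                  else:
                      out.append(seq[i])
                      i += 1
              seq, next_symbol, steps = out, next_symbol + 1, steps + 1
          return steps

      print(etc([0, 1, 0, 1, 0, 1, 0, 1]))   # highly regular: compresses quickly
      print(etc([0, 1, 1, 0, 0, 0, 1, 0]))   # less regular: needs more iterations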

  19. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  20. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  1. Revisiting sample size: are big trials the answer?

    PubMed

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional to randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  2. Dynamics of vegetative cytoplasm during generative cell formation and pollen maturation in Arabidopsis thaliana

    NASA Technical Reports Server (NTRS)

    Kuang, A.; Musgrave, M. E.

    1996-01-01

    Ultrastructural changes of pollen cytoplasm during generative cell formation and pollen maturation in Arabidopsis thaliana were studied. The pollen cytoplasm develops a complicated ultrastructure and changes dramatically during these stages. Lipid droplets increase after generative cell formation and their organization and distribution change with the developmental stage. Starch grains in amyloplasts increase in number and size during generative and sperm cell formation and decrease at pollen maturity. The shape and membrane system of mitochondria change only slightly. Dictyosomes become very prominent, and numerous associated vesicles are observed during and after sperm cell formation. Endoplasmic reticulum appears extensively as stacks during sperm cell formation. Free and polyribosomes are abundant in the cytoplasm at all developmental stages although they appear denser at certain stages and in some areas. In mature pollen, all organelles are randomly distributed throughout the vegetative cytoplasm and numerous small particles appear. Organization and distribution of storage substances and appearance of these small particles during generative and sperm cell formation and pollen maturation are discussed.

  3. A simple model for the generation of the vestibular evoked myogenic potential (VEMP).

    PubMed

    Wit, Hero P; Kingma, Charlotte M

    2006-06-01

    To describe the mechanism by which the vestibular evoked myogenic potential is generated. Vestibular evoked myogenic potential generation is modeled by adding a large number of muscle motor unit action potentials. These action potentials occur randomly in time along a 100 ms long time axis. However, because (almost) no action potentials are generated between approximately 15 and 20 ms after a loud, short sound stimulus during VEMP measurements in human subjects, no action potentials are present in the model during this interval. The evoked potential is the result of the lack of amplitude cancellation in the averaged surface electromyogram at the edges of this 5 ms long time interval. The relatively simple model describes the generation and some properties of the vestibular evoked myogenic potential very well. It is shown that, in contrast with other evoked potentials (BAEPs, VERs), the vestibular evoked myogenic potential is the result of an interruption of activity and not of summed synchronized neural action potentials.
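
    A minimal numerical sketch of the mechanism described above: sum many identically shaped motor unit action potentials placed at random times over a 100 ms window, leave a silent period between roughly 15 and 20 ms, and average over trials. The biphasic unit waveform, firing rate and sampling rate are assumptions chosen only for illustration.

      import numpy as np

      fs = 10000                                # sampling rate, Hz (assumed)
      t = np.arange(0, 0.100, 1 / fs)           # 100 ms analysis window

      # Assumed biphasic motor unit action potential (MUAP), a few ms long, zero mean.
      tw = np.arange(0, 0.004, 1 / fs)
      muap = np.sin(2 * np.pi * tw / 0.004) * np.exp(-tw / 0.002)
      muap -= muap.mean()

      def trial(rng, rate_hz=3000):
          """Surface EMG for one trial: MUAPs at random times, none in 15-20 ms."""
          times = rng.uniform(0, 0.100, rng.poisson(rate_hz * 0.100))
          times = times[(times < 0.015) | (times > 0.020)]     # the silent period
          emg = np.zeros(t.size + muap.size)
          for ts in times:
              i = int(ts * fs)
              emg[i:i + muap.size] += muap
          return emg[:t.size]

      rng = np.random.default_rng(3)
      average = np.mean([trial(rng) for _ in range(200)], axis=0)
      # Away from the gap the biphasic waveforms cancel in the average; near the
      # 15 ms and 20 ms edges they do not, producing the simulated evoked potential.
      edge = np.abs(average[(t > 0.013) & (t < 0.023)]).max()
      baseline = np.abs(average[(t > 0.040) & (t < 0.090)]).max()
      print("edge deflection / baseline fluctuation:", round(edge / baseline, 1))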

  4. Using the Quantile Mapping to improve a weather generator

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Themessl, M.; Gobiet, A.

    2012-04-01

    We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression and empirical orthogonal functions (EOF). One of the important steps in the WG is QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore work on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve the spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30 years of daily (hourly) data and 100 years of monthly (daily) data as input, 100 years of daily (hourly) data can be produced reasonably well. Some evaluation experiments with WGQM have been carried out for the area of Austria, and the evaluation results will be presented.
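
    The quantile-mapping step described above (transforming an arbitrarily distributed variable to a normally distributed one through its empirical CDF, and mapping generated values back again) can be sketched as follows; the gamma-distributed input standing in for precipitation is an assumption for illustration.

      import numpy as np
      from scipy import stats

      def to_normal(x):
          """Map data of arbitrary distribution to standard normal scores via the
          empirical CDF (average ranks avoid 0/1 probabilities at the extremes)."""
          p = stats.rankdata(x) / (len(x) + 1)
          return stats.norm.ppf(p)

      def from_normal(z, reference):
          """Map normal scores back to the scale of the reference sample."""
          p = stats.norm.cdf(z)
          return np.quantile(reference, p)

      rng = np.random.default_rng(42)
      precip = rng.gamma(shape=2.0, scale=3.0, size=1000)   # skewed "observations"
      z = to_normal(precip)                                 # work in normal space
      print("skewness before/after:",
            round(stats.skew(precip), 2), round(stats.skew(z), 2))

      # A synthetic realization generated in normal space is transformed back:
      z_new = rng.normal(size=1000)
      precip_new = from_normal(z_new, precip)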

  5. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from different portfolios containing a large number of securities. The past record of each security alone does not guarantee its future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are a set of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors about the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-Mean Semi Absolute Deviation (λ-MSAD) model is preferred as it uses the absolute deviation of the rate of returns of a portfolio instead of the variance as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.

  6. Properties of dimension witnesses and their semidefinite programming relaxations

    NASA Astrophysics Data System (ADS)

    Mironowicz, Piotr; Li, Hong-Wei; Pawłowski, Marcin

    2014-08-01

    In this paper we develop a method for investigating semi-device-independent randomness expansion protocols that was introduced in Li et al. [H.-W. Li, P. Mironowicz, M. Pawłowski, Z.-Q. Yin, Y.-C. Wu, S. Wang, W. Chen, H.-G. Hu, G.-C. Guo, and Z.-F. Han, Phys. Rev. A 87, 020302(R) (2013), 10.1103/PhysRevA.87.020302]. This method allows us to lower bound, with semi-definite programming, the randomness obtained from random number generators based on dimension witnesses. We also investigate the robustness of some randomness expanders using this method. We show the role of an assumption about the trace of the measurement operators and a way to avoid it. The method is also generalized to systems of arbitrary dimension and for a more general form of dimension witnesses than in our previous paper. Finally, we introduce a procedure of dimension witness reduction, which can be used to obtain from an existing witness a new one with a higher amount of certifiable randomness. The presented methods find an application for experiments [J. Ahrens, P. Badziag, M. Pawlowski, M. Zukowski, and M. Bourennane, Phys. Rev. Lett. 112, 140401 (2014), 10.1103/PhysRevLett.112.140401].

  7. PLNoise: a package for exact numerical simulation of power-law noises

    NASA Astrophysics Data System (ADS)

    Milotti, Edoardo

    2006-08-01

    Many simulations of stochastic processes require colored noises: here I describe a small program library that generates samples with a tunable power-law spectral density: the algorithm can be modified to generate more general colored noises, and is exact for all time steps, even when they are unevenly spaced (as may often happen in the case of astronomical data, see e.g. [N.R. Lomb, Astrophys. Space Sci. 39 (1976) 447]). The method is exact in the sense that it reproduces a process that is theoretically guaranteed to produce a range-limited power-law spectrum 1/f^β with -1 < β ⩽ 1. The algorithm has a well-behaved computational complexity, it produces a nearly perfect Gaussian noise, and its computational efficiency depends on the required degree of noise Gaussianity. Program summary: Title of program: PLNoise Catalogue identifier: ADXV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXV_v1_0.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Programming language used: ANSI C Computer: Any computer with an ANSI C compiler: the package has been tested with gcc version 3.2.3 on Red Hat Linux 3.2.3-52 and gcc version 4.0.0 and 4.0.1 on Apple Mac OS X-10.4 Operating system: All operating systems capable of running an ANSI C compiler No. of lines in distributed program, including test data, etc.: 6238 No. of bytes in distributed program, including test data, etc.: 52 387 Distribution format: tar.gz RAM: The code of the test program is very compact (about 50 Kbytes), but the program works with list management and allocates memory dynamically; in a typical run (like the one discussed in Section 4 in the long write-up) with average list length 2·10, the RAM taken by the list is 200 Kbytes. External routines: The package needs external routines to generate uniform and exponential deviates. The implementation described here uses the random number generation library ranlib freely available from Netlib [B.W. Brown, J. Lovato, K. Russell, ranlib, available from Netlib, http://www.netlib.org/random/index.html, select the C version ranlib.c], but it has also been successfully tested with the random number routines in Numerical Recipes [W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, second ed., Cambridge Univ. Press, Cambridge, 1992, pp. 274-290]. Notice that ranlib requires a pair of routines from the linear algebra package LINPACK, and that the distribution of ranlib includes the C source of these routines, in case LINPACK is not installed on the target machine. Nature of problem: Exact generation of different types of Gaussian colored noise. Solution method: Random superposition of relaxation processes [E. Milotti, Phys. Rev. E 72 (2005) 056701]. Unusual features: The algorithm is theoretically guaranteed to be exact, and unlike all other existing generators it can generate samples with uneven spacing. Additional comments: The program requires an initialization step; for some parameter sets this may become rather heavy. Running time: Running time varies widely with different input parameters, however in a test run like the one in Section 4 in this work, the generation routine took on average about 7 ms for each sample.

  8. Fungible Correlation Matrices: A Method for Generating Nonsingular, Singular, and Improper Correlation Matrices for Monte Carlo Research.

    PubMed

    Waller, Niels G

    2016-01-01

    For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.

  9. Constructing acoustic timefronts using random matrix theory.

    PubMed

    Hegewisch, Katherine C; Tomsovic, Steven

    2013-10-01

    In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history.

  10. Early stage hot spot analysis through standard cell base random pattern generation

    NASA Astrophysics Data System (ADS)

    Jeon, Joong-Won; Song, Jaewan; Kim, Jeong-Lim; Park, Seongyul; Yang, Seung-Hune; Lee, Sooryong; Kang, Hokyu; Madkour, Kareem; ElManhawy, Wael; Lee, SeungJo; Kwan, Joe

    2017-04-01

    Due to the limited availability of DRC-clean patterns during process and RET recipe development, OPC recipes are not tested with high pattern coverage. Various kinds of patterns can help OPC engineers detect patterns that are sensitive to lithographic effects. Random pattern generation is needed to secure a robust OPC recipe. However, simple random patterns that do not reflect real product layout styles cannot cover the patterning hotspots seen at production level, so it is not effective to use them for OPC optimization; it is therefore important to generate random patterns similar to real product patterns. This paper presents a strategy for generating random patterns based on design architecture information and preventing hotspots in the early process development stage through a tool called Layout Schema Generator (LSG). Using LSG, we generate standard-cell-based random patterns reflecting real design cell structure - fin pitch, gate pitch and cell height. The output standard cells from LSG are fed to an analysis methodology that assesses their hotspot severity by assigning a score according to their optical image parameters - NILS, MEEF and %PV band - so that potential hotspots can be identified by ranking. This flow is demonstrated on Samsung 7nm technology, optimizing the OPC recipe early enough in the process to avoid problematic patterns.

  11. Subrandom methods for multidimensional nonuniform sampling.

    PubMed

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
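
    One common subrandom construction is the additive recurrence with the golden ratio, which needs no seed at all. The sketch below uses it to pick points from an exponentially weighted Nyquist grid as a hypothetical stand-in for the seed-independent schedules discussed above; the weighting, grid size and point count are assumptions.

      import numpy as np

      def subrandom_schedule(grid_size, n_points, decay=2.0):
          """Select grid indices with a golden-ratio additive recurrence
          mapped through an exponential sampling-density weighting."""
          phi = (np.sqrt(5.0) - 1.0) / 2.0                 # 1/golden ratio
          u = (0.5 + phi * np.arange(n_points)) % 1.0      # subrandom numbers in [0, 1)

          # Exponentially decaying weights -> cumulative distribution over the grid.
          w = np.exp(-decay * np.arange(grid_size) / grid_size)
          cdf = np.cumsum(w) / np.sum(w)

          idx = np.searchsorted(cdf, u)                    # map u through the inverse CDF
          return np.unique(idx)                            # drop duplicate grid points

      schedule = subrandom_schedule(grid_size=256, n_points=64)
      print(len(schedule), "points:", schedule[:10], "...")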

  12. Does wastewater discharge have relations with increase of Turner syndrome and Down syndrome?

    PubMed

    Choi, Intae

    2017-01-01

    The purpose of this study is to examine whether water and air pollutants have a relationship with an increase in the genetic disorders Turner syndrome and Down syndrome, which are caused by congenital chromosomal abnormalities, and to generate a hypothesis about the genetic health effects of environmental pollutants. A panel regression based on random effects was conducted on Korea's metropolitan councils from 2012 to 2014. The dependent variable was the number of Turner syndrome and Down syndrome cases, and the main independent variables were those describing water and air pollution. Air pollutants did not have a significant impact on the number of Turner syndrome and Down syndrome cases; however, the increase in the number of wastewater discharge companies did have a significant relationship with the number of cases. The greater the number of wastewater discharge companies, the more Turner syndrome and Down syndrome cases were observed. Therefore, scientific investigation of water and air pollutants in relation to genetic health effects needs to be performed.

  13. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.
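
    A loose sketch (Python; not the authors' algorithm) of how a stream of random bits can drive the two stages named above: a permutation of pixel positions for confusion and an XOR key stream for diffusion. Here numpy's PRNG and a toy image stand in for the physical CCSRL bits and real data.

      import numpy as np

      def encrypt(image, bit_stream):
          """Confusion (pixel permutation) and diffusion (XOR) driven by a bit stream.
          image: 2-D uint8 array; bit_stream: 1-D array of 0/1 values, at least
          8 * image.size + 32 bits long."""
          flat = image.ravel()
          n = flat.size
          key_bytes = np.packbits(bit_stream[:8 * n])               # one key byte per pixel
          perm_seed = int(np.packbits(bit_stream[8 * n:8 * n + 32]).view(np.uint32)[0])
          perm = np.random.default_rng(perm_seed).permutation(n)    # stands in for the control matrix
          confused = flat[perm]                                     # pixel confusion
          cipher = np.bitwise_xor(confused, key_bytes)              # pixel diffusion
          return cipher.reshape(image.shape), perm

      rng = np.random.default_rng(0)                                # pseudo-random stand-in bits
      img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
      bits = rng.integers(0, 2, size=8 * img.size + 32, dtype=np.uint8)
      cipher, perm = encrypt(img, bits)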

  14. Simulating synchronization in neuronal networks

    NASA Astrophysics Data System (ADS)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
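
    For readers reproducing the connectivity side of such experiments, the sketch below (Python with networkx; not the article's code) generates Watts-Strogatz networks at several rewiring probabilities and prints two structural measures commonly linked to synchronization propensity; whether these are the article's own two measures is not stated here.

      import networkx as nx

      for p in [0.0, 0.01, 0.1, 1.0]:                  # rewiring probability
          G = nx.connected_watts_strogatz_graph(n=200, k=8, p=p, seed=1)
          print(f"p={p:<5}  clustering={nx.average_clustering(G):.3f}  "
                f"path length={nx.average_shortest_path_length(G):.2f}")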

  15. Unbiased Combinatorial Genomic Approaches to Identify Alternative Therapeutic Targets within the TSC Signaling Network

    DTIC Science & Technology

    2013-06-01

    number of ways to generate either random mutations or specific alterations to the genome sequence. Unlike previous approaches however, both TALENs and...made to the donor construct will be incorporated into the endogenous genomic sequence (examples in Liu et al., 2012; Zu et al., 2013). One challenge... Drosophila with the CRISPR RNA-guided Cas9 nuclease. Genetics. 2013. Hwang WY, Fu Y, Reyon D, Maeder ML, Tsai SQ, Sander JD, et al. Efficient genome

  16. An Analysis of Two Layers of Encryption to Protect Network Traffic

    DTIC Science & Technology

    2010-06-01

    Published: 06/18/2001 CVSS Severity: 7.5 (HIGH) CVE-2001-1141 Summary: The Pseudo-Random Number Generator (PRNG) in SSLeay and OpenSSL before 0.9.6b allows...x509cert function in KAME Racoon successfully verifies certificates even when OpenSSL validation fails, which could allow remote attackers to...montgomery function in crypto/bn/bn_mont.c in OpenSSL 0.9.8e and earlier does not properly perform Montgomery multiplication, which might allow local users to

  17. IEEE International Symposium Information Theory, held at Santa Monica California, February 9-12, 1981.

    DTIC Science & Technology

    1981-01-01

    Channel and study permutation codes as a special case. Such a code is generated by an initial vector x, a group G of orthogonal n by n matrices, and a...random-access components, is introduced and studied. Under this scheme, the network stations are divided into groups, each of which is assigned a...IEEE INFORMATION THEORY GROUP CO-SPONSORED BY: UNION RADIO SCIENTIFIQUE INTERNATIONALE IEEE Catalog Number 81 CH 1609-7 IT

  18. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications

    DTIC Science & Technology

    2001-05-15

    is based on a calculated test statistic value, which is a function of the data. If the test statistic value is S and the critical value is t, then...5 Defined in The Handbook of Applied Cryptography; A. Menezes, P. Van Oorschot and S. Vanstone; CRC Press, 1997. The first 4...3rd ed. Reading: Addison-Wesley, Inc., pp. 61-80. [4] A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone (1997), Handbook of Applied Cryptography

  19. Genesis of femtosecond-induced nanostructures on solid surfaces.

    PubMed

    Varlamova, Olga; Martens, Christian; Ratzke, Markus; Reif, Juergen

    2014-11-01

    The start and evolution of the formation of laser-induced periodic surface structures (LIPSS, ripples) are investigated. The important role of irradiation dose (fluence×number of pulses) for the properties of the generated structures is demonstrated. It is shown how, with an increasing dose, the structures evolve from random surface modification to regular sub-wavelength ripples, then coalesce to broader LIPSS and finally form more complex shapes when ablation produces deep craters. First experiments are presented following this evolution in one single irradiated spot.

  20. Computer modelling of grain microstructure in three dimensions

    NASA Astrophysics Data System (ADS)

    Narayan, K. Lakshmi

    We present a program that generates two-dimensional micrographs of a three-dimensional grain microstructure. The code utilizes a novel scanning, pixel-mapping technique to obtain statistical distributions of surface areas, grain sizes, aspect ratios, perimeters, numbers of nearest neighbors and volumes of the randomly nucleated particles. The program can be used for comparing existing theories of grain growth and for interpreting the two-dimensional microstructure of three-dimensional samples. Special features have been included to minimize the computation time and resource requirements.

  1. Randomization in clinical trials in orthodontics: its significance in research design and methods to achieve it.

    PubMed

    Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore

    2011-12-01

    Randomization is a key step in reducing selection bias during the treatment allocation phase of randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation, allocation concealment, and the actual methodology for implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology and presents the main randomization schemes applicable to clinical trials in orthodontics.
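
    As a concrete example of restricted randomization, the sketch below (Python; illustrative only, not taken from the article) generates a permuted-block randomization list; in practice such a list is prepared by an independent party so that allocation concealment is preserved.

      import random

      def permuted_block_list(n_patients, arms=("A", "B"), block_size=4, seed=2024):
          """Generate a restricted (permuted-block) randomization list."""
          rng = random.Random(seed)
          per_block = block_size // len(arms)
          schedule = []
          while len(schedule) < n_patients:
              block = list(arms) * per_block               # balanced block, e.g. A A B B
              rng.shuffle(block)                           # random order within the block
              schedule.extend(block)
          return schedule[:n_patients]

      print(permuted_block_list(10))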

  2. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm

    PubMed Central

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built by taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic, and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228
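
    For comparison, the plain RANSAC sampling loop that the paper modifies is sketched below for the simplest case of a 2-D line (Python; illustrative only, the paper's application is homography estimation, and its contribution is replacing the uniform random draw with harmony-search guided sampling).

      import numpy as np

      def ransac_line(points, n_iter=500, inlier_tol=0.05, seed=0):
          """Classic RANSAC loop with uniform random sampling; points is an (N, 2) array."""
          rng = np.random.default_rng(seed)
          best_model, best_inliers = None, 0
          for _ in range(n_iter):
              i, j = rng.choice(len(points), size=2, replace=False)   # minimal sample
              (x1, y1), (x2, y2) = points[i], points[j]
              a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2           # line a*x + b*y + c = 0
              norm = np.hypot(a, b)
              if norm == 0:
                  continue
              dist = np.abs(a * points[:, 0] + b * points[:, 1] + c) / norm
              inliers = int(np.count_nonzero(dist < inlier_tol))      # consensus size
              if inliers > best_inliers:
                  best_model, best_inliers = (a / norm, b / norm, c / norm), inliers
          return best_model, best_inliers

      line = np.column_stack([np.linspace(0, 1, 50), 0.5 * np.linspace(0, 1, 50)])
      outliers = np.random.default_rng(1).random((20, 2))
      print(ransac_line(np.vstack([line, outliers])))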

  3. Fels-Rand: an Xlisp-Stat program for the comparative analysis of data under phylogenetic uncertainty.

    PubMed

    Blomberg, S

    2000-11-01

    Currently available programs for the comparative analysis of phylogenetic data do not perform optimally when the phylogeny is not completely specified (i.e. the phylogeny contains polytomies). Recent literature suggests that a better way to analyse the data would be to create random trees from the known phylogeny that are fully-resolved but consistent with the known tree. A computer program is presented, Fels-Rand, that performs such analyses. A randomisation procedure is used to generate trees that are fully resolved but whose structure is consistent with the original tree. Statistics are then calculated on a large number of these randomly-generated trees. Fels-Rand uses the object-oriented features of Xlisp-Stat to manipulate internal tree representations. Xlisp-Stat's dynamic graphing features are used to provide heuristic tools to aid in analysis, particularly outlier analysis. The usefulness of Xlisp-Stat as a system for phylogenetic computation is discussed. Available from the author or at http://www.uq.edu.au/~ansblomb/Fels-Rand.sit.hqx. Xlisp-Stat is available from http://stat.umn.edu/~luke/xls/xlsinfo/xlsinfo.html. s.blomberg@abdn.ac.uk

  4. Error free physically unclonable function with programmed resistive random access memory using reliable resistance states by specific identification-generation method

    NASA Astrophysics Data System (ADS)

    Tseng, Po-Hao; Hsu, Kai-Chieh; Lin, Yu-Yu; Lee, Feng-Min; Lee, Ming-Hsiu; Lung, Hsiang-Lan; Hsieh, Kuang-Yeu; Chung Wang, Keh; Lu, Chih-Yuan

    2018-04-01

    A high-performance physically unclonable function (PUF) implemented with WO3 resistive random access memory (ReRAM) is presented in this paper. This robust ReRAM-PUF can eliminate the bit-flipping problem at very high temperatures (up to 250 °C) thanks to the plentiful read margin obtained by using the initial resistance state and the set resistance state. Ten-year retention is also promised at 210 °C. These two stable resistance states enable stable operation in automotive environments from -40 to 125 °C without the need for a temperature compensation circuit. High PUF uniqueness can be achieved by implementing the proposed identification (ID)-generation method. An optimized forming condition moves 50% of the cells to the low resistance state while the remaining 50% remain in the initial high resistance state. Inter- and intra-PUF evaluations with unlimited separation of the Hamming distance (HD) are successfully demonstrated even under corner conditions. The number of reproductions was measured to exceed 10^7 times with 0% bit error rate (BER) at read voltages from 0.4 to 0.7 V.

  5. Understanding spatial connectivity of individuals with non-uniform population density.

    PubMed

    Wang, Pu; González, Marta C

    2009-08-28

    We construct a two-dimensional geometric graph connecting individuals placed in space within a given contact distance. The individuals are distributed using a measured country's density of population. We observe that while large clusters (group of individuals connected) emerge within some regions, they are trapped in detached urban areas owing to the low population density of the regions bordering them. To understand the emergence of a giant cluster that connects the entire population, we compare the empirical geometric graph with the one generated by placing the same number of individuals randomly in space. We find that, for small contact distances, the empirical distribution of population dominates the growth of connected components, but no critical percolation transition is observed in contrast to the graph generated by a random distribution of population. Our results show that contact distances from real-world situations as for WIFI and Bluetooth connections drop in a zone where a fully connected cluster is not observed, hinting that human mobility must play a crucial role in contact-based diseases and wireless viruses' large-scale spreading.
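
    The core construction can be reproduced in a few lines (Python; a toy sketch with synthetic point sets, not the measured population data used in the study): build the geometric graph by linking points closer than the contact distance and measure the largest cluster for a uniform versus a clustered placement.

      import numpy as np
      from scipy.spatial import cKDTree

      def giant_component_fraction(positions, contact_distance):
          """Fraction of points in the largest cluster of the geometric graph that
          links any two points closer than contact_distance (union-find)."""
          n = len(positions)
          parent = np.arange(n)

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]            # path halving
                  i = parent[i]
              return i

          for i, j in cKDTree(positions).query_pairs(r=contact_distance):
              ri, rj = find(i), find(j)
              if ri != rj:
                  parent[ri] = rj
          labels = np.array([find(i) for i in range(n)])
          return np.bincount(labels).max() / n

      rng = np.random.default_rng(3)
      uniform = rng.random((2000, 2))                      # random (uniform) placement
      clustered = rng.normal(0.5, 0.08, size=(2000, 2))    # crude stand-in for urban clustering
      for name, pts in [("uniform", uniform), ("clustered", clustered)]:
          print(name, giant_component_fraction(pts, contact_distance=0.02))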

  6. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes commonly in the form of random nucleotides were recently introduced to improve gene expression measures by detecting amplification duplicates, but are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification especially at low input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We described the first study to use transposable molecular barcodes and its use for studying random-mer molecular barcode errors. Extensive errors found in random-mer molecular barcodes may warrant the use of error correcting barcodes for transcriptome analysis as input amounts decrease.

  7. Wire array Z-pinch insights for enhanced x-ray production

    NASA Astrophysics Data System (ADS)

    Sanford, T. W. L.; Mock, R. C.; Spielman, R. B.; Haines, M. G.; Chittenden, J. P.; Whitney, K. G.; Apruzese, J. P.; Peterson, D. L.; Greenly, J. B.; Sinars, D. B.; Reisman, D. B.; Mosher, D.

    1999-05-01

    Comparisons of measured total radiated x-ray power from annular wire-array z-pinches with a variety of models as a function of wire number, array mass, and load radius are reviewed. The data, which are comprehensive, have provided important insights into the features of wire-array dynamics that are critical for high x-ray power generation. Collectively, the comparisons of the data with the model calculations suggest that a number of underlying dynamical mechanisms involving cylindrical asymmetries and plasma instabilities contribute to the measured characteristics. For example, under the general assumption that the measured risetime of the total-radiated-power pulse is related to the thickness of the plasma shell formed on axis, the Heuristic Model [IEEE Trans. Plasma Sci. 26, 1275 (1998)] agrees with the measured risetime under a number of specific assumptions about the way the breakdown of the wires, the wire-plasma expansion, and the Rayleigh-Taylor instability in the r-z plane, develop. Likewise, in the high wire-number regime (where the wires are calculated to form a plasma shell prior to significant radial motion of the shell) the comparisons show that the variation in the power of the radiation generated as a function of load mass and array radius can be simulated by the two-dimensional Eulerian-radiation- magnetohydrodynamics code (E-RMHC) [Phys. Plasmas 3, 368 (1996)], using a single random-density perturbation that seeds the Rayleigh-Taylor instability in the r-z plane. For a given pulse-power generator, the comparisons suggest that (1) the smallest interwire gaps compatible with practical load construction and (2) the minimum implosion time consistent with the optimum required energy coupling of the generator to the load should produce the highest total-radiated-power levels.

  8. Scaling properties of the aerodynamic noise generated by low-speed fans

    NASA Astrophysics Data System (ADS)

    Canepa, Edward; Cattanei, Andrea; Mazzocut Zecchin, Fabio

    2017-11-01

    The spectral decomposition algorithm presented in the paper may be applied to selected parts of the SPL spectrum, i.e. to specific noise generating mechanisms. It yields the propagation and the generation functions, and indeed the Mach number scaling exponent associated with each mechanism as a function of the Strouhal number. The input data are SPL spectra obtained from measurements taken during speed ramps. Firstly, the basic theory and the implemented algorithm are described. Then, the behaviour of the new method is analysed with reference to numerically generated spectral data and the results are compared with the ones of an existing method based on the assumption that the scaling exponent is constant. Guidelines for the employment of both methods are provided. Finally, the method is applied to measurements taken on a cooling fan mounted on a test plenum designed following the ISO 10302 standards. The most common noise generating mechanisms are present and attention is focused on the low-frequency part of the spectrum, where the mechanisms are superposed. Generally, both propagation and generation functions are determined with better accuracy than the scaling exponent, whose values are usually consistent with expectations based on coherence and compactness of the acoustic sources. For periodic noise, the computed exponent is less accurate, as the related SPL data set has usually a limited size. The scaling exponent is very sensitive to the details of the experimental data, e.g. to slight inconsistencies or random errors.

  9. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map, using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we introduce a delay in the generation of the time series. When these new series are plotted as x_n against x_(n+1), they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
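
    A minimal sketch of the lag idea (Python; not the authors' construction, which combines two series and positive/negative bifurcation parameters): iterate the logistic map, discard a transient, and emit one bit per sample by comparing it with a lagged sample, so that plotting consecutive outputs no longer reveals the parabola of the map.

      import numpy as np

      def lag_logistic_bits(n_bits, r=3.99, lag=7, x0=0.371, transient=1000):
          """Emit one bit per iterate by comparing each sample with a lagged one."""
          n_iter = n_bits + lag + transient
          x = np.empty(n_iter)
          x[0] = x0
          for i in range(1, n_iter):
              x[i] = r * x[i - 1] * (1.0 - x[i - 1])       # logistic map
          x = x[transient:]                                # discard the transient
          return (x[lag:] > x[:-lag]).astype(np.uint8)[:n_bits]

      print(lag_logistic_bits(32))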

  10. A stochastical event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
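
    The first, event-scale step of such a generator can be sketched as follows (Python; illustrative parameter values, not fitted to the stations used in the study): exponentially distributed inter-storm durations alternate with rectangular events whose duration and mean intensity are also drawn from exponential distributions, and each event is afterwards handed to the cascade disaggregation step.

      import numpy as np

      def poisson_rectangular_pulses(n_events, mean_dry_h=30.0, mean_dur_h=6.0,
                                     mean_int_mmh=1.5, seed=11):
          """Alternate exponential dry spells with rectangular rain events whose
          duration and mean intensity are also exponentially distributed."""
          rng = np.random.default_rng(seed)
          t, events = 0.0, []
          for _ in range(n_events):
              t += rng.exponential(mean_dry_h)             # inter-storm period
              duration = rng.exponential(mean_dur_h)       # event duration
              intensity = rng.exponential(mean_int_mmh)    # mean event intensity
              events.append((t, duration, intensity))      # each event is later disaggregated
              t += duration                                # by the cascade step
          return events

      for start, dur, inten in poisson_rectangular_pulses(3):
          print(f"start={start:7.1f} h  duration={dur:5.1f} h  intensity={inten:4.2f} mm/h")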

  11. Methodological reporting quality of randomized controlled trials: A survey of seven core journals of orthopaedics from Mainland China over 5 years following the CONSORT statement.

    PubMed

    Zhang, J; Chen, X; Zhu, Q; Cui, J; Cao, L; Su, J

    2016-11-01

    In recent years, the number of randomized controlled trials (RCTs) in the field of orthopaedics has been increasing in Mainland China. However, RCTs are prone to bias if they lack methodological quality. Therefore, we performed a survey of RCTs to assess: (1) the quality of RCTs in the field of orthopaedics in Mainland China; and (2) whether there is a difference between the core Chinese orthopaedic journals and Orthopaedics Traumatology Surgery & Research (OTSR). This research aimed to evaluate the methodological reporting quality, according to the CONSORT statement, of RCTs in seven key orthopaedic journals published in Mainland China over the 5 years from 2010 to 2014. All of the articles were hand searched in the Chongqing VIP database between 2010 and 2014. Studies were considered eligible if the words "random", "randomly", "randomization", or "randomized" were employed to describe the allocation method. Trials including animals or cadavers, trials published as abstracts or case reports, trials dealing with subgroup analyses, and trials without outcomes were excluded. In addition, eight articles selected from Orthopaedics Traumatology Surgery & Research (OTSR) between 2010 and 2014 were included in this study for comparison. The identified RCTs were analyzed using a modified version of the Consolidated Standards of Reporting Trials (CONSORT) statement, covering sample size calculation, allocation sequence generation, allocation concealment, blinding, and handling of dropouts. A total of 222 RCTs were identified in the seven core orthopaedic journals. No trials reported adequate sample size calculation, 74 (33.4%) reported adequate allocation generation, 8 (3.7%) reported adequate allocation concealment, 18 (8.1%) reported adequate blinding and 16 (7.2%) reported handling of dropouts. In OTSR, 1 (12.5%) trial reported adequate sample size calculation, 4 (50.0%) reported adequate allocation generation, 1 (12.5%) reported adequate allocation concealment, 2 (25.0%) reported adequate blinding and 5 (62.5%) reported handling of dropouts. There were statistically significant differences in sample size calculation and handling of dropouts between papers from Mainland China and OTSR (P<0.05). The findings of this study show that the methodological reporting quality of RCTs in the seven core orthopaedic journals from Mainland China is far from satisfactory and needs further improvement to keep up with the standards of the CONSORT statement. Level III, case-control. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  12. Multiple ECG Fiducial Points-Based Random Binary Sequence Generation for Securing Wireless Body Area Networks.

    PubMed

    Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif

    2017-05-01

    Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated, in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival times of fiducial points, such as the P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated from these arrival times and used as ECG features to generate random BSes with low latency. According to our analysis of real ECG data, these ECG feature values exhibit the property of randomness and thus can be utilized to generate random BSes. Compared with schemes that rely solely on IPIs to generate BSes, the MFBSG algorithm uses five feature values from one heartbeat cycle and can be up to five times faster than the solely IPI-based methods, so it achieves the design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
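
    A toy sketch of the interval-to-bits step (Python; hypothetical quantisation, not the published MFBSG procedure): each fiducial interval is quantised, Gray-coded, and its low-order bits concatenated, so a single beat with five features yields several times more bits than an IPI-only scheme.

      import numpy as np

      def intervals_to_bits(feature_intervals_ms, bits_per_feature=4):
          """Quantise each fiducial interval, Gray-code it, and concatenate the bits."""
          out = []
          for value in np.asarray(feature_intervals_ms).ravel():
              q = int(np.rint(value)) % (1 << bits_per_feature)
              gray = q ^ (q >> 1)                          # Gray coding improves bit balance
              out.extend((gray >> k) & 1 for k in reversed(range(bits_per_feature)))
          return out

      # one beat: hypothetical RR, RQ, RS, RP, RT intervals in milliseconds
      print(intervals_to_bits([812.4, 47.1, 38.6, 121.9, 305.3]))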

  13. Generating and controlling homogeneous air turbulence using random jet arrays

    NASA Astrophysics Data System (ADS)

    Carter, Douglas; Petersen, Alec; Amili, Omid; Coletti, Filippo

    2016-12-01

    The use of random jet arrays, already employed in water tank facilities to generate zero-mean-flow homogeneous turbulence, is extended to air as a working fluid. A novel facility is introduced that uses two facing arrays of individually controlled jets (256 in total) to force steady homogeneous turbulence with negligible mean flow, shear, and strain. Quasi-synthetic jet pumps are created by expanding pressurized air through small straight nozzles and are actuated by fast-response low-voltage solenoid valves. Velocity fields, two-point correlations, energy spectra, and second-order structure functions are obtained from 2D PIV and are used to characterize the turbulence from the integral-to-the Kolmogorov scales. Several metrics are defined to quantify how well zero-mean-flow homogeneous turbulence is approximated for a wide range of forcing and geometric parameters. With increasing jet firing time duration, both the velocity fluctuations and the integral length scales are augmented and therefore the Reynolds number is increased. We reach a Taylor-microscale Reynolds number of 470, a large-scale Reynolds number of 74,000, and an integral-to-Kolmogorov length scale ratio of 680. The volume of the present homogeneous turbulence, the largest reported to date in a zero-mean-flow facility, is much larger than the integral length scale, allowing for the natural development of the energy cascade. The turbulence is found to be anisotropic irrespective of the distance between the jet arrays. Fine grids placed in front of the jets are effective at modulating the turbulence, reducing both velocity fluctuations and integral scales. Varying the jet-to-jet spacing within each array has no effect on the integral length scale, suggesting that this is dictated by the length scale of the jets.

  14. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  15. Random Item Generation Is Affected by Age

    ERIC Educational Resources Information Center

    Multani, Namita; Rudzicz, Frank; Wong, Wing Yiu Stephanie; Namasivayam, Aravind Kumar; van Lieshout, Pascal

    2016-01-01

    Purpose: Random item generation (RIG) involves central executive functioning. Measuring aspects of random sequences can therefore provide a simple method to complement other tools for cognitive assessment. We examine the extent to which RIG relates to specific measures of cognitive function, and whether those measures can be estimated using RIG…

  16. Simulation using computer-piloted point excitations of vibrations induced on a structure by an acoustic environment

    NASA Astrophysics Data System (ADS)

    Monteil, P.

    1981-11-01

    Computation of the overall levels and spectral densities of the responses measured on a launcher skin, the fairing for instance, merged into a random acoustic environment during take off, was studied. The analysis of transmission of these vibrations to the payload required the simulation of these responses by a shaker control system, using a small number of distributed shakers. Results show that this closed loop computerized digital system allows the acquisition of auto and cross spectral densities equal to those of the responses previously computed. However, wider application is sought, e.g., road and runway profiles. The problems of multiple input-output system identification, multiple true random signal generation, and real time programming are evoked. The system should allow for the control of four shakers.

  17. Randomized Trial of the Effect of Four Second-Generation Antipsychotics and One First-Generation Antipsychotic on Cigarette Smoking, Alcohol, and Drug Use in Chronic Schizophrenia.

    PubMed

    Mohamed, Somaia; Rosenheck, Robert A; Lin, Haiqun; Swartz, Marvin; McEvoy, Joseph; Stroup, Scott

    2015-07-01

    No large-scale randomized trial has compared the effect of different second-generation antipsychotic drugs and any first-generation drug on alcohol, drug and nicotine use in patients with schizophrenia. The Clinical Antipsychotic Trials of Intervention Effectiveness study randomly assigned 1432 patients formally diagnosed with schizophrenia to four second-generation antipsychotic drugs (olanzapine, risperidone, quetiapine, and ziprasidone) and one first-generation antipsychotic (perphenazine) and followed them for up to 18 months. Secondary outcome data documented cigarettes smoked in the past week and alcohol and drug use severity ratings. At baseline, 61% of patients smoked, 35% used alcohol, and 23% used illicit drugs. Although there were significant effects of time showing reduction in substance use over the 18 months (all p < 0.0001), this study found no evidence that any antipsychotic was robustly superior to any other in a secondary analysis of data on substance use outcomes from a large 18-month randomized schizophrenia trial.

  18. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
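
    For orientation, a much simpler packing scheme than the thesis' growth-based algorithm is sketched below (Python; random sequential addition with rejection of overlaps); it illustrates only the non-overlap constraint and the packing-fraction bookkeeping, not the polydisperse industrial-propellant samples.

      import numpy as np

      def random_sequential_packing(n_spheres, radius, box=1.0, max_tries=100000, seed=5):
          """Random sequential addition of equal spheres with rejection of overlaps."""
          rng = np.random.default_rng(seed)
          centers, tries = [], 0
          while len(centers) < n_spheres and tries < max_tries:
              tries += 1
              c = rng.uniform(radius, box - radius, size=3)          # candidate centre
              if all(np.linalg.norm(c - p) >= 2 * radius for p in centers):
                  centers.append(c)                                  # accepted: no overlap
          packing_fraction = len(centers) * (4.0 / 3.0) * np.pi * radius**3 / box**3
          return np.array(centers), packing_fraction

      centers, phi = random_sequential_packing(n_spheres=300, radius=0.05)
      print(len(centers), round(phi, 3))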

  19. Role of turbulence fluctuations on uncertainties of acoutic Doppler current profiler discharge measurements

    USGS Publications Warehouse

    Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin

    2012-01-01

    This work presents a systematic analysis quantifying the role of the presence of turbulence fluctuations on uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters quantifying the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of the presence of flow turbulence fluctuations on uncertainties in ADCP discharge measurements. The results of this work indicate that random errors due to the flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).

  20. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ(2) tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.

  1. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
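
    The idea can be sketched with an ordinary interpolating cubic spline (Python with SciPy; the paper's B-spline and rational-spline formulations are not reproduced here): fit the spline to the empirical quantile function and push uniform variates through it to generate new samples.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def make_quantile_sampler(sample):
          """Fit a cubic spline to the empirical quantile function of `sample` and
          return a function that draws new variates by inverse-transform sampling."""
          data = np.sort(np.asarray(sample))
          probs = (np.arange(1, data.size + 1) - 0.5) / data.size   # plotting positions
          quantile = CubicSpline(probs, data)
          rng = np.random.default_rng(7)

          def draw(n):
              return quantile(rng.uniform(probs[0], probs[-1], size=n))

          return draw

      draw = make_quantile_sampler(np.random.default_rng(1).normal(size=500))
      print(draw(5))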

  2. Anderson localization for radial tree-like random quantum graphs

    NASA Astrophysics Data System (ADS)

    Hislop, Peter D.; Post, Olaf

    We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.

  3. Numerical study for heat generation/absorption in flow of nanofluid by a rotating disk

    NASA Astrophysics Data System (ADS)

    Aziz, Arsalan; Alsaedi, Ahmed; Muhammad, Taseer; Hayat, Tasawar

    2018-03-01

    Here, the MHD three-dimensional flow of a viscous nanoliquid by a rotating disk with heat generation/absorption and slip effects is addressed. Thermophoresis and random motion features are also incorporated. Velocity, temperature and concentration slip conditions are imposed at the boundary. An applied magnetic field is utilized. Low magnetic Reynolds number and boundary layer approximations have been employed in the problem formulation. Suitable transformations lead to a strongly nonlinear ordinary differential system. The obtained nonlinear system is solved numerically through the NDSolve technique. Graphs have been sketched in order to analyze how the velocity, temperature and concentration fields are affected by various pertinent variables. Moreover, the numerical values for the rates of heat and mass transfer have been tabulated and discussed.

  4. Entanglement routers via a wireless quantum network based on arbitrary two qubit systems

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2014-12-01

    A wireless quantum network is generated between multiple hops, where each hop consists of two entangled nodes. These nodes randomly share a finite number of entangled two-qubit systems. Different types of wireless quantum bridges (WQBs) are generated between the non-connected nodes. The efficiency of these WQBs when used as quantum channels between their terminals to perform quantum teleportation is investigated. We suggest a theoretical wireless quantum communication protocol to teleport unknown quantum signals from one node to another, where the more powerful WQBs are used as quantum channels. It is shown that, by increasing the efficiency of the sources that emit the initial partially entangled states, one can increase the efficiency of the wireless quantum communication protocol.

  5. The Reliability of Randomly Generated Math Curriculum-Based Measurements

    ERIC Educational Resources Information Center

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.

    2015-01-01

    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. Interventioncentral.org offers a math worksheet generator at no cost that creates randomly generated "math curriculum-based measures"…

  6. Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    Information necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model, is provided. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. As the program is a simulation, the method of attack is to decompose the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique to determine statistical parameters of minimum value, average value, and maximum value.
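
    The Monte Carlo element described above can be sketched in a few lines (Python; hypothetical module lifetimes and parameters, not the LOVES model itself): each run draws random failure times, and repeating runs with different random numbers yields the minimum, average, and maximum statistics.

      import numpy as np

      def module_failure_stats(n_runs=20, n_modules=50, mean_life_years=4.0,
                               horizon_years=15.0, seed=13):
          """Each run draws exponential module lifetimes over the mission horizon and
          counts failures; repeated runs give min/average/max statistics."""
          rng = np.random.default_rng(seed)
          failures_per_run = []
          for _ in range(n_runs):                          # one run = one set of random numbers
              lifetimes = rng.exponential(mean_life_years, size=n_modules)
              failures_per_run.append(int(np.count_nonzero(lifetimes < horizon_years)))
          return min(failures_per_run), float(np.mean(failures_per_run)), max(failures_per_run)

      print(module_failure_stats())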

  7. A test of the cross-scale resilience model: Functional richness in Mediterranean-climate ecosystems

    USGS Publications Warehouse

    Wardwell, D.A.; Allen, Craig R.; Peterson, G.D.; Tyre, A.J.

    2008-01-01

    Ecological resilience has been proposed to be generated, in part, in the discontinuous structure of complex systems. Environmental discontinuities are reflected in discontinuous, aggregated animal body mass distributions. Diversity of functional groups within body mass aggregations (scales) and redundancy of functional groups across body mass aggregations (scales) has been proposed to increase resilience. We evaluate that proposition by analyzing mammalian and avian communities of Mediterranean-climate ecosystems. We first determined that body mass distributions for each animal community were discontinuous. We then calculated the variance in richness of function across aggregations in each community, and compared observed values with distributions created by 1000 simulations using a null of random distribution of function, with the same n, number of discontinuities and number of functional groups as the observed data. Variance in the richness of functional groups across scales was significantly lower in real communities than in simulations in eight of nine sites. The distribution of function across body mass aggregations in the animal communities we analyzed was non-random, and supports the contentions of the cross-scale resilience model. ?? 2007 Elsevier B.V. All rights reserved.

  8. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    PubMed

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster consisting of 800 MHz Intel Pentium III processor shows an almost linear speedup up to 32 processors for simulating 1 x 10(8) or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability) which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors on increasing the problem size up to 8 x 10(8) histories. For a smaller number of histories (1 x 10(8)) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 x 10(8) histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron Cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster as compared to the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.

  9. The effects of sample scheduling and sample numbers on estimates of the annual fluxes of suspended sediment in fluvial systems

    USGS Publications Warehouse

    Horowitz, Arthur J.; Clarke, Robin T.; Merten, Gustavo Henrique

    2015-01-01

    Since the 1970s, there has been both continuing and growing interest in developing accurate estimates of the annual fluvial transport (fluxes and loads) of suspended sediment and sediment-associated chemical constituents. This study provides an evaluation of the effects of manual sample numbers (from 4 to 12 year⁻¹) and sample scheduling (random-based, calendar-based and hydrology-based) on the precision, bias and accuracy of annual suspended sediment flux estimates. The evaluation is based on data from selected US Geological Survey daily suspended sediment stations in the USA and covers basins ranging in area from just over 900 km² to nearly 2 million km² and annual suspended sediment fluxes ranging from about 4 Kt year⁻¹ to about 200 Mt year⁻¹. The results appear to indicate that there is a scale effect for random-based and calendar-based sampling schemes, with larger sample numbers required as basin size decreases. All the sampling schemes evaluated display some level of positive (overestimates) or negative (underestimates) bias. The study further indicates that hydrology-based sampling schemes are likely to generate the most accurate annual suspended sediment flux estimates with the fewest number of samples, regardless of basin size. This type of scheme seems most appropriate when the determination of suspended sediment concentrations, sediment-associated chemical concentrations, annual suspended sediment and annual suspended sediment-associated chemical fluxes only represent a few of the parameters of interest in multidisciplinary, multiparameter monitoring programmes. The results are just as applicable to the calibration of autosamplers/suspended sediment surrogates currently used to measure/estimate suspended sediment concentrations and ultimately, annual suspended sediment fluxes, because manual samples are required to adjust the sample data/measurements generated by these techniques so that they provide depth-integrated and cross-sectionally representative data.

  10. Change in genetic size of small-closed populations: Lessons from a domestic mammal population.

    PubMed

    Ghafouri-Kesbi, Farhad

    2010-10-01

    The aim of this study was to monitor changes in the genetic size of a small closed population of Iranian Zandi sheep, by using pedigree information from animals born between 1991 and 2005. The genetic size was assessed by using measures based on the probability of identity-by-descent of genes (coancestry, f, and effective population size, N_e), as well as measures based on the probability of gene origin (effective number of founders, f_e, effective number of founder genomes, f_g, and effective number of non-founder genomes, f_ne). Average coancestry, or the degree of genetic similarity of individuals, increased from 0.81% to 1.44% during the period 1993 to 2005, at the same time that N_e decreased from 263 to 93. The observed trend for f_e was irregular throughout the experiment, such that f_e was 68, 87, 77, 92, and 80 in 1993, 1996, 1999, 2002, and 2005, respectively. Simultaneously, f_g, the most informative effective number, decreased from 61 to 35. The index of genetic diversity (GD), which was obtained from estimates of f_g, decreased by about 2% throughout the period studied. In addition, a noticeable reduction was observed in the estimates of f_ne, from 595 in 1993 to 61 in 2005. The higher than 1 ratio of f_e to f_g indicated the presence of bottlenecks and genetic drift in the development of this population of Zandi sheep. From 1993 to 1999, f_ne was much higher than f_e, thereby indicating that, with respect to loss of genetic diversity, the unequal contribution of founders was more important than the random genetic drift in non-founder generations. Subsequently, random genetic drift in non-founder generations was the major reason for f_e > f_ne. The minimization of average coancestry in new reproductive individuals was recommended as a means of preserving the population against a further loss in genetic diversity.

  11. Prophylactic antipsychotic use for postoperative delirium: a systematic review and meta-analysis.

    PubMed

    Hirota, Tomoya; Kishi, Taro

    2013-12-01

    Although antipsychotics have been used empirically to prevent the development of postoperative delirium, there has been no confirming evidence to support their use. Thus, we conducted a systematic review and a meta-analysis to elucidate their efficacy and tolerability in surgical patients. MEDLINE, EMBASE, the Cochrane Library databases, CINAHL, and PsycINFO were searched up to February 2013 without language restrictions, using the following keywords: (antipsychotics OR [nonproprietary name of each antipsychotic medication, separated by OR]) AND delirium AND (randomized OR random OR randomly). Randomized controlled trials comparing prophylactic use of antipsychotics with placebo in surgical patients were included. Two authors extracted and scrutinized the data. The risk ratio (RR), 95% confidence interval (CI), number needed to treat (NNT), and standardized mean difference were used. Six studies (3 haloperidol, 1 olanzapine, and 2 risperidone) including 1,689 surgical patients were identified. The results showed significant efficacy in reducing the occurrence of delirium (RR = 0.50, 95% CI = 0.34 to 0.73, P = .0003; NNT = 7, P = .001, 6 studies). Sensitivity analysis showed that second-generation antipsychotics were superior to placebo (RR = 0.36, P < .00001; NNT = 4, P < .00001), whereas haloperidol failed to show superiority to placebo. There were no statistically significant differences between groups in severity of delirium, discontinuation rate, or rates of several adverse events. Our results suggest that second-generation antipsychotics are more beneficial than placebo for preventing the incidence of delirium. Among patients who do develop delirium, the severity of delirium is not reduced in those who received prophylactic antipsychotics. © Copyright 2013 Physicians Postgraduate Press, Inc.

  12. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form a shape of an object, such as car or face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.

  13. The relations between network-operation and topological-property in a scale-free and small-world network with community structure

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Yao, Bing

    2017-10-01

    Generating suitable models that simulate dynamical functions and reveal the inner principles of complex systems and networks remains an open, demanding and difficult task. Because many real-life and artificial networks are built from series of simple, small groups (components), we discuss in this article some interesting and helpful network operations for generating more realistic network models. Focusing on community structure (modular topology), we present a class of sparse network models N(t, m). We show that N(t, 4) has not only the scale-free feature, which means that the probability that a randomly selected vertex has degree k decays as a power law, P(k) ∼ k^(-γ), where γ is the degree exponent, but also the small-world property, which indicates that the typical distance between two uniformly randomly chosen vertices grows proportionally to the logarithm of the order of N(t, 4); that is, the network has a relatively short diameter and low average path length while simultaneously displaying a high clustering coefficient. Next, the number of spanning trees of a network, a topological parameter correlated with reliability, synchronization capability and diffusion properties, is studied in more detail, and an exact analytical solution for the number of spanning trees of N(t, 4) is obtained. Based on these network operations, linking some hub vertices with each other should be helpful for constructing various network models and investigating the rules related to real-life networks.
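
    As a rough companion to the two quantities discussed above, the following Python sketch (using networkx on stand-in graphs, not the N(t, m) construction itself) inspects an empirical degree distribution for power-law decay and counts spanning trees via Kirchhoff's matrix-tree theorem; the example graphs are assumptions.

    ```python
    # Sketch of the two quantities the abstract discusses, on arbitrary small
    # stand-in graphs: an empirical degree distribution to inspect for power-law
    # decay, and the number of spanning trees via the matrix-tree theorem.
    import numpy as np
    import networkx as nx

    G = nx.barabasi_albert_graph(1000, 2, seed=1)   # stand-in scale-free graph

    # Empirical degree distribution P(k); a power law appears as a straight line
    # on a log-log plot of k versus P(k).
    degrees = np.array([d for _, d in G.degree()])
    k, counts = np.unique(degrees, return_counts=True)
    P_k = counts / counts.sum()

    # Matrix-tree theorem: the number of spanning trees equals any cofactor of
    # the graph Laplacian, i.e. the determinant of the Laplacian with one row
    # and the corresponding column removed.
    H = nx.path_graph(5)                            # tiny example with an exact count
    L = nx.laplacian_matrix(H).toarray().astype(float)
    n_spanning_trees = round(np.linalg.det(L[1:, 1:]))
    print(n_spanning_trees)                         # a path has exactly 1 spanning tree
    ```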

  14. Classification of urine sediment based on convolution neural network

    NASA Astrophysics Data System (ADS)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original convolutional neural network framework, which requires large numbers of training samples, all of the same size. The input images are shifted and cropped to generate sub-images of equal size. Dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are randomly selected such that each subset contains the same number of elements and no two subsets are identical. These proper subsets are used as input layers for the convolutional neural network. Through the convolution layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test set and training set are obtained. In an experiment classifying red blood cells, white blood cells and calcium oxalate crystals, the classification accuracy rate was 97% or higher.
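
    The following is a minimal PyTorch sketch of the pipeline described above, under assumed sizes: fixed-size random crops stand in for the shifted-and-cropped sub-images, dropout limits overfitting, and a small CNN produces one prediction per sub-image for the three sediment classes. Layer widths, crop size, and class count are illustrative assumptions, not the paper's architecture.

    ```python
    # Minimal PyTorch sketch of the described pipeline (assumed sizes and layer
    # widths, not the paper's exact architecture).
    import torch
    import torch.nn as nn

    def random_crop(img, size=64):
        """Crop a random size x size patch from a (C, H, W) tensor."""
        _, H, W = img.shape
        top = torch.randint(0, H - size + 1, (1,)).item()
        left = torch.randint(0, W - size + 1, (1,)).item()
        return img[:, top:top + size, left:left + size]

    class SedimentCNN(nn.Module):
        def __init__(self, n_classes=3):           # RBC, WBC, calcium oxalate crystals
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Dropout(0.5),                   # dropout over the pooled features
                nn.Linear(32 * 16 * 16, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = SedimentCNN()
    sub_images = torch.stack([random_crop(torch.randn(1, 96, 96)) for _ in range(8)])
    logits = model(sub_images)                     # one prediction per random sub-image
    ```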

  15. The nature and perception of fluctuations in human musical rhythms

    NASA Astrophysics Data System (ADS)

    Hennig, Holger; Fleischmann, Ragnar; Fredebohm, Anneke; Hagmayer, York; Nagler, Jan; Witt, Annette; Theis, Fabian; Geisel, Theo

    2012-02-01

    Although human musical performances represent one of the most valuable achievements of mankind, the best musicians perform imperfectly. Musical rhythms are not entirely accurate and thus inevitably deviate from the ideal beat pattern. Nevertheless, computer generated perfect beat patterns are frequently devalued by listeners due to a perceived lack of human touch. Professional audio editing software therefore offers a humanizing feature which artificially generates rhythmic fluctuations. However, the built-in humanizing units are essentially random number generators producing only simple uncorrelated fluctuations. Here, for the first time, we establish long-range fluctuations as an inevitable natural companion of both simple and complex human rhythmic performances [1]. Moreover, we demonstrate that listeners strongly prefer long-range correlated fluctuations in musical rhythms. Thus, the favorable fluctuation type for humanizing interbeat intervals coincides with the one generically inherent in human musical performances. [1] HH et al., PLoS ONE,6,e26457 (2011)
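
    To illustrate the contrast the study draws, here is a short Python sketch, under assumed parameters, that generates two kinds of timing deviations for a fixed interbeat grid: uncorrelated (white) jitter of the kind built into typical humanizing units, and long-range correlated (1/f-like) jitter produced by spectral synthesis. The jitter amplitude, spectral exponent, and tempo are illustrative.

    ```python
    # Contrast of the two "humanizing" strategies: white (uncorrelated) timing
    # jitter versus long-range correlated (1/f-like) jitter via spectral synthesis.
    # All parameter values are illustrative assumptions.
    import numpy as np

    def correlated_jitter(n, beta=1.0, sigma_ms=5.0, seed=0):
        """Generate n timing deviations (ms) with power spectrum ~ 1/f**beta."""
        rng = np.random.default_rng(seed)
        freqs = np.fft.rfftfreq(n, d=1.0)
        amplitude = np.zeros_like(freqs)
        amplitude[1:] = freqs[1:] ** (-beta / 2)       # shape the spectrum, zero DC
        phases = rng.uniform(0, 2 * np.pi, len(freqs))
        jitter = np.fft.irfft(amplitude * np.exp(1j * phases), n)
        return sigma_ms * jitter / jitter.std()        # rescale to the target spread

    n_beats, interbeat_ms = 512, 500.0                 # 120 bpm grid
    white = np.random.default_rng(1).normal(0, 5.0, n_beats)
    pink = correlated_jitter(n_beats, beta=1.0)
    humanized_uncorrelated = interbeat_ms + white      # typical built-in humanizer
    humanized_long_range = interbeat_ms + pink         # the preferred, human-like kind
    ```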

  16. Analysis on flood generation processes by means of a continuous simulation model

    NASA Astrophysics Data System (ADS)

    Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.

    2006-03-01

    In the present research, we exploited continuous hydrological simulation to investigate the key variables responsible for flood peak formation. With this purpose, a distributed hydrological model (DREAM) is used in cascade with a rainfall generator (IRP, Iterated Random Pulse) to simulate a large number of extreme events, providing insight into the main controls of flood generation mechanisms. The investigated variables are those used in theoretically derived probability distributions of floods based on the concept of partial contributing area (e.g. Iacobellis and Fiorentino, 2000). The continuous simulation model is used to investigate the hydrological losses occurring during extreme events, the variability of the source area contributing to the flood peak, and its lag time. Results suggest interesting simplifications of the theoretical probability distribution of floods according to the different climatic and geomorphologic environments. The study is applied to two basins located in Southern Italy with different climatic characteristics.
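
    For orientation only, the following Python sketch mimics the overall simulation chain at a toy level: a stochastic pulse-type rainfall generator feeds a crude runoff rule with hydrological losses and a partial contributing area, and the resulting flood peaks give an empirical distribution. It is not the DREAM model or the IRP generator; all distributions and parameter values are assumptions.

    ```python
    # Toy stand-in for the rainfall-generator -> runoff-model -> flood-peak chain.
    # Every distribution and parameter value here is an assumption.
    import numpy as np

    rng = np.random.default_rng(0)

    def synthetic_flood_peaks(n_events=10_000, area_km2=100.0):
        depth_mm = rng.gamma(shape=1.5, scale=20.0, size=n_events)     # storm depth
        duration_h = rng.gamma(shape=2.0, scale=3.0, size=n_events)    # storm duration
        losses_mm = rng.uniform(5.0, 30.0, size=n_events)              # hydrological losses
        runoff_mm = np.clip(depth_mm - losses_mm, 0.0, None)
        contrib_fraction = rng.beta(2.0, 5.0, size=n_events)           # partial contributing area
        # Crude peak proxy (m^3/s): runoff volume over the contributing area
        # spread over the storm duration.
        return (runoff_mm * 1e-3 * contrib_fraction * area_km2 * 1e6
                / (duration_h * 3600.0))

    peaks = synthetic_flood_peaks()
    print(np.quantile(peaks, [0.5, 0.9, 0.99]))    # empirical flood quantiles
    ```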

  17. The Origin of Apollo Objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perlmutter, Saul

    1984-03-29

    The source of the Earth-orbit-crossing asteroids has been much debated. (This class of asteroidal bodies includes the Apollo, Aten, and some Amor objects, each with its own orbital characteristics; we shall use the term Apollo objects to mean all Earth-crossers.) It is difficult to find a mechanism which would create new Apollo objects at a sufficient rate to balance the loss due to collision with planets and ejection from the solar system, and thus explain the estimated steady-state number. A likely source is the main asteroid belt, since it has similar photometric characteristics. There are gaps in the main belt which correspond to orbits resonant with the orbits of Jupiter and Saturn, and it has been shown that the resonances can perturb a body into an Earth-crossing orbit. Apollo objects could thus be generated when random collisions between asteroids in the main belt send fragments into these resonant orbits. Calculations of the creation rate from these random collisions, however, yield numbers too low by a factor of four. This rate could be significantly lower given the uncertainty in the efficiency of the resonance mechanism. As an alternative, it was suggested that the evaporation of a comet's volatile mantle as it passes near the sun could provide enough non-gravitational force to move the comet into an orbit with aphelion inside Jupiter's orbit, and thus safe from ejection from the solar system. The probability of such an event occurring is unknown, although the recent discovery of the 'asteroid' 1983 TB, with an orbit matching that of the Geminid meteor shower, suggests that such a mechanism has occurred at least once. New evidence from paleontology and geophysics, however, suggests a better solution to the problem of the source of the Apollos. M. Davis, P. Hut, and R. A. Muller recently proposed that an unseen companion to the sun passes through the Oort cloud every 28 million years, sending a shower of comets to the Earth; this provides an explanation for the periodicity of the fossil record of extinctions found by D. M. Raup and J. J. Sepkoski. W. Alvarez and R. A. Muller have shown that the craters on the Earth have an age distribution with a periodicity and phase consistent with this hypothesis. These periodic comet showers would of course pass through the entire solar system, colliding with other bodies besides the Earth. When the target is the asteroid belt, many small comets will have sufficient kinetic energy to disrupt large asteroids. This will generate many more fragments in the resonant orbits than would be generated by random collisions of asteroids with each other, and hence more Apollo objects. In this report, we shall calculate approximately (A) the number of comets per shower which cross the asteroid belt, (B) the probability of collisions with a single asteroid per shower, (C) the number of fragments with radius > 0.5 km which reach Apollo orbits, and (D) the current expected number of Apollos derived from comet/asteroid collisions. Given conservative assumptions, the calculated number is in agreement with observations.
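
    The report's estimate chains the four quantities (A) through (D) together. The Python sketch below shows only how such a chain multiplies out; every numerical value is an illustrative placeholder, and none is taken from the report.

    ```python
    # Order-of-magnitude sketch of how the estimates (A)-(D) chain together.
    # All numbers are illustrative placeholders, NOT values from this report.
    comets_crossing_belt_per_shower = 1e9     # (A) comets per shower crossing the belt
    p_hit_per_asteroid_per_comet = 1e-12      # chance one comet hits one given large asteroid
    n_disruptable_asteroids = 1e3             # large asteroids a comet could disrupt
    fragments_to_apollo_orbits_per_hit = 50   # (C) fragments > 0.5 km reaching Apollo orbits
    showers_per_apollo_lifetime = 3           # showers within a typical Apollo dynamical lifetime

    # (B) expected collisions per shower, summed over the disruptable asteroids
    hits_per_shower = (comets_crossing_belt_per_shower
                       * p_hit_per_asteroid_per_comet
                       * n_disruptable_asteroids)

    # (D) steady-state number of Apollos sustained by comet/asteroid collisions
    apollos_from_showers = (hits_per_shower
                            * fragments_to_apollo_orbits_per_hit
                            * showers_per_apollo_lifetime)
    print(f"illustrative steady-state count: {apollos_from_showers:.0f}")
    ```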

  18. Graphene based widely-tunable and singly-polarized pulse generation with random fiber lasers

    PubMed Central

    Yao, B. C.; Rao, Y. J.; Wang, Z. N.; Wu, Y.; Zhou, J. H.; Wu, H.; Fan, M. Q.; Cao, X. L.; Zhang, W. L.; Chen, Y. F.; Li, Y. R.; Churkin, D.; Turitsyn, S.; Wong, C. W.

    2015-01-01

    Pulse generation often requires a stabilized cavity and its corresponding mode structure for initial phase-locking. Contrastingly, modeless cavity-free random lasers provide new possibilities for high quantum efficiency lasing that could potentially be widely tunable spectrally and temporally. Pulse generation in random lasers, however, has remained elusive since the discovery of modeless gain lasing. Here we report coherent pulse generation with modeless random lasers based on the unique polarization selectivity and broadband saturable absorption of monolayer graphene. Simultaneous temporal compression of cavity-free pulses is observed with such a polarization modulation, along with a broadly-tunable pulsewidth across two orders of magnitude down to 900 ps, a broadly-tunable repetition rate across three orders of magnitude up to 3 MHz, and a singly-polarized pulse train at 41 dB extinction ratio, about an order of magnitude larger than conventional pulsed fiber lasers. Moreover, our graphene-based pulse formation also demonstrates robust pulse-to-pulse stability and wide-wavelength operation due to the cavity-less feature. Such a graphene-based architecture not only provides a tunable pulsed random laser for fiber-optic sensing, speckle-free imaging, and laser-material processing, but also a new way for the non-random CW fiber lasers to generate widely tunable and singly-polarized pulses. PMID:26687730

  19. Graphene based widely-tunable and singly-polarized pulse generation with random fiber lasers.

    PubMed

    Yao, B C; Rao, Y J; Wang, Z N; Wu, Y; Zhou, J H; Wu, H; Fan, M Q; Cao, X L; Zhang, W L; Chen, Y F; Li, Y R; Churkin, D; Turitsyn, S; Wong, C W

    2015-12-21

    Pulse generation often requires a stabilized cavity and its corresponding mode structure for initial phase-locking. Contrastingly, modeless cavity-free random lasers provide new possibilities for high quantum efficiency lasing that could potentially be widely tunable spectrally and temporally. Pulse generation in random lasers, however, has remained elusive since the discovery of modeless gain lasing. Here we report coherent pulse generation with modeless random lasers based on the unique polarization selectivity and broadband saturable absorption of monolayer graphene. Simultaneous temporal compression of cavity-free pulses is observed with such a polarization modulation, along with a broadly-tunable pulsewidth across two orders of magnitude down to 900 ps, a broadly-tunable repetition rate across three orders of magnitude up to 3 MHz, and a singly-polarized pulse train at 41 dB extinction ratio, about an order of magnitude larger than conventional pulsed fiber lasers. Moreover, our graphene-based pulse formation also demonstrates robust pulse-to-pulse stability and wide-wavelength operation due to the cavity-less feature. Such a graphene-based architecture not only provides a tunable pulsed random laser for fiber-optic sensing, speckle-free imaging, and laser-material processing, but also a new way for the non-random CW fiber lasers to generate widely tunable and singly-polarized pulses.

  20. Effects of First-Grade Number Knowledge Tutoring With Contrasting Forms of Practice.

    PubMed

    Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Schatschneider, Christopher; Hamlett, Carol L; Deselms, Jacqueline; Seethaler, Pamela M; Wilson, Julie; Craddock, Caitlin F; Bryant, Joan D; Luther, Kurstin; Changas, Paul

    2013-01-01

    The purpose of this study was to investigate the effects of 1st-grade number knowledge tutoring with contrasting forms of practice. Tutoring occurred 3 times per week for 16 weeks. In each 30-min session, the major emphasis (25 min) was number knowledge; the other 5 min provided practice in 1 of 2 forms. Nonspeeded practice reinforced relations and principles addressed in number knowledge tutoring. Speeded practice promoted quick responding and use of efficient counting procedures to generate many correct responses. At-risk students were randomly assigned to number knowledge tutoring with speeded practice (n = 195), number knowledge tutoring with nonspeeded practice (n = 190), and control (no tutoring, n = 206). Each tutoring condition produced stronger learning than control on all 4 mathematics outcomes. Speeded practice produced stronger learning than nonspeeded practice on arithmetic and 2-digit calculations, but effects were comparable on number knowledge and word problems. Effects of both practice conditions on arithmetic were partially mediated by increased reliance on retrieval, but only speeded practice helped at-risk children compensate for weak reasoning ability.
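
    As a purely illustrative sketch of the three-arm random assignment described above (not the study's actual randomization procedure), the Python snippet below shuffles arm labels matching the reported group sizes over an anonymized student list; the IDs and the seeded shuffle are assumptions.

    ```python
    # Illustrative three-arm random assignment with the reported group sizes;
    # student IDs and the seeded shuffle are assumptions, not the study's procedure.
    import random

    students = [f"student_{i:03d}" for i in range(591)]   # 195 + 190 + 206 at-risk students
    arms = (["speeded_practice"] * 195
            + ["nonspeeded_practice"] * 190
            + ["control"] * 206)

    rng = random.Random(42)
    rng.shuffle(arms)
    assignment = dict(zip(students, arms))
    print(sum(v == "control" for v in assignment.values()))   # 206
    ```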
