Sample records for true random number

  1. PUFKEY: A High-Security and High-Throughput Hardware True Random Number Generator for Sensor Networks

    PubMed Central

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-01-01

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard national institute of standards and technology (NIST) randomness tests and is resilient to a wide range of security attacks. PMID:26501283
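
    A minimal software sketch of the conditioning idea described above, assuming SHA-256 as a generic conditioner and a placeholder read_sram_startup() standing in for a real SRAM power-up dump (the paper's actual conditioning algorithm and hardware interface are not specified here):

      import hashlib
      import secrets

      def read_sram_startup(n_bytes=1024):
          # Placeholder: on real hardware this would be a memory-mapped read of an
          # uninitialized SRAM block taken right after a power cycle; here OS
          # randomness merely stands in for that raw dump.
          return secrets.token_bytes(n_bytes)

      def conditioned_seed(n_reads=8):
          # Generic conditioning step: hash several start-up snapshots into one
          # 256-bit seed.  Only cells that flip between power-ups carry entropy,
          # but hashing the whole pattern concentrates whatever entropy is present,
          # provided the total input entropy exceeds the 256-bit output length.
          h = hashlib.sha256()
          for _ in range(n_reads):
              h.update(read_sram_startup())
          return h.digest()

      print(conditioned_seed().hex())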

  2. PUFKEY: a high-security and high-throughput hardware true random number generator for sensor networks.

    PubMed

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-10-16

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard national institute of standards and technology (NIST) randomness tests and is resilient to a wide range of security attacks.

  3. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially-available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high speed modulation sources and detectors for optical fiber telecommunication devices.

  4. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.

  5. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.

  6. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

    True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.
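
    One plausible reading of the comparison-and-alternation idea above, as a purely software simulation (the resistance statistics, the 5% device mismatch, and the specific alternating scheme below are illustrative assumptions, not the paper's measured values or exact circuit):

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000

      # Cycle-to-cycle off-state (HRS) resistance variation; device B is given a
      # deliberate ~5% higher median resistance to model a fixed mismatch that
      # would bias a naive comparator.  All parameters are illustrative.
      r_a = rng.lognormal(mean=np.log(1.00e6), sigma=0.25, size=N)
      r_b = rng.lognormal(mean=np.log(1.05e6), sigma=0.25, size=N)

      naive_bits = (r_a > r_b).astype(int)

      # Alternating read: swap which device feeds the "+" input of the comparator
      # on every other cycle, so a fixed offset contributes 1s and 0s equally often.
      swap = (np.arange(N) % 2) == 1
      alt_bits = np.where(swap, r_b > r_a, r_a > r_b).astype(int)

      print("naive bias      :", naive_bits.mean())  # pulled below 0.5 by the mismatch
      print("alternating bias:", alt_bits.mean())    # close to 0.5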

  7. Quantum random number generator based on quantum nature of vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.

    2017-11-01

    A quantum random number generator (QRNG) allows one to obtain true random bit sequences. In a QRNG based on the quantum nature of the vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare the mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model for a QRNG based on homodyne detection. The two descriptions are identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive the relation between the input radiation and the resulting differential current of the homodyne detector. We experimentally demonstrate the possibility of true random bit generation using a QRNG based on homodyne detection with a Y-splitter.
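
    For reference, the textbook beam-splitter and homodyne relations behind such schemes can be summarized as below (standard quantum-optics notation, not necessarily the exact conventions of the paper): a 50:50 splitter mixes the vacuum signal mode with a strong local oscillator, and the differential photocurrent then samples a quadrature of the vacuum.

      % Hedged sketch of the standard 50:50 beam-splitter / homodyne relations
      \hat{c} = \frac{\hat{a} + \hat{b}}{\sqrt{2}}, \qquad
      \hat{d} = \frac{\hat{a} - \hat{b}}{\sqrt{2}}, \qquad
      \hat{i}_- \propto \hat{c}^\dagger\hat{c} - \hat{d}^\dagger\hat{d}
               = \hat{a}^\dagger\hat{b} + \hat{b}^\dagger\hat{a}
               \xrightarrow{\;\hat{b}\to|\alpha|e^{i\theta}\;}
               |\alpha|\left(\hat{a}e^{-i\theta} + \hat{a}^\dagger e^{i\theta}\right)
               = |\alpha|\,\hat{X}_\theta .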

  8. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers represent a fundamental ingredient for secure communications and numerical simulation, as well as for games and, more generally, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499

  9. Random ambience using high fidelity images

    NASA Astrophysics Data System (ADS)

    Abu, Nur Azman; Sahib, Shahrin

    2011-06-01

    Most secure communication nowadays mandates true random keys as an input. These operations are mostly designed and handled by the developers of the cryptosystem. Due to the nature of confidential cryptographic development today, pseudorandom keys are typically designed and still preferred by the developers of the cryptosystem. However, these pseudorandom keys are predictable, periodic and repeatable, and hence carry minimal entropy. True random keys are believed to be generated only via hardware random number generators. Careful statistical analysis is still required to gain any confidence that the process and apparatus generate numbers that are sufficiently random to suit cryptographic use. In the underlying research, each moment in life is considered unique in itself: the random key is unique for the given moment, generated by the user whenever he or she needs random keys in practical secure communication. The ambience captured in a high-fidelity digital image is tested for randomness according to the NIST Statistical Test Suite, and a recommendation for generating simple random cryptographic keys live at 4 megabits per second is reported.

  10. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.

  11. Solution-Processed Carbon Nanotube True Random Number Generator.

    PubMed

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.

  12. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.

  13. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    PubMed Central

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456

  14. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits an ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrates the maturity and overall understanding of the technology. PMID:26057576

  15. KMCLib 1.1: Extended random number support and technical updates to the KMCLib general framework for kinetic Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-11-01

    We here present a revised version, v1.1, of the KMCLib general framework for kinetic Monte-Carlo (KMC) simulations. The generation of random numbers in KMCLib now relies on the C++11 standard library implementation, and support has been added for the user to choose from a set of C++11 implemented random number generators. The Mersenne-twister, the 24 and 48 bit RANLUX and a 'minimal-standard' PRNG are supported. We have also included the possibility to use true random numbers via the C++11 std::random_device generator. This release also includes technical updates to support the use of an extended range of operating systems and compilers.

  16. True randomness from an incoherent source

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2017-11-01

    Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
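
    A minimal sketch of a Toeplitz-hashing extractor of the kind mentioned above, with the matrix defined over GF(2) by a public random seed; the sizes (1000 raw bits compressed to 256 output bits) are illustrative assumptions, and in practice the compression ratio would be set below the estimated min-entropy rate of the raw samples:

      import numpy as np

      def toeplitz_extract(raw_bits, m, seed_bits):
          # Multiply the raw bit vector by an m x n binary Toeplitz matrix over
          # GF(2).  seed_bits holds the n + m - 1 public random bits that define
          # the matrix via its constant diagonals: T[i][j] = seed_bits[i + n - 1 - j].
          raw = np.asarray(raw_bits, dtype=np.uint8)
          seed = np.asarray(seed_bits, dtype=np.uint8)
          n = raw.size
          assert seed.size == n + m - 1
          out = np.empty(m, dtype=np.uint8)
          for i in range(m):
              row = seed[i:i + n][::-1]        # row i of the Toeplitz matrix
              out[i] = (raw & row).sum() & 1   # inner product modulo 2
          return out

      # Toy usage: compress 1000 somewhat-biased raw bits down to 256 output bits.
      rng = np.random.default_rng(0)
      raw = (rng.random(1000) < 0.6).astype(np.uint8)
      seed = rng.integers(0, 2, size=1000 + 256 - 1, dtype=np.uint8)
      print(toeplitz_extract(raw, 256, seed)[:16])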

  17. Superparamagnetic perpendicular magnetic tunnel junctions for true random number generators

    NASA Astrophysics Data System (ADS)

    Parks, Bradley; Bapna, Mukund; Igbokwe, Julianne; Almasi, Hamid; Wang, Weigang; Majetich, Sara A.

    2018-05-01

    Superparamagnetic perpendicular magnetic tunnel junctions are fabricated and analyzed for use in random number generators. Time-resolved resistance measurements are used as streams of bits in statistical tests for randomness. Voltage control of the thermal stability enables tuning the average speed of random bit generation up to 70 kHz in a 60 nm diameter device. In its most efficient operating mode, the device generates random bits at an energy cost of 600 fJ/bit. A narrow range of magnetic field tunes the probability of a given state from 0 to 1, offering a means of probabilistic computing.

  18. Quantum random number generation

    DOE PAGES

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; ...

    2016-06-28

    Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.

  19. Truly random number generation: an example

    NASA Astrophysics Data System (ADS)

    Frauchiger, Daniela; Renner, Renato

    2013-10-01

    Randomness is crucial for a variety of applications, ranging from gambling to computer simulations, and from cryptography to statistics. However, many of the currently used methods for generating randomness do not meet the criteria that are necessary for these applications to work properly and safely. A common problem is that a sequence of numbers may look random but nevertheless not be truly random. In fact, the sequence may pass all standard statistical tests and yet be perfectly predictable. This renders it useless for many applications. For example, in cryptography, the predictability of a "randomly" chosen password is obviously undesirable. Here, we review a recently developed approach to generating true, and hence unpredictable, randomness.
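
    The point that statistical tests cannot certify unpredictability is easy to illustrate: the NIST frequency (monobit) test below is passed comfortably by the output of a seeded, fully deterministic PRNG (the test formula follows NIST SP 800-22; the example sequence is an assumption chosen purely for illustration):

      import math
      import random

      def monobit_pvalue(bits):
          # NIST SP 800-22 frequency (monobit) test: p-value for the hypothesis
          # that ones and zeros occur with equal probability.
          n = len(bits)
          s = sum(1 if b else -1 for b in bits)
          return math.erfc(abs(s) / math.sqrt(2 * n))

      # A perfectly predictable sequence: a PRNG with a known, fixed seed.  It
      # typically passes the test (p-value well above 0.01) despite containing
      # no true randomness at all.
      random.seed(42)
      bits = [random.getrandbits(1) for _ in range(100_000)]
      print(monobit_pvalue(bits))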

  20. Social Noise: Generating Random Numbers from Twitter Streams

    NASA Astrophysics Data System (ADS)

    Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús

    2015-12-01

    Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.), it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (like noise in semiconductors, quantum phenomena, etc.) play this role in state-of-the-art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as a randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
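
    As a hedged illustration only (the paper defines its own two generators, whose exact constructions are not reproduced here), one simple way to turn a public message stream into candidate random bits is to hash each message and keep part of the digest:

      import hashlib

      def bits_from_messages(messages, bits_per_message=32):
          # Hash each public message and keep a few bits of the digest as candidate
          # random output; any structure in the text is diffused by the hash, but
          # the entropy ultimately comes from which messages arrive and when.
          out = []
          for msg in messages:
              digest = hashlib.sha256(msg.encode("utf-8")).digest()
              value = int.from_bytes(digest[:4], "big")
              out.extend((value >> i) & 1 for i in range(bits_per_message))
          return out

      # In the real setting `messages` would come from the public streaming API;
      # here a static list merely stands in for the stream.
      sample = ["first public post", "another unrelated message", "yet more text"]
      print(bits_from_messages(sample)[:16])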

  1. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
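
    A simplified sketch of exact (true-orbit) iteration of the Bernoulli map x -> 2x mod 1 on numbers of the form p + q*sqrt(2); keeping p and q as exact rationals is a simplification of the paper's quadratic-algebraic-integer setting, and the seed below is an arbitrary example rather than one produced by the paper's seed-selection method:

      from fractions import Fraction

      def geq_one(p, q):
          # Exactly decide whether p + q*sqrt(2) >= 1 for rational p, q,
          # i.e. whether q*sqrt(2) >= 1 - p, by comparing signs and squares.
          r = 1 - p
          if q >= 0 and r <= 0:
              return True
          if q < 0 and r > 0:
              return False
          if q >= 0:                      # both sides non-negative: compare squares
              return 2 * q * q >= r * r
          return 2 * q * q <= r * r       # both sides non-positive: order reverses

      def bernoulli_bits(p, q, n):
          # Exact orbit of x = p + q*sqrt(2) under x -> 2x mod 1; the emitted bit
          # is the integer part of 2x, and the coefficients stay rational forever
          # (they grow in size, which is why exact arithmetic is required).
          bits = []
          for _ in range(n):
              p, q = 2 * p, 2 * q
              if geq_one(p, q):
                  bits.append(1)
                  p -= 1
              else:
                  bits.append(0)
          return bits

      # Example seed x0 = 1/3 + (1/5)*sqrt(2) ~ 0.616, an irrational point in (0, 1).
      print("".join(map(str, bernoulli_bits(Fraction(1, 3), Fraction(1, 5), 64))))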

  2. Source-Device-Independent Ultrafast Quantum Random Number Generation.

    PubMed

    Marangon, Davide G; Vallone, Giuseppe; Villoresi, Paolo

    2017-02-10

    Secure random numbers are a fundamental element of many applications in science, statistics, cryptography and, more generally, in security protocols. We present a method that enables the generation of high-speed unpredictable random numbers from the quadratures of an electromagnetic field without any assumption on the input state. The method allows us to eliminate the numbers that can be predicted due to the presence of classical and quantum side information. In particular, we introduce a procedure to estimate a bound on the conditional min-entropy based on the entropic uncertainty principle for position and momentum observables of infinite dimensional quantum systems. By the above method, we experimentally demonstrated the generation of secure true random bits at a rate greater than 1.7 Gbit/s.

  3. Fast physical-random number generation using laser diode's frequency noise: influence of frequency discriminator

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo

    2018-02-01

    Not so long ago, pseudo-random numbers generated by numerical formulae were considered to be adequate for encrypting important data files, because of the time needed to decode them. With today's ultra high-speed processors, however, this is no longer true. So, in order to thwart ever-more advanced attempts to breach our system's protections, cryptologists have devised a method that is considered to be virtually impossible to decode, and uses what is a limitless number of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photo detectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as the frequency discriminator, the examination pass rates were determined using the NIST FIPS 140-2 test at each bit, and the Random Number Generation (RNG) speed was noted.

  4. Seminar on Understanding Digital Control and Analysis in Vibration Test Systems, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A number of techniques for dealing with important technical aspects of the random vibration control problem are described. These include the generation of pseudo-random and true random noise, the control spectrum estimation problem, the accuracy/speed tradeoff, and control correction strategies. System hardware, the operator-system interface, safety features, and operational capabilities of sophisticated digital random vibration control systems are also discussed.

  5. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. Targeting the features of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array, randomly outputting 256 channels of simulation signals. The signals are generated by a true random number generator. The source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random numbers are uniformly distributed in the histogram, and the whole system has high reliability.

  6. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Kun; Wang, Xu; Li, Feng

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. Targeting the features of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array, randomly outputting 256 channels of simulation signals. The signals are generated by a true random number generator. The source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random numbers are uniformly distributed in the histogram, and the whole system has high reliability.

  7. Experimental study of a quantum random-number generator based on two independent lasers

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Xu, Feihu

    2017-12-01

    A quantum random-number generator (QRNG) can produce true randomness by utilizing the inherent probabilistic nature of quantum mechanics. Recently, the spontaneous-emission quantum phase noise of the laser has been widely deployed for quantum random-number generation, due to its high rate, its low cost, and the feasibility of chip-scale integration. Here, we perform a comprehensive experimental study of a phase-noise-based QRNG with two independent lasers, each of which operates in either continuous-wave (CW) or pulsed mode. We implement the QRNG by operating the two lasers in three configurations, namely, CW + CW, CW + pulsed, and pulsed + pulsed, and demonstrate their trade-offs, strengths, and weaknesses.

  8. Renal denervation in comparison with intensified pharmacotherapy in true resistant hypertension: 2-year outcomes of randomized PRAGUE-15 study.

    PubMed

    Rosa, Ján; Widimský, Petr; Waldauf, Petr; Zelinka, Tomáš; Petrák, Ondřej; Táborský, Miloš; Branny, Marian; Toušek, Petr; Čurila, Karol; Lambert, Lukáš; Bednář, František; Holaj, Robert; Štrauch, Branislav; Václavík, Jan; Kociánová, Eva; Nykl, Igor; Jiravský, Otakar; Rappová, Gabriela; Indra, Tomáš; Krátká, Zuzana; Widimský, Jiří

    2017-05-01

    The randomized, multicentre study compared the efficacy of renal denervation (RDN) versus spironolactone addition in patients with true resistant hypertension. We present the 24-month data. A total of 106 patients with true resistant hypertension were enrolled in this study: 52 patients were randomized to RDN and 54 patients to the spironolactone addition, with baseline SBP of 159 ± 17 and 155 ± 17 mmHg and average number of drugs 5.1 and 5.4, respectively. Two-year data are available in 86 patients. Spironolactone addition, as crossover after 1 year, was performed in 23 patients after RDN, and spironolactone addition followed by RDN was performed in five patients. Similar and comparable reduction of 24-h SBP after RDN or spironolactone addition after randomization was observed, 9.1 mmHg (P = 0.001) and 10.9 mmHg (P = 0.001), respectively. Similar decrease of office blood pressure (BP) was observed, 17.7 mmHg (P < 0.001) versus 14.1 mmHg (P < 0.001), whereas the number of antihypertensive drugs did not differ significantly between groups. Crossover analysis showed nonsignificantly better efficacy of spironolactone addition in 24-h SBP and office SBP reduction than RDN (3.7 mmHg, P = 0.27 and 4.6 mmHg, P = 0.28 in favour of spironolactone addition, respectively). Meanwhile, the number of antihypertensive drugs was significantly increased after spironolactone addition (+0.7, P = 0.001). In the settings of true resistant hypertension, spironolactone addition (if tolerated) seems to be of better efficacy than RDN in BP reduction over a period of 24 months. However, by contrast to the 12-month results, BP changes were not significantly greater.

  9. Digital servo control of random sound fields

    NASA Technical Reports Server (NTRS)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is of a random nature, the signals are essentially coherent and it is impossible to obtain a true average.

  10. Quantum random number generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu

    Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.

  11. N-state random switching based on quantum tunnelling

    NASA Astrophysics Data System (ADS)

    Bernardo Gavito, Ramón; Jiménez Urbanos, Fernando; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J.; Woodhead, Christopher S.; Missous, Mohamed; Roedig, Utz; Young, Robert J.

    2017-08-01

    In this work, we show how the hysteretic behaviour of resonant tunnelling diodes (RTDs) can be exploited for new functionalities. In particular, the RTDs exhibit a stochastic 2-state switching mechanism that could be useful for random number generation and cryptographic applications. This behaviour can be scaled to N-bit switching, by connecting various RTDs in series. The InGaAs/AlAs RTDs used in our experiments display very sharp negative differential resistance (NDR) peaks at room temperature which show hysteresis cycles that, rather than having a fixed switching threshold, show a probability distribution about a central value. We propose to use this intrinsic uncertainty emerging from the quantum nature of the RTDs as a source of randomness. We show that a combination of two RTDs in series results in devices with three-state outputs and discuss the possibility of scaling to N-state devices by subsequent series connections of RTDs, which we demonstrate up to the 4-state case. In this work, we suggest that the intrinsic uncertainty in the conduction paths of resonant tunnelling diodes can serve as a source of randomness that can be integrated into current electronics to produce on-chip true random number generators. The N-shaped I-V characteristic of RTDs results in a two-level random voltage output when driven with current pulse trains. Electrical characterisation and randomness testing of the devices were conducted in order to determine the validity of the true randomness assumption. Based on the results obtained for the single RTD case, we suggest the possibility of using multi-well devices to generate N-state random switching devices for their use in random number generation or multi-valued logic devices.

  12. FPGA and USB based control board for quantum random number generator

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Wan, Xu; Zhang, Hong-Fei; Gao, Yuan; Chen, Teng-Yun; Liang, Hao

    2009-09-01

    The design and implementation of an FPGA- and USB-based control board for quantum experiments are discussed. The use of a quantum true random number generator, the control logic in the FPGA, and the communication with a computer through the USB protocol are presented in this paper. Programmable, controlled signal input and output ports are implemented. Error detection of the data frame header and frame length is included. This board has been used successfully in our decoy-state based quantum key distribution (QKD) system.

  13. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  14. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  15. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  16. Design of high-throughput and low-power true random number generator utilizing perpendicularly magnetized voltage-controlled magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.

    2017-05-01

    A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications where a stable bit is needed to store information, in this work, the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of anisotropy to temporarily change the MTJ state into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by implementing a 64 ×64 MTJ array into CMOS circuits and executing operations in a parallel manner. Furthermore, the circuit consumes very low energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of the voltage-controlled MTJ switching.
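
    A rough numerical illustration of the barrier-lowering idea, using the Néel-Arrhenius switching law with an assumed 1 GHz attempt frequency and illustrative barrier heights (the device parameters in the paper may differ):

      import numpy as np

      def p_switch(delta_kT, pulse_ns, f0=1e9):
          # Néel-Arrhenius probability that at least one thermally activated switch
          # occurs during a pulse, for a barrier of height delta_kT (in units of kT)
          # and an assumed attempt frequency f0.
          rate = f0 * np.exp(-delta_kT)
          return 1.0 - np.exp(-rate * pulse_ns * 1e-9)

      # Storage condition: a tall barrier, essentially zero switching probability.
      print("retention (delta = 40 kT):", p_switch(40.0, pulse_ns=10))
      # During a VCMA pulse the barrier is pulled down; the state then hops many
      # times within the pulse, so the final state is close to a fair coin flip.
      print("VCMA pulse (delta = 0.5 kT):", p_switch(0.5, pulse_ns=10))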

  17. Benefits of Reiki therapy for a severely neutropenic patient with associated influences on a true random number generator.

    PubMed

    Morse, Melvin L; Beem, Lance W

    2011-12-01

    Reiki therapy is documented for relief of pain and stress. Energetic healing has been documented to alter biologic markers of illness such as hematocrit. True random number generators are reported to be affected by energy healers and spiritually oriented conscious awareness. The patient was a then 54-year-old severely ill man who had hepatitis C types 1 and 2 and who did not improve with conventional therapy. He also suffered from obesity, the metabolic syndrome, asthma, and hypertension. He was treated with experimental high-dose interferon/ribavirin therapy with resultant profound anemia and neutropenia. Energetic healing and Reiki therapy were administered initially to enhance the patient's sense of well-being and to relieve anxiety. Possible effects on the patient's absolute neutrophil count and hematocrit were incidentally noted. Reiki therapy was then initiated at times of profound neutropenia to assess its possible effect on the patient's absolute neutrophil count (ANC). Reiki and other energetic healing sessions were monitored with a true random number generator (RNG). Statistically significant relationships were documented between Reiki therapy, a quieting of the electronically created white noise of the RNG during healing sessions, and improvement in the patient's ANC. The immediate clinical result was that the patient could tolerate the high-dose interferon regimen without missing doses because of absolute neutropenia. The patient was initially a late responder to interferon and had been given a 5% chance of clearing the virus. He remains clear of the virus 1 year after treatment. The association between changes in the RNG, Reiki therapy, and a patient's ANC is the first to the authors' knowledge in the medical literature. Future studies assessing the effects of energetic healing on specific biologic markers of disease are anticipated. Concurrent use of a true RNG may prove to correlate with the effectiveness of energetic therapy.

  18. Benefits of Reiki Therapy for a Severely Neutropenic Patient with Associated Influences on a True Random Number Generator

    PubMed Central

    Beem, Lance W.

    2011-01-01

    Background: Reiki therapy is documented for relief of pain and stress. Energetic healing has been documented to alter biologic markers of illness such as hematocrit. True random number generators are reported to be affected by energy healers and spiritually oriented conscious awareness. Methods: The patient was a then 54-year-old severely ill man who had hepatitis C types 1 and 2 and who did not improve with conventional therapy. He also suffered from obesity, the metabolic syndrome, asthma, and hypertension. He was treated with experimental high-dose interferon/ribavirin therapy with resultant profound anemia and neutropenia. Energetic healing and Reiki therapy were administered initially to enhance the patient's sense of well-being and to relieve anxiety. Possible effects on the patient's absolute neutrophil count and hematocrit were incidentally noted. Reiki therapy was then initiated at times of profound neutropenia to assess its possible effect on the patient's absolute neutrophil count (ANC). Reiki and other energetic healing sessions were monitored with a true random number generator (RNG). Results: Statistically significant relationships were documented between Reiki therapy, a quieting of the electronically created white noise of the RNG during healing sessions, and improvement in the patient's ANC. The immediate clinical result was that the patient could tolerate the high-dose interferon regimen without missing doses because of absolute neutropenia. The patient was initially a late responder to interferon and had been given a 5% chance of clearing the virus. He remains clear of the virus 1 year after treatment. Conclusions: The association between changes in the RNG, Reiki therapy, and a patient's ANC is the first to the authors' knowledge in the medical literature. Future studies assessing the effects of energetic healing on specific biologic markers of disease are anticipated. Concurrent use of a true RNG may prove to correlate with the effectiveness of energetic therapy. PMID:22132706

  19. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary. Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent. Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes. Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of functions for generating pseudo-random numbers provided in Mathematica.
    Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, ..., 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the used source of randomness and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters, Vol. 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.

  20. FPGA Implementation of Metastability-Based True Random Number Generator

    NASA Astrophysics Data System (ADS)

    Hata, Hisashi; Ichikawa, Shuichi

    True random number generators (TRNGs) are important as a basis for computer security. Though there are some TRNGs composed of analog circuits, the use of digital circuits is desired for the application of TRNGs to logic LSIs. Some of the digital TRNGs utilize jitter in free-running ring oscillators as a source of entropy, which consume large power. Another type of TRNG exploits the metastability of a latch to generate entropy. Although this kind of TRNG has been mostly implemented with full-custom LSI technology, this study presents an implementation based on common FPGA technology. Our TRNG is comprised of logic gates only, and can be integrated in any kind of logic LSI. The RS latch in our TRNG is implemented as a hard-macro to guarantee the quality of randomness by minimizing the signal skew and load imbalance of internal nodes. To improve the quality and throughput, the outputs of 64-256 latches are XORed. The derived design was verified on a Xilinx Virtex-4 FPGA (XC4VFX20), and passed the NIST statistical test suite without post-processing. Our TRNG with 256 latches occupies 580 slices, while achieving 12.5 Mbps throughput.
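
    A software-only illustration of why XORing many latch outputs helps: assuming the latches resolve independently with a common bias (an idealization, not a measured property of the design), the piling-up lemma drives the combined bias toward zero exponentially fast:

      import numpy as np

      rng = np.random.default_rng(7)

      n_bits, n_latches, p_one = 200_000, 64, 0.55  # per-latch bias is an assumption

      # Model each metastability-resolving latch as an independent biased coin.
      stacked = (rng.random((n_latches, n_bits)) < p_one).astype(np.uint8)
      xored = np.bitwise_xor.reduce(stacked, axis=0)

      # Piling-up lemma: XORing k independent bits of bias e = p - 1/2 leaves a
      # bias of 2**(k-1) * e**k, roughly 1e-64 here, far below anything measurable.
      print("single-latch mean:", stacked[0].mean())
      print("XOR-of-64 mean   :", xored.mean())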

  1. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  2. On the design of henon and logistic map-based random number generator

    NASA Astrophysics Data System (ADS)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) method is one of the approaches to generating the key sequence. The randomness sources of TRNGs are divided into three main groups: electrical noise based, jitter based and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete time chaotic system is proposed, which is then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as the harvester method to obtain the series of random bits. Without any post-processing, the proposed design generated random bit sequences with high entropy value and passed all NIST 800.22 statistical tests.

  3. Generating and using truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2012-01-01

    The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summary. Program title: TRQS Catalogue identifier: AEKA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7924 No. of bytes in distributed program, including test data, etc.: 88 651 Distribution format: tar.gz Programming language: Mathematica, C Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and any computer supporting a recent version of Mathematica Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit) RAM: Case dependent Classification: 4.15 Nature of problem: Generation of random density matrices. Solution method: Use of a physical quantum random number generator. Running time: Generating 100 random numbers takes about 1 second, generating 1000 random density matrices takes more than a minute.

  4. Investigating the limits of PET/CT imaging at very low true count rates and high random fractions in ion-beam therapy monitoring.

    PubMed

    Kurz, Christopher; Bauer, Julia; Conti, Maurizio; Guérin, Laura; Eriksson, Lars; Parodi, Katia

    2015-07-01

    External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β⁺-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the used PET/CT device and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patientlike activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated listmode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and to additionally investigate the impact of scatter coincidences. Eventually, the recommended reconstruction scheme has been applied to exemplary postirradiation patient data-sets. Among the investigated reconstruction options, the overall best results in terms of image noise, activity quantification, and accurate geometrical recovery were achieved using the ordered subset expectation maximization reconstruction algorithm with time-of-flight (TOF) and point-spread function (PSF) information. For this algorithm, reasonably accurate (better than 5%) and precise (uncertainty of the mean activity below 10%) imaging can be provided down to 80,000 true coincidences at 96% RF. Image noise and geometrical fidelity are generally improved for fewer iterations. The main limitation for PET-based treatment monitoring has been identified in the small number of true coincidences, rather than the high intrinsic random background. Application of the optimized reconstruction scheme to patient data-sets results in a 25%-50% reduction in image noise at a comparable activity quantification accuracy and an improved geometrical performance with respect to the formerly used reconstruction scheme at HIT, adopted from nuclear medicine applications. Under the poor statistical conditions in PET-based treatment monitoring, improved results can be achieved by considering PSF and TOF information during image reconstruction and by applying fewer iterations than in conventional nuclear medicine imaging. Geometrical fidelity and image noise are mainly limited by the low number of true coincidences, not the high LSO-related random background. The retrieved results might also impact other emerging PET applications at low counting statistics.

  5. The LSS Review. Volume 3, Number 2

    ERIC Educational Resources Information Center

    Page, Stephen, Ed.; Shaw, Danielle, Ed.

    2004-01-01

    Beginners in many disciplines learn that correlation never proves causation, but sometimes, even in public health, correlation, mistaken for causation, becomes the basis for policy and great expenditures of public and private money. "True experiments" with random assignment to experimental and control groups hold a special place in the…

  6. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
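
A minimal synthetic sketch of this decorrelation, assuming a simple two-class Gaussian model and an LDA classifier (both illustrative choices, not the study's exact models or error estimators): repeatedly draw small training samples, and compare the cross-validated error estimate with the true error measured on a large hold-out set.

```python
# Minimal synthetic sketch (not the paper's exact protocol): draw many small
# training samples from a fixed two-class Gaussian model, fit a classifier,
# and compare the cross-validated error estimate with the true error
# evaluated on a large independent test set.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_features, n_train, n_trials = 20, 40, 200   # illustrative sizes
mu = np.zeros(n_features); mu[:3] = 0.8       # only 3 informative features

def sample(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, n_features)) + np.outer(y, mu)
    return X, y

X_test, y_test = sample(20000)                # large set -> "true" error
true_err, cv_err = [], []
for _ in range(n_trials):
    X, y = sample(n_train)
    clf = LinearDiscriminantAnalysis().fit(X, y)
    true_err.append(1.0 - clf.score(X_test, y_test))
    cv_err.append(1.0 - cross_val_score(clf, X, y, cv=5).mean())

dev = np.array(cv_err) - np.array(true_err)   # deviation distribution
print("corr(true, estimated) =", np.corrcoef(true_err, cv_err)[0, 1])
print("Var(deviation)        =", dev.var())
```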

  7. ⁹⁰Y-PET imaging: Exploring limitations and accuracy under conditions of low counts and high random fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene

    Purpose: ⁹⁰Y-positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with ⁹⁰Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging ⁹⁰Y and to compare experimental results with clinical data from two types of commercially available ⁹⁰Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (TOF)] and Biograph microcomputed tomography (mCT) (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of ⁹⁰Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a ⁹⁰Y solution in an 8:1 contrast ratio and four 30 min acquisitions were performed over a one week period. Finally, 32 patient data sets (8 treated with Therasphere® and 24 with SIR-Spheres®) were retrospectively reconstructed and activity in the whole field of view and the liver was compared to theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at extremely high random fraction and low counts for iterative algorithms. Point spread function (PSF) correction and TOF reconstruction in general reduce background variability and noise and increase recovered concentration. Results for patient data indicated a good correlation between the expected and PET reconstructed activities. A linear relationship between the expected and the measured activities in the organ of interest was observed for all reconstruction methods used: a linearity coefficient of 0.89 ± 0.05 for the Biograph mCT and 0.81 ± 0.05 for the Biograph TruePoint. Conclusions: Due to the low counts and high random fraction, accurate image quantification of ⁹⁰Y during selective internal radionuclide therapy is affected by random coincidence estimation, scatter correction, and any positivity constraint of the algorithm. Nevertheless, phantom and patient studies showed that the impact of the number of true and random coincidences on quantitative results was found to be limited as long as ordinary Poisson ordered subsets expectation maximization reconstruction algorithms with random smoothing are used. Adding PSF correction and TOF information to the reconstruction greatly improves the image quality in terms of bias, variability, noise reduction, and detectability. On the patient studies, the total activity in the field of view is in general accurately measured by the Biograph mCT and slightly overestimated by the Biograph TruePoint.
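
Two quantities that recur in this record, the random fraction and the decay correction back to the reference time, can be computed with a small helper. The ⁹⁰Y half-life of roughly 64.1 h is a physical constant; the function names and example counts below are illustrative, not taken from the study.

```python
# Hedged helper: random fraction of recorded coincidences and the decay-correction
# factor used to refer a measured activity back to the reference (infusion) time.
# The ~64.1 h half-life of 90Y is a physical constant; names/numbers are illustrative.
import math

Y90_HALF_LIFE_H = 64.1  # hours, approximate physical half-life of 90Y

def random_fraction(randoms, trues):
    """Fraction of recorded prompt coincidences that are randoms."""
    return randoms / (trues + randoms)

def decay_correction_factor(elapsed_h, half_life_h=Y90_HALF_LIFE_H):
    """Multiply a measured activity by this factor to refer it back to t = 0."""
    return math.exp(math.log(2.0) * elapsed_h / half_life_h)

print(random_fraction(randoms=9.6e5, trues=4.0e4))   # 0.96, i.e. a 96% random fraction
print(decay_correction_factor(elapsed_h=24.0))       # ~1.30 one day after infusion
```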

  8. A compressed sensing X-ray camera with a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
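
A generic compressed-sensing recovery sketch (random Gaussian measurements plus iterative soft-thresholding, not the on-chip ROPS circuit described above) illustrates why far fewer samples than pixels can suffice when the hit pattern is sparse; all sizes and parameters are illustrative.

```python
# Generic compressed-sensing sketch (not the on-chip ROPS architecture): recover a
# sparse "hit" vector from far fewer random linear measurements than pixels,
# using the iterative soft-thresholding algorithm (ISTA). Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_hits, n_meas = 1024, 10, 200

x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, n_hits, replace=False)] = rng.uniform(1.0, 5.0, n_hits)

A = rng.normal(size=(n_meas, n_pixels)) / np.sqrt(n_meas)  # random sensing matrix
y = A @ x_true                                             # compressed measurements

# ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n_pixels)
for _ in range(500):
    g = x + (A.T @ (y - A @ x)) / L    # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```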

  9. Benford's Law and Why the Integers Are Not What We Think They Are: A Critical Numeracy of Benford's Law

    ERIC Educational Resources Information Center

    Stoessiger, Rex

    2013-01-01

    A critical numeracy examination of Benford's Law suggests that our understanding of the integers is faulty. We think of them as equally likely to turn up as the first digit of a random real world number. For many real world data sets this is not true. In many cases, ranging from eBay auction prices to six digit numbers in Google to the…

  10. True and Quasi-Experimental Designs. ERIC/AE Digest.

    ERIC Educational Resources Information Center

    Gribbons, Barry; Herman, Joan

    Among the different types of experimental design are two general categories: true experimental designs and quasi-experimental designs. True experimental designs include more than one purposively created group, common measured outcomes, and random assignment. Quasi-experimental designs are commonly used when random assignment is not practical or…

  11. A perfect correlate does not a surrogate make

    PubMed Central

    Baker, Stuart G; Kramer, Barnett S

    2003-01-01

    Background There is common belief among some medical researchers that if a potential surrogate endpoint is highly correlated with a true endpoint, then a positive (or negative) difference in potential surrogate endpoints between randomization groups would imply a positive (or negative) difference in unobserved true endpoints between randomization groups. We investigate this belief when the potential surrogate and unobserved true endpoints are perfectly correlated within each randomization group. Methods We use a graphical approach. The vertical axis is the unobserved true endpoint and the horizontal axis is the potential surrogate endpoint. Perfect correlation within each randomization group implies that, for each randomization group, potential surrogate and true endpoints are related by a straight line. In this scenario the investigator does not know the slopes or intercepts. We consider a plausible example where the slope of the line is higher for the experimental group than for the control group. Results In our example with unknown lines, a decrease in mean potential surrogate endpoints from control to experimental groups corresponds to an increase in mean true endpoint from control to experimental groups. Thus the potential surrogate endpoints give the wrong inference. Similar results hold for binary potential surrogate and true outcomes (although the notion of correlation does not apply). The potential surrogate endpoint would give the correct inference if either (i) the unknown lines for the two groups coincided, which means that the distribution of true endpoint conditional on potential surrogate endpoint does not depend on treatment group, which is called the Prentice Criterion, or (ii) if one could accurately predict the lines based on data from prior studies. Conclusion Perfect correlation between potential surrogate and unobserved true outcomes within randomized groups does not guarantee correct inference based on a potential surrogate endpoint. Even in early phase trials, investigators should not base conclusions on potential surrogate endpoints in which the only validation is high correlation with the true endpoint within a group. PMID:12962545

  12. Role of Adding Spironolactone and Renal Denervation in True Resistant Hypertension: One-Year Outcomes of Randomized PRAGUE-15 Study.

    PubMed

    Rosa, Ján; Widimský, Petr; Waldauf, Petr; Lambert, Lukáš; Zelinka, Tomáš; Táborský, Miloš; Branny, Marian; Toušek, Petr; Petrák, Ondřej; Čurila, Karol; Bednář, František; Holaj, Robert; Štrauch, Branislav; Václavík, Jan; Nykl, Igor; Krátká, Zuzana; Kociánová, Eva; Jiravský, Otakar; Rappová, Gabriela; Indra, Tomáš; Widimský, Jiří

    2016-02-01

    This randomized, multicenter study compared the relative efficacy of renal denervation (RDN) versus pharmacotherapy alone in patients with true resistant hypertension and assessed the effect of spironolactone addition. We present here the 12-month data. A total of 106 patients with true resistant hypertension were enrolled in this study: 52 patients were randomized to RDN and 54 patients to the spironolactone addition, with baseline systolic blood pressure of 159±17 and 155±17 mm Hg and average number of drugs 5.1 and 5.4, respectively. Twelve-month results are available in 101 patients. The intention-to-treat analysis found a comparable mean 24-hour systolic blood pressure decline of 6.4 mm Hg, P=0.001 in RDN versus 8.2 mm Hg, P=0.002 in the pharmacotherapy group. Per-protocol analysis revealed a significant difference of 24-hour systolic blood pressure decline between complete RDN (6.3 mm Hg, P=0.004) and the subgroup where spironolactone was added, and this continued within the 12 months (15 mm Hg, P= 0.003). Renal artery computed tomography angiograms before and after 1 year post-RDN did not reveal any relevant changes. This study shows that over a period of 12 months, RDN is safe, with no serious side effects and no major changes in the renal arteries. RDN in the settings of true resistant hypertension with confirmed compliance is not superior to intensified pharmacological treatment. Spironolactone addition (if tolerated) seems to be more effective in blood pressure reduction. © 2015 American Heart Association, Inc.

  13. The heterogeneity statistic I(2) can be biased in small meta-analyses.

    PubMed

    von Hippel, Paul T

    2015-04-14

    Estimated effects vary across studies, partly because of random sampling error and partly because of heterogeneity. In meta-analysis, the fraction of variance that is due to heterogeneity is estimated by the statistic I(2). We calculate the bias of I(2), focusing on the situation where the number of studies in the meta-analysis is small. Small meta-analyses are common; in the Cochrane Library, the median number of studies per meta-analysis is 7 or fewer. We use Mathematica software to calculate the expectation and bias of I(2). I(2) has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small, but the bias is typically negative when the true fraction of heterogeneity is large. For example, with 7 studies and no true heterogeneity, I(2) will overestimate heterogeneity by an average of 12 percentage points, but with 7 studies and 80 percent true heterogeneity, I(2) can underestimate heterogeneity by an average of 28 percentage points. Biases of 12-28 percentage points are not trivial when one considers that, in the Cochrane Library, the median I(2) estimate is 21 percent. The point estimate I(2) should be interpreted cautiously when a meta-analysis has few studies. In small meta-analyses, confidence intervals should supplement or replace the biased point estimate I(2).
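
A crude simulation sketch, assuming k = 7 studies with equal and known within-study variance (a simplification of the paper's exact Mathematica analysis, so the bias figures will not match exactly), shows how I(2) is computed from Cochran's Q and how its small-sample bias can be probed.

```python
# Crude simulation sketch of the small-sample bias of I^2 with k = 7 studies,
# assuming equal and known within-study variance v (a simplification, so the
# exact bias figures will differ from the paper's analytical results).
import numpy as np

rng = np.random.default_rng(2)
k, v, n_rep = 7, 1.0, 20_000

def mean_I2(tau2):
    vals = []
    for _ in range(n_rep):
        y = rng.normal(0.0, np.sqrt(tau2 + v), size=k)   # observed study effects
        w = np.full(k, 1.0 / v)                          # inverse-variance weights
        ybar = np.average(y, weights=w)
        Q = np.sum(w * (y - ybar) ** 2)                  # Cochran's Q
        vals.append(max(0.0, (Q - (k - 1)) / Q))         # I^2 = (Q - df)/Q, floored at 0
    return np.mean(vals)

for tau2 in (0.0, 4.0):
    true_frac = tau2 / (tau2 + v)                        # population heterogeneity fraction
    print(f"true fraction {true_frac:.0%}: mean estimated I^2 = {mean_I2(tau2):.0%}")
```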

  14. Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler

    2016-01-01

    This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
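
A hedged sketch of the Monte Carlo propagation step, assuming Mach number is reduced from total and static pressure via the standard isentropic relation with gamma = 1.4; the nominal pressures and elemental uncertainties below are made-up illustrative values, not facility data.

```python
# Hedged sketch of Monte Carlo uncertainty propagation (not the facility's actual
# data-reduction code): Mach number from total and static pressure via the
# isentropic relation M = sqrt( (2/(g-1)) * ((p0/p)**((g-1)/g) - 1) ), gamma = 1.4.
# Nominal pressures and elemental uncertainties are made-up illustrative values.
import numpy as np

rng = np.random.default_rng(3)
gamma, n = 1.4, 100_000
p0_nom, p_nom = 180.0, 25.0      # kPa, nominal total and static pressures

# both the random (scatter) and systematic (offset) components are re-drawn
# for every Monte Carlo realization
p0 = rng.normal(p0_nom, 0.5, n) + rng.uniform(-0.3, 0.3, n)
p = rng.normal(p_nom, 0.2, n) + rng.uniform(-0.1, 0.1, n)

M = np.sqrt(2.0 / (gamma - 1.0) * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))
print(f"Mach = {M.mean():.3f} +/- {M.std(ddof=1):.4f}")
print(f"95% interval: [{np.percentile(M, 2.5):.3f}, {np.percentile(M, 97.5):.3f}]")
```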

  15. Digital-Analog Hybrid Scheme and Its Application to Chaotic Random Number Generators

    NASA Astrophysics Data System (ADS)

    Yuan, Zeshi; Li, Hongtao; Miao, Yunchi; Hu, Wen; Zhu, Xiaohua

    2017-12-01

    Practical random number generation (RNG) circuits are typically achieved with analog devices or digital approaches. Digital techniques, which use field-programmable gate arrays (FPGAs), graphics processing units (GPUs), etc., usually have better performance than analog methods, as they are programmable, efficient and robust. However, digital realizations suffer from the effect of finite precision. Accordingly, the generated random numbers (RNs) are actually periodic instead of being truly random. To tackle this limitation, in this paper we propose a novel digital-analog hybrid scheme that employs the digital unit as the main body, and minimal analog devices to generate physical RNs. Moreover, the possibility of realizing the proposed scheme with only one memory element is discussed. Without loss of generality, we use the capacitor and the memristor along with an FPGA to construct the proposed hybrid system, and a chaotic true random number generator (TRNG) circuit is realized, producing physical RNs at a throughput of Gbit/s scale. These RNs successfully pass all the tests in the NIST SP800-22 package, confirming the significance of the scheme in practical applications. In addition, the use of this new scheme is not restricted to RNGs, and it also provides a strategy to solve the effect of finite precision in other digital systems.
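
The finite-precision effect motivating the hybrid scheme can be demonstrated in a few lines: a chaotic map iterated in k-bit fixed point is fully deterministic and must eventually enter a cycle. The logistic map, bit width and seed below are illustrative.

```python
# Small sketch of the finite-precision effect: a logistic map x -> 4x(1-x) iterated
# in k-bit fixed-point arithmetic is deterministic and must eventually cycle.
def logistic_fixed_point(seed, bits=16, max_steps=1_000_000):
    scale = 1 << bits                       # represent x as an integer in [0, scale)
    x, seen = seed, {}
    for step in range(max_steps):
        if x in seen:                       # state revisited -> cycle found
            return step - seen[x], seen[x]  # (cycle length, transient length)
        seen[x] = step
        x = (4 * x * (scale - x)) // scale  # fixed-point 4*x*(1-x)
    return None, None

cycle, transient = logistic_fixed_point(seed=12345, bits=16)
print(f"16-bit fixed point: cycle of length {cycle} reached after {transient} steps")
```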

  16. Beyond Moore's law: towards competitive quantum devices

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2015-05-01

    A century after the invention of quantum theory and fifty years after Bell's inequality we see the first quantum devices emerge as products that aim to be competitive with the best classical computing devices. While a universal quantum computer of non-trivial size is still out of reach there exist a number of commercial and experimental devices: quantum random number generators, quantum simulators and quantum annealers. In this colloquium I will present some of these devices and validation tests we performed on them. Quantum random number generators use the inherent randomness in quantum measurements to produce true random numbers, unlike classical pseudorandom number generators which are inherently deterministic. Optical lattice emulators use ultracold atomic gases in optical lattices to mimic typical models of condensed matter physics. In my talk I will focus especially on the devices built by the Canadian company D-Wave Systems, which are special purpose quantum simulators for solving hard classical optimization problems. I will review the controversy around the quantum nature of these devices and will compare them to state-of-the-art classical algorithms. I will close with an outlook towards universal quantum computing and the question: which important problems that are intractable even for post-exa-scale classical computers could we expect to solve once we have a universal quantum computer?

  17. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ(2) tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.

  18. A compressed sensing X-ray camera with a multilayer architecture

    DOE PAGES

    Wang, Zhehui; Iaroshenko, O.; Li, S.; ...

    2018-01-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  19. High speed true random number generator with a new structure of coarse-tuning PDL in FPGA

    NASA Astrophysics Data System (ADS)

    Fang, Hongzhen; Wang, Pengjun; Cheng, Xu; Zhou, Keji

    2018-03-01

    A metastability-based TRNG (true random number generator) is presented in this paper, and implemented in FPGA. The metastable state of a D flip-flop is tunable through a two-stage PDL (programmable delay line). With the proposed coarse-tuning PDL structure, the TRNG core does not require extra placement and routing to ensure its entropy. Furthermore, the core needs fewer stages of coarse-tuning PDL at higher operating frequency, and thus saves more resources in FPGA. The designed TRNG achieves 25 Mbps @ 100 MHz throughput after proper post-processing, which is several times higher than other previous TRNGs based on FPGA. Moreover, the robustness of the system is enhanced with the adoption of a feedback system. The quality of the designed TRNG is verified by NIST (National Institute of Standards and Technology) and also accepted by class P1 of the AIS-20/31 test suite. Project supported by the S&T Plan of Zhejiang Provincial Science and Technology Department (No. 2016C31078), the National Natural Science Foundation of China (Nos. 61574041, 61474068, 61234002), and the K.C. Wong Magna Fund in Ningbo University, China.
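
The abstract mentions "proper post-processing" without specifying it; one common, simple choice (not necessarily the one used in this design) is von Neumann debiasing, which removes bias from independent but biased raw bits at the cost of throughput.

```python
# One common post-processing choice for raw TRNG bits (not necessarily the one
# used in this design): von Neumann debiasing maps the pair 01 -> 0, 10 -> 1,
# and discards 00/11, yielding unbiased output bits from biased independent input.
def von_neumann(bits):
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)          # 01 -> 0, 10 -> 1
    return out

raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0]   # illustrative raw bit stream
print(von_neumann(raw))                       # -> [0, 1, 0, 1]
```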

  20. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
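
A quick Monte Carlo check of the quoted behavior, assuming dates drawn uniformly over the true interval: the raw sample range underestimates the interval, and scaling it by (n + 1)/(n - 1) removes the bias on average.

```python
# Monte Carlo check of the quoted correction: the sample range of n uniformly
# distributed ages underestimates the true interval; multiplying by (n+1)/(n-1)
# makes it unbiased on average.
import numpy as np

rng = np.random.default_rng(4)
true_interval = 1.0                            # normalized interval length

for n in (5, 10):
    ages = rng.uniform(0.0, true_interval, size=(100_000, n))
    spans = ages.max(axis=1) - ages.min(axis=1)
    corrected = spans * (n + 1) / (n - 1)
    print(f"n={n:2d}: mean raw span {spans.mean():.3f}, mean corrected span {corrected.mean():.3f}")
```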

  1. A compressed sensing X-ray camera with a multilayer architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  2. Investigating the limits of PET/CT imaging at very low true count rates and high random fractions in ion-beam therapy monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurz, Christopher, E-mail: Christopher.Kurz@physik.uni-muenchen.de; Bauer, Julia; Conti, Maurizio

    Purpose: External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β⁺-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the used PET/CT device and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. Methods: The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patientlike activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated listmode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and to additionally investigate the impact of scatter coincidences. Eventually, the recommended reconstruction scheme has been applied to exemplary postirradiation patient data-sets. Results: Among the investigated reconstruction options, the overall best results in terms of image noise, activity quantification, and accurate geometrical recovery were achieved using the ordered subset expectation maximization reconstruction algorithm with time-of-flight (TOF) and point-spread function (PSF) information. For this algorithm, reasonably accurate (better than 5%) and precise (uncertainty of the mean activity below 10%) imaging can be provided down to 80 000 true coincidences at 96% RF. Image noise and geometrical fidelity are generally improved for fewer iterations. The main limitation for PET-based treatment monitoring has been identified in the small number of true coincidences, rather than the high intrinsic random background. Application of the optimized reconstruction scheme to patient data-sets results in a 25%-50% reduction in image noise at a comparable activity quantification accuracy and an improved geometrical performance with respect to the formerly used reconstruction scheme at HIT, adopted from nuclear medicine applications. Conclusions: Under the poor statistical conditions in PET-based treatment monitoring, improved results can be achieved by considering PSF and TOF information during image reconstruction and by applying fewer iterations than in conventional nuclear medicine imaging. Geometrical fidelity and image noise are mainly limited by the low number of true coincidences, not the high LSO-related random background. The retrieved results might also impact other emerging PET applications at low counting statistics.

  3. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    PubMed Central

    Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N.

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics. PMID:25954306
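
A tiny sketch of the two binding operators being compared, circular convolution (computed via the FFT, as in holographic reduced representations) and random permutation; the dimensionality and the similarity probes are illustrative.

```python
# Tiny sketch of the two binding operators compared in the paper: circular
# convolution (HRR-style, via the FFT) and random permutation. Dimensionality
# and the similarity probes are illustrative.
import numpy as np

rng = np.random.default_rng(5)
d = 2048

def rand_vec():                                # random unit vector
    v = rng.normal(0.0, 1.0 / np.sqrt(d), d)
    return v / np.linalg.norm(v)

a, b = rand_vec(), rand_vec()

# circular convolution binding; approximate unbinding with the involution of a
bind = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
a_inv = np.concatenate(([a[0]], a[1:][::-1]))          # involution: a*[i] = a[-i mod d]
b_hat = np.real(np.fft.ifft(np.fft.fft(bind) * np.fft.fft(a_inv)))
print("convolution: cos(b_hat, b) =",
      b_hat @ b / (np.linalg.norm(b_hat) * np.linalg.norm(b)))

# random permutation encoding of order: the sequence (a, b) as perm(a) + perm(perm(b))
perm = rng.permutation(d)
trace = a[perm] + b[perm][perm]
print("permutation: cos(trace, perm(a)) =",
      trace @ a[perm] / (np.linalg.norm(trace) * np.linalg.norm(a[perm])))
```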

  4. ARTS: automated randomization of multiple traits for study design.

    PubMed

    Maienschein-Cline, Mark; Lei, Zhengdeng; Gardeux, Vincent; Abbasi, Taimur; Machado, Roberto F; Gordeuk, Victor; Desai, Ankit A; Saraf, Santosh; Bahroos, Neil; Lussier, Yves

    2014-06-01

    Collecting data from large studies on high-throughput platforms, such as microarray or next-generation sequencing, typically requires processing samples in batches. There are often systematic but unpredictable biases from batch-to-batch, so proper randomization of biologically relevant traits across batches is crucial for distinguishing true biological differences from experimental artifacts. When a large number of traits are biologically relevant, as is common for clinical studies of patients with varying sex, age, genotype and medical background, proper randomization can be extremely difficult to prepare by hand, especially because traits may affect biological inferences, such as differential expression, in a combinatorial manner. Here we present ARTS (automated randomization of multiple traits for study design), which aids researchers in study design by automatically optimizing batch assignment for any number of samples, any number of traits and any batch size. ARTS is implemented in Perl and is available at github.com/mmaiensc/ARTS. ARTS is also available in the Galaxy Tool Shed, and can be used at the Galaxy installation hosted by the UIC Center for Research Informatics (CRI) at galaxy.cri.uic.edu. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
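
ARTS itself should be obtained from the cited repository; as a minimal stand-in for the underlying idea (not the ARTS algorithm), one can score many random batch assignments by the chi-square imbalance of a trait across batches and keep the best-balanced one.

```python
# Minimal stand-in for trait-balanced batch assignment (NOT the ARTS algorithm
# itself): score many random assignments of samples to equal-size batches by the
# chi-square imbalance of a categorical trait across batches and keep the best.
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_batches = 48, 4
sex = rng.choice(["F", "M"], n_samples)          # one illustrative trait

def imbalance(assign):
    """Chi-square statistic of the trait-by-batch table (lower = better balanced)."""
    table = np.zeros((n_batches, 2))
    for batch, s in zip(assign, sex):
        table[batch, 0 if s == "F" else 1] += 1
    expected = table.sum(axis=1, keepdims=True) * table.sum(axis=0, keepdims=True) / table.sum()
    return float(np.sum((table - expected) ** 2 / expected))

best_score, best_assign = np.inf, None
for _ in range(2000):
    assign = rng.permutation(np.repeat(np.arange(n_batches), n_samples // n_batches))
    score = imbalance(assign)
    if score < best_score:
        best_score, best_assign = score, assign
print("best chi-square imbalance:", best_score)
```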

  5. An Empirical Test of a Strategy for Training Examinees in the Use of Partial Information in Taking Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Bliss, Leonard B.

    The aim of this study was to show that the superiority of corrected-for-guessing scores over number right scores as true score estimates depends on the ability of examinees to recognize situations where they can eliminate one or more alternatives as incorrect and to omit items where they would only be guessing randomly. Previous investigations…
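
The classical correction-for-guessing ("formula") score that such studies build on is S = R - W/(k - 1) for k-choice items, where omitted items are not counted; a tiny worked example with illustrative numbers follows.

```python
# Classical correction-for-guessing (formula) score: S = R - W/(k - 1), with
# R items right, W items wrong (omits not counted) and k answer choices per item.
def formula_score(right, wrong, n_choices):
    return right - wrong / (n_choices - 1)

# 60-item, 4-choice test: 40 right, 12 wrong, 8 omitted (illustrative numbers)
print(formula_score(right=40, wrong=12, n_choices=4))   # 36.0
```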

  6. Use of World Wide Web-based directories for tracing subjects in epidemiologic studies.

    PubMed

    Koo, M M; Rohan, T E

    2000-11-01

    The recent availability of World Wide Web-based directories has opened up a new approach for tracing subjects in epidemiologic studies. The completeness of two World Wide Web-based directories (Canada411 and InfoSpace Canada) for subject tracing was evaluated by using a randomized crossover design for 346 adults randomly selected from respondents in an ongoing cohort study. About half (56.4%) of the subjects were successfully located by using either Canada411 or InfoSpace. Of the 43.6% of the subjects who could not be located using either directory, the majority (73.5%) were female. Overall, there was no clear advantage of one directory over the other. Although Canada411 could find significantly more subjects than InfoSpace, the number of potential matches returned by Canada411 was also higher, which meant that a longer list of potential matches had to be examined before a true match could be found. One strategy to minimize the number of potential matches per true match is to first search by InfoSpace with the last name and first name, then by Canada411 with the last name and first name, and finally by InfoSpace with the last name and first initial. Internet-based searches represent a potentially useful approach to tracing subjects in epidemiologic studies.

  7. Assessment of wadeable stream resources in the driftless area ecoregion in Western Wisconsin using a probabilistic sampling design.

    PubMed

    Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen

    2009-03-01

    The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly-selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and an associated "modified-random" site on each stream that was accessed via a road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites evaluated with reference condition thresholds, indicate that high proportions of the random sites (and by inference the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly-selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.

  8. Percolation in real multiplex networks

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra; Radicchi, Filippo

    2016-12-01

    We present an exact mathematical framework able to describe site-percolation transitions in real multiplex networks. Specifically, we consider the average percolation diagram valid over an infinite number of random configurations where nodes are present in the system with given probability. The approach relies on the locally treelike ansatz, so that it is expected to accurately reproduce the true percolation diagram of sparse multiplex networks with negligible number of short loops. The performance of our theory is tested in social, biological, and transportation multiplex graphs. When compared against previously introduced methods, we observe improvements in the prediction of the percolation diagrams in all networks analyzed. Results from our method confirm previous claims about the robustness of real multiplex networks, in the sense that the average connectedness of the system does not exhibit any significant abrupt change as its individual components are randomly destroyed.

  9. A novel image encryption algorithm based on chaos maps with Markov properties

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    In order to construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaotic maps with Markov properties was studied and such an algorithm was proposed. This kind of chaos has higher complexity than the Logistic map and Tent map, while keeping uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space, which has better performance than the original one. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which can dynamically change the permutation matrix and the key stream. From the experiments, it is known that the key stream can pass the SP800-22 test. The novel image encryption scheme can resist CPA, CCA and differential attacks. The algorithm is sensitive to the initial key and can change the distribution of the pixel values of the image. The correlation of the adjacent pixels can also be eliminated. When compared with the algorithm based on the Logistic map, it has higher complexity and better uniformity, and is closer to a true random number. It is also efficient to implement, which shows its value for common use.

  10. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
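
A hedged simulation in the spirit of this comparison (not the study's exact disease models): when case status depends only on the interaction of two SNPs, per-SNP Fisher exact tests see little marginal signal, while random-forest importances can still rank the interacting pair highly. Sample sizes, the binary genotype coding and the noise level are illustrative.

```python
# Hedged simulation: disease status depends only on the XOR of two SNPs, so
# univariate Fisher exact tests see little marginal signal while random-forest
# importances can rank the interacting pair. Sizes and coding are illustrative.
import numpy as np
from scipy.stats import fisher_exact
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n, n_snps = 600, 100
X = rng.integers(0, 2, size=(n, n_snps))              # simplified binary genotypes
y = np.where(X[:, 0] ^ X[:, 1], 1, 0)                 # pure two-locus interaction
y ^= (rng.random(n) < 0.1).astype(int)                # 10% phenotype noise

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
rf_rank = np.argsort(rf.feature_importances_)[::-1]

def fisher_p(j):
    table = [[np.sum((X[:, j] == a) & (y == b)) for b in (0, 1)] for a in (0, 1)]
    return fisher_exact(table)[1]

fisher_rank = np.argsort([fisher_p(j) for j in range(n_snps)])
print("top 5 by RF importance:", rf_rank[:5])          # SNPs 0 and 1 typically near the top
print("top 5 by Fisher exact :", fisher_rank[:5])      # usually not SNPs 0 and 1
```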

  11. Random SU(2) invariant tensors

    NASA Astrophysics Data System (ADS)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n  =  4. In this paper, we show that for n  >  4 the subleading correction is not divergent but a finite number. In some special situation, the number could be even smaller than 1/2, which is the subleading correction of random state over the entire Hilbert space of tensors.

  12. On marker-based parentage verification via non-linear optimization.

    PubMed

    Boerner, Vinzent

    2017-06-15

    Parentage verification by molecular markers is mainly based on short tandem repeat markers. Single nucleotide polymorphisms (SNPs) as bi-allelic markers have become the markers of choice for genotyping projects. Thus, the subsequent step is to use SNP genotypes for parentage verification as well. Recent developments of algorithms such as evaluating opposing homozygous SNP genotypes have drawbacks, for example the inability of rejecting all animals of a sample of potential parents. This paper describes an algorithm for parentage verification by constrained regression which overcomes the latter limitation and proves to be very fast and accurate even when the number of SNPs is as low as 50. The algorithm was tested on a sample of 14,816 animals with 50, 100 and 500 SNP genotypes randomly selected from 40k genotypes. The samples of putative parents of these animals contained either five random animals, or four random animals and the true sire. Parentage assignment was performed by ranking of regression coefficients, or by setting a minimum threshold for regression coefficients. The assignment quality was evaluated by the power of assignment (P[Formula: see text]) and the power of exclusion (P[Formula: see text]). If the sample of putative parents contained the true sire and parentage was assigned by coefficient ranking, P[Formula: see text] and P[Formula: see text] were both higher than 0.99 for the 500 and 100 SNP genotypes, and higher than 0.98 for the 50 SNP genotypes. When parentage was assigned by a coefficient threshold, P[Formula: see text] was higher than 0.99 regardless of the number of SNPs, but P[Formula: see text] decreased from 0.99 (500 SNPs) to 0.97 (100 SNPs) and 0.92 (50 SNPs). If the sample of putative parents did not contain the true sire and parentage was rejected using a coefficient threshold, the algorithm achieved a P[Formula: see text] of 1 (500 SNPs), 0.99 (100 SNPs) and 0.97 (50 SNPs). The algorithm described here is easy to implement, fast and accurate, and is able to assign parentage using genomic marker data with a size as low as 50 SNPs.
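
The paper's exact constrained-regression formulation is not reproduced here; a plausible sketch of the idea regresses the offspring's allele counts on the candidate parents' genotypes with non-negative coefficients (via SciPy's NNLS solver) and ranks candidates by coefficient, with the true sire expected to receive the largest one.

```python
# Plausible sketch of marker-based parentage ranking (not the paper's exact
# formulation): regress offspring allele counts (0/1/2) on candidate parents'
# genotypes with non-negative coefficients and rank candidates by coefficient.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
n_snps, n_candidates = 100, 5
freq = rng.uniform(0.1, 0.9, n_snps)                      # illustrative allele frequencies

def genotype():
    return rng.binomial(2, freq).astype(float)            # 0/1/2 allele counts per SNP

sire, dam = genotype(), genotype()
# offspring inherits one allele from each parent at every SNP
offspring = rng.binomial(1, sire / 2.0) + rng.binomial(1, dam / 2.0)

candidates = np.column_stack([sire] + [genotype() for _ in range(n_candidates - 1)])
coef, _ = nnls(candidates, offspring.astype(float))
print("non-negative regression coefficients:", np.round(coef, 3))
print("best-supported candidate index:", int(np.argmax(coef)))   # expected: 0, the true sire
```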

  13. Assessing Class-Wide Consistency and Randomness in Responses to True or False Questions Administered Online

    ERIC Educational Resources Information Center

    Pawl, Andrew; Teodorescu, Raluca E.; Peterson, Joseph D.

    2013-01-01

    We have developed simple data-mining algorithms to assess the consistency and the randomness of student responses to problems consisting of multiple true or false statements. In this paper we describe the algorithms and use them to analyze data from introductory physics courses. We investigate statements that emerge as outliers because the class…

  14. Study designs for determining and comparing sensitivities of disease screening tests.

    PubMed

    Prorok, Philip C; Kramer, Barnett S; Miller, Anthony B

    2015-12-01

    To investigate the capability of various study designs to determine the sensitivity of a disease screening test. Quantities that can be calculated from these designs were derived and examined for their relationship to true sensitivity (the ability to detect unrecognized disease that would surface clinically in the absence of screening) and overdiagnosis. To examine the sensitivity of one test, the single cohort design, in which all participants receive the test, is particularly weak, providing only an upper bound on the true sensitivity, and yields no information about overdiagnosis. A randomized design, with one control arm and participants tested in the other, that includes sufficient post-screening follow-up, allows calculation of bounds on, and an approximation to, true sensitivity and also determination of overdiagnosis. Without follow-up, bounds on the true sensitivity can be calculated. To compare two tests, the single cohort paired design in which all participants receive both tests is precarious. The three arm randomized design with post screening follow-up is preferred, yielding an approximation to the true sensitivity, bounds on the true sensitivity, and the extent of overdiagnosis of each test. Without post screening follow-up, bounds on the true sensitivities can be calculated. When an unscreened control arm is not possible, the two-arm randomized design is recommended. Individual test sensitivities cannot be determined, but with sufficient post-screening follow-up, an order relationship can be established, as can the difference in overdiagnosis between the two tests. © The Author(s) 2015.

  15. Noise Analysis of Simultaneous Quantum Key Distribution and Classical Communication Scheme Using a True Local Oscillator

    DOE PAGES

    Qi, Bing; Lim, Charles Ci Wen

    2018-05-07

    Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary’s point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. In conclusion, we conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.

  16. Noise Analysis of Simultaneous Quantum Key Distribution and Classical Communication Scheme Using a True Local Oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Bing; Lim, Charles Ci Wen

    Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary’s point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. In conclusion, we conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.

  17. Noise Analysis of Simultaneous Quantum Key Distribution and Classical Communication Scheme Using a True Local Oscillator

    NASA Astrophysics Data System (ADS)

    Qi, Bing; Lim, Charles Ci Wen

    2018-05-01

    Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary's point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. We conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.

  18. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
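
A tiny illustration of the underlying problem (not of the report's methods): with only n = 5 samples, the naive sample range almost never bounds the central 95% of the source distribution, so unguarded estimates under-cover.

```python
# Tiny illustration of the sparse-sample problem (not the report's methods): with
# only n = 5 samples from a standard normal population, how often does the naive
# sample range actually bound the central 95% of that population?
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n, n_trials = 5, 50_000
lo_true, hi_true = norm.ppf(0.025), norm.ppf(0.975)   # central 95% of the population

covered = 0
for _ in range(n_trials):
    s = rng.normal(size=n)
    if s.min() <= lo_true and s.max() >= hi_true:
        covered += 1
print(f"sample range of n={n} bounds the central 95% in {covered / n_trials:.1%} of trials")
```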

  19. Physical layer one-time-pad data encryption through synchronized semiconductor laser networks

    NASA Astrophysics Data System (ADS)

    Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris

    2016-02-01

    Semiconductor lasers (SL) have been proven to be a key device in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals under conditions with desirable statistics establishes them as a low-cost solution to cover various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog timeseries to digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers that are coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that are the seed to true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access, through the fiber-optic network, random bit streams that are generated in real time and synchronized with the rest of the nodes makes it possible to implement a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error correction methods are used to reduce the errors in the TRBS and the final error rate at the data decoding level. An appropriate selection of the sampling methodology and properties, as well as of the physical properties of the chaotic seed signal through which the network locks into synchronization, allows error-free performance.
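
Only the encryption step can be sketched in a few lines; assuming both nodes already hold an identical synchronized random byte stream (the optical synchronization itself is not modelled here), the one-time pad is a plain XOR of data with the shared pad.

```python
# Sketch of the one-time-pad encryption step only, assuming both nodes already
# share an identical synchronized random byte stream; os.urandom stands in for
# the optically generated TRBS here.
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "one-time pad must be at least as long as the data"
    return bytes(d ^ p for d, p in zip(data, pad))

pad = os.urandom(32)                           # stand-in for the synchronized TRBS
message = b"physical-layer OTP demo"
ciphertext = xor_bytes(message, pad)
print(xor_bytes(ciphertext, pad) == message)   # True: XOR with the same pad decrypts
```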

  20. Biological monitoring of environmental quality: The use of developmental instability

    USGS Publications Warehouse

    Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.

    1994-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails.

  1. Influence of radiation on metastability-based TRNG

    NASA Astrophysics Data System (ADS)

    Wieczorek, Piotr Z.; Wieczorek, Zbigniew

    2017-08-01

    This paper presents a True Random Number Generator (TRNG) based on flip-flops with violated timing constraints. The proposed circuit has been implemented in a Xilinx Spartan 6 device. The TRNG circuit utilizes the metastability phenomenon as a source of randomness; therefore, the paper discusses the influence of timing constraints on the flip-flop's proximity to metastability. The metastable range of operation enhances the influence of noise on flip-flop behavior, so the influence of an external stochastic source on flip-flop operation is also investigated. For this purpose a radioactive source was used. According to the results shown in the paper, the radiation increases the unpredictability of the metastable process of the flip-flops operating as the randomness source in the TRNG. The statistical properties of the TRNG operating under increased radiation were verified with the NIST battery of statistical tests.
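
    As a concrete illustration of the kind of check the NIST battery applies, the sketch below implements the SP 800-22 frequency (monobit) test and runs it on a toy bitstream in which each bit is the sign of a Gaussian noise sample, a crude stand-in for a noise-resolved metastable flip-flop; the noise model and sample size are assumptions, not the paper's hardware.

```python
# NIST SP 800-22 frequency (monobit) test on a simulated noise-driven bitstream.
import math
import random

def monobit_p_value(bits) -> float:
    """Return the monobit test p-value; p >= 0.01 is conventionally a pass."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# toy randomness source: sign of a Gaussian noise sample deciding each bit
bits = [random.gauss(0.0, 1.0) > 0.0 for _ in range(100_000)]
print(f"monobit p-value: {monobit_p_value(bits):.4f}")
```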

  2. Test-based exclusion diets in gastro-esophageal reflux disease patients: a randomized controlled pilot trial.

    PubMed

    Caselli, Michele; Zuliani, Giovanni; Cassol, Francesca; Fusetti, Nadia; Zeni, Elena; Lo Cascio, Natalina; Soavi, Cecilia; Gullini, Sergio

    2014-12-07

    To investigate the clinical response of gastro-esophageal reflux disease (GERD) symptoms to exclusion diets based on food intolerance tests. A double blind, randomized, controlled pilot trial was performed in 38 GERD patients partially or completely non-responders to proton pump inhibitor (PPI) treatment. Fasting blood samples from each patient were obtained; the leukocytotoxic test was performed by incubating the blood with a panel of 60 food items to be tested. The reaction of leukocytes (rounding, vacuolization, lack of movement, flattening, fragmentation or disintegration of cell wall) was then evaluated by optical microscopy and rated as follows: level 0 = negative, level 1 = slightly positive, level 2 = moderately positive, and level 3 = highly positive. A "true" diet excluding food items inducing moderate-severe reactions, and a "control" diet including them, was developed for each patient. Then, twenty patients received the "true" diet and 18 the "control" diet; after one month (T1) symptom severity was scored by the GERD impact scale (GIS). Hence, patients in the "control" group were switched to the "true" diet, and symptom severity was re-assessed after three months (T2). At baseline (T0) the mean GIS global score was 6.68 (range: 5-12) with no difference between the "true" and control groups (6.6 ± 1.19 vs 6.7 ± 1.7). All patients reacted moderately/severely to at least 1 food (range: 5-19), with a significantly greater number of food substances inducing reaction in controls compared with the "true" diet group (11.6 vs 7.0, P < 0.001). Food items more frequently involved were milk, lettuce, brewer's yeast, pork, coffee, rice, sole, asparagus, and tuna, followed by eggs, tomato, grain, shrimps, and chemical yeast. At T1 both groups displayed a reduction of GIS score ("true" group 3.3 ± 1.7, -50%, P = 0.001; control group 4.9 ± 2.8, -26.9%, P = 0.02), although the GIS score was significantly lower in the "true" vs "control" group (P = 0.04). At T2, after the diet switch, the "control" group showed a further reduction in GIS score (2.7 ± 1.9, -44.9%, P = 0.01), while the "true" group did not (2.6 ± 1.8, -21.3%, P = 0.19), so that the GIS scores didn't differ between the two groups. Our results suggest that food intolerance may play a role in the development of GERD symptoms, and leukocytotoxic test-based exclusion diets may be a possible therapeutic approach when PPIs are not effective or indicated.

  3. Asymmetrically dominated choice problems, the isolation hypothesis and random incentive mechanisms.

    PubMed

    Cox, James C; Sadiraj, Vjollca; Schmidt, Ulrich

    2014-01-01

    This paper presents an experimental study of the random incentive mechanisms which are a standard procedure in economic and psychological experiments. Random incentive mechanisms have several advantages but are incentive-compatible only if responses to the single tasks are independent. This is true if either the independence axiom of expected utility theory or the isolation hypothesis of prospect theory holds. We present a simple test of this in the context of choice under risk. In the baseline (one task) treatment we observe risk behavior in a given choice problem. We show that by integrating a second, asymmetrically dominated choice problem in a random incentive mechanism risk behavior can be manipulated systematically. This implies that the isolation hypothesis is violated and the random incentive mechanism does not elicit true preferences in our example.

  4. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

    The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.

  5. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
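
    The run-to-run variability described above is easy to reproduce with off-the-shelf tools. The sketch below uses scikit-learn's LassoCV (the ordinary Lasso rather than SCAD or the Adaptive Lasso) on simulated sparse, weak-signal data and repeats 10-fold cross-validation with different fold assignments; the data dimensions and noise level are illustrative assumptions.

```python
# Repeat 10-fold CV with different random fold splits and count selected variables.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=200, n_features=500, n_informative=10,
                       noise=25.0, random_state=0)   # sparse truth, weak signals

n_selected = []
for seed in range(20):
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    fit = LassoCV(cv=folds, max_iter=20000).fit(X, y)
    n_selected.append(int(np.sum(fit.coef_ != 0)))

print("selected-variable counts across CV repeats:", n_selected)
```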

  6. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

    In this work, bifurcation and a systematic approach for estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the parameters intrinsic growth rate of predators (s) and prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parametric values is similar to that with the estimated parametric values. Numerical simulations are presented to substantiate the analytical findings.
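
    A pseudo-random search of the kind mentioned above can be sketched in a few lines: perturb the current best parameter vector at random, keep the candidate whenever it lowers the sum-of-squares misfit to the observed series, and repeat. The toy model below (logistic growth) and all parameter values are placeholders, not the Leslie-Gower system of the paper.

```python
# Minimal pseudo-random search for least-squares parameter estimation.
import numpy as np

def sse(params, t, observed, model):
    return float(np.sum((model(params, t) - observed) ** 2))

def random_search(model, t, observed, lower, upper, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    best = rng.uniform(lower, upper)                  # random starting point
    best_err = sse(best, t, observed, model)
    for _ in range(n_iter):
        step = rng.normal(0.0, 0.1 * (upper - lower))
        candidate = np.clip(best + step, lower, upper)
        err = sse(candidate, t, observed, model)
        if err < best_err:                            # keep only improvements
            best, best_err = candidate, err
    return best, best_err

# toy model: logistic growth x(t) = K / (1 + (K - 1) exp(-r t)) with params (r, K)
model = lambda p, t: p[1] / (1.0 + (p[1] - 1.0) * np.exp(-p[0] * t))
t = np.linspace(0.0, 10.0, 50)
data = model(np.array([0.8, 5.0]), t) + np.random.default_rng(1).normal(0.0, 0.05, t.size)
est, err = random_search(model, t, data, np.array([0.1, 1.0]), np.array([2.0, 10.0]))
print("estimated (r, K):", est, " SSE:", err)
```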

  7. Imputation of a true endpoint from a surrogate: application to a cluster randomized controlled trial with partial information on the true endpoint.

    PubMed

    Nixon, Richard M; Duffy, Stephen W; Fender, Guy R K

    2003-09-24

    The Anglia Menorrhagia Education Study (AMES) is a randomized controlled trial testing the effectiveness of an education package applied to general practices. Binary data are available from two sources: general practitioner reported referrals to hospital, and referrals to hospital determined by independent audit of the general practices. The former may be regarded as a surrogate for the latter, which is regarded as the true endpoint. Data are only available for the true endpoint on a subset of the practices, but there are surrogate data for almost all of the audited practices and for most of the remaining practices. The aim of this paper was to estimate the treatment effect using data from every practice in the study. Where the true endpoint was not available, it was estimated by three approaches: a regression method, multiple imputation and a full likelihood model. Including the surrogate data in the analysis yielded an estimate of the treatment effect which was more precise than an estimate gained from using the true endpoint data alone. The full likelihood method provides a new imputation tool at the disposal of trials with surrogate data.

  8. Structurally complex and highly active RNA ligases derived from random RNA sequences

    NASA Technical Reports Server (NTRS)

    Ekland, E. H.; Szostak, J. W.; Bartel, D. P.

    1995-01-01

    Seven families of RNA ligases, previously isolated from random RNA sequences, fall into three classes on the basis of secondary structure and regiospecificity of ligation. Two of the three classes of ribozymes have been engineered to act as true enzymes, catalyzing the multiple-turnover transformation of substrates into products. The most complex of these ribozymes has a minimal catalytic domain of 93 nucleotides. An optimized version of this ribozyme has a kcat exceeding one per second, a value far greater than that of most natural RNA catalysts and approaching that of comparable protein enzymes. The fact that such a large and complex ligase emerged from a very limited sampling of sequence space implies the existence of a large number of distinct RNA structures of equivalent complexity and activity.

  9. Simulation using computer-piloted point excitations of vibrations induced on a structure by an acoustic environment

    NASA Astrophysics Data System (ADS)

    Monteil, P.

    1981-11-01

    Computation of the overall levels and spectral densities of the responses measured on a launcher skin, the fairing for instance, immersed in a random acoustic environment during take-off, was studied. The analysis of the transmission of these vibrations to the payload required the simulation of these responses by a shaker control system, using a small number of distributed shakers. Results show that this closed-loop computerized digital system allows the acquisition of auto and cross spectral densities equal to those of the responses previously computed. However, wider application is sought, e.g., road and runway profiles. The problems of multiple input-output system identification, multiple true random signal generation, and real-time programming are also addressed. The system should allow for the control of four shakers.

  10. Performance of the Automated Self-Administered 24-hour Recall relative to a measure of true intakes and to an interviewer-administered 24-h recall.

    PubMed

    Kirkpatrick, Sharon I; Subar, Amy F; Douglass, Deirdre; Zimmerman, Thea P; Thompson, Frances E; Kahle, Lisa L; George, Stephanie M; Dodd, Kevin W; Potischman, Nancy

    2014-07-01

    The Automated Self-Administered 24-hour Recall (ASA24), a freely available Web-based tool, was developed to enhance the feasibility of collecting high-quality dietary intake data from large samples. The purpose of this study was to assess the criterion validity of ASA24 through a feeding study in which the true intake for 3 meals was known. True intake and plate waste from 3 meals were ascertained for 81 adults by inconspicuously weighing foods and beverages offered at a buffet before and after each participant served him- or herself. Participants were randomly assigned to complete an ASA24 or an interviewer-administered Automated Multiple-Pass Method (AMPM) recall the following day. With the use of linear and Poisson regression analysis, we examined the associations between recall mode and 1) the proportions of items consumed for which a match was reported and that were excluded, 2) the number of intrusions (items reported but not consumed), and 3) differences between energy, nutrient, food group, and portion size estimates based on true and reported intakes. Respondents completing ASA24 reported 80% of items truly consumed compared with 83% in AMPM (P = 0.07). For both ASA24 and AMPM, additions to or ingredients in multicomponent foods and drinks were more frequently omitted than were main foods or drinks. The number of intrusions was higher in ASA24 (P < 0.01). Little evidence of differences by recall mode was found in the gap between true and reported energy, nutrient, and food group intakes or portion sizes. Although the interviewer-administered AMPM performed somewhat better relative to true intakes for matches, exclusions, and intrusions, ASA24 performed well. Given the substantial cost savings that ASA24 offers, it has the potential to make important contributions to research aimed at describing the diets of populations, assessing the effect of interventions on diet, and elucidating diet and health relations. This trial was registered at clinicaltrials.gov as NCT00978406. © 2014 American Society for Nutrition.

  11. Analyzing degradation data with a random effects spline regression model

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-03-17

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of degradation of a population over time and estimation of reliability is easy to perform.

  12. Analyzing degradation data with a random effects spline regression model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of degradation of a population over time and estimation of reliability is easy to perform.

  13. Black Clouds vs Random Variation in Hospital Admissions.

    PubMed

    Ong, Luei Wern; Dawson, Jeffrey D; Ely, John W

    2018-06-01

    Physicians often accuse their peers of being "black clouds" if they repeatedly have more than the average number of hospital admissions while on call. Our purpose was to determine whether the black-cloud phenomenon is real or explainable by random variation. We analyzed hospital admissions to the University of Iowa family medicine service from July 1, 2010 to June 30, 2015. Analyses were stratified by peer group (eg, night shift attending physicians, day shift senior residents). We analyzed admission numbers to find evidence of black-cloud physicians (those with significantly more admissions than their peers) and white-cloud physicians (those with significantly fewer admissions). The statistical significance of whether there were actual differences across physicians was tested with mixed-effects negative binomial regression. The 5-year study included 96 physicians and 6,194 admissions. The number of daytime admissions ranged from 0 to 10 (mean 2.17, SD 1.63). Night admissions ranged from 0 to 11 (mean 1.23, SD 1.22). Admissions increased from 1,016 in the first year to 1,523 in the fifth year. We found 18 white-cloud and 16 black-cloud physicians in simple regression models that did not control for this upward trend. After including study year and other potential confounding variables in the regression models, there were no significant associations between physicians and admission numbers and therefore no true black or white clouds. In this study, apparent black-cloud and white-cloud physicians could be explained by random variation in hospital admissions. However, this randomness incorporated a wide range in workload among physicians, with potential impact on resident education at the low end and patient safety at the high end.
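
    The paper's central point, that random variation alone can manufacture apparent black and white clouds, can be illustrated with a small simulation: give every physician the same true admission rate, tally their totals over a finite number of shifts, and test each total against the common mean. The shift count below is an assumption and the mean rate is taken loosely from the abstract; the test is a simple two-sided exact Poisson test, not the mixed-effects negative binomial model the authors used.

```python
# Apparent "clouds" from pure Poisson variation around an identical true rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_physicians, shifts_each, mean_admissions = 96, 60, 2.17

totals = rng.poisson(mean_admissions, size=(n_physicians, shifts_each)).sum(axis=1)
expected = mean_admissions * shifts_each

# two-sided exact Poisson test of each physician's total against the common mean
p_values = [2.0 * min(stats.poisson.cdf(c, expected), stats.poisson.sf(c - 1, expected))
            for c in totals]
flagged = sum(p < 0.05 for p in p_values)
print(f"physicians flagged at p < 0.05 despite identical true rates: {flagged}")
```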

  14. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.

  15. Astronomical random numbers for quantum foundations experiments

    NASA Astrophysics Data System (ADS)

    Leung, Calvin; Brown, Amy; Nguyen, Hien; Friedman, Andrew S.; Kaiser, David I.; Gallicchio, Jason

    2018-04-01

    Photons from distant astronomical sources can be used as a classical source of randomness to improve fundamental tests of quantum nonlocality, wave-particle duality, and local realism through Bell's inequality and delayed-choice quantum eraser tests inspired by Wheeler's cosmic-scale Mach-Zehnder interferometer gedanken experiment. Such sources of random numbers may also be useful for information-theoretic applications such as key distribution for quantum cryptography. Building on the design of an astronomical random number generator developed for the recent cosmic Bell experiment [Handsteiner et al. Phys. Rev. Lett. 118, 060401 (2017), 10.1103/PhysRevLett.118.060401], in this paper we report on the design and characterization of a device that, with 20-nanosecond latency, outputs a bit based on whether the wavelength of an incoming photon is greater than or less than ≈700 nm. Using the one-meter telescope at the Jet Propulsion Laboratory Table Mountain Observatory, we generated random bits from astronomical photons in both color channels from 50 stars of varying color and magnitude, and from 12 quasars with redshifts up to z = 3.9. With stars, we achieved bit rates of ~1 × 10^6 Hz/m^2, limited by saturation of our single-photon detectors, and with quasars of magnitudes between 12.9 and 16, we achieved rates between ~10^2 and 2 × 10^3 Hz/m^2. For bright quasars, the resulting bitstreams exhibit sufficiently low amounts of statistical predictability as quantified by the mutual information. In addition, a sufficiently high fraction of bits generated are of true astronomical origin in order to address both the locality and freedom-of-choice loopholes when used to set the measurement settings in a test of the Bell-CHSH inequality.
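
    The bit-assignment rule is simple enough to sketch directly: each detected photon contributes a 0 or 1 according to which side of the ~700 nm split its wavelength falls on, and predictability can be screened by estimating the mutual information between consecutive bits. The photon wavelengths below are simulated, so the numbers are purely illustrative.

```python
# Wavelength-threshold bit generation plus a crude serial-predictability check.
import numpy as np

rng = np.random.default_rng(7)
wavelengths_nm = rng.uniform(400.0, 1000.0, size=1_000_000)   # stand-in photon stream
bits = (wavelengths_nm > 700.0).astype(np.uint8)

def mutual_information(x, y) -> float:
    """Empirical mutual information (bits) between two binary sequences."""
    joint = np.histogram2d(x, y, bins=2)[0] / x.size
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])))

print("fraction of ones:", bits.mean())
print("MI between consecutive bits:", mutual_information(bits[:-1], bits[1:]))
```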

  16. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL

    PubMed Central

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-01-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities. PMID:24086091

  17. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    PubMed

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.

  18. Randomized comparison of renal denervation versus intensified pharmacotherapy including spironolactone in true-resistant hypertension: six-month results from the Prague-15 study.

    PubMed

    Rosa, Ján; Widimský, Petr; Toušek, Petr; Petrák, Ondřej; Čurila, Karol; Waldauf, Petr; Bednář, František; Zelinka, Tomáš; Holaj, Robert; Štrauch, Branislav; Šomlóová, Zuzana; Táborský, Miloš; Václavík, Jan; Kociánová, Eva; Branny, Marian; Nykl, Igor; Jiravský, Otakar; Widimský, Jiří

    2015-02-01

    This prospective, randomized, open-label multicenter trial evaluated the efficacy of catheter-based renal denervation (Symplicity, Medtronic) versus intensified pharmacological treatment including spironolactone (if tolerated) in patients with true-resistant hypertension. This was confirmed by 24-hour ambulatory blood pressure monitoring after excluding secondary hypertension and confirmation of adherence to therapy by measurement of plasma antihypertensive drug levels before enrollment. One hundred six patients were randomized to renal denervation (n=52) or intensified pharmacological treatment (n=54), with baseline systolic blood pressure of 159±17 and 155±17 mm Hg and an average number of drugs of 5.1 and 5.4, respectively. A significant reduction in 24-hour average systolic blood pressure after 6 months (-8.6 [95% confidence interval: -11.8, -5.3] mm Hg; P<0.001 in the renal denervation group versus -8.1 [95% confidence interval: -12.7, -3.4] mm Hg; P=0.001 in the pharmacological group) was observed, which was comparable in both groups. Similarly, a significant reduction in systolic office blood pressure (-12.4 [95% confidence interval: -17.0, -7.8] mm Hg; P<0.001 in the renal denervation group versus -14.3 [95% confidence interval: -19.7, -8.9] mm Hg; P<0.001 in the pharmacological group) was present. Between-group differences in change were not significant. The average number of antihypertensive drugs used after 6 months was significantly higher in the pharmacological group (+0.3 drugs; P<0.001). A significant increase in serum creatinine and a parallel decrease in creatinine clearance were observed in the pharmacological group; between-group differences were borderline significant. The 6-month results of this study confirmed the safety of renal denervation. In conclusion, renal denervation achieved a reduction of blood pressure comparable with intensified pharmacotherapy. © 2014 American Heart Association, Inc.

  19. General practice performance in referral for suspected cancer: influence of number of cases and case-mix on publicly reported data.

    PubMed

    Murchie, P; Chowdhury, A; Smith, S; Campbell, N C; Lee, A J; Linden, D; Burton, C D

    2015-05-26

    Publicly available data show variation in GPs' use of urgent suspected cancer (USC) referral pathways. We investigated whether this could be due to small numbers of cancer cases and random case-mix, rather than due to true variation in performance. We analysed individual GP practice USC referral detection rates (proportion of the practice's cancer cases that are detected via USC) and conversion rates (proportion of the practice's USC referrals that prove to be cancer) in routinely collected data from GP practices in all of England (over 4 years) and northeast Scotland (over 7 years). We explored the effect of pooling data. We then modelled the effects of adding random case-mix to practice variation. Correlations between practice detection rate and conversion rate became less positive when data were aggregated over several years. Adding random case-mix to between-practice variation indicated that the median proportion of poorly performing practices correctly identified after 25 cancer cases were examined was 20% (IQR 17 to 24) and after 100 cases was 44% (IQR 40 to 47). Much apparent variation in GPs' use of suspected cancer referral pathways can be attributed to random case-mix. The methods currently used to assess the quality of GP-suspected cancer referral performance, and to compare individual practices, are misleading. These should no longer be used, and more appropriate and robust methods should be developed.

  20. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    PubMed Central

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of the testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793

  1. An experiment to assess the cost-benefits of code inspections in large scale software development

    NASA Technical Reports Server (NTRS)

    Porter, A.; Siy, H.; Toman, C. A.; Votta, L. G.

    1994-01-01

    This experiment (currently in progress) is designed to measure costs and benefits of different code inspection methods. It is being performed with a real development team writing software for a commercial product. The dependent variables for each code unit's inspection are the elapsed time and the number of defects detected. We manipulate the method of inspection by randomly assigning reviewers, varying the number of reviewers and the number of teams, and, when using more than one team, randomly assigning author repair and non-repair of detected defects between code inspections. After collecting and analyzing the first 17 percent of the data, we have discovered several interesting facts about reviewers, about the defects recorded during reviewer preparation and during the inspection collection meeting, and about the repairs that are eventually made. (1) Only 17 percent of the defects that reviewers record in their preparations are true defects that are later repaired. (2) Defects recorded at the inspection meetings fall into three categories: 18 percent false positives requiring no author repair, 57 percent soft maintenance where the author makes changes only for readability or code standard enforcement, and 25 percent true defects requiring repair. (3) The median elapsed calendar time for code inspections is 10 working days - 8 working days before the collection meeting and 2 after. (4) In the collection meetings, 31 percent of the defects discovered by reviewers during preparation are suppressed. (5) Finally, 33 percent of the true defects recorded are discovered at the collection meetings and not during any reviewer's preparation. The results to date suggest that inspections with two sessions (two different teams) of two reviewers per session (2sX2p) are the most effective. These two-session inspections may be performed with author repair or with no author repair between the two sessions. We are finding that the two-session, two-person with repair (2sX2pR) inspections are the most expensive, taking 15 working days of calendar time from the time the code is ready for review until author repair is complete, whereas two-session, two-person with no repair (2sX2pN) inspections take only 10 working days, but find about 10 percent fewer defects.

  2. Methods for obtaining true particle size distributions from cross section measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, Kristina Alyse

    2013-01-01

    Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
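
    The two biases described above are easy to see in a simulation: for a sphere of radius R cut by a random plane at depth d below its centre, the observed section radius is sqrt(R^2 - d^2), always smaller than R, while the chance of being cut at all is proportional to R, so large spheres are over-represented among the sections. The sphere sizes and counts below are arbitrary.

```python
# Wicksell-type sectioning simulation: shrinkage of observed radii plus size bias.
import numpy as np

rng = np.random.default_rng(0)
true_radii = rng.choice([1.0, 2.0], size=100_000, p=[0.5, 0.5])   # two true sizes

# a sphere is intersected only if the plane passes within R of its centre
hit = rng.uniform(0.0, true_radii.max(), size=true_radii.size) < true_radii
depth = rng.uniform(0.0, true_radii[hit])                # distance of plane from centre
observed = np.sqrt(true_radii[hit] ** 2 - depth ** 2)    # observed circle radii

print("true mean radius          :", true_radii.mean())
print("fraction of sections R = 2:", np.mean(true_radii[hit] == 2.0))   # ~2/3, not 1/2
print("mean observed radius      :", observed.mean())                   # < radii of cut spheres
```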

  3. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.

  4. SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pahlka, R; Kappadath, S; Mawlawi, O

    2016-06-15

    Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual head Symbia “S” gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 uCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window range of 200–500 ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.
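
    A back-of-the-envelope version of the trues/randoms trade-off can be written down directly: with no scatter, NECR = T^2 / (T + R), where the true-coincidence rate grows with the timing window according to the 85 ns intermediate-state half-life and the random rate grows roughly as 2*tau*S^2 for singles rate S. The singles and cascade rates below are invented for illustration, not taken from the simulation.

```python
# Illustrative NECR-versus-timing-window scan for a coincidence cascade.
import numpy as np

singles_rate = 5.0e5          # assumed singles rate per detector (counts/s)
cascade_rate = 2.0e4          # assumed rate of genuinely paired 171/245 keV photons (counts/s)
half_life_ns = 85.0
windows_ns = np.array([50.0, 100.0, 200.0, 350.0, 500.0, 1000.0])

# fraction of cascades whose second photon arrives inside the window
p_within = 1.0 - np.exp(-np.log(2.0) * windows_ns / half_life_ns)
T = cascade_rate * p_within                        # true coincidence rate
R = 2.0 * (windows_ns * 1e-9) * singles_rate ** 2  # random coincidence rate
NECR = T ** 2 / (T + R)                            # no scatter term (sources in air)

for w, t, r, n in zip(windows_ns, T, R, NECR):
    print(f"window {w:6.0f} ns   trues {t:9.1f}   randoms {r:9.1f}   NECR {n:9.1f}")
```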

  5. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  6. Predicting within-herd prevalence of infection with bovine leukemia virus using bulk-tank milk antibody levels.

    PubMed

    Nekouei, Omid; Stryhn, Henrik; VanLeeuwen, John; Kelton, David; Hanna, Paul; Keefe, Greg

    2015-11-01

    Enzootic bovine leukosis (EBL) is an economically important infection of dairy cattle caused by bovine leukemia virus (BLV). Estimating the prevalence of BLV within dairy herds is a fundamental step towards pursuing efficient control programs. The objectives of this study were: (1) to determine the prevalence of BLV infection at the herd level using a bulk-tank milk (BTM) antibody ELISA in the Maritime region of Canada (3 provinces); and (2) to develop appropriate statistical models for predicting within-herd prevalence of BLV infection using BTM antibody ELISA titers. During 2013, three monthly BTM samples were collected from all dairy farms in the Maritime region of Canada (n=623) and tested for BLV milk antibodies using a commercial indirect ELISA. Based on the mean of the 3 BTM titers, 15 strata of herds (5 per province) were defined. From each stratum, 6 herds were randomly selected for a total of 90 farms. Within every selected herd, an additional BTM sample was taken (round 4), approximately 2 months after the third round. On the same day of BTM sampling, all cows that contributed milk to the fourth BTM sample were individually tested for BLV milk antibodies (n=6111) to estimate the true within-herd prevalence for the 90 herds. The association between true within-herd prevalence of BLV and means of various combinations of the BTM titers was assessed using linear regression models, adjusting for the stratified random sampling design. Herd level prevalence of BLV in the region was 90.8%. In the individual testing, 30.4% of cows were positive. True within-herd prevalences ranged from 0 to 94%. All linear regression models were able to predict the true within-herd prevalence of BLV reasonably well (R(2)>0.69). Predictions from the models were particularly accurate for low-to-medium spectrums of the BTM titers. In general, as a greater number of the four repeated BTM titers were incorporated in the models, narrower confidence intervals around the prediction lines were achieved. The model including all 4 BTM tests as the predictor had the best fit, although the models using 2 and 3 BTM tests provided similar results to 4 repeated tests. Therefore, testing two or three BTM samples with approximately two-month intervals would provide relatively precise estimates for the potential number of infected cows in a herd. The developed models in this study could be applied to control and eradication programs for BLV as cost-effective tools. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Prevention of Hypoglycemia With Predictive Low Glucose Insulin Suspension in Children With Type 1 Diabetes: A Randomized Controlled Trial.

    PubMed

    Battelino, Tadej; Nimri, Revital; Dovc, Klemen; Phillip, Moshe; Bratina, Natasa

    2017-06-01

    To investigate whether predictive low glucose management (PLGM) of the MiniMed 640G system significantly reduces the rate of hypoglycemia compared with the sensor-augmented insulin pump in children with type 1 diabetes. This randomized, two-arm, parallel, controlled, two-center open-label study included 100 children and adolescents with type 1 diabetes and glycated hemoglobin A1c ≤10% (≤86 mmol/mol) and using continuous subcutaneous insulin infusion. Patients were randomly assigned to either an intervention group with PLGM features enabled (PLGM ON) or a control group (PLGM OFF), in a 1:1 ratio, all using the same type of sensor-augmented insulin pump. The primary end point was the number of hypoglycemic events below 65 mg/dL (3.6 mmol/L), based on sensor glucose readings, during a 14-day study treatment. The analysis was performed by intention to treat for all randomized patients. The number of hypoglycemic events below 65 mg/dL (3.6 mmol/L) was significantly smaller in the PLGM ON compared with the PLGM OFF group (mean ± SD 4.4 ± 4.5 and 7.4 ± 6.3, respectively; P = 0.008). This was also true when calculated separately for night (P = 0.025) and day (P = 0.022). No severe hypoglycemic events occurred; however, there was a significant increase in time spent above 140 mg/dL (7.8 mmol/L) in the PLGM ON group (P = 0.0165). The PLGM insulin suspension was associated with a significantly reduced number of hypoglycemic events. Although this was achieved at the expense of increased time in moderate hyperglycemia, there were no serious adverse effects in young patients with type 1 diabetes. © 2017 by the American Diabetes Association.

  8. Progression-free survival as surrogate and as true end point: insights from the breast and colorectal cancer literature.

    PubMed

    Saad, E D; Katz, A; Hoff, P M; Buyse, M

    2010-01-01

    Significant achievements in the systemic treatment of both advanced breast cancer and advanced colorectal cancer over the past 10 years have led to a growing number of drugs, combinations, and sequences to be tested. The choice of surrogate and true end points has become a critical issue and one that is currently the subject of much debate. Many recent randomized trials in solid tumor oncology have used progression-free survival (PFS) as the primary end point. PFS is an attractive end point because it is available earlier than overall survival (OS) and is not influenced by second-line treatments. PFS is now undergoing validation as a surrogate end point in various disease settings. The question of whether PFS can be considered an acceptable surrogate end point depends not only on formal validation studies but also on a standardized definition and unbiased ascertainment of disease progression in clinical trials. In advanced breast cancer, formal validation of PFS as a surrogate for OS has so far been unsuccessful. In advanced colorectal cancer, in contrast, current evidence indicates that PFS is a valid surrogate for OS after first-line treatment with chemotherapy. The other question is whether PFS sufficiently reflects clinical benefit to be considered a true end point in and of itself.

  9. Exposure measurement error in PM2.5 health effects studies: A pooled analysis of eight personal exposure validation studies

    PubMed Central

    2014-01-01

    Background: Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typically available surrogate exposures. Methods: Daily personal and ambient PM2.5, and when available sulfate, measurements were compiled from nine cities, over 2 to 12 days. True exposure was defined as personal exposure to PM2.5 of ambient origin. Since PM2.5 of ambient origin could only be determined for five cities, personal exposure to total PM2.5 was also considered. Surrogate exposures were estimated as ambient PM2.5 at the nearest monitor or predicted outside subjects’ homes. We estimated calibration coefficients by regressing true on surrogate exposures in random effects models. Results: When monthly-averaged personal PM2.5 of ambient origin was used as the true exposure, calibration coefficients equaled 0.31 (95% CI: 0.14, 0.47) for nearest monitor and 0.54 (95% CI: 0.42, 0.65) for outdoor home predictions. Between-city heterogeneity was not found for outdoor home PM2.5 for either true exposure. Heterogeneity was significant for nearest monitor PM2.5, for both true exposures, but not after adjusting for city-average motor vehicle number for total personal PM2.5. Conclusions: Calibration coefficients were <1, consistent with previously reported chronic health risks using nearest monitor exposures being under-estimated when ambient concentrations are the exposure of interest. Calibration coefficients were closer to 1 for outdoor home predictions, likely reflecting less spatial error. Further research is needed to determine how our findings can be incorporated in future health studies. PMID:24410940
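
    The calibration-coefficient regression described in the Methods can be sketched with standard mixed-model tooling: regress the "true" personal exposure on the surrogate ambient concentration with a random intercept for city. The data below are simulated with an assumed slope of 0.5, and statsmodels' mixedlm is used as a generic stand-in for the authors' random effects models.

```python
# Calibration coefficient as the fixed-effect slope of a true-on-surrogate regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for city in range(9):
    city_shift = rng.normal(0.0, 1.0)                 # between-city intercept shift
    ambient = rng.uniform(5.0, 25.0, size=40)         # surrogate exposure (ug/m^3)
    personal = 2.0 + city_shift + 0.5 * ambient + rng.normal(0.0, 2.0, size=40)
    rows += [{"city": city, "ambient": a, "personal": p} for a, p in zip(ambient, personal)]
data = pd.DataFrame(rows)

fit = smf.mixedlm("personal ~ ambient", data, groups=data["city"]).fit()
print("calibration coefficient (slope):", round(fit.params["ambient"], 3))   # ~0.5 here
```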

  10. Evaluating image reconstruction methods for tumor detection performance in whole-body PET oncology imaging

    NASA Astrophysics Data System (ADS)

    Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard

    2000-04-01

    This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diam. lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.

  11. How to Fully Represent Expert Information about Imprecise Properties in a Computer System – Random Sets, Fuzzy Sets, and Beyond: An Overview

    PubMed Central

    Nguyen, Hung T.; Kreinovich, Vladik

    2014-01-01

    To help computers make better decisions, it is desirable to describe all our knowledge in computer-understandable terms. This is easy for knowledge described in terms of numerical values: we simply store the corresponding numbers in the computer. This is also easy for knowledge about precise (well-defined) properties which are either true or false for each object: we simply store the corresponding “true” and “false” values in the computer. The challenge is how to store information about imprecise properties. In this paper, we overview different ways to fully store the expert information about imprecise properties. We show that in the simplest case, when the only source of imprecision is disagreement between different experts, a natural way to store all the expert information is to use random sets; we also show how fuzzy sets naturally appear in such a random-set representation. We then show how the random-set representation can be extended to the general (“fuzzy”) case when, in addition to disagreements, experts are also unsure whether some objects satisfy certain properties or not. PMID:25386045

  12. Point Processes.

    DTIC Science & Technology

    1987-05-01

    For a point process N on E, N(B) < ∞ a.s. for each bounded set B, and N(∪_n B_n) = Σ_n N(B_n) a.s. for any disjoint B_1, B_2, .... The random variable N(A) represents the number of points in A. The distribution of a point process is characterized by its avoidance probabilities P(N'(B) = 0) for bounded B: if the sequences (ν, X_1, X_2, ...) and (ν', X_1', X_2', ...) are equal in distribution, then N =_d N'; the converse is true when E = R_+ or R and the X_n are the ordered points T_n. Test functions f: E → R_+ are taken to be continuous and such that {x: f(x) > 0} is a bounded set. Theorem 1.4: suppose N and N' are point processes on E; the following statements ...

  13. Impact of including or excluding both-armed zero-event studies on using standard meta-analysis methods for rare event outcome: a simulation study

    PubMed Central

    Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana

    2016-01-01

    Objectives: There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handled them differently depending on the choice of effect measures and authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method: We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results: The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions: We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725
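
    One reason the choice of pooling method matters here is visible in the Peto one-step estimator itself: a both-armed zero-event trial contributes zero to both the (O - E) numerator and the variance denominator, so including or excluding such trials leaves the Peto estimate unchanged, whereas continuity-corrected inverse-variance approaches are affected. The sketch below uses made-up trial counts.

```python
# Peto one-step pooled log odds ratio; BA0E trials drop out of the statistic.
import math

def peto_log_or(trials):
    """trials: list of (events_trt, n_trt, events_ctl, n_ctl)."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        n = n1 + n2
        events, non_events = a + c, n - (a + c)
        if events == 0 or non_events == 0:
            continue  # contributes 0 to both sums anyway
        expected = n1 * events / n
        variance = events * non_events * n1 * n2 / (n ** 2 * (n - 1))
        num += a - expected
        den += variance
    return num / den

trials = [(1, 100, 3, 100), (0, 200, 2, 200), (0, 150, 0, 150), (2, 80, 1, 80)]
or_all = math.exp(peto_log_or(trials))
or_no_ba0e = math.exp(peto_log_or([t for t in trials if t[0] + t[2] > 0]))
print("pooled Peto OR with / without the BA0E trial:", round(or_all, 3), round(or_no_ba0e, 3))
```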

  14. Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data

    USGS Publications Warehouse

    Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.

    2001-01-01

    Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used is calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
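
    The subsampling idea translates directly into a few lines of array code (the paper's implementation uses SAS and ARC/INFO AML rather than Python): draw points from the error distribution around an estimated location, look up each point's habitat class in a raster, and report the proportions. The raster, location, and error standard deviations below are invented for illustration.

```python
# Habitat-use proportions from subsampling a telemetry location's error distribution.
import numpy as np

rng = np.random.default_rng(11)

# toy habitat raster: 0 = open water, 1 = marsh, 2 = forest (100 m cells)
habitat = np.array([[0, 0, 1, 1, 2],
                    [0, 1, 1, 2, 2],
                    [1, 1, 2, 2, 2],
                    [1, 2, 2, 2, 2]])
cell_size = 100.0

est_xy = np.array([220.0, 180.0])     # estimated animal location (m)
error_sd = np.array([60.0, 40.0])     # assumed bivariate-normal telemetry error SDs (m)

samples = rng.normal(est_xy, error_sd, size=(500, 2))
cols = np.clip((samples[:, 0] // cell_size).astype(int), 0, habitat.shape[1] - 1)
rows = np.clip((samples[:, 1] // cell_size).astype(int), 0, habitat.shape[0] - 1)
classes = habitat[rows, cols]

for code, name in enumerate(["open water", "marsh", "forest"]):
    print(f"{name:11s}: {np.mean(classes == code):.2f}")
```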

  15. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
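
    The core importance-sampling idea (simulate under modified probabilities, reweight by the likelihood ratio) can be shown on a toy problem. The Python sketch below estimates the tail probability P(Z > 5) for a standard normal by sampling from a proposal shifted to N(5, 1); the shift and sample sizes are arbitrary choices for illustration and are unrelated to the neutron-transport models studied in the dissertation.

        import math, random

        random.seed(0)

        def naive_estimate(n):
            # plain Monte Carlo almost never observes the rare event
            return sum(random.gauss(0.0, 1.0) > 5.0 for _ in range(n)) / n

        def importance_estimate(n):
            # sample from N(5, 1) and reweight by the likelihood ratio N(0,1)/N(5,1)
            total = 0.0
            for _ in range(n):
                x = random.gauss(5.0, 1.0)
                if x > 5.0:
                    total += math.exp(-5.0 * x + 12.5)  # exp(-x**2/2) / exp(-(x-5)**2/2)
            return total / n

        print("naive:     ", naive_estimate(100_000))        # usually exactly 0
        print("importance:", importance_estimate(100_000))   # close to the true value, about 2.9e-7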

  16. Rare Event Simulation in Radiation Transport

    NASA Astrophysics Data System (ADS)

    Kollman, Craig

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.

  17. Methodological characteristics and treatment effect sizes in oral health randomised controlled trials: Is there a relationship? Protocol for a meta-epidemiological study.

    PubMed

    Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G; Amin, Maryam; Flores-Mir, Carlos

    2014-02-25

    It is fundamental that randomised controlled trials (RCTs) are properly conducted in order to reach well-supported conclusions. However, there is emerging evidence that RCTs are subject to biases which can overestimate or underestimate the true treatment effect, due to flaws in the study design characteristics of such trials. The extent to which this holds true in oral health RCTs, which have some unique design characteristics compared to RCTs in other health fields, is unclear. As such, we aim to examine the empirical evidence quantifying the extent of bias associated with methodological and non-methodological characteristics in oral health RCTs. We plan to perform a meta-epidemiological study, where a sample size of 60 meta-analyses (MAs) including approximately 600 RCTs will be selected. The MAs will be randomly obtained from the Oral Health Database of Systematic Reviews using a random number table; and will be considered for inclusion if they include a minimum of five RCTs, and examine a therapeutic intervention related to one of the recognised dental specialties. RCTs identified in selected MAs will be subsequently included if their study design includes a comparison between an intervention group and a placebo group or another intervention group. Data will be extracted from selected trials included in MAs based on a number of methodological and non-methodological characteristics. Moreover, the risk of bias will be assessed using the Cochrane Risk of Bias tool. Effect size estimates and measures of variability for the main outcome will be extracted from each RCT included in selected MAs, and a two-level analysis will be conducted using a meta-meta-analytic approach with a random effects model to allow for intra-MA and inter-MA heterogeneity. The intended audiences of the findings will include dental clinicians, oral health researchers, policymakers and graduate students. The aforementioned will be introduced to the findings through workshops, seminars, round table discussions and targeted individual meetings. Other opportunities for knowledge transfer will be pursued such as key dental conferences. Finally, the results will be published as a scientific report in a dental peer-reviewed journal.

  18. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions.

    PubMed

    Ndase, Patrick; Celum, Connie; Kidoguchi, Lara; Ronald, Allan; Fife, Kenneth H; Bukusi, Elizabeth; Donnell, Deborah; Baeten, Jared M

    2015-01-01

    Rapid HIV assays are the mainstay of HIV testing globally. Delivery of effective biomedical HIV prevention strategies such as antiretroviral pre-exposure prophylaxis (PrEP) requires periodic HIV testing. Because rapid tests have high (>95%) but imperfect specificity, they are expected to generate some false positive results. We assessed the frequency of true and false positive rapid results in the Partners PrEP Study, a randomized, placebo-controlled trial of PrEP. HIV testing was performed monthly using 2 rapid tests done in parallel with HIV enzyme immunoassay (EIA) confirmation following all positive rapid tests. A total of 99,009 monthly HIV tests were performed; 98,743 (99.7%) were dual-rapid HIV negative. Of the 266 visits with ≥1 positive rapid result, 99 (37.2%) had confirmatory positive EIA results (true positives), 155 (58.3%) had negative EIA results (false positives), and 12 (4.5%) had discordant EIA results. In the active PrEP arms, over two-thirds of visits with positive rapid test results were false positive results (69.2%, 110 of 159), although false positive results occurred at <1% (110/65,945) of total visits. When HIV prevalence or incidence is low due to effective HIV prevention interventions, rapid HIV tests result in a high number of false relative to true positive results, although the absolute number of false results will be low. Program roll-out for effective interventions should plan for quality assurance of HIV testing, mechanisms for confirmatory HIV testing, and counseling strategies for persons with positive rapid test results.
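
    The imbalance between false and true positives at low incidence follows directly from test specificity and the number of tests performed. The short Python sketch below works through that arithmetic with illustrative numbers only; the per-visit infection probability, sensitivity, and specificity are assumptions, not values from the Partners PrEP Study.

        def expected_positives(n_tests, p_infected, sensitivity, specificity):
            # expected true positives, false positives, and positive predictive value
            infected = n_tests * p_infected
            uninfected = n_tests - infected
            true_pos = infected * sensitivity
            false_pos = uninfected * (1.0 - specificity)
            return true_pos, false_pos, true_pos / (true_pos + false_pos)

        # e.g. 100,000 tests, 0.1% of visits with a new infection, 99.8% specificity
        tp, fp, ppv = expected_positives(100_000, 0.001, 0.98, 0.998)
        print(f"true positives ~{tp:.0f}, false positives ~{fp:.0f}, PPV ~{ppv:.2f}")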

  19. Hankin and Reeves' approach to estimating fish abundance in small streams: Limitations and alternatives

    USGS Publications Warehouse

    Thompson, W.L.

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled stream units. Violations of these assumptions may produce suspect results. To determine possible sources of the assumption violations, I used data on the abundance of steelhead Oncorhynchus mykiss from Hankin and Reeves' (1988) in a simulation composed of 50,000 repeated, stratified systematic random samples from a spatially clustered distribution. The simulation was used to investigate effects of a range of removal estimates, from 75% to 100% of true fish abundance, on overall stream fish population estimates. The effects of various categories of removal-estimates-to-snorkel-count correlation levels (r = 0.75-1.0) on fish population estimates were also explored. Simulation results indicated that Hankin and Reeves' approach may produce poor results unless removal estimates exceed at least 85% of the true number of fish within sampled units and unless correlations between removal estimates and snorkel counts are at least 0.90. A potential modification to Hankin and Reeves' approach is the inclusion of environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative approach is to use snorkeling combined with line transect sampling to estimate fish densities within stream units. As with any method of population estimation, a pilot study should be conducted to evaluate its usefulness, which requires a known (or nearly so) population of fish to serve as a benchmark for evaluating bias and precision of estimators.

  20. Accuracies of the synthesized monochromatic CT numbers and effective atomic numbers obtained with a rapid kVp switching dual energy CT scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Larson, Sandra C.

    2011-04-15

    Purpose: This study was performed to investigate the accuracies of the synthesized monochromatic images and effective atomic number maps obtained with the new GE Discovery CT750 HD CT scanner. Methods: A Gammex-RMI model 467 tissue characterization phantom and the CT number linearity section of a Phantom Laboratory Catphan 600 phantom were scanned using the dual energy (DE) feature on the GE CT750 HD scanner. Synthesized monochromatic images at various energies between 40 and 120 keV and effective atomic number (Z_eff) maps were generated. Regions of interest were placed within these images/maps to measure the average monochromatic CT numbers and average Z_eff of the materials within these phantoms. The true Z_eff values were either supplied by the phantom manufacturer or computed using Mayneord's equation. The linear attenuation coefficients for the true CT numbers were computed using the NIST XCOM program with the input of manufacturer supplied elemental compositions and densities. The effects of small variations in the assumed true densities of the materials were also investigated. Finally, the effect of body size on the accuracies of the synthesized monochromatic CT numbers was investigated using a custom lumbar section phantom with and without an external fat-mimicking ring. Results: Other than the Z_eff of the simulated lung inserts in the tissue characterization phantom, which could not be measured by DECT, the Z_eff values of all of the other materials in the tissue characterization and Catphan phantoms were accurate to within 15%. The accuracies of the synthesized monochromatic CT numbers of the materials in both phantoms varied with energy and material. For the 40-120 keV range, RMS errors between the measured and true CT numbers in the Catphan were 8-25 HU when the true CT numbers were computed using the nominal plastic densities. These RMS errors improve to 3-12 HU for assumed true densities within the nominal density ±0.02 g/cc range. The RMS errors between the measured and true CT numbers of the tissue mimicking materials in the tissue characterization phantom over the 40-120 keV range varied from about 6 to 248 HU and did not improve as dramatically with small changes in assumed true density. Conclusions: Initial tests indicate that the Z_eff values computed with DECT on this scanner are reasonably accurate; however, the synthesized monochromatic CT numbers can be very inaccurate, especially for dense tissue mimicking materials at low energies. Furthermore, the synthesized monochromatic CT numbers of materials still depend on the amount of the surrounding tissues especially at low keV, demonstrating that the numbers are not truly monochromatic. Further research is needed to develop DE methods that produce more accurate synthesized monochromatic CT numbers.
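
    For reference, a Mayneord-style effective atomic number can be computed from elemental mass fractions. The sketch below is an illustration only and uses the commonly quoted exponent of 2.94, which may differ from the exponent used in the study.

        def z_eff(composition, p=2.94):
            """Mayneord-type effective atomic number.
            composition: list of (Z, A, mass_fraction) tuples; p is the assumed exponent."""
            electrons = [(w * z / a, z) for z, a, w in composition]
            total = sum(e for e, _ in electrons)
            return sum((e / total) * z ** p for e, z in electrons) ** (1.0 / p)

        # water: hydrogen (Z=1, A=1.008) and oxygen (Z=8, A=16.00) by mass fraction
        print(z_eff([(1, 1.008, 0.112), (8, 16.00, 0.888)]))  # roughly 7.4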

  1. The Effects of School Gardens on Children's Science Knowledge: A Randomized Controlled Trial of Low-Income Elementary Schools

    ERIC Educational Resources Information Center

    Wells, Nancy M.; Myers, Beth M.; Todd, Lauren E.; Barale, Karen; Gaolach, Brad; Ferenz, Gretchen; Aitken, Martha; Henderson, Charles R.; Tse, Caroline; Pattison, Karen Ostlie; Taylor, Cayla; Connerly, Laura; Carson, Janet B.; Gensemer, Alexandra Z.; Franz, Nancy K.; Falk, Elizabeth

    2015-01-01

    This randomized controlled trial or "true experiment" examines the effects of a school garden intervention on the science knowledge of elementary school children. Schools were randomly assigned to a group that received the garden intervention (n = 25) or to a waitlist control group that received the garden intervention at the end of the…

  2. Preliminary results of neural networks and zernike polynomials for classification of videokeratography maps.

    PubMed

    Carvalho, Luis Alberto

    2005-02-01

    Our main goal in this work was to develop an artificial neural network (NN) that could classify specific types of corneal shapes using Zernike coefficients as input. Other authors have implemented successful NN systems in the past and have demonstrated their efficiency using different parameters. Our claim is that, given the increasing popularity of Zernike polynomials among the eye care community, this may be an interesting choice to add complementary value and precision to existing methods. By using a simple and well-documented corneal surface representation scheme, which relies on corneal elevation information, one can generate simple NN input parameters that are independent of curvature definition and that are also efficient. We have used the Matlab Neural Network Toolbox (MathWorks, Natick, MA) to implement a three-layer feed-forward NN with 15 inputs and 5 outputs. A database from an EyeSys System 2000 (EyeSys Vision, Houston, TX) videokeratograph installed at the Escola Paulista de Medicina-Sao Paulo was used. This database contained an unknown number of corneal types. From this database, two specialists selected 80 corneas that could be clearly classified into five distinct categories: (1) normal, (2) with-the-rule astigmatism, (3) against-the-rule astigmatism, (4) keratoconus, and (5) post-laser-assisted in situ keratomileusis. The corneal height (SAG) information of the 80 data files was fit with the first 15 Vision Science and its Applications (VSIA) standard Zernike coefficients, which were individually used to feed the 15 neurons of the input layer. The five output neurons were associated with the five typical corneal shapes. A group of 40 cases was randomly selected from the larger group of 80 corneas and used as the training set. The NN responses were statistically analyzed in terms of sensitivity [true positive/(true positive + false negative)], specificity [true negative/(true negative + false positive)], and precision [(true positive + true negative)/total number of cases]. The mean values for these parameters were, respectively, 78.75%, 97.81%, and 94%. Although we have used a relatively small training and testing set, results presented here should be considered promising. They are certainly an indication of the potential of Zernike polynomials as reliable parameters, at least in the cases presented here, as input data for artificial intelligence automation of the diagnosis process of videokeratography examinations. This technique should facilitate the implementation and add value to the classification methods already available. We also discuss briefly certain special properties of Zernike polynomials that we think make them suitable as NN inputs for this type of application.
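
    The network architecture described above (15 Zernike-coefficient inputs, five class outputs) is easy to reproduce in outline. The Python/NumPy sketch below is a structural illustration only: the hidden-layer width, learning rate, and the random stand-in data are assumptions, and the original work used the Matlab Neural Network Toolbox rather than this code.

        import numpy as np

        rng = np.random.default_rng(0)

        # stand-in data: 80 "corneas", 15 Zernike coefficients each, 5 classes
        X = rng.normal(size=(80, 15))
        y = rng.integers(0, 5, size=80)
        Y = np.eye(5)[y]                        # one-hot targets

        # three-layer feed-forward net: 15 inputs -> 20 hidden units -> 5 softmax outputs
        W1, b1 = rng.normal(scale=0.1, size=(15, 20)), np.zeros(20)
        W2, b2 = rng.normal(scale=0.1, size=(20, 5)), np.zeros(5)

        def forward(X):
            H = np.tanh(X @ W1 + b1)
            Z = H @ W2 + b2
            P = np.exp(Z - Z.max(axis=1, keepdims=True))
            return H, P / P.sum(axis=1, keepdims=True)

        for _ in range(500):                    # plain gradient descent on cross-entropy
            H, P = forward(X)
            dZ = (P - Y) / len(X)
            dW2, db2 = H.T @ dZ, dZ.sum(axis=0)
            dH = dZ @ W2.T * (1.0 - H ** 2)
            dW1, db1 = X.T @ dH, dH.sum(axis=0)
            W1 -= 0.5 * dW1; b1 -= 0.5 * db1
            W2 -= 0.5 * dW2; b2 -= 0.5 * db2

        pred = forward(X)[1].argmax(axis=1)
        print("training accuracy:", (pred == y).mean())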

  3. Feeding, Swimming and Navigation of Colonial Microorganisms

    NASA Astrophysics Data System (ADS)

    Kirkegaard, Julius; Bouillant, Ambre; Marron, Alan; Leptos, Kyriacos; Goldstein, Raymond

    2016-11-01

    Animals are multicellular in nature, but evolved from unicellular organisms. In the closest relatives of animals, the choanoflagellates, the unicellular species Salpingoeca rosetta has the ability to form colonies, resembling true multicellularity. In this work we use a combination of experiments, theory, and simulations to understand the physical differences that arise from feeding, swimming and navigating as colonies instead of as single cells. We show that the feeding efficiency decreases with colony size for distinct reasons in the small and large Péclet number limits, and we find that swimming as a colony changes the conventional active random walks of microorganisms to stochastic helices, but that this does not hinder effective navigation towards chemoattractants.

  4. Tinnitus, a military epidemic: is hyperbaric oxygen therapy the answer?

    PubMed

    Baldwin, Thomas M

    2009-01-01

    Tinnitus is the phantom perception of sound in the absence of overt acoustic stimulation. Its impact on the military population is alarming. Annually, tinnitus is the most prevalent disability among new cases added to the Veterans Affairs numbers. Also, it is currently the most common disability from the War on Terror. Conventional medical treatments for tinnitus are well documented, but prove to be unsatisfying. Hyperbaric oxygen (HBO2) therapy may improve tinnitus, but the significance of the level of improvement is not clear. There is a case for large randomized trials of high methodological rigor in order to define the true extent of the benefit with the administration of HBO2 therapy for tinnitus.

  5. Temporal trends in inflammatory bowel disease publications over a 19-year period.

    PubMed

    Weintraub, Yael; Mimouni, Francis B; Cohen, Shlomi

    2014-11-28

    To determine whether temporal changes occurred in pediatric vs adult inflammatory bowel disease (IBD) publications, both in terms of the number and type of yearly published articles. We aimed to evaluate all PubMed-registered articles related to the field of IBD from January 1, 1993 until December 31, 2011. We searched for articles using the key words "inflammatory bowel disease" or "Crohn's disease" or "ulcerative colitis" or "undetermined colitis", using the age filters of "child" or "adult". We repeated the search to obtain the total number of articles of each type published in each year of the specified period. We studied randomized controlled trials, clinical trials, case reports, meta-analyses, letters to the editor, reviews, systematic reviews, practice guidelines, and editorials. We identified 44645 articles over the 19-year period. There were 8687 pediatric-tagged articles vs 19750 adult-tagged articles. Thus 16208 articles were unaccounted for and not assigned a "pediatric" or "adult" tag by PubMed. There was a significant, approximately 3-fold increase in both pediatric and adult articles. This significant increase was true for nearly every category of article but the number of clinical trials, meta-analyses, and randomized controlled trials increased proportionally more than the number of "lower quality" articles such as editorials or letters to the editor. Very few guidelines were published every year. There is a yearly linear increase in publications related to IBD. Relatively, there are more and more clinical trials and higher quality articles.

  6. Validity as Process: A Construct Driven Measure of Fidelity of Implementation

    ERIC Educational Resources Information Center

    Jones, Ryan Seth

    2013-01-01

    Estimates of fidelity of implementation are essential to interpret the effects of educational interventions in randomized controlled trials (RCTs). While random assignment protects against many threats to validity, and therefore provides the best approximation to a true counterfactual condition, it does not ensure that the treatment condition…

  7. Is a 'convenience' sample useful for estimating immunization coverage in a small population?

    PubMed

    Weir, Jean E; Jones, Carrie

    2008-01-01

    Rapid survey methodologies are widely used for assessing immunization coverage in developing countries, approximating true stratified random sampling. Non-random ('convenience') sampling is not considered appropriate for estimating immunization coverage rates but has the advantages of low cost and expediency. We assessed the validity of a convenience sample of children presenting to a travelling clinic by comparing the coverage rate in the convenience sample to the true coverage established by surveying each child in three villages in rural Papua New Guinea. The rate of DTF immunization coverage as estimated by the convenience sample was within 10% of the true coverage when the proportion of children in the sample was two-thirds or when only children over the age of one year were counted, but differed by 11% when the sample included only 53% of the children and when all eligible children were included. The convenience sample may be sufficiently accurate for reporting purposes and is useful for identifying areas of low coverage.

  8. Design and evaluation of an aerial spray trial with true replicates to test the efficacy of Bacillus thuringiensis insecticide in a boreal forest.

    PubMed

    Cadogan, Beresford L; Scharbach, Roger D

    2003-04-01

    A field trial using true replicates was conducted successfully in a boreal forest in 1996 to evaluate the efficacy of two aerially applied Bacillus thuringiensis formulations, ABG 6429 and ABG 6430. A complete randomized design with four replicates per treatment was chosen. Twelve to 15 balsam fir (Abies balsamea [L.] Mill.) per plot were randomly selected as sample trees. Interplot buffer zones, > or = 200 m wide, adequately prevented cross contamination from sprays that were atomized with four rotary atomizers (volume median diameters ranging from 64.6 to 139.4 microm) and released approximately 30 m above the ground. The B. thuringiensis formulations were not significantly different (P > 0.05) from each other in reducing spruce budworm (Choristoneura fumiferana [Clem.]) populations and protecting balsam trees from defoliation but both formulations were significantly more efficacious than the controls. The results suggest that true replicates are a feasible alternative to pseudoreplication in experimental forest aerial applications.

  9. Simple chained guide trees give high-quality protein multiple sequence alignments

    PubMed Central

    Boyce, Kieran; Sievers, Fabian; Higgins, Desmond G.

    2014-01-01

    Guide trees are used to decide the order of sequence alignment in the progressive multiple sequence alignment heuristic. These guide trees are often the limiting factor in making large alignments, and considerable effort has been expended over the years in making these quickly or accurately. In this article we show that, at least for protein families with large numbers of sequences that can be benchmarked with known structures, simple chained guide trees give the most accurate alignments. These also happen to be the fastest and simplest guide trees to construct, computationally. Such guide trees have a striking effect on the accuracy of alignments produced by some of the most widely used alignment packages. There is a marked increase in accuracy and a marked decrease in computational time, once the number of sequences goes much above a few hundred. This is true, even if the order of sequences in the guide tree is random. PMID:25002495

  10. General statistics of stochastic process of gene expression in eukaryotic cells.

    PubMed Central

    Kuznetsov, V A; Knott, G D; Bonner, R F

    2002-01-01

    Thousands of genes are expressed at such very low levels (≤1 copy per cell) that global gene expression analysis of rarer transcripts remains problematic. Ambiguity in identification of rarer transcripts creates considerable uncertainty in fundamental questions such as the total number of genes expressed in an organism and the biological significance of rarer transcripts. Knowing the distribution of the true number of genes expressed at each level and the corresponding gene expression level probability function (GELPF) could help resolve these uncertainties. We found that all observed large-scale gene expression data sets in yeast, mouse, and human cells follow a Pareto-like distribution model skewed by many low-abundance transcripts. A novel stochastic model of the gene expression process predicts the universality of the GELPF both across different cell types within a multicellular organism and across different organisms. This model allows us to predict the frequency distribution of all gene expression levels within a single cell and to estimate the number of expressed genes in a single cell and in a population of cells. A random "basal" transcription mechanism for protein-coding genes in all or almost all eukaryotic cell types is predicted. This fundamental mechanism might enhance the expression of rarely expressed genes and, thus, provide a basic level of phenotypic diversity, adaptability, and random monoallelic expression in cell populations. PMID:12136033

  11. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on parameterization and the problems undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for those problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noised data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noised data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
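
    The "PCA with MCMC sampling" idea (roughly Method 2 above) can be sketched on a toy convolution problem: unknown models are written as the ensemble mean plus a few principal components, and only the reduced coefficients are sampled. The Python/NumPy code below is a simplified illustration under assumed settings (five retained components, a made-up smoothing kernel, Gaussian noise), not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        kernel = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 9.0

        def forward(m):
            # simple convolution forward model
            return np.convolve(m, kernel, mode="same")

        # ensemble of prior model realizations used to build the PCA basis
        prior = rng.normal(size=(500, n)).cumsum(axis=1)
        mean = prior.mean(axis=0)
        _, s, Vt = np.linalg.svd(prior - mean, full_matrices=False)
        B = Vt[:5].T                                   # keep 5 principal components
        coef_sd = s[:5] / np.sqrt(len(prior))          # spread of each coefficient

        # synthetic "observed" data from a fresh model realization
        m_true = rng.normal(size=n).cumsum()
        d_obs = forward(m_true) + rng.normal(scale=0.1, size=n)

        def log_post(c, sigma=0.1):
            resid = d_obs - forward(mean + B @ c)
            return -0.5 * np.sum((resid / sigma) ** 2) - 0.5 * np.sum((c / coef_sd) ** 2)

        # random-walk Metropolis over the five reduced coefficients
        c = np.zeros(5)
        lp = log_post(c)
        samples = []
        for _ in range(5000):
            prop = c + rng.normal(scale=0.2 * coef_sd)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                c, lp = prop, lp_prop
            samples.append(c.copy())

        m_est = mean + B @ np.mean(samples[1000:], axis=0)
        print("rms error of posterior-mean model:", np.sqrt(np.mean((m_est - m_true) ** 2)))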

  12. Randomized controlled trial of foot reflexology for patients with symptomatic idiopathic detrusor overactivity.

    PubMed

    Mak, Ho-Leung Jimmy; Cheon, Willy Cecilia; Wong, To; Liu, Yu Sun John; Tong, Wai Mei Anny

    2007-06-01

    The aim of this study was to examine whether foot reflexology has beneficial effects on patients with idiopathic detrusor overactivity. One hundred and nine women with symptomatic idiopathic detrusor overactivity were randomized into either a foot reflexology treatment group or a nonspecific foot massage control group. The primary outcome measure was the change in the diurnal micturition frequency. There was a significant change in daytime micturition frequency in the reflexology group when compared with the massage group (-1.90 vs -0.55, p = 0.029). There was also a decrease in the 24-h micturition frequency in both groups, but the change was not statistically significant (-2.80 vs -1.04, p = 0.055). In the reflexology group, more patients believed they had received "true" reflexology (88.9 vs 67.4%, p = 0.012). This reflects the difficulty of blinding in trials of reflexology. Larger scale studies with a better-designed control group and improved blinding are required to examine if reflexology is effective in improving patients' overall outcome.

  13. Unenhanced respiratory-gated magnetic resonance angiography (MRA) of renal artery in hypertensive patients using true fast imaging with steady-state precession technique compared with contrast-enhanced MRA.

    PubMed

    Zhang, Weisheng; Lin, Jiang; Wang, Shaowu; Lv, Peng; Wang, Lili; Liu, Hao; Chen, Caizhong; Zeng, Mengsu

    2014-01-01

    This study aimed to evaluate the accuracy of "True Fast Imaging with Steady-State Precession" (TrueFISP) MR angiography (MRA) for diagnosis of renal arterial stenosis (RAS) in hypertensive patients. Twenty-two patients underwent both TrueFISP MRA and contrast-enhanced MRA (CE-MRA) on a 1.5-T MR imager. Volume of main renal arteries, length of maximal visible renal arteries, number of visualized branches, stenotic grade, and subjective quality were compared. Paired 2-tailed Student t test and Wilcoxon signed rank test were applied to evaluate the significance of these variables. Volume of main renal arteries, length of maximal visible renal arteries, and number of branches indicated no significant difference between the 2 techniques (P > 0.05). The stenotic degree of 10 RAS was greater on CE-MRA than on TrueFISP MRA. Qualitative scores from TrueFISP MRA were higher than those from CE-MRA (P < 0.05). TrueFISP MRA is a reliable and accurate method for evaluating RAS.

  14. Required number of records for ASCE/SEI 7 ground-motion scaling procedure

    USGS Publications Warehouse

    Reyes, Juan C.; Kalkan, Erol

    2011-01-01

    The procedures and criteria in 2006 IBC (International Council of Building Officials, 2006) and 2007 CBC (International Council of Building Officials, 2007) for the selection and scaling of ground motions for use in nonlinear response history analysis (RHA) of structures are based on ASCE/SEI 7 provisions (ASCE, 2005, 2010). According to ASCE/SEI 7, earthquake records should be selected from events of magnitudes, fault distance, and source mechanisms that comply with the maximum considered earthquake, and then scaled so that the average value of the 5-percent-damped response spectra for the set of scaled records is not less than the design response spectrum over the period range from 0.2Tn to 1.5Tn sec (where Tn is the fundamental vibration period of the structure). If at least seven ground-motions are analyzed, the design values of engineering demand parameters (EDPs) are taken as the average of the EDPs determined from the analyses. If fewer than seven ground-motions are analyzed, the design values of EDPs are taken as the maximum values of the EDPs. ASCE/SEI 7 requires a minimum of three ground-motions. These limits on the number of records in the ASCE/SEI 7 procedure are based on engineering experience, rather than on a comprehensive evaluation. This study statistically examines the required number of records for the ASCE/SEI 7 procedure, such that the scaled records provide accurate, efficient, and consistent estimates of "true" structural responses. Based on elastic-perfectly-plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI 7 scaling procedure is applied to 480 sets of ground-motions. The number of records in these sets varies from three to ten. The records in each set were selected either (i) randomly, (ii) considering their spectral shapes, or (iii) considering their spectral shapes and design spectral-acceleration value, A(Tn). As compared to benchmark (that is, "true") responses from unscaled records using a larger catalog of ground-motions, it is demonstrated that the ASCE/SEI 7 scaling procedure is overly conservative if fewer than seven ground-motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the EDPs accompanied by reduced record-to-record variability of the responses. Consistency in accuracy and efficiency is achieved only if records are selected on the basis of their spectral shape and A(Tn).
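
    The scaling criterion itself (average scaled spectrum not below the design spectrum over 0.2Tn to 1.5Tn) reduces to a simple calculation once response spectra are available. The Python sketch below is a simplified illustration that applies one factor to the whole record set and uses made-up spectra; it is not the full ASCE/SEI 7 procedure and the design spectrum shown is arbitrary.

        import numpy as np

        def asce7_set_scale_factor(periods, record_spectra, design_spectrum, Tn):
            """Smallest single factor such that the average scaled 5%-damped spectrum
            of the record set is not less than the design spectrum on 0.2Tn-1.5Tn."""
            mask = (periods >= 0.2 * Tn) & (periods <= 1.5 * Tn)
            avg = np.asarray(record_spectra).mean(axis=0)
            return float(np.max(design_spectrum[mask] / avg[mask]))

        # illustrative spectra only (not real ground-motion records)
        periods = np.linspace(0.05, 3.0, 60)
        design = 1.0 / (1.0 + periods)
        records = np.abs(np.random.default_rng(0).normal(1.0, 0.3, size=(7, 60))) * design
        print(asce7_set_scale_factor(periods, records, design, Tn=1.0))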

  15. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  16. Randomized Item Response Theory Models

    ERIC Educational Resources Information Center

    Fox, Jean-Paul

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers on sensitive questions. A new method is developed to measure latent variables using the RR technique because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…
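
    The basic randomized response idea can be illustrated with Warner's original design, in which the respondent answers the sensitive question with a known probability and its complement otherwise; the true prevalence is then recovered by a moment estimator. The Python sketch below uses arbitrary illustrative values and does not include the IRT layer developed in the article.

        import random

        random.seed(0)

        def warner_estimate(true_prevalence, p_truth, n):
            # simulate n randomized responses and back out the prevalence estimate
            yes = 0
            for _ in range(n):
                has_trait = random.random() < true_prevalence
                ask_direct = random.random() < p_truth
                yes += has_trait if ask_direct else (not has_trait)
            lam = yes / n                              # observed "yes" proportion
            return (lam - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)

        print(warner_estimate(true_prevalence=0.15, p_truth=0.7, n=20_000))  # near 0.15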

  17. Empirical constrained Bayes predictors accounting for non-detects among repeated measures.

    PubMed

    Moore, Reneé H; Lyles, Robert H; Manatunga, Amita K

    2010-11-10

    When the prediction of subject-specific random effects is of interest, constrained Bayes predictors (CB) have been shown to reduce the shrinkage of the widely accepted Bayes predictor while still maintaining desirable properties, such as optimizing mean-square error subsequent to matching the first two moments of the random effects of interest. However, occupational exposure and other epidemiologic (e.g. HIV) studies often present a further challenge because data may fall below the measuring instrument's limit of detection. Although methodology exists in the literature to compute Bayes estimates in the presence of non-detects (Bayes(ND)), CB methodology has not been proposed in this setting. By combining methodologies for computing CBs and Bayes(ND), we introduce two novel CBs that accommodate an arbitrary number of observable and non-detectable measurements per subject. Based on application to real data sets (e.g. occupational exposure, HIV RNA) and simulation studies, these CB predictors are markedly superior to the Bayes predictor and to alternative predictors computed using ad hoc methods in terms of meeting the goal of matching the first two moments of the true random effects distribution. Copyright © 2010 John Wiley & Sons, Ltd.

  18. The effect of nanowire length and diameter on the properties of transparent, conducting nanowire films

    NASA Astrophysics Data System (ADS)

    Bergin, Stephen M.; Chen, Yu-Hui; Rathmell, Aaron R.; Charbonneau, Patrick; Li, Zhi-Yuan; Wiley, Benjamin J.

    2012-03-01

    This article describes how the dimensions of nanowires affect the transmittance and sheet resistance of a random nanowire network. Silver nanowires with independently controlled lengths and diameters were synthesized with a gram-scale polyol synthesis by controlling the reaction temperature and time. Characterization of films composed of nanowires of different lengths but the same diameter enabled the quantification of the effect of length on the conductance and transmittance of silver nanowire films. Finite-difference time-domain calculations were used to determine the effect of nanowire diameter, overlap, and hole size on the transmittance of a nanowire network. For individual nanowires with diameters greater than 50 nm, increasing diameter increases the electrical conductance to optical extinction ratio, but the opposite is true for nanowires with diameters less than this size. Calculations and experimental data show that for a random network of nanowires, decreasing nanowire diameter increases the number density of nanowires at a given transmittance, leading to improved connectivity and conductivity at high transmittance (>90%). This information will facilitate the design of transparent, conducting nanowire films for flexible displays, organic light emitting diodes and thin-film solar cells.

  19. The variability of software scoring of the CDMAM phantom associated with a limited number of images

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying J.; Van Metter, Richard

    2007-03-01

    Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability of the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
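
    The re-sampling variability the authors describe has a simple binomial core: when only eight of the available images are scored per trial, the detected fraction for a disk fluctuates from trial to trial. The Python sketch below is a deliberately simplified stand-in (a single Bernoulli detection outcome per image with an assumed detection probability), not the full CDCOM/TVT post-analysis.

        import random, statistics

        random.seed(0)

        p_detect = 0.6                      # assumed per-image detection probability
        all_images = [random.random() < p_detect for _ in range(36)]

        estimates = []
        for _ in range(3000):               # repeat the random 8-image selection
            subset = random.sample(all_images, 8)
            estimates.append(sum(subset) / 8.0)

        print("mean detected fraction:", round(statistics.mean(estimates), 3))
        print("sd across 8-image trials:", round(statistics.stdev(estimates), 3))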

  20. Statistical design of quantitative mass spectrometry-based proteomic experiments.

    PubMed

    Oberg, Ann L; Vitek, Olga

    2009-05-01

    We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
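
    The replicate-number calculation mentioned above is often done with a normal-approximation power formula. The Python sketch below implements that textbook formula for a two-group comparison and uses illustrative inputs (a one-unit log2 fold change and an assumed standard deviation), not values from any particular proteomics study.

        from statistics import NormalDist

        def replicates_per_group(sigma, delta, alpha=0.05, power=0.80):
            """Normal-approximation sample size per group: sigma is the within-group
            standard deviation, delta the smallest difference to detect."""
            z = NormalDist()
            n = 2.0 * (z.inv_cdf(1.0 - alpha / 2.0) + z.inv_cdf(power)) ** 2 * (sigma / delta) ** 2
            return int(n) + 1   # round up to a whole replicate

        # e.g. detect a 2-fold change (1 unit on the log2 scale) with sd of 1.2 log2 units
        print(replicates_per_group(sigma=1.2, delta=1.0))   # about 23 per group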

  1. Voluntary, Randomized, Student Drug-Testing: Impact in a Rural, Low-Income, Community

    ERIC Educational Resources Information Center

    Barrington, Kyle D.

    2008-01-01

    Illegal drug use and abuse by the nation's secondary school students is a continuing public health issue and this is especially true for students living in rural, low-income areas where access to intervention and treatment services is often limited. To address this issue, some school districts have implemented voluntary, randomized, student …

  2. Using Small-Scale Randomized Controlled Trials to Evaluate the Efficacy of New Curricular Materials

    ERIC Educational Resources Information Center

    Drits-Esser, Dina; Bass, Kristin M.; Stark, Louisa A.

    2014-01-01

    How can researchers in K-12 contexts stay true to the principles of rigorous evaluation designs within the constraints of classroom settings and limited funding? This paper explores this question by presenting a small-scale randomized controlled trial (RCT) designed to test the efficacy of curricular supplemental materials on epigenetics. The…

  3. Towards Cluster-Assembled Materials of True Monodispersity in Size and Chemical Environment: Synthesis, Dynamics and Activity

    DTIC Science & Technology

    2016-10-27

    AFRL-AFOSR-UK-TR-2016-0037: Towards cluster-assembled materials of true monodispersity in size and chemical environment: Synthesis, Dynamics and Activity. Ulrich Heiz.

  4. Methodological characteristics and treatment effect sizes in oral health randomised controlled trials: Is there a relationship? Protocol for a meta-epidemiological study

    PubMed Central

    Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G; Amin, Maryam; Flores-Mir, Carlos

    2014-01-01

    Introduction It is fundamental that randomised controlled trials (RCTs) are properly conducted in order to reach well-supported conclusions. However, there is emerging evidence that RCTs are subject to biases which can overestimate or underestimate the true treatment effect, due to flaws in the study design characteristics of such trials. The extent to which this holds true in oral health RCTs, which have some unique design characteristics compared to RCTs in other health fields, is unclear. As such, we aim to examine the empirical evidence quantifying the extent of bias associated with methodological and non-methodological characteristics in oral health RCTs. Methods and analysis We plan to perform a meta-epidemiological study, where a sample size of 60 meta-analyses (MAs) including approximately 600 RCTs will be selected. The MAs will be randomly obtained from the Oral Health Database of Systematic Reviews using a random number table; and will be considered for inclusion if they include a minimum of five RCTs, and examine a therapeutic intervention related to one of the recognised dental specialties. RCTs identified in selected MAs will be subsequently included if their study design includes a comparison between an intervention group and a placebo group or another intervention group. Data will be extracted from selected trials included in MAs based on a number of methodological and non-methodological characteristics. Moreover, the risk of bias will be assessed using the Cochrane Risk of Bias tool. Effect size estimates and measures of variability for the main outcome will be extracted from each RCT included in selected MAs, and a two-level analysis will be conducted using a meta-meta-analytic approach with a random effects model to allow for intra-MA and inter-MA heterogeneity. Ethics and dissemination The intended audiences of the findings will include dental clinicians, oral health researchers, policymakers and graduate students. The aforementioned will be introduced to the findings through workshops, seminars, round table discussions and targeted individual meetings. Other opportunities for knowledge transfer will be pursued such as key dental conferences. Finally, the results will be published as a scientific report in a dental peer-reviewed journal. PMID:24568962

  5. QTL mapping of flowering time, fruit size and number in populations involving andromonoecious true lemon cucumber

    USDA-ARS?s Scientific Manuscript database

    Andromonoecious sex expression in cucumber is controlled by the m locus, which encodes 1-aminocyclopropane-1-carboxylic acid synthase (ACS) in the ethylene biosynthesis pathway. This gene seems to have pleiotropic effects on fruit size and number, but the genetic basis is unknown. The True Lemon...

  6. The effectiveness of foot reflexology in inducing ovulation: a sham-controlled randomized trial.

    PubMed

    Holt, Jane; Lord, Jonathan; Acharya, Umesh; White, Adrian; O'Neill, Nyree; Shaw, Steve; Barton, Andy

    2009-06-01

    To determine whether foot reflexology, a complementary therapy, has an effect greater than sham reflexology on induction of ovulation. Sham-controlled randomized trial with patients and statistician blinded. Infertility clinic in Plymouth, United Kingdom. Forty-eight women attending the clinic with anovulation. Women were randomized to receive eight sessions of either genuine foot reflexology or sham reflexology with gentle massage over 10 weeks. The primary outcome was ovulation detected by serum progesterone level of >30 nmol/L during the study period. Twenty-six patients were randomized to genuine reflexology and 22 to sham (one randomized patient was withdrawn). Patients remained blinded throughout the trial. The rate of ovulation during true reflexology was 11 out of 26 (42%), and during sham reflexology it was 10 out of 22 (46%). Pregnancy rates were 4 out of 26 in the true group and 2 out of 22 in the control group. Because of recruitment difficulties, the required sample size of 104 women was not achieved. Patient blinding of reflexology studies is feasible. Although this study was too small to reach a definitive conclusion on the specific effect of foot reflexology, the results suggest that any effect on ovulation would not be clinically relevant. Sham reflexology may have a beneficial general effect, which this study was not designed to detect.

  7. Baryon number, lepton number, and operator dimension in the Standard Model

    DOE PAGES

    Kobach, Andrew

    2016-05-19

    In this study, we prove that for a given operator in the Standard Model (SM) with baryon number ΔB and lepton number ΔL, the operator's dimension is even (odd) if (ΔB - ΔL)/2 is even (odd). Consequently, this establishes the veracity of statements that were long observed or expected to be true, but not proven, e.g., operators with ΔB - ΔL = 0 are of even dimension, ΔB - ΔL must be an even number, etc. These results remain true even if the SM is augmented by any number of right-handed neutrinos with ΔL = 1.
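
    The stated parity relation and two standard examples can be written compactly; the LaTeX lines below are added for illustration (the examples, the dimension-5 Weinberg operator and a dimension-6 baryon-number-violating operator, are textbook cases and are not discussed in the record above).

        d \;\equiv\; \frac{\Delta B - \Delta L}{2} \pmod{2}

        % Weinberg operator (LH)(LH): d = 5,  \Delta B = 0, \Delta L = 2,  (\Delta B - \Delta L)/2 = -1 (odd)
        % Proton-decay operator QQQL: d = 6,  \Delta B = 1, \Delta L = 1,  (\Delta B - \Delta L)/2 = 0 (even)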

  8. A Maximum NEC Criterion for Compton Collimation to Accurately Identify True Coincidences in PET

    PubMed Central

    Chinn, Garry; Levin, Craig S.

    2013-01-01

    In this work, we propose a new method to increase the accuracy of identifying true coincidence events for positron emission tomography (PET). This approach requires 3-D detectors with the ability to position each photon interaction in multi-interaction photon events. When multiple interactions occur in the detector, the incident direction of the photon can be estimated using the Compton scatter kinematics (Compton Collimation). If the difference between the estimated incident direction of the photon relative to a second, coincident photon lies within a certain angular range around colinearity, the line of response between the two photons is identified as a true coincidence and used for image reconstruction. We present an algorithm for choosing the incident photon direction window threshold that maximizes the noise equivalent counts of the PET system. For simulated data, the direction window removed 56%–67% of random coincidences while retaining > 94% of true coincidences from image reconstruction as well as accurately extracted 70% of true coincidences from multiple coincidences. PMID:21317079
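
    The kinematic ingredient of Compton collimation is the relation between the energy deposited in the first interaction and the scatter angle. The Python sketch below is a simplified geometric check for one coincidence, assuming 511 keV incident photons and an arbitrary angular window; it is an illustration of the idea, not the authors' NEC-maximizing algorithm.

        import numpy as np

        ME_C2 = 511.0  # electron rest energy, keV

        def compton_angle(e_deposited, e_incident=511.0):
            # scatter angle implied by the first-interaction energy deposit
            cos_t = 1.0 - ME_C2 * (1.0 / (e_incident - e_deposited) - 1.0 / e_incident)
            return np.arccos(np.clip(cos_t, -1.0, 1.0))

        def consistent_with_lor(p1, p2, p_other, e1, window_rad):
            """Accept the pair if the kinematic angle matches the geometry: the incident
            direction is taken along the candidate LOR (from the opposite photon to p1)
            and the scattered photon travels from p1 to p2."""
            d_in = (p1 - p_other) / np.linalg.norm(p1 - p_other)
            d_sc = (p2 - p1) / np.linalg.norm(p2 - p1)
            geom = np.arccos(np.clip(d_in @ d_sc, -1.0, 1.0))
            return abs(geom - compton_angle(e1)) < window_rad

        # toy positions (cm) and a 150 keV first deposit, ~10 degree window
        p1, p2 = np.array([10.0, 0.0, 0.0]), np.array([10.5, 1.0, 0.0])
        p_other = np.array([-10.0, 0.2, 0.0])
        print(consistent_with_lor(p1, p2, p_other, e1=150.0, window_rad=np.radians(10)))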

  9. Increased cognitive load enables unlearning in procedural category learning.

    PubMed

    Crossley, Matthew J; Maddox, W Todd; Ashby, F Gregory

    2018-04-19

    Interventions for drug abuse and other maladaptive habitual behaviors may yield temporary success but are often fragile and relapse is common. This implies that current interventions do not erase or substantially modify the representations that support the underlying addictive behavior; that is, they do not cause true unlearning. One example of an intervention that fails to induce true unlearning comes from Crossley, Ashby, and Maddox (2013, Journal of Experimental Psychology: General), who reported that a sudden shift to random feedback did not cause unlearning of category knowledge obtained through procedural systems, and they also reported results suggesting that this failure is because random feedback is noncontingent on behavior. These results imply the existence of a mechanism that (a) estimates feedback contingency and (b) protects procedural learning from modification when feedback contingency is low (i.e., during random feedback). This article reports the results of an experiment in which increasing cognitive load via an explicit dual task during the random feedback period facilitated unlearning. This result is consistent with the hypothesis that the mechanism that protects procedural learning when feedback contingency is low depends on executive function. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. No consistent effect of plant diversity on productivity

    USGS Publications Warehouse

    Huston, M.A.; Aarssen, L.W.; Austin, M.P.; Cade, B.S.; Fridley, J.D.; Garnier, E.; Grime, J.P.; Hodgson, J.; Lauenroth, W.K.; Thompson, K.; Vandermeer, J.H.; Wardle, D.A.

    2000-01-01

    Hector et al. (1) reported on BIODEPTH, a major international experiment on the response of plant productivity to variation in the number of plant species. They found “an overall log-linear reduction of average aboveground biomass with loss of species,” leading to what the accompanying Perspective (2) described as “a rule of thumb—that each halving of diversity leads to a 10 to 20% reduction in productivity.” These conclusions, if true, imply that the continuing high rate of plant extinction threatens the future productivity of Earth's natural and managed ecosystems and could impair their ability to produce resources essential for human survival and to regulate the concentration of atmospheric CO2.The three sites with proper experimental design (Portugal, Sweden, and Sheffield) all showed significant positive regressions of productivity across two or three doublings of species richness [Fig. 1; (12)]. This is the pattern expected from random selection from a set of objects with different properties (13–15), because the probability of including any specific member of the set—such as a plant species that grows rapidly or fixes nitrogen—increases with the number of objects selected. Such a pattern, found consistently in randomly assembled experimental plant communities but only rarely in natural plant communities (4, 5,13–15), has been identified as a statistical artifact of experimental design (5, 13, 14). Although one study (15) suggested that the pattern constitutes a natural mechanism by which diversity affects productivity, this requires the biologically unrealistic assumption that plant communities are randomly assembled with respect to productivity (5).

  11. Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading

    NASA Technical Reports Server (NTRS)

    Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.

    1998-01-01

    The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
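
    The mission-mix simulation described above (random mission sequence, Poisson-distributed repeat counts) is easy to prototype. The Python sketch below uses hypothetical mission names and an assumed mean occurrence rate purely for illustration; the stress and temperature profiles attached to each mission in the actual methodology are omitted.

        import math, random

        random.seed(0)

        MISSIONS = ["mission_A", "mission_B", "mission_C", "mission_D"]  # hypothetical types
        MEAN_REPEATS = 5.0                                               # assumed mean rate

        def poisson(lam):
            # Knuth's method; adequate for small means
            limit, k, p = math.exp(-lam), 0, 1.0
            while p > limit:
                k += 1
                p *= random.random()
            return k - 1

        def random_history(n_blocks):
            """One realization of a mission history: a random sequence of mission types,
            each repeated a Poisson-distributed number of times."""
            history = []
            for _ in range(n_blocks):
                history.extend([random.choice(MISSIONS)] * poisson(MEAN_REPEATS))
            return history

        print(random_history(5))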

  12. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  13. A statistical treatment of bioassay pour fractions

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack; Hughes, David

    A bioassay is a method for estimating the number of bacterial spores on a spacecraft surface for the purpose of demonstrating compliance with planetary protection (PP) requirements (Ref. 1). The details of the process may be seen in the appropriate PP document (e.g., for NASA, Ref. 2). In general, the surface is mechanically sampled with a damp sterile swab or wipe. The completion of the process is colony formation in a growth medium in a plate (Petri dish); the colonies are counted. Consider a set of samples from randomly selected, known areas of one spacecraft surface, for simplicity. One may calculate the mean and standard deviation of the bioburden density, which is the ratio of counts to area sampled. The standard deviation represents an estimate of the variation from place to place of the true bioburden density commingled with the precision of the individual sample counts. The accuracy of individual sample results depends on the equipment used, the collection method, and the culturing method. One aspect that greatly influences the result is the pour fraction, which is the quantity of fluid added to the plates divided by the total fluid used in extracting spores from the sampling equipment. In an analysis of a single sample’s counts due to the pour fraction, one seeks to answer the question: if a certain number of spores is counted with a known pour fraction, what is the probability that an additional number of spores remains in the part of the rinse not poured? This is given for specific values by the binomial distribution density, where detection (of culturable spores) is success and the probability of success is the pour fraction. A special summation over the binomial distribution, equivalent to adding for all possible values of the true total number of spores, is performed. This distribution, when normalized, will almost yield the desired quantity. It is the probability that the additional number of spores does not exceed a certain value. Of course, for a desired value of uncertainty, one must invert the calculation. However, this probability of finding exactly the number of spores in the poured part is correct only in the case where all values of the true number of spores greater than or equal to the adjusted count are equally probable. This is not realistic, of course, but the result can only overestimate the uncertainty. So it is useful. In probabilistic terms, one has the conditional probability given any true total number of spores. Therefore one must multiply it by the probability of each possible true count, before the summation. If the counts for a sample set (of which this is one sample) are available, one may use the calculated variance and the normal probability distribution. In this approach, one assumes a normal distribution and neglects the contribution from spatial variation. The former is a common assumption. The latter can only add to the conservatism (overestimate the number of spores at some level of confidence). A more straightforward approach is to assume a Poisson probability distribution for the measured total sample set counts, and use the product of the number of samples and the mean number of counts per sample as the mean of the Poisson distribution. It is necessary to set the total count to 1 in the Poisson distribution when the actual total count is zero.
Finally, even when the planetary protection requirements for spore burden refer only to the mean values, they require an adjustment for pour fraction and method efficiency (a PP specification based on independent data). The adjusted mean values are a 50/50 proposition (e.g., the probability of the true total counts in the sample set exceeding the estimate is 0.50). However, this is highly unconservative when the total counts are zero. No adjustment to the mean values occurs for either pour fraction or efficiency. The recommended approach is once again to set the total counts to 1, but now applied to the mean values. Then one may apply the corrections to the revised counts. It can be shown by the methods developed in this work that this change is usually conservative enough to increase the level of confidence in the estimate to 0.5. 1. NASA. (2005) Planetary protection provisions for robotic extraterrestrial missions. NPR 8020.12C, April 2005, National Aeronautics and Space Administration, Washington, DC. 2. NASA. (2010) Handbook for the Microbiological Examination of Space Hardware, NASA-HDBK-6022, National Aeronautics and Space Administration, Washington, DC.
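
    As a worked illustration of the pour-fraction adjustment sketched above: if each culturable spore ends up on a poured plate with probability equal to the pour fraction, and all true totals at or above the observed count are taken as equally probable (the conservative case discussed in the abstract), the number of unpoured spores follows a negative binomial distribution. The count and pour fraction below are hypothetical, not values from the abstract.

        from scipy.stats import nbinom

        counted = 12          # hypothetical colonies counted on the poured plates
        pour_fraction = 0.8   # hypothetical fraction of the rinse actually plated

        # with a flat prior over the true total, P(m additional spores | counted k)
        # = C(k+m, k) * f^(k+1) * (1-f)^m, i.e. negative binomial with n = k+1, p = f
        extra = nbinom(counted + 1, pour_fraction)

        for m_max in (0, 2, 5, 10):
            print(m_max, extra.cdf(m_max))   # P(no more than m_max spores were unpoured)

        print("95% upper bound on unpoured spores:", extra.ppf(0.95))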

  14. Multiwavelength ytterbium-Brillouin random Rayleigh feedback fiber laser

    NASA Astrophysics Data System (ADS)

    Wu, Han; Wang, Zinan; Fan, Mengqiu; Li, Jiaqi; Meng, Qingyang; Xu, Dangpeng; Rao, Yunjiang

    2018-03-01

    In this letter, we experimentally demonstrate the multiwavelength ytterbium-Brillouin random fiber laser for the first time, in the half-open cavity formed by a fiber loop mirror and randomly distributed Rayleigh mirrors. With a cladding-pumped ytterbium-doped fiber and a long TrueWave fiber, the narrow linewidth Brillouin pump can generate multiple Brillouin Stokes lines with hybrid ytterbium-Brillouin gain. Up to six stable channels with a spacing of about 0.06 nm are obtained. This work extends the operation wavelength of the multiwavelength Brillouin random fiber laser to the 1 µm band, and has potential in various applications.

  15. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10⁶-10⁹, the number of reactive molecules even in diluted systems might be on the order of a fraction of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but also may lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows getting the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a limited number of particles.
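
    To make the kernel idea concrete, the fragment below sketches, in one dimension, one simple way a reaction probability for an A-B particle pair could be built from the overlap of two Gaussian kernels (their convolution evaluated at the particle separation). The bandwidth, rate constant, and particle numbers are assumed values for illustration and do not come from the study; the actual scheme in the paper may differ in detail.

        import numpy as np

        def kernel_overlap(xa, xb, h):
            """Co-location density of two 1-D Gaussian kernels of bandwidth h,
            i.e. their convolution evaluated at the particle separation."""
            d = xa - xb
            return np.exp(-d**2 / (4.0 * h**2)) / np.sqrt(4.0 * np.pi * h**2)

        rng = np.random.default_rng(0)
        xA = rng.uniform(0.0, 1.0, 200)        # hypothetical A-particle positions
        xB = rng.uniform(0.0, 1.0, 200)        # hypothetical B-particle positions
        h = 0.02                               # kernel bandwidth (could track the diffusive scale)
        kf, dt, mass = 1.0, 1e-3, 1.0 / 200    # assumed rate constant, time step, particle mass

        # first-order (small kf*dt) reaction probability for every A-B pair
        p_react = kf * dt * mass * kernel_overlap(xA[:, None], xB[None, :], h)
        print("expected reactions this step:", p_react.sum())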

  16. Psychometric Functioning of the MMPI-2-RF VRIN-r and TRIN-r Scales with Varying Degrees of Randomness, Acquiescence, and Counter-Acquiescence

    ERIC Educational Resources Information Center

    Handel, Richard W.; Ben-Porath, Yossef S.; Tellegen, Auke; Archer, Robert P.

    2010-01-01

    In the present study, the authors evaluated the effects of increasing degrees of simulated non-content-based (random or fixed) responding on scores on the newly developed Variable Response Inconsistency-Revised (VRIN-r) and True Response Inconsistency-Revised (TRIN-r) scales of the Minnesota Multiphasic Personality Inventory-2 Restructured Form…

  17. A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes

    Treesearch

    Andrew D. Richardson; David Y. Hollinger; George G. Burba; Kenneth J. Davis; Lawrence B. Flanagan; Gabriel G. Katul; J. William Munger; Daniel M. Ricciuto; Paul C. Stoy; Andrew E. Suyker; Shashi B. Verma; Steven C. Wofsy

    2006-01-01

    Measured surface-atmosphere fluxes of energy (sensible heat, H, and latent heat, LE) and CO2 (FCO2) represent the "true" flux plus or minus potential random and systematic measurement errors. Here, we use data from seven sites in the AmeriFlux network, including five forested sites (two of which include "tall tower" instrumentation), one grassland site, and one...

  18. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical as well as clinical trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) method and of expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data on theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
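
    For reference, the two error measures used above are easy to compute once true and estimated parameter values are in hand; the following minimal sketch uses one common definition of each measure, with a hypothetical true clearance and simulated replicate estimates (not data from the study).

        import numpy as np

        def ree(est, true):
            """Relative estimation error (%) of each estimate."""
            return 100.0 * (np.asarray(est, dtype=float) - true) / true

        def rrmse(est, true):
            """Relative root mean squared error (%) across replicate estimates."""
            est = np.asarray(est, dtype=float)
            return 100.0 * np.sqrt(np.mean(((est - true) / true) ** 2))

        rng = np.random.default_rng(3)
        cl_true = 2.0                                              # hypothetical true clearance (L/h)
        cl_est = cl_true * np.exp(rng.normal(0.02, 0.15, 100))     # simulated replicate estimates
        print(round(rrmse(cl_est, cl_true), 1), round(float(np.mean(ree(cl_est, cl_true))), 1))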

  19. Recursive regularization for inferring gene networks from time-course gene expression profiles

    PubMed Central

    Shimamura, Teppei; Imoto, Seiya; Yamaguchi, Rui; Fujita, André; Nagasaki, Masao; Miyano, Satoru

    2009-01-01

    Background: Inferring gene networks from time-course microarray experiments with a vector autoregressive (VAR) model is the process of identifying functional associations between genes through multivariate time series. This problem can be cast as a variable selection problem in statistics. One of the promising methods for variable selection is the elastic net proposed by Zou and Hastie (2005). However, VAR modeling with the elastic net succeeds in increasing the number of true positives while it also results in increasing the number of false positives. Results: By incorporating the relative importance of the VAR coefficients into the elastic net, we propose a new class of regularization, called the recursive elastic net, to increase the capability of the elastic net and estimate gene networks based on the VAR model. The recursive elastic net can reduce the number of false positives gradually by updating the importance. Numerical simulations and comparisons demonstrate that the proposed method succeeds in reducing the number of false positives drastically while keeping a high number of true positives in the network inference, and achieves a true discovery rate (the proportion of true positives among the selected edges) two or more times higher than the competing methods even when the number of time points is small. We also compared our method with various reverse-engineering algorithms on experimental data of MCF-7 breast cancer cells stimulated with two ErbB ligands, EGF and HRG. Conclusion: The recursive elastic net is a powerful tool for inferring gene networks from time-course gene expression profiles. PMID:19386091

  20. Utility of the Conners' Adult ADHD Rating Scale validity scales in identifying simulated attention-deficit hyperactivity disorder and random responding.

    PubMed

    Walls, Brittany D; Wallace, Elizabeth R; Brothers, Stacey L; Berry, David T R

    2017-12-01

    Recent concern about malingered self-report of symptoms of attention-deficit hyperactivity disorder (ADHD) in college students has resulted in an urgent need for scales that can detect feigning of this disorder. The present study provided further validation data for a recently developed validity scale for the Conners' Adult ADHD Rating Scale (CAARS), the CAARS Infrequency Index (CII), as well as for the Inconsistency Index (INC). The sample included 139 undergraduate students: 21 individuals with diagnoses of ADHD, 29 individuals responding honestly, 54 individuals responding randomly (full or half), and 35 individuals instructed to feign. Overall, the INC showed moderate sensitivity to random responding (.44-.63) and fairly high specificity to ADHD (.86-.91). The CII demonstrated modest sensitivity to feigning (.31-.46) and excellent specificity to ADHD (.91-.95). Sequential application of validity scales had correct classification rates of honest (93.1%), ADHD (81.0%), feigning (57.1%), half random (42.3%), and full random (92.9%). The present study suggests that the CII is modestly sensitive (true positive rate) to feigned ADHD symptoms, and highly specific (true negative rate) to ADHD. Additionally, this study highlights the utility of applying the CAARS validity scales in a sequential manner for identifying feigning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Polynomial order selection in random regression models via penalizing adaptively the likelihood.

    PubMed

    Corrales, J D; Munilla, S; Cantet, R J C

    2015-08-01

    Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified the true LP order with probability one for the additive genetic and permanent environmental effects, but AIC tended to favour over-parameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates. © 2015 Blackwell Verlag GmbH.

  2. Turbulence modeling and combustion simulation in porous media under high Peclet number

    NASA Astrophysics Data System (ADS)

    Moiseev, Andrey A.; Savin, Andrey V.

    2018-05-01

    Turbulence modelling of flow and combustion in porous media is still not completely understood. Conventional turbulence models should work well at high Peclet numbers when the shape of the porous channels is resolved in detail. Nevertheless, true turbulent mixing takes place only at the micro-scale, while dispersion mixing acts at the macro-scale almost independently of the true turbulence. The dispersion mechanism is characterized by a definite spatial scale (the scale of the porous structure) and a definite velocity scale (the filtration velocity). The porous structure is usually stochastic, which permits an analogy between true turbulence, stochastic in space and time, and the dispersion flow, stochastic in space only, when the porous flow is simulated at the macro-scale level. This analogy also allows well-known turbulent combustion models to be applied in simulations of porous combustion at high Peclet numbers.

  3. The study designed by a committee: design of the Multisite Violence Prevention Project.

    PubMed

    Henry, David B; Farrell, Albert D

    2004-01-01

    This article describes the research design of the Multisite Violence Prevention Project (MVPP), organized and funded by the National Center for Injury Prevention and Control (NCIPC) at the Centers for Disease Control and Prevention (CDC). CDC's objectives, refined in the course of collaboration among investigators, were to evaluate the efficacy of universal and targeted interventions designed to produce change at the school level. The project's design was developed collaboratively, and is a 2 x 2 cluster-randomized true experimental design in which schools within four separate sites were assigned randomly to four conditions: (1) no-intervention control group, (2) universal intervention, (3) targeted intervention, and (4) combined universal and targeted interventions. A total of 37 schools are participating in this study with 8-12 schools per site. The impact of the interventions on two successive cohorts of sixth-grade students will be assessed based on multiple waves of data from multiple sources of information, including teachers, students, parents, and archival data. The nesting of students within teachers, families, schools and sites created a number of challenges for designing and implementing the study. The final design represents both resolution and compromise on a number of creative tensions existing in large-scale prevention trials, including tensions between cost and statistical power, and between internal and external validity. Strengths and limitations of the final design are discussed.

  4. The Study Designed by a Committee

    PubMed Central

    Henry, David B.; Farrell, Albert D.

    2009-01-01

    This article describes the research design of the Multisite Violence Prevention Project (MVPP), organized and funded by the National Center for Injury Prevention and Control (NCIPC) at the Centers for Disease Control and Prevention (CDC). CDC's objectives, refined in the course of collaboration among investigators, were to evaluate the efficacy of universal and targeted interventions designed to produce change at the school level. The project's design was developed collaboratively, and is a 2 × 2 cluster-randomized true experimental design in which schools within four separate sites were assigned randomly to four conditions: (1) no-intervention control group, (2) universal intervention, (3) targeted intervention, and (4) combined universal and targeted interventions. A total of 37 schools are participating in this study with 8–12 schools per site. The impact of the interventions on two successive cohorts of sixth-grade students will be assessed based on multiple waves of data from multiple sources of information, including teachers, students, parents, and archival data. The nesting of students within teachers, families, schools and sites created a number of challenges for designing and implementing the study. The final design represents both resolution and compromise on a number of creative tensions existing in large-scale prevention trials, including tensions between cost and statistical power, and between internal and external validity. Strengths and limitations of the final design are discussed. PMID:14732183

  5. Was RA Fisher Right?

    PubMed

    Srivastava, Ayush; Srivastava, Anurag; Pandey, Ravindra M

    2017-10-01

    Randomized controlled trials have become the most respected scientific tool to measure the effectiveness of a medical therapy. The design, conduct and analysis of randomized controlled trials were developed by Sir Ronald A. Fisher, a mathematician in Great Britain. Fisher propounded that the process of randomization would equally distribute all the known and even unknown covariates in the two or more comparison groups, so that any difference observed could be ascribed to the treatment effect. Today, we observe that in many situations this prediction of Fisher's does not hold true; hence, adaptive randomization schedules have been designed to adjust for major imbalance in important covariates. The present essay unravels some weaknesses inherent in the Fisherian concept of the randomized controlled trial.

  6. Nuclear data correlation between different isotopes via integral information

    NASA Astrophysics Data System (ADS)

    Rochman, Dimitri A.; Bauge, Eric; Vasiliev, Alexander; Ferroukhi, Hakim; Perret, Gregory

    2018-05-01

    This paper presents a Bayesian approach based on integral experiments to create correlations between different isotopes which do not appear with differential data. A simple Bayesian set of equations is presented with random nuclear data, similarly to the usual methods applied with differential data. As a consequence, updated nuclear data (cross sections, ν, fission neutron spectra and covariance matrices) are obtained, leading to better integral results. An example for ²³⁵U and ²³⁸U is proposed taking into account the Bigten criticality benchmark.

  7. Effect of acupuncture for radioactive-iodine-induced anorexia in thyroid cancer patients: a randomized, double-blinded, sham-controlled pilot study.

    PubMed

    Jeon, Ju-Hyun; Yoon, Jeungwon; Cho, Chong-Kwan; Jung, In-Chul; Kim, Sungchul; Lee, Suk-Hoon; Yoo, Hwa-Seung

    2015-05-01

    The aim of this study is to evaluate the efficacy and safety of acupuncture for radioactive iodine (RAI)-induced anorexia in thyroid cancer patients. Fourteen thyroid cancer patients with RAI-induced anorexia were randomized to a true acupuncture or sham acupuncture group. Both groups were given 6 true or sham acupuncture treatments in 2 weeks. Outcome measures included the change of the Functional Assessment of Anorexia and Cachexia Treatment (FAACT; Anorexia/Cachexia Subscale [ACS], Functional Assessment of Cancer Therapy-General [FACT-G]), Visual Analogue Scale (VAS), weight, body mass index (BMI), ACTH, and cortisol levels. The mean FAACT ACS scores of the true and sham acupuncture groups increased from baseline to exit in intention-to-treat (ITT) and per protocol (PP) analyses; the true acupuncture group showed a higher increase, but without statistical significance. Between groups, from baseline to the last treatment, statistically significant differences were found in the ITT analysis of the Trial Outcome Index (TOI) score (P = .034) and in the PP analyses of the TOI (P = .016), FACT-G (P = .045), and FAACT (P = .037) scores. There was no significant difference in VAS, weight, BMI, ACTH, and cortisol level changes between groups. Although the current study is based on a small sample of participants, our findings support the safety and potential use of acupuncture for RAI-induced anorexia and quality of life in thyroid cancer patients. © The Author(s) 2015.

  8. Classifier performance prediction for computer-aided diagnosis using a limited dataset.

    PubMed

    Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir

    2008-04-01

    In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of the different resampling techniques in training the classifier and predicting its performance under the constraint of a finite-sized sample. The true population for the two classes was assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely, the ordinary, 0.632, and 0.632+ bootstrap. Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n₂ from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n₂ or 3n₂. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under these conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and the true performances obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than those obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
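
    As a rough sketch of the kind of resampling compared above, the fragment below computes an ordinary 0.632 bootstrap estimate of AUC for a linear discriminant trained on a small two-class Gaussian sample (the 0.632+ variant additionally adjusts the weighting for overfitting). The sample size, dimensionality, and class separation are arbitrary choices for illustration, not the study's settings.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n, d = 50, 5                                   # small sample, moderate dimensionality
        X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(0.5, 1.0, (n, d))])
        y = np.r_[np.zeros(n), np.ones(n)]

        clf = LinearDiscriminantAnalysis().fit(X, y)
        auc_resub = roc_auc_score(y, clf.decision_function(X))   # optimistic apparent AUC

        oob_aucs = []
        for _ in range(200):                           # bootstrap replicates
            idx = rng.integers(0, len(y), len(y))      # draw a bootstrap sample
            oob = np.setdiff1d(np.arange(len(y)), idx) # out-of-bag cases act as a test set
            if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
                continue
            m = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
            oob_aucs.append(roc_auc_score(y[oob], m.decision_function(X[oob])))

        auc_632 = 0.368 * auc_resub + 0.632 * np.mean(oob_aucs)
        print(round(auc_resub, 3), round(float(np.mean(oob_aucs)), 3), round(auc_632, 3))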

  9. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  10. A new approach to evaluate gamma-ray measurements

    NASA Technical Reports Server (NTRS)

    Dejager, O. C.; Swanepoel, J. W. H.; Raubenheimer, B. C.; Vandervalt, D. J.

    1985-01-01

    Misunderstandings about the term "random sample" and its implications may easily arise. Conditions under which the phases, obtained from arrival times, do not form a random sample and the dangers involved are discussed. Watson's U² test for uniformity is recommended for light curves with duty cycles larger than 10%. Under certain conditions, non-parametric density estimation may be used to determine estimates of the true light curve and its parameters.

  11. Prevalence and risk factors of Coxiella burnetii seropositivity in Danish beef and dairy cattle at slaughter adjusted for test uncertainty.

    PubMed

    Paul, Suman; Agger, Jens F; Agerholm, Jørgen S; Markussen, Bo

    2014-03-01

    Antibodies to Coxiella burnetii have been found in the Danish dairy cattle population with high levels of herd and within herd seroprevalences. However, the prevalence of antibodies to C. burnetii in Danish beef cattle remains unknown. The objectives of this study were to (1) estimate the prevalence and (2) identify risk factors associated with C. burnetii seropositivity in Danish beef and dairy cattle based on sampling at slaughter. Eight hundred blood samples from slaughtered cattle were collected from six Danish slaughterhouses from August to October 2012 following a random sampling procedure. Blood samples were tested by a commercially available C. burnetii antibody ELISA kit. A sample was defined as positive if the sample-to-positive ratio was greater than or equal to 40. Animal and herd information were extracted from the Danish Cattle Database. Apparent (AP) and true prevalences (TPs) specific for breed, breed groups, gender and herd type, and breed-specific true prevalences with a random effect of breed, were estimated in a Bayesian framework. A Bayesian logistic regression model was used to identify risk factors of C. burnetii seropositivity. Test sensitivity and specificity estimates from a previous study involving Danish dairy cattle were used to generate prior information. The prevalence was significantly higher in dairy breeds (AP=9.11%; TP=9.45%) than in beef breeds (AP=4.32%; TP=3.54%), in females (AP=9.10%; TP=9.40%) than in males (AP=3.62%; TP=2.61%) and in dairy herds (AP=15.10%; TP=16.67%) compared to beef herds (AP=4.54%; TP=3.66%). The Bayesian logistic regression model identified breed group, along with age and number of movements, as contributors to C. burnetii seropositivity. The risk of seropositivity increased with age and increasing number of movements between herds. Results indicate that seroprevalence of C. burnetii is lower in cattle sent for slaughter than in Danish dairy cows in production units. A greater proportion of this prevalence is attributed to slaughtered cattle of dairy breeds or cattle raised in dairy herds rather than beef breeds. Copyright © 2014 Elsevier B.V. All rights reserved.
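
    The relation between apparent and true prevalence used above can be illustrated with the classical Rogan-Gladen correction; the study itself used a full Bayesian model with priors on test sensitivity and specificity, so the sketch below is only a rough stand-in and the test characteristics are assumed, not the study's values.

        def true_prevalence(apparent, sensitivity, specificity):
            """Rogan-Gladen adjustment of an apparent prevalence for test error."""
            return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

        se, sp = 0.95, 0.99                      # hypothetical ELISA sensitivity/specificity
        for ap in (0.0911, 0.0432, 0.1510):      # apparent prevalences reported above
            print(ap, round(true_prevalence(ap, se, sp), 4))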

  12. Identifying factors influencing contraceptive use in Bangladesh: evidence from BDHS 2014 data.

    PubMed

    Hossain, M B; Khan, M H R; Ababneh, F; Shaw, J E H

    2018-01-30

    Birth control is the conscious control of the birth rate by methods which temporarily prevent conception by interfering with the normal process of ovulation, fertilization, and implantation. A high contraceptive prevalence rate is expected to help control births in countries experiencing high population growth rates. The factors that influence contraceptive prevalence are also important to know for policy purposes in Bangladesh. This study aims to explore the socio-economic, demographic and other key factors that influence the use of contraception in Bangladesh. The contraception data are extracted from the 2014 Bangladesh Demographic and Health Survey (BDHS), which was collected using a two-stage stratified random sampling technique that is a source of nested variability. The nested sources of variability must be incorporated in the model using random effects in order to model the actual parameter effects on contraceptive prevalence. A mixed effect logistic regression model has been implemented for the binary contraceptive data, where parameters are estimated through generalized estimating equations assuming an exchangeable correlation structure, to explore and identify the factors that truly affect the use of contraception in Bangladesh. The prevalence of contraception use by currently married women aged 15-49 years or their husbands is 62.4%. Our study finds that administrative division, place of residence, religion, number of household members, woman's age, occupation, body mass index, breastfeeding practice, husband's education, wish for children, living status with wife, sexual activity in the past year, women's amenorrheic status, abstaining status, number of children born in the last five years and total number of children who have died were significantly associated with contraception use in Bangladesh. The odds of women experiencing the outcome of interest are not independent due to the nested structure of the data. As a result, a mixed effect model is implemented for the binary variable 'contraceptive use' to produce true estimates for the significant determinants of contraceptive use in Bangladesh. Knowing such true estimates is important for attaining future goals, including the Bangladesh government's Health, Population & Nutrition Sector Development Program (HPNSDP) target of increasing contraception use from 62 to 75% by 2020.
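
    A minimal sketch of the kind of model described above (a logistic model for a binary outcome fitted by generalized estimating equations with an exchangeable working correlation to absorb cluster-level variability) is shown below using statsmodels. The data are synthetic and the single covariate is only a placeholder for the many determinants listed in the abstract.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n_clusters, m = 100, 20                        # hypothetical survey clusters, women per cluster
        cluster = np.repeat(np.arange(n_clusters), m)
        age = rng.uniform(15, 49, n_clusters * m)
        u = rng.normal(0.0, 0.5, n_clusters)[cluster]  # cluster-level heterogeneity
        p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.04 * (age - 30) + u)))
        use = rng.binomial(1, p)                       # 1 = using contraception

        df = pd.DataFrame({"use": use, "age": age, "cluster": cluster})
        gee = sm.GEE.from_formula("use ~ age", groups="cluster", data=df,
                                  family=sm.families.Binomial(),
                                  cov_struct=sm.cov_struct.Exchangeable())
        print(gee.fit().summary())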

  13. Content analysis of false and misleading claims in television advertising for prescription and nonprescription drugs.

    PubMed

    Faerber, Adrienne E; Kreling, David H

    2014-01-01

    False and misleading advertising for drugs can harm consumers and the healthcare system, and previous research has demonstrated that physician-targeted drug advertisements may be misleading. However, there is a dearth of research comparing consumer-targeted drug advertising to evidence to evaluate whether misleading or false information is being presented in these ads. To compare claims in consumer-targeted television drug advertising to evidence, in order to evaluate the frequency of false or misleading television drug advertising targeted to consumers. A content analysis of a cross-section of television advertisements for prescription and nonprescription drugs aired from 2008 through 2010. We analyzed commercial segments containing prescription and nonprescription drug advertisements randomly selected from the Vanderbilt Television News Archive, a census of national news broadcasts. For each advertisement, the most-emphasized claim in each ad was identified based on claim iteration, mode of communication, duration and placement. This claim was then compared to evidence by trained coders, and categorized as being objectively true, potentially misleading, or false. Potentially misleading claims omitted important information, exaggerated information, made lifestyle associations, or expressed opinions. False claims were factually false or unsubstantiated. Of the most emphasized claims in prescription (n = 84) and nonprescription (n = 84) drug advertisements, 33 % were objectively true, 57 % were potentially misleading and 10 % were false. In prescription drug ads, there were more objectively true claims (43 %) and fewer false claims (2 %) than in nonprescription drug ads (23 % objectively true, 7 % false). There were similar numbers of potentially misleading claims in prescription (55 %) and nonprescription (61 %) drug ads. Potentially misleading claims are prevalent throughout consumer-targeted prescription and nonprescription drug advertising on television. These results are in conflict with proponents who argue the social value of drug advertising is found in informing consumers about drugs.

  14. True orbit simulation of piecewise linear and linear fractional maps of arbitrary dimension using algebraic numbers

    NASA Astrophysics Data System (ADS)

    Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji

    2015-06-01

    We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.

  15. Empirical likelihood inference in randomized clinical trials.

    PubMed

    Zhang, Biao

    2017-01-01

    In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, yet is at least as efficient as the existing efficient adjusted estimators [1] for any given treatment-specific working regression models whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.

  16. Ranked solutions to a class of combinatorial optimizations—with applications in mass spectrometry based peptide sequencing and a variant of directed paths in random media

    NASA Astrophysics Data System (ADS)

    Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo

    2005-08-01

    Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.

  17. Large N Limits in Tensor Models: Towards More Universality Classes of Colored Triangulations in Dimension d≥2

    NASA Astrophysics Data System (ADS)

    Bonzom, Valentin

    2016-07-01

    We review an approach which aims at studying discrete (pseudo-)manifolds in dimension d≥ 2 and called random tensor models. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is not always true anymore when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes. While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum field theory.

  18. Efficient prediction designs for random fields.

    PubMed

    Müller, Werner G; Pronzato, Luc; Rendas, Joao; Waldl, Helmut

    2015-03-01

    For estimation and predictions of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of true uncertainty. Experimental designs based on more elaborate criteria that are appropriate for empirical kriging (EK) are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion inspired by an equivalence theorem type relation to build designs quasi-optimal for the EK variance when space-filling designs become unsuitable. Two algorithms are proposed, one relying on stochastic optimization to explicitly identify the Pareto front, whereas the second uses the surrogate criteria as local heuristic to choose the points at which the (costly) true EK variance is effectively computed. We illustrate the performance of the algorithms presented on both a simple simulated example and a real oceanographic dataset. © 2014 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.

  19. Automated detection of masses on whole breast volume ultrasound scanner: false positive reduction using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Yuya; Muramatsu, Chisako; Kobayashi, Hironobu; Hara, Takeshi; Fujita, Hiroshi

    2017-03-01

    Breast cancer screening with mammography and ultrasonography is expected to improve sensitivity compared with mammography alone, especially for women with dense breasts. An automated breast volume scanner (ABVS) provides operator-independent whole breast data which facilitate double reading and comparison with past exams, the contralateral breast, and multimodality images. However, large volumetric data in screening practice increase radiologists' workload. Therefore, our goal is to develop a computer-aided detection scheme for breast masses in ABVS data for assisting radiologists' diagnosis and comparison with mammographic findings. In this study, a false positive (FP) reduction scheme using a deep convolutional neural network (DCNN) was investigated. For training the DCNN, true positive and FP samples were obtained from the result of our initial mass detection scheme using the vector convergence filter. Regions of interest including the detected regions were extracted from the multiplanar reconstruction slices. We investigated methods to select effective FP samples for training the DCNN. Based on the free response receiver operating characteristic analysis, simple random sampling from the entire candidate set was most effective in this study. Using the DCNN, the number of FPs could be reduced by 60%, while retaining 90% of true masses. The result indicates the potential usefulness of the DCNN for FP reduction in automated mass detection on ABVS images.

  20. A digital boxcar integrator for IMS spectra

    NASA Technical Reports Server (NTRS)

    Cohen, Martin J.; Stimac, Robert M.; Wernlund, Roger F.; Parker, Donald C.

    1995-01-01

    When trying to detect or quantify a signal at or near the limit of detectability, it is invariably embedded in the noise. This statement is true for nearly all detectors of any physical phenomenon, and the limit of detectability, hopefully, occurs at very low signal-to-noise levels. This is particularly true of IMS (Ion Mobility Spectrometer) spectra due to the low vapor pressure of several chemical compounds of great interest and the small currents associated with the ionic detection process. Gated Integrators and Boxcar Integrators or Averagers are designed to recover fast, repetitive analog signals. In a typical application, a time 'Gate' or 'Window' is generated, characterized by a set delay from a trigger or gate pulse and a certain width. A Gated Integrator amplifies and integrates the signal that is present during the time the gate is open, ignoring noise and interference that may be present at other times. Boxcar Integration refers to the practice of averaging the output of the Gated Integrator over many sweeps of the detector. Since any signal present during the gate will add linearly, while noise will add in a 'random walk' fashion as the square root of the number of sweeps, averaging N sweeps will improve the 'Signal-to-Noise Ratio' by a factor of the square root of N.
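
    The square-root-of-N improvement quoted above is easy to reproduce numerically; the sketch below averages a weak repetitive peak buried in Gaussian noise over an increasing number of sweeps. The peak height, noise level, and sweep counts are arbitrary and have nothing to do with any particular IMS instrument.

        import numpy as np

        rng = np.random.default_rng(7)
        t = np.linspace(0.0, 1.0, 500)
        signal = 0.2 * np.exp(-((t - 0.5) ** 2) / 0.002)   # weak repetitive peak inside the gate
        noise_sigma = 1.0

        def averaged_snr(n_sweeps):
            sweeps = signal + rng.normal(0.0, noise_sigma, (n_sweeps, t.size))
            avg = sweeps.mean(axis=0)          # boxcar average over n_sweeps sweeps
            peak = avg[t.size // 2]            # amplitude at the known peak position
            floor = avg[:100].std()            # residual noise in a signal-free region
            return peak / floor

        for n in (1, 16, 256, 4096):
            print(n, round(averaged_snr(n), 1))   # grows roughly as sqrt(n)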

  1. Prior robust empirical Bayes inference for large-scale data by conditioning on rank with application to microarray data

    PubMed Central

    Liao, J. G.; Mcmurry, Timothy; Berg, Arthur

    2014-01-01

    Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072

  2. Worldwide Survey of Alcohol and Nonmedical Drug Use among Military Personnel: 1982,

    DTIC Science & Technology

    1983-01-01

    cell. The first number is an estimate of the percentage of the population with the characteristics that define the cell. The second number, in...multiplying 1.96 times the standard error for that cell. (Obviously, for very small or very large estimates, the respective smallest or largest value in...that the cell proportions estimate the true population value more precisely, and larger standard errors indicate that the true population value is

  3. Reliability Evaluation for Clustered WSNs under Malware Propagation

    PubMed Central

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying

    2016-01-01

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934

  4. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    PubMed

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.

  5. Introducing Bayesian thinking to high-throughput screening for false-negative rate estimation.

    PubMed

    Wei, Xin; Gao, Lin; Zhang, Xiaolei; Qian, Hong; Rowan, Karen; Mark, David; Peng, Zhengwei; Huang, Kuo-Sen

    2013-10-01

    High-throughput screening (HTS) has been widely used to identify active compounds (hits) that bind to biological targets. Because of cost concerns, the comprehensive screening of millions of compounds is typically conducted without replication. Real hits that fail to exhibit measurable activity in the primary screen due to random experimental errors will be lost as false-negatives. Conceivably, the projected false-negative rate is a parameter that reflects screening quality. Furthermore, it can be used to guide the selection of optimal numbers of compounds for hit confirmation. Therefore, a method that predicts false-negative rates from the primary screening data is extremely valuable. In this article, we describe the implementation of a pilot screen on a representative fraction (1%) of the screening library in order to obtain information about assay variability as well as a preliminary hit activity distribution profile. Using this training data set, we then developed an algorithm based on Bayesian logic and Monte Carlo simulation to estimate the number of true active compounds and potential missed hits from the full library screen. We have applied this strategy to five screening projects. The results demonstrate that this method produces useful predictions on the numbers of false negatives.
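
    The abstract's Bayesian and Monte Carlo machinery is not reproduced here, but the underlying question can be sketched very simply: given an assay noise level and an assumed activity profile for real hits (both of which the pilot screen is meant to supply), what fraction of real hits would fall below the hit cutoff in a single unreplicated screen? All numbers below are assumptions for illustration, not values from the article.

        import numpy as np

        rng = np.random.default_rng(11)
        assay_sd = 12.0                                # per-well noise (assumed, e.g. % inhibition)
        hit_cutoff = 30.0                              # primary-screen hit threshold (assumed)
        true_activity = rng.normal(45.0, 15.0, 5000)   # assumed activity profile of real hits

        def projected_fn_rate(n_rep=200):
            """Monte Carlo fraction of real hits lost to noise in a single-read screen."""
            measured = true_activity[None, :] + rng.normal(0.0, assay_sd,
                                                           (n_rep, true_activity.size))
            return float(np.mean(measured < hit_cutoff))

        print("projected false-negative rate:", round(projected_fn_rate(), 3))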

  6. A Feasibility Study of Moxibustion for Treating Anorexia and Improving Quality of Life in Patients With Metastatic Cancer: A Randomized Sham-Controlled Trial.

    PubMed

    Jeon, Ju-Hyun; Cho, Chong-Kwan; Park, So-Jung; Kang, Hwi-Joong; Kim, Kyungmin; Jung, In-Chul; Kim, Young-Il; Lee, Suk-Hoon; Yoo, Hwa-Seung

    2017-03-01

    The aim of this study was to determine the feasibility, acceptability, and safety of using moxibustion for treating anorexia and improving quality of life in patients with metastatic cancer. We conducted a randomized sham-controlled trial of moxibustion. Sixteen patients with metastatic cancer were recruited from Daejeon, South Korea. The patients were randomly placed into a true or a sham moxibustion group and received 10 true or sham moxibustion treatments administered to the abdomen (CV12, CV8, CV4) and legs (ST36) over a 2-week period. Outcome measures included interest in participating in the trial, identification of successful recruitment strategies, the appropriateness of eligibility criteria, and compliance with the treatment plan (ie, attendance at treatment sessions). Clinical outcomes included results of the Functional Assessment of Anorexia/Cachexia Therapy (FAACT), answers on the European Organization for Research and Treatment of Cancer 30-item core quality of life (EORTC QLQ-C30) questionnaires, scores on the visual analogue scale (VAS), and the results from blood tests and a safety evaluation. Moxibustion was an acceptable intervention in patients with metastatic cancer. Compliance with the treatment protocol was high, with 11 patients completing all 10 treatments. No serious adverse events related to moxibustion occurred, but 4 patients in the true moxibustion group reported mild rubefaction, which disappeared in a few hours. This study suggests that moxibustion may be safely used to treat anorexia and improve quality of life in patients with metastatic cancer. However, further research is needed to confirm this result.

  7. Estimating factors influencing the detection probability of semiaquatic freshwater snails using quadrat survey methods

    USGS Publications Warehouse

    Roesler, Elizabeth L.; Grabowski, Timothy B.

    2018-01-01

    Developing effective monitoring methods for elusive, rare, or patchily distributed species requires extra considerations, such as imperfect detection. Although detection is frequently modeled, the opportunity to assess it empirically is rare, particularly for imperiled species. We used Pecos assiminea (Assiminea pecos), an endangered semiaquatic snail, as a case study to test detection and accuracy issues surrounding quadrat searches. Quadrats (9 × 20 cm; n = 12) were placed in suitable Pecos assiminea habitat and randomly assigned a treatment, defined as the number of empty snail shells (0, 3, 6, or 9). Ten observers rotated through each quadrat, conducting 5-min visual searches for shells. The probability of detecting a shell when present was 67.4 ± 3.0%, but it decreased with increasing litter depth and with fewer shells present. The mean (± SE) observer accuracy was 25.5 ± 4.3%. Accuracy was positively correlated with the number of shells in the quadrat and negatively correlated with the number of times a quadrat was searched. The results indicate quadrat surveys likely underrepresent true abundance but accurately determine presence or absence. Understanding detection and accuracy of elusive, rare, or imperiled species improves density estimates and aids in monitoring and conservation efforts.

  8. Predicting Treatment Effect from Surrogate Endpoints and Historical Trials | Division of Cancer Prevention

    Cancer.gov

    By Stuart G. Baker, 2017. Introduction: This software fits a zero-intercept random effects linear model to data on surrogate and true endpoints in previous trials. Requirement: Mathematica Version 11 or later.

  9. Measuring the jitter of ring oscillators by means of information theory quantifiers

    NASA Astrophysics Data System (ADS)

    Antonelli, M.; De Micco, L.; Larrondo, H. A.

    2017-02-01

    Ring oscillators (ROs) are elementary blocks widely used in digital design. Jitter is unavoidable in ROs; in many applications, such as clock generators, its presence is undesirable. On the contrary, jitter may be used as the noise source in RO-based true random number generators (TRNGs). Consequently, measuring jitter is a relevant issue in characterizing a RO, and it is the subject of this paper. The main contribution is the use of Information Theory Quantifiers (ITQ) as measures of RO jitter. It is shown that, among the several ITQ evaluated, two emerge as good measures because they are independent of the parameters used for their statistical determination. They turned out to be robust and may be implemented experimentally. We found that a dual entropy plane allows a visual comparison of results.
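
    One commonly used information-theory quantifier is the Bandt-Pompe permutation entropy; the abstract does not say which quantifiers the authors selected, so the sketch below is only an illustration of applying such a quantifier to a series of jittery oscillator periods (all values are made up).

    ```python
    import numpy as np
    from itertools import permutations
    from math import factorial, log

    def permutation_entropy(x, order=4, delay=1):
        """Normalized Bandt-Pompe permutation entropy of a 1-D signal."""
        x = np.asarray(x, dtype=float)
        counts = {p: 0 for p in permutations(range(order))}
        for i in range(len(x) - (order - 1) * delay):
            window = x[i:i + order * delay:delay]
            counts[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
        probs = np.array([c for c in counts.values() if c > 0], dtype=float)
        probs /= probs.sum()
        return -(probs * np.log(probs)).sum() / log(factorial(order))

    # toy period measurements of a jittery ring oscillator: nominal period + Gaussian jitter
    rng = np.random.default_rng(1)
    periods = 10.0 + 0.05 * rng.standard_normal(5000)
    print(permutation_entropy(periods, order=4))   # close to 1 for white, uncorrelated jitter
    ```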

  10. The effect of mood on false memory for emotional DRM word lists.

    PubMed

    Zhang, Weiwei; Gross, Julien; Hayne, Harlene

    2017-04-01

    In the present study, we investigated the effect of participants' mood on true and false memories of emotional word lists in the Deese-Roediger-McDermott (DRM) paradigm. In Experiment 1, we constructed DRM word lists in which all the studied words and corresponding critical lures reflected a specified emotional valence. In Experiment 2, we used these lists to assess mood-congruent true and false memory. Participants were randomly assigned to one of three induced-mood conditions (positive, negative, or neutral) and were presented with word lists comprised of positive, negative, or neutral words. For both true and false memory, there was a mood-congruent effect in the negative mood condition; this effect was due to a decrease in true and false recognition of the positive and neutral words. These findings are consistent with both spreading-activation and fuzzy-trace theories of DRM performance and have practical implications for our understanding of the effect of mood on memory.

  11. True cadence and step accumulation are not equivalent: the effect of intermittent claudication on free-living cadence.

    PubMed

    Stansfield, B; Clarke, C; Dall, P; Godwin, J; Holdsworth, R; Granat, M

    2015-02-01

    'True cadence' is the rate of stepping during the period of stepping. 'Step accumulation' is the steps within an epoch of time (e.g. 1min). These terms have been used interchangeably in the literature. These outcomes are compared within a population with intermittent claudication (IC). Multiday, 24h stepping activity of those with IC (30) and controls (30) was measured objectively using the activPAL physical activity monitor. 'True cadence' and 'step accumulation' outcomes were calculated. Those with IC took fewer steps/d 6531±2712 than controls 8692±2945 (P=0.003). However, these steps were taken within approximately the same number of minute epochs (IC 301±100min/d; controls 300±70min/d, P=0.894) with only slightly lower true cadence (IC 69 (IQ 66,72) steps/min; controls 72 (IQ 68,76) steps/min, P=0.026), giving substantially lower step accumulation (IC 22 (IQ 19,24) steps/min; controls 30 (IQ 23,34) steps/min) (P<0.001). However, the true cadence of stepping within the blocks of the 1, 5, 20, 30 and 60min with the maximum number of steps accumulated was lower for those with IC than controls (P<0.05). Those with IC took 1300 steps fewer per day above a true cadence of 90 steps/min. True cadence and step accumulation outcomes were radically different for the outcomes examined. 'True cadence' and 'step accumulation' were not equivalent in those with IC or controls. The measurement of true cadence in the population of people with IC provides information about their stepping rate during the time they are stepping. True cadence should be used to correctly describe the rate of stepping as performed. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Data Assimilation by Ensemble Kalman Filter during One-Dimensional Nonlinear Consolidation in Randomly Heterogeneous Highly Compressible Aquitards

    NASA Astrophysics Data System (ADS)

    Zapata Norberto, B.; Morales-Casique, E.; Herrera, G. S.

    2017-12-01

    Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. We explore the effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards by means of 1-D Monte Carlo numerical simulations. 2000 realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc) and void ratio (e). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system. Random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady state conditions. We further propose a data assimilation scheme by means of ensemble Kalman filter to estimate the ensemble mean distribution of K, pore-pressure and total settlement. We consider the case where pore-pressure measurements are available at given time intervals. We test our approach by generating a 1-D realization of K with exponential spatial correlation, and solving the nonlinear flow and consolidation problem. These results are taken as our "true" solution. We take pore-pressure "measurements" at different times from this "true" solution. The ensemble Kalman filter method is then employed to estimate ensemble mean distribution of K, pore-pressure and total settlement based on the sequential assimilation of these pore-pressure measurements. The ensemble-mean estimates from this procedure closely approximate those from the "true" solution. This procedure can be easily extended to other random variables such as compression index and void ratio.
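
    A minimal sketch of the stochastic ensemble Kalman filter analysis step used for this kind of assimilation, assuming a linear observation operator mapping the state (e.g. log-K, pore pressures, settlement) to the pore-pressure measurements; names and dimensions are illustrative, not taken from the study.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_operator, obs_err_sd, rng):
        """One stochastic ensemble Kalman filter analysis step.

        ensemble     : (n_state, n_members) array of state realizations
        obs          : (n_obs,) vector of measured pore pressures
        obs_operator : (n_obs, n_state) matrix mapping state to observations
        obs_err_sd   : standard deviation of the measurement error
        """
        n_state, n_members = ensemble.shape
        H = obs_operator
        anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
        P = anomalies @ anomalies.T / (n_members - 1)          # sample covariance
        R = (obs_err_sd ** 2) * np.eye(len(obs))
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
        # perturbed observations (stochastic EnKF)
        obs_pert = obs[:, None] + obs_err_sd * rng.standard_normal((len(obs), n_members))
        return ensemble + K @ (obs_pert - H @ ensemble)
    ```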

  13. Phase Diagram of a Three-Dimensional Antiferromagnet with Random Magnetic Anisotropy

    DOE PAGES

    Perez, Felio A.; Borisov, Pavel; Johnson, Trent A.; ...

    2015-03-04

    Three-dimensional (3D) antiferromagnets with random magnetic anisotropy (RMA) that have been experimentally studied to date have competing two-dimensional and three-dimensional exchange interactions which can obscure the authentic effects of RMA. The magnetic phase diagram of FexNi1-xF2 epitaxial thin films with true random single-ion anisotropy was deduced from magnetometry and neutron scattering measurements and analyzed using mean field theory. Regions with uniaxial, oblique and easy plane anisotropies were identified. A RMA-induced glass region was discovered where a Griffiths-like breakdown of long-range spin order occurs.

  14. Training for Rapid Interpretation of Voluminous Multimodal Data

    DTIC Science & Technology

    2008-04-01

    determined after the research that the corresponding FAC for the randomness and oversensitivity biases could be reasonably construed as true incidents...and the low overall susceptibility to errors in Experiment 4 made control comparisons irrelevant. Further research might employ similar methodology

  15. The efficacy and cost of alternative strategies for systematic screening for type 2 diabetes in the U.S. population 45-74 years of age.

    PubMed

    Johnson, Susan L; Tabaei, Bahman P; Herman, William H

    2005-02-01

    To simulate the outcomes of alternative strategies for screening the U.S. population 45-74 years of age for type 2 diabetes. We simulated screening with random plasma glucose (RPG) and cut points of 100, 130, and 160 mg/dl and a multivariate equation including RPG and other variables. Over 15 years, we simulated screening at intervals of 1, 3, and 5 years. All positive screening tests were followed by a diagnostic fasting plasma glucose or an oral glucose tolerance test. Outcomes include the numbers of false-negative, true-positive, and false-positive screening tests and the direct and indirect costs. At year 15, screening every 3 years with an RPG cut point of 100 mg/dl left 0.2 million false negatives, an RPG of 130 mg/dl or the equation left 1.3 million false negatives, and an RPG of 160 mg/dl left 2.8 million false negatives. Over 15 years, the absolute difference between the most sensitive and most specific screening strategy was 4.5 million true positives and 476 million false-positives. Strategies using RPG cut points of 130 mg/dl or the multivariate equation every 3 years identified 17.3 million true positives; however, the equation identified fewer false-positives. The total cost of the most sensitive screening strategy was $42.7 billion and that of the most specific strategy was $6.9 billion. Screening for type 2 diabetes every 3 years with an RPG cut point of 130 mg/dl or the multivariate equation provides good yield and minimizes false-positive screening tests and costs.
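
    A minimal sketch of how screening outcomes can be tallied for different RPG cut points on a simulated population; the prevalence and glucose distributions below are invented for illustration and are not the model used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # hypothetical population of 100,000 screenees; prevalence and RPG (mg/dl)
    # distributions are invented for illustration only
    n = 100_000
    has_diabetes = rng.random(n) < 0.10
    rpg = np.where(has_diabetes,
                   rng.normal(165, 45, n),   # undiagnosed type 2 diabetes
                   rng.normal(105, 20, n))   # no diabetes

    for cut in (100, 130, 160):
        screen_pos = rpg >= cut
        tp = np.sum(screen_pos & has_diabetes)     # sent on to confirmatory testing
        fp = np.sum(screen_pos & ~has_diabetes)    # unnecessary diagnostic tests
        fn = np.sum(~screen_pos & has_diabetes)    # missed until the next screening round
        print(f"RPG cut {cut} mg/dl: TP {tp}, FP {fp}, FN {fn}")
    ```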

  16. The probability of being identified as an outlier with commonly used funnel plot control limits for the standardised mortality ratio.

    PubMed

    Seaton, Sarah E; Manktelow, Bradley N

    2012-07-16

    Emphasis is increasingly being placed on the monitoring of clinical outcomes for health care providers. Funnel plots have become an increasingly popular graphical methodology used to identify potential outliers. It is assumed that a provider only displaying expected random variation (i.e. 'in-control') will fall outside a control limit with a known probability. In reality, the discrete count nature of these data, and the differing methods, can lead to true probabilities quite different from the nominal value. This paper investigates the true probability of an 'in control' provider falling outside control limits for the Standardised Mortality Ratio (SMR). The true probabilities of an 'in control' provider falling outside control limits for the SMR were calculated and compared for three commonly used limits: Wald confidence interval; 'exact' confidence interval; probability-based prediction interval. The probability of falling above the upper limit, or below the lower limit, often varied greatly from the nominal value. This was particularly apparent when there were a small number of expected events: for expected events ≤ 50 the median probability of an 'in-control' provider falling above the upper 95% limit was 0.0301 (Wald), 0.0121 ('exact'), 0.0201 (prediction). It is important to understand the properties and probability of being identified as an outlier by each of these different methods to aid the correct identification of poorly performing health care providers. The limits obtained using probability-based prediction limits have the most intuitive interpretation and their properties can be defined a priori. Funnel plot control limits for the SMR should not be based on confidence intervals.
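
    A minimal sketch of computing the true exceedance probability for an 'in-control' provider, assuming observed events follow a Poisson distribution with mean equal to the expected events and a simple Wald-type (normal approximation) upper limit; the paper's exact limit constructions differ, so this is only an illustration of the discreteness effect.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def prob_above_upper_limit(expected, z=1.96):
        """P(an 'in-control' provider falls above a Wald-type upper funnel limit).

        Observed deaths O ~ Poisson(expected); SMR = O / expected; the upper
        limit is approximated as 1 + z * sqrt(1 / expected).
        """
        upper_smr = 1.0 + z * np.sqrt(1.0 / expected)
        threshold = np.floor(upper_smr * expected)   # exploit the discreteness of O
        return poisson.sf(threshold, expected)       # P(O > threshold)

    for e in (10, 25, 50, 100, 500):
        print(e, round(float(prob_above_upper_limit(e)), 4))
    ```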

  17. A robust method using propensity score stratification for correcting verification bias for binary tests

    PubMed Central

    He, Hua; McDermott, Michael P.

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
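
    A schematic sketch of the general idea, assuming missing-at-random verification: estimate the propensity of verification separately by test result, stratify on it, estimate disease probability within strata from verified subjects only, and recombine. Column names, the logistic-regression propensity model, and the quantile strata are illustrative choices, not the authors' exact estimator.

    ```python
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def corrected_accuracy(df, n_strata=5):
        """Verification-bias-corrected sensitivity and specificity via
        propensity-score stratification (schematic implementation).

        Expected columns in df: 'test' (0/1 result), 'verified' (0/1),
        'disease' (0/1, may be missing when unverified), covariates 'x1', 'x2', ...
        """
        covars = [c for c in df.columns if c.startswith('x')]
        parts = []
        for t in (0, 1):
            sub = df[df.test == t].copy()
            # propensity of verification given covariates, fit separately by test result
            ps = LogisticRegression(max_iter=1000).fit(sub[covars], sub.verified)
            sub['ps'] = ps.predict_proba(sub[covars])[:, 1]
            sub['stratum'] = pd.qcut(sub.ps, n_strata, labels=False, duplicates='drop')
            parts.append(sub)
        full = pd.concat(parts)
        # within each (test, stratum) cell, MAR lets verified subjects stand in for all
        p_dis = full[full.verified == 1].groupby(['test', 'stratum']).disease.mean()
        w = full.groupby(['test', 'stratum']).size() / len(full)   # cell probabilities
        joint_pos = (w * p_dis).dropna()        # P(T=t, stratum=s, D=1)
        joint_neg = (w * (1 - p_dis)).dropna()  # P(T=t, stratum=s, D=0)
        sensitivity = joint_pos.loc[1].sum() / joint_pos.sum()     # P(T=1 | D=1)
        specificity = joint_neg.loc[0].sum() / joint_neg.sum()     # P(T=0 | D=0)
        return sensitivity, specificity
    ```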

  18. An investigation into online videos as a source of safety hazard reports.

    PubMed

    Nasri, Leila; Baghersad, Milad; Gruss, Richard; Marucchi, Nico Sung Won; Abrahams, Alan S; Ehsani, Johnathon P

    2018-06-01

    Despite the advantages of video-based product reviews relative to text-based reviews in detecting possible safety hazard issues, video-based product reviews have received no attention in prior literature. This study focuses on online video-based product reviews as possible sources to detect safety hazards. We use two common text mining methods - sentiment and smoke words - to detect safety issues mentioned in videos on the world's most popular video sharing platform, YouTube. 15,402 product review videos from YouTube were identified as containing either negative sentiment or smoke words, and were carefully manually viewed to verify whether hazards were indeed mentioned. 496 true safety issues (3.2%) were found. Out of 9,453 videos that contained smoke words, 322 (3.4%) mentioned safety issues, vs. only 174 (2.9%) of the 5,949 videos with negative sentiment words. Only 1% of randomly-selected videos mentioned safety hazards. Comparing the number of videos with true safety issues that contain sentiment words vs. smoke words in their title or description, we show that smoke words are a more accurate predictor of safety hazards in video-based product reviews than sentiment words. This research also discovers words that are indicative of true hazards versus false positives in online video-based product reviews. Practical applications: The smoke words lists and word sub-groups generated in this paper can be used by manufacturers and consumer product safety organizations to more efficiently identify product safety issues from online videos. This project also provides realistic baselines for resource estimates for future projects that aim to discover safety issues from online videos or reviews. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.

  19. The true invariant of an arbitrage free portfolio

    NASA Astrophysics Data System (ADS)

    Schmidt, Anatoly B.

    2003-03-01

    It is shown that the arbitrage free portfolio paradigm being applied to a portfolio with an arbitrary number of shares N allows for the extended solution in which the option price F depends on N. However the resulting stock hedging expense Q= MF (where M is the number of options in the portfolio) does not depend on whether N is treated as an independent variable or as a parameter. Therefore the stock hedging expense is the true invariant of the arbitrage free portfolio paradigm.

  20. Laparoscopic Pediatric Inguinal Hernia Repair: Overview of "True Herniotomy" Technique and Review of Current Evidence.

    PubMed

    Feehan, Brendan P; Fromm, David S

    2017-05-01

    Inguinal hernia repair is one of the most commonly performed operations in the pediatric population. While the majority of pediatric surgeons routinely use laparoscopy in their practices, a relatively small number prefer a laparoscopic inguinal hernia repair over the traditional open repair. This article provides an overview of the three port laparoscopic technique for inguinal hernia repair, as well as a review of the current evidence with respect to visualization and identification of hernias, recurrence rates, operative times, complication rates, postoperative pain, and cosmesis. The laparoscopic repair presents a viable alternative to open repair and offers a number of benefits over the traditional approach. These include superior visualization of the relevant anatomy, ability to assess and repair a contralateral hernia, lower rates of metachronous hernia, shorter operative times in bilateral hernia, and the potential for lower complication rates and improved cosmesis. This is accomplished without increasing recurrence rates or postoperative pain. Further research comparing the different approaches, including standardization of techniques and large randomized controlled trials, will be needed to definitively determine which is superior. Copyright© South Dakota State Medical Association.

  1. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
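
    A minimal sketch of the underlying reliability calculation, treating segment punctures as independent events so that radiator survival is a binomial tail probability; the segment counts and per-segment survival probability below are purely illustrative.

    ```python
    from math import comb

    def radiator_survival_prob(n_segments, n_required, p_segment_survive):
        """Probability that at least n_required of n_segments survive punctures,
        treating micrometeoroid hits on independently operating segments
        (e.g. heat pipes) as independent events."""
        return sum(comb(n_segments, k)
                   * p_segment_survive ** k
                   * (1 - p_segment_survive) ** (n_segments - k)
                   for k in range(n_required, n_segments + 1))

    # e.g. oversize the radiator by 20% so only 100 of 120 segments must survive
    print(radiator_survival_prob(120, 100, p_segment_survive=0.95))
    print(radiator_survival_prob(1, 1, p_segment_survive=0.95))   # unsegmented case
    ```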

  2. 7 CFR 1782.100 - OMB Control Number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB Control Number. 1782.100 Section 1782.100... AGRICULTURE (CONTINUED) SERVICING OF WATER AND WASTE PROGRAMS § 1782.100 OMB Control Number. The information... OMB Control Number 0572-0137. ...

  3. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    PubMed

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
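
    For readers who want to experiment with the general idea, scikit-learn ships a RANSAC regressor; the toy descriptors, property values and outlier fraction below are invented, and this is not the authors' full workflow (which also covers descriptor selection and applicability domain).

    ```python
    import numpy as np
    from sklearn.linear_model import RANSACRegressor

    rng = np.random.default_rng(3)

    # toy "materials" dataset: descriptors X, property y, with 10% corrupted samples
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.5, -2.0, 0.0, 0.7, 0.0]) + 0.1 * rng.normal(size=200)
    y[:20] += rng.normal(8.0, 2.0, size=20)          # outlier samples

    model = RANSACRegressor(random_state=0).fit(X, y)
    print("samples kept as inliers:", model.inlier_mask_.sum(), "of", len(y))
    print("coefficients:", np.round(model.estimator_.coef_, 2))
    ```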

  4. Individual Differences in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

  5. Mysid (Mysidopsis bahia) life-cycle test: Design comparisons and assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lussier, S.M.; Champlin, D.; Kuhn, A.

    1996-12-31

    This study examines ASTM Standard E1191-90, "Standard Guide for Conducting Life-cycle Toxicity Tests with Saltwater Mysids," 1990, using Mysidopsis bahia, by comparing several test designs to assess growth, reproduction, and survival. The primary objective was to determine the most labor efficient and statistically powerful test design for the measurement of statistically detectable effects on biologically sensitive endpoints. Five different test designs were evaluated varying compartment size, number of organisms per compartment and sex ratio. Results showed that while paired organisms in the ASTM design had the highest rate of reproduction among designs tested, no individual design had greater statistical power to detect differences in reproductive effects. Reproduction was not statistically different between organisms paired in the ASTM design and those with randomized sex ratios using larger test compartments. These treatments had numerically higher reproductive success and lower within tank replicate variance than treatments using smaller compartments where organisms were randomized, or had a specific sex ratio. In this study, survival and growth were not statistically different among designs tested. Within tank replicate variability can be reduced by using many exposure compartments with pairs, or few compartments with many organisms in each. While this improves variance within replicate chambers, it does not strengthen the power of detection among treatments in the test. An increase in the number of true replicates (exposure chambers) to eight will have the effect of reducing the percent detectable difference by a factor of two.

  6. Measurements of True Leak Rates of MEMS Packages

    PubMed Central

    Han, Bongtae

    2012-01-01

    Gas transport mechanisms that characterize the hermetic behavior of MEMS packages are fundamentally different depending upon which sealing materials are used in the packages. In metallic seals, gas transport occurs through a few nanoscale leak channels (gas conduction) that are produced randomly during the solder reflow process, while gas transport in polymeric seals occurs through the bulk material (gas diffusion). In this review article, the techniques to measure true leak rates of MEMS packages with the two sealing materials are described and discussed: a Helium mass spectrometer based technique for metallic sealing and a gas diffusion based model for polymeric sealing. PMID:22736994

  7. How to measure a-few-nanometer-small LER occurring in EUV lithography processed feature

    NASA Astrophysics Data System (ADS)

    Kawada, Hiroki; Kawasaki, Takahiro; Kakuta, Junichi; Ikota, Masami; Kondo, Tsuyoshi

    2018-03-01

    For EUV lithography features, we want to decrease the dose and/or energy of the CD-SEM probe beam, because LER decreases with severe resist-material shrink. Under such conditions, however, the measured LER increases above the true LER due to LER bias, i.e., spurious LER caused by random noise in the SEM image. A gap error also occurs between the right and left LERs. In this work we propose new procedures to obtain the true LER by excluding the LER bias from the measured LER. To verify them, we propose an LER reference metrology using TEM.

  8. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  9. 7 CFR 1775.9 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1775.9 Section 1775.9 Agriculture... (CONTINUED) TECHNICAL ASSISTANCE GRANTS General Provisions § 1775.9 OMB control number. The information... have been assigned OMB control number 0572-0112. ...

  10. 1 CFR 21.35 - OMB control numbers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 1 General Provisions 1 2014-01-01 2012-01-01 true OMB control numbers. 21.35 Section 21.35 General... DOCUMENTS PREPARATION OF DOCUMENTS SUBJECT TO CODIFICATION General Omb Control Numbers § 21.35 OMB control numbers. To display OMB control numbers in agency regulations, those numbers shall be placed...

  11. 1 CFR 21.35 - OMB control numbers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 1 General Provisions 1 2013-01-01 2012-01-01 true OMB control numbers. 21.35 Section 21.35 General... DOCUMENTS PREPARATION OF DOCUMENTS SUBJECT TO CODIFICATION General Omb Control Numbers § 21.35 OMB control numbers. To display OMB control numbers in agency regulations, those numbers shall be placed...

  12. Misinformation, partial knowledge and guessing in true/false tests.

    PubMed

    Burton, Richard F

    2002-09-01

    Examiners disagree on whether or not multiple choice and true/false tests should be negatively marked. Much of the debate has been clouded by neglect of the role of misinformation and by vagueness regarding both the specification of test types and "partial knowledge" in relation to guessing. Moreover, variations in risk-taking in the face of negative marking have too often been treated in absolute terms rather than in relation to the effect of guessing on test unreliability. This paper aims to clarify these points and to compare the ill-effects on test reliability of guessing and of variable risk-taking. Three published studies on medical students are examined. These compare responses in true/false tests obtained with both negative marking and number-right scoring. The studies yield data on misinformation and on the extent to which students may fail to benefit from distrusted partial knowledge when there is negative marking. A simple statistical model is used to compare variations in risk-taking with test unreliability due to blind guessing under number-right scoring conditions. Partial knowledge should be least problematic with independent true/false items. The effect on test reliability of blind guessing under number-right conditions is generally greater than that due to the over-cautiousness of some students when there is negative marking.
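
    A minimal simulation sketch of the trade-off discussed here: blind guessing inflates and adds noise to number-right scores, while negative marking removes the inflation but exposes differences in risk-taking; the item counts, knowledge levels and marking schemes below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_scores(n_items, n_known, penalty, n_students=10_000, guess=True):
        """Scores for students who know n_known items and blind-guess the rest.

        penalty : mark for a wrong answer (0 = number-right scoring,
                  -1 = full negative marking on a true/false test).
        guess   : whether students still guess on unknown items under negative marking.
        """
        n_unknown = n_items - n_known
        if penalty < 0 and not guess:
            return np.full(n_students, float(n_known))     # unknown items omitted
        correct = rng.binomial(n_unknown, 0.5, size=n_students)
        wrong = n_unknown - correct
        return n_known + correct + penalty * wrong

    for label, scores in [
            ("number-right, guessing", simulate_scores(100, 60, penalty=0)),
            ("negative marking, guessing", simulate_scores(100, 60, penalty=-1)),
            ("negative marking, omitting", simulate_scores(100, 60, -1, guess=False))]:
        print(f"{label:28s} mean {scores.mean():6.2f}  sd {scores.std():5.2f}")
    ```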

  13. Associations between dairy cow inter-service interval and probability of conception.

    PubMed

    Remnant, J G; Green, M J; Huxley, J N; Hudson, C D

    2018-07-01

    Recent research has indicated that the interval between inseminations in modern dairy cattle is often longer than the commonly accepted cycle length of 18-24 days. This study analysed 257,396 inseminations in 75,745 cows from 312 herds in England and Wales. The intervals between subsequent inseminations in the same cow in the same lactation (inter-service intervals, ISIs) were calculated and inseminations categorised as successful or unsuccessful depending on whether there was a corresponding calving event. Conception risk was calculated for each individual ISI between 16 and 28 days. A random effects logistic regression model was fitted to the data with pregnancy as the outcome variable and ISI (in days) included in the model as a categorical variable. The modal ISI was 22 days and the peak conception risk was 44% for ISIs of 21 days, rising from 27% at 16 days. The logistic regression model revealed significant associations of conception risk with ISI as well as 305-day milk yield, insemination number, parity and days in milk. Predicted conception risk was lower for ISIs of 16, 17 and 18 days and higher for ISIs of 20, 21 and 22 days compared to 25 day ISIs. A mixture model was specified to identify clusters in insemination frequency and conception risk for ISIs between 3 and 50 days. A "high conception risk, high insemination frequency" cluster was identified between 19 and 26 days, which indicated that this time period was the true latent distribution for ISI with optimal reproductive outcome. These findings suggest that the period of increased numbers of inseminations around 22 days identified in existing work coincides with the period of increased probability of conception and therefore likely represents true return estrus events. Copyright © 2018 Elsevier Inc. All rights reserved.
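
    A minimal sketch of fitting conception risk against ISI as a categorical predictor with a 25-day reference level, using a plain logistic regression in place of the paper's random-effects model; the simulated records and effect sizes are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)

    # toy insemination records: conception probability is elevated for 20-22 day ISIs
    n = 5000
    isi = rng.integers(16, 29, size=n)
    p_true = np.where(np.isin(isi, (20, 21, 22)), 0.42, 0.30)
    df = pd.DataFrame({"isi": isi,
                       "conceived": (rng.random(n) < p_true).astype(int)})

    # plain logistic regression with ISI categorical and a 25-day reference level
    # (the paper's model also includes herd effects, milk yield, insemination
    # number, parity and days in milk)
    model = smf.logit("conceived ~ C(isi, Treatment(reference=25))", data=df).fit(disp=0)
    print(model.params.round(2))   # log-odds of conception relative to a 25-day ISI
    ```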

  14. Efficacy of single versus three sessions of high rate repetitive transcranial magnetic stimulation in chronic migraine and tension-type headache.

    PubMed

    Kalita, Jayantee; Laskar, Sanghamitra; Bhoi, Sanjeev Kumar; Misra, Usha Kant

    2016-11-01

    We report the efficacy of three versus single session of 10 Hz repetitive transcranial magnetic stimulation (rTMS) in chronic migraine (CM) and chronic tension-type headache (CTTH). Ninety-eight patients with CM or CTTH were included and their headache frequency, severity, functional disability and number of abortive medications were noted. Fifty-two patients were randomly assigned to group I (three true sessions) and 46 to group II (one true and two sham rTMS sessions) treatment. 10 Hz rTMS comprising 600 pulses was delivered in 412.4 s on the left frontal cortex. Outcomes were noted at 1, 2 and 3 months. The primary outcome was 50 % reduction in headache frequency, and secondary outcomes were improvement in severity, functional disability, abortive drugs and side effects. The baseline headache characteristics were similar between the two groups. Follow up at different time points revealed significant improvement in headache frequency, severity, functional disability and number of abortive drugs compared to baseline in both group I and group II patients, although these parameters were not different between the two groups. In group I, 31 (79.4 %) had reduction of headache frequency and 29 (74.4 %) converted to episodic headache. In group II, these were 24 (64.8 %) and 22 (59.2 %), respectively. In chronic migraine, the severity of headache at 2 months reduced in group I compared to group II (62.5 vs 35.3 %; P = 0.01). Both single and three sessions of 10 Hz rTMS were found to be equally effective in CM and CTTH, and resulted in conversion of chronic to episodic headache in 67.1 % patients.

  15. Self-correcting random number generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Pooser, Raphael C.

    2016-09-06

    A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
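
    The abstract does not specify the correction mechanism; one simple illustration of "monitor a characteristic and adapt" is to track the bit bias of each block and pass biased blocks through a von Neumann extractor, as sketched below (the block size, tolerance and biased source are hypothetical).

    ```python
    import random

    def von_neumann_extract(bits):
        """Unbias a bit stream: map the pair 01 -> 0, 10 -> 1, discard 00 and 11."""
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    def self_correcting_stream(raw_source, block=4096, bias_tol=0.01):
        """Monitor the raw source's bias block by block; pass blocks through
        unchanged while they look unbiased, otherwise post-process them."""
        while True:
            bits = [raw_source() for _ in range(block)]
            bias = abs(sum(bits) / block - 0.5)
            yield from (bits if bias <= bias_tol else von_neumann_extract(bits))

    # illustrative biased physical source (70% ones)
    biased = lambda: 1 if random.random() < 0.7 else 0
    stream = self_correcting_stream(biased)
    sample = [next(stream) for _ in range(10_000)]
    print("fraction of ones after correction:", sum(sample) / len(sample))
    ```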

  16. Soft Drinks, Mind Reading, and Number Theory

    ERIC Educational Resources Information Center

    Schultz, Kyle T.

    2009-01-01

    Proof is a central component of mathematicians' work, used for verification, explanation, discovery, and communication. Unfortunately, high school students' experiences with proof are often limited to verifying mathematical statements or relationships that are already known to be true. As a result, students often fail to grasp the true nature of…

  17. TEMPORAL CORRELATION OF CLASSIFICATIONS IN REMOTE SENSING

    EPA Science Inventory

    A bivariate binary model is developed for estimating the change in land cover from satellite images obtained at two different times. The binary classifications of a pixel at the two times are modeled as potentially correlated random variables, conditional on the true states of th...

  18. Experimental and Quasi-Experimental Design.

    ERIC Educational Resources Information Center

    Cottrell, Edward B.

    With an emphasis on the problems of control of extraneous variables and threats to internal and external validity, the arrangement or design of experiments is discussed. The purpose of experimentation in an educational institution, and the principles governing true experimentation (randomization, replication, and control) are presented, as are…

  19. Menopausal Symptoms and Complementary Health Practices: What the Science Says

    MedlinePlus

    ... had significantly greater levels of satisfaction than the control group. A 2008 randomized trial found that hypnosis appears ... that acupuncture was no better than sham acupuncture (control) for the treatment of hot flashes. Both groups (sham acupuncture and true acupuncture) had a significant ...

  20. 7 CFR 1778.100 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1778.100 Section 1778.100... AGRICULTURE (CONTINUED) EMERGENCY AND IMMINENT COMMUNITY WATER ASSISTANCE GRANTS § 1778.100 OMB control number... Management and Budget and assigned OMB control number 0572-0110. ...

  1. On-chip photonic synapse.

    PubMed

    Cheng, Zengguang; Ríos, Carlos; Pernice, Wolfram H P; Wright, C David; Bhaskaran, Harish

    2017-09-01

    The search for new "neuromorphic computing" architectures that mimic the brain's approach to simultaneous processing and storage of information is intense. Because, in real brains, neuronal synapses outnumber neurons by many orders of magnitude, the realization of hardware devices mimicking the functionality of a synapse is a first and essential step in such a search. We report the development of such a hardware synapse, implemented entirely in the optical domain via a photonic integrated-circuit approach. Using purely optical means brings the benefits of ultrafast operation speed, virtually unlimited bandwidth, and no electrical interconnect power losses. Our synapse uses phase-change materials combined with integrated silicon nitride waveguides. Crucially, we can randomly set the synaptic weight simply by varying the number of optical pulses sent down the waveguide, delivering an incredibly simple yet powerful approach that heralds systems with a continuously variable synaptic plasticity resembling the true analog nature of biological synapses.

  2. Ovine Paratuberculosis: A Seroprevalence Study in Dairy Flocks Reared in the Marche Region, Italy

    PubMed Central

    Anna Rita, Attili; Victor, Ngu Ngwa; Silvia, Preziuso; Luciana, Pacifici; Anastasia, Domesi; Vincenzo, Cuteri

    2011-01-01

    In order to fill the gap in seroprevalence data on Mycobacterium avium subsp. paratuberculosis (MAP) infection in ovine dairy farms of the Marche region (central Italy), a stratified study was carried out on 2086 adult female sheep randomly chosen from 38 herds selected in the Ancona and Macerata provinces. 73.7% of flocks were found to be infected by a commercial ELISA test (Pourquier, France), with a mean seroprevalence of 6.29% of sampled sheep in both provinces. A higher number of MAP-seropositive ewes was recorded in large herds than in small and medium herds (P = 0.0269), and a greater percentage of infected sheep was found among females in early/late lactation than in the peak lactation stage (P = 0.0237). MAP infection was confirmed in 12.6% of infected farms by faecal culture. The true sheep-level seroprevalence was 15.1% ± 7.3%. PMID:21876850

  3. Mid-infrared interferometry of AGNs: A statistical view into the dusty nuclear environment of the Seyfert Galaxies.

    NASA Astrophysics Data System (ADS)

    Lopez-Gonzaga, N.

    2015-09-01

    The high resolution achieved by the MIDI instrument at the VLTI has made it possible to obtain more detailed information about the geometry and structure of the nuclear mid-infrared emission of AGNs, but due to the lack of real images, the interpretation of the results is not an easy task. To profit more from the high-resolution data, we developed a statistical tool that allows these data to be interpreted using clumpy torus models. A statistical approach is needed to overcome effects such as the randomness in the positions of the clouds and the uncertainty in the true position angle on the sky. Our results, obtained by studying the mid-infrared emission at the highest resolution currently available, suggest that the dusty environment of Type I objects is formed by a smaller number of clouds than that of Type II objects.

  4. Architecture and applications of a high resolution gated SPAD image sensor

    PubMed Central

    Burri, Samuel; Maruyama, Yuki; Michalet, Xavier; Regazzoni, Francesco; Bruschini, Claudio; Charbon, Edoardo

    2014-01-01

    We present the architecture and three applications of the largest resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill-factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edges location is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and generation of true random numbers. PMID:25090572

  5. On-chip photonic synapse

    PubMed Central

    Cheng, Zengguang; Ríos, Carlos; Pernice, Wolfram H. P.; Wright, C. David; Bhaskaran, Harish

    2017-01-01

    The search for new “neuromorphic computing” architectures that mimic the brain’s approach to simultaneous processing and storage of information is intense. Because, in real brains, neuronal synapses outnumber neurons by many orders of magnitude, the realization of hardware devices mimicking the functionality of a synapse is a first and essential step in such a search. We report the development of such a hardware synapse, implemented entirely in the optical domain via a photonic integrated-circuit approach. Using purely optical means brings the benefits of ultrafast operation speed, virtually unlimited bandwidth, and no electrical interconnect power losses. Our synapse uses phase-change materials combined with integrated silicon nitride waveguides. Crucially, we can randomly set the synaptic weight simply by varying the number of optical pulses sent down the waveguide, delivering an incredibly simple yet powerful approach that heralds systems with a continuously variable synaptic plasticity resembling the true analog nature of biological synapses. PMID:28959725

  6. The use of nanomodified concrete in construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Prokhorov, Sergei

    2018-03-01

    Construction is one of the leading sectors of the economy. Currently, concrete is the basis of most structural elements, without which it is impossible to imagine the construction of a single building or facility. Their strength, reinforcement and design service life are determined at the design stage, taking long-term operation into account. In real life, however, the number of impacts that affect structural strength is high, and in some cases they are random and have no standardized values. This is especially true in the construction and operation of high-rise buildings and structures. Unlike multi-storey buildings, they experience significant loads already at the erection stage, as they support load-lifting mechanisms, formwork systems, workers, etc. The purpose of this article is to develop a methodology for estimating the internal fatigue of concrete structures based on changes in their electrical conductivity.

  7. True coronary bifurcation lesions: meta-analysis and review of literature.

    PubMed

    Athappan, Ganesh; Ponniah, Thirumalaikolundiusubramanian; Jeyaseelan, Lakshmanan

    2010-02-01

    Percutaneous intervention of true coronary bifurcation lesions is challenging. Based on the results of randomized trials and registry data, the approach of stenting of main vessel only with balloon dilatation of the side branch has become the default approach for false bifurcation lesions except when a complication occurs or in cases of suboptimal result. However, the optimal stenting strategy for true coronary bifurcation lesions - to stent or not to stent the side branch - is still a matter of debate. The purpose of this study was, therefore, to compare the clinical and angiographic outcomes of the double stent technique (stenting of the main branch and side branch) over the single stent technique (stenting of main vessel only with balloon dilatation of the side branch) for treatment of true coronary bifurcation lesions, with drug-eluting stents (DES). Comparative studies published between January 2000 and February 2009 of the double stent technique vs. single stent technique with DES for true coronary bifurcations were identified using an electronic search and reviewed using a random effects model. The primary endpoints of our study were side-branch and main-branch restenoses, all-cause mortality, myocardial infarction (MI) and target lesion revascularization (TLR) at longest available follow-up. The secondary endpoints of our analysis were postprocedural minimal luminal diameter (MLD) of the side branch and main branch, follow-up MLD of side branch and main branch and stent thrombosis. Heterogeneity was assessed and sensitivity analysis was performed to test the robustness of the overall summary odds ratios (ORs). Five studies comprising 1145 patients (616 single stent and 529 double stent) were included in the analysis. Three studies were randomized comparisons between the two techniques for true coronary bifurcation lesions. Incomplete reporting of data in the primary studies was common. The lengths of clinical and angiographic follow-up ranged between 6 and 12 months and 6 and 7 months, respectively. Postprocedural MLD of the side branch was significantly smaller in the single stent group [standardized mean difference (SMD) -0.71, 95% CI -0.88 to -0.54, P < 0.000, I2 = 0%]. The odds of side-branch restenosis (OR 1.11, 95% CI 0.47-2.67, P = 0.81, I2 = 76%), main-branch restenosis (OR 0.88, 95% CI 0.56-1.39, P = 0.58, I2 = 0%), all-cause mortality (OR 0.52, 95% CI 0.11-2.45, P = 0.41, I2 = 0%), MI (OR 0.92, 95% CI 0.34-2.54, P = 0.87, I2 = 49%) and TLR (OR 0.87, 95% CI 0.46-1.65, P = 0.68, I2 = 0%) were similar between the two groups. Postprocedural MLD of the main branch [standardized mean difference (SMD) -0.08, 95% CI -0.42 to -0.26, P < 0.65, I2 = 67%], follow-up MLD of side branch (SMD -0.19, 95% CI -0.40 to 0.01, P < 0.31, I2 = 15%) and main branch MLD (SMD 0.17, 95% CI -0.18 to 0.542, P < 0.35, I2 = 65%) were also similar between the two groups. In patients undergoing percutaneous coronary intervention (PCI) for true coronary bifurcations, there is no added advantage of stenting both branches as compared with a conventional one-stent strategy. The results, however, need to be interpreted considering the poor study methods and/or poor quality of reporting in publications. We propose to move forward and consider the conduct of more systematic, well-designed and scientific trials to investigate the treatment of true coronary bifurcation lesions.

  8. 27 CFR 53.187 - OMB control numbers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 2 2012-04-01 2011-04-01 true OMB control numbers. 53.187... numbers. (a) Purpose. This section collects and displays the control numbers assigned to collections of... control numbers assigned by OMB to collections of information in the regulations in this part. (b) Display...

  9. 7 CFR 1774.100 - OMB Control Number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB Control Number. 1774.100 Section 1774.100...) Grant Application Processing § 1774.100 OMB Control Number. The information collection requirements in... the submission of a paperwork package to OMB and assigned an OMB Control Number. ...

  10. 7 CFR 1779.100 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1779.100 Section 1779.100... AGRICULTURE (CONTINUED) WATER AND WASTE DISPOSAL PROGRAMS GUARANTEED LOANS § 1779.100 OMB control number. The... Management and Budget and have been assigned OMB control number 0572-0122. ...

  11. Why Most Published Research Findings Are False

    PubMed Central

    Ioannidis, John P. A.

    2005-01-01

    Summary There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research. PMID:16060722
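
    The essay's central quantity can be written as the post-study probability that a claimed finding is true; a minimal sketch, ignoring the paper's bias and multiple-team extensions and assuming the standard relation PPV = (1 - beta) R / (R - beta R + alpha), is shown below.

    ```python
    def prob_finding_true(R, alpha=0.05, power=0.8):
        """Post-study probability that a claimed (statistically significant)
        finding is true, ignoring bias and multiple teams:
            PPV = (1 - beta) * R / (R - beta * R + alpha)
        where R is the pre-study odds that a probed relationship is true."""
        beta = 1.0 - power
        return (1.0 - beta) * R / (R - beta * R + alpha)

    # pre-study odds of 1:10 with typical power -> better than a coin flip, but not by much
    print(round(prob_finding_true(R=0.1, alpha=0.05, power=0.8), 3))    # ~0.615
    # exploratory field with 1:1000 odds -> claimed findings are mostly false
    print(round(prob_finding_true(R=0.001, alpha=0.05, power=0.8), 3))  # ~0.016
    ```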

  12. Double Kissing Crush Versus Provisional Stenting for Left Main Distal Bifurcation Lesions: DKCRUSH-V Randomized Trial.

    PubMed

    Chen, Shao-Liang; Zhang, Jue-Jie; Han, Yaling; Kan, Jing; Chen, Lianglong; Qiu, Chunguang; Jiang, Tiemin; Tao, Ling; Zeng, Hesong; Li, Li; Xia, Yong; Gao, Chuanyu; Santoso, Teguh; Paiboon, Chootopol; Wang, Yan; Kwan, Tak W; Ye, Fei; Tian, Nailiang; Liu, Zhizhong; Lin, Song; Lu, Chengzhi; Wen, Shangyu; Hong, Lang; Zhang, Qi; Sheiban, Imad; Xu, Yawei; Wang, Lefeng; Rab, Tanveer S; Li, Zhanquan; Cheng, Guanchang; Cui, Lianqun; Leon, Martin B; Stone, Gregg W

    2017-11-28

    Provisional stenting (PS) is the most common technique used to treat distal left main (LM) bifurcation lesions in patients with unprotected LM coronary artery disease undergoing percutaneous coronary intervention. The double kissing (DK) crush planned 2-stent technique has been shown to improve clinical outcomes in non-LM bifurcations compared with PS, and in LM bifurcations compared with culotte stenting, but has never been compared with PS in LM bifurcation lesions. The authors sought to determine whether a planned DK crush 2-stent technique is superior to PS for patients with true distal LM bifurcation lesions. The authors randomized 482 patients from 26 centers in 5 countries with true distal LM bifurcation lesions (Medina 1,1,1 or 0,1,1) to PS (n = 242) or DK crush stenting (n = 240). The primary endpoint was the 1-year composite rate of target lesion failure (TLF): cardiac death, target vessel myocardial infarction, or clinically driven target lesion revascularization. Routine 13-month angiographic follow-up was scheduled after ascertainment of the primary endpoint. TLF within 1 year occurred in 26 patients (10.7%) assigned to PS, and in 12 patients (5.0%) assigned to DK crush (hazard ratio: 0.42; 95% confidence interval: 0.21 to 0.85; p = 0.02). Compared with PS, DK crush also resulted in lower rates of target vessel myocardial infarction (2.9% vs. 0.4%; p = 0.03) and definite or probable stent thrombosis (3.3% vs. 0.4%; p = 0.02). Clinically driven target lesion revascularization (7.9% vs. 3.8%; p = 0.06) and angiographic restenosis within the LM complex (14.6% vs. 7.1%; p = 0.10) also tended to be less frequent with DK crush compared with PS. There was no significant difference in cardiac death between the groups. In the present multicenter randomized trial, percutaneous coronary intervention of true distal LM bifurcation lesions using a planned DK crush 2-stent strategy resulted in a lower rate of TLF at 1 year than a PS strategy. (Double Kissing and Double Crush Versus Provisional T Stenting Technique for the Treatment of Unprotected Distal Left Main True Bifurcation Lesions: A Randomized, International, Multi-Center Clinical Trial [DKCRUSH-V]; ChiCTR-TRC-11001213). Copyright © 2017 American College of Cardiology Foundation. All rights reserved.

  13. Effects of Sulpiride on True and False Memories of Thematically Related Pictures and Associated Words in Healthy Volunteers

    PubMed Central

    Guarnieri, Regina V.; Ribeiro, Rafaela L.; de Souza, Altay A. Lino; Galduróz, José Carlos F.; Covolan, Luciene; Bueno, Orlando F. A.

    2016-01-01

    Episodic memory, working memory, emotional memory, and attention are subject to dopaminergic modulation. However, the potential role of dopamine on the generation of false memories is unknown. This study defined the role of the dopamine D2 receptor on true and false recognition memories. Twenty-four young, healthy volunteers ingested a single dose of placebo or 400 mg oral sulpiride, a dopamine D2-receptor antagonist, just before starting the recognition memory task in a randomized, double-blind, and placebo-controlled trial. The sulpiride group presented more false recognitions during visual and verbal processing than the placebo group, although both groups had the same indices of true memory. These findings demonstrate that dopamine D2 receptors blockade in healthy volunteers can specifically increase the rate of false recognitions. The findings fit well the two-process view of causes of false memories, the activation/monitoring failures model. PMID:27047394

  14. Impact of study oximeter masking algorithm on titration of oxygen therapy in the Canadian oxygen trial.

    PubMed

    Schmidt, Barbara; Roberts, Robin S; Whyte, Robin K; Asztalos, Elizabeth V; Poets, Christian; Rabi, Yacov; Solimano, Alfonso; Nelson, Harvey

    2014-10-01

    To compare oxygen saturations as displayed to caregivers on offset pulse oximeters in the 2 groups of the Canadian Oxygen Trial. In 5 double-blind randomized trials of oxygen saturation targeting, displayed saturations between 88% and 92% were offset by 3% above or below the true values but returned to true values below 84% and above 96%. During the transition, displayed values remained static at 96% in the lower and at 84% in the higher target group during a 3% change in true saturations. In contrast, displayed values changed rapidly from 88% to 84% in the lower and from 92% to 96% in the higher target group during a 1% change in true saturations. We plotted the distributions of median displayed saturations on days with >12 hours of supplemental oxygen in 1075 Canadian Oxygen Trial participants to reconstruct what caregivers observed at the bedside. The oximeter masking algorithm was associated with an increase in both stability and instability of displayed saturations that occurred during the transition between offset and true displayed values at opposite ends of the 2 target ranges. Caregivers maintained saturations at lower displayed values in the higher than in the lower target group. This differential management reduced the separation between the median true saturations in the 2 groups by approximately 3.5%. The design of the oximeter masking algorithm may have contributed to the smaller-than-expected separation between true saturations in the 2 study groups of recent saturation targeting trials in extremely preterm infants. Copyright © 2014 Elsevier Inc. All rights reserved.
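    The masking transform described above can be written as a piecewise mapping from true to displayed saturation. The sketch below is a reconstruction from this abstract alone (a 3% offset in the 88-92% display window, return to true values below 84% and above 96%, with the static and rapid transition segments described); the breakpoints are assumptions, and the actual Canadian Oxygen Trial algorithm may differ in detail.

```python
import numpy as np

# Piecewise true->displayed SpO2 maps reconstructed from the description above
# (hypothetical breakpoints; the trial's exact masking algorithm is not reproduced here).
LOWER_GROUP = ([0, 84, 85, 93, 96, 100], [0, 84, 88, 96, 96, 100])   # offset +3% in the masked zone
HIGHER_GROUP = ([0, 84, 87, 95, 96, 100], [0, 84, 84, 92, 96, 100])  # offset -3% in the masked zone

def displayed_saturation(true_sat, group):
    """Map true SpO2 (%) to the value shown on the offset study oximeter."""
    x, y = LOWER_GROUP if group == "lower" else HIGHER_GROUP
    return np.interp(true_sat, x, y)

# e.g. in the lower target group a 3% change of true SpO2 (93% -> 96%) leaves the display static at 96%
print(displayed_saturation(np.array([93, 94, 95, 96]), "lower"))
```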

  15. Interpreting ecological diversity indices applied to terminal restriction fragment length polymorphism data: insights from simulated microbial communities.

    PubMed

    Blackwood, Christopher B; Hudleston, Deborah; Zak, Donald R; Buyer, Jeffrey S

    2007-08-01

    Ecological diversity indices are frequently applied to molecular profiling methods, such as terminal restriction fragment length polymorphism (T-RFLP), in order to compare diversity among microbial communities. We performed simulations to determine whether diversity indices calculated from T-RFLP profiles could reflect the true diversity of the underlying communities despite potential analytical artifacts. These include multiple taxa generating the same terminal restriction fragment (TRF) and rare TRFs being excluded by a relative abundance (fluorescence) threshold. True community diversity was simulated using the lognormal species abundance distribution. Simulated T-RFLP profiles were generated by assigning each species a TRF size based on an empirical or modeled TRF size distribution. With a typical threshold (1%), the only consistently useful relationship was between Smith and Wilson evenness applied to T-RFLP data (TRF-E(var)) and true Shannon diversity (H'), with correlations between 0.71 and 0.81. TRF-H' and true H' were well correlated in the simulations using the lowest number of species, but this correlation declined substantially in simulations using greater numbers of species, to the point where TRF-H' cannot be considered a useful statistic. The relationships between TRF diversity indices and true indices were sensitive to the relative abundance threshold, with greatly improved correlations observed using a 0.1% threshold, which was investigated for comparative purposes but is not possible to consistently achieve with current technology. In general, the use of diversity indices on T-RFLP data provides inaccurate estimates of true diversity in microbial communities (with the possible exception of TRF-E(var)). We suggest that, where significant differences in T-RFLP diversity indices were found in previous work, these should be reinterpreted as a reflection of differences in community composition rather than a true difference in community diversity.
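    For reference, the two indices discussed above can be computed from a (real or simulated) peak profile as follows. The 1% relative-abundance threshold matches the typical value cited in the abstract; the helper names and example profile are illustrative only.

```python
import numpy as np

def shannon_diversity(abund):
    """Shannon diversity H' = -sum(p * ln p) over taxa (peaks) with nonzero abundance."""
    a = abund[abund > 0]
    p = a / a.sum()
    return -np.sum(p * np.log(p))

def smith_wilson_evar(abund):
    """Smith & Wilson evenness Evar = 1 - (2/pi) * arctan(var(ln abundance))."""
    x = np.log(abund[abund > 0])
    return 1.0 - (2.0 / np.pi) * np.arctan(x.var())   # np.var divides by S by default

def trf_profile(abund, threshold=0.01):
    """Drop peaks below a relative-abundance (fluorescence) threshold, as in T-RFLP profiling."""
    rel = abund / abund.sum()
    return abund[rel >= threshold]

# usage sketch: peaks below the threshold are discarded before the indices are computed
profile = trf_profile(np.array([500.0, 300.0, 120.0, 40.0, 8.0, 2.0]), threshold=0.01)
print(shannon_diversity(profile), smith_wilson_evar(profile))
```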

  16. 4 CFR 83.9 - Social Security number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 4 Accounts 1 2014-01-01 2013-01-01 true Social Security number. 83.9 Section 83.9 Accounts GOVERNMENT ACCOUNTABILITY OFFICE RECORDS PRIVACY PROCEDURES FOR PERSONNEL RECORDS § 83.9 Social Security number. (a) GAO may not require individuals to disclose their Social Security Number (SSN) unless...

  17. 7 CFR 1944.700 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 13 2014-01-01 2013-01-01 true OMB control number. 1944.700 Section 1944.700...) PROGRAM REGULATIONS (CONTINUED) HOUSING Housing Preservation Grants § 1944.700 OMB control number... to a collection of information unless it displays a valid OMB control number. The valid OMB control...

  18. 1 CFR 21.12 - Reservation of numbers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 1 General Provisions 1 2014-01-01 2012-01-01 true Reservation of numbers. 21.12 Section 21.12 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER PREPARATION, TRANSMITTAL, AND... Reservation of numbers. In a case where related parts or related sections are grouped under a heading, numbers...

  19. 7 CFR 1944.100 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 13 2014-01-01 2013-01-01 true OMB control number. 1944.100 Section 1944.100...) PROGRAM REGULATIONS (CONTINUED) HOUSING Housing Application Packaging Grants § 1944.100 OMB control number... Office of Management and Budget and have been assigned OMB control number 0575-0157. Public reporting...

  20. 7 CFR 1944.450 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 13 2014-01-01 2013-01-01 true OMB control number. 1944.450 Section 1944.450... number. The reporting and recordkeeping requirements contained in this regulation have been approved by the Office of Management and Budget and have been assigned OMB control number 0575-0043. Public...

  1. 1 CFR 21.12 - Reservation of numbers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 1 General Provisions 1 2013-01-01 2012-01-01 true Reservation of numbers. 21.12 Section 21.12 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER PREPARATION, TRANSMITTAL, AND... Reservation of numbers. In a case where related parts or related sections are grouped under a heading, numbers...

  2. Generation of physical random numbers by using homodyne detection

    NASA Astrophysics Data System (ADS)

    Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro

    2016-10-01

    Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
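    The hashing step mentioned above is commonly implemented as multiplication of the raw bit vector by a random binary Toeplitz matrix over GF(2). The sketch below shows that generic construction; the matrix dimensions, seeding, and compression ratio used in the paper are not specified here and are placeholders.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_hash(raw_bits, m, seed=12345):
    """Compress n raw bits into m nearly uniform bits by multiplying with a random
    binary Toeplitz matrix modulo 2. In practice the matrix seed should come from an
    independent random source; the fixed seed here is only a placeholder."""
    n = raw_bits.size
    rng = np.random.default_rng(seed)
    T = toeplitz(rng.integers(0, 2, m), rng.integers(0, 2, n))   # m x n binary Toeplitz matrix
    return T.dot(raw_bits) % 2

# usage sketch: compress 1024 raw bits down to 512 extracted bits
raw = np.random.default_rng(0).integers(0, 2, 1024)
print(toeplitz_hash(raw, 512)[:16])
```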

  3. 41 CFR 101-30.101-3 - National stock number.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 2 2011-07-01 2007-07-01 true National stock number....1-General § 101-30.101-3 National stock number. The national stock number (NSN) is the identifying number assigned to each item of supply. The NSN consists of the 4-digit Federal Supply Classification...

  4. Potential autofertility in true hermaphrodites.

    PubMed

    Bayraktar, Zeki

    2018-02-01

    This article examines the studies on the pregnancies of true hermaphrodites and self-fertilization in hermaphrodite mammals that have been published in the last 40 years. The number of pregnant hermaphrodites reported in the literature since 1975 was 14, the number of pregnancies was 26 and the number of healthy babies born was 20. All of the babies that were born were male. The pregnancy developed following gonadectomy in seven cases (nine pregnancies). In some cases, either gonadectomy was not performed at all or it was performed after pregnancy (eight cases, 17 pregnancies). The karyotype was 46,XX in four of these eight cases that became pregnant despite in situ ovotestis, while it was 46,XX/46,XY in the other four cases (chimera). No pregnancy through self-fertilization has been reported in humans. However, self-fertilization was detected in mammals. Pregnancy through self-fertilization developed in 7 of over 250 hermaphrodite rabbits with ovotestes that were observed in isolation, and all of them gave birth to healthy offspring. Furthermore, the ovarian tissues of true hermaphrodites were mainly functional and ovulatory. The testicular tissues were mainly immature. However, spermatogenesis was determined in some cases. In fact, both ovulation and spermatogenesis were detected in some cases. All of these findings show that true hermaphrodites with ovarian and testicular tissues are potentially autofertile.

  5. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
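    As a much-simplified illustration of random sphere packing (plain random sequential addition, not the growth-rate algorithm developed in the dissertation), the sketch below inserts spheres drawn from a polydisperse size distribution at random positions in a cubic box and rejects positions that overlap previously placed spheres. The box size and radii are illustrative only.

```python
import numpy as np

def rsa_pack(box, radii, max_tries=20000, seed=0):
    """Random sequential addition: insert spheres (largest first) at uniformly random
    positions inside a cubic box of side `box`, rejecting any trial position that
    overlaps an already placed sphere; returns centers, radii, and the packing fraction."""
    rng = np.random.default_rng(seed)
    centers, placed = [], []
    for r in np.sort(radii)[::-1]:
        for _ in range(max_tries):
            c = rng.uniform(r, box - r, 3)            # keep the whole sphere inside the box
            if all(np.linalg.norm(c - p) >= r + q for p, q in zip(centers, placed)):
                centers.append(c)
                placed.append(r)
                break
    frac = sum(4.0 / 3.0 * np.pi * r**3 for r in placed) / box**3
    return np.array(centers), np.array(placed), frac

# usage sketch with an illustrative bimodal size distribution
centers, radii, frac = rsa_pack(box=10.0, radii=np.r_[np.full(30, 1.0), np.full(300, 0.4)])
print(frac)
```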

  6. Juvenile Diversion: An Experimental Analysis of Effectiveness.

    ERIC Educational Resources Information Center

    Severy, Lawrence J.; Whitaker, J. Michael

    1982-01-01

    The desirability of combining tests of theory with evaluations of treatment modalities is argued in an investigation of the effectiveness of a juvenile diversion program. Using a true experimental design (with randomization), recidivism analyses dependent on court record data failed to demonstrate the relative superiority of any of three treatment…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giovannetti, Vittorio; Lloyd, Seth; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139

    The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Renyi entropies at the output of a channel. The conjecture is proven true for all Renyi entropies of integer order greater than two in a class of Gaussian bosonic channels where the input signal is randomly displaced or where it is coupled linearly to an external environment.
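    For reference, the Rényi entropy of order α of a state ρ, and the additivity property of the minimal output entropy referred to above, read:

```latex
S_\alpha(\rho) = \frac{1}{1-\alpha}\,\log \operatorname{Tr}\rho^{\alpha}, \qquad \alpha > 0,\ \alpha \neq 1,
\qquad
\min_{\rho_{12}} S_\alpha\!\big((\Phi_1 \otimes \Phi_2)(\rho_{12})\big)
  = \min_{\rho_1} S_\alpha\!\big(\Phi_1(\rho_1)\big) + \min_{\rho_2} S_\alpha\!\big(\Phi_2(\rho_2)\big).
```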

  8. Assessing the statistical significance of the achieved classification error of classifiers constructed using serum peptide profiles, and a prescription for random sampling repeated studies for massive high-throughput genomic and proteomic studies.

    PubMed

    Lyons-Weiler, James; Pelikan, Richard; Zeh, Herbert J; Whitcomb, David C; Malehorn, David E; Bigbee, William L; Hauskrecht, Milos

    2005-01-01

    Peptide profiles generated using SELDI/MALDI time of flight mass spectrometry provide a promising source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
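    A generic version of the permutation procedure described above can be written with scikit-learn as in the sketch below; the classifier, fold count, and permutation count are placeholders, and this is not the authors' exact PACE implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def pace_p_value(clf, X, y, n_perm=1000, cv=5, rng=np.random.default_rng(0)):
    """Permutation-Achieved Classification Error: compare the cross-validated error on the
    true labels with the error distribution obtained after randomly reassigning class labels."""
    ace = 1.0 - cross_val_score(clf, X, y, cv=cv).mean()
    perm_errors = np.array([
        1.0 - cross_val_score(clf, X, rng.permutation(y), cv=cv).mean()
        for _ in range(n_perm)
    ])
    # one-sided p-value: fraction of label permutations that do at least as well as the true labels
    p_value = (np.sum(perm_errors <= ace) + 1) / (n_perm + 1)
    return ace, p_value
```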

  9. The evidence for immunotherapy in dermatomyositis and polymyositis: a systematic review.

    PubMed

    Vermaak, Erin; Tansley, Sarah L; McHugh, Neil J

    2015-12-01

    Dermatomyositis and polymyositis are rare chronic inflammatory disorders with significant associated morbidity and mortality despite treatment. High-dose corticosteroids in addition to other interventions such as immunosuppressants, immunomodulators, and more recently, biologics are commonly used in clinical practice; however, there are no clear guidelines directing their use. Our objective was to systematically review the evidence for immunotherapy in the treatment of dermatomyositis and polymyositis. Relevant studies were identified through Embase and PubMed database searches. Trials were selected using pre-determined selection criteria and then assessed for quality. Randomized controlled trials and experimental studies without true randomization and including adult patients with definite or probable dermatomyositis or polymyositis were evaluated. Any type of immunotherapy was considered. Clinical improvement, judged by assessment of muscle strength after 6 months, was the primary outcome. Secondary outcomes included IMACS definition of improvement, improvements in patient and physician global scores, physical function, and muscle enzymes. Twelve studies met eligibility criteria. Differences in trial design, quality, and variable reporting of baseline characteristics and outcomes made direct comparison impossible. Although no treatment can be recommended on the basis of this review, improved outcomes were demonstrated with a number of agents including methotrexate, azathioprine, ciclosporin, rituximab, and intravenous immunoglobulin. Plasmapheresis and leukapheresis were of no apparent benefit. More high-quality randomized controlled trials are needed to establish the role of immunosuppressive agents in the treatment of these conditions and the clinical context in which they are most likely to be beneficial.

  10. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
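    The notion of a survey availability event can be illustrated with a toy simulation: draw repeated random surveys of n stations from a hypothetical patchy (here lognormal) density field and count how often the survey mean falls outside 75-125% of the true mean. The field, sample sizes, and survey counts below are illustrative assumptions, not the surfclam or oyster data.

```python
import numpy as np

rng = np.random.default_rng(1)

def availability_event_prob(cell_densities, n_samples, n_surveys=2000):
    """Probability that a survey mean falls outside 75-125% of the true mean
    when n_samples stations are drawn at random from a patchy density field."""
    true_mean = cell_densities.mean()
    means = np.array([rng.choice(cell_densities, n_samples, replace=False).mean()
                      for _ in range(n_surveys)])
    return np.mean((means > 1.25 * true_mean) | (means < 0.75 * true_mean))

# hypothetical patchy population: most cells near zero, a few dense aggregations
patchy = rng.lognormal(mean=0.0, sigma=2.0, size=10000)
for n in (4, 8, 15, 30):
    print(n, availability_event_prob(patchy, n))
```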

  11. 32 CFR 635.21 - USACRC control numbers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true USACRC control numbers. 635.21 Section 635.21 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND CRIMINAL INVESTIGATIONS LAW ENFORCEMENT REPORTING Offense Reporting § 635.21 USACRC control numbers. (a) Case numbers to support reporting...

  12. 20 CFR 209.3 - Social security number required.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Social security number required. 209.3 Section... RAILROAD EMPLOYERS' REPORTS AND RESPONSIBILITIES § 209.3 Social security number required. Each employer shall furnish to the Board a social security number for each employee for whom any report is submitted...

  13. 20 CFR 209.3 - Social security number required.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Social security number required. 209.3 Section... RAILROAD EMPLOYERS' REPORTS AND RESPONSIBILITIES § 209.3 Social security number required. Each employer shall furnish to the Board a social security number for each employee for whom any report is submitted...

  14. 7 CFR 1940.1000 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 13 2014-01-01 2013-01-01 true OMB control number. 1940.1000 Section 1940.1000....1000 OMB control number. The collection of information requirements contained in this regulation has been approved by the Office of Management and Budget and assigned OMB control number 0575-0145. Public...

  15. 7 CFR 1924.50 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1924.50 Section 1924.50... control number. The reporting and recordkeeping requirements contained in this regulation have been approved by the Office of Management and Budget (OMB) and have been assigned OMB control number 0575-0042...

  16. 7 CFR 1902.50 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1902.50 Section 1902.50... control number. The reporting and recordkeeping requirements contained in this regulation have been approved by the OMB and have been assigned OMB Control Number 0575-0158. [70 FR 59228, Oct. 12, 2005] ...

  17. 7 CFR 1942.150 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 13 2014-01-01 2013-01-01 true OMB control number. 1942.150 Section 1942.150... § 1942.150 OMB control number. The collection of information requirements in this regulation have been approved by the Office of Management and Budget and have been assigned OMB control number 0575-0120. ...

  18. 7 CFR 1924.150 - OMB Control Number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB Control Number. 1924.150 Section 1924.150... Number. The reporting requirements contained in this subpart have been approved by the Office of Management and Budget (OMB) and have been assigned OMB control number 0575-0164. Public reporting burden for...

  19. 7 CFR 1924.300 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1924.300 Section 1924.300... control number. The reporting and recordkeeping requirements contained in this regulation have been approved by the Office of Management and Budget (OMB) and have been assigned OMB control number 0575-0082...

  20. 7 CFR 1927.100 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1927.100 Section 1927.100... control number. The reporting requirements contained in this regulation have been approved by the Office of Management and Budget and have been assigned OMB control number 0575-0147. Public reporting burden...

  1. 7 CFR 1942.50 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 13 2014-01-01 2013-01-01 true OMB control number. 1942.50 Section 1942.50...) PROGRAM REGULATIONS (CONTINUED) ASSOCIATIONS Community Facility Loans § 1942.50 OMB control number. The... Management and Budget (OMB) and have been assigned OMB control number 0575-0015. Public reporting burden for...

  2. 7 CFR 1777.100 - OMB control number.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true OMB control number. 1777.100 Section 1777.100... AGRICULTURE (CONTINUED) SECTION 306C WWD LOANS AND GRANTS § 1777.100 OMB control number. The reporting and... assigned OMB control number 0570-0001. Public reporting burden for this collection of information is...

  3. The origin and reduction of spurious extrahepatic counts observed in 90Y non-TOF PET imaging post radioembolization

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud

    2018-04-01

    Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF-PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measured 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and from their poor robustness to emission-transmission inconsistency. A newly proposed random correction method succeeds in removing the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified to be at the origin of the unusual 90Y true-coincidence radial profile. The removal of the spurious extrahepatic counts by TOF reconstruction was theoretically explained by its better robustness against emission-transmission inconsistency. A novel random correction method was proposed to overcome the issue in non-TOF PET. Further studies are needed to assess the robustness of the novel random correction method.

  4. Acupuncture treatment for insulin sensitivity of women with polycystic ovary syndrome and insulin resistance: a study protocol for a randomized controlled trial.

    PubMed

    Li, Juan; Ng, Ernest Hung Yu; Stener-Victorin, Elisabet; Hu, Zhenxing; Shao, Xiaoguang; Wang, Haiyan; Li, Meifang; Lai, Maohua; Xie, Changcai; Su, Nianjun; Yu, Chuyi; Liu, Jia; Wu, Taixiang; Ma, Hongxia

    2017-03-09

    Our prospective pilot study of acupuncture affecting insulin sensitivity on polycystic ovary syndrome (PCOS) combined with insulin resistance (IR) showed that acupuncture had a significant effect on improving the insulin sensitivity of PCOS. But there is still no randomized controlled trial to determine the effect of acupuncture on the insulin sensitivity in women with PCOS and IR. In this article, we present the protocol of a randomized controlled trial to compare the effect of true acupuncture on the insulin sensitivity of these patients compared with metformin and sham acupuncture. Acupuncture may be an effective therapeutic alternative that is superior to metformin and sham acupuncture in improving the insulin sensitivity of PCOS combined with IR. This study is a multi-center, controlled, double-blind, and randomized clinical trial aiming to evaluate the effect of acupuncture on the insulin sensitivity in PCOS combined with IR. In total 342 patients diagnosed with PCOS and IR will be enrolled. Participants will be randomized to one of the three groups: (1) true acupuncture + metformin placebo; (2) sham acupuncture + metformin, and (3) sham acupuncture + metformin placebo. Participants and assessors will be blinded. The acupuncture intervention will be given 3 days per week for a total of 48 treatment sessions during 4 months. Metformin (0.5 g per pill) or placebo will be given, three times per day, and for 4 months. Primary outcome measures are changes in homeostasis model assessment of insulin resistance (HOMA-IR) and improvement rate of HOMA-IR by oral glucose tolerance test (OGTT) and insulin releasing test (Ins). Secondary outcome measures are homeostasis model assessment-β (HOMA-β), area under the curve for glucose and insulin, frequency of regular menstrual cycles and ovulation, body composition, metabolic profile, hormonal profile, questionnaires, side effect profile, and expectation and credibility of treatment. Outcome measures are collected at baseline, at the end of treatments, and 3 months after the last acupuncture treatment. On completion of the screening visit, randomization will be conducted using a central randomization system. This study will investigate the effects of acupuncture on the insulin sensitivity of PCOS and IR women compared with metformin and sham acupuncture. We will test whether true acupuncture with needles placed in skeletal muscles and stimulated manually and by electrical stimulation is more effective than metformin and sham acupuncture with superficial needle placement with no manual or electrical stimulation in improving the insulin sensitivity in PCOS women with IR. ClinicalTrials.gov, NCT02491333 ; Chinese Clinical Trial Registry, ChiCTR-ICR-15006639. Registered on 24 June 2015.
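    For reference, the primary and secondary indices named above are usually computed with the standard homeostasis-model formulas sketched below (fasting glucose in mmol/L, fasting insulin in µU/mL); whether the trial applies exactly these formulas is not stated in the protocol excerpt.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance (standard formula):
    HOMA-IR = fasting insulin (microU/mL) x fasting glucose (mmol/L) / 22.5."""
    return fasting_insulin_uu_ml * fasting_glucose_mmol_l / 22.5

def homa_beta(fasting_glucose_mmol_l, fasting_insulin_uu_ml):
    """HOMA-beta (%) = 20 x fasting insulin / (fasting glucose - 3.5);
    only defined for fasting glucose above 3.5 mmol/L."""
    return 20.0 * fasting_insulin_uu_ml / (fasting_glucose_mmol_l - 3.5)

# usage sketch: glucose 5.5 mmol/L, insulin 15 microU/mL
print(homa_ir(5.5, 15.0), homa_beta(5.5, 15.0))
```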

  5. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity of computer realizations, and fast speed of random number generations. This pseudo-random number generator system can be used as ideal synchronous and self-synchronizing stream cipher systems for secure communications.
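    A minimal sketch of a one-way coupled map lattice generator is shown below; the map (logistic, r = 4), coupling strength, lattice size, and bit-extraction rule are illustrative choices, not the specific construction analyzed in the paper.

```python
import numpy as np

def cml_prng_bits(n_bits, lattice_size=8, eps=0.95, seed=0.321):
    """Generic one-way coupled map lattice: each site is driven by its left neighbour
    (periodic ring) through the chaotic map f(x) = 4x(1-x); one output bit per iteration
    is extracted by thresholding the last site."""
    x = np.mod(seed * np.arange(1, lattice_size + 1) + 0.0137, 1.0)   # crude distinct initial values in (0, 1)
    bits = np.empty(n_bits, dtype=np.uint8)
    for n in range(n_bits):
        fx = 4.0 * x * (1.0 - x)
        x = (1.0 - eps) * fx + eps * np.roll(fx, 1)                   # one-way coupling from the left
        bits[n] = 1 if x[-1] > 0.5 else 0
    return bits

print(cml_prng_bits(32))
```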

  6. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

    The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC)-correction can keep the test size to nominal levels even when the number of clusters is as low as 10, and is robust to the moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG)-correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t test and KC-correction for the CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes due to fewer assumptions and robustness to the misspecification of the covariance structure. PMID:25345738

  7. Priming psychic and conjuring abilities of a magic demonstration influences event interpretation and random number generation biases

    PubMed Central

    Mohr, Christine; Koutrakis, Nikolaos; Kuhn, Gustav

    2015-01-01

    Magical ideation and belief in the paranormal is considered to represent a trait-like character; people either believe in it or not. Yet, anecdotes indicate that exposure to an anomalous event can turn skeptics into believers. This transformation is likely to be accompanied by altered cognitive functioning such as impaired judgments of event likelihood. Here, we investigated whether the exposure to an anomalous event changes individuals’ explicit traditional (religious) and non-traditional (e.g., paranormal) beliefs as well as cognitive biases that have previously been associated with non-traditional beliefs, e.g., repetition avoidance when producing random numbers in a mental dice task. In a classroom, 91 students saw a magic demonstration after their psychology lecture. Before the demonstration, half of the students were told that the performance was done respectively by a conjuror (magician group) or a psychic (psychic group). The instruction influenced participants’ explanations of the anomalous event. Participants in the magician, as compared to the psychic group, were more likely to explain the event through conjuring abilities while the reverse was true for psychic abilities. Moreover, these explanations correlated positively with their prior traditional and non-traditional beliefs. Finally, we observed that the psychic group showed more repetition avoidance than the magician group, and this effect remained the same regardless of whether assessed before or after the magic demonstration. We conclude that pre-existing beliefs and contextual suggestions both influence people’s interpretations of anomalous events and associated cognitive biases. Beliefs and associated cognitive biases are likely flexible well into adulthood and change with actual life events. PMID:25653626

  8. Cluster Stability Estimation Based on a Minimal Spanning Trees Approach

    NASA Astrophysics Data System (ADS)

    Volkovich, Zeev (Vladimir); Barzily, Zeev; Weber, Gerhard-Wilhelm; Toledano-Kitai, Dvora

    2009-08-01

    Among the areas of data and text mining which are employed today in science, economics, and technology, clustering theory serves as a preprocessing step in data analysis. However, there are many open questions still waiting for a theoretical and practical treatment, e.g., the problem of determining the true number of clusters has not been satisfactorily solved. In the current paper, this problem is addressed by the cluster stability approach. For several possible numbers of clusters we estimate the stability of partitions obtained from clustering of samples. Partitions are considered consistent if their clusters are stable. Cluster validity is measured as the total number of edges in the clusters' minimal spanning trees that connect points from different samples. Specifically, we use the Friedman and Rafsky two-sample test statistic. The homogeneity hypothesis of well-mingled samples within the clusters leads to an asymptotically normal distribution of the considered statistic. Based on this fact, the standard score of this edge count is computed, and the partition quality is represented by the worst cluster, corresponding to the minimal standard score value. It is natural to expect that the true number of clusters can be characterized by the empirical distribution having the shortest left tail. The proposed methodology sequentially creates the described value distribution and estimates its left-asymmetry. Numerical experiments, presented in the paper, demonstrate the ability of the approach to detect the true number of clusters.
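    The core quantity, the number of minimal-spanning-tree edges joining points from different samples (the Friedman-Rafsky statistic), can be computed as in the sketch below; the standardization and left-tail asymmetry analysis described in the abstract are omitted.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def cross_sample_edges(points_a, points_b):
    """Friedman-Rafsky statistic: number of edges in the minimal spanning tree of the
    pooled points that connect a point from sample A to a point from sample B."""
    pooled = np.vstack([points_a, points_b])
    labels = np.r_[np.zeros(len(points_a)), np.ones(len(points_b))]
    mst = minimum_spanning_tree(squareform(pdist(pooled))).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

# usage sketch: two well-mingled samples give many cross-sample edges, two separated ones give few
rng = np.random.default_rng(0)
print(cross_sample_edges(rng.normal(0, 1, (50, 2)), rng.normal(0, 1, (50, 2))))
print(cross_sample_edges(rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))))
```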

  9. Skyrme density functional description of the double magic 78Ni nucleus

    NASA Astrophysics Data System (ADS)

    Brink, D. M.; Stancu, Fl.

    2018-06-01

    We calculate the single-particle spectrum of the double magic nucleus 78Ni in a Hartree-Fock approach using the Skyrme density-dependent effective interaction containing central, spin-orbit, and tensor parts. We show that the tensor part has an important effect on the spin-orbit splitting of the proton 1f orbit that may explain the survival of magicity so far from the stability valley. We confirm the inversion of the 1f5/2 and 2p3/2 levels at the neutron number 48 in the Ni isotopic chain expected from previous Monte Carlo shell-model calculations and supported by experimental observation.

  10. Lucky Belief in Science Education - Gettier Cases and the Value of Reliable Belief-Forming Processes

    NASA Astrophysics Data System (ADS)

    Brock, Richard

    2018-05-01

    The conceptualisation of knowledge as justified true belief has been shown to be, at the very least, an incomplete account. One challenge to the justified true belief model arises from the proposition of situations in which a person possesses a belief that is both justified and true which some philosophers intuit should not be classified as knowledge. Though situations of this type have been imagined by a number of writers, they have come to be labelled Gettier cases. Gettier cases arise when a fallible justification happens to lead to a true belief in one context, a case of `lucky belief'. In this article, it is argued that students studying science may make claims that resemble Gettier cases. In some contexts, a student may make a claim that is both justified and true but which arises from an alternative conception of a scientific concept. A number of instances of lucky belief in topics in science education are considered leading to an examination of the criteria teachers use to assess students' claims in different contexts. The possibility of lucky belief leads to the proposal that, in addition to the acquisition of justified true beliefs, the development of reliable belief-forming processes is a significant goal of science education. The pedagogic value of various kinds of claims is considered and, it is argued, the criteria used to judge claims may be adjusted to suit the context of assessment. It is suggested that teachers should be alert to instances of lucky belief that mask alternative conceptions.

  11. 7 CFR 1940.1000 - OMB control number.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 13 2010-01-01 2009-01-01 true OMB control number. 1940.1000 Section 1940.1000 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS....1000 OMB control number. The collection of information requirements contained in this regulation has...

  12. 22 CFR 1429.25 - Number of copies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Number of copies. 1429.25 Section 1429.25 Foreign Relations FOREIGN SERVICE LABOR RELATIONS BOARD; FEDERAL LABOR RELATIONS AUTHORITY; GENERAL... AND GENERAL REQUIREMENTS General Requirements § 1429.25 Number of copies. Unless otherwise provided by...

  13. Multi-dimensional Fokker-Planck equation analysis using the modified finite element method

    NASA Astrophysics Data System (ADS)

    Náprstek, J.; Král, R.

    2016-09-01

    The Fokker-Planck equation (FPE) is a frequently used tool for the solution of the cross probability density function (PDF) of a dynamic system response excited by a vector of random processes. FEM represents a very effective solution approach, particularly when transition processes are investigated or a more detailed solution is needed. Existing papers deal with single-degree-of-freedom (SDOF) systems only, so the respective FPE includes only two independent space variables. Stepping beyond this limit to MDOF systems, a number of specific problems related to true multi-dimensionality must be overcome. Unlike earlier studies, multi-dimensional simplex elements in arbitrary dimension should be deployed and rectangular (multi-brick) elements abandoned. Simple closed formulae for integration over a multi-dimensional domain have been derived. Another specific problem is the generation of the multi-dimensional finite element mesh. The assembly of the global system matrices requires newly composed algorithms because of the multi-dimensionality. The system matrices are quite full, so the advantages of sparsity commonly exploited in conventional 2D/3D FEM applications cannot be used. After verification of partial algorithms, an illustrative example dealing with a 2DOF non-linear aeroelastic system in combination with random and deterministic excitations is discussed.

  14. What Is Better Than Coulomb Failure Stress? A Ranking of Scalar Static Stress Triggering Mechanisms from 105 Mainshock-Aftershock Pairs

    NASA Astrophysics Data System (ADS)

    Meade, Brendan J.; DeVries, Phoebe M. R.; Faller, Jeremy; Viegas, Fernanda; Wattenberg, Martin

    2017-11-01

    Aftershocks may be triggered by the stresses generated by preceding mainshocks. The temporal frequency and maximum size of aftershocks are well described by the empirical Omori and Bath laws, but spatial patterns are more difficult to forecast. Coulomb failure stress is perhaps the most common criterion invoked to explain spatial distributions of aftershocks. Here we consider the spatial relationship between patterns of aftershocks and a comprehensive list of 38 static elastic scalar metrics of stress (including stress tensor invariants, maximum shear stress, and Coulomb failure stress) from 213 coseismic slip distributions worldwide. The rates of true-positive and false-positive classification of regions with and without aftershocks are assessed with receiver operating characteristic analysis. We infer that the stress metrics that are most consistent with observed aftershock locations are maximum shear stress and the magnitude of the second and third invariants of the stress tensor. These metrics are significantly better than random assignment at a significance level of 0.005 in over 80% of the slip distributions. In contrast, the widely used Coulomb failure stress criterion is distinguishable from random assignment in only 51-64% of the slip distributions. These results suggest that a number of alternative scalar metrics are better predictors of aftershock locations than classic Coulomb failure stress change.
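    The best-performing scalar metrics named above can be computed directly from a symmetric stress-change tensor; the sketch below shows the maximum shear stress and the second and third tensor invariants (computed here for the full tensor; the paper may use the deviatoric part). Coulomb failure stress would additionally require a receiver-fault orientation and an effective friction coefficient, which are omitted.

```python
import numpy as np

def stress_metrics(sigma):
    """Scalar metrics from a 3x3 stress-change tensor: maximum shear stress
    (half the difference of the extreme principal stresses) and the second and
    third invariants of the tensor."""
    s = 0.5 * (sigma + sigma.T)                       # enforce symmetry
    principals = np.sort(np.linalg.eigvalsh(s))[::-1]
    max_shear = 0.5 * (principals[0] - principals[-1])
    I1 = np.trace(s)
    I2 = 0.5 * (I1**2 - np.trace(s @ s))
    I3 = np.linalg.det(s)
    return {"max_shear": max_shear, "I2": I2, "I3": I3}

# usage sketch with an arbitrary illustrative stress change (e.g. in MPa)
print(stress_metrics(np.array([[0.3, 0.1, 0.0], [0.1, -0.2, 0.05], [0.0, 0.05, 0.1]])))
```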

  15. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    NASA Astrophysics Data System (ADS)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes equations solver with immersed boundaries capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the modeling error covariance matrix and its impact on the estimator performance are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
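    For context, a single analysis step of a stochastic ensemble Kalman filter with an augmented state (flow state plus bias/boundary-condition parameters) looks roughly like the sketch below; the linear observation operator H and Gaussian observation noise are simplifying assumptions for illustration, not the paper's Navier-Stokes setup.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_cov, rng=np.random.default_rng(0)):
    """Stochastic EnKF analysis step: each ensemble member of the (possibly augmented)
    state vector is updated toward perturbed observations using the sample covariances."""
    n_state, n_ens = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = ensemble - x_mean                             # state anomalies
    Y = H @ X                                         # observation-space anomalies
    P_yy = Y @ Y.T / (n_ens - 1) + obs_cov
    P_xy = X @ Y.T / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)                    # Kalman gain
    obs_perturbed = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), obs_cov, n_ens).T
    return ensemble + K @ (obs_perturbed - H @ ensemble)
```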

  16. Estimated number of infants detected and missed by critical congenital heart defect screening.

    PubMed

    Ailes, Elizabeth C; Gilboa, Suzanne M; Honein, Margaret A; Oster, Matthew E

    2015-06-01

    In 2011, the US Secretary of Health and Human Services recommended universal screening of newborns for critical congenital heart defects (CCHDs), yet few estimates of the number of infants with CCHDs likely to be detected through universal screening exist. Our objective was to estimate the number of infants with nonsyndromic CCHDs in the United States likely to be detected (true positives) and missed (false negatives) through universal newborn CCHD screening. We developed a simulation model based on estimates of birth prevalence, prenatal diagnosis, late detection, and sensitivity of newborn CCHD screening through pulse oximetry to estimate the number of true-positive and false-negative nonsyndromic cases of the 7 primary and 5 secondary CCHD screening targets identified through screening. We estimated that 875 (95% uncertainty interval [UI]: 705-1060) US infants with nonsyndromic CCHDs, including 470 (95% UI: 360-585) infants with primary CCHD screening targets, will be detected annually through newborn CCHD screening. An additional 880 (UI: 700-1080) false-negative screenings, including 280 (95% UI: 195-385) among primary screening targets, are expected. We estimated that similar numbers of CCHDs would be detected under scenarios comparing "lower" (∼19%) and "higher" (∼41%) than current prenatal detection prevalences. A substantial number of nonsyndromic CCHD cases are likely to be detected through universal CCHD screening; however, an equal number of false-negative screenings, primarily among secondary targets of screening, are likely to occur. Future efforts should document the true impact of CCHD screening in practice. Copyright © 2015 by the American Academy of Pediatrics.

  17. 14 CFR 1212.604 - Social security numbers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Social security numbers. 1212.604 Section... REGULATIONS Instructions for NASA Employees § 1212.604 Social security numbers. (a) It is unlawful for NASA to...' refusal to disclose their social security numbers, except where: (1) The disclosure is required by law; or...

  18. True and sham acupuncture produced similar frequency of ovulation and improved LH to FSH ratios in women with polycystic ovary syndrome.

    PubMed

    Pastore, Lisa M; Williams, Christopher D; Jenkins, Jeffrey; Patrie, James T

    2011-10-01

    Acupuncture may represent a nonpharmaceutical treatment for women with polycystic ovary syndrome (PCOS), based on four studies. The objective of the study was to determine whether true, as compared with sham, acupuncture normalizes pituitary gonadotropin hormones and increases ovulatory frequency in women with PCOS. This was a randomized, double-blind, sham-controlled clinical trial (5 month protocol). The study was conducted in central Virginia. Eighty-four reproductive-aged women completed the intervention. Eligibility required a PCOS diagnosis and no hormonal intervention 60 d before enrollment. Intervention included 12 sessions of true or sham acupuncture (Park sham device) for 8 wk. Serum LH and FSH at baseline, after intervention, and 3 months later were measured. Ovulation was measured with weekly urine or blood samples. Both arms demonstrated a similar mean ovulation rate over the 5 months (0.37/month among n = 40 true acupuncture and 0.40/month among n = 44 sham participants, P = 0.6), similar LH to FSH ratio improvement (-0.5 and -0.8 true and sham, respectively, P < 0.04 after intervention vs. baseline) and a similar decline in LH over the 5-month protocol (P < 0.05). Neither arm experienced a change in FSH. There were seven pregnancies (no difference by intervention, P = 0.7). Lower fasting insulin and free testosterone were highly correlated with a higher ovulation rate within the true acupuncture group only (P = 0.03), controlling for prestudy menstrual frequency and body mass index. We were unable to discern a difference between the true and sham acupuncture protocols for these women with PCOS, and both groups had a similar improvement in their LH/FSH ratio.

  19. Analysis of Covariance: Is It the Appropriate Model to Study Change?

    ERIC Educational Resources Information Center

    Marston, Paul T.; Borich, Gary D.

    The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type-I errors. When corrected for this error, this method showed the least power of the four. This outcome was clearly the result of the…

  20. Thine Own Self: True Self-Concept Accessibility and Meaning in Life

    PubMed Central

    Schlegel, Rebecca J.; Hicks, Joshua A.; Arndt, Jamie; King, Laura A.

    2016-01-01

    A number of philosophical and psychological theories suggest the true self is an important contributor to well-being. The present research examined whether the cognitive accessibility of the true self-concept would predict the experience of meaning in life. To ensure that any observed effects were due to the true self-concept rather than the self-concept more generally, we utilized actual self-concept accessibility as a control variable in all studies. True and actual self-concepts were defined as including those traits which are enacted around close others versus most others (Studies 1 through 3) or as traits that refer to “who you really are” vs. “who you are during most of your activities” (Studies 4 and 5), respectively. Studies 1 and 2 showed that individual differences in true self-concept accessibility, but not differences in actual self-concept accessibility, predicted meaning in life. Study 3 showed that priming traits related to the true self led to enhanced meaning in life. Studies 4 and 5 provided correlational and experimental support for the role of true self-concept accessibility in meaning in life, even when traits were defined without reference to social relationships and when state self-esteem and self-reported authenticity were controlled. Implications for the study of the true self-concept and authenticity are discussed. PMID:19159144

  1. Increasing Text Comprehension and Graphic Note Taking Using a Partial Graphic Organizer

    ERIC Educational Resources Information Center

    Robinson, Daniel H.; Katayama, Andrew D.; Beth, Alicia; Odom, Susan; Hsieh, Ya-Ping; Vanderveen, Arthur

    2006-01-01

    In 3 quasi-experiments using intact classrooms and 1 true experiment using random assignment, students completed partially complete graphic organizers (GOs) or studied complete GOs that covered course content. The partial task led to increased overall examination performance in all experiments. Also, the authors measured students' note-taking…

  2. An Illustrative Example of Propensity Score Matching with Education Research

    ERIC Educational Resources Information Center

    Lane, Forrest C.; To, Yen M.; Shelley, Kyna; Henson, Robin K.

    2012-01-01

    Researchers may be interested in examining the impact of programs that prepare youth and adults for successful careers but unable to implement experimental designs with true randomization of participants. As a result, these studies can be compromised by underlying factors that impact group selection and thus lead to potentially biased results.…

  3. Cognitive Clozing To Teach Them To Think.

    ERIC Educational Resources Information Center

    Viaggio, Sergio

    A cloze-type procedure can be used effectively to teach interpreters how to anticipate what the speaker will say, inferring communicative intention. The exercise uses a text from which words are deleted, not randomly as in the true cloze procedure, but in significant locations or contexts. The words or groups of words suppressed are progressively…

  4. The Effects of Observation Errors on the Attack Vulnerability of Complex Networks

    DTIC Science & Technology

    2012-11-01

    To construct a true network, a topology is selected (Erdős-Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), or small world)…

  5. Just-in-Time Teaching Techniques through Web Technologies for Vocational Students' Reading and Writing Abilities

    ERIC Educational Resources Information Center

    Chantoem, Rewadee; Rattanavich, Saowalak

    2016-01-01

    This research compares the English language achievements of vocational students, their reading and writing abilities, and their attitudes towards learning English taught with just-in-time teaching techniques through web technologies and conventional methods. The experimental and control groups were formed, a randomized true control group…

  6. 21 CFR 601.15 - Foreign establishments and products: samples for each importation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Foreign establishments and products: samples for each importation. 601.15 Section 601.15 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF... establishments and products: samples for each importation. Random samples of each importation, obtained by the...

  7. Prevalence of paratuberculosis in the dairy goat and dairy sheep industries in Ontario, Canada.

    PubMed

    Bauman, Cathy A; Jones-Bitton, Andria; Menzies, Paula; Toft, Nils; Jansen, Jocelyn; Kelton, David

    2016-02-01

    A cross-sectional study was undertaken (October 2010 to August 2011) to estimate the prevalence of paratuberculosis in the small ruminant dairy industries in Ontario, Canada. Blood and feces were sampled from 580 goats and 397 sheep (lactating and 2 y of age or older) that were randomly selected from 29 randomly selected dairy goat herds and 21 convenience-selected dairy sheep flocks. Fecal samples were analyzed using bacterial culture (BD BACTEC MGIT 960) and polymerase chain reaction (Tetracore); serum samples were tested with the Prionics Parachek enzyme-linked immunosorbent assay (ELISA). Using 3-test latent class Bayesian models, true farm-level prevalence was estimated to be 83.0% [95% probability interval (PI): 62.6% to 98.1%] for dairy goats and 66.8% (95% PI: 41.6% to 91.4%) for dairy sheep. The within-farm true prevalence for dairy goats was 35.2% (95% PI: 23.0% to 49.8%) and for dairy sheep was 48.3% (95% PI: 27.6% to 74.3%). These data indicate that a paratuberculosis control program for small ruminants is needed in Ontario.

  8. Diffusion of a particle in the spatially correlated exponential random energy landscape: Transition from normal to anomalous diffusion.

    PubMed

    Novikov, S V

    2018-01-14

    Diffusive transport of a particle in a spatially correlated random energy landscape having an exponential density of states has been considered. We exactly calculate the diffusivity in the nondispersive quasi-equilibrium transport regime for the 1D transport model and find that for slowly decaying correlation functions the diffusivity becomes singular at some particular temperature higher than the temperature of the transition to the true non-equilibrium dispersive transport regime. This means that the diffusion becomes anomalous and the displacement does not follow the usual ∝ t^(1/2) law. In such a situation, the fully developed non-equilibrium regime emerges in two stages: first, at some temperature there is a transition from normal to anomalous diffusion, and then at a lower temperature the average velocity for the infinite medium goes to zero, thus indicating the development of the true dispersive regime. Validity of the Einstein relation is discussed for the situation where the diffusivity does exist. We also provide some arguments in favor of the conservation of the major features of the new transition scenario in higher dimensions.
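    A stripped-down illustration of transport in a random energy landscape with an exponential density of states is the classic 1D trap model below; it uses uncorrelated site energies, so it does not reproduce the correlation-driven transition discussed in the paper, but for kT/E0 < 1 the mean squared displacement already grows slower than linearly in time. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def trap_model_walk(n_sites=100000, n_steps=100000, kT=0.7, E0=1.0):
    """1D trap model with an exponential density of states g(E) ~ exp(E/E0), E < 0:
    the particle waits at each site for a random time with mean exp(-E/kT) (deeper trap,
    longer wait), then hops to a random neighbour; returns times and squared displacements."""
    E = E0 * np.log(rng.random(n_sites))              # site energies with exponential DOS, E < 0
    x, t = 0, 0.0
    times, x2 = [], []
    for _ in range(n_steps):
        t += rng.exponential(np.exp(-E[x % n_sites] / kT))
        x += rng.choice((-1, 1))
        times.append(t)
        x2.append(x * x)
    return np.array(times), np.array(x2)
```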

  9. Effects of video-game play on information processing: a meta-analytic investigation.

    PubMed

    Powers, Kasey L; Brooks, Patricia J; Aldrich, Naomi J; Palladino, Melissa A; Alfieri, Louis

    2013-12-01

    Do video games enhance cognitive functioning? We conducted two meta-analyses based on different research designs to investigate how video games impact information-processing skills (auditory processing, executive functions, motor skills, spatial imagery, and visual processing). Quasi-experimental studies (72 studies, 318 comparisons) compare habitual gamers with controls; true experiments (46 studies, 251 comparisons) use commercial video games in training. Using random-effects models, video games led to improved information processing in both the quasi-experimental studies, d = 0.61, 95% CI [0.50, 0.73], and the true experiments, d = 0.48, 95% CI [0.35, 0.60]. Whereas the quasi-experimental studies yielded small to large effect sizes across domains, the true experiments yielded negligible effects for executive functions, which contrasted with the small to medium effect sizes in other domains. The quasi-experimental studies appeared more susceptible to bias than were the true experiments, with larger effects being reported in higher-tier than in lower-tier journals, and larger effects reported by the most active research groups in comparison with other labs. The results are further discussed with respect to other moderators and limitations in the extant literature.
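
    As background on the random-effects models used above, the standard formulation (assumed here; the paper may use a specific variant) pools the study effect sizes d_i with inverse-variance weights that include an estimated between-study variance \tau^2:

        \bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \tau^2},

    where v_i is the within-study sampling variance of d_i.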

  10. Validation of Nimbus-7 temperature-humidity infrared radiometer estimates of cloud type and amount

    NASA Technical Reports Server (NTRS)

    Stowe, L. L.

    1982-01-01

Estimates of clear and low, middle and high cloud amount in fixed geographical regions of approximately (160 km)² are being made routinely from 11.5 micron radiance measurements of the Nimbus-7 Temperature-Humidity Infrared Radiometer (THIR). The purpose of validation is to determine the accuracy of the THIR cloud estimates. Validation requires that a comparison be made between the THIR estimates of cloudiness and the 'true' cloudiness. The validation results reported in this paper use human analysis of concurrent but independent satellite images with surface meteorological and radiosonde observations to approximate the 'true' cloudiness. Regression and error analyses are used to estimate the systematic and random errors of THIR-derived clear amount.

  11. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
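
    A minimal numerical sketch of the equivalent-ensemble idea described above (illustrative only: the function name and the white-noise test signal are assumptions, and the paper's specific variance tests are not reproduced):

        import numpy as np

        def equivalent_ensemble_check(x, n_records):
            """Split a long record x into n_records equal segments (the 'equivalent
            ensemble') and compare ensemble averages, taken across segments at each
            within-segment time index, with the time average of a single segment."""
            L = len(x) // n_records
            ensemble = np.reshape(x[:n_records * L], (n_records, L))  # rows = sample records
            ens_mean = ensemble.mean(axis=0)   # ensemble average vs. within-record time
            ens_var = ensemble.var(axis=0)
            time_mean = ensemble[0].mean()     # time average over one record
            # Weak stationarity is suggested if the ensemble averages are statistically
            # constant in time; ergodicity if they also agree with the time averages.
            return ens_mean, ens_var, time_mean

        rng = np.random.default_rng(0)
        signal = rng.normal(size=200_000)      # stand-in for a turbulent velocity record
        m, v, tm = equivalent_ensemble_check(signal, n_records=50)
        print(m.std(), abs(m.mean() - tm))

    Note that segments of a single realization are statistically independent only if the record is much longer than the signal's correlation time, which is why the original procedure requires "a sufficiently long time history".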

  12. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Marzouk, Youssef; Fast, P.; Kraus, M.

    2006-01-01

Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.

  13. Performance Characteristics of a New LSO PET/CT Scanner With Extended Axial Field-of-View and PSF Reconstruction

    NASA Astrophysics Data System (ADS)

    Jakoby, Bjoern W.; Bercier, Yanic; Watson, Charles C.; Bendriem, Bernard; Townsend, David W.

    2009-06-01

A new combined lutetium oxyorthosilicate (LSO) PET/CT scanner with an extended axial field-of-view (FOV) of 21.8 cm has been developed (Biograph TruePoint PET/CT with TrueV; Siemens Molecular Imaging) and introduced into clinical practice. The scanner includes the recently announced point spread function (PSF) reconstruction algorithm. The PET components incorporate four rings of 48 detector blocks, 5.4 cm × 5.4 cm in cross-section. Each block comprises a 13 × 13 matrix of 4 × 4 × 20 mm³ elements. Data are acquired with a 4.5 ns coincidence time window and an energy window of 425-650 keV. The physical performance of the new scanner has been evaluated according to the recently revised National Electrical Manufacturers Association (NEMA) NU 2-2007 standard and the results have been compared with a previous PET/CT design that incorporates three rings of block detectors with an axial coverage of 16.2 cm (Biograph TruePoint PET/CT; Siemens Molecular Imaging). In addition to the phantom measurements, patient Noise Equivalent Count Rates (NECRs) have been estimated for a range of patients with different body weights (42-154 kg). The average spatial resolution is the same for both scanners: 4.4 mm (FWHM) and 5.0 mm (FWHM) at 1 cm and 10 cm respectively from the center of the transverse FOV. The scatter fractions of the Biograph TruePoint and Biograph TruePoint TrueV are comparable at 32%. Compared to the three ring design, the system sensitivity and peak NECR with smoothed randoms correction (1R) increase by 82% and 73%, respectively. The increase in sensitivity from the extended axial coverage of the Biograph TruePoint PET/CT with TrueV should allow a decrease in either scan time or injected dose without compromising diagnostic image quality. The contrast improvement with the PSF reconstruction potentially offers enhanced detectability for small lesions.
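
    For reference, the noise equivalent count rate quoted above is conventionally defined from the true (T), scattered (S), and random (R) coincidence rates (standard NEMA-style definition, stated here rather than taken from the paper); the randoms term enters with k = 1 for smoothed randoms correction and k = 2 for direct delayed-window subtraction:

        \mathrm{NECR} = \frac{T^2}{T + S + kR}.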

  14. 40 CFR 209.2 - Use of number and gender.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Use of number and gender. 209.2 Section... Hearings for Orders Issued Under Section 11(d) of the Noise Control Act § 209.2 Use of number and gender... masculine gender apply to the feminine and vice versa. ...

  15. A benefit-finding intervention for family caregivers of persons with Alzheimer disease: study protocol of a randomized controlled trial

    PubMed Central

    2012-01-01

Background: Caregivers of relatives with Alzheimer’s disease are highly stressed and at risk for physical and psychiatric conditions. Interventions are usually focused on providing caregivers with knowledge of dementia, skills, and/or support, to help them cope with the stress. This model, though true to a certain extent, ignores how caregiver stress is construed in the first place. Besides burden, caregivers also report rewards, uplifts, and gains, such as a sense of purpose and personal growth. Finding benefits through positive reappraisal may offset the effect of caregiving on caregiver outcomes. Design: Two randomized controlled trials are planned. They are essentially the same except that Trial 1 is a cluster trial (that is, randomization based on groups of participants) whereas in Trial 2, randomization is based on individuals. Participants are randomized into three groups - benefit finding, psychoeducation, and simplified psychoeducation. Participants in each group receive a total of approximately 12 hours of training either in group or individually at home. Booster sessions are provided at around 14 months after the initial treatment. The primary outcomes are caregiver stress (subjective burden, role overload, and cortisol), perceived benefits, subjective health, psychological well-being, and depression. The secondary outcomes are caregiver coping, and behavioral problems and functional impairment of the care-recipient. Outcome measures are obtained at baseline, post-treatment (2 months), and 6, 12, 18 and 30 months. Discussion: The emphasis on benefits, rather than losses and difficulties, provides a new dimension to the way interventions for caregivers can be conceptualized and delivered. By focusing on the positive, caregivers may be empowered to sustain caregiving efforts in the long term despite the day-to-day challenges. The two parallel trials will provide an assessment of whether the effectiveness of the intervention depends on the mode of delivery. Trial registration: Chinese Clinical Trial Registry (http://www.chictr.org/en/) identifier number ChiCTR-TRC-10000881. PMID:22747914

  16. On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai

    2007-01-01

    In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…

  17. Numbers Defy the Law of Large Numbers

    ERIC Educational Resources Information Center

    Falk, Ruma; Lann, Avital Lavie

    2015-01-01

    As the number of independent tosses of a fair coin grows, the rates of heads and tails tend to equality. This is misinterpreted by many students as being true also for the absolute numbers of the two outcomes, which, conversely, depart unboundedly from each other in the process. Eradicating that misconception, as by coin-tossing experiments,…

  18. 45 CFR 213.3 - Use of gender and number.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 2 2014-10-01 2012-10-01 true Use of gender and number. 213.3 Section 213.3... gender and number. As used in this part, words importing the singular number may extend and be applied to several persons or things, and vice versa. Words importing the masculine gender may be applied to females...

  19. 45 CFR 213.3 - Use of gender and number.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 2 2013-10-01 2012-10-01 true Use of gender and number. 213.3 Section 213.3... gender and number. As used in this part, words importing the singular number may extend and be applied to several persons or things, and vice versa. Words importing the masculine gender may be applied to females...

  20. 29 CFR 71.12 - Use and collection of social security numbers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 1 2014-07-01 2013-07-01 true Use and collection of social security numbers. 71.12 Section... UNDER THE PRIVACY ACT OF 1974 General § 71.12 Use and collection of social security numbers. (a) Each component unit that requests an individual to disclose his social security account number shall provide the...

  1. 29 CFR 71.12 - Use and collection of social security numbers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Use and collection of social security numbers. 71.12 Section... UNDER THE PRIVACY ACT OF 1974 General § 71.12 Use and collection of social security numbers. (a) Each component unit that requests an individual to disclose his social security account number shall provide the...

  2. 26 CFR 41.6109-1 - Identifying numbers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 16 2010-04-01 2010-04-01 true Identifying numbers. 41.6109-1 Section 41.6109-1... Application to Tax On Use of Certain Highway Motor Vehicles § 41.6109-1 Identifying numbers. Every person required under § 41.6011(a)-1 to make a return must provide the identifying number required by the...

  3. Using Computer-Generated Random Numbers to Calculate the Lifetime of a Comet.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1991-01-01

An educational technique to calculate the lifetime of a comet using software-generated random numbers is introduced to undergraduate physics and astronomy students. Discussed are the generation and eligibility of the required random numbers, background literature related to the problem, and the solution to the problem using random numbers.…

  4. Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time

    PubMed Central

    Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz

    2011-01-01

    SUMMARY Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732

  5. Protein Kinase Classification with 2866 Hidden Markov Models and One Support Vector Machine

    NASA Technical Reports Server (NTRS)

    Weber, Ryan; New, Michael H.; Fonda, Mark (Technical Monitor)

    2002-01-01

The main application considered in this paper is predicting true kinases from randomly permuted kinases that share the same length and amino acid distributions as the true kinases. Numerous methods already exist for this classification task, such as HMMs, motif-matchers, and sequence comparison algorithms. We build on some of these efforts by creating a vector from the output of thousands of structurally based HMMs, created offline with Pfam-A seed alignments using SAM-T99, which then must be combined into an overall classification for the protein. Then we use a Support Vector Machine for classifying this large ensemble Pfam-Vector, with a polynomial and a chi-squared kernel. In particular, the chi-squared kernel SVM performs better in some respects than the HMMs and the BLAST pairwise comparisons when predicting true from false kinases, but no one algorithm is best for all purposes or in all instances, so we consider the particular strengths and weaknesses of each.

  6. Evaluation and comparison of the ability of online available prediction programs to predict true linear B-cell epitopes.

    PubMed

    Costa, Juan G; Faccendini, Pablo L; Sferco, Silvano J; Lagier, Claudia M; Marcipar, Iván S

    2013-06-01

    This work deals with the use of predictors to identify useful B-cell linear epitopes to develop immunoassays. Experimental techniques to meet this goal are quite expensive and time consuming. Therefore, we tested 5 free, online prediction methods (AAPPred, ABCpred, BcePred, BepiPred and Antigenic) widely used for predicting linear epitopes, using the primary structure of the protein as the only input. We chose a set of 65 experimentally well documented epitopes obtained by the most reliable experimental techniques as our true positive set. To compare the quality of the predictor methods we used their positive predictive value (PPV), i.e. the proportion of the predicted epitopes that are true, experimentally confirmed epitopes, in relation to all the epitopes predicted. We conclude that AAPPred and ABCpred yield the best results as compared with the other programs and with a random prediction procedure. Our results also indicate that considering the consensual epitopes predicted by several programs does not improve the PPV.
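
    The positive predictive value used as the comparison metric above has the usual definition (stated for completeness):

        \mathrm{PPV} = \frac{TP}{TP + FP},

    where TP is the number of predicted epitopes that belong to the experimentally confirmed set and FP the number of predicted epitopes that do not.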

  7. Randomization of grab-sampling strategies for estimating the annual exposure of U miners to Rn daughters.

    PubMed

    Borak, T B

    1986-04-01

    Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground U miners to Rn daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sample frequency of 50 measurements per year is sufficient to satisfy the criteria that the annual exposure be determined in working level months to within +/- 50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
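
    A small simulation sketch of the sampling question above (illustrative assumptions only: lognormal temporal variation of the working-level concentration, 2000 occupied hours per year, and 170 hours per working level month; this is not the paper's mine data):

        import numpy as np

        rng = np.random.default_rng(1)
        hours = 2000                                   # occupied hours in a year (assumed)
        true_wl = rng.lognormal(mean=0.0, sigma=0.7, size=hours)   # WL at each occupied hour
        true_exposure = true_wl.sum() / 170            # annual exposure in WLM (170 h per WLM)

        def estimate(n_samples):
            """Estimate annual exposure from randomly timed grab samples."""
            grab = rng.choice(true_wl, size=n_samples, replace=False)
            return grab.mean() * hours / 170

        trials = np.array([estimate(50) for _ in range(2000)])
        within_50pct = np.mean(np.abs(trials - true_exposure) / true_exposure <= 0.5)
        print(f"fraction of annual estimates within +/-50% of true value: {within_50pct:.3f}")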

  8. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
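
    A sketch of the multiplicative congruential form described above, using a well-known prime modulus and primitive root (the Lehmer/"MINSTD" pair 2^31 - 1 and 16807; the actual Sigma 5 constants are not given in the abstract):

        def lcg(seed, m=2**31 - 1, a=16807):
            """Multiplicative linear congruential generator x_{n+1} = a*x_n mod m,
            with m prime and a a primitive root modulo m (period m - 1)."""
            x = seed
            while True:
                x = (a * x) % m
                yield x / m          # uniform variate in (0, 1)

        gen = lcg(seed=12345)
        print([next(gen) for _ in range(3)])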

  9. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
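
    As a companion to the parameter-selection step mentioned above, here is a small check of the Hull-Dobell conditions for a full-period mixed LCG x_{n+1} = (a*x_n + c) mod m (a textbook criterion, offered as an assumed illustration rather than the RANCYCLE/ARITH procedure itself):

        from math import gcd

        def prime_factors(n):
            """Return the set of prime factors of n."""
            factors, d = set(), 2
            while d * d <= n:
                while n % d == 0:
                    factors.add(d)
                    n //= d
                d += 1
            if n > 1:
                factors.add(n)
            return factors

        def full_period(a, c, m):
            """Hull-Dobell theorem: the mixed LCG has period m for every seed iff
            (1) gcd(c, m) = 1, (2) a - 1 is divisible by every prime factor of m,
            and (3) a - 1 is divisible by 4 whenever m is."""
            return (gcd(c, m) == 1
                    and all((a - 1) % p == 0 for p in prime_factors(m))
                    and ((m % 4 != 0) or ((a - 1) % 4 == 0)))

        print(full_period(a=1664525, c=1013904223, m=2**32))   # a classic 32-bit choice -> True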

  10. 48 CFR 1604.970 - Taxpayer Identification Number.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Taxpayer Identification Number. 1604.970 Section 1604.970 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT FEDERAL EMPLOYEES HEALTH BENEFITS ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Taxpayer...

  11. Disease invasion risk in a growing population.

    PubMed

    Yuan, Sanling; van den Driessche, P; Willeboordse, Frederick H; Shuai, Zhisheng; Ma, Junling

    2016-09-01

    The spread of an infectious disease may depend on the population size. For simplicity, classic epidemic models assume homogeneous mixing, usually standard incidence or mass action. For standard incidence, the contact rate between any pair of individuals is inversely proportional to the population size, and so the basic reproduction number (and thus the initial exponential growth rate of the disease) is independent of the population size. For mass action, this contact rate remains constant, predicting that the basic reproduction number increases linearly with the population size, meaning that disease invasion is easiest when the population is largest. In this paper, we show that neither of these may be true on a slowly evolving contact network: the basic reproduction number of a short epidemic can reach its maximum while the population is still growing. The basic reproduction number is proportional to the spectral radius of a contact matrix, which is shown numerically to be well approximated by the average excess degree of the contact network. We base our analysis on modeling the dynamics of the average excess degree of a random contact network with constant population input, proportional deaths, and preferential attachment for contacts brought in by incoming individuals (i.e., individuals with more contacts attract more incoming contacts). In addition, we show that our result also holds for uniform attachment of incoming contacts (i.e., every individual has the same chance of attracting incoming contacts), and much more general population dynamics. Our results show that a disease spreading in a growing population may evade control if disease control planning is based on the basic reproduction number at maximum population size.
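
    For reference, the average excess degree mentioned above has a standard expression in terms of the degree distribution (a generic network-theory identity, not quoted from the paper): for mean degree \langle k \rangle and second moment \langle k^2 \rangle,

        \langle k_{\mathrm{excess}} \rangle = \frac{\langle k^2 \rangle}{\langle k \rangle} - 1,

    and the abstract's numerical observation is that the spectral radius of the contact matrix, and hence the basic reproduction number, tracks this quantity as the network evolves.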

  12. Reliability of Total Test Scores When Considered as Ordinal Measurements

    ERIC Educational Resources Information Center

    Biswas, Ajoy Kumar

    2006-01-01

    This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…
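
    The classical-type linear model referred to above is, in its standard form (stated for completeness; the article's ordinal measure replaces the variance ratio with a quantity built on Kendall's tau-a),

        X = T + E, \qquad \rho_{XX'} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(T) + \operatorname{Var}(E)},

    with the true score T and the random error E assumed uncorrelated.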

  13. Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.

    2014-01-01

    The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…

  14. The Effects of the ARC Organizational Intervention on Caseworker Turnover, Climate, and Culture in Children's Service Systems

    ERIC Educational Resources Information Center

    Glisson, Charles; Dukes, Denzel; Green, Philip

    2006-01-01

    Objective: This study examines the effects of the Availability, Responsiveness, and Continuity (ARC) organizational intervention strategy on caseworker turnover, climate, and culture in a child welfare and juvenile justice system. Method: Using a pre-post, randomized blocks, true experimental design, 10 urban and 16 rural case management teams…

  15. Simulation Platform for Vision Aided Inertial Navigation

    DTIC Science & Technology

    2014-09-18

    Brown , R. G., & Hwang , P. Y. (1992). Introduction to Random Signals and Applied Kalman Filtering (2nd ed.). New York: John Wiley & Son. Chowdhary, G...Parameters for Various Timing Standards ( Brown & Hwang , 1992...were then calculated using the true PVA information from the ASPN data. Next, a two-state clock from ( Brown & Hwang , 1992) was used to model the

  16. A Wilderness Adventure Program as an Alternative for Juvenile Probationers: An Evaluation.

    ERIC Educational Resources Information Center

    Winterdyk, John Albert

    A true experimental design with 60 male probationers, ages 13-16, was used to evaluate the viability of an Ontario-based 21-day wilderness adventure program as an alternative for adjudicated juveniles placed on probation. Participants were randomly assigned to a control group and an experimental group. The experimental group was subdivided into 3…

  17. Modeling Signal-Noise Processes Supports Student Construction of a Hierarchical Image of Sample

    ERIC Educational Resources Information Center

    Lehrer, Richard

    2017-01-01

    Grade 6 (modal age 11) students invented and revised models of the variability generated as each measured the perimeter of a table in their classroom. To construct models, students represented variability as a linear composite of true measure (signal) and multiple sources of random error. Students revised models by developing sampling…

  18. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  19. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution--especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.

  20. Redefining the Practice of Peer Review Through Intelligent Automation Part 2: Data-Driven Peer Review Selection and Assignment.

    PubMed

    Reiner, Bruce I

    2017-12-01

    In conventional radiology peer review practice, a small number of exams (routinely 5% of the total volume) is randomly selected, which may significantly underestimate the true error rate within a given radiology practice. An alternative and preferable approach would be to create a data-driven model which mathematically quantifies a peer review risk score for each individual exam and uses this data to identify high risk exams and readers, and selectively target these exams for peer review. An analogous model can also be created to assist in the assignment of these peer review cases in keeping with specific priorities of the service provider. An additional option to enhance the peer review process would be to assign the peer review cases in a truly blinded fashion. In addition to eliminating traditional peer review bias, this approach has the potential to better define exam-specific standard of care, particularly when multiple readers participate in the peer review process.

  1. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.

  2. The effects of divided attention at study and test on false recognition: a comparison of DRM and categorized lists.

    PubMed

    Knott, Lauren M; Dewhurst, Stephen A

    2007-12-01

    Three experiments investigated the effects of divided attention at encoding and retrieval on false recognition. In Experiment 1, participants studied word lists in either full or divided attention (random number generation) conditions and then took part in a recognition test with full attention. In Experiment 2, after studying word lists with full attention, participants carried out a recognition test with either full or divided attention. Experiment 3 manipulated attention at both study and test. We also compared Deese/Roediger-McDermott (DRM) and categorized lists, due to recent claims regarding the locus of false memories produced by such lists (Smith, Gerkens, Pierce, & Choi, 2002). With both list types, false "remember" responses were reduced by divided attention at encoding and increased by divided attention at retrieval. The findings suggest that the production of false memories occurs as a result of the generation of associates at encoding and failures of source monitoring retrieval. Crucially, this is true for both DRM and categorized lists.

  3. Alternative Measures of Between-Study Heterogeneity in Meta-Analysis: Reducing the Impact of Outlying Studies

    PubMed Central

    Lin, Lifeng; Chu, Haitao; Hodges, James S.

    2016-01-01

    Summary Meta-analysis has become a widely used tool to combine results from independent studies. The collected studies are homogeneous if they share a common underlying true effect size; otherwise, they are heterogeneous. A fixed-effect model is customarily used when the studies are deemed homogeneous, while a random-effects model is used for heterogeneous studies. Assessing heterogeneity in meta-analysis is critical for model selection and decision making. Ideally, if heterogeneity is present, it should permeate the entire collection of studies, instead of being limited to a small number of outlying studies. Outliers can have great impact on conventional measures of heterogeneity and the conclusions of a meta-analysis. However, no widely accepted guidelines exist for handling outliers. This article proposes several new heterogeneity measures. In the presence of outliers, the proposed measures are less affected than the conventional ones. The performance of the proposed and conventional heterogeneity measures are compared theoretically, by studying their asymptotic properties, and empirically, using simulations and case studies. PMID:27167143
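
    The conventional heterogeneity measures alluded to above typically include Cochran's Q and Higgins' I² (standard definitions given as background; the paper's proposed alternatives differ): with study effects y_i, inverse-variance weights w_i = 1/v_i, and k studies,

        Q = \sum_{i=1}^{k} w_i (y_i - \bar{y})^2, \qquad I^2 = \max\!\left(0, \frac{Q - (k-1)}{Q}\right),

    both of which can be dominated by a single outlying study, which motivates the alternative measures proposed.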

  4. Exploring the effect of the spatial scale of fishery management.

    PubMed

    Takashina, Nao; Baskett, Marissa L

    2016-02-07

    For any spatially explicit management, determining the appropriate spatial scale of management decisions is critical to success at achieving a given management goal. Specifically, managers must decide how much to subdivide a given managed region: from implementing a uniform approach across the region to considering a unique approach in each of one hundred patches and everything in between. Spatially explicit approaches, such as the implementation of marine spatial planning and marine reserves, are increasingly used in fishery management. Using a spatially explicit bioeconomic model, we quantify how the management scale affects optimal fishery profit, biomass, fishery effort, and the fraction of habitat in marine reserves. We find that, if habitats are randomly distributed, the fishery profit increases almost linearly with the number of segments. However, if habitats are positively autocorrelated, then the fishery profit increases with diminishing returns. Therefore, the true optimum in management scale given cost to subdivision depends on the habitat distribution pattern. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. 40 CFR 211.103 - Number and gender.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Number and gender. 211.103 Section 211... PRODUCT NOISE LABELING General Provisions § 211.103 Number and gender. In this part, words in the singular will be understood to include the plural, and words in the masculine gender will be understood to...

  6. 40 CFR 204.3 - Number and gender.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Number and gender. 204.3 Section 204.3... STANDARDS FOR CONSTRUCTION EQUIPMENT General Provisions § 204.3 Number and gender. As used in this part, words in the singular shall be deemed to import the plural, and words in the masculine gender shall be...

  7. 40 CFR 205.3 - Number and gender.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Number and gender. 205.3 Section 205.3... EQUIPMENT NOISE EMISSION CONTROLS General Provisions § 205.3 Number and gender. As used in this part, words in the singular shall be deemed to import the plural, and words in the masculine gender shall be...

  8. 48 CFR 1552.224-70 - Social security numbers of consultants and certain sole proprietors and Privacy Act statement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Social security numbers of... Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY CLAUSES AND FORMS SOLICITATION PROVISIONS AND CONTRACT CLAUSES Texts of Provisions and Clauses 1552.224-70 Social security numbers of consultants and...

  9. 7 CFR 1940.350 - Office of Management and Budget (OMB) control number.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 13 2010-01-01 2009-01-01 true Office of Management and Budget (OMB) control number....350 Office of Management and Budget (OMB) control number. The collection of information requirements in this regulation has been approved by the Office of Management and Budget and has been assigned OMB...

  10. Cosmic R-string, R-tube and vacuum instability

    NASA Astrophysics Data System (ADS)

    Eto, Minoru; Hamada, Yuta; Kamada, Kohei; Kobayashi, Tatsuo; Ohashi, Keisuke; Ookouchi, Yutaka

    2013-03-01

We show that a cosmic string associated with spontaneous U(1) R symmetry breaking gives a constraint for supersymmetric model building. In some models, the string can be viewed as a tube-like domain wall with a winding number interpolating between a false vacuum and a true vacuum. Such a string causes inhomogeneous decay of the false vacuum to the true vacuum via rapid expansion of the radius of the tube, and hence its formation would be inconsistent with the present Universe. However, we demonstrate that there exist metastable solutions which do not expand rapidly. Furthermore, when the true vacua are degenerate, the structure inside the tube becomes more involved. As an example, we show a "bamboo"-like solution, which suggests the possibility of observing information about the true vacua from outside the tube through the shape and tension of the tube.

  11. Secure uniform random-number extraction via incoherent strategies

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Zhu, Huangjun

    2018-01-01

    To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by the third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of the measurement on the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.
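
    For readers outside the coherence-resource literature, the relative entropy of coherence of a state \rho with respect to the computational basis has the standard closed form (a known identity, quoted as background):

        C_r(\rho) = S(\Delta(\rho)) - S(\rho), \qquad \Delta(\rho) = \sum_i |i\rangle\langle i| \, \rho \, |i\rangle\langle i|,

    where S is the von Neumann entropy and \Delta denotes full dephasing in the computational basis.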

  12. Random numbers from vacuum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
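
    A minimal sketch of LFSR-style post-processing of raw digitized samples (illustrative only: a 16-bit Fibonacci LFSR whose output is XORed with the raw bit stream as a simple whitening step; the paper's actual extraction parameters are not reproduced, and this toy version is not an information-theoretically secure extractor):

        def lfsr_bits(state=0xACE1):
            """16-bit Fibonacci LFSR with feedback polynomial
            x^16 + x^14 + x^13 + x^11 + 1 (maximal length, period 2^16 - 1)."""
            while True:
                bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                yield bit

        def whiten(raw_bits, seed=0xACE1):
            """XOR raw (possibly biased) bits with the LFSR stream."""
            gen = lfsr_bits(seed)
            return [b ^ next(gen) for b in raw_bits]

        print(whiten([1, 1, 1, 0, 1, 1, 0, 1]))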

  13. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  14. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
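
    As generic background on randomness extraction (a standard seeded construction offered as an assumed illustration; the paper's extractor for big sources is a different, provably stronger construction): view the weak source as a bit vector x and multiply it by a random binary matrix over GF(2), a 2-universal hash family.

        import numpy as np

        def seeded_extract(source_bits, out_len, seed):
            """Toy seeded extractor: y = M x over GF(2), with M an out_len x n
            binary matrix derived from the seed."""
            rng = np.random.default_rng(seed)
            x = np.asarray(source_bits, dtype=np.int64)
            M = rng.integers(0, 2, size=(out_len, x.size))
            return (M @ x) % 2

        weak = np.random.default_rng(7).integers(0, 2, size=4096)  # stand-in for a weak source
        print(seeded_extract(weak, out_len=128, seed=42)[:16])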

  15. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  16. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  17. Using electronic data to predict the probability of true bacteremia from positive blood cultures.

    PubMed

    Wang, S J; Kuperman, G J; Ohno-Machado, L; Onderdonk, A; Sandige, H; Bates, D W

    2000-01-01

    As part of a project to help physicians make more appropriate treatment decisions, we implemented a clinical prediction rule that computes the probability of true bacteremia for positive blood cultures and displays this information when culture results are viewed online. Prior to implementing the rule, we performed a revalidation study to verify the accuracy of the previously published logistic regression model. We randomly selected 114 cases of positive blood cultures from a recent one-year period and performed a paper chart review with the help of infectious disease experts to determine whether the cultures were true positives or contaminants. Based on the results of this revalidation study, we updated the probabilities reported by the model and made additional enhancements to improve the accuracy of the rule. Next, we implemented the rule into our hospital's laboratory computer system so that the probability information was displayed with all positive blood culture results. We displayed the prediction rule information on approximately half of the 2184 positive blood cultures at our hospital that were randomly selected during a 6-month period. During the study, we surveyed 54 housestaff to obtain their opinions about the usefulness of this intervention. Fifty percent (27/54) indicated that the information had influenced their belief of the probability of bacteremia in their patients, and in 28% (15/54) of cases it changed their treatment decision. Almost all (98% (53/54)) indicated that they wanted to continue receiving this information. We conclude that the probability information provided by this clinical prediction rule is considered useful to physicians when making treatment decisions.
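
    The logistic regression model behind such a rule computes the probability of true bacteremia from patient and culture covariates x_1, ..., x_p in the usual way (the published rule's specific covariates and coefficients are not reproduced here):

        p(\text{true bacteremia}) = \frac{1}{1 + \exp\left(-\beta_0 - \sum_{j=1}^{p} \beta_j x_j\right)}.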

  18. Comparison of Time-to-First Event and Recurrent Event Methods in Randomized Clinical Trials.

    PubMed

    Claggett, Brian; Pocock, Stuart; Wei, L J; Pfeffer, Marc A; McMurray, John J V; Solomon, Scott D

    2018-03-27

Background: Most Phase-3 trials feature time-to-first event endpoints for their primary and/or secondary analyses. In chronic diseases where a clinical event can occur more than once, recurrent-event methods have been proposed to more fully capture disease burden and have been assumed to improve statistical precision and power compared to conventional "time-to-first" methods. Methods: To better characterize factors that influence statistical properties of recurrent-events and time-to-first methods in the evaluation of randomized therapy, we repeatedly simulated trials with 1:1 randomization of 4000 patients to active vs control therapy, with true patient-level risk reduction of 20% (i.e. RR=0.80). For patients who discontinued active therapy after a first event, we assumed their risk reverted subsequently to their original placebo-level risk. Through simulation, we varied a) the degree of between-patient heterogeneity of risk and b) the extent of treatment discontinuation. Findings were compared with those from actual randomized clinical trials. Results: As the degree of between-patient heterogeneity of risk was increased, both time-to-first and recurrent-events methods lost statistical power to detect a true risk reduction and confidence intervals widened. The recurrent-events analyses continued to estimate the true RR=0.80 as heterogeneity increased, while the Cox model produced estimates that were attenuated. The power of recurrent-events methods declined as the rate of study drug discontinuation post-event increased. Recurrent-events methods provided greater power than time-to-first methods in scenarios where drug discontinuation was ≤30% following a first event, lesser power with drug discontinuation rates of ≥60%, and comparable power otherwise. We confirmed in several actual trials in chronic heart failure that treatment effect estimates were attenuated when estimated via the Cox model and that increased statistical power from recurrent-events methods was most pronounced in trials with lower treatment discontinuation rates. Conclusions: We find that the statistical power of both recurrent-events and time-to-first methods is reduced by increasing heterogeneity of patient risk, a parameter not included in conventional power and sample size formulas. Data from real clinical trials are consistent with simulation studies, confirming that the greatest statistical gains from use of recurrent-events methods occur in the presence of high patient heterogeneity and low rates of study drug discontinuation.
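
    A condensed sketch of the kind of simulation described above (assumed simplifications: gamma-distributed patient frailty for between-patient heterogeneity, Poisson event counts over a fixed follow-up, and no treatment discontinuation). It illustrates why the mean event-rate ratio stays near the true RR while the any-event (first-event) risk ratio attenuates as heterogeneity grows:

        import numpy as np

        rng = np.random.default_rng(3)
        n, rr, followup = 4000, 0.80, 2.0          # patients, true rate ratio, years

        def one_trial(heterogeneity):
            """Simulate one 1:1 trial; return mean event counts and any-event risks per arm."""
            frailty = (rng.gamma(shape=1 / heterogeneity, scale=heterogeneity, size=n)
                       if heterogeneity > 0 else np.ones(n))   # mean 1, variance = heterogeneity
            base_rate = 0.5 * frailty                           # events per patient-year
            arm = rng.integers(0, 2, size=n)                    # 1 = active therapy
            rate = base_rate * np.where(arm == 1, rr, 1.0)
            events = rng.poisson(rate * followup)
            recurrent = (events[arm == 1].mean(), events[arm == 0].mean())
            first = ((events[arm == 1] > 0).mean(), (events[arm == 0] > 0).mean())
            return recurrent, first

        for h in (0.0, 0.5, 2.0):                               # increasing patient heterogeneity
            rec, first = one_trial(h)
            print(f"heterogeneity={h}: event-rate ratio={rec[0] / rec[1]:.2f}, "
                  f"any-event risk ratio={first[0] / first[1]:.2f}")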

  19. Analysis of Uniform Random Numbers Generated by Randu and Urn Ten Different Seeds.

    DTIC Science & Technology

The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds are... The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds. (Author)

  20. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
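
    As a sketch of the simplest test in the NIST SP 800-22 suite mentioned above, the frequency (monobit) test gives a sense of what "passing" means (standard formula; the full suite and TestU01 parameters are not reproduced):

        import math
        import random

        def monobit_pvalue(bits):
            """NIST SP 800-22 frequency (monobit) test: map bits to +/-1, sum,
            normalize by sqrt(n), and compute the two-sided p-value via erfc."""
            n = len(bits)
            s = sum(2 * b - 1 for b in bits)
            s_obs = abs(s) / math.sqrt(n)
            return math.erfc(s_obs / math.sqrt(2))   # p >= 0.01 is the usual pass criterion

        bits = [random.getrandbits(1) for _ in range(1_000_000)]
        print(monobit_pvalue(bits))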

  1. 27 CFR 479.35 - Employer identification number.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2012-04-01 2010-04-01 true Employer identification number. 479.35 Section 479.35 Alcohol, Tobacco Products, and Firearms BUREAU OF ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES...

  2. 27 CFR 479.35 - Employer identification number.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2011-04-01 2010-04-01 true Employer identification number. 479.35 Section 479.35 Alcohol, Tobacco Products, and Firearms BUREAU OF ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES...

  3. Micropropagation and assessment of genetic fidelity of Henckelia incana: an endemic and medicinal Gesneriad of South India.

    PubMed

    Prameela, J; Ramakrishnaiah, H; Krishna, V; Deepalakshmi, A P; Naveen Kumar, N; Radhika, R N

    2015-07-01

Henckelia incana is an endemic medicinal plant used for the treatment of fever and skin allergy. In the present study shoot regeneration was evaluated on Murashige and Skoog's (MS) medium supplemented with auxins, Indole-3-acetic acid (IAA), Indole-3-butyric acid (IBA), 1-Naphthaleneacetic acid (NAA), 2,4-Dichlorophenoxyacetic acid (2,4-D) and cytokinins, 6-Benzylaminopurine (BAP) and Kinetin (Kn) at concentrations of 0.5, 1.0, 2.0, 3.0, 4.0 and 5.0 mg l⁻¹. MS medium with IBA (18.08), NAA (17.83) and IAA (17.58) at 0.5 mg l⁻¹ concentrations showed efficient regeneration. Regenerated shoots were rooted on half-strength MS medium with and without 0.5 mg l⁻¹ IBA or NAA. The plantlets were successfully hardened in rooting trays (peat, vermiculite and sand) and transferred to the field milieu. The genetic fidelity of in vitro raised plants was assessed by using three different single primer amplification reaction (SPAR) markers, namely random amplified polymorphic DNA (RAPD), inter-simple sequence repeat (ISSR) and direct amplification of mini-satellite DNA region (DAMD). The results consistently demonstrated true-to-true type propagation. This is the first report of in vitro propagation and establishment of true-to-true type genetic fidelity in H. incana.

  4. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.

  5. Quantum random number generation for loophole-free Bell tests

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan; Abellan, Carlos; Amaya, Waldimar

    2015-05-01

    We describe the generation of quantum random numbers at multi-Gbps rates, combined with real-time randomness extraction, to give very high purity random numbers based on quantum events at most tens of ns in the past. The system satisfies the stringent requirements of quantum non-locality tests that aim to close the timing loophole. We describe the generation mechanism using spontaneous-emission-driven phase diffusion in a semiconductor laser, digitization, and extraction by parity calculation using multi-GHz logic chips. We pay special attention to experimental proof of the quality of the random numbers and analysis of the randomness extraction. In contrast to widely-used models of randomness generators in the computer science literature, we argue that randomness generation by spontaneous emission can be extracted from a single source.
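
    A toy illustration of extraction by parity calculation as described above (assumed form: each output bit is the XOR of the least significant bits of k consecutive digitized samples; the actual system computes parities in multi-GHz logic rather than software):

        from functools import reduce

        def parity_extract(samples, k=8):
            """Collapse every k raw digitizer samples into one output bit by XORing
            their least significant bits; the parity of many weakly random bits is
            much closer to unbiased than any single bit."""
            lsbs = [s & 1 for s in samples]
            return [reduce(lambda a, b: a ^ b, lsbs[i:i + k])
                    for i in range(0, len(lsbs) - k + 1, k)]

        print(parity_extract([231, 18, 77, 96, 250, 3, 140, 55, 12, 9, 4, 200, 7, 88, 41, 166]))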

  6. The Effect of Map Boundary on Estimates of Landscape Resistance to Animal Movement

    PubMed Central

    Koen, Erin L.; Garroway, Colin J.; Wilson, Paul J.; Bowman, Jeff

    2010-01-01

Background: Artificial boundaries on a map occur when the map extent does not cover the entire area of study; edges on the map do not exist on the ground. These artificial boundaries might bias the results of animal dispersal models by creating artificial barriers to movement for model organisms where there are no barriers for real organisms. Here, we characterize the effects of artificial boundaries on calculations of landscape resistance to movement using circuit theory. We then propose and test a solution to artificially inflated resistance values whereby we place a buffer around the artificial boundary as a substitute for the true, but unknown, habitat. Methodology/Principal Findings: We randomly assigned landscape resistance values to map cells in the buffer in proportion to their occurrence in the known map area. We used circuit theory to estimate landscape resistance to organism movement and gene flow, and compared the output across several scenarios: a habitat-quality map with artificial boundaries and no buffer, a map with a buffer composed of randomized habitat quality data, and a map with a buffer composed of the true habitat quality data. We tested the sensitivity of the randomized buffer to the possibility that the composition of the real but unknown buffer is biased toward high or low quality. We found that artificial boundaries result in an overestimate of landscape resistance. Conclusions/Significance: Artificial map boundaries overestimate resistance values. We recommend the use of a buffer composed of randomized habitat data as a solution to this problem. We found that resistance estimated using the randomized buffer did not differ from estimates using the real data, even when the composition of the real data was varied. Our results may be relevant to those interested in employing Circuitscape software in landscape connectivity and landscape genetics studies. PMID:20668690
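
    A compact sketch of the buffering idea described above (illustrative implementation with assumed raster shapes and resistance classes; buffer cells are drawn in proportion to the frequency of resistance values inside the mapped area):

        import numpy as np

        def add_randomized_buffer(resistance, width, seed=0):
            """Pad a resistance raster with a buffer whose cells are sampled, with
            replacement, from the observed resistance values in proportion to their
            occurrence inside the known map extent."""
            rng = np.random.default_rng(seed)
            values, counts = np.unique(resistance, return_counts=True)
            probs = counts / counts.sum()
            padded = np.pad(resistance.astype(float), width, mode="constant")
            mask = np.ones_like(padded, dtype=bool)
            mask[width:-width, width:-width] = False          # True only in the buffer ring
            padded[mask] = rng.choice(values, size=mask.sum(), p=probs)
            return padded

        habitat = np.random.default_rng(1).choice([1, 5, 10], size=(50, 50), p=[0.6, 0.3, 0.1])
        print(add_randomized_buffer(habitat, width=10).shape)   # (70, 70)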

  7. Comparison of Effects of Teaching English to Thai Undergraduate Teacher-Students through Cross-Curricular Thematic Instruction Program Based on Multiple Intelligence Theory and Conventional Instruction

    ERIC Educational Resources Information Center

    Rattanavich, Saowalak

    2013-01-01

    This study is aimed at comparing the effects of teaching English to Thai undergraduate teacher-students through a cross-curricular thematic instruction program based on multiple intelligence theory and through conventional instruction. Two experimental groups, which utilized a Randomized True Control Group-Pretest-Posttest Time Series Design and…

  8. Passive MIMO Radar Detection

    DTIC Science & Technology

    2013-09-01

    investigated using recent results from random matrix theory. Equivalence is established between PMR networks without direct-path signals and passive...approach ignores a potentially useful source of information about the unknown transmit signals. This is particularly true in high-DNR scenarios, in...which the direct-path signal provides a high-quality reference that can be used for (noisy) matched filtering, as in the conventional approach. Thus

  9. From Nonclinical Research to Clinical Trials and Patient-registries: Challenges and Opportunities in Biomedical Research

    PubMed Central

    de la Torre Hernández, José M.; Edelman, Elazer R.

    2018-01-01

    The most important challenge faced by human beings is health. The only way to provide better solutions for health care is innovation, true innovation. The only source of true innovation is research, good research indeed. The pathway from a basic science study to a randomized clinical trial is long and not free of bumps and even landmines. These are all the obstacles and barriers that limit the availability of resources, entangle administrative-regulatory processes, and restrain investigators’ initiatives. There is increasing demand for evidence to guide clinical practice but, paradoxically, biomedical research has become increasingly complex, expensive, and difficult to integrate into clinical care with increased barriers to performing the practical aspects of investigation. We face the challenge of increasing the volume of biomedical research and simultaneously improving the efficiency and output of this research. In this article, we review the main stages and methods of biomedical research, from nonclinical studies with animal and computational models to randomized trials and clinical registries, focusing on their limitations and challenges, but also providing alternative solutions to overcome them. Fortunately, challenges are always opportunities in disguise. PMID:28838647

  10. 22 CFR 308.7 - Use of social security account number in records systems. [Reserved

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Use of social security account number in records systems. [Reserved] 308.7 Section 308.7 Foreign Relations PEACE CORPS IMPLEMENTATION OF THE PRIVACY ACT OF 1974 § 308.7 Use of social security account number in records systems. [Reserved] ...

  11. Fractions: The New Frontier for Theories of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Fazio, Lisa K.; Bailey, Drew H.; Zhou, Xinlin

    2013-01-01

    Recent research on fractions has broadened and deepened theories of numerical development. Learning about fractions requires children to recognize that many properties of whole numbers are not true of numbers in general and also to recognize that the one property that unites all real numbers is that they possess magnitudes that can be ordered on…

  12. 26 CFR 1.4-1 - Number of exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Number of exemptions. 1.4-1 Section 1.4-1... and Surtaxes § 1.4-1 Number of exemptions. (a) For the purpose of determining the optional tax imposed... the taxpayer begins is less than the applicable amount determined pursuant to § 1.151-2. No exemption...

  13. 22 CFR 308.7 - Use of social security account number in records systems. [Reserved

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Use of social security account number in records systems. [Reserved] 308.7 Section 308.7 Foreign Relations PEACE CORPS IMPLEMENTATION OF THE PRIVACY ACT OF 1974 § 308.7 Use of social security account number in records systems. [Reserved] ...

  14. 22 CFR 308.7 - Use of social security account number in records systems. [Reserved

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Use of social security account number in records systems. [Reserved] 308.7 Section 308.7 Foreign Relations PEACE CORPS IMPLEMENTATION OF THE PRIVACY ACT OF 1974 § 308.7 Use of social security account number in records systems. [Reserved] ...

  15. 22 CFR 308.7 - Use of social security account number in records systems. [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Use of social security account number in records systems. [Reserved] 308.7 Section 308.7 Foreign Relations PEACE CORPS IMPLEMENTATION OF THE PRIVACY ACT OF 1974 § 308.7 Use of social security account number in records systems. [Reserved] ...

  16. Translations on North Korea, Number 545

    DTIC Science & Technology

    1977-08-23

    States is plotted on this strategic coordinates of it [as received]. U.S. imperialism disguises its true colour with the mask of "protector" and...true colour of the aggressor and splittist. The article stressed: The United States does not today and will not tomorrow either want the...as a favourite maxim. Everywhere on the three continents—the Middle East area and western Sahara, Angola and the Congo plains, Southeast Asia and

  17. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
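
    A quantity often reported for generators of this kind is the min-entropy of the per-jot photon-count distribution, which bounds how many nearly uniform bits can be extracted per measurement. The sketch below assumes simple Poisson statistics and an illustrative mean count; neither value is taken from the paper.

        import numpy as np
        from scipy.stats import poisson

        lam = 1.0                      # assumed mean photons per jot per exposure
        k = np.arange(0, 50)
        pmf = poisson.pmf(k, lam)

        # Min-entropy of the raw count distribution: -log2(max probability)
        h_min_counts = -np.log2(pmf.max())

        # If each jot is binarised as "0 photons" vs ">=1 photon":
        p0 = poisson.pmf(0, lam)
        h_min_bit = -np.log2(max(p0, 1 - p0))

        print(f"min-entropy per count value: {h_min_counts:.3f} bits")
        print(f"min-entropy per binarised bit: {h_min_bit:.3f} bits")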

  18. Why we should use simpler models if the data allow this: relevance for ANOVA designs in experimental biology.

    PubMed

    Lazic, Stanley E

    2008-07-21

    Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables, such as dose, time, or age, can be treated as a continuous, rather than a categorical, variable during analysis - even if subjects were randomly assigned to treatment groups. While this is not common practice, such an approach has a number of advantages, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and the possibility of transforming the predictor variable. An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
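
    The contrast described above is easy to reproduce with made-up data standing in for the fluoxetine experiment: the same dose variable is analysed once as a categorical factor (one-way ANOVA) and once as a continuous predictor. The simulated effect size and noise level are arbitrary; statsmodels is assumed to be available.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        dose = np.repeat([0, 60, 180, 240], 10)                        # mg/L, 10 rats per group
        immobility = 100 - 0.05 * dose + rng.normal(0, 12, dose.size)  # fake outcome
        df = pd.DataFrame({"dose": dose, "immobility": immobility})

        cat = smf.ols("immobility ~ C(dose)", data=df).fit()    # dose as categorical
        lin = smf.ols("immobility ~ dose", data=df).fit()       # dose as continuous

        print("categorical (3 df):", cat.f_pvalue)
        print("continuous  (1 df):", lin.f_pvalue)
        # The 1-df linear trend test usually gives a smaller p-value when the
        # dose-response is roughly monotonic, mirroring the 0.021 vs 0.159
        # contrast reported in the abstract.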

  19. 7 CFR 1956.150 - OMB control number.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 14 2010-01-01 2009-01-01 true OMB control number. 1956.150 Section 1956.150 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS...) PROGRAM REGULATIONS (CONTINUED) DEBT SETTLEMENT Debt Settlement-Community and Business Programs § 1956.150...

  20. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.
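
    A software analogue of drawing random numbers directly in a programmed distribution is to shape the probability of registering a photon across the measurement window so that the recorded arrival times already follow the target law, with no downstream transformation. The temporal profile, window, and names below are assumptions for illustration, not the paper's actual modulator settings.

        import numpy as np

        rng = np.random.default_rng(4)

        def programmed_arrivals(n_raw, profile, profile_max):
            """Photons arrive uniformly over the window [0, 1); an (assumed)
            modulator transmits each with probability profile(t) / profile_max,
            so surviving arrival times follow the programmed distribution."""
            t = rng.random(n_raw)
            keep = rng.random(n_raw) < profile(t) / profile_max
            return t[keep]

        target = lambda t: np.exp(-3.0 * t)   # example target shape on [0, 1)
        samples = programmed_arrivals(200_000, target, profile_max=1.0)
        print(len(samples), samples.mean())   # mean of the truncated exponential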

  1. Dynamic Modelling for Planar Extensible Continuum Robot Manipulators

    DTIC Science & Technology

    2006-01-01

    ...octopus arm [18]. The OCTARM, shown in Figure 1, is a three-section robot with nine degrees of freedom. Aside from two axis bending with constant... octopus arm. However, while allowing extensibility, the model is based on an approximation (by a finite number of linear models) to the true continuum

  2. A hybrid-type quantum random number generator

    NASA Astrophysics Data System (ADS)

    Hai-Qiang, Ma; Wu, Zhu; Ke-Jin, Wei; Rui-Xue, Li; Hong-Wei, Liu

    2016-05-01

    This paper proposes a well-performing hybrid-type truly quantum random number generator based on the time interval between two independent single-photon detection signals, which is practical and intuitive, and generates the initial random number sources from a combination of multiple existing random number sources. A time-to-amplitude converter and multichannel analyzer are used for qualitative analysis to demonstrate that each and every step is random. Furthermore, a carefully designed data acquisition system is used to obtain a high-quality random sequence. Our scheme is simple and proves that the random number bit rate can be dramatically increased to satisfy practical requirements. Project supported by the National Natural Science Foundation of China (Grant Nos. 61178010 and 11374042), the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China, and the Fundamental Research Funds for the Central Universities of China (Grant No. bupt2014TS01).
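
    A generic way to turn detection-time intervals such as these into bits (not necessarily the exact scheme of the paper) is to compare two successive intervals and emit 0 or 1 according to which is longer, discarding ties; for independent, identically distributed intervals the output is unbiased whatever the detection rate.

        import numpy as np

        rng = np.random.default_rng(5)

        # Simulated single-photon detection: exponential waiting times (Poissonian source)
        intervals = rng.exponential(scale=1.0, size=1_000_000)

        t1, t2 = intervals[0::2], intervals[1::2]
        mask = t1 != t2                       # discard (vanishingly rare) ties
        bits = (t1[mask] > t2[mask]).astype(np.uint8)
        print(bits.mean())                    # ~0.5 by symmetry of i.i.d. intervals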

  3. Deception in Program Evaluation Design

    DTIC Science & Technology

    2014-10-31

    AUTHOR(S): Scott Cheney-Peters. ...stakeholders interested in assessments as a true reflection of a program's state have a variety of methods at hand to mitigate their impacts. Even in...Cites attempts to manipulate the reception and understanding of findings on climate research and intelligence reports...whether determining

  4. Doing better by getting worse: posthypnotic amnesia improves random number generation.

    PubMed

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible, low dissociative highly suggestible, and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation.

  5. Doing Better by Getting Worse: Posthypnotic Amnesia Improves Random Number Generation

    PubMed Central

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible, low dissociative highly suggestible, and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation. PMID:22195022

  6. Quantum random number generator

    DOEpatents

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

  7. Competing risk bias was common in Kaplan-Meier risk estimates published in prominent medical journals.

    PubMed

    van Walraven, Carl; McAlister, Finlay A

    2016-01-01

    Risk estimates from Kaplan-Meier curves are well known to medical researchers, reviewers, and editors. In this study, we determined the proportion of Kaplan-Meier analyses published in prominent medical journals that are potentially biased because of competing events ("competing risk bias"). We randomly selected 100 studies that had at least one Kaplan-Meier analysis and were recently published in prominent medical journals. Susceptibility to competing risk bias was determined by examining the outcome and potential competing events. In susceptible studies, bias was quantified using a previously validated prediction model when the number of outcomes and competing events were given. Forty-six studies (46%) contained Kaplan-Meier analyses susceptible to competing risk bias. Sixteen studies (34.8%) susceptible to competing risk cited the number of outcomes and competing events; in six of these studies (6/16, 37.5%), the outcome risk from the Kaplan-Meier estimate (relative to the true risk) was biased upward by 10% or more. Almost half of Kaplan-Meier analyses published in medical journals are susceptible to competing risk bias and may overestimate event risk. This bias was found to be quantitatively important in a third of such studies. Copyright © 2016 Elsevier Inc. All rights reserved.
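
    The bias in question is straightforward to reproduce: when competing events are censored in a Kaplan-Meier analysis, one minus the Kaplan-Meier survivor function overstates the event risk relative to the cumulative incidence function. The simulation below uses made-up constant hazards and is only an illustration of the mechanism, not a re-analysis of the reviewed studies.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 20_000
        a, b = 0.05, 0.05                      # hazards: event of interest, competing event
        t_event = rng.exponential(1 / a, n)
        t_comp = rng.exponential(1 / b, n)
        time = np.minimum(t_event, t_comp)
        cause1 = t_event < t_comp              # True if the event of interest occurred first

        def km_risk(time, event, horizon):
            """1 - Kaplan-Meier survivor at `horizon`, treating non-events as censored."""
            order = np.argsort(time)
            time, event = time[order], event[order]
            surv, at_risk = 1.0, len(time)
            for t, e in zip(time, event):
                if t > horizon:
                    break
                if e:
                    surv *= 1.0 - 1.0 / at_risk
                at_risk -= 1
            return 1.0 - surv

        horizon = 10.0
        naive = km_risk(time, cause1, horizon)               # competing events censored
        cif = np.mean((time <= horizon) & cause1)            # empirical cumulative incidence
        print(f"1 - KM: {naive:.3f}   cumulative incidence: {cif:.3f}")
        # With these hazards the naive KM risk (~0.39) overstates the actual
        # cumulative incidence (~0.32), i.e. an upward bias of roughly 25%.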

  8. Variations in Carboxyhaemoglobin Levels in Smokers

    PubMed Central

    Castleden, C. M.; Cole, P. V.

    1974-01-01

    Three experiments on smokers have been performed to determine variations in blood levels of carboxyhaemoglobin (COHb) throughout the day and night and whether a random measurement of COHb gives a true estimation of a smoker's mean COHb level. In the individual smoker the COHb level does not increase gradually during the day but is kept within relatively narrow limits. Moderately heavy smokers rise in the morning with a substantially raised COHb level because the half life of COHb is significantly longer during sleep than during the day. Women excrete their carbon monoxide faster than men. A random COHb estimation gives a good indication of the mean COHb level of an individual. PMID:4441877

  9. 45 CFR 205.52 - Furnishing of social security numbers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 2 2014-10-01 2012-10-01 true Furnishing of social security numbers. 205.52 Section 205.52 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES...

  10. 45 CFR 205.52 - Furnishing of social security numbers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 2 2013-10-01 2012-10-01 true Furnishing of social security numbers. 205.52 Section 205.52 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES...

  11. 21 CFR 710.6 - Notification of registrant; cosmetic product establishment registration number.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Notification of registrant; cosmetic product... OF HEALTH AND HUMAN SERVICES (CONTINUED) COSMETICS VOLUNTARY REGISTRATION OF COSMETIC PRODUCT ESTABLISHMENTS § 710.6 Notification of registrant; cosmetic product establishment registration number. The...

  12. Application of Extended Kalman Filter in Persistent Scatterer Interferometry to Enhance the Accuracy of the Unwrapping Process

    NASA Astrophysics Data System (ADS)

    Tavakkoli Estahbanat, A.; Dehghani, M.

    2017-09-01

    In interferometric processing, observed phases are wrapped to the 0-2π interval, and the goal of unwrapping algorithms is to recover the integer number of phase cycles lost when the phase was wrapped. Although the density of points in conventional interferometry is high, this is not effective in some cases, such as large temporal baselines or noisy interferograms. Noisy pixels not only fail to improve the results but also introduce errors during interferogram unwrapping. In the persistent scatterer (PS) technique, the sparsity of PS pixels makes phase unwrapping difficult, and because the data are irregularly distributed, conventional methods are ineffective. Unwrapping techniques are divided into path-independent and path-dependent approaches according to how the unwrapping path is handled. A region-growing method, which is a path-dependent technique, has been used to unwrap PS data. In this paper, the extended Kalman filter (EKF) approach is generalized to PS data. The algorithm accounts for the nonlinearity of the PS unwrapping problem as well as of the conventional unwrapping problem. A pulse-pair method enhanced with singular value decomposition (SVD) is used to estimate the spectral shift from the interferometric power spectral density in 7×7 local windows. Furthermore, a hybrid cost map is used to manage the unwrapping path. The algorithm was applied to simulated PS data: to form a sparse dataset, a few points from a regular grid were randomly selected, and the RMSE between the results and the true unambiguous phases is reported to validate the approach. The unwrapped phases produced by the algorithm were identical to the true unwrapped phases.
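
    For readers unfamiliar with the wrapping problem, the toy example below shows how a smooth phase signal loses integer multiples of 2π when wrapped, and how a dense, regularly sampled 1-D series can be unwrapped with numpy's np.unwrap; PS unwrapping is harder precisely because the points are sparse, irregular, and noisy. The signal used here is invented.

        import numpy as np

        x = np.linspace(0, 1, 200)
        true_phase = 12 * np.pi * x**2               # smooth "deformation" signal in radians
        wrapped = np.angle(np.exp(1j * true_phase))  # observed: wrapped to (-pi, pi]

        unwrapped = np.unwrap(wrapped)               # adds back the lost 2*pi cycles
        print(np.allclose(unwrapped, true_phase))    # True on this dense, noise-free grid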

  13. Prediction of true test scores from observed item scores and ancillary data.

    PubMed

    Haberman, Shelby J; Yao, Lili; Sinharay, Sandip

    2015-05-01

    In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE(®) General Analytical Writing and until 2009 in the case of TOEFL(®) iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater(®). In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so that we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability. © 2015 The British Psychological Society.
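
    The simplest instance of a best linear predictor of the true score is Kelley's classical formula, which shrinks an observed score toward the group mean in proportion to the test's reliability; the method in the paper extends this idea to several predictors via estimated error variances and covariances. The numbers below are purely illustrative.

        def kelley_true_score(observed, group_mean, reliability):
            """Classical-test-theory best linear predictor of the true score
            from a single observed score: T_hat = rho * X + (1 - rho) * mean."""
            return reliability * observed + (1.0 - reliability) * group_mean

        # An examinee scoring 5.5 on a writing test with mean 4.0 and reliability 0.70
        print(kelley_true_score(5.5, 4.0, 0.70))   # 5.05 -> shrunk toward the mean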

  14. True rate of mineralocorticoid receptor antagonists-related hyperkalemia in placebo-controlled trials: A meta-analysis.

    PubMed

    Vukadinović, Davor; Lavall, Daniel; Vukadinović, Aleksandra Nikolovska; Pitt, Bertram; Wagenpfeil, Stefan; Böhm, Michael

    2017-06-01

    Mineralocorticoid receptor antagonists (MRA) improve survival in heart failure with reduced ejection fraction but are often underused, mostly due to concerns of hyperkalemia. Because hyperkalemia occurs also on placebo, we aimed to determine the truly MRA-related rate of hyperkalemia. We performed a meta-analysis including randomized, placebo-controlled trials reporting hyperkalemia on MRAs in patients after myocardial infarction or with chronic heart failure. We evaluated the truly MRA-related rate of hyperkalemia that represents hyperkalemia on MRA, corrected for hyperkalemia on placebo (Pla), according to the equation: True MRA (%)=(MRA (%) - Pla (%))/MRA (%). A total number of 16,065 patients from 7 trials were analyzed. Hyperkalemia was more frequently observed on MRA (9.3%) vs placebo (4.3%) (risk ratio 2.17, 95% CI 1.92-2.45, P<.0001). Truly MRA-related hyperkalemia was 54%, whereas 46% were non-MRA related. In trials using eplerenone, hyperkalemia was documented in 5.0% on eplerenone and in 2.6% on placebo (P<.0001). In spironolactone trials, hyperkalemia was documented in 17.5% and in 7.5% of patients on placebo (P=.0001). Hypokalemia occurred less frequently in patients on MRA (9.3%) compared with placebo (14.8%) (risk ratio 0.58, CI 0.47-0.72, P<.0001). This meta-analysis shows that in clinical trials, 54% of hyperkalemia cases were specifically related to the MRA treatment and 46% to other reasons. Therefore, non-MRA-related rises in potassium levels might be underestimated and should be rigorously explored before cessation of the evidence-based therapy with MRAs. Copyright © 2017 Elsevier Inc. All rights reserved.
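
    Applying the stated formula to the abstract's own headline figures reproduces the 54% estimate:

        mra, placebo = 9.3, 4.3            # % of patients with hyperkalemia (from the abstract)
        truly_mra_related = (mra - placebo) / mra
        print(f"{truly_mra_related:.0%}")  # 54% of hyperkalemia cases attributable to MRA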

  15. Efficient error correction for next-generation sequencing of viral amplicons

    PubMed Central

    2012-01-01

    Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430
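
    The shared intuition behind both algorithms, namely that k-mers or haplotypes occurring only rarely across the read set are likely sequencing errors while genuine amplicon sequence recurs in many reads, can be sketched as follows. The k value, frequency threshold, and toy reads are illustrative assumptions; the published KEC and ET implementations are calibrated to 454 homopolymer error profiles.

        from collections import Counter

        def kmer_counts(reads, k=8):
            """Count all k-mers across a set of reads."""
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        def flag_error_kmers(counts, threshold=3):
            """k-mers seen fewer than `threshold` times are treated as likely errors;
            real amplicon k-mers recur across many reads, sequencing errors rarely do."""
            return {kmer for kmer, c in counts.items() if c < threshold}

        reads = ["ACGTACGTGGA"] * 50 + ["ACGTACCTGGA"]     # one read with a single-base error
        errors = flag_error_kmers(kmer_counts(reads))
        print(len(errors), "suspect k-mers")               # only k-mers spanning the error site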

  16. Beyond the hype and hope: Critical considerations for intranasal oxytocin research in autism spectrum disorder.

    PubMed

    Alvares, Gail A; Quintana, Daniel S; Whitehouse, Andrew J O

    2017-01-01

    Extensive research efforts in the last decade have been expended into understanding whether intranasal oxytocin may be an effective therapeutic in treating social communication impairments in individuals with autism spectrum disorder (ASD). After much hyped early findings, subsequent clinical trials of longer-term administration have yielded more conservative and mixed evidence. However, it is still unclear at this stage whether these more disappointing findings reflect a true null effect or are mitigated by methodological differences masking true effects. In this review, we comprehensively evaluate the rationale for oxytocin as a therapeutic, evaluating evidence from randomized controlled trials, case reports, and open-label studies of oxytocin administration in individuals with ASD. The evidence to date, including reviews of preregistered trials, suggests a number of critical considerations for the design and interpretation of research in this area. These include considering the choice of ASD outcome measures, dosing and nasal spray device issues, and participant selection. Despite these limitations in the field to date, there remains significant potential for oxytocin to ameliorate aspects of the persistent and debilitating social impairments in individuals with ASD. Given the considerable media hype around new treatments for ASD, as well as the needs of eager families, there is an urgent need for researchers to prioritise considering such factors when conducting well-designed and controlled studies to further advance this field. Autism Res 2017, 10: 25-41. © 2016 International Society for Autism Research, Wiley Periodicals, Inc. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  17. Efficient error correction for next-generation sequencing of viral amplicons.

    PubMed

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.

  18. The Role of Binarity in the Angular Momentum Evolution of M Dwarfs

    NASA Astrophysics Data System (ADS)

    Stauffer, John; Rebull, Luisa; K2 clusters team

    2018-01-01

    We have analysed K2 light curves for of order a thousand low mass stars in each of the 8 Myr old Upper Sco association, the 125 Myr old Pleiades open cluster and the ~700 Myr old Praesepe cluster. A very large fraction of these stars show well-determined rotation periods with K2, and where the star is a binary, we usually are able to determine periods for both stars. In Upper Sco, where there are ~150 M dwarf binaries with K2 light curves, the binary stars have periods that are much shorter on average and much closer to each other than would be true if drawn at random from the Upper Sco M dwarf single stars. The same is true in the Pleiades, though the size of the differences from the single M dwarf population is smaller. By Praesepe age, the M dwarf binaries are still somewhat rapidly rotating, but their period differences are not significantly different from what would be true if drawn by chance from the singles.

  19. p-Curve and p-Hacking in Observational Research.

    PubMed

    Bruns, Stephan B; Ioannidis, John P A

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call recent studies into question that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable.
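
    The paper's central simulation result can be reproduced in miniature: generate data in which the predictor of interest has no causal effect but is correlated with a confounder, let the analyst report whichever specification (with or without the control) first reaches p < 0.05, and collect the significant p-values. Sample size, confounding strength, and the two-specification search below are invented for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)

        def one_study(n=200):
            """Null effect of x on y with a confounder z; 'p-hack' by trying the
            specification without and then with the control, reporting the first
            p-value below 0.05."""
            z = rng.standard_normal(n)
            x = 0.3 * z + rng.standard_normal(n)   # x correlated with the confounder
            y = 0.3 * z + rng.standard_normal(n)   # y depends on z only (true null for x)
            for exog in (x, np.column_stack([x, z])):
                p = sm.OLS(y, sm.add_constant(exog)).fit().pvalues[1]
                if p < 0.05:
                    return p
            return p

        pvals = np.array([one_study() for _ in range(2000)])
        sig = pvals[pvals < 0.05]
        print(len(sig), "significant 'findings' from a true null")
        print("share of significant p-values below 0.025:", np.mean(sig < 0.025))
        # Every significant result here comes from a true null plus confounded
        # specification searching, yet, as the paper argues, the p-curve alone
        # cannot distinguish them from genuine low-powered effects.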

  20. 320-row CT renal perfusion imaging in patients with aortic dissection: A preliminary study.

    PubMed

    Liu, Dongting; Liu, Jiayi; Wen, Zhaoying; Li, Yu; Sun, Zhonghua; Xu, Qin; Fan, Zhanming

    2017-01-01

    To investigate the clinical value of renal perfusion imaging in patients with aortic dissection (AD) using 320-row computed tomography (CT), and to determine the relationship between renal CT perfusion imaging and various factors of aortic dissection. Forty-three patients with AD who underwent 320-row CT renal perfusion before operation were prospectively enrolled in this study. Diagnosis of AD was confirmed by transthoracic echocardiography. Blood flow (BF) of bilateral renal perfusion was measured and analyzed. CT perfusion imaging signs of AD in relation to the type of AD, number of entry tears and the false lumen thrombus were observed and compared. The BF values of patients with type A AD were significantly lower than those of patients with type B AD (P = 0.004). No significant difference was found in the BF between different numbers of intimal tears (P = 0.288), but BF values were significantly higher in cases with a false lumen without thrombus and renal arteries arising from the true lumen than in those with thrombus (P = 0.036). The BF values measured between the true lumen, false lumen and overriding groups were different (P = 0.02), with the true lumen group having the highest. Also, the difference in BF values between true lumen and false lumen groups was statistically significant (P = 0.016), while no statistical significance was found in the other two groups (P > 0.05). The larger the size of intimal entry tears, the greater the BF values (P = 0.044). This study shows a direct correlation between renal CT perfusion changes and AD, with the size, number of intimal tears, different types of AD, different renal artery origins and false lumen thrombosis, significantly affecting the perfusion values.

  1. Chronic bacterial osteomyelitis: prospective comparison of (18)F-FDG imaging with a dual-head coincidence camera and (111)In-labelled autologous leucocyte scintigraphy.

    PubMed

    Meller, J; Köster, G; Liersch, T; Siefker, U; Lehmann, K; Meyer, I; Schreiber, K; Altenvoerde, G; Becker, W

    2002-01-01

    Indium-111-labelled white blood cells ((111)In-WBCs) are currently considered the tracer of choice in the diagnostic work-up of suspected active chronic osteomyelitis (COM). Previous studies in a limited number of patients, performed with dedicated PET systems, have shown that [(18)F]2'-deoxy-2-fluoro-D-glucose (FDG) imaging may offer at least similar diagnostic accuracy. The aim of this prospective study was to compare FDG imaging with a dual-head coincidence camera (DHCC) and (111)In-WBC imaging in patients with suspected COM. Thirty consecutive non-diabetic patients with possible COM underwent combined skeletal scintigraphy (30/30 patients), (111)In-WBC imaging (28/30 patients) and FDG-PET with a DHCC (30/30 patients). During diagnostic work-up, COM was proven in 11/36 regions of suspected skeletal infection and subsequently excluded in 25/36 regions. In addition, soft tissue infection was present in five patients and septic arthritis in three. (111)In-WBC imaging in 28 patients was true positive in 2/11 regions with proven COM and true negative in 21/23 regions without further evidence of COM. False-positive results occurred in two regions and false-negative results in nine regions suspected for COM. Most of the false-negative results (7/9) occurred in the central skeleton. If the analysis was restricted to the 18 regions with available histology (n=17) or culture (n=1), (111)In-WBC imaging was true positive in 2/18 regions, true negative in 8/18 regions, false negative in 7/18 regions and false positive in 1/18 regions. FDG-DHCC imaging was true positive in 11/11 regions with proven COM and true negative in 23/25 regions without further evidence of COM. False-positive results occurred in two regions. If the analysis was restricted to the 19 regions with available histology (n=18) or culture (n=1), FDG-DHCC imaging was true positive in 9/9 regions with proven COM and true negative in 10/10 regions without further evidence of COM. It is concluded that FDG-DHCC imaging is superior to (111)In-WBC scintigraphy in the diagnosis of COM in the central skeleton and therefore should be considered the method of choice for this indication. This seems to hold true for peripheral lesions as well, but in our series the number of cases with proven infection was too small to permit a final conclusion.

  2. Achieving Nuclear Deterrence in the 21st Century

    DTIC Science & Technology

    2011-03-18

    and disease. If that all sounded too good to be true, it was too good to be true. While the Soviet Union did depart this vale of tears, Russia...necessary or there could be a large increase in the number of nuclear-armed states. The second nuclear age, per Levite, spanned the years 1968 to 1992...currently the international mood is pessimism with the world on the brink of widespread nuclear proliferation. Levite suggests that the emergence

  3. Using Latent Sleepiness to Evaluate an Important Effect of Promethazine

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Hayat, Matthew; Vksman, Zalman; Putcha, Laksmi

    2007-01-01

    Astronauts often use promethazine (PMZ) to counteract space motion sickness; however, PMZ may cause drowsiness, which might impair cognitive function. In a NASA ground study, subjects received PMZ and their cognitive performance was then monitored over time. Subjects also reported sleepiness using the Karolinska Sleepiness Score (KSS), which ranges from 1 to 9. A problem arises when using KSS to establish an association between true sleepiness and performance because KSS scores tend to overly concentrate on the values 3 (fairly awake) and 7 (moderately tired). Therefore, we defined a latent sleepiness measure as a continuous random variable describing a subject's actual, but unobserved, true state of sleepiness through time. The latent sleepiness and observed KSS are associated through a conditional probability model, which, when coupled with demographic factors, predicts performance.

  4. True photographs and false memories.

    PubMed

    Lindsay, D Stephen; Hagen, Lisa; Read, J Don; Wade, Kimberley A; Garry, Maryanne

    2004-03-01

    Some trauma-memory-oriented psychotherapists advise clients to review old family photo albums to cue suspected "repressed" memories of childhood sexual abuse. Old photos might cue long-forgotten memories, but when combined with other suggestive influences they might also contribute to false memories. We asked 45 undergraduates to work at remembering three school-related childhood events (two true events provided by parents and one pseudoevent). By random assignment, 23 subjects were also given their school classes' group photos from the years of the to-be-recalled events as memory cues. As predicted, the rate of false-memory reports was dramatically higher in the photo condition than in the no-photo condition. Indeed, the rate of false-memory reports in the photo condition was substantially higher than the rate in any previously published study.

  5. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three to the true release rates. The average deviations in retrieval of source locations are observed relatively large in two release trials in comparison to three and four release trials.
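
    Once candidate source locations are fixed, the strength-retrieval part of such an inversion reduces to a least-squares fit of the measured concentrations against the columns of a source-receptor matrix, optionally constrained to non-negative release rates. The 2-source, 5-receptor matrix below is invented; the study itself also searches over locations and builds the matrix from an adjoint Gaussian dispersion model.

        import numpy as np
        from scipy.optimize import nnls

        # Assumed source-receptor matrix: A[i, j] = concentration at receptor i
        # produced by a unit-strength release from candidate source j.
        A = np.array([[0.8, 0.1],
                      [0.5, 0.3],
                      [0.2, 0.6],
                      [0.1, 0.9],
                      [0.4, 0.4]])
        true_q = np.array([2.0, 5.0])                  # true release rates (arbitrary units)
        measured = A @ true_q + np.random.default_rng(7).normal(0, 0.05, 5)

        q_hat, residual = nnls(A, measured)            # strengths constrained >= 0
        print("retrieved strengths:", q_hat)           # close to [2.0, 5.0]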

  6. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    PubMed

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in these multiple hypotheses testing. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated distributions from those large clusters, and therefore, by construct rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.
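
    The heavy upper tail of cluster sizes that drives this result is easy to observe directly: smooth white noise into a 3-D Gaussian random field, threshold it, and tabulate the sizes of the suprathreshold clusters. The grid size, smoothing width, and cluster-defining threshold below are arbitrary choices, not those used in the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter, label

        rng = np.random.default_rng(8)
        sizes = []
        for _ in range(20):                                   # 20 simulated 3-D fields
            field = gaussian_filter(rng.standard_normal((64, 64, 64)), sigma=3)
            field /= field.std()                              # unit-variance GRF
            clusters, n = label(field > 2.5)                  # cluster-defining threshold z = 2.5
            if n:
                sizes.extend(np.bincount(clusters.ravel())[1:])

        sizes = np.array(sizes)
        print("mean cluster size:", sizes.mean())
        print("largest / mean   :", sizes.max() / sizes.mean())   # a long right tail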

  7. 1 CFR 304.31 - Use and collection of social security numbers and other information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 1 General Provisions 1 2013-01-01 2012-01-01 true Use and collection of social security numbers... Under the Privacy Act of 1974 § 304.31 Use and collection of social security numbers and other... individuals may not be denied any right, benefit, or privilege as a result of refusing to provide their social...

  8. 36 CFR § 1202.22 - Will NARA need my Social Security Number?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Will NARA need my Social... Information § 1202.22 Will NARA need my Social Security Number? (a) Before a NARA employee or NARA contractor asks you to provide your social security number (SSN), he or she will ensure that the disclosure is...

  9. 1 CFR 304.31 - Use and collection of social security numbers and other information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 1 General Provisions 1 2014-01-01 2012-01-01 true Use and collection of social security numbers... Under the Privacy Act of 1974 § 304.31 Use and collection of social security numbers and other... individuals may not be denied any right, benefit, or privilege as a result of refusing to provide their social...

  10. Towards a high-speed quantum random number generator

    NASA Astrophysics Data System (ADS)

    Stucki, Damien; Burri, Samuel; Charbon, Edoardo; Chunnilall, Christopher; Meneghetti, Alessio; Regazzoni, Francesco

    2013-10-01

    Randomness is of fundamental importance in various fields, such as cryptography, numerical simulations, or the gaming industry. Quantum physics, which is fundamentally probabilistic, is the best option for a physical random number generator. In this article, we will present the work carried out in various projects in the context of the development of a commercial and certified high speed random number generator.

  11. Self-balanced real-time photonic scheme for ultrafast random number generation

    NASA Astrophysics Data System (ADS)

    Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang

    2018-06-01

    We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), the balanced photo-detection technology is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefitting from ultrafast photo-detection, our method can efficiently eliminate the generation rate bottleneck from electronic ADCs which are required in nearly all the available fast physical random number generators. A proof-of-principle experiment demonstrates that using our approach 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally here is being limited by the bandwidth of the chaotic source. The method described has the potential to attain a real-time rate of 100 Gb/s.

  12. Number of Lymph Nodes Removed and Survival after Gastric Cancer Resection: An Analysis from the US Gastric Cancer Collaborative.

    PubMed

    Gholami, Sepideh; Janson, Lucas; Worhunsky, David J; Tran, Thuy B; Squires, Malcolm Hart; Jin, Linda X; Spolverato, Gaya; Votanopoulos, Konstantinos I; Schmidt, Carl; Weber, Sharon M; Bloomston, Mark; Cho, Clifford S; Levine, Edward A; Fields, Ryan C; Pawlik, Timothy M; Maithel, Shishir K; Efron, Bradley; Norton, Jeffrey A; Poultsides, George A

    2015-08-01

    Examination of at least 16 lymph nodes (LNs) has been traditionally recommended during gastric adenocarcinoma resection to optimize staging, but the impact of this strategy on survival is uncertain. Because recent randomized trials have demonstrated a therapeutic benefit from extended lymphadenectomy, we sought to investigate the impact of the number of LNs removed on prognosis after gastric adenocarcinoma resection. We analyzed patients who underwent gastrectomy for gastric adenocarcinoma from 2000 to 2012, at 7 US academic institutions. Patients with M1 disease or R2 resections were excluded. Disease-specific survival (DSS) was calculated using the Kaplan-Meier method and compared using log-rank and Cox regression analyses. Of 742 patients, 257 (35%) had 7 to 15 LNs removed and 485 (65%) had ≥16 LNs removed. Disease-specific survival was not significantly longer after removal of ≥16 vs 7 to 15 LNs (10-year survival, 55% vs 47%, respectively; p = 0.53) for the entire cohort, but was significantly improved in the subset of patients with stage IA to IIIA (10-year survival, 74% vs 57%, respectively; p = 0.018) or N0-2 disease (72% vs 55%, respectively; p = 0.023). Similarly, for patients who were classified to more likely be "true N0-2," based on frequentist analysis incorporating both the number of positive and of total LNs removed, the hazard ratio for disease-related death (adjusted for T stage, R status, grade, receipt of neoadjuvant and adjuvant therapy, and institution) significantly decreased as the number of LNs removed increased. The number of LNs removed during gastrectomy for adenocarcinoma appears itself to have prognostic implications for long-term survival. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  13. Number of Lymph Nodes Removed and Survival after Gastric Cancer Resection: An Analysis from the U.S. Gastric Cancer Collaborative

    PubMed Central

    Gholami, Sepideh; Janson, Lucas; Worhunsky, David J.; Tran, Thuy B.; Squires, Malcolm Hart; Jin, Linda X.; Spolverato, Gaya; Votanopoulos, Konstantinos I.; Schmidt, Carl; Weber, Sharon M.; Bloomston, Mark; Cho, Clifford S.; Levine, Edward A.; Fields, Ryan C.; Pawlik, Timothy M.; Maithel, Shishir K.; Efron, Bradley; Norton, Jeffrey A.; Poultsides, George A.

    2015-01-01

    Background Examination of at least 16 lymph nodes (LNs) has been traditionally recommended during gastric adenocarcinoma (GAC) resection to optimize staging, but the impact of this strategy on survival is uncertain. As recent randomized trials have demonstrated a therapeutic benefit from extended lymphadenectomy, we sought to investigate the impact of the number of LNs removed on prognosis after GAC resection. Study Design Patients who underwent gastrectomy for GAC from 2000 to 2012 at seven US academic institutions were analyzed. Patients with M1 disease or R2 resections were excluded. Disease-specific survival (DSS) was calculated using the Kaplan-Meier method and compared using log-rank and Cox regression analyses. Results Of 742 patients, 257 (35%) had 7–15 LNs removed and 485 (65%) had ≥16 LNs removed. DSS was not significantly longer after removal of ≥16 versus 7–15 LNs (10-year, 55% versus 47%; P = 0.53) for the entire cohort, but was significantly improved in the subset of patients with stage IA-IIIA (10-year, 74% versus 57%; P = 0.018) or N0-2 disease (72% versus 55%, P = 0.023). Similarly, for patients who were classified to more likely be “true N0-2”, based on frequentist analysis incorporating both the number of positive and of total LNs removed, the hazard ratio for disease-related death (adjusted for T stage, R status, grade, receipt of neoadjuvant and adjuvant therapy, as well as institution) significantly decreased as the number of LNs removed increased. Conclusions The number of lymph nodes removed during gastrectomy for adenocarcinoma appears itself to have prognostic implications on long-term survival. PMID:26206635

  14. Graph reconstruction using covariance-based methods.

    PubMed

    Sulaimanov, Nurgazy; Koeppl, Heinz

    2016-12-01

    Methods based on correlation and partial correlation are today employed in the reconstruction of a statistical interaction graph from high-throughput omics data. These dedicated methods work well even for the case when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related by using Neumann series and transitive closure and through discussing concrete small examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are directly estimated from the data.
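
    A compact way to see the difference between the two graph constructions is to threshold, for the same data, the correlation matrix and the partial correlations obtained from the inverse covariance (concentration) matrix. The chain-structured toy variables and the 0.3 threshold below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(9)
        n = 5000
        # Chain x1 -> x2 -> x3: x1 and x3 are marginally correlated but
        # conditionally independent given x2.
        x1 = rng.standard_normal(n)
        x2 = 0.8 * x1 + rng.standard_normal(n)
        x3 = 0.8 * x2 + rng.standard_normal(n)
        X = np.column_stack([x1, x2, x3])

        corr = np.corrcoef(X, rowvar=False)
        prec = np.linalg.inv(np.cov(X, rowvar=False))
        d = np.sqrt(np.diag(prec))
        pcorr = -prec / np.outer(d, d)                 # partial correlations
        np.fill_diagonal(pcorr, 1.0)

        thr = 0.3
        print("correlation graph edges:\n", np.abs(corr) > thr)
        print("partial-correlation graph edges:\n", np.abs(pcorr) > thr)
        # The 1-3 edge appears in the correlation graph but (correctly) not in
        # the partial-correlation graph.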

  15. The Gulliver Effect: The Impact of Error in an Elephantine Subpopulation on Estimates for Lilliputian Subpopulations

    ERIC Educational Resources Information Center

    Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene

    2009-01-01

    An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…

  16. The Effects of Reading from the Screen on the Reading Motivation Levels of Elementary 5th Graders

    ERIC Educational Resources Information Center

    Aydemir, Zeynep; Ozturk, Ergun

    2012-01-01

    This study aims to explore the effects of reading from the screen on elementary 5th grade students' reading motivation levels. It used the randomized control-group pretest-posttest model, which is a true experimental design. The study group consisted of 60 students, 30 experimental and 30 control, who were attending the 5th grade of a public…

  17. A Modified General Location Model for Noncompliance with Missing Data: Revisiting the New York City School Choice Scholarship Program Using Principal Stratification

    ERIC Educational Resources Information Center

    Jin, Hui; Barnard, John; Rubin, Donald B.

    2010-01-01

    Missing data, especially when coupled with noncompliance, are a challenge even in the setting of randomized experiments. Although some existing methods can address each complication, it can be difficult to handle both of them simultaneously. This is true in the example of the New York City School Choice Scholarship Program, where both the…

  18. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.

  19. 41 CFR 101-1.109 - Numbering in FPMR System.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Numbering in FPMR System. 101-1.109 Section 101-1.109 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS GENERAL 1-INTRODUCTION 1.1-Regulation System § 101...

  20. 41 CFR 101-30.101-3 - National stock number.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true National stock number. 101-30.101-3 Section 101-30.101-3 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30...

  1. Does an uneven sample size distribution across settings matter in cross-classified multilevel modeling? Results of a simulation study.

    PubMed

    Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C

    2018-06-06

    Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Neither fixed nor random: weighted least squares meta-regression.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2017-03-01

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, that it is as good as FE-MRA in all cases, and that it is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
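
    As a concrete illustration of the unrestricted WLS idea described above, the following Python sketch weights each study estimate by the inverse of its squared standard error and fits the moderators by ordinary weighted least squares. It is a minimal sketch under that assumption, not the authors' implementation, and the example data are invented.

      import numpy as np

      def wls_meta_regression(effect, se, X):
          # Weight each study by 1 / SE^2 and solve the weighted least squares
          # problem effect ~ X @ beta without restricting the error variance.
          w = np.sqrt(1.0 / se**2)
          beta, *_ = np.linalg.lstsq(X * w[:, None], effect * w, rcond=None)
          return beta

      effects = np.array([0.30, 0.25, 0.10, 0.45, 0.20])   # invented study estimates
      ses = np.array([0.05, 0.10, 0.08, 0.15, 0.06])       # invented standard errors
      X = np.column_stack([np.ones(5), [1.0, 2.0, 1.5, 3.0, 1.2]])  # intercept + moderator
      print(wls_meta_regression(effects, ses, X))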

  3. Rationale and design of the MONITOR-ICD study: a randomized comparison of economic and clinical effects of automatic remote MONITORing versus control in patients with Implantable Cardioverter Defibrillators.

    PubMed

    Zabel, Markus; Müller-Riemenschneider, Falk; Geller, J Christoph; Brachmann, Johannes; Kühlkamp, Volker; Dissmann, Rüdiger; Reinhold, Thomas; Roll, Stephanie; Lüthje, Lars; Bode, Frank; Eckardt, Lars; Willich, Stefan N

    2014-10-01

    Implantable cardioverter defibrillator (ICD) remote follow-up and ICD remote monitoring (RM) are established means of ICD follow-up. The reduction of the number of in-office visits and the time to decision is proven, but the true clinical benefit is still unknown. Cost and cost-effectiveness of RM remain leading issues for its dissemination. The MONITOR-ICD study has been designed to assess costs, cost-effectiveness, and clinical benefits of RM versus standard-care follow-up in a prospective multicenter randomized controlled trial. Patients indicated for single- or dual-chamber ICD are eligible for the study and are implanted with an RM-capable Biotronik ICD (Lumax VR-T or Lumax DR-T; Biotronik SE & Co KG, Berlin, Germany). Implantable cardioverter defibrillator programming and alert-based clinical responses in the RM group are highly standardized by protocol. As of December 2011, recruitment has been completed, and 416 patients have been enrolled. Subjects are followed up for a minimum of 12 months and a maximum of 24 months, ending in January 2013. Disease-specific costs from a societal perspective have been defined as the primary end point and will be compared between the RM and standard-care groups. Secondary end points include ICD shocks (including appropriate and inappropriate shocks), cardiovascular hospitalizations and cardiovascular mortality, and additional health economic end points. The MONITOR-ICD study will be an important randomized RM study to report data on a primary economic end point in 2014. Its results on ICD shocks will add to the currently available evidence on the clinical benefit of RM. Copyright © 2014 Mosby, Inc. All rights reserved.

  4. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks.

    PubMed

    Adalsteinsson, David; McMillen, David; Elston, Timothy C

    2004-03-08

    Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
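
    The Gillespie algorithm that BioNetS uses for its discrete species is simple enough to sketch. The following Python fragment simulates a single birth-death species with the direct method; it is only an illustration of the algorithm, not BioNetS code, and the rate constants are invented.

      import math
      import random

      def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0):
          # Direct-method Gillespie simulation of 0 -> X (rate k_prod) and
          # X -> 0 (rate k_deg * x): draw an exponential waiting time from the
          # total propensity, then pick which reaction fired.
          t, x = 0.0, x0
          times, counts = [t], [x]
          while t < t_end:
              a_prod, a_deg = k_prod, k_deg * x
              a_total = a_prod + a_deg
              if a_total == 0.0:
                  break
              t += -math.log(1.0 - random.random()) / a_total
              x += 1 if random.random() * a_total < a_prod else -1
              times.append(t)
              counts.append(x)
          return times, counts

      # The copy number should fluctuate around k_prod / k_deg = 100.
      ts, xs = gillespie_birth_death()
      print(xs[-1])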

  5. Impact of recombination on polymorphism of genes encoding Kunitz-type protease inhibitors in the genus Solanum.

    PubMed

    Speranskaya, Anna S; Krinitsina, Anastasia A; Kudryavtseva, Anna V; Poltronieri, Palmiro; Santino, Angelo; Oparina, Nina Y; Dmitriev, Alexey A; Belenikin, Maxim S; Guseva, Marina A; Shevelev, Alexei B

    2012-08-01

    The group of Kunitz-type protease inhibitors (KPI) from potato is encoded by a polymorphic family of multiple allelic and non-allelic genes. The previous explanations of the KPI variability were based on the hypothesis of random mutagenesis as a key factor of KPI polymorphism. KPI-A genes from the genomes of Solanum tuberosum cv. Istrinskii and the wild species Solanum palustre were amplified by PCR with subsequent cloning in plasmids. True KPI sequences were derived from comparison of the cloned copies. "Hot spots" of recombination in KPI genes were independently identified by DnaSP 4.0 and TOPALi v2.5 software. The KPI-A sequence from potato cv. Istrinskii was found to be 100% identical to the gene from Solanum nigrum. This fact illustrates a high degree of similarity of KPI genes in the genus Solanum. Pairwise comparison of KPI A and B genes unambiguously showed a non-uniform extent of polymorphism at different nt positions. Moreover, the occurrence of substitutions was not random along the strand. Taken together, these facts contradict the traditional hypothesis of random mutagenesis as a principal source of KPI gene polymorphism. The experimentally found mosaic structure of KPI genes in both plants studied is consistent with the hypothesis suggesting recombination of ancestral genes. The same mechanism was proposed earlier for other resistance-conferring genes in the nightshade family (Solanaceae). Based on the data obtained, we searched for potential motifs of site-specific binding with plant DNA recombinases. During this work, we analyzed the sequencing data reported by the Potato Genome Sequencing Consortium (PGSC), 2011 and found considerable inconsistence of their data concerning the number, location, and orientation of KPI genes of groups A and B. The key role of recombination rather than random point mutagenesis in KPI polymorphism was demonstrated for the first time. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  6. Lack of diversity in orthopaedic trials conducted in the United States.

    PubMed

    Somerson, Jeremy S; Bhandari, Mohit; Vaughan, Clayton T; Smith, Christopher S; Zelle, Boris A

    2014-04-02

    Several orthopaedic studies have suggested patient race and ethnicity to be important predictors of patient functional outcomes. This issue has also been emphasized by federal funding sources. However, the reporting of race and ethnicity has gained little attention in the orthopaedic literature. The objective of this study was to determine the percentage of orthopaedic randomized controlled clinical trials in the United States that included race and ethnicity data and to record the racial and ethnic distribution of patients enrolled in these trials. A systematic review of orthopaedic randomized controlled trials published from 2008 to 2011 was performed. The studies were identified through a manual search of thirty-two scientific journals, including all major orthopaedic journals as well as five leading medical journals. Only trials from the United States were included. The publication date, journal impact factor, orthopaedic subspecialty, ZIP code of the primary research site, number of enrolled patients, type of funding, and race and ethnicity of the study population were extracted from the identified studies. A total of 158 randomized controlled trials with 37,625 enrolled patients matched the inclusion criteria. Only thirty-two studies (20.3%) included race or ethnicity with at least one descriptor. Government funding significantly increased the likelihood of reporting these factors (p < 0.05). The percentages of Hispanic and African-American patients were extractable for studies with 7648 and 6591 enrolled patients, respectively. In those studies, 4.6% (352) of the patients were Hispanic and 6.2% (410) were African-American; these proportions were 3.5-fold and twofold lower, respectively, than those represented in the 2010 United States Census. Few orthopaedic randomized controlled trials performed in the United States reported data on race or ethnicity. Among trials that did report demographic race or ethnicity data, the inclusion of minority patients was substantially lower than would be expected on the basis of census demographics. Failure to represent the true racial diversity may result in decreased generalizability of trial conclusions across clinical populations.

  7. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
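
    For readers who want to see what "treating the design as a simple random sample of lines" means in practice, the sketch below computes a design-based estimate of the encounter rate n/L and its variance across K transect lines, weighting each line by its length. It is a minimal illustration of that conventional estimator under the simple-random-sample assumption, not the authors' code, and the survey values are invented.

      def encounter_rate_and_var(line_lengths, line_counts):
          # Encounter rate n/L and its variance, treating the K lines as a
          # simple random sample and weighting each line by its length.
          K = len(line_lengths)
          L = sum(line_lengths)
          n = sum(line_counts)
          er = n / L
          s = sum(l**2 * (c / l - er) ** 2
                  for l, c in zip(line_lengths, line_counts))
          return er, K * s / (L**2 * (K - 1))

      # Invented survey: five lines with lengths (km) and detection counts.
      print(encounter_rate_and_var([2.0, 3.5, 1.0, 4.0, 2.5], [3, 8, 1, 9, 4]))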

  8. Predicting the number and sizes of IBD regions among family members and evaluating the family size requirement for linkage studies.

    PubMed

    Yang, Wanling; Wang, Zhanyong; Wang, Lusheng; Sham, Pak-Chung; Huang, Peng; Lau, Yu Lung

    2008-12-01

    With genotyping of high-density single nucleotide polymorphisms (SNPs) replacing that of microsatellite markers in linkage studies, it becomes possible to accurately determine the genomic regions shared identical by descent (IBD) among family members. In addition to evaluating the likelihood of linkage for a region with the underlying disease (the LOD score approach), an appropriate question to ask is what would be the expected number and sizes of IBD regions among the affecteds, as there could be more than one region reaching the maximum achievable LOD score for a given family. Here, we introduce a computer program to allow the prediction of the total number of IBD regions among family members and their sizes. Conversely, it can be used to predict the portion of the genome that can be excluded from consideration according to the family size and user-defined inheritance mode and penetrance. Such information has implications for the feasibility of conducting linkage analysis on a given family of certain size and structure or on a few small families when interfamily homogeneity can be assumed. It can also help determine the most relevant members to be genotyped for such a study. Simulation results showed that the IBD regions containing true mutations are usually larger than regions IBD due to random chance. We have made use of this feature in our program to allow evaluation of the identified IBD regions based on Bayesian probability calculation and simulation results.

  9. Surgical treatment for a ruptured true posterior communicating artery aneurysm arising on the fetal-type posterior communicating artery--two case reports and review of the literature.

    PubMed

    Nakano, Yoshiteru; Saito, Takeshi; Yamamoto, Junkoh; Takahashi, Mayu; Akiba, Daisuke; Kitagawa, Takehiro; Miyaoka, Ryo; Ueta, Kunihiro; Kurokawa, Toru; Nishizawa, Shigeru

    2011-12-01

    Only a small number of aneurysms arising on the posterior communicating artery itself (true Pcom aneurysm) have been reported. We report two cases of ruptured true Pcom aneurysms with some characteristic features of true Pcom aneurysms. A 43-year-old man suffering from subarachnoid hemorrhage (SAH) had an aneurysm arising on the fetal-type Pcom artery itself, and underwent surgery for clipping. Most of the aneurysm was buried in the temporal lobe, so retraction of the temporal lobe was mandatory. During the retraction, premature rupture was encountered. After tentative dome clipping and the control of bleeding, complete clipping was achieved. Another patient, a 71-year-old woman presenting with consciousness disturbance due to SAH, had an aneurysm on the fetal-type Pcom artery itself, and underwent surgery for clipping. It has been generally considered that hemodynamic factors play an important role in the formation, growth, and rupture of cerebral aneurysms. These factors are especially significant in true Pcom aneurysm formation and rupture. According to the literature, a combination of a fetal-type Pcom and formation of a true Pcom aneurysm has been reported in most cases (81.8%). Most of the aneurysm can be buried in the temporal lobe, and the retraction of the temporal lobe during the dissection of the neck would be necessary, which causes premature rupture of the true Pcom aneurysm. In the surgery for a true Pcom aneurysm, we should be aware of possible premature rupture when temporal lobe retraction is necessary.

  10. 25 CFR 166.308 - Can the number of animals and/or season of use be modified on the permitted land if I graze...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Can the number of animals and/or season of use be modified... WATER GRAZING PERMITS Land and Operations Management § 166.308 Can the number of animals and/or season... on-and-off grazing permit? Yes. The number of animals and/or season of use may be modified on...

  11. Space-Time Data Fusion

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel

    2011-01-01

    Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest, and obtain uncertainties for those estimates. The input data sets may have different observing characteristics including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use the model to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.

  12. An evaluation of behavior inferences from Bayesian state-space models: A case study with the Pacific walrus

    USGS Publications Warehouse

    Beatty, William; Jay, Chadwick V.; Fischbach, Anthony S.

    2016-01-01

    State-space models offer researchers an objective approach to modeling complex animal location data sets, and state-space model behavior classifications are often assumed to have a link to animal behavior. In this study, we evaluated the behavioral classification accuracy of a Bayesian state-space model in Pacific walruses using Argos satellite tags with sensors to detect animal behavior in real time. We fit a two-state discrete-time continuous-space Bayesian state-space model to data from 306 Pacific walruses tagged in the Chukchi Sea. We matched predicted locations and behaviors from the state-space model (resident, transient behavior) to true animal behavior (foraging, swimming, hauled out) and evaluated classification accuracy with kappa statistics (κ) and root mean square error (RMSE). In addition, we compared biased random bridge utilization distributions generated with resident behavior locations to true foraging behavior locations to evaluate differences in space use patterns. Results indicated that the two-state model fairly classified true animal behavior (0.06 ≤ κ ≤ 0.26, 0.49 ≤ RMSE ≤ 0.59). Kernel overlap metrics indicated utilization distributions generated with resident behavior locations were generally smaller than utilization distributions generated with true foraging behavior locations. Consequently, we encourage researchers to carefully examine parameters and priors associated with behaviors in state-space models, and reconcile these parameters with the study species and its expected behaviors.
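
    Cohen's kappa, used above to score agreement between the model's two states and the sensor-derived behaviors, is easy to compute once both label sequences share a vocabulary. The sketch below is a generic implementation with invented labels; mapping foraging to "resident" and the other behaviors to "transient" is an assumption made only for the example.

      from collections import Counter

      def cohens_kappa(pred, obs):
          # Observed agreement minus chance agreement, scaled by 1 - chance.
          n = len(pred)
          p_o = sum(p == o for p, o in zip(pred, obs)) / n
          pf, of = Counter(pred), Counter(obs)
          p_e = sum(pf[k] * of[k] for k in set(pred) | set(obs)) / (n * n)
          return (p_o - p_e) / (1.0 - p_e)

      model = ["resident", "transient", "resident", "resident", "transient"]
      sensor = ["foraging", "swimming", "foraging", "swimming", "swimming"]
      sensor_two_state = ["resident" if b == "foraging" else "transient" for b in sensor]
      print(cohens_kappa(model, sensor_two_state))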

  13. The impact of gender stereotypes on the evaluation of general practitioners' communication skills: an experimental study using transcripts of physician-patient encounters.

    PubMed

    Nicolai, Jennifer; Demmel, Ralf

    2007-12-01

    The present study has been designed to test for the effect of physicians' gender on the perception and assessment of empathic communication in medical encounters. Eighty-eight volunteers were asked to assess six transcribed interactions between physicians and a standardized patient. The effects of physicians' gender were tested by the experimental manipulation of physicians' gender labels in transcripts. Participants were randomly assigned to one of two testing conditions: (1) perceived gender corresponds to the physician's true gender; (2) perceived gender differs from the physician's true gender. Empathic communication was assessed using the Rating Scales for the Assessment of Empathic Communication in Medical Interviews. A 2 (physician's true gender: female vs. male) × 2 (physician's perceived gender: female vs. male) × 2 (rater's gender: female vs. male) mixed multivariate analysis of variance (MANOVA) yielded a main effect for physician's true gender. Female physicians were rated higher on empathic communication than male physicians irrespective of any gender labels. The present findings suggest that gender differences in the perception of physician's empathy are not merely a function of the gender label. These findings provide evidence for differences in male and female physicians' empathic communication that cannot be attributed to stereotype bias. Future efforts to evaluate communication skills training for general practitioners may consider gender differences.

  14. Two-Component Structure in the Entanglement Spectrum of Highly Excited States

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Chamon, Claudio; Hamma, Alioscia; Mucciolo, Eduardo R.

    2015-12-01

    We study the entanglement spectrum of highly excited eigenstates of two known models that exhibit a many-body localization transition, namely the one-dimensional random-field Heisenberg model and the quantum random energy model. Our results indicate that the entanglement spectrum shows a "two-component" structure: a universal part that is associated with random matrix theory, and a nonuniversal part that is model dependent. The nonuniversal part manifests the deviation of the highly excited eigenstate from a true random state even in the thermalized phase where the eigenstate thermalization hypothesis holds. The fraction of the spectrum containing the universal part decreases as one approaches the critical point and vanishes in the localized phase in the thermodynamic limit. We use the universal part fraction to construct an order parameter for measuring the degree of randomness of a generic highly excited state, which is also a promising candidate for studying the many-body localization transition. Two toy models based on Rokhsar-Kivelson type wave functions are constructed and their entanglement spectra are shown to exhibit the same structure.

  15. Leveraging Random Number Generation for Mastery of Learning in Teaching Quantitative Research Courses via an E-Learning Method

    ERIC Educational Resources Information Center

    Boonsathorn, Wasita; Charoen, Danuvasin; Dryver, Arthur L.

    2014-01-01

    E-Learning brings access to a powerful but often overlooked teaching tool: random number generation. Using random number generation, a practically infinite number of quantitative problem-solution sets can be created. In addition, within the e-learning context, in the spirit of the mastery of learning, it is possible to assign online quantitative…

  16. Problems with the random number generator RANF implemented on the CDC cyber 205

    NASA Astrophysics Data System (ADS)

    Kalle, Claus; Wansleben, Stephan

    1984-10-01

    We show that using RANF may lead to wrong results when lattice models are simulated by Monte Carlo methods. We present a shift-register sequence random number generator which generates two random numbers per cycle on a two-pipe CDC Cyber 205.
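
    The class of shift-register sequence generators referred to above can be illustrated with a generalized feedback shift register of the Kirkpatrick-Stoll type, x[n] = x[n-103] XOR x[n-250]. The Python sketch below shows the recurrence only; it is not the Cyber 205 implementation, and the seeding scheme is an arbitrary choice for the example.

      import random

      class GFSR250:
          # Generalized feedback shift register: x[n] = x[n-103] XOR x[n-250],
          # kept in a circular buffer of 250 32-bit words.
          P, Q = 250, 103

          def __init__(self, seed=12345):
              fill = random.Random(seed)
              self.state = [fill.getrandbits(32) for _ in range(self.P)]
              self.idx = 0

          def next_word(self):
              i = self.idx
              word = self.state[i] ^ self.state[(i + self.P - self.Q) % self.P]
              self.state[i] = word
              self.idx = (i + 1) % self.P
              return word

      gen = GFSR250()
      print([gen.next_word() % 100 for _ in range(5)])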

  17. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    DTIC Science & Technology

    2017-12-01

    values designating each stimulus as a target (true) or nontarget (false). Both stim_time and stim_label should have length equal to the number of... depend strongly on the true values of hit rate and false-alarm rate. Based on its better estimation of hit rate and false-alarm rate, the regression...

  18. Summation of the product of certain functions and generalized Fibonacci numbers

    NASA Astrophysics Data System (ADS)

    Chong, Chin-Yoon; Ang, Siew-Ling; Ho, C. K.

    2014-12-01

    In this paper, we derived the summations ∑_{i=0}^{n} f(i)U_i and ∑_{i=0}^{∞} f(i)U_i for certain functions f(i), where {U_i} is the generalized Fibonacci sequence defined by U_{n+2} = pU_{n+1} + qU_n for all p, q ∈ Z+ and for all non-negative integers n, with the seed values U_0 = 0 and U_1 = 1.
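
    A direct numerical check of such sums is straightforward. The sketch below evaluates the finite sum for any f, p, and q; the choices f(i) = 1 and p = q = 1 are used only to confirm the familiar identity that the first Fibonacci numbers sum to F_{n+2} - 1.

      def weighted_fib_sum(f, n, p=1, q=1):
          # Sum f(i) * U_i for i = 0..n, where U_0 = 0, U_1 = 1 and
          # U_{k+2} = p * U_{k+1} + q * U_k.
          u_prev, u_curr = 0, 1
          total = f(0) * u_prev
          for i in range(1, n + 1):
              total += f(i) * u_curr
              u_prev, u_curr = u_curr, p * u_curr + q * u_prev
          return total

      print(weighted_fib_sum(lambda i: 1, 10))   # 143 = F_12 - 1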

  19. Two-component Structure in the Entanglement Spectrum of Highly Excited States

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Chamon, Claudio; Hamma, Alioscia; Mucciolo, Eduardo

    We study the entanglement spectrum of highly excited eigenstates of two known models which exhibit a many-body localization transition, namely the one-dimensional random-field Heisenberg model and the quantum random energy model. Our results indicate that the entanglement spectrum shows a ``two-component'' structure: a universal part that is associated to Random Matrix Theory, and a non-universal part that is model dependent. The non-universal part manifests the deviation of the highly excited eigenstate from a true random state even in the thermalized phase where the Eigenstate Thermalization Hypothesis holds. The fraction of the spectrum containing the universal part decreases continuously as one approaches the critical point and vanishes in the localized phase in the thermodynamic limit. We use the universal part fraction to construct a new order parameter for the many-body delocalized-to-localized transition. Two toy models based on Rokhsar-Kivelson type wavefunctions are constructed and their entanglement spectra are shown to exhibit the same structure.

  20. True random bit generators based on current time series of contact glow discharge electrolysis

    NASA Astrophysics Data System (ADS)

    Rojas, Andrea Espinel; Allagui, Anis; Elwakil, Ahmed S.; Alawadhi, Hussain

    2018-05-01

    Random bit generators (RBGs) in today's digital information and communication systems employ high-rate physical entropy sources such as electronic, photonic, or thermal time series signals. However, the proper functioning of such physical systems is bound by specific constraints that make them in some cases weak and susceptible to external attacks. In this study, we show that the electrical current time series of contact glow discharge electrolysis, which is a dc voltage-powered micro-plasma in liquids, can be used for generating random bit sequences over a wide range of high dc voltages. The current signal is quantized into a binary stream by first using a simple moving average function which makes the distribution centered around zero, and then applying logical operations which enable the binarized data to pass all tests in the industry-standard randomness test suite of the National Institute of Standards and Technology. Furthermore, the robustness of this RBG against power supply attacks has been examined and verified.
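
    The quantization step described above (centering the current trace with a moving average, then applying logical operations) can be mimicked in a few lines. The sketch below uses a sign comparator and an XOR of adjacent bits as the logical operation; the window length, the choice of XOR, and the synthetic current trace are all assumptions made for the illustration.

      import numpy as np

      def current_to_bits(signal, window=32):
          # Subtract a moving average so samples are centered around zero,
          # take the sign bit, then XOR adjacent bits as a simple whitening step.
          signal = np.asarray(signal, dtype=float)
          kernel = np.ones(window) / window
          centered = signal - np.convolve(signal, kernel, mode="same")
          raw_bits = (centered > 0).astype(np.uint8)
          return raw_bits[:-1] ^ raw_bits[1:]

      rng = np.random.default_rng(0)
      trace = 5.0 + 0.01 * np.cumsum(rng.normal(size=2000)) + rng.normal(size=2000)
      bits = current_to_bits(trace)
      print(bits[:32], bits.mean())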

  1. Bird counts in stands of big sagebrush and greasewood

    Treesearch

    Bruce L. Welch

    2005-01-01

    Total numbers of birds and numbers of bird species were significantly (p=0.05 percent) higher in stands of big sagebrush than in stands of greasewood. This was especially true for Brewer’s sparrow, lark sparrow, and mourning dove. The big sagebrush ecosystem appears to support greater number of birds and more species of birds than does the greasewood ecosystem.

  2. Source-Independent Quantum Random Number Generation

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bit/s.

  3. Study of Randomness in AES Ciphertexts Produced by Randomly Generated S-Boxes and S-Boxes with Various Modulus and Additive Constant Polynomials

    NASA Astrophysics Data System (ADS)

    Das, Suman; Sadique Uz Zaman, J. K. M.; Ghosh, Ranjan

    2016-06-01

    In Advanced Encryption Standard (AES), the standard S-Box is conventionally generated by using a particular irreducible polynomial {11B} in GF(2^8) as the modulus and a particular additive constant polynomial {63} in GF(2), though it can be generated by many other polynomials. In this paper, it has been shown that it is possible to generate secured AES S-Boxes by using other selected modulus and additive polynomials, and that they can also be generated randomly, using a PRNG like BBS. A comparative study has been made on the randomness of the corresponding AES ciphertexts generated, using these S-Boxes, by the NIST Test Suite coded for this paper. It has been found that besides the standard one, other moduli and additive constants are also able to generate equally or more random ciphertexts; the same is true for random S-Boxes also. As these new types of S-Boxes are user-defined, hence unknown, they are able to prevent linear and differential cryptanalysis. Moreover, they act as additional key-inputs to AES, thus increasing the key-space.
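
    The construction being varied here, multiplicative inversion in GF(2^8) followed by the AES affine map, can be written compactly with the modulus and additive constant left as parameters. The following Python sketch is an illustration of that construction (the defaults reproduce the standard S-Box entries S(0x00) = 0x63 and S(0x01) = 0x7C); it is not the authors' code, and the brute-force inverse is chosen for clarity rather than speed.

      def gf_mul(a, b, modulus=0x11B):
          # Multiply two bytes in GF(2^8), reducing by the chosen irreducible polynomial.
          r = 0
          while b:
              if b & 1:
                  r ^= a
              b >>= 1
              a <<= 1
              if a & 0x100:
                  a ^= modulus
          return r

      def gf_inv(x, modulus=0x11B):
          # Brute-force multiplicative inverse; by convention 0 maps to 0.
          if x == 0:
              return 0
          return next(y for y in range(1, 256) if gf_mul(x, y, modulus) == 1)

      def make_sbox(modulus=0x11B, constant=0x63):
          # Byte inverse followed by the AES affine transformation,
          # with the modulus and additive constant as free parameters.
          sbox = []
          for x in range(256):
              b = gf_inv(x, modulus)
              y = 0
              for i in range(8):
                  bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8)) ^
                         (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (constant >> i)) & 1
                  y |= bit << i
              sbox.append(y)
          return sbox

      std = make_sbox()
      print(hex(std[0x00]), hex(std[0x01]))   # 0x63 0x7c with the standard parameters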

  4. On the limiting characteristics of quantum random number generators at various clusterings of photocounts

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2017-03-01

    Various methods for the clustering of photocounts constituting a sequence of random numbers are considered. It is shown that the clustering of photocounts resulting in the Fermi-Dirac distribution makes it possible to achieve the theoretical limit of the random number generation rate.

  5. Anonymous authenticated communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Cheryl L; Schroeppel, Richard C; Snyder, Lillian A

    2007-06-19

    A method of performing electronic communications between members of a group wherein the communications are authenticated as being from a member of the group and have not been altered, comprising: generating a plurality of random numbers; distributing in a digital medium the plurality of random numbers to the members of the group; publishing a hash value of contents of the digital medium; distributing to the members of the group public-key-encrypted messages each containing a same token comprising a random number; and encrypting a message with a key generated from the token and the plurality of random numbers.
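
    The final step of the method, deriving a message key from the shared token and the distributed pool of random numbers, can be pictured with an ordinary hash. The sketch below uses SHA-256 purely as an illustrative key-derivation choice; it is not the patented construction, and the pool sizes are arbitrary.

      import hashlib
      import secrets

      def derive_group_key(token, random_pool):
          # Hash the token together with every distributed random number to get
          # a symmetric key shared by all group members holding the same pool.
          h = hashlib.sha256(token)
          for r in random_pool:
              h.update(r)
          return h.digest()

      pool = [secrets.token_bytes(32) for _ in range(4)]   # distributed in the digital medium
      token = secrets.token_bytes(16)                      # sent to members via public-key encryption
      print(derive_group_key(token, pool).hex())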

  6. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
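
    The Fourier synthesis generator can be sketched in a few lines: place a flat amplitude spectrum with independent uniform random phases in the frequency domain and transform back to the time domain. The fragment below is an illustration of that idea with arbitrary parameters, not NASA's implementation.

      import numpy as np

      def fourier_white_noise(n, sigma=1.0, seed=None):
          # Flat amplitude spectrum, uniform random phases, inverse real FFT;
          # the result is approximately Gaussian white noise of length n.
          rng = np.random.default_rng(seed)
          phases = rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1)
          spectrum = np.exp(1j * phases)
          spectrum[0] = 0.0                                # remove the DC component
          if n % 2 == 0:
              spectrum[-1] = np.sign(np.cos(phases[-1]))   # Nyquist bin must be real
          x = np.fft.irfft(spectrum, n)
          return sigma * x / np.std(x)                     # rescale to the requested variance

      noise = fourier_white_noise(601, seed=1)
      print(noise.mean(), noise.std())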

  7. 40 CFR 204.57-5 - Reporting of test results.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Reporting of test results. 204.57-5... of test results. (a)(1) The manufacturer shall submit a copy of the test report for all testing... compressor. (iii) Compressor serial number. (iv) Test results by serial numbers (3) The first test report for...

  8. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…

  9. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal... algorithm’s success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV’s position to predict its terminal location.

  10. Diagnostic value of secreted frizzled-related protein 2 gene promoter hypermethylation in stool for colorectal cancer: A meta-analysis.

    PubMed

    Zhou, Zhiran; Zhang, Huitian; Lei, Yunxia

    2016-10-01

    To evaluate the diagnostic value of secreted frizzled-related protein 2 (SFRP2) gene promoter hypermethylation in stool for colorectal cancer (CRC). Openly published diagnostic studies of SFRP2 gene promoter hypermethylation in stool for CRC detection were electronically searched in the databases of PubMed, EMBASE, Cochrane Library, Web of Science, and China National Knowledge Infrastructure. The data on true positives, false positives, false negatives, and true negatives identified by stool SFRP2 gene hypermethylation were extracted and pooled for diagnostic sensitivity, specificity, and the summary receiver operating characteristic (SROC) curve. According to the inclusion and exclusion criteria, we finally included nine publications with 792 cases in the meta-analysis. The diagnostic sensitivity was aggregated through a random-effects model. The pooled sensitivity was 0.82 with the corresponding 95% confidence interval (95% CI) of 0.79-0.85; the pooled specificity and its corresponding 95% CI were 0.47 and 0.40-0.53 by the random-effects model; we pooled the SROC curve by sensitivity versus specificity according to data published in the nine studies. The area under the SROC curve was 0.70 (95% CI: 0.65-0.73). SFRP2 gene promoter hypermethylation in stool can be a potential biomarker for CRC diagnosis with relatively high sensitivity.

  11. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357

  12. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.
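
    The least-significant-bits concatenation step at the heart of the method is simple to express. The sketch below packs the low bits of successive integer ADC readings into bytes; the number of bits taken per reading and the simulated readings are assumptions for the example, not the paper's calibrated settings, and the statistical fine tuning is omitted.

      import random

      def lsb_entropy_bytes(readings, bits_per_reading=2, n_bytes=16):
          # Concatenate the least-significant bits of each sensor reading,
          # then pack the resulting bitstream into bytes of raw entropy.
          bitstream = []
          for r in readings:
              for i in range(bits_per_reading):
                  bitstream.append((r >> i) & 1)
          out = bytearray()
          for i in range(0, min(len(bitstream), 8 * n_bytes), 8):
              byte = 0
              for b in bitstream[i:i + 8]:
                  byte = (byte << 1) | b
              out.append(byte)
          return bytes(out)

      fake_adc = [random.randint(0, 4095) for _ in range(128)]   # stand-in for sensor samples
      print(lsb_entropy_bytes(fake_adc).hex())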

  13. Simulating Optical Correlation on a Digital Image Processing

    NASA Astrophysics Data System (ADS)

    Denning, Bryan

    1998-04-01

    Optical correlation is a useful tool for recognizing objects in video scenes. In this paper, we explore the characteristics of a composite filter known as the equal correlation peak synthetic discriminant function (ECP SDF). Although the ECP SDF is commonly used in coherent optical correlation systems, the authors simulated the operation of a correlator using an EPIX frame grabber/image processor board to complete this work. Issues pertaining to simulating correlation using an EPIX board will be discussed. Additionally, the ability of the ECP SDF to detect objects that have been subjected to in-plane rotation and small scale changes will be addressed by correlating filters against true-class objects placed randomly within a scene. To test the robustness of the filters, the results of correlating the filter against false-class objects that closely resemble the true class will also be presented.

  14. Not all numbers are equal: preferences and biases among children and adults when generating random sequences.

    PubMed

    Towse, John N; Loetscher, Tobias; Brugger, Peter

    2014-01-01

    We investigate the number preferences of children and adults when generating random digit sequences. Previous research has shown convincingly that adults prefer smaller numbers when randomly choosing between responses 1-6. We analyze randomization choices made by both children and adults, considering a range of experimental studies and task configurations. Children - most of whom are between 8 and 11 years - show a preference for relatively large numbers when choosing numbers 1-10. Adults show a preference for small numbers with the same response set. We report a modest association between children's age and numerical bias. However, children also exhibit a small number bias with a smaller response set available, and they show a preference specifically for the numbers 1-3 across many datasets. We argue that number space demonstrates both continuities (numbers 1-3 have a distinct status) and change (a developmentally emerging bias toward the left side of representational space or lower numbers).

  15. Breast Cancer Screening by Physical Examination: Randomized Trial in the Philippines

    DTIC Science & Technology

    2005-10-01

    TITLE: Breast Cancer Screening by Physical Examination: Randomized Trial in the Philippines. GRANT NUMBER: DAMD17-94-J-4327.

  16. Biotic games and cloud experimentation as novel media for biophysics education

    NASA Astrophysics Data System (ADS)

    Riedel-Kruse, Ingmar; Blikstein, Paulo

    2014-03-01

    First-hand, open-ended experimentation is key for effective formal and informal biophysics education. We developed, tested and assessed multiple new platforms that enable students and children to directly interact with and learn about microscopic biophysical processes: (1) Biotic games that enable local and online play using galvano- and photo-tactic stimulation of micro-swimmers, illustrating concepts such as biased random walks, Low Reynolds number hydrodynamics, and Brownian motion; (2) an undergraduate course where students learn optics, electronics, micro-fluidics, real time image analysis, and instrument control by building biotic games; and (3) a graduate class on the biophysics of multi-cellular systems that contains a cloud experimentation lab enabling students to execute open-ended chemotaxis experiments on slimemolds online, analyze their data, and build biophysical models. Our work aims to generate the equivalent excitement and educational impact for biophysics as robotics and video games have had for mechatronics and computer science, respectively. We also discuss how scaled-up cloud experimentation systems can support MOOCs with true lab components and life-science research in general.

  17. Current control of time-averaged magnetization in superparamagnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Bapna, Mukund; Majetich, Sara A.

    2017-12-01

    This work investigates spin transfer torque control of time-averaged magnetization in a small 20 nm × 60 nm nanomagnet with a low thermal stability factor, Δ ≈ 11. Here, the nanomagnet is a part of a magnetic tunnel junction and fluctuates between parallel and anti-parallel magnetization states with respect to the magnetization of the reference layer, generating a telegraph signal in the current versus time measurements. The response of the nanomagnet to an external field is first analyzed to characterize the magnetic properties. We then show that the time-averaged magnetization in the telegraph signal can be fully controlled between +1 and -1 by voltage over a small range of 0.25 V. NIST Statistical Test Suite analysis is performed for testing true randomness of the telegraph signal that the device generates when operated at near critical current values for spin transfer torque. Utilizing the probabilistic nature of the telegraph signal generated at two different voltages, a prototype demonstration is shown for multiplication of two numbers using an artificial AND logic gate.

  18. Coded aperture ptychography: uniqueness and reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Pengwen; Fannjiang, Albert

    2018-02-01

    Uniqueness of solution is proved for any ptychographic scheme with a random mask under a minimum overlap condition and local geometric convergence analysis is given for the alternating projection (AP) and Douglas-Rachford (DR) algorithms. DR is shown to possess a unique fixed point in the object domain and for AP a simple criterion for distinguishing the true solution among possibly many fixed points is given. A minimalist scheme, where the adjacent masks overlap 50% of the area and each pixel of the object is illuminated by exactly four illuminations, is conveniently parametrized by the number q of shifted masks in each direction. The lower bound 1 - C/q^2 is proved for the geometric convergence rate of the minimalist scheme, predicting a poor performance with large q which is confirmed by numerical experiments. The twin-image ambiguity is shown to arise for certain Fresnel masks and degrade the performance of reconstruction. Extensive numerical experiments are performed to explore the general features of a well-performing mask, the optimal value of q and the robustness with respect to measurement noise.

  19. Measurement of true ileal phosphorus digestibility in meat and bone meal for broiler chickens.

    PubMed

    Mutucumarana, R K; Ravindran, V; Ravindran, G; Cowieson, A J

    2015-07-01

    An experiment was conducted to estimate true ileal phosphorus (P) digestibility of 3 meat and bone meal samples (MBM-1, MBM-2, and MBM-3) for broiler chickens. Four semipurified diets were formulated from each sample to contain graded concentrations of P. The experiment was conducted as a completely randomized design with 6 replicates (6 birds per replicate) per dietary treatment. A total of 432 Ross 308 broilers were assigned at 21 d of age to the 12 test diets. The apparent ileal digestibility coefficient of P was determined by the indicator method, and the linear regression method was used to determine the true P digestibility coefficient. The apparent ileal digestibility coefficient of P in birds fed diets containing MBM-1 and MBM-2 was unaffected by increasing dietary concentrations of P (P > 0.05). The apparent ileal digestibility coefficient of P in birds fed the MBM-3 diets decreased with increasing P concentrations (linear, P < 0.001; quadratic, P < 0.01). In birds fed the MBM-1 and MBM-2 diets, ileal endogenous P losses were estimated to be 0.049 and 0.142 g/kg DM intake (DMI), respectively. In birds fed the MBM-3 diets, endogenous P loss was estimated to be negative (-0.370 g/kg DMI). True ileal P digestibility of MBM-1, MBM-2, and MBM-3 was determined to be 0.693, 0.608, and 0.420, respectively. True ileal P digestibility coefficients determined for MBM-1 and MBM-2 were similar (P < 0.05), but were higher (P < 0.05) than that for MBM-3. Total P and true digestible P contents of MBM-1, MBM-2, and MBM-3 were determined to be 37.5 and 26.0; 60.2 and 36.6; and 59.8 and 25.1 g/kg, respectively, on an as-fed basis. © 2015 Poultry Science Association Inc.
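
    The linear regression method mentioned above amounts to regressing digestible P intake on dietary P intake across the graded inclusion levels: the slope estimates the true digestibility coefficient and the negative of the intercept estimates the endogenous loss. The sketch below shows this with invented numbers, not the study's data.

      import numpy as np

      def regression_digestibility(p_intake, p_digested):
          # Slope = true digestibility coefficient; -intercept = endogenous loss.
          slope, intercept = np.polyfit(p_intake, p_digested, 1)
          return slope, -intercept

      intake = np.array([2.0, 3.0, 4.0, 5.0])        # hypothetical P intakes, g/kg DMI
      digested = 0.65 * intake - 0.05                # synthetic digestible P response
      coef, endogenous = regression_digestibility(intake, digested)
      print(round(coef, 3), round(endogenous, 3))    # ~0.65 and ~0.05 g/kg DMI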

  20. Generation of pseudo-random numbers

    NASA Technical Reports Server (NTRS)

    Howell, L. W.; Rheinfurth, M. H.

    1982-01-01

    Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.

  1. Direct generation of all-optical random numbers from optical pulse amplitude chaos.

    PubMed

    Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong

    2012-02-13

    We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from pulse amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can experience a quasi-periodic route to chaos. Such chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. In this method, we do not require a sampling procedure or externally triggered clocks but directly quantize the chaotic pulse stream into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse amplitude chaos has no periodicity and possesses a highly symmetric distribution of amplitude. Thus, in theory, the obtained random number sequence without post-processing has high-quality randomness verified by industry-standard statistical tests.

  2. Vision Screening of School Children by Teachers as a Community Based Strategy to Address the Challenges of Childhood Blindness.

    PubMed

    Kaur, Gurvinder; Koshy, Jacob; Thomas, Satish; Kapoor, Harpreet; Zachariah, Jiju George; Bedi, Sahiba

    2016-04-01

    Early detection and treatment of vision problems in children is imperative to meet the challenges of childhood blindness. Considering the problems of inequitable distribution of trained manpower and limited access of quality eye care services to majority of our population, innovative community based strategies like 'Teachers training in vision screening' need to be developed for effective utilization of the available human resources. To evaluate the effectiveness of introducing teachers as the first level vision screeners. Teacher training programs were conducted for school teachers to educate them about childhood ocular disorders and the importance of their early detection. Teachers from government and semi-government schools located in Ludhiana were given training in vision screening. These teachers then conducted vision screening of children in their schools. Subsequently an ophthalmology team visited these schools for re-evaluation of children identified with low vision. Refraction was performed for all children identified with refractive errors and spectacles were prescribed. Children requiring further evaluation were referred to the base hospital. The project was done in two phases. True positives, false positives, true negatives and false negatives were calculated for evaluation. In phase 1, teachers from 166 schools underwent training in vision screening. The teachers screened 30,205 children and reported eye problems in 4523 (14.97%) children. Subsequently, the ophthalmology team examined 4150 children and confirmed eye problems in 2137 children. Thus, the teachers were able to correctly identify eye problems (true positives) in 47.25% children. Also, only 13.69% children had to be examined by the ophthalmology team, thus reducing their work load. Similarly, in phase 2, 46.22% children were correctly identified to have eye problems (true positives) by the teachers. By random sampling, 95.65% children were correctly identified as normal (true negatives) by the teachers. Considering the high true negative rates and reasonably good true positive rates and the wider coverage provided by the program, vision screening in schools by teachers is an effective method of identifying children with low vision. This strategy is also valuable in reducing the workload of the eye care staff.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrester, Peter J., E-mail: p.forrester@ms.unimelb.edu.au; Thompson, Colin J.

    The Golden-Thompson inequality, Tr(e^(A+B)) ≤ Tr(e^A e^B) for A, B Hermitian matrices, appeared in independent works by Golden and Thompson published in 1965. Both of these were motivated by considerations in statistical mechanics. In recent years the Golden-Thompson inequality has found applications to random matrix theory. In this article, we detail some historical aspects relating to Thompson's work, giving in particular a hitherto unpublished proof due to Dyson, and correspondence with Pólya. We show too how the 2 × 2 case relates to hyperbolic geometry, and how the original inequality holds true with the trace operation replaced by any unitarily invariant norm. In relation to the random matrix applications, we review its use in the derivation of concentration type lemmas for sums of random matrices due to Ahlswede-Winter, and Oliveira, generalizing various classical results.

  4. Pruning a minimum spanning tree

    NASA Astrophysics Data System (ADS)

    Sandoval, Leonidas

    2012-04-01

    This work employs various techniques in order to filter random noise from the information provided by minimum spanning trees obtained from the correlation matrices of international stock market indices prior to and during times of crisis. The first technique establishes a threshold above which connections are considered affected by noise, based on the study of random networks with the same probability density distribution of the original data. The second technique is to judge the strength of a connection by its survival rate, which is the amount of time a connection between two stock market indices endures. The idea is that true connections will survive for longer periods of time, and that random connections will not. That information is then combined with the information obtained from the first technique in order to create a smaller network, in which most of the connections are either strong or enduring in time.
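
    A minimal version of the first step, turning a correlation matrix of indices into a minimum spanning tree, is shown below using the commonly used distance mapping d_ij = sqrt(2(1 - c_ij)). The four-index correlation matrix is invented, and the sketch does not include the threshold or survival-rate filtering described above.

      import numpy as np
      import networkx as nx

      def correlation_mst(corr, labels):
          # Map correlations to distances and keep the minimum spanning tree.
          g = nx.Graph()
          for i in range(len(labels)):
              for j in range(i + 1, len(labels)):
                  g.add_edge(labels[i], labels[j],
                             weight=np.sqrt(2.0 * (1.0 - corr[i, j])))
          return nx.minimum_spanning_tree(g)

      corr = np.array([[1.0, 0.8, 0.3, 0.2],
                       [0.8, 1.0, 0.4, 0.1],
                       [0.3, 0.4, 1.0, 0.6],
                       [0.2, 0.1, 0.6, 1.0]])
      mst = correlation_mst(corr, ["S&P500", "FTSE", "Nikkei", "Bovespa"])
      print(sorted(mst.edges(data="weight")))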

  5. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
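
    For context, a minimal sketch of one of the three algorithms named above, the leaf-removal heuristic for minimum vertex cover (the random-graph construction and data structures are illustrative assumptions): repeatedly pick a degree-one vertex, put its neighbour into the cover, and delete both; when the graph has no extensive core, the cover found this way is optimal with high probability.

        import random
        from collections import defaultdict

        def leaf_removal_cover(edges):
            """Greedy leaf removal; returns a vertex cover and the leftover 'core' edges."""
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            cover = set()
            leaves = [v for v in adj if len(adj[v]) == 1]
            while leaves:
                leaf = leaves.pop()
                if len(adj[leaf]) != 1:          # degree may have changed since it was queued
                    continue
                (nbr,) = adj[leaf]
                cover.add(nbr)                   # the neighbour of a leaf is always safe to take
                for w in list(adj[nbr]):         # delete the covered vertex, update degrees
                    adj[w].discard(nbr)
                    if len(adj[w]) == 1:
                        leaves.append(w)
                adj[nbr].clear()
                adj[leaf].clear()
            core = [(u, v) for u in adj for v in adj[u] if u < v]
            return cover, core

        random.seed(0)
        n, c = 200, 2.0                          # Erdos-Renyi-like graph, average degree ~2
        edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < c / n]
        cover, core = leaf_removal_cover(edges)
        print(len(cover), "cover vertices,", len(core), "uncovered core edges")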

  6. Recommendations and illustrations for the evaluation of photonic random number generators

    NASA Astrophysics Data System (ADS)

    Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi

    2017-09-01

    The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h (𝜖 ,τ ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
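
    As a deliberately simplified stand-in for the entropy-rate estimates discussed above (this is a plain plug-in block-entropy difference, not the Cohen-Procaccia procedure or the NIST draft estimators; the bit streams are simulated), an entropy rate can be estimated as H(d+1) - H(d) over words of a digitized output: an ideal source approaches 1 bit per sample, while bias or correlation pulls the estimate below it.

        import numpy as np
        from collections import Counter

        def block_entropy(bits, d):
            """Shannon entropy (bits) of overlapping length-d words."""
            words = Counter(tuple(bits[i:i + d]) for i in range(len(bits) - d + 1))
            p = np.array(list(words.values()), dtype=float)
            p /= p.sum()
            return float(-(p * np.log2(p)).sum())

        def entropy_rate(bits, d=8):
            return block_entropy(bits, d + 1) - block_entropy(bits, d)

        rng = np.random.default_rng(2)
        fair = rng.integers(0, 2, size=200_000)                  # independent, unbiased bits
        biased = (rng.random(size=200_000) < 0.7).astype(int)    # 70/30 biased bits
        print("fair  :", round(entropy_rate(fair), 3), "bits/sample")
        print("biased:", round(entropy_rate(biased), 3), "bits/sample")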

  7. Effect of limestone particle size and calcium to non-phytate phosphorus ratio on true ileal calcium digestibility of limestone for broiler chickens.

    PubMed

    Anwar, M N; Ravindran, V; Morel, P C H; Ravindran, G; Cowieson, A J

    2016-10-01

    The purpose of this study was to determine the effect of limestone particle size and calcium (Ca) to non-phytate phosphorus (P) ratio on the true ileal Ca digestibility of limestone for broiler chickens. A limestone sample was passed through a set of sieves and separated into fine (<0.5 mm) and coarse (1-2 mm) particles. The analysed Ca concentration of both particle sizes was similar (420 g/kg). Six experimental diets were developed using each particle size with Ca:non-phytate P ratios of 1.5:1, 2.0:1 and 2.5:1, with ratios being adjusted by manipulating the dietary Ca concentrations. A Ca-free diet was also developed to determine the basal ileal endogenous Ca losses. Titanium dioxide (3 g/kg) was incorporated in all diets as an indigestible marker. Each experimental diet was randomly allotted to 6 replicate cages (8 birds per cage) and fed from d 21 to 24 post hatch. Apparent ileal digestibility of Ca was calculated using the indicator method and corrected for basal endogenous losses to determine the true Ca digestibility. The basal ileal endogenous Ca losses were determined to be 127 mg/kg of dry matter intake. Increasing Ca:non-phytate P ratios reduced the true Ca digestibility of limestone. The true Ca digestibility coefficients of limestone with Ca:non-phytate P ratios of 1.5, 2.0 and 2.5 were 0.65, 0.57 and 0.49, respectively. Particle size of limestone had a marked effect on the Ca digestibility, with the digestibility being higher in coarse particles (0.71 vs. 0.43).
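
    A worked illustration of the indicator-method arithmetic used above (all numbers below are invented for demonstration; they are not the study's measurements): apparent digestibility comes from the marker-to-calcium ratios in diet and digesta, and the true value adds back the basal endogenous loss.

        def apparent_digestibility(ca_diet, ti_diet, ca_digesta, ti_digesta):
            # Indicator method: fraction of dietary Ca disappearing by the terminal ileum
            return 1.0 - (ti_diet / ti_digesta) * (ca_digesta / ca_diet)

        def true_digestibility(apparent, endogenous_loss, ca_intake):
            # Correct for basal endogenous losses; both in the same units (g/kg DM intake)
            return apparent + endogenous_loss / ca_intake

        aid = apparent_digestibility(ca_diet=8.3, ti_diet=3.0, ca_digesta=6.0, ti_digesta=4.5)
        tid = true_digestibility(aid, endogenous_loss=0.127, ca_intake=8.3)
        print(f"apparent = {aid:.2f}, true = {tid:.2f}")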

  8. Evaluation of a risk-based environmental hot spot delineation algorithm.

    PubMed

    Sinha, Parikhit; Lambert, Michael B; Schew, William A

    2007-10-22

    Following remedial investigations of hazardous waste sites, remedial strategies may be developed that target the removal of "hot spots," localized areas of elevated contamination. For a given exposure area, a hot spot may be defined as a sub-area that causes risks for the whole exposure area to be unacceptable. The converse of this statement may also apply: when a hot spot is removed from within an exposure area, risks for the exposure area may drop below unacceptable thresholds. The latter is the motivation for a risk-based approach to hot spot delineation, which was evaluated using Monte Carlo simulation. Random samples taken from a virtual site ("true site") were used to create an interpolated site. The latter was gridded and concentrations from the center of each grid box were used to calculate 95% upper confidence limits on the mean site contaminant concentration and corresponding hazard quotients for a potential receptor. Grid cells with the highest concentrations were removed and hazard quotients were recalculated until the site hazard quotient dropped below the threshold of 1. The grid cells removed in this way define the spatial extent of the hot spot. For each of the 100,000 Monte Carlo iterations, the delineated hot spot was compared to the hot spot in the "true site." On average, the algorithm was able to delineate hot spots that were collocated with and equal to or greater in size than the "true hot spot." When delineated hot spots were mapped onto the "true site," setting contaminant concentrations in the mapped area to zero, the hazard quotients for these "remediated true sites" were on average within 5% of the acceptable threshold of 1.
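
    A minimal sketch of the greedy delineation loop described above (an assumed simplification: the hazard quotient is taken as proportional to the plain mean concentration of the gridded site, whereas the study uses a 95% upper confidence limit on the mean; the grid and toxicity factor are invented):

        import numpy as np

        def delineate_hot_spot(grid, hq_per_unit_conc, hq_threshold=1.0):
            """Zero out the highest-concentration cells until the site hazard quotient drops
            below the threshold; return a boolean mask marking the delineated hot spot."""
            conc = grid.astype(float).copy()
            hot_spot = np.zeros_like(conc, dtype=bool)
            while conc.mean() * hq_per_unit_conc >= hq_threshold:
                idx = np.unravel_index(np.argmax(conc), conc.shape)
                hot_spot[idx] = True
                conc[idx] = 0.0                  # "remediate" the cell and recompute
            return hot_spot

        rng = np.random.default_rng(3)
        site = rng.lognormal(mean=0.0, sigma=1.0, size=(10, 10))
        site[2:4, 6:8] *= 20.0                   # plant a localized area of elevated contamination
        mask = delineate_hot_spot(site, hq_per_unit_conc=0.4)
        print("cells assigned to the hot spot:", int(mask.sum()))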

  9. Sham acupuncture is as efficacious as true acupuncture for the treatment of IBS: A randomized placebo controlled trial.

    PubMed

    Lowe, C; Aiken, A; Day, A G; Depew, W; Vanner, S J

    2017-07-01

    Irritable bowel syndrome (IBS) patients increasingly seek out acupuncture therapy to alleviate symptoms, but it is unclear whether the benefit is due to a treatment-specific effect or a placebo response. This study examined whether true acupuncture is superior to sham acupuncture in relieving IBS symptoms and whether benefits were linked to purported acupuncture mechanisms. A double-blind, sham-controlled acupuncture study was conducted with Rome I IBS patients receiving twice-weekly true acupuncture for 4 weeks (n=43) or sham acupuncture (n=36). Patients returned at 12 weeks for a follow-up review. The primary endpoint was success, defined as whether patients met or exceeded their established goal for percentage symptom improvement. Questionnaires were completed for symptom severity scores, SF-36 and IBS-36 QOL tools, McGill pain score, and Pittsburgh Sleep Quality Index. A subset of patients underwent barostat measurements of rectal sensation at baseline and 4 weeks. A total of 53% in the true acupuncture group met their criteria for a successful treatment intervention, but this did not differ significantly from the sham group (42%). IBS symptom scores similarly improved in both groups. Scores also improved in the IBS-36, SF-36, and the Pittsburgh Sleep Quality Index, but did not differ between groups. Rectal sensory thresholds were increased in both groups following treatment and pain scores decreased; however, these changes were similar between groups. The lack of differences in symptom outcomes between sham and true acupuncture suggests that acupuncture does not have a specific treatment effect in IBS. © 2017 John Wiley & Sons Ltd.

  10. Set-theoretic estimation of hybrid system configurations.

    PubMed

    Benazera, Emmanuel; Travé-Massuyès, Louise

    2009-10-01

    Hybrid systems serve as a powerful modeling paradigm for representing complex continuous controlled systems that exhibit discrete switches in their dynamics. The system and the models of the system are nondeterministic due to operation in an uncertain environment. Bayesian belief update approaches to stochastic hybrid system state estimation face a blow-up in the number of state estimates. Therefore, most popular techniques try to maintain an approximation of the true belief state by either sampling or maintaining a limited number of trajectories. These limitations can be avoided by using bounded intervals to represent the state uncertainty. This alternative leads to splitting the continuous state space into a finite set of possibly overlapping geometrical regions that together with the system modes form configurations of the hybrid system. As a consequence, the true system state can be captured by a finite number of hybrid configurations. A set of dedicated algorithms that can efficiently compute these configurations is detailed. Results are presented on two systems from the hybrid systems literature.

  11. High resolution 4-D spectroscopy with sparse concentric shell sampling and FFT-CLEAN.

    PubMed

    Coggins, Brian E; Zhou, Pei

    2008-12-01

    Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.

  12. High Resolution 4-D Spectroscopy with Sparse Concentric Shell Sampling and FFT-CLEAN

    PubMed Central

    Coggins, Brian E.; Zhou, Pei

    2009-01-01

    SUMMARY Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise. PMID:18853260

  13. Spatial Downscaling of Alien Species Presences using Machine Learning

    NASA Astrophysics Data System (ADS)

    Daliakopoulos, Ioannis N.; Katsanevakis, Stelios; Moustakas, Aristides

    2017-07-01

    Large scale, high-resolution data on alien species distributions are essential for spatially explicit assessments of their environmental and socio-economic impacts, and management interventions for mitigation. However, these data are often unavailable. This paper presents a method that relies on Random Forest (RF) models to distribute alien species presence counts at a finer resolution grid, thus achieving spatial downscaling. A sufficiently large number of RF models are trained using random subsets of the dataset as predictors, in a bootstrapping approach to account for the uncertainty introduced by the subset selection. The method is tested with an approximately 8×8 km2 grid containing floral alien species presence and several indices of climatic, habitat, land use covariates for the Mediterranean island of Crete, Greece. Alien species presence is aggregated at 16×16 km2 and used as a predictor of presence at the original resolution, thus simulating spatial downscaling. Potential explanatory variables included habitat types, land cover richness, endemic species richness, soil type, temperature, precipitation, and freshwater availability. Uncertainty assessment of the spatial downscaling of alien species’ occurrences was also performed and true/false presences and absences were quantified. The approach is promising for downscaling alien species datasets of larger spatial scale but coarse resolution, where the underlying environmental information is available at a finer resolution than the alien species data. Furthermore, the RF architecture allows for tuning towards operationally optimal sensitivity and specificity, thus providing a decision support tool for designing a resource efficient alien species census.

  14. Performance of the SNPforID 52 SNP-plex assay in paternity testing.

    PubMed

    Børsting, Claus; Sanchez, Juan J; Hansen, Hanna E; Hansen, Anders J; Bruun, Hanne Q; Morling, Niels

    2008-09-01

    The performance of a multiplex assay with 52 autosomal single nucleotide polymorphisms (SNPs) developed for human identification was tested on 124 mother-child-father trios. The typical paternity indices (PIs) were 10^5-10^6 for the trios and 10^3-10^4 for the child-father duos. Using the SNP profiles from the randomly selected trios and 700 previously typed individuals, a total of 83,096 comparisons between mother, child and an unrelated man were performed. On average, 9-10 mismatches per comparison were detected. Four mismatches were genetic inconsistencies and 5-6 mismatches were opposite homozygosities. In only two of the 83,096 comparisons did an unrelated man match perfectly to a mother-child duo, and in both cases the PI of the true father was much higher than the PI of the unrelated man. The trios were also typed for 15 short tandem repeats (STRs) and seven variable number of tandem repeats (VNTRs). The typical PIs based on 15 STRs or seven VNTRs were 5-50 times higher than the typical PIs based on 52 SNPs. Six mutations in tandem repeats were detected among the randomly selected trios. In contrast, no mutations were found in the SNP loci. The results showed that the 52 SNP-plex assay is a very useful alternative to currently used methods in relationship testing. The usefulness of SNP markers with low mutation rates in paternity and immigration casework is discussed.

  15. Lessons Learned Through the Implementation of an eHealth Physical Activity Gaming Intervention with High School Youth.

    PubMed

    Pope, Lizzy; Garnett, Bernice; Dibble, Marguerite

    2018-04-01

    To encourage high school students to meet physical activity goals using a newly developed game, and to document the feasibility, benefits, and challenges of using an electronic gaming application to promote physical activity in high school students. Working with youth and game designers an electronic game, Camp Conquer, was developed to motivate high school students to meet physical activity goals. One-hundred-five high school students were recruited to participate in a 12-week pilot test of the game and randomly assigned to a Game Condition or Control Condition. Students in both conditions received a FitBit to track their activity, and participants in the Game Condition received access to Camp Conquer. Number of steps and active minutes each day were tracked for all participants. FitBit use, game logins, and qualitative feedback from researchers, school personnel, and participants were used to determine intervention engagement. The majority of study participants did not consistently wear their FitBit or engage with the gaming intervention. Numerous design challenges and barriers to successful implementation such as the randomized design, absence of a true school-based champion, ease of use, and game glitches were identified. Developing games is an exciting technique for motivating the completion of a variety of health behaviors. Although the present intervention was not successful in increasing physical activity in high school students, important lessons were learned regarding how to best structure a gaming intervention for the high school population.

  16. Measurement of true ileal calcium digestibility in meat and bone meal for broiler chickens using the direct method.

    PubMed

    Anwar, M N; Ravindran, V; Morel, P C H; Ravindran, G; Cowieson, A J

    2016-01-01

    The objective of the study that is presented herein was to determine the true ileal calcium (Ca) digestibility in meat and bone meal (MBM) for broiler chickens using the direct method. Four MBM samples (coded as MBM-1, MBM-2, MBM-3 and MBM-4) were obtained and analyzed for nutrient composition, particle size distribution and bone to soft tissue ratio. The Ca concentrations of MBM-1, MBM-2, MBM-3 and MBM-4 were determined to be 71, 118, 114 and 81 g/kg, respectively. The corresponding geometric mean particle diameters and bone to soft tissue ratios were 0.866, 0.622, 0.875 and 0.781 mm, and 1:1.49, 1:0.98, 1:0.92 and 1:1.35, respectively. Five experimental diets, including four diets with similar Ca concentration (8.3 g/kg) from each MBM and a Ca and phosphorus-free diet, were developed. Meat and bone meal served as the sole source of Ca in the MBM diets. Titanium dioxide (3 g/kg) was incorporated in all diets as an indigestible marker. Each experimental diet was randomly allotted to six replicate cages (eight birds per cage) and offered from d 28 to 31 post-hatch. Apparent ileal Ca digestibility was calculated by the indicator method and corrected for ileal endogenous Ca losses to determine the true ileal Ca digestibility. Ileal endogenous Ca losses were determined to be 88 mg/kg dry matter intake. True ileal Ca digestibility coefficients of MBM-1, MBM-2, MBM-3 and MBM-4 were determined to be 0.560, 0.446, 0.517 and 0.413, respectively. True Ca digestibility of MBM-1 was higher (P < 0.05) than MBM-2 and MBM-4 but similar (P > 0.05) to that of MBM-3. True Ca digestibility of MBM-2 was similar (P > 0.05) to MBM-3 and MBM-4, while that of MBM-3 was higher (P < 0.05) than MBM-4. These results demonstrated that the direct method can be used for the determination of true Ca digestibility in feed ingredients and that Ca in MBM is not highly available as often assumed. The variability in true Ca digestibility of MBM samples could not be attributed to Ca content, percentage bones or particle size. © 2015 Poultry Science Association Inc.

  17. Maintenance therapy with sucralfate in duodenal ulcer: genuine prevention or accelerated healing of ulcer recurrence?

    PubMed

    Bynum, T E; Koch, G G

    1991-08-08

    We sought to compare the efficacy of sucralfate to placebo for the prevention of duodenal ulcer recurrence and to determine that the efficacy of sucralfate was due to a true reduction in ulcer prevalence and not due to secondary effects such as analgesic activity or accelerated healing. This was a double-blind, randomized, placebo-controlled, parallel groups, multicenter clinical study with 254 patients. All patients had a past history of at least two duodenal ulcers with at least one ulcer diagnosed by endoscopic examination 3 months or less before the start of the study. Complete ulcer healing without erosions was required to enter the study. Sucralfate or placebo were dosed as a 1-g tablet twice a day for 4 months, or until ulcer recurrence. Endoscopic examinations once a month and when symptoms developed determined the presence or absence of duodenal ulcers. If a patient developed an ulcer between monthly scheduled visits, the patient was dosed with a 1-g sucralfate tablet twice a day until the next scheduled visit. Statistical analyses of the results determined the efficacy of sucralfate compared with placebo for preventing duodenal ulcer recurrence. Comparisons of therapeutic agents for preventing duodenal ulcers have usually been made by testing for statistical differences in the cumulative rates for all ulcers developed during a follow-up period, regardless of the time of detection. Statistical experts at the United States Food and Drug Administration (FDA) and on the FDA Advisory Panel expressed doubts about clinical study results based on this type of analysis. They suggested three possible mechanisms for reducing the number of observed ulcers: (a) analgesic effects, (b) accelerated healing, and (c) true ulcer prevention. Traditional ulcer analysis could miss recurring ulcers due to an analgesic effect or accelerated healing. Point-prevalence analysis could miss recurring ulcers due to accelerated healing between endoscopic examinations. Maximum ulcer analyses, a novel statistical method, eliminated analgesic effects by regularly scheduled endoscopies and accelerated healing of recurring ulcers by frequent endoscopies and an open-label phase. Maximum ulcer analysis reflects true ulcer recurrence and prevention. Sucralfate was significantly superior to placebo in reducing ulcer prevalence by all analyses. Significance (p less than 0.05) was found at months 3 and 4 for all analyses. All months were significant in the traditional analysis, months 2-4 in point-prevalence analysis, and months 3-4 in the maximal ulcer prevalence analysis. Sucralfate was shown to be effective for the prevention of duodenal ulcer recurrence by a true reduction in new ulcer development.

  18. An investigation of new toxicity test method performance in validation studies: 1. Toxicity test methods that have predictive capacity no greater than chance.

    PubMed

    Bruner, L H; Carr, G J; Harbell, J W; Curren, R D

    2002-06-01

    An approach commonly used to measure new toxicity test method (NTM) performance in validation studies is to divide toxicity results into positive and negative classifications, and then identify true positive (TP), true negative (TN), false positive (FP) and false negative (FN) results. After this step is completed, the contingent probability statistics (CPS), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) are calculated. Although these statistics are widely used and often the only statistics used to assess the performance of toxicity test methods, there is little specific guidance in the validation literature on what values for these statistics indicate adequate performance. The purpose of this study was to begin developing data-based answers to this question by characterizing the CPS obtained from an NTM whose data have a completely random association with a reference test method (RTM). Determining the CPS of this worst-case scenario is useful because it provides a lower baseline from which the performance of an NTM can be judged in future validation studies. It also provides an indication of relationships in the CPS that help identify random or near-random relationships in the data. The results from this study of randomly associated tests show that the values obtained for the statistics vary significantly depending on the cut-offs chosen, that high values can be obtained for individual statistics, and that the different measures cannot be considered independently when evaluating the performance of an NTM. When the association between results of an NTM and RTM is random, the sum of the complementary pairs of statistics (sensitivity + specificity, NPV + PPV) is approximately 1, and the prevalence (i.e., the proportion of toxic chemicals in the population of chemicals) and PPV are equal. Given that combinations of high sensitivity-low specificity or low sensitivity-high specificity (i.e., the sum of the sensitivity and specificity being approximately 1) indicate lack of predictive capacity, an NTM having these performance characteristics should be considered no better for predicting toxicity than by chance alone.
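
    A small simulation in the spirit of the study above (the sample size, prevalence and positive-call rate are arbitrary assumptions) showing that when the new method's calls are independent of the reference classification, sensitivity + specificity and PPV + NPV both come out near 1 and PPV tracks the prevalence:

        import numpy as np

        rng = np.random.default_rng(4)
        n, prevalence, positive_rate = 100_000, 0.3, 0.5

        truth = rng.random(n) < prevalence       # reference test method (RTM) classification
        call = rng.random(n) < positive_rate     # NTM calls, independent of the truth

        tp = np.sum(call & truth)
        fp = np.sum(call & ~truth)
        tn = np.sum(~call & ~truth)
        fn = np.sum(~call & truth)

        sens, spec = tp / (tp + fn), tn / (tn + fp)
        ppv, npv = tp / (tp + fp), tn / (tn + fn)
        print(f"sensitivity + specificity = {sens + spec:.3f}")   # ~1 for a random association
        print(f"PPV + NPV                 = {ppv + npv:.3f}")     # ~1 as well
        print(f"PPV = {ppv:.3f} vs prevalence = {prevalence}")    # PPV ~ prevalence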

  19. Analysis of histological findings obtained combining US/mp-MRI fusion-guided biopsies with systematic US biopsies: mp-MRI role in prostate cancer detection and false negative.

    PubMed

    Faiella, Eliodoro; Santucci, Domiziana; Greco, Federico; Frauenfelder, Giulia; Giacobbe, Viola; Muto, Giovanni; Zobel, Bruno Beomonte; Grasso, Rosario Francesco

    2018-02-01

    To evaluate the diagnostic accuracy of mp-MRI correlating US/mp-MRI fusion-guided biopsy with systematic random US-guided biopsy in prostate cancer diagnosis. 137 suspected prostatic abnormalities were identified on mp-MRI (1.5T) in 96 patients and classified according to PI-RADS score v2. All target lesions underwent US/mp-MRI fusion biopsy and prostatic sampling was completed by US-guided systematic random 12-core biopsies. Histological analysis and Gleason score were established for all the samples, both target lesions defined by mp-MRI, and random biopsies. PI-RADS score was correlated with the histological results, divided into three groups (benign tissue, atypia and carcinoma) and with Gleason groups, divided into four categories considering the new Grading system of the ISUP 2014, using t test. Multivariate analysis was used to correlate PI-RADS and Gleason categories to PSA level and the axial diameter of the abnormalities. When the random core biopsies showed carcinoma (mp-MRI false-negatives), PSA value and the median Gleason value of the lesions were compared with those of carcinomas identified by mp-MRI (true-positives), using t test. There was a statistically significant difference between PI-RADS score in carcinoma, atypia and benign lesions groups (4.41, 3.61 and 3.24, respectively) and between PI-RADS score in Gleason < 7 group and Gleason > 7 group (4.14 and 4.79, respectively). mp-MRI performance was more accurate for lesions > 15 mm and in patients with PSA > 6 ng/ml. In systematic sampling, 130 (11.25%) mp-MRI false-negatives were identified. There was no statistically significant difference in median Gleason value (7.0 vs 7.06) between this group and the mp-MRI true-positives, but a significantly lower median PSA value was demonstrated (7.08 vs 7.53 ng/ml). mp-MRI remains the imaging modality of choice to identify PCa lesions. Integrating US-guided random sampling with US/mp-MRI fusion target lesions sampling, 3.49% of false-negatives were identified.

  20. Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling

    NASA Astrophysics Data System (ADS)

    Bierhorst, Peter; Shalm, Lynden K.; Mink, Alan; Jordan, Stephen; Liu, Yi-Kai; Rommal, Andrea; Glancy, Scott; Christensen, Bradley; Nam, Sae Woo; Knill, Emanuel

    Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment to obtain data containing randomness that cannot be predicted by any theory that does not also allow the sending of signals faster than the speed of light. To certify and quantify the randomness, we develop a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtain 256 new random bits, uniform to within 10^-3.

  1. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing Box-Cox transformation would be the preferable way of calculating RIs, if the data satisfy a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
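
    A minimal sketch of the bootstrapping treatment compared above (an assumed implementation, not the authors' code; the resample count and the simulated skewed marker are arbitrary choices): resample the reference group with replacement and average the 2.5th and 97.5th percentiles across resamples.

        import numpy as np

        def bootstrap_reference_interval(values, n_boot=2000, lower=2.5, upper=97.5, seed=5):
            rng = np.random.default_rng(seed)
            values = np.asarray(values, dtype=float)
            lows, highs = [], []
            for _ in range(n_boot):
                resample = rng.choice(values, size=values.size, replace=True)
                lows.append(np.percentile(resample, lower))
                highs.append(np.percentile(resample, upper))
            return np.mean(lows), np.mean(highs)

        rng = np.random.default_rng(6)
        reference_group = rng.lognormal(mean=1.0, sigma=0.4, size=120)   # skewed marker, n = 120
        print(bootstrap_reference_interval(reference_group))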

  2. p-Curve and p-Hacking in Observational Research

    PubMed Central

    Bruns, Stephan B.; Ioannidis, John P. A.

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as a practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call into question recent studies that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable. PMID:26886098

  3. Symptoms in Response to Controlled Diesel Exhaust More Closely Reflect Exposure Perception Than True Exposure

    PubMed Central

    Carlsten, Chris; Oron, Assaf P.; Curtiss, Heidi; Jarvis, Sara; Daniell, William; Kaufman, Joel D.

    2013-01-01

    Background Diesel exhaust (DE) exposures are very common, yet exposure-related symptoms haven’t been rigorously examined. Objective Describe symptomatic responses to freshly generated and diluted DE and filtered air (FA) in a controlled human exposure setting; assess whether such responses are altered by perception of exposure. Methods 43 subjects participated within three double-blind crossover experiments to order-randomized DE exposure levels (FA and DE calibrated at 100 and/or 200 micrograms/m3 particulate matter of diameter less than 2.5 microns), and completed questionnaires regarding symptoms and dose perception. Results For a given symptom cluster, the majority of those exposed to moderate concentrations of diesel exhaust do not report such symptoms. The most commonly reported symptom cluster was of the nose (29%). Blinding to exposure is generally effective. Perceived exposure, rather than true exposure, is the dominant modifier of symptom reporting. Conclusion Controlled human exposure to moderate-dose diesel exhaust is associated with a range of mild symptoms, though the majority of individuals will not experience any given symptom. Blinding to DE exposure is generally effective. Perceived DE exposure, rather than true DE exposure, is the dominant modifier of symptom reporting. PMID:24358296

  4. Human Fear Chemosignaling: Evidence from a Meta-Analysis.

    PubMed

    de Groot, Jasper H B; Smeets, Monique A M

    2017-10-01

    Alarm pheromones are widely used in the animal kingdom. Notably, there are 26 published studies (N = 1652) highlighting a human capacity to communicate fear, stress, and anxiety via body odor from one person (66% males) to another (69% females). The question is whether the findings of this literature reflect a true effect, and what the average effect size is. These questions were answered by combining traditional meta-analysis with novel meta-analytical tools (p-curve analysis and p-uniform), techniques that could indicate whether findings are likely to reflect a true effect based on the distribution of P-values. A traditional random-effects meta-analysis yielded a small-to-moderate effect size (Hedges' g: 0.36, 95% CI: 0.31-0.41), p-curve analysis showed evidence diagnostic of a true effect (ps < 0.0001), and there was no evidence for publication bias. This meta-analysis did not assess the internal validity of the current studies; yet, the combined results illustrate the statistical robustness of a field in human olfaction dealing with the human capacity to communicate certain emotions (fear, stress, anxiety) via body odor. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    PubMed

    Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-03-15

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
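
    As a rough sketch of random-forest-based chained imputation of the kind compared above (this uses scikit-learn's experimental IterativeImputer with a random-forest estimator as a stand-in; it is not the algorithm evaluated in the paper, and the nonlinear data are simulated):

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(7)
        n = 1000
        x1 = rng.normal(size=n)
        x2 = x1 ** 2 + rng.normal(scale=0.5, size=n)        # nonlinear dependence on x1
        x3 = x1 + x2 + rng.normal(scale=0.5, size=n)
        data = np.column_stack([x1, x2, x3])

        missing = data.copy()
        mask = rng.random(n) < 0.3                           # make ~30% of x2 missing at random
        missing[mask, 1] = np.nan

        imputer = IterativeImputer(
            estimator=RandomForestRegressor(n_estimators=50, random_state=0),
            max_iter=5, random_state=0)
        completed = imputer.fit_transform(missing)
        rmse = np.sqrt(np.mean((completed[mask, 1] - data[mask, 1]) ** 2))
        print(f"RMSE of imputed x2 values: {rmse:.3f}")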

  6. Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE): a randomized controlled trial protocol.

    PubMed

    Winstein, Carolee J; Wolf, Steven L; Dromerick, Alexander W; Lane, Christianne J; Nelsen, Monica A; Lewthwaite, Rebecca; Blanton, Sarah; Scott, Charro; Reiss, Aimee; Cen, Steven Yong; Holley, Rahsaan; Azen, Stanley P

    2013-01-11

    Residual disability after stroke is substantial; 65% of patients at 6 months are unable to incorporate the impaired upper extremity into daily activities. Task-oriented training programs are rapidly being adopted into clinical practice. In the absence of any consensus on the essential elements or dose of task-specific training, an urgent need exists for a well-designed trial to determine the effectiveness of a specific multidimensional task-based program governed by a comprehensive set of evidence-based principles. The Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) Stroke Initiative is a parallel group, three-arm, single blind, superiority randomized controlled trial of a theoretically-defensible, upper extremity rehabilitation program provided in the outpatient setting. The primary objective of ICARE is to determine if there is a greater improvement in arm and hand recovery one year after randomization in participants receiving a structured training program termed Accelerated Skill Acquisition Program (ASAP), compared to participants receiving usual and customary therapy of an equivalent dose (DEUCC). Two secondary objectives are to compare ASAP to a true (active monitoring only) usual and customary (UCC) therapy group and to compare DEUCC and UCC. Following baseline assessment, participants are randomized by site, stratified for stroke duration and motor severity. 360 adults will be randomized, 14 to 106 days following ischemic or hemorrhagic stroke onset, with mild to moderate upper extremity impairment, recruited at sites in Atlanta, Los Angeles and Washington, D.C. The Wolf Motor Function Test (WMFT) time score is the primary outcome at 1 year post-randomization. The Stroke Impact Scale (SIS) hand domain is a secondary outcome measure. The design includes concealed allocation during recruitment, screening and baseline, blinded outcome assessment and intention to treat analyses. Our primary hypothesis is that the improvement in log-transformed WMFT time will be greater for the ASAP than the DEUCC group. This pre-planned hypothesis will be tested at a significance level of 0.05. ICARE will test whether ASAP is superior to the same number of hours of usual therapy. Pre-specified secondary analyses will test whether 30 hours of usual therapy is superior to current usual and customary therapy not controlled for dose. www.ClinicalTrials.gov Identifier: NCT00871715

  7. E-Rehabilitation - an Internet and mobile phone based tailored intervention to enhance self-management of cardiovascular disease: study protocol for a randomized controlled trial.

    PubMed

    Antypas, Konstantinos; Wangberg, Silje C

    2012-07-09

    Cardiac rehabilitation is very important for the recovery and the secondary prevention of cardiovascular disease, and one of its main strategies is to increase the level of physical activity. Internet and mobile phone based interventions have been successfully used to help people to achieve this. One of the components that are related to the efficacy of these interventions is tailoring of content to the individual. This trial is studying the effect of a longitudinally tailored Internet and mobile phone based intervention that is based on models of health behaviour, on the level of physical activity and the adherence to the intervention, as an extension of a face-to-face cardiac rehabilitation stay. A parallel group, cluster randomized controlled trial. The study population is adult participants of a cardiac rehabilitation programme in Norway with home Internet access and mobile phone, who in monthly clusters are randomized to the control or the intervention condition. Participants have access to a website with information regarding cardiac rehabilitation, an online discussion forum and an online activity calendar. Those randomized to the intervention condition, receive in addition tailored content based on models of health behaviour, through the website and mobile text messages. The objective is to assess the effect of the intervention on maintenance of self-management behaviours after the rehabilitation stay. Main outcome is the level of physical activity one month, three months and one year after the end of the cardiac rehabilitation programme. The randomization of clusters is based on a true random number online service, and participants, investigators and outcome assessor are blinded to the condition of the clusters. The study suggests a theory-based intervention that combines models of health behaviour in an innovative way, in order to tailor the delivered content. The users have been actively involved in its design, and because of the use of Open-Source software, the intervention can easily and at low-cost be reproduced and expanded by others. Challenges are the recruitment in the elderly population and the possible underrepresentation of women in the study sample. Funding by Northern Norway Regional Health Authority. Trial registry http://www.clinicaltrials.gov: NCT01223170.

  8. Unbiased All-Optical Random-Number Generator

    NASA Astrophysics Data System (ADS)

    Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja

    2017-10-01

    The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.

  9. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138

  10. Consequences of plant invasions on compartmentalization and species’ roles in plant–pollinator networks

    PubMed Central

    Albrecht, Matthias; Padrón, Benigno; Bartomeus, Ignasi; Traveset, Anna

    2014-01-01

    Compartmentalization—the organization of ecological interaction networks into subsets of species that do not interact with other subsets (true compartments) or interact more frequently among themselves than with other species (modules)—has been identified as a key property for the functioning, stability and evolution of ecological communities. Invasions by entomophilous invasive plants may profoundly alter the way interaction networks are compartmentalized. We analysed a comprehensive dataset of 40 paired plant–pollinator networks (invaded versus uninvaded) to test this hypothesis. We show that invasive plants have higher generalization levels with respect to their pollinators than natives. The consequences for network topology are that—rather than displacing native species from the network—plant invaders attracting pollinators into invaded modules tend to play new important topological roles (i.e. network hubs, module hubs and connectors) and cause role shifts in native species, creating larger modules that are more connected among each other. While the number of true compartments was lower in invaded compared with uninvaded networks, the effect of invasion on modularity was contingent on the study system. Interestingly, the generalization level of the invasive plants partially explains this pattern, with more generalized invaders contributing to a lower modularity. Our findings indicate that the altered interaction structure of invaded networks makes them more robust against simulated random secondary species extinctions, but more vulnerable when the typically highly connected invasive plants go extinct first. The consequences and pathways by which biological invasions alter the interaction structure of plant–pollinator communities highlighted in this study may have important dynamical and functional implications, for example, by influencing multi-species reciprocal selection regimes and coevolutionary processes. PMID:24943368

  11. Consideraciones para la estimacion de abundancia de poblaciones de mamiferos. [Considerations for the estimation of abundance of mammal populations.

    USGS Publications Warehouse

    Walker, R.S.; Novare, A.J.; Nichols, J.D.

    2000-01-01

    Estimation of abundance of mammal populations is essential for monitoring programs and for many ecological investigations. The first step for any study of variation in mammal abundance over space or time is to define the objectives of the study and how and why abundance data are to be used. The data used to estimate abundance are count statistics in the form of counts of animals or their signs. There are two major sources of uncertainty that must be considered in the design of the study: spatial variation and the relationship between abundance and the count statistic. Spatial variation in the distribution of animals or signs may be taken into account with appropriate spatial sampling. Count statistics may be viewed as random variables, with the expected value of the count statistic equal to the true abundance of the population multiplied by a coefficient p. With direct counts, p represents the probability of detection or capture of individuals, and with indirect counts it represents the rate of production of the signs as well as their probability of detection. Comparisons of abundance using count statistics from different times or places assume that the p are the same for all times or places being compared (p= pi). In spite of considerable evidence that this assumption rarely holds true, it is commonly made in studies of mammal abundance, as when the minimum number alive or indices based on sign counts are used to compare abundance in different habitats or times. Alternatives to relying on this assumption are to calibrate the index used by testing the assumption of p= pi, or to incorporate the estimation of p into the study design.

  12. Effects of a Structured Discharge Planning Program on Perceived Functional Status, Cardiac Self-efficacy, Patient Satisfaction, and Unexpected Hospital Revisits Among Filipino Cardiac Patients: A Randomized Controlled Study.

    PubMed

    Cajanding, Ruff Joseph

    Cardiovascular diseases remain the leading cause of morbidity and mortality among Filipinos and are responsible for a very large number of hospital readmissions. Comprehensive discharge planning programs have demonstrated positive benefits among various populations of patients with cardiovascular disease, but the clinical and psychosocial effects of such intervention among Filipino patients with acute myocardial infarction (AMI) have not been studied. In this study we aimed to determine the effectiveness of a nurse-led structured discharge planning program on perceived functional status, cardiac self-efficacy, patient satisfaction, and unexpected hospital revisits among Filipino patients with AMI. A true experimental (randomized control) 2-group design with repeated measures and data collected before and after intervention and at 1-month follow-up was used in this study. Participants were assigned to either the control (n = 68) or the intervention group (n = 75). Intervention participants underwent a 3-day structured discharge planning program implemented by a cardiovascular nurse practitioner, which is comprised of a series of individualized lecture-discussion, provision of feedback, integrative problem solving, goal setting, and action planning. Control participants received standard routine care. Measures of functional status, cardiac self-efficacy, and patient satisfaction were measured at baseline; cardiac self-efficacy and patient satisfaction scores were measured prior to discharge, and perceived functional status and number of revisits were measured 1 month after discharge. Participants in the intervention group had significant improvement in functional status, cardiac self-efficacy, and patient satisfaction scores at baseline and at follow-up compared with the control participants. Furthermore, participants in the intervention group had significantly fewer hospital revisits compared with those who received only standard care. The results demonstrate that a nurse-led structured discharge planning program is an effective intervention in improving perceived functional health status, cardiac self-efficacy, and patient satisfaction, while reducing the number of unexpected hospital revisits, among Filipino patients with AMI. It is recommended that this intervention be incorporated in the optimal care of patients being discharged with an AMI.

  13. 40 CFR 205.171-7 - Reporting of the test results.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Reporting of the test results. 205.171... Reporting of the test results. (a)(1) The manufacturer must submit a copy of the test report for all testing...) Year, make serial number, and model of test motorcycle; and (iv) Test results by serial numbers. (b) In...

  14. 40 CFR 205.160-5 - Reporting of the test results.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Reporting of the test results. 205.160... test results. (a)(1) The manufacturer must submit a copy of the test report for all testing conducted...; (iii) Vehicle serial number; and (iv) Test results by serial numbers. (b) In the case where an EPA...

  15. Spreadsheet Simulation of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Boger, George

    2005-01-01

    If larger and larger samples are successively drawn from a population and a running average calculated after each sample has been drawn, the sequence of averages will converge to the mean, [mu], of the population. This remarkable fact, known as the law of large numbers, holds true if samples are drawn from a population of discrete or continuous…
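
    The same demonstration is easy to reproduce outside a spreadsheet; the short sketch below (an illustrative assumption, not the article's worksheet) draws successive rolls of a fair six-sided die and shows the running average converging to the population mean of 3.5.

        import numpy as np

        rng = np.random.default_rng(8)
        draws = rng.integers(1, 7, size=100_000)                  # successive rolls of a fair die
        running_average = np.cumsum(draws) / np.arange(1, draws.size + 1)

        for n in (10, 100, 1_000, 10_000, 100_000):
            print(f"after {n:>6} draws: running average = {running_average[n - 1]:.4f}")
        # The averages drift toward 3.5 as n grows, as the law of large numbers predicts.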

  16. Ensuring the Future

    ERIC Educational Resources Information Center

    Ashkin, Stephen P.

    2005-01-01

    It is hard to be a school professional these days. There seems to be an increasing number of demands, but the time, money and public support needed to bring about school improvements seem to be declining. This is especially true when considering buildings and grounds, which often get a seat at the back of the class. A number of new, affordable…

  17. 21 CFR 607.35 - Notification of registrant; blood product establishment registration number and NDC Labeler Code.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Notification of registrant; blood product establishment registration number and NDC Labeler Code. 607.35 Section 607.35 Food and Drugs FOOD AND DRUG... PRODUCT LISTING FOR MANUFACTURERS OF HUMAN BLOOD AND BLOOD PRODUCTS Procedures for Domestic Blood Product...

  18. The Role of Knowledge Visualisation in Supporting Postgraduate Dissertation Assessment

    ERIC Educational Resources Information Center

    Renaud, Karen; Van Biljon, Judy

    2017-01-01

    There has been a worldwide increase in the number of postgraduate students over the last few years and therefore some examiners struggle to maintain high standards of consistency, accuracy and fairness. This is especially true in developing countries where the increase in supervision capacity is not on a par with the growth in student numbers. The…

  19. 21 CFR 720.9 - Misbranding by reference to filing or to statement number.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Misbranding by reference to filing or to statement number. 720.9 Section 720.9 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) COSMETICS VOLUNTARY FILING OF COSMETIC PRODUCT INGREDIENT COMPOSITION STATEMENTS § 720...

  20. Random numbers certified by Bell's theorem.

    PubMed

    Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C

    2010-04-15

    Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory.

  1. Pólya number and first return of bursty random walk: Rigorous solutions

    NASA Astrophysics Data System (ADS)

    Wan, J.; Xu, X. P.

    2012-03-01

    The recurrence properties of random walks can be characterized by the Pólya number, i.e., the probability that the walker has returned to the origin at least once. In this paper, we investigate the Pólya number and first return for a bursty random walk on a line, in which the walk has different step sizes and moving probabilities. Using the concept of the Catalan number, we obtain exact results for the first return probability, the average first return time and the Pólya number for the first time. We show that the Pólya number displays two different functional behaviors when the walk deviates from the recurrent point. By utilizing the Lagrange inversion formula, we interpret our findings by transforming the Pólya number into the closed-form solution of an inverse function. We also calculate the Pólya number using another approach, which corroborates our results and conclusions. Finally, we consider the recurrence properties and the Pólya number of two variations of the bursty random walk model.
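
    Because the Pólya number is simply the probability of at least one return to the origin, it can be estimated numerically for any concrete variant of the walk. The sketch below is a Monte Carlo estimate for a simple bursty walk that moves +1 with probability p and -2 otherwise; these step sizes and the cutoff on walk length are illustrative assumptions rather than the exact model analysed in the paper.

      import random

      def polya_number_mc(p, step_right=1, step_left=-2,
                          n_walks=20_000, max_steps=10_000):
          """Estimate the Polya number (probability of ever returning to the
          origin) for a one-dimensional walk with asymmetric step sizes."""
          returns = 0
          for _ in range(n_walks):
              pos = 0
              for _ in range(max_steps):
                  pos += step_right if random.random() < p else step_left
                  if pos == 0:          # first return to the origin
                      returns += 1
                      break
          return returns / n_walks

      for p in (0.4, 0.5, 2 / 3, 0.8):
          print(f"p = {p:.3f} -> estimated Polya number ~ {polya_number_mc(p):.3f}")

    For these particular step sizes the mean displacement per step is 3p - 2, so the estimate should be largest near p = 2/3, loosely illustrating the change of behaviour away from the recurrent point described in the abstract.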

  2. How reliable are ligand-centric methods for Target Fishing?

    NASA Astrophysics Data System (ADS)

    Peon, Antonio; Dang, Cuong; Ballester, Pedro

    2016-04-01

    Computational methods for Target Fishing (TF), also known as Target Prediction or Polypharmacology Prediction, can be used to discover new targets for small-molecule drugs. This may result in repositioning the drug in a new indication or improving our current understanding of its efficacy and side effects. While there is a substantial body of research on TF methods, there is still a need to improve their validation, which is often limited to a small part of the available targets and not easily interpretable by the user. Here we discuss how target-centric TF methods are inherently limited by the number of targets that they can possibly predict (this number is by construction much larger in ligand-centric techniques). We also propose a new benchmark to validate TF methods, which is particularly suited to analysing how predictive performance varies with the query molecule. On average over approved drugs, we estimate that only five predicted targets will have to be tested to find two true targets with submicromolar potency (a strong variability in performance is, however, observed). In addition, we find that an approved drug currently has an average of eight known targets, which reinforces the notion that polypharmacology is a common and strong phenomenon. Furthermore, with the assistance of a control group of randomly selected molecules, we show that the targets of approved drugs are generally harder to predict.

  3. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets: I. Summary statistics

    USGS Publications Warehouse

    Antweiler, Ronald C.; Taylor, Howard E.

    2008-01-01

    The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
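
    To make the substitution treatments concrete, the following sketch computes a mean and standard deviation for a left-censored sample using the two substitution rules discussed above (one-half the detection limit, and a uniform random value between zero and the detection limit). The data and detection limit are made-up illustrations; the Kaplan-Meier, ROS and maximum likelihood estimators compared in the study are not reproduced here.

      import random
      import statistics

      # Hypothetical measurements; None marks a value reported as "< DL" (censored).
      detection_limit = 1.0
      observed = [3.2, 1.7, None, 2.1, None, 1.5, None, 4.8, 1.2, None]

      def substitute(values, dl, method="half"):
          """Replace censored entries by dl/2 ('half') or by a uniform draw on
          (0, dl) ('random'), then return the mean and standard deviation."""
          filled = []
          for v in values:
              if v is not None:
                  filled.append(v)
              elif method == "half":
                  filled.append(dl / 2)
              else:
                  filled.append(random.uniform(0.0, dl))
          return statistics.mean(filled), statistics.stdev(filled)

      for method in ("half", "random"):
          m, s = substitute(observed, detection_limit, method)
          print(f"{method:6s} substitution: mean = {m:.2f}, sd = {s:.2f}")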

  4. Decision-making ability of Physarum polycephalum enhanced by its coordinated spatiotemporal oscillatory dynamics.

    PubMed

    Iwayama, Koji; Zhu, Liping; Hirata, Yoshito; Aono, Masashi; Hara, Masahiko; Aihara, Kazuyuki

    2016-04-12

    An amoeboid unicellular organism, a plasmodium of the true slime mold Physarum polycephalum, exhibits complex spatiotemporal oscillatory dynamics and sophisticated information-processing capabilities while deforming its amorphous body. We previously devised an 'amoeba-based computer' (ABC) that implemented optical feedback control to lead this amoeboid organism to search for a solution to the traveling salesman problem (TSP). In the ABC, the shortest TSP route (the optimal solution) is represented by the shape of the organism in which the body area (nutrient absorption) is maximized while the risk of being exposed to aversive light stimuli is minimized. The shortness of the TSP route found by the ABC therefore serves as a quantitative measure of the optimality of the decision made by the organism. However, it remains unclear how the decision-making ability of the organism originates from its oscillatory dynamics. We investigated the number of coexisting traveling waves in the spatiotemporal patterns of the oscillatory dynamics of the organism. We show that a shorter TSP route can be found when the organism exhibits a lower number of traveling waves. The results imply that the oscillatory dynamics are highly coordinated throughout the global body. Based on the results, we discuss how the decision-making ability of the organism can be enhanced not by uncorrelated random fluctuations but by its highly coordinated oscillatory dynamics.

  5. Intelligence Support to the Life Science Community: Mitigating Threats from Bioterrorism

    DTIC Science & Technology

    2004-01-01

    ...biological weapons capability at a rate that will outpace the monitoring ability of the national security community. Thus, people doing security... [footnote 14] K. Alibek, Biohazard: The Chilling True Story of the Largest Biological Weapons Program in the World (New York: Random House, 1999), and ...Alberts, R. M. May, "Scientist Support for Biological Weapons Controls," Science 298 (8 November 2002): 1135. ...information from general release without a...

  6. Insulin glulisine in the management of diabetes

    PubMed Central

    Yamada, Satoru

    2009-01-01

    Insulin glulisine is appealing in principle, but the advantages of this drug over the other rapid-acting insulin analogs are still relatively unknown. The frequency of hypoglycemia, convenience in the timing of administration, and improvements in terms of HbA1c seem similar among the rapid-acting insulin analogs, including insulin glulisine. Only properly randomized long-term clinical studies with insulin glulisine will reveal the true value of this novel insulin analog. PMID:21437124

  7. Experimentally generated randomness certified by the impossibility of superluminal signals.

    PubMed

    Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K

    2018-04-01

    From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.

  8. Influence of particle aspect ratio on the midinfrared extinction spectra of wavelength-sized ice crystals.

    PubMed

    Wagner, Robert; Benz, Stefan; Möhler, Ottmar; Saathoff, Harald; Schnaiter, Martin; Leisner, Thomas

    2007-12-20

    We have used the T-matrix method and the discrete dipole approximation to compute the midinfrared extinction cross-sections (4500-800 cm^-1) of randomly oriented circular ice cylinders for aspect ratios extending up to 10 for oblate and down to 1/6 for prolate particle shapes. Equal-volume sphere diameters ranged from 0.1 to 10 μm for both particle classes. A high degree of particle asphericity provokes a strong distortion of the spectral habitus compared to the extinction spectrum of compactly shaped ice crystals with an aspect ratio around 1. The magnitude and the sign (increase or diminution) of the shape-related changes in both the absorption and the scattering cross-sections crucially depend on the particle size and the values for the real and imaginary part of the complex refractive index. When increasing the particle asphericity for a given equal-volume sphere diameter, the values for the overall extinction cross-sections may change in opposite directions for different parts of the spectrum. We have applied our calculations to the analysis of recent expansion cooling experiments on the formation of cirrus clouds, performed in the large coolable aerosol and cloud chamber AIDA of Forschungszentrum Karlsruhe at a temperature of 210 K. Depending on the nature of the seed particles and the temperature and relative humidity characteristics during the expansion, ice crystals of various shapes and aspect ratios could be produced. For a particular expansion experiment, using Illite mineral dust particles coated with a layer of secondary organic matter as seed aerosol, we have clearly detected the spectral signatures characteristic of strongly aspherical ice crystal habits in the recorded infrared extinction spectra. We demonstrate that the number size distributions and total number concentrations of the ice particles that were generated in this expansion run can only be accurately derived from the recorded infrared spectra when employing aspect ratios as high as 10 in the retrieval approach. Remarkably, the measured spectra could also be accurately fitted when employing an aspect ratio of 1 in the retrieval. The so-deduced ice particle number concentrations, however, exceeded the true values, determined with an optical particle counter, by more than 1 order of magnitude. Thus, the shape-induced spectral changes between the extinction spectra of platelike ice crystals of aspect ratio 10 and compactly shaped particles of aspect ratio 1 can be efficiently balanced by deforming the true number size distribution of the ice cloud. As a result of this severe size/shape ambiguity in the spectral analysis, we consider it indispensable to cross-check the infrared retrieval results of wavelength-sized ice particles with independent reference measurements of either the number size distribution or the particle morphology.

  9. Statistical auditing and randomness test of lotto k/N-type games

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
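
    A rough illustration of such an audit (not the authors' exact statistics) is to compare how often each number appears in the historical record with the per-draw inclusion probability k/N implied by the hypergeometric model. In the sketch below the game parameters are arbitrary and the "historical" draws are simulated in place of real lottery data.

      import random
      from collections import Counter

      N, k = 49, 6          # hypothetical "lotto 6/49" game
      n_draws = 5_000       # number of drawings to audit

      # Simulated stand-in for the historical record of winning numbers.
      history = [random.sample(range(1, N + 1), k) for _ in range(n_draws)]

      counts = Counter(num for draw in history for num in draw)
      expected = n_draws * k / N        # expected appearances of each number

      # Chi-square-style discrepancy over the N numbers (screening only:
      # numbers within one draw are dependent, so this is not an exact test).
      chi2 = sum((counts[num] - expected) ** 2 / expected for num in range(1, N + 1))
      print(f"expected count per number: {expected:.1f}")
      print(f"discrepancy statistic over {N} numbers: {chi2:.1f}")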

  10. A New Metamodeling Approach for Time-dependent Reliability of Dynamic Systems with Random Parameters Excited by Input Random Processes

    DTIC Science & Technology

    2014-04-09

    Igor Baseski, Dorin Drignei, Zissimos P. Mourelatos, Monica Majcher; Oakland University, Rochester, MI 48309. Contract number: W56HZV-04-2-0001.

  11. Realistic noise-tolerant randomness amplification using finite number of devices.

    PubMed

    Brandão, Fernando G S L; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-21

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  12. Realistic noise-tolerant randomness amplification using finite number of devices

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  13. Realistic noise-tolerant randomness amplification using finite number of devices

    PubMed Central

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-01-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology. PMID:27098302

  14. Generating random numbers by means of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi

    2018-07-01

    To introduce the randomness of a physical process to students, a chaotic pendulum experiment was opened at East China University of Science and Technology (ECUST) at the undergraduate level in the physics department. It was shown that chaotic motion could be initiated by adjusting the operation of the chaotic pendulum. By using the angular-displacement data of the chaotic motion, random binary numerical arrays can be generated. To check the randomness of the generated arrays, the NIST Special Publication 800-20 method was adopted. As a result, it was found that all the random arrays generated by the chaotic motion could pass the validity criteria, and some of them were even better in quality than pseudo-random numbers generated by a computer. Through the experiments, it is demonstrated that a chaotic pendulum can be used as an efficient mechanical facility for generating random numbers and can be applied in teaching random motion to students.
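
    The bit-generation step can be illustrated with any chaotic series. The sketch below uses the logistic map in its chaotic regime as a stand-in for the pendulum's angular-displacement data, thresholds each sample at the trajectory median to obtain bits, and applies a crude monobit balance check; this only illustrates the procedure and is far weaker than the NIST test battery used in the experiment.

      import statistics

      def logistic_bits(n_bits=10_000, r=3.99, x0=0.37):
          """Generate bits from a chaotic logistic-map trajectory by
          thresholding each sample at the trajectory median."""
          xs, x = [], x0
          for _ in range(n_bits):
              x = r * x * (1.0 - x)
              xs.append(x)
          threshold = statistics.median(xs)
          return [1 if v > threshold else 0 for v in xs]

      bits = logistic_bits()
      ones = sum(bits)
      bias = abs(ones / len(bits) - 0.5)
      # Median thresholding balances ones and zeros almost by construction;
      # real suites also test runs, serial correlations, block statistics, etc.
      print(f"{ones} ones out of {len(bits)} bits (bias {bias:.4f})")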

  15. Moving along the number line: operational momentum in nonsymbolic arithmetic.

    PubMed

    McCrink, Koleen; Dehaene, Stanislas; Dehaene-Lambertz, Ghislaine

    2007-11-01

    Can human adults perform arithmetic operations with large approximate numbers, and what effect, if any, does an internal spatial-numerical representation of numerical magnitude have on their responses? We conducted a psychophysical study in which subjects viewed several hundred short videos of sets of objects being added or subtracted from one another and judged whether the final numerosity was correct or incorrect. Over a wide range of possible outcomes, the subjects' responses peaked at the approximate location of the true numerical outcome and gradually tapered off as a function of the ratio of the true and proposed outcomes (Weber's law). Furthermore, an operational momentum effect was observed, whereby addition problems were overestimated and subtraction problems were underestimated. The results show that approximate arithmetic operates according to precise quantitative rules, perhaps analogous to those characterizing movement on an internal continuum.

  16. Compact Quantum Random Number Generator with Silicon Nanocrystals Light Emitting Device Coupled to a Silicon Photomultiplier

    NASA Astrophysics Data System (ADS)

    Bisadi, Zahra; Acerbi, Fabio; Fontana, Giorgio; Zorzi, Nicola; Piemonte, Claudio; Pucker, Georg; Pavesi, Lorenzo

    2018-02-01

    A small-sized photonic quantum random number generator that is easy to implement in small electronic devices for secure data encryption and other applications is in high demand nowadays. Here, we propose a compact configuration with a silicon-nanocrystal large-area light-emitting device (LED) coupled to a silicon photomultiplier to generate random numbers. The random number generation methodology is based on photon arrival times and is robust against the non-idealities of the detector and the source of quantum entropy. The raw data show a high quality of randomness and pass all the statistical tests in the National Institute of Standards and Technology (NIST) suite without a post-processing algorithm. The highest bit rate is 0.5 Mbps with an efficiency of 4 bits per detected photon.
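
    A common arrival-time scheme, in the spirit of the method described above though not necessarily identical to it, is to compare successive photon inter-arrival intervals: emit 1 if the first interval of a pair is longer, 0 if it is shorter, and discard ties. The sketch below simulates exponentially distributed intervals with an assumed detection rate to illustrate why this comparison is insensitive to slow drifts of the source or detector.

      import random

      def photon_bits(n_pairs=100_000, rate_hz=2.0e5):
          """Derive bits by comparing consecutive inter-arrival times of a
          simulated Poisson photon stream (exponential intervals)."""
          bits = []
          for _ in range(n_pairs):
              t1 = random.expovariate(rate_hz)
              t2 = random.expovariate(rate_hz)
              if t1 != t2:              # ties are vanishingly rare; discard them
                  bits.append(1 if t1 > t2 else 0)
          return bits

      bits = photon_bits()
      print(f"{len(bits)} bits, fraction of ones = {sum(bits) / len(bits):.4f}")

    Because the two intervals in each pair are exchangeable regardless of the (possibly drifting) mean rate, the comparison yields a 1 with probability one half, which is one reason arrival-time schemes tolerate non-idealities of the source and detector.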

  17. Frozen embryo transfer: a review on the optimal endometrial preparation and timing.

    PubMed

    Mackens, S; Santos-Ribeiro, S; van de Vijver, A; Racca, A; Van Landuyt, L; Tournaye, H; Blockeel, C

    2017-11-01

    What is the optimal endometrial preparation protocol for a frozen embryo transfer (FET)? Although the optimal endometrial preparation protocol for FET needs further research and is yet to be determined, we propose a standardized timing strategy based on the currently available evidence, which could assist in the harmonization and comparability of clinical practice and future trials. Amid a continuous increase in the number of FET cycles, determining the optimal endometrial preparation protocol has become paramount to maximizing ART success. In current daily practice, different FET preparation methods and timing strategies are used. This is a review of the current literature on FET preparation methods, with special attention to the timing of the embryo transfer. Literature on the topic was retrieved from PubMed, and references from relevant articles were investigated until June 2017. High-quality randomized controlled trials (RCTs) are scarce and, hence, the evidence for the best protocol for FET is poor. Future research should compare both pregnancy and neonatal outcomes between HRT and true natural cycle (NC) FET. In terms of embryo transfer timing, we propose starting progesterone intake on the theoretical day of oocyte retrieval in HRT and performing blastocyst transfer at hCG + 7 or LH + 6 in modified or true NC, respectively. As only a few high-quality RCTs on the optimal preparation for FET are available in the existing literature, no definitive conclusion on the benefit of one protocol over another can be drawn so far. Caution when using HRT for FET is warranted, since the rate of early pregnancy loss is alarmingly high in some reports. S.M. is funded by the Research Fund of Flanders (FWO). H.T. and C.B. report grants from Merck, Goodlife, Besins and Abbott during the conduct of the study.

  18. Observer error structure in bull trout redd counts in Montana streams: Implications for inference on true redd numbers

    USGS Publications Warehouse

    Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.

    2006-01-01

    Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
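
    The observer-error structure described above (binomial detection of true redds plus a Poisson number of false identifications) is straightforward to write down, and the expected observed count, namely the detection probability times the true count plus the false-identification rate, suggests a simple moment-style correction. The sketch below simulates observer counts under that structure and applies the correction; the detection probability, false-identification rate and counts are hypothetical values loosely inspired by the abstract, not the study's fitted parameters.

      import math
      import random

      def poisson(lam):
          """Knuth's algorithm for Poisson random variates (small lambda)."""
          limit, k, prod = math.exp(-lam), 0, 1.0
          while True:
              prod *= random.random()
              if prod <= limit:
                  return k
              k += 1

      def simulate_observer_count(true_redds, p_detect=0.83, lam_false=1.0):
          """Observed count = Binomial(true_redds, p_detect) detections
          plus a Poisson(lam_false) number of false identifications."""
          detected = sum(random.random() < p_detect for _ in range(true_redds))
          return detected + poisson(lam_false)

      def corrected_estimate(observed, p_detect=0.83, lam_false=1.0):
          """Moment-based correction from E[observed] = p * true + lambda."""
          return max(0.0, (observed - lam_false) / p_detect)

      true_redds = 60
      obs = simulate_observer_count(true_redds)
      print(f"true = {true_redds}, observed = {obs}, corrected ~ {corrected_estimate(obs):.1f}")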

  19. Validation of SmartRank: A likelihood ratio software for searching national DNA databases with complex DNA profiles.

    PubMed

    Benschop, Corina C G; van de Merwe, Linda; de Jong, Jeroen; Vanvooren, Vanessa; Kempenaers, Morgane; Kees van der Beek, C P; Barni, Filippo; Reyes, Eusebio López; Moulin, Léa; Pene, Laurent; Haned, Hinda; Sijen, Titia

    2017-07-01

    Searching a national DNA database with complex and incomplete profiles usually yields very large numbers of possible matches, presenting many candidate suspects to be further investigated by the forensic scientist and/or police. Current practice in most forensic laboratories consists of ordering these 'hits' by the number of alleles matching the searched profile. Candidate profiles that share the same number of matching alleles are thus not differentiated and, in the absence of other ranking criteria for the candidate list, it may be difficult to discern a true match from the false positives or to notice that all candidates are in fact false positives. SmartRank was developed to put forward only relevant candidates and rank them accordingly. The SmartRank software computes a likelihood ratio (LR) for the searched profile against each profile in the DNA database and ranks database entries above a defined LR threshold according to the calculated LR. In this study, we examined, for mixed DNA profiles of variable complexity, whether the true donors are retrieved, what the number of false positives above an LR threshold is, and the ranking position of the true donors. Using 343 mixed DNA profiles, over 750 SmartRank searches were performed. In addition, the performance of SmartRank and CODIS was compared for DNA database searches, and SmartRank was found to be complementary to CODIS. We also describe the applicable domain of SmartRank and provide guidelines. The SmartRank software is open-source and freely available. Using the best practice guidelines, SmartRank enables obtaining investigative leads in criminal cases lacking a suspect.
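
    The ranking logic described here (score every database entry with an LR, keep only entries above a threshold, and sort by LR instead of by the number of shared alleles) can be sketched independently of the LR model itself. In the toy example below the LR values are precomputed placeholders; the actual SmartRank LR computation for mixed profiles is not reproduced.

      # Hypothetical (profile_id, likelihood_ratio) pairs from some LR model.
      database_scores = [
          ("P-0001", 0.3), ("P-0002", 8.2e4), ("P-0003", 1.1),
          ("P-0004", 4.5e2), ("P-0005", 2.7e6), ("P-0006", 9.0),
      ]

      LR_THRESHOLD = 1.0e3   # report only candidates whose LR exceeds this value

      candidates = sorted(
          (entry for entry in database_scores if entry[1] > LR_THRESHOLD),
          key=lambda entry: entry[1],
          reverse=True,
      )

      for rank, (profile_id, lr) in enumerate(candidates, start=1):
          print(f"rank {rank}: {profile_id}  LR = {lr:.3g}")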

  20. Response Rates in Random-Digit-Dialed Telephone Surveys: Estimation vs. Measurement.

    ERIC Educational Resources Information Center

    Franz, Jennifer D.

    The efficacy of the random digit dialing method in telephone surveys was examined. Random digit dialing (RDD) generates a pure random sample and provides the advantage of including unlisted phone numbers, as well as numbers which are too new to be listed. Its disadvantage is that it generates a major proportion of nonworking and business…
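
    The core of RDD (drawing the dialed digits uniformly at random so that unlisted and newly assigned numbers have the same chance of selection as listed ones) can be sketched in a few lines. The area codes below are placeholders; real RDD frames usually restrict the leading digits to assigned exchanges precisely to reduce the share of nonworking and business numbers noted above.

      import random

      def rdd_sample(n, area_codes=("607", "415"), seed=None):
          """Generate n random-digit-dialed, US-style numbers: a sampled
          area code followed by seven uniformly random digits."""
          rng = random.Random(seed)
          numbers = []
          for _ in range(n):
              area = rng.choice(area_codes)
              local = "".join(str(rng.randrange(10)) for _ in range(7))
              numbers.append(f"({area}) {local[:3]}-{local[3:]}")
          return numbers

      for number in rdd_sample(5, seed=1):
          print(number)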
