Sample records for uniform random number

  1. Analysis of Uniform Random Numbers Generated by RANDU and URN Using Ten Different Seeds.

    DTIC Science & Technology

    The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds, are...The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds. (Author)

  2. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (like those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  3. Secure uniform random-number extraction via incoherent strategies

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Zhu, Huangjun

    2018-01-01

    To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by the third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of the measurement on the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.

  4. Linking of uniform random polygons in confined spaces

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.

    2007-03-01

    In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of n vertices is at least 1 - O(1/√n). Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of m and n vertices respectively, is bounded below by 1 - O(1/√(mn)). In particular, the linking probability between two uniform random polygons, both of n vertices, is bounded below by 1 - O(1/n).

  5. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator, or create a buffer.
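
    One plausible reading of the uniform-number step above is the probability integral transform: a standardized normal variate z passed through the normal CDF Φ yields a value uniform on [0, 1]. The sketch below is illustrative, not the authors' code; the synthetic Gaussian data stand in for the detrended neutron counts.

```python
import math
import random

def normal_to_uniform(z):
    """Probability integral transform: for Z ~ N(0, 1),
    U = Phi(Z) = 0.5 * (1 + erf(Z / sqrt(2))) is Uniform(0, 1)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Demo: standardized Gaussian data mapped to approximately uniform values.
rng = random.Random(1)
zs = [rng.gauss(0.0, 1.0) for _ in range(10000)]
us = [normal_to_uniform(z) for z in zs]
mean_u = sum(us) / len(us)  # close to 0.5 when the inputs are standard normal
```

The same mapping run in reverse (applying the inverse normal CDF to uniforms) is the usual inverse transform; which direction the paper intends is not fully specified by the abstract.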

  6. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.

  7. Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling

    NASA Astrophysics Data System (ADS)

    Bierhorst, Peter; Shalm, Lynden K.; Mink, Alan; Jordan, Stephen; Liu, Yi-Kai; Rommal, Andrea; Glancy, Scott; Christensen, Bradley; Nam, Sae Woo; Knill, Emanuel

    Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment to obtain data containing randomness that cannot be predicted by any theory that does not also allow the sending of signals faster than the speed of light. To certify and quantify the randomness, we develop a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtain 256 new random bits, uniform to within 10⁻³.

  8. Some limit theorems for ratios of order statistics from uniform random variables.

    PubMed

    Xu, Shou-Fang; Miao, Yu

    2017-01-01

    In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.

  9. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the optimal parameter choice for localized random CS is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
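
    The measurement-selection step described above can be sketched as follows. The Gaussian distance falloff and the `scale` parameter are assumptions for illustration; the paper's exact inclusion-probability profile may differ.

```python
import math
import random

def localized_sample(width, height, scale, rng):
    """One localized measurement: pick a center pixel uniformly at
    random, then include each other pixel with a probability that
    decays with its distance from the center (Gaussian falloff
    assumed here)."""
    cx, cy = rng.randrange(width), rng.randrange(height)
    chosen = [(cx, cy)]
    for x in range(width):
        for y in range(height):
            if (x, y) == (cx, cy):
                continue
            d = math.hypot(x - cx, y - cy)
            if rng.random() < math.exp(-(d / scale) ** 2):
                chosen.append((x, y))
    return chosen

rng = random.Random(0)
pixels = localized_sample(32, 32, 2.0, rng)  # one measurement's support set
```

Repeating this for many centers gives the full set of localized measurements; setting the falloff scale very large recovers uniformly-random sampling as a limiting case.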

  10. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. Targeting the features of the output signal of the TGC detector, we have designed a simulation signal source. The core of the design is based on a field programmable gate array, randomly outputting 256-channel simulation signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter in ring oscillators. The experimental results show that the random numbers are uniform in histogram, and the whole system has high reliability.

  11. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Kun; Wang, Xu; Li, Feng

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. Targeting the features of the output signal of the TGC detector, we have designed a simulation signal source. The core of the design is based on a field programmable gate array, randomly outputting 256-channel simulation signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter in ring oscillators. The experimental results show that the random numbers are uniform in histogram, and the whole system has high reliability.

  12. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    inverse transform method to obtain unit-mean exponential random variables, where Vi is the jth random number in the sequence of a stream of uniform random...numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = -P(b,c,d...Defender, C * P(b,c,d)...We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
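
    The inverse transform method named in the excerpt, applied to unit-mean exponential variables, can be sketched as follows (a standard textbook construction, not this thesis's code):

```python
import math
import random

def exponential_from_uniform(u, mean=1.0):
    """Inverse transform method: if U ~ Uniform(0, 1), then
    X = -mean * ln(1 - U) follows an exponential distribution
    with the given mean."""
    return -mean * math.log(1.0 - u)

rng = random.Random(42)
xs = [exponential_from_uniform(rng.random()) for _ in range(20000)]
sample_mean = sum(xs) / len(xs)  # close to 1.0 for the unit-mean case
```

The method works because the exponential CDF F(x) = 1 - e^{-x/mean} has the closed-form inverse F^{-1}(u) = -mean·ln(1 - u).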

  13. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
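
    One standard way to build such pairs from uniform deviates, Box-Muller followed by a correlation transform, is sketched below. This is a common construction, not necessarily the FORTRAN routine the abstract describes.

```python
import math
import random

def bivariate_normal_pair(mu1, mu2, s1, s2, rho, rng):
    """Generate one correlated normal pair from two uniforms:
    Box-Muller yields independent standard normals z1, z2, and
    x2 mixes z1 and z2 so that corr(x1, x2) = rho."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))
    z1 = r * math.cos(2.0 * math.pi * u2)
    z2 = r * math.sin(2.0 * math.pi * u2)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x1, x2

rng = random.Random(7)
pairs = [bivariate_normal_pair(0.0, 0.0, 1.0, 1.0, 0.8, rng)
         for _ in range(20000)]
```

The mixing step is the 2x2 Cholesky factor of the correlation matrix, which is why the technique is exact in theory and limited only by the uniform generator and floating-point arithmetic in practice.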

  14. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over Rp, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.

  15. Standard random number generation for MBASIC

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a(m+532) = a(m+37) + a(m) (modulo 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
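
    The stated recurrence and 28-bit word packing can be sketched directly. Modulo-2 addition is XOR; the random seeding of the first 532 bits is an assumption made here for illustration (the report's seeding procedure is not given in the abstract).

```python
import random

def recurrence_bits(seed_bits, n_bits):
    """Linear recurrence modulo 2: each new bit satisfies
    a[m+532] = a[m+37] XOR a[m].  seed_bits must supply the
    first 532 bits of the stream."""
    bits = list(seed_bits)
    assert len(bits) == 532
    for m in range(n_bits - 532):
        bits.append(bits[m + 37] ^ bits[m])
    return bits[:n_bits]

def words_28bit(bits):
    """Pack nonoverlapping adjacent 28-bit words, as in the abstract."""
    return [int("".join(map(str, bits[i:i + 28])), 2)
            for i in range(0, len(bits) - 27, 28)]

rng = random.Random(3)
seed = [rng.randrange(2) for _ in range(532)]
stream = recurrence_bits(seed, 28 * 100)
words = words_28bit(stream)
```

The period claim comes from the trinomial x^532 + x^37 + 1 underlying the recurrence; a primitive trinomial of degree 532 gives a period of 2^532 - 1 bits.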

  16. Pathwise upper semi-continuity of random pullback attractors along the time axis

    NASA Astrophysics Data System (ADS)

    Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke

    2018-07-01

    The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically having the form {A_t(·)}_{t∈R} with each A_t(·) a random set. This paper is concerned with the nature of such time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω) for each fixed ω has an equivalence relationship with the uniform compactness of the local union ∪_{s∈I} A_s(ω), where I ⊂ R is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that no additional conditions are required in order to prove the above locally uniform compactness and upper semi-continuity, in which sense the two properties appear to be general properties satisfied by a large number of real models.

  17. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
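
    The least-significant-bits concatenation method and the Shannon entropy measurement can be sketched as follows. The sensor readings here are synthetic stand-ins (real ADC data is not available in this record), and the 2-bit harvest width is an assumption.

```python
import math
import random

def harvest_bytes(readings, lsb_count=2):
    """Least-significant-bits concatenation: keep the lowest
    `lsb_count` bits of each integer sensor reading and pack the
    concatenated bit stream into bytes."""
    bits = []
    for r in readings:
        for k in reversed(range(lsb_count)):
            bits.append((r >> k) & 1)
    return [int("".join(map(str, bits[i:i + 8])), 2)
            for i in range(0, len(bits) - 7, 8)]

def shannon_entropy_bits_per_byte(data):
    """Shannon entropy of the byte histogram, in bits per byte
    (8.0 is the ideal for a uniform source)."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical raw readings standing in for ADC output of a sensor.
rng = random.Random(5)
readings = [1000 + rng.randrange(64) for _ in range(4000)]
entropy = shannon_entropy_bits_per_byte(harvest_bytes(readings))
```

Values around 7.9 bits per byte, as reported for the temperature and humidity sensors, indicate a nearly uniform byte histogram; note that with a finite sample the empirical entropy always falls slightly below 8.0.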

  18. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.

  19. Small violations of Bell inequalities for multipartite pure random states

    NASA Astrophysics Data System (ADS)

    Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.

    2018-05-01

    For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.

  20. Misinterpretation of statistical distance in security of quantum key distribution shown by simulation

    NASA Astrophysics Data System (ADS)

    Iwakoshi, Takehisa; Hirota, Osamu

    2014-10-01

    This study tests an interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both the key uniformity in the context of universal composability and the operational meaning of the failure probability of the key extraction. However, this proposal has not been verified concretely for many years, while H. P. Yuen and O. Hirota have cast doubt on this interpretation since 2009. To ascertain this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to trace distance in quantum theory after a quantum measurement is made, and compared it with the failure probability to see whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why trace distance is not suitable to guarantee the security of QKD from the viewpoint of quantum binary decision theory.
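
    The statistical distance in question is the total variation distance between the empirical output distribution and the ideal uniform one. A minimal sketch, with a pseudorandom source standing in for the physical generator:

```python
import random

def statistical_distance_from_uniform(samples, n_bins):
    """Total variation (statistical) distance between the empirical
    distribution of integer samples in [0, n_bins) and the ideal
    uniform distribution: (1/2) * sum_i |p_i - 1/n_bins|."""
    counts = [0] * n_bins
    for s in samples:
        counts[s] += 1
    n = len(samples)
    return 0.5 * sum(abs(c / n - 1.0 / n_bins) for c in counts)

rng = random.Random(11)
samples = [rng.randrange(256) for _ in range(10000)]
d = statistical_distance_from_uniform(samples, 256)
```

Even for a well-behaved generator, finite-sample fluctuations make this empirical distance orders of magnitude larger than security targets such as 10⁻¹², which is in line with the abstract's observation.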

  1. Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.

    2018-05-01

    Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as to smooth ones, with some technical differences in the evaluation of the integrals and analytical arguments.
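
    The isolated-node count in the baseline uniform-square model is easy to check numerically; the parameter values below are arbitrary, and boundary effects are ignored in the quoted mean formula.

```python
import math
import random

def isolated_count(n, r, rng):
    """Drop n uniform points in the unit square and count nodes with
    no neighbour within distance r (isolated nodes of the random
    geometric graph)."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    isolated = 0
    for i, (xi, yi) in enumerate(pts):
        if all(i == j or math.hypot(xi - xj, yi - yj) > r
               for j, (xj, yj) in enumerate(pts)):
            isolated += 1
    return isolated

rng = random.Random(2)
# Ignoring boundary effects, the mean count is about n * exp(-n * pi * r**2),
# and in the sparse limit the count is approximately Poisson distributed.
k = isolated_count(200, 0.05, rng)
```

Repeating the experiment many times and comparing the count histogram against a Poisson fit reproduces the classical property that the abstract says nonuniformity can break.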

  2. An investigation of the uniform random number generator

    NASA Technical Reports Server (NTRS)

    Temple, E. C.

    1982-01-01

    Most random number generators in use today are of the congruential form X(i+1) = (A·X(i) + C) mod M, where A, C, and M are nonnegative integers. If C = 0, the generator is called the multiplicative type, and those for which C ≠ 0 are called mixed congruential generators. It is easy to see that congruential generators will repeat a sequence of numbers after a maximum of M values have been generated. The number of values that a procedure generates before restarting the sequence is called the length or the period of the generator. Generally, it is desirable to make the period as long as possible. A detailed discussion of congruential generators is given. Also, several promising procedures that differ from the multiplicative and mixed procedures are discussed.
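
    The congruential form and its period can be illustrated directly. The small parameters below are for illustration only; they satisfy the Hull-Dobell conditions for a full period (C coprime to M, A-1 divisible by every prime factor of M, and by 4 when 4 divides M).

```python
def lcg(a, c, m, seed):
    """Mixed congruential generator X(i+1) = (a*X(i) + c) mod m
    (multiplicative when c = 0)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(a, c, m, seed):
    """Length of the cycle before the sequence restarts (at most m)."""
    seen = {}
    x = seed
    for i in range(m + 1):
        if x in seen:
            return i - seen[x]
        seen[x] = i
        x = (a * x + c) % m
    return m

# Full-period mixed generator: period equals m = 16.
p = period(5, 3, 16, 0)
```

Dropping C to 0 with the same A and M (a multiplicative generator) shrinks the period, which is why the distinction between the two types matters in practice.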

  3. CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties

    DTIC Science & Technology

    2017-03-01

    inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], making it suitable for kernel function implementation. By varying bias...cost function/constraint variables are generated based on the inverse transform of the CDF. In Fig. 5, F^{-1}(u) for a uniformly distributed random number u ∈ [0, 1...extracts random samples of x varying with the CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse

  4. Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei

    2018-04-01

    We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.

  5. Experimentally generated randomness certified by the impossibility of superluminal signals.

    PubMed

    Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K

    2018-04-01

    From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10⁻¹². These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.

  6. DG-IMEX Stochastic Galerkin Schemes for Linear Transport Equation with Random Inputs and Diffusive Scalings

    DOE PAGES

    Chen, Zheng; Liu, Liu; Mu, Lin

    2017-05-03

    In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: in particular, uniform numerical stability with respect to the Knudsen number ϵ, and a uniform-in-ϵ error estimate, are given. For temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and sAP of the method are conducted.

  7. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
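
    The register scheme echoes a statistical-physics setup in which random pairwise exchanges conserving a total quantity relax to an exponential (Gibbs) distribution. One well-known exchange rule in that spirit is sketched below; it is an analogy, not the authors' algorithm, and unlike the reported self-contained version it still consumes uniform deviates.

```python
import random

def exchange_relax(values, steps, rng):
    """Random pairwise exchange keeping the total sum constant:
    repeatedly pick two registers, pool their contents, and split
    the pool uniformly at random.  The stationary distribution of a
    single register is exponential (the Gibbs form for this system)."""
    v = list(values)
    n = len(v)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        s = v[i] + v[j]
        u = rng.random()
        v[i], v[j] = u * s, (1.0 - u) * s
    return v

rng = random.Random(9)
v = exchange_relax([1.0] * 2000, 100000, rng)  # relax toward exponential
```

After equilibration, reading registers off in random order yields approximately exponential deviates with mean equal to the conserved sum divided by N.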

  8. Impact of Beamforming on the Path Connectivity in Cognitive Radio Ad Hoc Networks

    PubMed Central

    Dung, Le The; Hieu, Tran Dinh; Choi, Seong-Gon; Kim, Byung-Seo; An, Beongku

    2017-01-01

    This paper investigates the impact of using directional antennas and beamforming schemes on the connectivity of cognitive radio ad hoc networks (CRAHNs). Specifically, considering that secondary users use two kinds of directional antennas, i.e., uniform linear array (ULA) and uniform circular array (UCA) antennas, and two different beamforming schemes, i.e., randomized beamforming and center-directed to communicate with each other, we study the connectivity of all combination pairs of directional antennas and beamforming schemes and compare their performances to those of omnidirectional antennas. The results obtained in this paper show that, compared with omnidirectional transmission, beamforming transmission only benefits the connectivity when the density of secondary user is moderate. Moreover, the combination of UCA and randomized beamforming scheme gives the highest path connectivity in all evaluating scenarios. Finally, the number of antenna elements and degree of path loss greatly affect path connectivity in CRAHNs. PMID:28346377

  9. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
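
    The closing remark, that the uniform and normal generators can drive other distributions, can be illustrated with two standard constructions (illustrative sketches, not the report's Fortran routines):

```python
import math
import random

def weibull_from_uniform(shape, scale, u):
    """Inversion: X = scale * (-ln(1 - U))**(1/shape) is Weibull
    with the given shape and scale when U ~ Uniform(0, 1)."""
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def chi_square_from_normals(df, rng):
    """The sum of df squared standard normals is chi-square with
    df degrees of freedom."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

rng = random.Random(4)
xs = [chi_square_from_normals(4, rng) for _ in range(5000)]
mean_chi2 = sum(xs) / len(xs)  # expected value is df = 4
```

Shape 1 in the Weibull sketch reduces to the exponential distribution, which is the usual sanity check for the inversion formula.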

  10. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  11. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
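    The map itself is simple: the Bernoulli (doubling) map x -> 2x mod 1 emits one binary digit per iteration. A minimal sketch follows; note the simplification that rational seeds give eventually periodic orbits, which is precisely why the paper iterates exactly on quadratic algebraic integers instead:

```python
from fractions import Fraction

def bernoulli_bits(seed, n_bits):
    """Emit bits from the Bernoulli (doubling) map x -> 2x mod 1.

    Exact rational arithmetic avoids floating-point roundoff, but a rational
    seed yields an eventually periodic orbit; the paper's exact computation
    on quadratic algebraic integers is what produces true chaotic orbits.
    """
    x = Fraction(seed)
    bits = []
    for _ in range(n_bits):
        bits.append(1 if x >= Fraction(1, 2) else 0)  # leading binary digit
        x = (2 * x) % 1                               # doubling map step
    return bits
```

For example, the seed 1/3 has the periodic binary expansion 0.010101..., so the generator alternates 0 and 1, which illustrates why rational seeds are unsuitable.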

  12. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
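    The theoretical result E[range] = (n - 1)/(n + 1) for n uniform draws, and hence the correction factor (n + 1)/(n - 1), can be checked by a short Monte Carlo sketch (function name and trial count are illustrative):

```python
import random

def mean_sample_range(n, trials, seed=0):
    """Monte Carlo estimate of the expected range of n Uniform(0,1) draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        total += max(xs) - min(xs)
    return total / trials

# Theory: E[range] = (n - 1)/(n + 1), so the true interval is recovered by
# multiplying the sample range by (n + 1)/(n - 1). For n = 10 the sample
# range covers about 9/11, roughly 80% of the true interval.
```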

  13. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    PubMed

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
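    Techniques that estimate distance from received signal strength conventionally rely on the log-distance path-loss model, which is exactly the prior propagation knowledge the proposed approach avoids assuming. A hedged sketch of that conventional model (function and parameter names are illustrative):

```python
def distance_from_rss(rss_dbm, rss_at_d0_dbm, path_loss_exp, d0=1.0):
    """Invert the log-distance path-loss model to estimate distance.

    Model: RSS(d) = RSS(d0) - 10 * n * log10(d / d0), where n is the
    path-loss exponent and d0 a reference distance. The paper's method
    does not assume these parameters; this sketch shows what RSS-based
    distance estimators conventionally require.
    """
    return d0 * 10 ** ((rss_at_d0_dbm - rss_dbm) / (10 * path_loss_exp))

# Example: with -40 dBm at 1 m and exponent 2 (free space),
# a reading of -60 dBm corresponds to roughly 10 m.
```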

  14. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    PubMed Central

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-01

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042

  15. Wave Propagation inside Random Media

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaojun

    This thesis presents results of studies of wave scattering within and transmission through random and periodic systems. The main focus is on energy profiles inside quasi-1D and 1D random media. The connection between transport and the states of the medium is manifested in the equivalence of the dimensionless conductance, g, and the Thouless number, which is the ratio of the average linewidth to the spacing of energy levels. This equivalence, and theories of the energy profiles inside random media, are based on the assumption that the local density of states (LDOS) is uniform throughout the samples. We have conducted microwave measurements of the longitudinal energy profiles within disordered samples contained in a copper tube supporting multiple waveguide channels, with an antenna moving along a slit on the tube. These measurements allow us to determine the LDOS at a location, which is the sum of energy from all incoming channels on both sides. For diffusive samples, the LDOS is uniform and the energy profile decays linearly, as expected. However, for localized samples, we find that the LDOS drops sharply towards the middle of the sample and the energy profile does not follow the result of local diffusion theory, in which the LDOS is assumed to be uniform. We decompose the field spectra into quasi-normal modes and find that the mode linewidth and the number of modes saturate as the sample length increases. Thus the Thouless number saturates while the dimensionless conductance g continues to fall with increasing length, indicating that the modes are localized near the boundaries. This is in contrast to the general belief that g and the Thouless number follow the same scaling behavior. Previous measurements show that single parameter scaling (SPS) still holds in the same samples where the LDOS is suppressed [shi2014microwave]. 
We explore the extension of SPS to the interior of the sample by analyzing statistics of the logarithm of the energy density, ln W(x), and find that ⟨ln W(x)⟩ = -x/l, where l is the transport mean free path. The result does not depend on the sample length, which is counterintuitive yet remarkably simple. More surprisingly, the linear fall-off of the energy profile holds for totally disordered random 1D layered samples in simulations, where the LDOS is uniform, as well as for single-mode random waveguide experiments and 1D nearly periodic samples, where the LDOS is suppressed in the middle of the sample. The generalization of the transmission matrix to the interior of quasi-1D random samples, defined as the field matrix, and the statistics of its eigenvalues are also discussed. The maximum energy deposition at a location is given not by the intensity of the first transmission eigenchannel but by the eigenvalue of the first energy density eigenchannel at that cross section, which can be much greater than the average value. The contrast in optimal focusing, which is the ratio of the intensity at the focal point to the background intensity, is determined by the participation number of the energy density eigenvalues, and its inverse gives the variance of the energy density at that cross section in a single configuration. We have also studied topological states in photonic structures. We have demonstrated robust propagation of electromagnetic waves along reconfigurable pathways within a topological photonic metacrystal. Since the wave is confined within the domain wall, which is the boundary between two distinct topological insulating systems, we can freely steer the wave by reconstructing the photonic structure. Other topics, such as speckle pattern evolution and the effects of boundary conditions on the statistics of transmission eigenvalues and energy profiles, are also discussed.

  16. Effects of asymmetric rolling process on ridging resistance of ultra-purified 17%Cr ferritic stainless steel

    NASA Astrophysics Data System (ADS)

    Lu, Cheng-zhuang; Li, Jing-yuan; Fang, Zhi

    2018-02-01

    In ferritic stainless steels, a significant non-uniform recrystallization orientation and a substantial texture gradient usually occur, which can degrade the ridging resistance of the final sheets. To improve the homogeneity of the recrystallization orientation and reduce the texture gradient in ultra-purified 17%Cr ferritic stainless steel, in this work, we performed conventional and asymmetric rolling processes and conducted macro- and micro-texture analyses to investigate texture evolution under different cold-rolling conditions. In the conventional rolling specimens, we observed that the deformation was not uniform in the thickness direction, whereas there was homogeneous shear deformation in the asymmetric rolling specimens as well as the formation of uniform recrystallized grains and random orientation grains in the final annealing sheets. As such, the ridging resistance of the final sheets was significantly improved by employing the asymmetric rolling process. This result indicates that the texture gradient and orientation inhomogeneity can be attributed to non-uniform deformation, whereas the uniform orientation gradient in the thickness direction is explained by the increased number of shear bands obtained in the asymmetric rolling process.

  17. Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space

    PubMed Central

    Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred

    2016-01-01

    Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets, alongside the identification of corresponding genomic prediction models, for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. 
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112

  18. Exact Markov chains versus diffusion theory for haploid random mating.

    PubMed

    Tyvand, Peder A; Thorvaldsen, Steinar

    2010-05-01

    Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck.
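    The Wright-Fisher dynamics discussed above amount to binomial resampling of the allele count each generation. A minimal drift sketch (illustrative only; the paper's exact Markov-chain computation and diffusion comparison are not reproduced here):

```python
import random

def wright_fisher(pop_size, n0, generations, seed=1):
    """Haploid Wright-Fisher drift without selection or mutation.

    Each generation the allele count is resampled binomially from the
    previous generation's allele frequency. Sketch of the model class
    only; parameters are illustrative.
    """
    rng = random.Random(seed)
    counts = [n0]
    n = n0
    for _ in range(generations):
        p = n / pop_size
        # Binomial(pop_size, p) draw via pop_size Bernoulli trials.
        n = sum(rng.random() < p for _ in range(pop_size))
        counts.append(n)
    return counts
```

Running many such trajectories shows alleles drifting to fixation or loss, the genetic-information loss that diffusion theory tends to under-predict in bottlenecks.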

  19. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. 
It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
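    The jittering idea can be sketched as stratified draws warped through a signal-envelope-like weighting: one uniform point per stratum, so different seeds cannot produce badly clumped subsets. The decay constant, helper name, and exponential envelope below are assumptions for illustration, not the paper's exact scheme:

```python
import math
import random

def jittered_nus(n_points, n_samples, decay=3.0, seed=0):
    """Jittered non-uniform sampling of indices 0..n_points-1.

    Split [0,1) into n_samples strata, draw one uniform point per stratum
    (the jitter), then warp each draw through the inverse CDF of a
    truncated-exponential weighting so early indices, where the NMR signal
    envelope is strongest, are favored. Illustrative sketch only.
    """
    rng = random.Random(seed)
    picks = set()
    for i in range(n_samples):
        u = (i + rng.random()) / n_samples                      # one draw per stratum
        t = -math.log(1 - u * (1 - math.exp(-decay))) / decay   # inverse CDF warp
        picks.add(min(int(t * n_points), n_points - 1))
    return sorted(picks)
```

Because every stratum contributes exactly one draw, the sampled density tracks the target PDF closely for any seed, which is the reproducibility property the jittered approach provides.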

  20. Color image encryption based on hybrid hyper-chaotic system and cellular automata

    NASA Astrophysics Data System (ADS)

    Yaghouti Niyat, Abolfazl; Moattar, Mohammad Hossein; Niazi Torshiz, Masood

    2017-03-01

    This paper proposes an image encryption scheme based on Cellular Automata (CA). CA is a self-organizing structure with a set of cells in which each cell is updated by certain rules that are dependent on a limited number of neighboring cells. The major disadvantages of cellular automata in cryptography include the limited number of reversal rules and the inability to produce long sequences of states by these rules. In this paper, a non-uniform cellular automata framework is proposed to solve this problem. The proposed scheme consists of confusion and diffusion steps. In the confusion step, the positions of the original image pixels are replaced by chaos mapping. The key image is created using non-uniform cellular automata, and the hyper-chaotic mapping is then used to select random numbers from the key image for encryption. The main contribution of the paper is the application of hyper-chaotic functions and non-uniform CA for robust key image generation. Security analysis and experimental results show that the proposed method has a very large key space and is resistant to noise and attacks. The correlation between adjacent pixels in the encrypted image is reduced, and the entropy is 7.9991, very close to the ideal value of 8.
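    The entropy figure cited above is standard Shannon entropy in bits per byte, which can be computed for any byte stream with a short sketch (a generic measure, not the paper's code):

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy of a byte sequence, in bits per byte.

    A well-encrypted image should have a nearly flat byte histogram,
    giving entropy close to the ideal 8 bits per byte.
    """
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A perfectly uniform byte histogram gives exactly 8 bits per byte,
# while a constant stream gives 0.
```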

  1. Random-effects meta-analysis: the number of studies matters.

    PubMed

    Guolo, Annamaria; Varin, Cristiano

    2017-06-01

    This paper investigates the impact of the number of studies on meta-analysis and meta-regression within the random-effects model framework. It is frequently neglected that inference in random-effects models requires a substantial number of studies included in meta-analysis to guarantee reliable conclusions. Several authors warn about the risk of inaccurate results of the traditional DerSimonian and Laird approach especially in the common case of meta-analysis involving a limited number of studies. This paper presents a selection of likelihood and non-likelihood methods for inference in meta-analysis proposed to overcome the limitations of the DerSimonian and Laird procedure, with a focus on the effect of the number of studies. The applicability and the performance of the methods are investigated in terms of Type I error rates and empirical power to detect effects, according to scenarios of practical interest. Simulation studies and applications to real meta-analyses highlight that it is not possible to identify an approach uniformly superior to alternatives. The overall recommendation is to avoid the DerSimonian and Laird method when the number of meta-analysis studies is modest and prefer a more comprehensive procedure that compares alternative inferential approaches. R code for meta-analysis according to all of the inferential methods examined in the paper is provided.
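    For reference, the DerSimonian and Laird estimate criticized above is a simple method-of-moments formula for the between-study variance. A minimal sketch of that baseline (not of the likelihood-based alternatives the paper recommends comparing):

```python
def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments between-study variance (DerSimonian-Laird).

    With fixed-effect weights w_i = 1/v_i and Q the weighted sum of squared
    deviations from the weighted mean:
        tau^2 = max(0, (Q - (k - 1)) / (S1 - S2/S1)),
    where S1 = sum(w_i), S2 = sum(w_i^2), and k is the number of studies.
    """
    w = [1.0 / v for v in variances]
    s1 = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / s1
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    s2 = sum(wi ** 2 for wi in w)
    return max(0.0, (q - (k - 1)) / (s1 - s2 / s1))
```

With few studies, Q is estimated from very little data, which is the instability the paper warns about; the truncation at zero also makes the estimator behave poorly for small k.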

  2. The Enrichment of Smoler’s Model of Land Combat.

    DTIC Science & Technology

    1980-09-01

    Glenn M. Mills ... each unit prior to the initiation of the battle. This realization, A, is determined by using a random Uniform(0,1) number and the above formula. A new ... move to an alternate position the user has selected. The duration of the move is also a user input. He simply specifies the number of 10-second time
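    Realizing a quantity from a Uniform(0,1) number and a formula, as this record describes, is the standard inverse-transform method. A hedged sketch for an exponential variate (the report's actual formula is not given in the record, so the distribution here is illustrative):

```python
import math

def exponential_from_uniform(u, rate):
    """Inverse-transform sampling: map a Uniform(0,1) draw to Exponential(rate).

    For CDF F(x) = 1 - exp(-rate * x), solving F(x) = u gives
    x = -ln(1 - u) / rate. Any distribution with an invertible CDF can be
    realized from a uniform number this way.
    """
    return -math.log(1.0 - u) / rate

# u = 0.5 recovers the median of the exponential, ln(2)/rate.
```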

  3. Carbon nanotube bundles with tensile strength over 80 GPa.

    PubMed

    Bai, Yunxiang; Zhang, Rufan; Ye, Xuan; Zhu, Zhenxing; Xie, Huanhuan; Shen, Boyuan; Cai, Dali; Liu, Bofei; Zhang, Chenxi; Jia, Zhao; Zhang, Shenli; Li, Xide; Wei, Fei

    2018-05-14

    Carbon nanotubes (CNTs) are one of the strongest known materials. When assembled into fibres, however, their strength becomes impaired by defects, impurities, random orientations and discontinuous lengths. Fabricating CNT fibres with strength reaching that of a single CNT has been an enduring challenge. Here, we demonstrate the fabrication of CNT bundles (CNTBs) that are centimetres long with tensile strength over 80 GPa using ultralong defect-free CNTs. The tensile strength of CNTBs is controlled by the Daniels effect owing to the non-uniformity of the initial strains in the components. We propose a synchronous tightening and relaxing strategy to release these non-uniform initial strains. The fabricated CNTBs, consisting of a large number of components with parallel alignment, defect-free structures, continuous lengths and uniform initial strains, exhibit a tensile strength of 80 GPa (corresponding to an engineering tensile strength of 43 GPa), which is far higher than that of any other strong fibre.

  4. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    PubMed

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimen from porcine tissues is presented. 
The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
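    The systematic random sampling design mentioned above can be sketched in a few lines: a single random start, then a fixed period. This is only the sampling-design principle; the stereological details (point counting, IUR/VUR section generation) are beyond this illustration:

```python
import random

def systematic_uniform_sample(items, period, seed=0):
    """Systematic uniform random sampling.

    Choose one random start in [0, period), then take every period-th item.
    Every item has the same inclusion probability 1/period, yet the sample
    is spread evenly across the whole sequence, unlike independent draws.
    """
    start = random.Random(seed).randrange(period)
    return items[start::period]
```

In stereology the same principle is applied to slab positions or grid points, which is why a single random offset suffices for unbiased volume estimates.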

  5. Fluid Physics Under a Stochastic Acceleration Field

    NASA Technical Reports Server (NTRS)

    Vinals, Jorge

    2001-01-01

    The research summarized in this report has involved a combined theoretical and computational study of fluid flow that results from the random acceleration environment present onboard space orbiters, also known as g-jitter. We have focused on a statistical description of the observed g-jitter, on the flows that such an acceleration field can induce in a number of experimental configurations of interest, and on extending previously developed methodology to boundary layer flows. Narrow band noise has been shown to describe many of the features of acceleration data collected during space missions. The scale of baroclinically induced flows when the driving acceleration is random is not given by the Rayleigh number. Spatially uniform g-jitter induces additional hydrodynamic forces among suspended particles in incompressible fluids. Stochastic modulation of the control parameter shifts the location of the onset of an oscillatory instability. Random vibration of solid boundaries leads to separation of boundary layers. Steady streaming ahead of a modulated solid-melt interface enhances solute transport, and modifies the stability boundaries of a planar front.

  6. Development and first use of a novel cylindrical ball bearing phantom for 9-DOF geometric calibrations of flat panel imaging devices used in image-guided ion beam therapy

    NASA Astrophysics Data System (ADS)

    Zechner, A.; Stock, M.; Kellner, D.; Ziegler, I.; Keuschnigg, P.; Huber, P.; Mayer, U.; Sedlmayer, F.; Deutschmann, H.; Steininger, P.

    2016-11-01

    Image guidance during highly conformal radiotherapy requires accurate geometric calibration of the moving components of the imager. Due to limited manufacturing accuracy and gravity-induced flex, an x-ray imager’s deviation from the nominal geometrical definition has to be corrected for. For this purpose a ball bearing phantom applicable for nine degrees of freedom (9-DOF) calibration of a novel cone-beam computed tomography (CBCT) scanner was designed and validated. In order to ensure accurate automated marker detection, as many uniformly distributed markers as possible should be used with a minimum projected inter-marker distance of 10 mm. Three different marker distributions on the phantom cylinder surface were simulated. First, a fixed number of markers are selected and their coordinates are randomly generated. Second, the quasi-random method is represented by setting a constraint on the marker distances in the projections. The third approach generates the ball coordinates helically based on the Golden ratio, ϕ. Projection images of the phantom incorporating the CBCT scanner’s geometry were simulated and analysed with respect to uniform distribution and intra-marker distance. Based on the evaluations a phantom prototype was manufactured and validated by a series of flexmap calibration measurements and analyses. The simulation with randomly distributed markers as well as the quasi-random approach showed an insufficient uniformity of the distribution over the detector area. The best compromise between uniform distribution and a high packing fraction of balls is provided by the Golden section approach. A prototype was manufactured accordingly. The phantom was validated for 9-DOF geometric calibrations of the CBCT scanner with independently moveable source and detector arms. A novel flexmap calibration phantom intended for 9-DOF was developed. The ball bearing distribution based on the Golden section was found to be highly advantageous. 
The phantom showed satisfying results for calibrations of the CBCT scanner and provides the basis for further flexmap correction and reconstruction developments.
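    The Golden-section placement described above can be sketched as equally spaced axial positions with the azimuth advancing by the golden angle, so that projected markers stay close to uniformly distributed. This is an illustrative reconstruction of the stated principle, not the validated phantom geometry:

```python
import math

def golden_helix_markers(n_markers, radius, height):
    """Place ball-bearing markers helically on a cylinder surface.

    Axial positions are equally spaced over the cylinder height; the
    azimuth advances by the golden angle (2*pi/phi), which avoids the
    clustering seen with purely random placements. Dimensions are
    illustrative assumptions.
    """
    phi = (1 + math.sqrt(5)) / 2
    markers = []
    for i in range(n_markers):
        z = (i + 0.5) / n_markers * height
        theta = 2 * math.pi * ((i / phi) % 1.0)   # golden-angle increment
        markers.append((radius * math.cos(theta), radius * math.sin(theta), z))
    return markers
```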

  7. Development and first use of a novel cylindrical ball bearing phantom for 9-DOF geometric calibrations of flat panel imaging devices used in image-guided ion beam therapy.

    PubMed

    Zechner, A; Stock, M; Kellner, D; Ziegler, I; Keuschnigg, P; Huber, P; Mayer, U; Sedlmayer, F; Deutschmann, H; Steininger, P

    2016-11-21

    Image guidance during highly conformal radiotherapy requires accurate geometric calibration of the moving components of the imager. Due to limited manufacturing accuracy and gravity-induced flex, an x-ray imager's deviation from the nominal geometrical definition has to be corrected for. For this purpose a ball bearing phantom applicable for nine degrees of freedom (9-DOF) calibration of a novel cone-beam computed tomography (CBCT) scanner was designed and validated. In order to ensure accurate automated marker detection, as many uniformly distributed markers as possible should be used with a minimum projected inter-marker distance of 10 mm. Three different marker distributions on the phantom cylinder surface were simulated. First, a fixed number of markers are selected and their coordinates are randomly generated. Second, the quasi-random method is represented by setting a constraint on the marker distances in the projections. The third approach generates the ball coordinates helically based on the Golden ratio, ϕ. Projection images of the phantom incorporating the CBCT scanner's geometry were simulated and analysed with respect to uniform distribution and intra-marker distance. Based on the evaluations a phantom prototype was manufactured and validated by a series of flexmap calibration measurements and analyses. The simulation with randomly distributed markers as well as the quasi-random approach showed an insufficient uniformity of the distribution over the detector area. The best compromise between uniform distribution and a high packing fraction of balls is provided by the Golden section approach. A prototype was manufactured accordingly. The phantom was validated for 9-DOF geometric calibrations of the CBCT scanner with independently moveable source and detector arms. A novel flexmap calibration phantom intended for 9-DOF was developed. The ball bearing distribution based on the Golden section was found to be highly advantageous. 
The phantom showed satisfying results for calibrations of the CBCT scanner and provides the basis for further flexmap correction and reconstruction developments.

  8. Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers

    NASA Astrophysics Data System (ADS)

    Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan

    2018-04-01

    The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi-random (NU-QR) reordering of the phase encoding (ky–kz) lines was incorporated into 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of the k-space. Respiratory self-gating in combination with SPIRiT-reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi-random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan times (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile of a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times. Therefore, within the same scan time 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion-artifact-reduced 4D data sets with isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm³ in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed. 
NU-QR sampling enables stable 4D-MRI with high temporal and spatial resolution within a short scan time for visualization of organ or tumor motion during free breathing. Further studies, e.g. the application of the method to radiotherapy planning, are needed to investigate the clinical applicability and diagnostic value of the approach.

  9. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We strengthen the previous result on quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
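
    As a small illustration of the matrix model in this abstract, the d = 1 case can be sketched as follows (a sketch only: the entry range [-1, 1] and the real symmetrization are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def band_matrix(L, W, rng=None):
    """Random band matrix on a 1D lattice of L sites: H[x, y] is a
    uniformly distributed random variable when |x - y| < W and zero
    otherwise, symmetrized so that H is (real) Hermitian."""
    rng = np.random.default_rng(rng)
    A = rng.uniform(-1.0, 1.0, size=(L, L))
    H = (A + A.T) / 2.0                            # Hermitian (real symmetric)
    x = np.arange(L)
    return H * (np.abs(x[:, None] - x[None, :]) < W)

H = band_matrix(L=200, W=10, rng=0)
assert np.allclose(H, H.T)                         # Hermitian
assert not np.any(np.diag(H, k=10))                # zero outside the band
```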

  10. Zipf's law in city size from a resource utilization model.

    PubMed

    Ghosh, Asim; Chatterjee, Arnab; Chakrabarti, Anindya S; Chakrabarti, Bikas K

    2014-10-01

    We study a resource utilization scenario characterized by intrinsic fitness. To describe the growth and organization of different cities, we consider a model for resource utilization where many restaurants compete, as in a game, to attract customers using an iterative learning process. Results for the case of restaurants with uniform fitness are reported. When fitness is uniformly distributed, it gives rise to a Zipf law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of fitness. A variant of the model is also introduced where the fitness can be treated as an ability to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, while the distribution of the number of customers still follows the Zipf law, implying the robustness of the model. Our model serves as a paradigm for the emergence of Zipf law in city size distribution.
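
    One simplified variant of the stay-or-move dynamics can be sketched numerically (an illustrative reading of the model, not the authors' exact update rule: each agent keeps its current restaurant with probability equal to that restaurant's fitness and otherwise picks one uniformly at random; with uniform fitness the stationary occupancies are proportional to 1/(1 - p_k), whose ranked sizes follow a Zipf-like law):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000                          # agents and restaurants
fitness = rng.uniform(0, 1, N)    # p_k: probability an agent stays put
city = rng.integers(0, N, N)      # current restaurant of each agent

for _ in range(3000):             # iterate the stay-or-move dynamics
    movers = rng.uniform(size=N) >= fitness[city]
    city[movers] = rng.integers(0, N, movers.sum())

sizes = np.sort(np.bincount(city, minlength=N))[::-1].astype(float)
ranks = np.arange(1, N + 1)
bulk = slice(7, 200)              # fit the bulk of the ranked sizes
slope = np.polyfit(np.log(ranks[bulk]), np.log(sizes[bulk]), 1)[0]
# Zipf's law: size ~ 1/rank, i.e. slope near -1 on the log-log rank plot
assert -1.5 < slope < -0.6
```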

  11. Zipf's law in city size from a resource utilization model

    NASA Astrophysics Data System (ADS)

    Ghosh, Asim; Chatterjee, Arnab; Chakrabarti, Anindya S.; Chakrabarti, Bikas K.

    2014-10-01

    We study a resource utilization scenario characterized by intrinsic fitness. To describe the growth and organization of different cities, we consider a model for resource utilization where many restaurants compete, as in a game, to attract customers using an iterative learning process. Results for the case of restaurants with uniform fitness are reported. When fitness is uniformly distributed, it gives rise to a Zipf law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of fitness. A variant of the model is also introduced where the fitness can be treated as an ability to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, while the distribution of the number of customers still follows the Zipf law, implying the robustness of the model. Our model serves as a paradigm for the emergence of Zipf law in city size distribution.

  12. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  13. Semantic Importance Sampling for Statistical Model Checking

    DTIC Science & Technology

    2015-01-16

    SMT calls while maintaining correctness. Finally, we implement SIS in a tool called osmosis and use it to verify a number of stochastic systems with...2 surveys related work. Section 3 presents background definitions and concepts. Section 4 presents SIS, and Section 5 presents our tool osmosis . In...which I∗M|=Φ(x) = 1. We do this by first randomly selecting a cube c from C∗ with uniform probability since each cube has equal probability 9 5. OSMOSIS

  14. Optimal hash arrangement of tentacles in jellyfish

    NASA Astrophysics Data System (ADS)

    Okabe, Takuya; Yoshimura, Jin

    2016-06-01

    At first glance, the trailing tentacles of a jellyfish appear to be randomly arranged. However, close examination of medusae has revealed that the arrangement and developmental order of the tentacles obey a mathematical rule. Here, we show that medusa jellyfish adopt the best strategy to achieve the most uniform distribution of a variable number of tentacles. The observed order of tentacles is a real-world example of an optimal hashing algorithm known as Fibonacci hashing in computer science.
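
    The golden-angle placement underlying Fibonacci hashing is easy to check numerically (a sketch; the gap-ratio bound asserted below is a loose illustrative threshold, not a claim from the paper):

```python
import math

PHI = (1 + 5 ** 0.5) / 2
GOLDEN_ANGLE = 2 * math.pi / PHI ** 2   # ≈ 137.5°, as seen in phyllotaxis

def tentacle_angles(n):
    """Place n tentacles at successive multiples of the golden angle."""
    return sorted((k * GOLDEN_ANGLE) % (2 * math.pi) for k in range(n))

for n in (5, 13, 34, 100):
    a = tentacle_angles(n)
    gaps = [b - c for b, c in zip(a[1:], a)] + [2 * math.pi - a[-1] + a[0]]
    # Three-distance theorem: the gaps take at most 3 distinct values, and
    # for the golden angle their spread stays small for every n, giving
    # near-uniform spacing of a variable number of tentacles.
    assert max(gaps) / min(gaps) < 4
```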

  15. A novel image encryption algorithm based on chaos maps with Markov properties

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    To construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaotic maps with Markov properties is investigated and such an algorithm is proposed. This class of chaos has higher complexity than the Logistic and Tent maps while keeping their uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is employed to cover the phase space of the chaos and enlarge the key space, with better performance than the original lattice. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which dynamically changes the permutation matrix and the key stream. Experiments show that the key stream passes the SP800-22 test. The novel image encryption scheme resists CPA, CCA and differential attacks. The algorithm is sensitive to the initial key and changes the distribution of the pixel values of the image; the correlation of adjacent pixels is also eliminated. Compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, closer to a true random sequence, and it is efficient to implement, which shows its value for common use.

  16. Distinguishability of generic quantum states

    NASA Astrophysics Data System (ADS)

    Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol

    2016-06-01

    Properties of random mixed states of dimension N distributed uniformly with respect to the Hilbert-Schmidt measure are investigated. We show that for large N, due to the concentration of measure, the trace distance between two random states tends to a fixed number D̃ = 1/4 + 1/π, which yields the Helstrom bound on their distinguishability. To arrive at this result, we apply free random calculus and derive the symmetrized Marchenko-Pastur distribution, which is shown to describe numerical data for the model of coupled quantum kicked tops. The asymptotic value for the root fidelity between two random states, √F = 3/4, can serve as a universal reference value for further theoretical and experimental studies. Analogous results for the quantum relative entropy and the Chernoff quantity provide other bounds on the distinguishability of both states in a multiple measurement setup due to the quantum Sanov theorem. We also study the mean entropy of coherence of random pure and mixed states and the entanglement of a generic mixed state of a bipartite system.
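
    The concentration of the trace distance can be checked with a quick simulation, using the standard Ginibre construction of Hilbert-Schmidt random states (a sketch; the dimension and tolerance below are illustrative choices):

```python
import numpy as np

def random_hs_state(N, rng):
    """Random density matrix from the Hilbert-Schmidt measure:
    rho = G G† / Tr(G G†) with G a complex Ginibre matrix."""
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(7)
N = 100
rho, sigma = random_hs_state(N, rng), random_hs_state(N, rng)
# trace distance: half the sum of absolute eigenvalues of rho - sigma
D = 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()
assert abs(D - (0.25 + 1 / np.pi)) < 0.05   # concentrates near 1/4 + 1/pi
```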

  17. Forced magnetohydrodynamic turbulence in a uniform external magnetic field

    NASA Technical Reports Server (NTRS)

    Hossain, M.; Vahala, G.; Montgomery, D.

    1985-01-01

    Two-dimensional dissipative MHD turbulence is randomly driven at small spatial scales and is studied by numerical simulation in the presence of a strong uniform external magnetic field. A behavior is observed which is apparently distinct from the inverse cascade which prevails in the absence of an external magnetic field. The magnetic spectrum becomes dominated by the three longest wavelength Alfven waves in the system allowed by the boundary conditions: those which, in a box size of edge 2 pi, have wave numbers (kx, ky) = (1, 1), and (1, -1), where the external magnetic field is in the x direction. At any given instant, one of these three modes dominates the vector potential spectrum, but they do not constitute a resonantly coupled triad. Rather, they are apparently coupled by the smaller-scale turbulence.

  18. Forced MHD turbulence in a uniform external magnetic field

    NASA Technical Reports Server (NTRS)

    Hossain, M.; Vahala, G.; Montgomery, D.

    1985-01-01

    Two-dimensional dissipative MHD turbulence is randomly driven at small spatial scales and is studied by numerical simulation in the presence of a strong uniform external magnetic field. A behavior is observed which is apparently distinct from the inverse cascade which prevails in the absence of an external magnetic field. The magnetic spectrum becomes dominated by the three longest wavelength Alfven waves in the system allowed by the boundary conditions: those which, in a box size of edge 2 pi, have wave numbers (kx, ky) = (1, 1), and (1, -1), where the external magnetic field is in the x direction. At any given instant, one of these three modes dominates the vector potential spectrum, but they do not constitute a resonantly coupled triad. Rather, they are apparently coupled by the smaller-scale turbulence.

  19. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    of uniform- versus nonuniform-pattern reconstruction, of transform function used, and of minimum randomly distributed measurements needed to...the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples even though the pattern error manifests...5 Fig. 3 The nonuniform compressive-sensing reconstruction of the radiation

  20. Improved high-dimensional prediction with Random Forests by the use of co-data.

    PubMed

    Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A

    2017-12-28

    Prediction in high-dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities that are used to draw candidate variables with co-data moderated sampling probabilities. Co-data here are defined as any type of information that is available on the variables of the primary data but does not use its response labels. These moderated sampling probabilities are, inspired by empirical Bayes, learned from the data at hand. We demonstrate the co-data moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis from gene expression data. We demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve the predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer from methylation data can be improved by including the location of each probe relative to the known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
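
    The key mechanical change, replacing the uniform candidate-variable draw by a weighted one, can be sketched as follows (an illustration of the idea only, not the CoRF implementation; the p-value-to-weight mapping is an assumed example of co-data moderation):

```python
import numpy as np

def draw_candidates(n_features, mtry, codata_weights, rng=None):
    """Candidate-variable draw for one split of a Random Forest tree.
    A plain forest samples the mtry candidates uniformly; the co-data
    moderated variant replaces the uniform probabilities with weights
    derived from auxiliary information on the variables."""
    rng = np.random.default_rng(rng)
    p = np.asarray(codata_weights, dtype=float)
    p = p / p.sum()                 # normalize to sampling probabilities
    return rng.choice(n_features, size=mtry, replace=False, p=p)

# toy co-data: external p-values, smaller p-value -> larger sampling weight
pvals = np.array([0.001, 0.5, 0.9, 0.04, 0.2, 0.7, 0.01, 0.3])
weights = -np.log(pvals)
cands = draw_candidates(len(pvals), mtry=3, codata_weights=weights, rng=0)
assert len(set(int(c) for c in cands)) == 3   # distinct candidates
```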

  1. SU-E-T-510: Interplay Between Spots Sizes, Spot / Line Spacing and Motion in Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, TK

    Purpose: In proton beam configuration for spot scanning proton therapy (SSPT), one can define the spacing between spots and lines of scanning as a ratio of a given spot size. If the spacing increases, the number of spots decreases, which can potentially decrease scan time and hence whole treatment time, and vice versa. However, if the spacing is too large, the uniformity of the scanned field decreases. The field uniformity can also be affected by motion during SSPT beam delivery. In the present study, the interplay between spot/line spacing and motion is investigated. Methods: We used four Gaussian-shape spot sizes with 0.5cm, 1.0cm, 1.5cm, and 2.0cm FWHM, three spot/line spacings that create a uniform field profile, namely 1/3*FWHM, σ/3*FWHM and 2/3*FWHM, and three random motion amplitudes, +/−0.3mm, +/−0.5mm, and +/−1.0mm. We planned 2Gy uniform single layers of 10×10cm2 and 20×20cm2 fields. Then, the mean dose within 80% of the area of the given field size, the contributing MU per spot assuming a 1cGy/MU calibration for all spot sizes, the number of spots, and the uniformity were calculated. Results: The plans with spot/line spacing equal to or smaller than 2/3*FWHM without motion create ∼100% uniformity. However, the uniformity decreases with increased spacing, more pronouncedly for smaller spot sizes, while it is not affected by the scanned field size. Conclusion: Motion during proton beam delivery can alter the dose uniformity, and the amount of alteration changes with spot size (which changes with energy) and with spot/line spacing. Currently, robust evaluation in a TPS (e.g. the Eclipse system) performs range uncertainty evaluation using isocenter shifts and CT calibration error. Based on the presented study, it is recommended to add interplay effect evaluation to the robust evaluation process. For a future study, the additional interplay between the energy layers and motion is expected to present a volumetric effect.
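
    The spacing-uniformity trade-off described here can be illustrated with a 1D toy model: a comb of Gaussian spots whose ripple grows with spacing and with random positional jitter (a sketch under assumed geometry and units, not the study's planning system calculation):

```python
import numpy as np

def field_uniformity(fwhm, spacing_frac, motion=0.0, rng=None):
    """1D scanned field: Gaussian spots of given FWHM on a grid with
    spacing = spacing_frac * FWHM, optionally jittered by random 'motion'.
    Returns (max - min) / mean of the dose over the central region."""
    rng = np.random.default_rng(rng)
    sigma = fwhm / 2.355                    # FWHM = 2*sqrt(2 ln 2) * sigma
    spacing = spacing_frac * fwhm
    centers = np.arange(-5, 5 + spacing, spacing)
    centers = centers + rng.uniform(-motion, motion, centers.size)
    x = np.linspace(-2, 2, 801)             # central region, away from edges
    dose = np.exp(-(x[:, None] - centers) ** 2 / (2 * sigma ** 2)).sum(axis=1)
    return (dose.max() - dose.min()) / dose.mean()

tight = field_uniformity(fwhm=1.0, spacing_frac=1 / 3)
loose = field_uniformity(fwhm=1.0, spacing_frac=2 / 3)
jittered = field_uniformity(fwhm=1.0, spacing_frac=2 / 3, motion=0.1, rng=0)
assert tight < loose          # larger spacing -> larger ripple
assert jittered > loose       # motion degrades uniformity further
```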

  2. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor becomes known to scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value with a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
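
The numerical example can be reproduced with a short Monte Carlo (a sketch: apart from the ~350 billion stellar count quoted in the text, the means and spreads below are illustrative placeholders, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(42)
M = 200_000   # Monte Carlo samples

# Seven independent factors, each uniform around a mean.
factors = [
    rng.uniform(349e9, 351e9, M),   # number of stars in the Galaxy
    rng.uniform(0.2, 0.6, M),       # fraction of stars with planets
    rng.uniform(0.5, 2.5, M),       # habitable planets per system
    rng.uniform(0.1, 0.9, M),       # fraction on which life appears
    rng.uniform(0.05, 0.55, M),     # fraction developing intelligence
    rng.uniform(0.05, 0.35, M),     # fraction that communicate
    rng.uniform(1e-6, 1e-4, M),     # communicating fraction of lifetime
]
N = np.prod(factors, axis=0)

# Independence makes E[N] the product of the factor means (the classical
# Drake value), while the CLT makes log N approximately Gaussian, i.e.
# N approximately lognormal.
classical = np.prod([f.mean() for f in factors])
logN = np.log(N)
within_1sd = np.mean(np.abs(logN - logN.mean()) < logN.std())
assert np.isclose(N.mean(), classical, rtol=0.02)
assert abs(within_1sd - 0.683) < 0.05   # roughly Gaussian log N
```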

  3. Deployment-based lifetime optimization model for homogeneous Wireless Sensor Network under retransmission.

    PubMed

    Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning

    2014-12-10

    Sensor-deployment-based lifetime optimization is one of the most effective methods to prolong the lifetime of a Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed under constraints on coverage, connectivity and transmission success rate. Based on the data transmission analysis in a data gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in the lifetime optimization. In particular, our investigations indicate that, with the same lifetime requirement, the number of sensors needed in a non-uniform topology is much less than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.

  4. Interval Graph Limits

    PubMed Central

    Diaconis, Persi; Holmes, Susan; Janson, Svante

    2015-01-01

    We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
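
    Two facts used here, that interval graphs are perfect (so the chromatic number equals the clique number) and that both are easy to compute, can be checked on a random instance (a sketch with uniformly random endpoints, one simple choice of random interval graph):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# n random intervals with independent uniform endpoints on (0, 1)
pts = np.sort(rng.uniform(0, 1, (n, 2)), axis=1)

# clique number of an interval graph = max number of intervals over a point
events = sorted([(l, 1) for l, r in pts] + [(r, -1) for l, r in pts])
depth = clique = 0
for _, d in events:
    depth += d
    clique = max(clique, depth)

# greedy coloring in order of left endpoints uses exactly 'clique' colors,
# so the chromatic number equals the clique number (perfection)
colors = {}
for i in np.argsort(pts[:, 0]):
    used = {colors[j] for j in colors
            if pts[j, 0] < pts[i, 1] and pts[i, 0] < pts[j, 1]}
    colors[i] = min(c for c in range(n) if c not in used)
assert max(colors.values()) + 1 == clique
```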

  5. Long-lasting permethrin impregnated uniforms: A randomized-controlled trial for tick bite prevention.

    PubMed

    Vaughn, Meagan F; Funkhouser, Sheana Whelan; Lin, Feng-Chang; Fine, Jason; Juliano, Jonathan J; Apperson, Charles S; Meshnick, Steven R

    2014-05-01

    Because of frequent exposure to tick habitats, outdoor workers are at high risk for tick-borne diseases. Adherence to National Institute for Occupational Safety and Health-recommended tick bite prevention methods is poor. A factory-based method for permethrin impregnation of clothing that provides long-lasting insecticidal and repellent activity is commercially available, and studies are needed to assess the long-term effectiveness of this clothing under field conditions. To evaluate the protective effectiveness of long-lasting permethrin impregnated uniforms among a cohort of North Carolina outdoor workers. A double-blind RCT was conducted between March 2011 and September 2012. Subjects included outdoor workers from North Carolina State Divisions of Forestry, Parks and Recreation, and Wildlife who worked in eastern or central North Carolina. A total of 159 volunteer subjects were randomized, and 127 and 101 subjects completed the first and second years of follow-up, respectively. Uniforms of participants in the treatment group were factory-impregnated with long-lasting permethrin whereas control group uniforms received a sham treatment. Participants continued to engage in their usual tick bite prevention activities. Incidence of work-related tick bites reported on weekly tick bite logs. Study subjects reported 1,045 work-related tick bites over 5,251 person-weeks of follow-up. The mean number of reported tick bites in the year prior to enrollment was similar for both the treatment and control groups, but markedly different during the study period. In our analysis conducted in 2013, the effectiveness of long-lasting permethrin impregnated uniforms for the prevention of work-related tick bites was 0.82 (95% CI=0.66, 0.91) and 0.34 (95% CI=-0.67, 0.74) for the first and second years of follow-up. 
These results indicate that long-lasting permethrin impregnated uniforms are highly effective for at least 1 year in deterring tick bites in the context of typical tick bite prevention measures employed by outdoor workers. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  6. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…

  7. Feedback shift register sequences versus uniformly distributed random sequences for correlation chromatography

    NASA Technical Reports Server (NTRS)

    Kaljurand, M.; Valentin, J. R.; Shao, M.

    1996-01-01

    Two alternative input sequences are commonly employed in correlation chromatography (CC). They are sequences derived according to the algorithm of the feedback shift register (i.e., pseudo random binary sequences (PRBS)) and sequences derived by using the uniform random binary sequences (URBS). These two sequences are compared. By applying the "cleaning" data processing technique to the correlograms that result from these sequences, we show that when the PRBS is used the S/N of the correlogram is much higher than the one resulting from using URBS.
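
    A PRBS of the kind used as a CC input can be generated with a few lines (a sketch: the tap set (6, 7) corresponds to the primitive polynomial x^7 + x + 1 and is an illustrative choice, not necessarily the register used in the paper). The classic m-sequence properties, balance and two-valued autocorrelation, are what make the correlogram cleaner than with a URBS input:

```python
def prbs(taps, degree):
    """Pseudo random binary (m-)sequence from a linear feedback shift
    register: a[n] = XOR of a[n - t] for t in taps.  With a primitive
    feedback polynomial the period is 2**degree - 1."""
    reg = [1] * degree            # any nonzero seed gives the full period
    out = []
    for _ in range(2 ** degree - 1):
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]      # reg[t-1] holds a[n - t]
        out.append(fb)
        reg = [fb] + reg[:-1]
    return out

seq = prbs(taps=(6, 7), degree=7)
assert sum(seq) == 64             # balance: 64 ones vs 63 zeros
s = [2 * b - 1 for b in seq]      # ±1 version
n = len(s)
for k in (1, 5, 50):              # two-valued circular autocorrelation
    assert sum(s[i] * s[(i + k) % n] for i in range(n)) == -1
```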

  8. Robust PRNG based on homogeneously distributed chaotic dynamics

    NASA Astrophysics Data System (ADS)

    Garasym, Oleg; Lozi, René; Taralova, Ina

    2016-02-01

    This paper is devoted to the design of a new chaotic Pseudo Random Number Generator (CPRNG). Exploring several topologies of networks of 1-D coupled chaotic maps, we focus first on two-dimensional networks. Two topologically coupled maps are studied: TTL rc non-alternate, and TTL SC alternate. The primary idea of the novel maps is based on an original coupling of the tent and logistic maps to achieve excellent random properties and homogeneous (uniform) density in the phase plane, thus guaranteeing maximum security when used for chaos-based cryptography. To this aim, two new nonlinear CPRNGs, MTTL 2 sc and NTTL 2, are proposed. The maps successfully passed numerous statistical, graphical and numerical tests, thanks to the proposed ring coupling and injection mechanisms.
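
    The coupling idea can be sketched generically (an illustration only, not the authors' TTL/MTTL equations: each map receives a small injection from the other, with an arbitrary coupling strength and arbitrary seeds):

```python
def tent(x):
    """Tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def logistic(x):
    """Logistic map on [0, 1] at full parameter r = 4."""
    return 4 * x * (1 - x)

def coupled_stream(n, eps=0.05, x=0.123, y=0.654):
    """Ring-coupled tent/logistic iteration; eps is the injection
    strength.  A generic sketch of the coupling idea only."""
    out = []
    for _ in range(n):
        x, y = ((1 - eps) * tent(x) + eps * logistic(y),
                (1 - eps) * logistic(y) + eps * tent(x))
        out.append(x)
    return out

u = coupled_stream(100_000)
m = sum(u) / len(u)
var = sum((v - m) ** 2 for v in u) / len(u)
assert min(u) >= -1e-9 and max(u) <= 1 + 1e-9   # stays in the unit interval
assert var > 0.005                              # has not collapsed
```

The injection also works around a well-known numerical pitfall: the tent map iterated alone in binary floating point collapses to 0, while the coupled pair keeps wandering.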

  9. Asymptotic laws for random knot diagrams

    NASA Astrophysics Data System (ADS)

    Chapman, Harrison

    2017-06-01

    We study random knotting by considering knot and link diagrams as decorated, (rooted) topological maps on spheres and pulling them uniformly from among sets of a given number of vertices n, as first established in recent work with Cantarella and Mastin. The knot diagram model is an exciting new model which captures both the random geometry of space curve models of knotting as well as the ease of computing invariants from diagrams. We prove that unknot diagrams are asymptotically exponentially rare, an analogue of Sumners and Whittington’s landmark result for self-avoiding polygons. Our proof uses the same key idea: we first show that knot diagrams obey a pattern theorem, which describes their fractal structure. We examine how quickly this behavior occurs in practice. As a consequence, almost all diagrams are asymmetric, simplifying sampling from this model. We conclude with experimental data on knotting in this model. This model of random knotting is similar to those studied by Diao et al, and Dunfield et al.

  10. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

    PubMed

    Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

    2011-03-01

    During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that one uniform random polygon of length n partially overlaps a fixed polygon is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.
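
    Generating the uniform random polygons themselves is straightforward (a sketch: the shifted-cube parametrization of the overlap below is an illustrative assumption, not necessarily the paper's construction):

```python
import numpy as np

def uniform_random_polygon(n, rng=None):
    """Uniform random polygon (URP): n vertices drawn independently and
    uniformly from the unit cube, joined in order and closed up."""
    rng = np.random.default_rng(rng)
    return rng.uniform(0.0, 1.0, size=(n, 3))

def shifted_urp(n, eps, rng=None):
    """Second URP confined to a unit cube shifted by 1 - eps along each
    axis, so eps = 0 means the two confinement cubes barely touch and
    eps = 1 means they coincide (total overlapping)."""
    return uniform_random_polygon(n, rng) + (1.0 - eps)

P = uniform_random_polygon(50, rng=0)
Q = shifted_urp(50, eps=0.5, rng=1)
inside = np.all((Q >= 0) & (Q <= 1), axis=1).mean()  # Q vertices in P's cube
assert P.shape == (50, 3)
assert 0.0 <= inside <= 1.0
assert np.all(shifted_urp(50, eps=1.0, rng=2) <= 1.0)  # total overlap case
```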

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  12. Emergence of an optimal search strategy from a simple random walk

    PubMed Central

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-01-01

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445

  13. Emergence of an optimal search strategy from a simple random walk.

    PubMed

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
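
    As a baseline for comparison, the simple random walk that the authors start from, a uniformly random direction at each step with a fixed step length, shows ordinary diffusion (MSD ∝ t) rather than super-diffusion; a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
walkers, steps = 2000, 400

# simple random walk: a uniformly random direction and a unit step length
theta = rng.uniform(0, 2 * np.pi, (walkers, steps))
xy = np.cumsum(np.stack([np.cos(theta), np.sin(theta)], -1), axis=1)
msd = (xy ** 2).sum(-1).mean(0)          # mean squared displacement vs time

t = np.arange(1, steps + 1)
# ordinary diffusion: MSD grows linearly, exponent ≈ 1 on a log-log fit
alpha = np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)[0]
assert 0.9 < alpha < 1.1
```

A super-diffusive walk of the kind discussed in the abstract would instead give an exponent greater than 1.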

  14. Security of practical private randomness generation

    NASA Astrophysics Data System (ADS)

    Pironio, Stefano; Massar, Serge

    2013-01-01

    Measurements on entangled quantum systems necessarily yield outcomes that are intrinsically unpredictable if they violate a Bell inequality. This property can be used to generate certified randomness in a device-independent way, i.e., without making detailed assumptions about the internal working of the quantum devices used to generate the random numbers. Furthermore, these numbers are also private; i.e., they appear random not only to the user but also to any adversary that might possess a perfect description of the devices. Since this process requires a small initial random seed to sample the behavior of the quantum devices and to extract uniform randomness from the raw outputs of the devices, one usually speaks of device-independent randomness expansion. The purpose of this paper is twofold. First, we point out that in most real, practical situations, where the concept of device independence is used as a protection against unintentional flaws or failures of the quantum apparatuses, it is sufficient to show that the generated string is random with respect to an adversary that holds only classical side information; i.e., proving randomness against quantum side information is not necessary. Furthermore, the initial random seed does not need to be private with respect to the adversary, provided that it is generated in a way that is independent from the measured systems. The devices, however, will generate cryptographically secure randomness that cannot be predicted by the adversary, and thus one can, given access to free public randomness, talk about private randomness generation. The theoretical tools to quantify the generated randomness according to these criteria were already introduced in S. Pironio et al. [Nature (London) 464, 1021 (2010)], but the final results were improperly formulated.
The second aim of this paper is to correct this inaccurate formulation and therefore lay out a precise theoretical framework for practical device-independent randomness generation.
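The protocols above rely on extracting uniform randomness from raw, possibly biased device outputs. As a much simpler classical illustration of that step (the von Neumann extractor, not the seeded extractors actually used in device-independent protocols), the sketch below turns independent but biased bits into unbiased ones by keeping the first bit of each unequal pair.

```python
import random

def von_neumann_extract(bits):
    """Von Neumann extractor: maps independent biased bits to unbiased bits.
    Pairs (0,1) -> 0 and (1,0) -> 1; equal pairs are discarded."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

rng = random.Random(42)
# Raw source that outputs 1 about 80% of the time.
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100000)]
uniform_bits = von_neumann_extract(biased)
frac_ones = sum(uniform_bits) / len(uniform_bits)
# frac_ones is close to 0.5 even though the raw bits are ~80% ones.
```

The cost is throughput: only a fraction 2p(1-p) of bit pairs produce an output bit, which is why stronger extractors are used in practice.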

  15. Analysis of Basis Weight Uniformity of Microfiber Nonwovens and Its Impact on Permeability and Filtration Properties

    NASA Astrophysics Data System (ADS)

    Amirnasr, Elham

    It is widely recognized that nonwoven basis weight non-uniformity affects various properties of nonwovens; however, few studies address this topic. Developing a definition and measurement method for uniformity, and studying its impact on web properties such as filtration behavior and air permeability, would benefit both industry and academia: such measures can serve as quality control tools and provide insights into nonwoven behaviors that cannot be explained by average values alone. We therefore pursued an optical analytical tool for quantifying nonwoven web basis weight uniformity. The quadrant method and clustering analysis were implemented in an image analysis scheme to help define "uniformity" and its spatial variation. Implementing the quadrant method in an image analysis system allows the establishment of a uniformity index that quantifies the degree of uniformity. The clustering analysis was also modified and verified using uniform and random simulated images with known parameters; the number of clusters and cluster properties such as size, membership, and density were determined. We then used these measurement methods to evaluate the uniformity of nonwovens produced by different processes and investigated the impact of uniformity on filtration and permeability. The results show that the uniformity index computed by the quadrant method covers a useful range of nonwoven web non-uniformity. Clustering analysis was also applied to reference nonwovens of known visual uniformity; the results indicate that cluster size is a promising uniformity parameter, with non-uniform nonwovens yielding larger cluster sizes than uniform ones. We then sought a relationship between web properties and the uniformity index as a web characteristic. 
To this end, the filtration properties, air permeability, solidity, and uniformity index of meltblown and spunbond samples were measured. The filtration tests show some deviation between theoretical and experimental filtration efficiency when different definitions of fiber diameter are considered. This deviation can arise from variation in basis weight non-uniformity, so an appropriate theory is needed to predict the variation of filtration efficiency with respect to the non-uniformity of nonwoven filter media. The air permeability tests showed that the uniformity index determined by the quadrant method is related to the measured properties: air permeability decreases as the uniformity index of the nonwoven web increases.
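The quadrant (quadrat) method described above can be sketched with a simple point-pattern statistic. This is a hedged illustration, not the paper's image-analysis pipeline: the grid is overlaid on simulated point patterns, and the variance-to-mean ratio of the quadrat counts serves as a crude uniformity index, near 1 for a uniform random (Poisson-like) pattern and much larger for a clustered one.

```python
import random
from statistics import mean, pvariance

def quadrat_index(points, n_cells, size=1.0):
    """Variance-to-mean ratio of quadrat counts over an n_cells x n_cells
    grid: ~1 for a uniform random pattern, >1 for a clustered one."""
    counts = [0] * (n_cells * n_cells)
    cell = size / n_cells
    for x, y in points:
        i = min(int(x / cell), n_cells - 1)
        j = min(int(y / cell), n_cells - 1)
        counts[i * n_cells + j] += 1
    return pvariance(counts) / mean(counts)

rng = random.Random(0)
uniform_pts = [(rng.random(), rng.random()) for _ in range(4000)]

# Clustered pattern: points scattered tightly around a few cluster centres.
centres = [(rng.random(), rng.random()) for _ in range(5)]
clustered_pts = []
for _ in range(4000):
    cx, cy = rng.choice(centres)
    clustered_pts.append((min(max(cx + rng.gauss(0, 0.02), 0.0), 0.999),
                          min(max(cy + rng.gauss(0, 0.02), 0.0), 0.999)))

# quadrat_index(uniform_pts, 10) is near 1; the clustered pattern scores
# far higher, mirroring how basis-weight clusters raise a uniformity index.
```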

  16. Expert Assessment of Stigmergy: A Report for the Department of National Defence

    DTIC Science & Technology

    2005-10-01

    pheromone table may be reduced by implementing a clustering scheme. Termite can take advantage of the wireless broadcast medium, since it is possible for...comparing it with any other routing scheme. The Termite scheme [RW] differs from the source routing [ITT] by applying pheromone trails or random walks...rather than uniform or probabilistic ones. Random walk ants differ from uniform ants since they follow pheromone trails, if any. Termite [RW] also

  17. On Tree-Based Phylogenetic Networks.

    PubMed

    Zhang, Louxin

    2016-07-01

    A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. We present a simple necessary and sufficient condition for tree-based networks and prove that a universal tree-based network exists for any number of taxa that contains as its base every phylogenetic tree on the same set of taxa. This answers two problems posed recently by Francis and Steel. A byproduct is a computer program for generating random binary phylogenetic networks under the uniform distribution model.

  18. Method and apparatus for enhancing vortex pinning by conformal crystal arrays

    DOEpatents

    Janko, Boldizsar; Reichhardt, Cynthia; Reichhardt, Charles; Ray, Dipanjan

    2015-07-14

    Disclosed is a method and apparatus for strongly enhancing vortex pinning by conformal crystal arrays. The conformal crystal array is constructed by a conformal transformation of a hexagonal lattice, producing a non-uniform structure with a gradient where the local six-fold coordination of the pinning sites is preserved, and with an arching effect. The conformal pinning arrays produce significantly enhanced vortex pinning over a much wider range of field than that found for other vortex pinning geometries with an equivalent number of vortex pinning sites, such as random, square, and triangular.
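The construction above starts from a hexagonal (triangular) lattice and applies a conformal, i.e. angle-preserving, transformation to obtain a graded array that keeps local six-fold coordination. The sketch below illustrates the idea only; the complex-logarithm map used here is an assumed, illustrative choice, not necessarily the specific transformation of the patent.

```python
import cmath
import math

def hexagonal_lattice(nx, ny, a=1.0):
    """Triangular/hexagonal lattice points as complex numbers."""
    pts = []
    for j in range(ny):
        for i in range(nx):
            x = (i + 0.5 * (j % 2)) * a      # offset every other row
            y = j * a * math.sqrt(3) / 2.0
            pts.append(complex(x, y))
    return pts

def conformal_map(points, shift=1.0 + 1.0j):
    """Apply an angle-preserving map site by site. The complex log is an
    illustrative conformal map: it compresses distant sites, producing a
    pinning-density gradient while preserving local coordination angles."""
    return [cmath.log(p + shift) for p in points]

hex_pts = hexagonal_lattice(20, 20)
mapped = conformal_map(hex_pts)
# Nearest-neighbour spacing in the mapped array shrinks with distance from
# the origin: a spatial gradient with preserved local geometry.
```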

  19. A Search Model for Imperfectly Detected Targets

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert

    2012-01-01

    Under the assumptions that 1) the search region can be divided up into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme-case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables: N times the number of full searches (a geometric distribution with success probability P) and the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and kurtosis of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi) on the searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
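The geometric-plus-uniform model above is easy to simulate. A minimal sketch, assuming the target sits in a fixed, uniformly chosen sub-region and the searcher scans the N regions in the same order on every pass: the mean number of searches should match N(1-P)/P + (N+1)/2.

```python
import random
from statistics import mean

def searches_until_detection(N, P, rng):
    """Number of sub-region inspections until the target is found, for a
    perfect-memory search over N regions with per-look detection prob P."""
    pos = rng.randrange(1, N + 1)   # target's sub-region, uniform over 1..N
    count = 0
    while True:
        # One full, perfectly ordered pass over the N sub-regions.
        for k in range(1, N + 1):
            count += 1
            if k == pos and rng.random() < P:
                return count

rng = random.Random(3)
N, P = 10, 0.5
sims = [searches_until_detection(N, P, rng) for _ in range(20000)]
predicted_mean = N * (1 - P) / P + (N + 1) / 2   # = 15.5 for N=10, P=0.5
# mean(sims) agrees closely with predicted_mean.
```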

  20. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-random variables if their joint probability distribution is known.
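In one dimension the construction above reduces to the familiar inverse-CDF transform; in n dimensions it chains conditional inverse CDFs, x_1 = F_1^{-1}(U_1), x_2 = F_{2|1}^{-1}(U_2 | x_1), and so on. A minimal one-dimensional sketch for the exponential distribution:

```python
import math
import random

def exponential_from_uniform(u, rate=1.0):
    """Inverse-CDF transform: if U ~ Uniform(0,1), then -ln(1-U)/rate
    has an Exponential(rate) distribution."""
    return -math.log(1.0 - u) / rate

rng = random.Random(7)
samples = [exponential_from_uniform(rng.random(), rate=2.0)
           for _ in range(50000)]
sample_mean = sum(samples) / len(samples)
# The Exponential(2) mean is 1/2, so sample_mean is close to 0.5.
```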

  1. Lamellar cationic lipid-DNA complexes from lipids with a strong preference for planar geometry: A Minimal Electrostatic Model.

    PubMed

    Perico, Angelo; Manning, Gerald S

    2014-11-01

    We formulate and analyze a minimal model, based on condensation theory, of the lamellar cationic lipid (CL)-DNA complex of alternately charged lipid bilayers and DNA monolayers in a salt solution. Each lipid bilayer, composed of a random mixture of cationic and neutral lipids, is assumed to be a rigid uniformly charged plane. Each DNA monolayer, located between two lipid bilayers, is formed by the same number of parallel DNAs with a uniform separation distance. For the electrostatic calculation, the model lipoplex is collapsed to a single plane with charge density equal to the net lipid and DNA charge. The free energy difference between the lamellar lipoplex and a reference state of the same number of free lipid bilayers and free DNAs is calculated as a function of the fraction of CLs, of the ratio of the number of CL charges to the number of negative charges of the DNA phosphates, and of the total number of planes. At the isoelectric point the free energy difference is minimal. Complex formation, already favoured by the decrease of the electrostatic charging free energy, is driven further by the free energy gain due to the release of counterions from the DNAs and from the lipid bilayers, if strongly charged. This minimal model compares well with experiment for lipids having a strong preference for planar geometry and with major features of more detailed models of the lipoplex. © 2014 Wiley Periodicals, Inc.

  2. Long-time predictability in disordered spin systems following a deep quench

    NASA Astrophysics Data System (ADS)

    Ye, J.; Gheissari, R.; Machta, J.; Newman, C. M.; Stein, D. L.

    2017-04-01

    We study the problem of predictability, or "nature vs nurture," in several disordered Ising spin systems evolving at zero temperature from a random initial state: How much does the final state depend on the information contained in the initial state, and how much depends on the detailed history of the system? Our numerical studies of the "dynamical order parameter" in Edwards-Anderson Ising spin glasses and random ferromagnets indicate that the influence of the initial state decays as dimension increases. Similarly, this same order parameter for the Sherrington-Kirkpatrick infinite-range spin glass indicates that this information decays as the number of spins increases. Based on these results, we conjecture that the influence of the initial state on the final state decays to zero in finite-dimensional random-bond spin systems as dimension goes to infinity, regardless of the presence of frustration. We also study the rate at which spins "freeze out" to a final state as a function of dimensionality and number of spins; here the results indicate that the number of "active" spins at long times increases with dimension (for short-range systems) or number of spins (for infinite-range systems). We provide theoretical arguments to support these conjectures, and also study analytically several mean-field models: the random energy model, the uniform Curie-Weiss ferromagnet, and the disordered Curie-Weiss ferromagnet. We find that for these models, the information contained in the initial state does not decay in the thermodynamic limit—in fact, it fully determines the final state. Unlike in short-range models, the presence of frustration in mean-field models dramatically alters the dynamical behavior with respect to the issue of predictability.

  3. Long-time predictability in disordered spin systems following a deep quench.

    PubMed

    Ye, J; Gheissari, R; Machta, J; Newman, C M; Stein, D L

    2017-04-01

    We study the problem of predictability, or "nature vs nurture," in several disordered Ising spin systems evolving at zero temperature from a random initial state: How much does the final state depend on the information contained in the initial state, and how much depends on the detailed history of the system? Our numerical studies of the "dynamical order parameter" in Edwards-Anderson Ising spin glasses and random ferromagnets indicate that the influence of the initial state decays as dimension increases. Similarly, this same order parameter for the Sherrington-Kirkpatrick infinite-range spin glass indicates that this information decays as the number of spins increases. Based on these results, we conjecture that the influence of the initial state on the final state decays to zero in finite-dimensional random-bond spin systems as dimension goes to infinity, regardless of the presence of frustration. We also study the rate at which spins "freeze out" to a final state as a function of dimensionality and number of spins; here the results indicate that the number of "active" spins at long times increases with dimension (for short-range systems) or number of spins (for infinite-range systems). We provide theoretical arguments to support these conjectures, and also study analytically several mean-field models: the random energy model, the uniform Curie-Weiss ferromagnet, and the disordered Curie-Weiss ferromagnet. We find that for these models, the information contained in the initial state does not decay in the thermodynamic limit-in fact, it fully determines the final state. Unlike in short-range models, the presence of frustration in mean-field models dramatically alters the dynamical behavior with respect to the issue of predictability.

  4. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
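RANVAR itself is a short BASIC program; as an illustration in Python of the same idea, deriving non-uniform variates from uniform ones, here are two classical textbook recipes (Box-Muller for the normal, Knuth's product method for the Poisson), not a transcription of RANVAR.

```python
import math
import random

def normal_box_muller(rng, mu=0.0, sigma=1.0):
    """Box-Muller transform: two uniforms -> one normal variate."""
    u1 = 1.0 - rng.random()   # in (0, 1], so log(u1) is safe
    u2 = rng.random()
    return mu + sigma * math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def poisson_knuth(rng, lam):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    limit = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(11)
normals = [normal_box_muller(rng) for _ in range(40000)]
poissons = [poisson_knuth(rng, 4.0) for _ in range(40000)]
# Sample means land near the target parameters: ~0 for the standard
# normal and ~4 for the Poisson(4).
```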

  5. Impact of AlO x layer on resistive switching characteristics and device-to-device uniformity of bilayered HfO x -based resistive random access memory devices

    NASA Astrophysics Data System (ADS)

    Chuang, Kai-Chi; Chung, Hao-Tung; Chu, Chi-Yan; Luo, Jun-Dao; Li, Wei-Shuo; Li, Yi-Shao; Cheng, Huang-Chung

    2018-06-01

    An AlO x layer was deposited on HfO x , and the bilayered dielectric films were found to confine the formation locations of conductive filaments (CFs) during the forming process, thereby improving device-to-device uniformity. In addition, a Ti interposing layer was adopted to facilitate the formation of oxygen vacancies. As a result, the resistive random access memory (RRAM) device with TiN/Ti/AlO x (1 nm)/HfO x (6 nm)/TiN stack layers demonstrated excellent device-to-device uniformity, although it required slightly larger switching voltages, namely a forming voltage (V Forming) of 2.08 V, a set voltage (V Set) of 1.96 V, and a reset voltage (V Reset) of ‑1.02 V, than the device with TiN/Ti/HfO x (6 nm)/TiN stack layers. However, the device with a thicker 2-nm AlO x layer showed worse uniformity than the 1-nm one, which was attributed to the increased oxygen atomic percentage in its bilayered dielectric films: with higher oxygen content there are fewer oxygen vacancies available to form CFs, so the growth of CFs becomes more random and the device-to-device uniformity degrades.

  6. Response of moderately thick laminated cross-ply composite shells subjected to random excitation

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaak; Cederbaum, Gabriel; Librescu, Liviu

    1989-01-01

    This study deals with the dynamic response of transverse shear deformable laminated shells subjected to random excitation. The analysis encompasses the following problems: (1) the dynamic response of circular cylindrical shells of finite length excited by an axisymmetric uniform ring loading, stationary in time, and (2) the response of spherical and cylindrical panels subjected to stationary random loadings with uniform spatial distribution. The associated equations governing the structural theory of shells are derived upon discarding the classical Love-Kirchhoff (L-K) assumptions. In this sense, the theory is formulated in the framework of the first-order transverse shear deformation theory (FSDT).

  7. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions, then the 2D non-equispaced fast Fourier transform (NFFT) is introduced in the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients for each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
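The method above follows the general transform-threshold-invert pattern common to transform-domain denoising. As a heavily simplified, hedged sketch of that pattern only (a plain DFT with hard thresholding on a uniform 1D grid, not the authors' non-equispaced curvelet transform or their ℓ1 inversion):

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) discrete Fourier transform (stands in for the
    forward transform step)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                for t in range(n)) for f in range(n)]

def idft(X):
    """Inverse DFT, returning the real part."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * math.pi * f * t / n)
                for f in range(n)).real / n for t in range(n)]

def threshold_denoise(signal, keep_fraction=0.05):
    """Transform -> keep only the largest coefficients -> invert."""
    X = dft(signal)
    mags = sorted((abs(c) for c in X), reverse=True)
    cutoff = mags[max(int(len(X) * keep_fraction) - 1, 0)]
    X = [c if abs(c) >= cutoff else 0 for c in X]
    return idft(X)

rng = random.Random(8)
n = 128
clean = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
noisy = [c + rng.gauss(0, 0.5) for c in clean]
denoised = threshold_denoise(noisy)
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / n
mse_denoised = sum((a - b) ** 2 for a, b in zip(denoised, clean)) / n
# Thresholding in the transform domain pulls the signal back toward the
# clean sinusoid, so mse_denoised is far below mse_noisy.
```

The paper's contribution is precisely that the forward transform step can be applied to non-uniformly sampled data via the NFFT; this sketch assumes a uniform grid.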

  8. Reduced projection angles for binary tomography with particle aggregation.

    PubMed

    Al-Rifaie, Mohammad Majid; Blackwell, Tim

    This paper extends particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed and a focus is placed on reducing the number of projections and whether this impacts the reconstruction of images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections and compared to filtered back projection, a random search algorithm and to SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms on small numbers of projections. This potentially makes the algorithm attractive in scenarios where collecting less projection data are inevitable.

  9. Cluster pattern analysis of energy deposition sites for the brachytherapy sources 103Pd, 125I, 192Ir, 137Cs, and 60Co.

    PubMed

    Villegas, Fernanda; Tilly, Nina; Bäckström, Gloria; Ahnesjö, Anders

    2014-09-21

    Analysing the pattern of energy depositions may help elucidate differences in the severity of radiation-induced DNA strand breakage for different radiation qualities. It is often claimed that energy deposition (ED) sites from photon radiation form a uniform random pattern, but there is indication of differences in RBE values among different photon sources used in brachytherapy. The aim of this work is to analyse the spatial patterns of EDs from 103Pd, 125I, 192Ir, 137Cs sources commonly used in brachytherapy and a 60Co source as a reference radiation. The results suggest that there is both a non-uniform and a uniform random component to the frequency distribution of distances to the nearest neighbour ED. The closest neighbouring EDs show high spatial correlation for all investigated radiation qualities, whilst the uniform random component dominates for neighbours with longer distances for the three higher mean photon energy sources (192Ir, 137Cs, and 60Co). The two lower energy photon emitters (103Pd and 125I) present a very small uniform random component. The ratio of frequencies of clusters with respect to 60Co differs up to 15% for the lower energy sources and less than 2% for the higher energy sources when the maximum distance between each pair of EDs is 2 nm. At distances relevant to DNA damage, cluster patterns can be differentiated between the lower and higher energy sources. This may be part of the explanation to the reported difference in RBE values with initial DSB yields as an endpoint for these brachytherapy sources.

  10. Cluster pattern analysis of energy deposition sites for the brachytherapy sources 103Pd, 125I, 192Ir, 137Cs, and 60Co

    NASA Astrophysics Data System (ADS)

    Villegas, Fernanda; Tilly, Nina; Bäckström, Gloria; Ahnesjö, Anders

    2014-09-01

    Analysing the pattern of energy depositions may help elucidate differences in the severity of radiation-induced DNA strand breakage for different radiation qualities. It is often claimed that energy deposition (ED) sites from photon radiation form a uniform random pattern, but there is indication of differences in RBE values among different photon sources used in brachytherapy. The aim of this work is to analyse the spatial patterns of EDs from 103Pd, 125I, 192Ir, 137Cs sources commonly used in brachytherapy and a 60Co source as a reference radiation. The results suggest that there is both a non-uniform and a uniform random component to the frequency distribution of distances to the nearest neighbour ED. The closest neighbouring EDs show high spatial correlation for all investigated radiation qualities, whilst the uniform random component dominates for neighbours with longer distances for the three higher mean photon energy sources (192Ir, 137Cs, and 60Co). The two lower energy photon emitters (103Pd and 125I) present a very small uniform random component. The ratio of frequencies of clusters with respect to 60Co differs up to 15% for the lower energy sources and less than 2% for the higher energy sources when the maximum distance between each pair of EDs is 2 nm. At distances relevant to DNA damage, cluster patterns can be differentiated between the lower and higher energy sources. This may be part of the explanation to the reported difference in RBE values with initial DSB yields as an endpoint for these brachytherapy sources.
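A common first step in this kind of spatial pattern analysis is the nearest-neighbour distance distribution. As a hedged 2D illustration (points in a unit square, not the 3D energy-deposition sites of the paper): for a uniform random (Poisson-like) pattern of n points the mean nearest-neighbour distance is approximately 0.5/sqrt(n), so the ratio of observed to expected distance flags clustering when it falls well below 1.

```python
import math
import random

def nearest_neighbour_distances(points):
    """Brute-force nearest-neighbour distance for each point."""
    dists = []
    for i, (xi, yi) in enumerate(points):
        best = float("inf")
        for j, (xj, yj) in enumerate(points):
            if i != j:
                d = math.hypot(xi - xj, yi - yj)
                if d < best:
                    best = d
        dists.append(best)
    return dists

rng = random.Random(5)
n = 500
pts = [(rng.random(), rng.random()) for _ in range(n)]
observed = sum(nearest_neighbour_distances(pts)) / n
expected = 0.5 / math.sqrt(n)   # mean NN distance for a uniform pattern
# observed/expected near 1 indicates a uniform random pattern; clustered
# patterns (like the closest energy depositions above) give a ratio
# well below 1.
```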

  11. Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps

    NASA Astrophysics Data System (ADS)

    Brooks, E. M.; Stein, S. A.; Spencer, B. D.

    2015-12-01

    The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered by arguing that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well these maps actually perform is unknown. We explore this hotly-contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that what factors control map performance is complicated, and that learning more about how maps perform and why would be valuable in making more effective policy.
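The two metrics contrasted above are simple to state in code. This is a hedged sketch on made-up illustrative data (the site values and the "uniform map" level are assumptions, not the paper's data): the exceedance-fraction metric asks how far the fraction of sites where observed shaking exceeded the prediction is from the design fraction, while the squared-misfit metric penalizes the size of the prediction errors, and a map can do well on one while doing poorly on the other.

```python
def exceedance_fraction_metric(observed_max, predicted, target=0.1):
    """|f - p|: distance between the fraction of sites whose observed
    maximum shaking exceeded the map's prediction and the design
    exceedance fraction p implicit in the map."""
    f = sum(o > p for o, p in zip(observed_max, predicted)) / len(predicted)
    return abs(f - target)

def squared_misfit_metric(observed_max, predicted):
    """Mean squared difference between observed maxima and predictions."""
    return sum((o - p) ** 2
               for o, p in zip(observed_max, predicted)) / len(predicted)

# Hypothetical observed maxima at 100 sites, and a deliberately crude
# 'uniform map' that predicts the same shaking everywhere.
observed = [i / 100 for i in range(100)]   # 0.00, 0.01, ..., 0.99
uniform_map = [0.9] * 100

# The uniform map scores well on the exceedance metric (9% of sites
# exceed 0.9, close to the 10% target) yet has a large squared misfit.
```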

  12. An Unconditional Test for Change Point Detection in Binary Sequences with Applications to Clinical Registries.

    PubMed

    Ellenberger, David; Friede, Tim

    2016-08-05

    Methods for change point (also sometimes referred to as threshold or breakpoint) detection in binary sequences are not new and were introduced as early as 1955. Much of the research in this area has focussed on asymptotic and exact conditional methods. Here we develop an exact unconditional test, which treats the total number of events as random instead of conditioning on the number of observed events. The new test is shown to be uniformly more powerful than Worsley's exact conditional test, and means for its efficient numerical calculation are given. Adaptations of methods by Berger and Boos are made to deal with the issue that the unknown event probability constitutes a nuisance parameter. The methods are compared in a Monte Carlo simulation study and applied to a cohort of patients undergoing traumatic orthopaedic surgery involving external fixators, where a change in pin site infections is investigated. The unconditional test controls the type I error rate at the nominal level and is uniformly more powerful than (or, to be more precise, uniformly at least as powerful as) Worsley's exact conditional test, which is very conservative for small sample sizes. In the application, a beneficial effect associated with the introduction of a new treatment procedure for pin site care could be revealed. We consider the new test an effective and easy-to-use exact test which is recommended in small sample size change point problems in binary sequences.
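The underlying split-and-compare idea can be sketched in a few lines. This is a hedged illustration of a plain scan statistic on synthetic data, not Worsley's exact conditional test nor the authors' unconditional test: scan every split point and report the one maximizing the difference in event proportions before and after.

```python
def change_point_scan(x):
    """Scan statistic for a single change point in a binary sequence:
    returns the split index maximizing the absolute difference in event
    proportions before and after, together with that difference."""
    n = len(x)
    total = sum(x)
    best_k, best_diff = None, -1.0
    head = 0
    for k in range(1, n):           # split: x[:k] vs x[k:]
        head += x[k - 1]
        p1 = head / k
        p2 = (total - head) / (n - k)
        diff = abs(p1 - p2)
        if diff > best_diff:
            best_k, best_diff = k, diff
    return best_k, best_diff

# Synthetic infection indicators: event rate drops from 0.8 to 0.2
# after observation 30.
seq = [1, 1, 0, 1, 1] * 6 + [0, 0, 0, 1, 0] * 6
k, diff = change_point_scan(seq)
# The detected split lands at the true change point, k = 30.
```

Assessing the significance of the detected split is exactly where the conditional versus unconditional distinction discussed above comes in.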

  13. A Bayesian Approach to the Paleomagnetic Conglomerate Test

    NASA Astrophysics Data System (ADS)

    Heslop, David; Roberts, Andrew P.

    2018-02-01

    The conglomerate test has served the paleomagnetic community for over 60 years as a means to detect remagnetizations. The test states that if a suite of clasts within a bed have uniformly random paleomagnetic directions, then the conglomerate cannot have experienced a pervasive event that remagnetized the clasts in the same direction. The current form of the conglomerate test is based on null hypothesis testing, which results in a binary "pass" (uniformly random directions) or "fail" (nonrandom directions) outcome. We have recast the conglomerate test in a Bayesian framework with the aim of providing more information concerning the level of support a given data set provides for a hypothesis of uniformly random paleomagnetic directions. Using this approach, we place the conglomerate test in a fully probabilistic framework that allows for inconclusive results when insufficient information is available to draw firm conclusions concerning the randomness or nonrandomness of directions. With our method, sample sets larger than those typically employed in paleomagnetism may be required to achieve strong support for a hypothesis of random directions. Given the potentially detrimental effect of unrecognized remagnetizations on paleomagnetic reconstructions, it is important to provide a means to draw statistically robust data-driven inferences. Our Bayesian analysis provides a means to do this for the conglomerate test.
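A classical summary statistic for testing whether directions are uniformly random is the normalized resultant length of the direction vectors. The sketch below is that classical (Rayleigh-style) check, offered as context rather than the paper's Bayesian test: uniformly random unit vectors nearly cancel, while remagnetized (aligned) clasts give a resultant near 1.

```python
import math
import random

def resultant_length(directions):
    """Normalized resultant length R/n of unit vectors: near 0 for
    uniformly random directions, near 1 for tightly clustered ones."""
    sx = sum(d[0] for d in directions)
    sy = sum(d[1] for d in directions)
    sz = sum(d[2] for d in directions)
    return math.sqrt(sx * sx + sy * sy + sz * sz) / len(directions)

def random_unit_vector(rng):
    """Uniform direction on the sphere via a uniform z and azimuth."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(2)
random_dirs = [random_unit_vector(rng) for _ in range(500)]
aligned_dirs = [(0.0, 0.0, 1.0)] * 500
# resultant_length(random_dirs) is small (~1/sqrt(500)); for the fully
# remagnetized aligned_dirs it is exactly 1.
```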

  14. Enhanced hyperuniformity from random reorganization.

    PubMed

    Hexner, Daniel; Chaikin, Paul M; Levine, Dov

    2017-04-25

    Diffusion relaxes density fluctuations toward a uniform random state whose variance in regions of volume ℓ^d scales as ℓ^-d. Systems whose fluctuations decay faster, as ℓ^-λ with λ > d, are called hyperuniform. The larger λ, the more uniform, with systems like crystals achieving the maximum value λ = d + 1. Although finite temperature equilibrium dynamics will not yield hyperuniform states, driven, nonequilibrium dynamics may. Such is the case, for example, in a simple model where overlapping particles are each given a small random displacement. Above a critical particle density ρ_c, the system evolves forever, never finding a configuration where no particles overlap. Below ρ_c, however, it eventually finds such a state, and stops evolving. This "absorbing state" is hyperuniform up to a length scale ξ, which diverges at ρ_c. An important question is whether hyperuniformity survives noise and thermal fluctuations. We find that hyperuniformity of the absorbing state is not only robust against noise, diffusion, or activity, but that such perturbations reduce fluctuations toward their limiting behavior, λ → d + 1, a uniformity similar to random close packing and early universe fluctuations, but with arbitrary controllable density.
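The density-fluctuation scaling above can be probed numerically by counting points in windows of growing size. A minimal sketch for the baseline case only (a uniform random pattern, where the count variance grows like the window volume, so variance/mean stays near 1; a hyperuniform pattern would push this ratio well below 1 at large windows):

```python
import random
from statistics import mean, pvariance

def count_variance(points, ell, n_windows, rng):
    """Mean and variance of the number of points in randomly placed
    square windows of side ell (points live in the unit square)."""
    counts = []
    for _ in range(n_windows):
        x0 = rng.uniform(0.0, 1.0 - ell)
        y0 = rng.uniform(0.0, 1.0 - ell)
        counts.append(sum(x0 <= x < x0 + ell and y0 <= y < y0 + ell
                          for x, y in points))
    return mean(counts), pvariance(counts)

rng = random.Random(4)
pts = [(rng.random(), rng.random()) for _ in range(5000)]
m, v = count_variance(pts, 0.1, 400, rng)
# For a uniform random (Poisson-like) pattern, v/m stays near 1 at all
# window sizes; hyperuniformity would show up as v/m -> 0 for large ell.
```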

  15. Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis

    PubMed Central

    Hensen, B.; Kalb, N.; Blok, M. S.; Dréau, A. E.; Reiserer, A.; Vermeulen, R. F. L.; Schouten, R. N.; Markham, M.; Twitchen, D. J.; Goodenough, K.; Elkouss, D.; Wehner, S.; Taminiau, T. H.; Hanson, R.

    2016-01-01

    The recently reported violation of a Bell inequality using entangled electronic spins in diamonds (Hensen et al., Nature 526, 682–686) provided the first loophole-free evidence against local-realist theories of nature. Here we report on data from a second Bell experiment using the same experimental setup with minor modifications. We find a violation of the CHSH-Bell inequality of 2.35 ± 0.18, in agreement with the first run, yielding an overall value of S = 2.38 ± 0.14. We calculate the resulting P-values of the second experiment and of the combined Bell tests. We provide an additional analysis of the distribution of settings choices recorded during the two tests, finding that the observed distributions are consistent with uniform settings for both tests. Finally, we analytically study the effect of particular models of random number generator (RNG) imperfection on our hypothesis test. We find that the winning probability per trial in the CHSH game can be bounded knowing only the mean of the RNG bias. This implies that our experimental result is robust for any model underlying the estimated average RNG bias, for random bits produced up to 690 ns too early by the random number generator. PMID:27509823

  16. Failure tolerance of spike phase synchronization in coupled neural networks

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2011-09-01

    Neuronal synchronization plays an important role in various functions of the nervous system, such as binding, cognition, information processing, and computation. In this paper, we investigated how random and intentional failures in the nodes of a network influence its phase-synchronization properties. We considered both artificially constructed networks, using models such as preferential attachment, Watts-Strogatz, and Erdős-Rényi, as well as a number of real neuronal networks. The failure strategy was either random or intentional, based on properties of the nodes such as degree, clustering coefficient, betweenness centrality, and vulnerability. The Hindmarsh-Rose model was used as the mathematical model for the individual neurons, and the phase synchronization of the spike trains was monitored as a function of the percentage/number of removed nodes. The numerical simulations were supplemented by considering coupled non-identical Kuramoto oscillators. Failures based on the clustering coefficient, i.e., removing the nodes with high values of the clustering coefficient, had the least effect on spike synchrony in all of the networks. This was followed by errors in which the nodes were removed randomly. However, the behavior of the other three attack strategies was not uniform across the networks, and different strategies were the most influential in different network structures.

  17. Phase information contained in meter-scale SAR images

    NASA Astrophysics Data System (ADS)

    Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda

    2007-10-01

    The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high-resolution urban remote sensing scenes, where a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" of how to analyze existing phase patterns that extend over neighboring pixels.

  18. Improving Unipolar Resistive Switching Uniformity with Cone-Shaped Conducting Filaments and Its Logic-In-Memory Application.

    PubMed

    Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei

    2018-02-21

    Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential to construct beyond von-Neumann computers. Particularly, unipolar RRAM is more promising because its single-polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of random activation of conducting filaments and consequently cannot meet the strict uniformity requirement for logic-in-memory application. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a peculiar metal cathode will react spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself can act as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments can be strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing their random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.

  19. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

    DTIC Science & Technology

    2015-12-01

    ARL-SR-0347 ● DEC 2015 ● US Army Research Laboratory: An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

  20. Random myosin loss along thick-filaments increases myosin attachment time and the proportion of bound myosin heads to mitigate force decline in skeletal muscle

    PubMed Central

    Tanner, Bertrand C.W.; McNabb, Mark; Palmer, Bradley M.; Toth, Michael J.; Miller, Mark S.

    2014-01-01

    Diminished skeletal muscle performance with aging, disuse, and disease may be partially attributed to the loss of myofilament proteins. Several laboratories have found a disproportionate loss of myosin protein content relative to other myofilament proteins, but due to methodological limitations, the structural manifestation of this protein loss is unknown. To investigate how variations in myosin content affect ensemble cross-bridge behavior and force production, we simulated muscle contraction in the half-sarcomere as myosin was removed either i) uniformly, from the Z-line end of thick-filaments, or ii) randomly, along the length of thick-filaments. Uniform myosin removal decreased force production, showing a slightly steeper force-to-myosin content relationship than the 1:1 relationship that would be expected from the loss of cross-bridges. Random myosin removal also decreased force production, but this decrease was less than observed with uniform myosin loss, largely due to increased myosin attachment time (ton) and fractional cross-bridge binding with random myosin loss. These findings support our prior observations that prolonged ton may augment force production in single fibers with randomly reduced myosin content from chronic heart failure patients. These simulations also illustrate that the pattern of myosin loss along thick-filaments influences ensemble cross-bridge behavior and maintenance of force throughout the sarcomere. PMID:24486373

  1. Scaling, clustering and avalanches for steel beads in an external magnetic field

    NASA Astrophysics Data System (ADS)

    Marquinez, Alyse; Thvedt, Ingrid; Lehman, S. Y.; Jacobs, D. T.

    2011-03-01

    We investigated avalanches using uniform 3mm steel spheres (``beads'') dropped onto a conical bead pile within a uniform magnetic field. The bead pile is built by pouring beads onto a circular base where the bottom layer of beads had been glued randomly. Beads are then individually dropped from a fixed height after which the pile is massed. This process is repeated for thousands of bead drops. By measuring the number of avalanches of a given size that occurred during the experiment, the resulting avalanche size distribution was compared to a power law description as predicted by self-organized criticality. As the magnetic field intensity increased, the beads clustered to give a larger angle of repose and we measured the change in the avalanche size distribution. The moments of the distribution give a sensitive test of mean-field theory as the universality class for these bead piles. We acknowledge support from Research Corporation and NSF-REU grant DMR 0649112.

  2. Role of corticosteroid as a prophylactic measure in fat embolism syndrome: a literature review.

    PubMed

    Sen, Ramesh K; Tripathy, Sujit K; Krishnan, Vibhu

    2012-06-01

    Despite a number of studies on steroid therapy as a prophylactic measure in fat embolism syndrome (FES), there is no universal agreement about its role in this critical situation. The present article attempts to search the available literature and provide a more lucid picture to the readers on this issue. Seven articles (483 patients in total) were reviewed and analyzed. A total of 223 patients received steroid (methylprednisolone sodium succinate), while the remaining 260 patients formed the control population. Among these subjects, 9 patients in the steroid-receiving group and 60 patients in the control group developed FES (P < 0.05). The lack of uniformity across these studies, the variable doses, and the single-center designs are the principal limitations and prevent surgeons from reaching a definite conclusion. Large-scale, more uniformly designed, multi-centered, randomized, prospective trials are needed to determine the correct situations and dosage in which steroids provide the maximum benefit (with the least possible risk).

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of a RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
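The broadcast process analyzed above (push rumor spreading on a random geometric graph) can be sketched directly. The node count, radius, and seed below are illustrative choices, not values from the paper:

```python
import math
import random

def rgg(n, r, rng):
    """Random geometric graph: n nodes uniform in the unit square,
    an edge between any two nodes at Euclidean distance <= r."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def reachable(adj, start):
    """Connected component of `start` (depth-first search)."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def push_broadcast(adj, start, rng, max_rounds=10000):
    """Each round, every informed node informs one uniformly random
    neighbor; returns the number of rounds until the whole component
    of `start` is informed."""
    informed = {start}
    component = reachable(adj, start)
    for t in range(1, max_rounds + 1):
        new = {rng.choice(adj[u]) for u in informed if adj[u]}
        informed |= new
        if component <= informed:
            return t
    return None

rng = random.Random(1)
adj = rgg(300, 0.15, rng)
rounds = push_broadcast(adj, 0, rng)
```

With these parameters the expected degree is well above the connectivity threshold, so the broadcast should complete in far fewer rounds than the O(√n/r) bound allows.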

  4. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ d and a random flight in ℝ d , which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007, and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained altogether for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk whose q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ3, two of two steps and two of three steps, and one walk of two steps in ℝ4. 
Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks whose step lengths are gamma distributed are considered in particular.
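The geometric identity quoted above for the two-step planar walk with q=1 (endpoint distributed like the projection onto the disc of a point uniform on the 3D unit sphere) is easy to check numerically. This is a quick Monte Carlo sketch, not the authors' derivation:

```python
import math
import random

rng = random.Random(42)
N = 200_000

def walk_radius():
    """Endpoint distance for a two-step planar walk: step lengths
    (U, 1-U) with U uniform (a Dirichlet(1,1) split of total length 1)
    and independent uniform directions."""
    u = rng.random()
    a1 = rng.uniform(0, 2 * math.pi)
    a2 = rng.uniform(0, 2 * math.pi)
    x = u * math.cos(a1) + (1 - u) * math.cos(a2)
    y = u * math.sin(a1) + (1 - u) * math.sin(a2)
    return math.hypot(x, y)

def projected_radius():
    """Distance from the z-axis of a point uniform on the 3-D unit
    sphere: cos(theta) is uniform on [-1, 1], radius is sin(theta)."""
    c = rng.uniform(-1, 1)
    return math.sqrt(1 - c * c)

m_walk = sum(walk_radius() for _ in range(N)) / N
m_proj = sum(projected_radius() for _ in range(N)) / N
# Both sample means should approach pi/4 = 0.785...
```

Comparing the full radial histograms, rather than just the means, would test the identity more stringently.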

  5. Systematic and random variations in digital Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Duggin, M. J. (Principal Investigator); Sakhavat, H.

    1985-01-01

    Radiance recorded by any remote sensing instrument will contain noise which will consist of both systematic and random variations. Systematic variations may be due to sun-target-sensor geometry, atmospheric conditions, and the interaction of the spectral characteristics of the sensor with those of upwelling radiance. Random variations in the data may be caused by variations in the nature and in the heterogeneity of the ground cover, by variations in atmospheric transmission, and by the interaction of these variations with the sensing device. It is important to be aware of the extent of random and systematic errors in recorded radiance data across ostensibly uniform ground areas in order to assess the impact on quantitative image analysis procedures for both the single-date and the multidate cases. It is the intention here to examine the systematic and the random variations in digital radiance data recorded in each band by the thematic mapper over crop areas which are ostensibly uniform and which are free from visible cloud.

  6. Extracting Information about the Rotator Cuff from Magnetic Resonance Images Using Deterministic and Random Techniques

    PubMed Central

    De Los Ríos, F. A.; Paluszny, M.

    2015-01-01

    We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281
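The abstract does not spell out its point-generation method, so as an illustrative sketch, here is a standard statistical technique for uniformly sampling a triangular patch: the square-root warp of two uniform variates into barycentric weights. The triangle and sample count are arbitrary assumptions:

```python
import random

def uniform_point_in_triangle(a, b, c, rng):
    """Map two uniform variates to a uniform point in triangle abc.
    The sqrt re-weights r1 so that area, not edge length, is uniform."""
    r1, r2 = rng.random(), rng.random()
    s = r1 ** 0.5
    w_a, w_b, w_c = 1 - s, s * (1 - r2), s * r2   # barycentric weights
    return tuple(w_a * a[k] + w_b * b[k] + w_c * c[k]
                 for k in range(len(a)))

rng = random.Random(0)
tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
pts = [uniform_point_in_triangle(*tri, rng) for _ in range(100_000)]
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
```

For a uniform distribution the sample mean should converge to the triangle centroid, here (1/3, 1/3); the same formula works for triangles embedded in 3D, as on the ellipsoidal patches described above.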

7. Dynamical and structural transitions in periodically-driven emulsions: Reversibility loss and random hyperuniform organization

    NASA Astrophysics Data System (ADS)

    Weijs, Joost H.; Jeanneret, Raphaël; Dreyfus, Rémi; Bartolo, Denis

    2015-03-01

    We present experiments and numerical simulations of a microfluidic echo process, in which a large number of droplets interact in a periodically driven viscous fluid [Jeanneret & Bartolo, Nature Comm. 5, 3474 (2013)]. Upon increasing the driving amplitude we demonstrate the collective reversibility loss of the droplet dynamics. In addition we show that this genuine dynamical phase transition is associated with a structural one: at the onset of irreversibility the droplet ensemble self-organises into a random hyperuniform state. Numerical simulations evidence that the purely reversible hydrodynamic interactions together with hard-core repulsion account for most of our experimental findings. Hyperuniform structures are relevant for the production of large-band-gap materials, but they are difficult to construct both numerically and experimentally. The hydrodynamic echo process may provide a robust, fast, and simple way to produce hyperuniform structures over a wide range of packing fractions.

  8. On random search: Collection kinetics of Paramecia into a trap embedded in a closed domain

    NASA Astrophysics Data System (ADS)

    Deforet, Maxime; Duplat, Jérôme; Vandenberghe, Nicolas; Villermaux, Emmanuel

    2010-06-01

    We study the kinetics of a large number of organisms initially spread uniformly in a circular two-dimensional medium, at the center of which a smaller circular trap has been introduced. We take advantage of the acidophily of Paramecium caudatum (which, coming from a neutral medium, penetrates a region of moderate acidity but moves back in the opposite situation, when it meets a sharp negative acidity gradient) to quantify its rate of irreversible aggregation into a spot of acidified medium in water. Two regimes are distinguished: a ballistic regime, characteristic of "fresh" paramecia, where the organisms swim in a straight path with a well-defined velocity, and a Brownian regime, characteristic of older paramecia, where the mean free path of the organisms is smaller than the system size. Both regimes are characterized by distinct aggregation laws. They both result from a pure random trapping process that appears to have no adaptive strategy.

  9. Emergence of cooperation with self-organized criticality

    NASA Astrophysics Data System (ADS)

    Park, Sangmin; Jeong, Hyeong-Chai

    2012-02-01

    Cooperation and self-organized criticality are two main keywords in current studies of evolution. We propose a generalized Bak-Sneppen model and provide a natural mechanism which accounts for both phenomena simultaneously. We use the prisoner's dilemma games to mimic the interactions among the members in the population. Each member is identified by its cooperation probability, and its fitness is given by the payoffs from neighbors. The least fit member with the minimum payoff is replaced by a new member with a random cooperation probability. When the neighbors of the least fit one are also replaced with a non-zero probability, a strong cooperation emerges. The Bak-Sneppen process builds a self-organized structure so that the cooperation can emerge even in the parameter region where a uniform or random population decreases the number of cooperators. The emergence of cooperation is due to the same dynamical correlation that leads to self-organized criticality in replacement activities.
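A minimal sketch of such a generalized Bak-Sneppen update, with prisoner's-dilemma payoffs on a ring, might look as follows. The payoff values, ring topology, and neighbor-replacement probability are illustrative assumptions, and whether cooperation actually emerges depends on the parameters:

```python
import random

def pd_payoff(coop, i, R=3.0, S=0.0, T=5.0, P=1.0):
    """Expected prisoner's-dilemma payoff of member i against its two
    ring neighbors, given everyone's cooperation probability."""
    n, total = len(coop), 0.0
    c = coop[i]
    for d in (-1, 1):
        q = coop[(i + d) % n]
        total += c * q * R + c * (1 - q) * S + (1 - c) * q * T \
                 + (1 - c) * (1 - q) * P
    return total

def step(coop, p_neighbor, rng):
    """One Bak-Sneppen update: replace the least-fit member (and, with
    probability p_neighbor, each of its two ring neighbors) by a new
    member with a fresh uniform cooperation probability."""
    n = len(coop)
    fitness = [pd_payoff(coop, i) for i in range(n)]
    worst = min(range(n), key=fitness.__getitem__)
    coop[worst] = rng.random()
    for d in (-1, 1):
        if rng.random() < p_neighbor:
            coop[(worst + d) % n] = rng.random()

rng = random.Random(7)
coop = [rng.random() for _ in range(50)]
for _ in range(5000):
    step(coop, 0.5, rng)
mean_coop = sum(coop) / len(coop)
```

Tracking `mean_coop` over time, with and without neighbor replacement (`p_neighbor` = 0.5 versus 0), is the natural way to probe the effect the abstract describes.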

  10. The Newcomb-Benford law in its relation to some common distributions.

    PubMed

    Formann, Anton K

    2010-05-07

    An often reported, but nevertheless persistently striking, observation, formalized as the Newcomb-Benford law (NBL), is that the frequencies with which the leading digits of numbers occur in a large variety of data are far from uniform. Most spectacular seems to be the fact that in many data the leading digit 1 occurs in nearly one third of all cases. Explanations for this uneven distribution of the leading digits were, among others, scale- and base-invariance. The interrelation between the distribution of the significant digits and the distribution of the observed variable has, however, received little attention. It is shown here by simulation that long right-tailed distributions of a random variable are compatible with the NBL, and that for distributions of the ratio of two random variables the fit generally improves. Distributions not putting most mass on small values of the random variable (e.g. symmetric distributions) fail to fit. Hence, the validity of the NBL requires the predominance of small values and, when thinking of real-world data, a majority of small entities. Analyses of data on stock prices, the areas and numbers of inhabitants of countries, and the starting page numbers of papers from a bibliography sustain this conclusion. In all, these findings may help to understand the mechanisms behind the NBL and the conditions needed for its validity. That this law is not only of scientific interest per se, but, in addition, has substantial implications can be seen from those fields where it was suggested to be put into practice. These fields range from the detection of irregularities in data (e.g. economic fraud) to optimizing the architecture of computers regarding number representation, storage, and round-off errors.
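The simulation claim above (long right-tailed distributions are compatible with the NBL) is easy to reproduce. Here is a hedged sketch using a wide lognormal, which is an illustrative choice rather than a distribution studied in the paper:

```python
import math
import random

def leading_digit(x):
    """First significant decimal digit of a positive number, read off
    the scientific-notation mantissa."""
    return int(f"{x:e}"[0])

rng = random.Random(3)
N = 100_000
# A long right-tailed sample: lognormal spanning many decades.
data = [math.exp(rng.gauss(0.0, 5.0)) for _ in range(N)]

counts = [0] * 10
for x in data:
    counts[leading_digit(x)] += 1
observed = [counts[d] / N for d in range(1, 10)]
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
```

The Benford prediction for digit 1 is log10(2) ≈ 0.301; a narrow or symmetric distribution (e.g. `rng.gauss(100, 1)`) substituted for `data` would fail the same comparison, matching the paper's conclusion.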

  11. Collision Models for Particle Orbit Code on SSX

    NASA Astrophysics Data System (ADS)

    Fisher, M. W.; Dandurand, D.; Gray, T.; Brown, M. R.; Lukin, V. S.

    2011-10-01

    Coulomb collision models are being developed and incorporated into the Hamiltonian particle pushing code (PPC) for applications to the Swarthmore Spheromak eXperiment (SSX). A Monte Carlo model based on that of Takizuka and Abe [JCP 25, 205 (1977)] performs binary collisions between test particles and thermal plasma field particles randomly drawn from a stationary Maxwellian distribution. A field-based electrostatic fluctuation model scatters particles from a spatially uniform random distribution of positive and negative spherical potentials generated throughout the plasma volume. The number, radii, and amplitude of these potentials are chosen to mimic the correct particle diffusion statistics without the use of random particle draws or collision frequencies. An electromagnetic fluctuating field model will be presented, if available. These numerical collision models will be benchmarked against known analytical solutions, including beam diffusion rates and Spitzer resistivity, as well as each other. The resulting collisional particle orbit models will be used to simulate particle collection with electrostatic probes in the SSX wind tunnel, as well as particle confinement in typical SSX fields. This work has been supported by US DOE, NSF and ONR.

  12. Percolation Thresholds in Angular Grain media: Drude Directed Infiltration

    NASA Astrophysics Data System (ADS)

    Priour, Donald

    Pores in many realistic systems are not well delineated channels, but are void spaces among grains, impermeable to charge or fluid flow, which comprise the medium. Sparse grain concentrations lead to permeable systems, while concentrations in excess of a critical density block bulk fluid flow. We calculate percolation thresholds in porous materials made up of randomly placed (and oriented) disks, tetrahedrons, and cubes. To determine if randomly generated finite system samples are permeable, we deploy virtual tracer particles which are scattered (e.g. specularly) by collisions with impenetrable angular grains. We hasten the rate of exploration (which would otherwise scale as ncoll^(1/2), where ncoll is the number of collisions with grains, if the tracers followed linear trajectories) by considering the tracer particles to be charged, in conjunction with a randomly directed uniform electric field. As in the Drude treatment, where a succession of many scattering events leads to a constant drift velocity, tracer displacements on average grow linearly in ncoll. By averaging over many disorder realizations for a variety of system sizes, we calculate the percolation threshold and critical exponent which characterize the phase transition.
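The Drude-style speed-up can be illustrated with a toy tracer: with isotropic scattering alone the displacement grows as ncoll^(1/2), while a small field-induced bias per free flight makes it grow linearly in ncoll. The step model and parameters below are illustrative assumptions, not the paper's geometry:

```python
import math
import random

def net_displacement(n_coll, drift, rng):
    """Tracer displacement after n_coll isotropic scattering events;
    `drift` is a small extra displacement per free flight along +x,
    standing in for the bias from a uniform field."""
    x = y = 0.0
    for _ in range(n_coll):
        ang = rng.uniform(0, 2 * math.pi)
        x += math.cos(ang) + drift
        y += math.sin(ang)
    return math.hypot(x, y)

rng = random.Random(5)
trials, n = 200, 10_000
undriven = sum(net_displacement(n, 0.0, rng)
               for _ in range(trials)) / trials
driven = sum(net_displacement(n, 0.1, rng)
             for _ in range(trials)) / trials
# undriven grows like sqrt(n); driven grows like drift * n
```

For n = 10,000 collisions the unbiased walk covers roughly 0.9·sqrt(n) ≈ 90 units, while the biased walk covers about drift·n = 1000, which is the "Drude directed" gain in exploration rate.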

  13. Detecting Spatial Patterns in Biological Array Experiments

    PubMed Central

    ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.

    2005-01-01

    Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
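A discrete-Fourier-transform check of this kind can be sketched for a 384-well (16 x 24) plate with a synthetic alternating-row error; the amplitudes and noise level are illustrative, not taken from the paper:

```python
import cmath
import random

rows, cols = 16, 24                       # a 384-well plate
rng = random.Random(0)

# Synthetic assay data: flat background plus an alternating-row error
# of the kind a misbehaving dispenser row might produce.
plate = [[rng.gauss(1.0, 0.05) + 0.5 * (r % 2) for _ in range(cols)]
         for r in range(rows)]
mean = sum(map(sum, plate)) / (rows * cols)

# Power spectrum along the row direction, averaged over columns.
power = [0.0] * (rows // 2 + 1)
for c in range(cols):
    col = [plate[r][c] - mean for r in range(rows)]
    for k in range(len(power)):
        X = sum(col[r] * cmath.exp(-2j * cmath.pi * k * r / rows)
                for r in range(rows))
        power[k] += abs(X) ** 2 / cols

peak = max(range(1, len(power)), key=power.__getitem__)  # skip DC bin
```

The alternating-row error concentrates power in the Nyquist bin (k = rows/2 = 8), standing far above the spatially random background, which is exactly the signature such a test flags automatically.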

  14. Comparison of Turbulent Heat-Transfer Results for Uniform Wall Heat Flux and Uniform Wall Temperature

    NASA Technical Reports Server (NTRS)

    Siegel, R.; Sparrow, E. M.

    1960-01-01

    The purpose of this note is to examine in a more precise way how the Nusselt numbers for turbulent heat transfer in both the fully developed and thermal entrance regions of a circular tube are affected by two different wall boundary conditions. The comparisons are made for: (a) uniform wall temperature (UWT); and (b) uniform wall heat flux (UHF). Several papers have been concerned with the turbulent thermal entrance region problem. Although these analyses have all utilized an eigenvalue formulation for the thermal entrance region, there were differences in the choices of eddy diffusivity expressions, velocity distributions, and methods for carrying out the numerical solutions. These differences were also found in the fully developed analyses. Hence, when making a comparison of the analytical results for uniform wall temperature and uniform wall heat flux, it was not known whether differences in the Nusselt numbers could be wholly attributed to the difference in wall boundary conditions, since the analytical results were not all obtained in a consistent way. To have results which could be directly compared, computations were carried out for the uniform wall temperature case using the same eddy diffusivity, velocity distribution, and digital computer program employed for uniform wall heat flux. In addition, the previous work was extended to a lower Reynolds number range so that comparisons could be made over a wide range of both Reynolds and Prandtl numbers.

  15. Mirror Numbers and Wigner's ``Unreasonable Effectiveness''

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander

    2006-04-01

    Wigner's ``unreasonable effectiveness of mathematics in physics'' can be augmented by the concept of a mirror number (MN). It is defined as a digital string infinite in both directions. An example is ()5141327182(), where the first 5 digits are Pi ``spelled'' backward (``mirrored'') and the last 5 digits are the beginning of the decimal string of exp(1). Let an MN be constructed from two different transcendental (or algebraically irrational) numbers; the set of such MNs is Cantor-uncountable. Most MNs contain any finite digital sequence repeated infinitely many times. In the spirit of ``Contact'' (C. Sagan), each normal MN contains a ``Library of Babel'' of all possible texts and patterns (J. L. Borges). Infinite at both ends, MNs do not have any numerical values and, contrary to numbers written in positional systems, all digits in MNs have equal weight -- a sort of ``numerological democracy''. In Pythagorean-Platonic models (space-time and the physical world originating from pure numbers), the idea of the MN resolves the paradox of the ``beginning'' (or ``end'') of time. Because in MNs all digits have equal status, (quantum) randomness leads to more uniform and fully ergodic phase trajectories (cf. F. Dyson, Infinite in All Directions).

  16. Test Method for the Fatigue Life of Layered TiB/Ti Functionally Graded Beams Subjected to Fully Reversed Bending

    NASA Astrophysics Data System (ADS)

    Byrd, Larry; Rickerd, Greg; Wyen, Travis; Cooley, Glenn; Quast, Jeff

    2008-02-01

    Sonic fatigue of aircraft is characterized by fully reversed bending of components subjected to acoustic excitation. This problem is compounded in high temperature environments because solutions for acoustics which tend to result in stiff structures make thermal problems worse. Conversely solutions to the thermal problem which allow expansion often fail in the presence of high acoustic levels. Errors in fatigue life prediction in the combined environment often range from a factor of 4 to 10. This results in either heavy, overly stiff structure or premature failure. This work will test the hypothesis that the fatigue life of a layered functionally graded material (FGM) will be dominated by the failure of the stiffest outer layer. This is based on the observation that for isotropic materials the life is approximately 90% crack initiation and only 10% crack growth before failure. Four sets of cantilever specimens will be tested using an electro-mechanical shaker for base excitation. The excitation will be narrow band random around the fundamental frequency. Two sets of specimens are of uniform composition consisting of 85%TiB/Ti and two are graded specimens consisting of layers that vary from commercially pure titanium to 85%TiB/Ti. Strain vs number of cycles to failure curves will be generated with both constant amplitude sine and narrow band random around the fundamental frequency excitation. The results will be examined to compare life of the uniform material to the functionally graded material. Also to be studied will be the use of Miner's rule to predict the fatigue life of the randomly excited specimens.

  17. Optimizing the LSST Dither Pattern for Survey Uniformity

    NASA Astrophysics Data System (ADS)

    Awan, Humna; Gawiser, Eric J.; Kurczynski, Peter; Carroll, Christopher M.; LSST Dark Energy Science Collaboration

    2015-01-01

    The Large Synoptic Survey Telescope (LSST) will gather detailed data of the southern sky, enabling unprecedented study of Baryonic Acoustic Oscillations, which are an important probe of dark energy. These studies require a survey with highly uniform depth, and we aim to find an observation strategy that optimizes this uniformity. We have shown that in the absence of dithering (large telescope-pointing offsets), the LSST survey will vary significantly in depth. Hence, we implemented various dithering strategies, including random and repulsive random pointing offsets and spiral patterns with the spiral reaching completion in either a few months or the entire ten-year run. We employed three different implementations of dithering strategies: a single offset assigned to all fields observed on each night, offsets assigned to each field independently whenever the field is observed, and offsets assigned to each field only when the field is observed on a new night. Our analysis reveals that large dithers are crucial to guarantee survey uniformity and that assigning dithers to each field independently whenever the field is observed significantly increases this uniformity. These results suggest paths towards an optimal observation strategy that will enable LSST to achieve its science goals.We gratefully acknowledge support from the National Science Foundation REU program at Rutgers, PHY-1263280, and the Department of Energy, DE-SC0011636.
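The effect of per-field random dithering on depth uniformity can be illustrated with a one-dimensional toy survey; the field spacing, field width, and offset range below are arbitrary assumptions, not LSST values:

```python
import random

def depth_map(dither, n_visits=100, seed=0):
    """Depth accumulated on a 1-D strip of sky by fields of half-width
    1.0 centred every 1.5 units; on each visit each field may be
    shifted by a uniform random dither offset."""
    rng = random.Random(seed)
    centers = [1.5 * k for k in range(12)]
    grid = [2.0 + 0.02 * i for i in range(551)]   # x in [2, 13]
    depth = [0] * len(grid)
    for _ in range(n_visits):
        for c in centers:
            off = rng.uniform(-dither, dither) if dither else 0.0
            for g, x in enumerate(grid):
                if abs(x - (c + off)) < 1.0:
                    depth[g] += 1
    return depth

def rel_std(d):
    """Relative standard deviation of the depth map (lower = more
    uniform survey depth)."""
    m = sum(d) / len(d)
    return (sum((v - m) ** 2 for v in d) / len(d)) ** 0.5 / m

undithered = rel_std(depth_map(0.0))
dithered = rel_std(depth_map(0.75))
```

Without dithering, the fixed field-overlap pattern makes one third of the strip twice as deep as the rest; independent per-visit offsets smear that pattern out, cutting the depth variation dramatically, which is the qualitative effect the study quantifies for LSST.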

  18. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution at rate of 10 MHz. Using Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
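The pipeline described above (each output bit drawn with a conditional probability given the bits already fixed) can be sketched in software. The triangular target distribution is an illustrative assumption, not the circuit's specified distribution:

```python
import random

def make_sampler(pmf):
    """Bit-serial sampler: an 8-bit value is generated MSB-first, each
    bit drawn with its conditional probability given the bits already
    fixed, mirroring a pipelined hardware generator."""
    cum = [0.0]                         # prefix sums of the pmf
    for p in pmf:
        cum.append(cum[-1] + p)

    def sample(rng):
        lo, hi = 0, 256
        while hi - lo > 1:
            mid = (lo + hi) // 2
            # P(next bit = 1 | bits so far) = mass of upper half
            p_one = (cum[hi] - cum[mid]) / (cum[hi] - cum[lo])
            if rng.random() < p_one:
                lo = mid                # bit 1: keep upper half
            else:
                hi = mid                # bit 0: keep lower half
        return lo

    return sample

# Illustrative nonuniform target: triangular distribution on 0..255.
weights = [256 - k for k in range(256)]
total = sum(weights)
pmf = [w / total for w in weights]

rng = random.Random(9)
sample = make_sampler(pmf)
draws = [sample(rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)          # exact target mean is 85.0
```

In hardware the same conditional probabilities would be stored in the memories and compared against the uniform generator outputs; here a single uniform variate per bit plays that role.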

  19. Nifedipine capsules may provide a viable alternative to oral powders for paediatric patients.

    PubMed

    Helin-Tanninen, M; Naaranlahti, T; Kontra, K; Savolainen, K

    2007-02-01

To compare content uniformities between different sizes of extemporaneously compounded nifedipine oral powders and capsules, in order to find out if capsules could be used instead of oral powders as paediatric medications. Actual content and content uniformity of extemporaneously compounded 1-mg nifedipine oral powders and capsules were evaluated by a high performance liquid chromatographic assay. Capsules and powders were prepared by triturating 10-mg nifedipine tablets with different amounts of lactose or microcrystalline cellulose with a mortar and pestle using a standard geometric dilution technique. Oral powders were weighed individually and capsules were filled by a hand-operated capsule-filling machine. Four different sizes of powders (500, 300, 100 and 50 mg) and three different sizes of capsules (numbers 1, 3 and 4) were prepared. Ten oral powders and 10 capsules from each batch were randomly selected and individually assayed for nifedipine amount. The extemporaneously prepared nifedipine oral powders and capsules were within acceptable limits for content uniformity, as defined by the European Pharmacopoeia, but the results indicate that the loss of nifedipine during the preparation process may be considerable for both preparations. The concentration of nifedipine decreased as the total mass of the oral powder decreased. These results demonstrate that nifedipine oral powders can be replaced by capsules, whose contents are emptied for use, in paediatric medications. Compounding small capsules, such as size number 3 or 4, is acceptable when considering the average drug content. The total weight of the oral powder should be at least 300 mg. The preparation of nifedipine in all studied capsule sizes was safe with either lactose monohydrate or microcrystalline cellulose as excipients. Thus, emptied capsules seem to be a good choice for delivering a paediatric medication. The loss of nifedipine was considerable in oral powders with low total weight.

  20. Statistical effects related to low numbers of reacting molecules analyzed for a reversible association reaction A + B = C in ideally dispersed systems: An apparent violation of the law of mass action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.

    2016-03-28

Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same equal number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher is the rate. This results in the correspondingly higher apparent equilibrium constant. A quite opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large-volume systems, being the lower, the lower is the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal (equal sizes of droplets) dispersing of a reaction mixture. Our simulations have shown that when the equilibrated large-volume system is dispersed, the resulting droplet system is already at equilibrium and no changes of proportions of droplets differing in reactant compositions can be observed upon prolongation of the reaction time.
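    A minimal stochastic sketch of the droplet setup (rate constants, droplet counts, and function names are illustrative; the paper's simulations are more elaborate): a Gillespie-type simulation of A + B ⇌ C in a single droplet, averaged over many droplets that all start with the same number of substrate molecules (the "uniform" case).

```python
import random

def gillespie_droplet(nA, nB, nC, kf, kr, t_end, rng):
    # Stochastic simulation (Gillespie) of A + B <-> C in one droplet.
    t = 0.0
    while True:
        a_f = kf * nA * nB      # association propensity
        a_r = kr * nC           # dissociation propensity
        a_tot = a_f + a_r
        if a_tot == 0:
            return nA, nB, nC
        t += rng.expovariate(a_tot)
        if t > t_end:
            return nA, nB, nC
        if rng.random() < a_f / a_tot:
            nA, nB, nC = nA - 1, nB - 1, nC + 1
        else:
            nA, nB, nC = nA + 1, nB + 1, nC - 1

def mean_C(n, droplets=2000, kf=1.0, kr=1.0, t_end=50.0):
    # Average equilibrium number of C over many droplets, each starting
    # with exactly n molecules of A and of B (uniform distribution case).
    rng = random.Random(1)
    return sum(gillespie_droplet(n, n, 0, kf, kr, t_end, rng)[2]
               for _ in range(droplets)) / droplets
```

    With one A and one B per droplet and kf = kr, the droplet alternates between states (1, 1, 0) and (0, 0, 1), so the average amount of C settles near one half.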

  1. Ultra-broadband and planar sound diffuser with high uniformity of reflected intensity

    NASA Astrophysics Data System (ADS)

    Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Yang, Jing; Yang, Jun; Cheng, Jian-Chun

    2017-09-01

    Schroeder diffusers, as a classical design of acoustic diffusers proposed over 40 years ago, play key roles in many practical scenarios ranging from architectural acoustics to noise control to particle manipulation. Despite the great success of conventional acoustic diffusers, it is still worth pursuing ideal acoustic diffusers that are essentially expected to produce perfect sound diffuse reflection within the unlimited bandwidth. Here, we propose a different mechanism for designing acoustic diffusers to overcome the basic limits in intensity uniformity and working bandwidth in the previous designs and demonstrate a practical implementation by acoustic metamaterials with dispersionless phase-steering capability. In stark contrast to the existing production of diffuse fields relying on random scattering of sound energy by using a specific mathematical number sequence of periodically distributed unit cells, we directly mold the reflected wavefront into the desired shape by precisely manipulating the local phases of individual subwavelength metastructures. We also benchmark our design via numerical simulation with a commercially available Schroeder diffuser, and the results verify that our proposed diffuser scatters incident acoustic energy into all directions more uniformly within an ultra-broad band regardless of the incident angle. Furthermore, our design enables further improvement of the working bandwidth just by simply downscaling each individual element. With ultra-broadband functionality and high uniformity of reflected intensity, our metamaterial-based production of the diffusive field opens a route to the design and application of acoustic diffusers and may have a significant impact on various fields such as architectural acoustics and medical ultrasound imaging/treatment.

  2. The effect of uniform color on judging athletes' aggressiveness, fairness, and chance of winning.

    PubMed

    Krenn, Bjoern

    2015-04-01

    In the current study we questioned the impact of uniform color in boxing, taekwondo and wrestling. On 18 photos showing two athletes competing, the hue of each uniform was modified to blue, green or red. For each photo, six color conditions were generated (blue-red, blue-green, green-red and vice versa). In three experiments these 108 photos were randomly presented. Participants (N = 210) had to select the athlete that seemed to be more aggressive, fairer or more likely to win the fight. Results revealed that athletes wearing red in boxing and wrestling were judged more aggressive and more likely to win than athletes wearing blue or green uniforms. In addition, athletes wearing green were judged fairer in boxing and wrestling than athletes wearing red. In taekwondo we did not find any significant impact of uniform color. Results suggest that uniform color in combat sports carries specific meanings that affect others' judgments.

  3. An In-Depth Analysis of the Chung-Lu Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winlaw, M.; DeSterck, H.; Sanders, G.

    2015-10-28

In the classic Erdős–Rényi random graph model [5], each edge is chosen with uniform probability and the degree distribution is binomial, limiting the number of graphs that can be modeled using the Erdős–Rényi framework [10]. The Chung-Lu model [1, 2, 3] is an extension of the Erdős–Rényi model that allows for more general degree distributions. The probability of each edge is no longer uniform and is a function of a user-supplied degree sequence, which by design is the expected degree sequence of the model. This property makes it an easy model to work with theoretically, and since the Chung-Lu model is a special case of a random graph model with a given degree sequence, many of its properties are well known and have been studied extensively [2, 3, 13, 8, 9]. It is also an attractive null model for many real-world networks, particularly those with power-law degree distributions, and it is sometimes used as a benchmark for comparison with other graph generators despite some of its limitations [12, 11]. We know, for example, that the average clustering coefficient is too low relative to most real-world networks. As well, measures of affinity are also too low relative to most real-world networks of interest. However, despite these limitations, or perhaps because of them, the Chung-Lu model provides a basis for comparing new graph models.
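    The Chung-Lu construction itself is compact: given a user-supplied weight (expected-degree) sequence w, edge {i, j} is included independently with probability min(1, w_i w_j / Σw). A minimal sketch (a naive O(n²) loop; production generators use faster sampling schemes):

```python
import random

def chung_lu(weights, rng=None):
    # Chung-Lu random graph: edge {i, j} appears independently with
    # probability min(1, w_i * w_j / sum(w)), so that the expected
    # degree of node i is approximately w_i.
    rng = rng or random.Random(0)
    S = sum(weights)
    n = len(weights)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / S):
                edges.append((i, j))
    return edges
```

    With all weights equal the model reduces to Erdős–Rényi; a power-law weight sequence yields the heavy-tailed degree distributions mentioned above.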

  4. Protective role of curcumin against sulfite-induced structural changes in rats' medial prefrontal cortex.

    PubMed

    Noorafshan, Ali; Asadi-Golshan, Reza; Abdollahifar, Mohammad-Amin; Karbalay-Doust, Saied

    2015-08-01

Sodium metabisulfite as a food preservative can affect the central nervous system. Curcumin, the main ingredient of turmeric, has neuroprotective activity. This study was designed to evaluate the effects of sulfite and curcumin on the medial prefrontal cortex (mPFC) using stereological methods. Thirty rats were randomly divided into five groups. The rats in groups I-V received distilled water, olive oil, curcumin (100 mg/kg/day), sodium metabisulfite (25 mg/kg/day), and sulfite + curcumin, respectively, for 8 weeks. The brains were subjected to the stereological methods. Cavalieri and optical disector techniques were used to estimate the total volume of mPFC and the number of neurons and glial cells. Intersection counting was applied on the thick vertical uniform random sections to estimate the dendrite length and classify the spines. Non-parametric tests were used to analyze the data. The mean mPFC volume, number of neurons, number of glia, dendritic length, and total spines per neuron were 3.7 mm(3), 365,000, 180,000, 1820 µm, and 1700 in the distilled water group, respectively. A reduction was observed in the volume of mPFC (∼8%), number of neurons (∼15%), and number of glia (∼14%) in mPFC of the sulfite group compared to the control groups (P < 0.005). Besides, dendritic length per neuron (∼10%) and the total spines per neuron (mainly mushroom spines) (∼25%) were reduced in the sulfite group (P < 0.005). Sulfite induced structural changes in the mPFC, and curcumin had a protective role against these changes in the rats.

  5. Management of Brain Metastases.

    PubMed

    Jeyapalan, Suriya A.; Batchelor, Tracy

    2004-07-01

Advances in neurosurgery and the development of stereotactic radiosurgery have expanded treatment options available for patients with brain metastases. However, despite several randomized clinical trials and multiple uncontrolled studies, there is not a uniform consensus on the best treatment strategy for all patients with brain metastases. The heterogeneity of this patient population in terms of functional status, types of underlying cancers, status of systemic disease control, and number and location of brain metastases makes such consensus difficult. Nevertheless, in certain situations, there is Class I evidence that supports one approach or another. The primary objectives in the management of this patient population include improved duration and quality of survival. Very few patients achieve long-term survival after the diagnosis of a brain metastasis.

  6. Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles

    NASA Astrophysics Data System (ADS)

    Kobayashi, Naoki; Yamazaki, Hiroshi

    2018-01-01

    We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.

  7. Effects of clustered transmission on epidemic growth Comment on "Mathematical models to characterize early epidemic growth: A review" by Gerardo Chowell et al.

    NASA Astrophysics Data System (ADS)

    Merler, Stefano

    2016-09-01

    Characterizing the early growth profile of an epidemic outbreak is key for predicting the likely trajectory of the number of cases and for designing adequate control measures. Epidemic profiles characterized by exponential growth have been widely observed in the past and a grounding theoretical framework for the analysis of infectious disease dynamics was provided by the pioneering work of Kermack and McKendrick [1]. In particular, exponential growth stems from the assumption that pathogens spread in homogeneous mixing populations; that is, individuals of the population mix uniformly and randomly with each other. However, this assumption was readily recognized as highly questionable [2], and sub-exponential profiles of epidemic growth have been observed in a number of epidemic outbreaks, including HIV/AIDS, foot-and-mouth disease, measles and, more recently, Ebola [3,4].

  8. Shape and Displacement Fluctuations in Soft Vesicles Filled by Active Particles

    PubMed Central

    Paoluzzi, Matteo; Di Leonardo, Roberto; Marchetti, M. Cristina; Angelani, Luca

    2016-01-01

We investigate numerically the dynamics of shape and displacement fluctuations of two-dimensional flexible vesicles filled with active particles. At low concentration most of the active particles accumulate at the boundary of the vesicle where positive particle number fluctuations are amplified by trapping, leading to the formation of pinched spots of high density, curvature and pressure. At high concentration the active particles cover the vesicle boundary almost uniformly, resulting in fairly homogeneous pressure and curvature, and nearly circular vesicle shape. The change between polarized and spherical shapes is driven by the number of active particles. The center-of-mass of the vesicle performs a persistent random walk with a long time diffusivity that is strongly enhanced for elongated active particles due to orientational correlations in their direction of propulsive motion. In our model, shape-shifting induces directional sensing and the cell spontaneously migrates along the polarization direction. PMID:27678166

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMillan, Kyle; Marleau, Peter; Brubaker, Erik

In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin, highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction. However, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma-rays are being imaged, the optimum is seldom achieved. In this case more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.

  10. Non-Uniformly Sampled MR Correlated Spectroscopic Imaging in Breast Cancer and Nonlinear Reconstruction

    DTIC Science & Technology

    2017-10-01

AWARD NUMBER: W81XWH-16-1-0524. TITLE: Non-Uniformly Sampled MR Correlated Spectroscopic Imaging in Breast Cancer and Nonlinear Reconstruction. REPORT PERIOD COVERED: 30 Sep 2016 - 29 Sep 2017. The views expressed are those of the author(s) and should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation.

  11. Buttoned down: Are School Uniform Policies a Perfect Fit for All Students?

    ERIC Educational Resources Information Center

    Messitt, Maggie

    2013-01-01

    In the 1999-2000 school year, only about 12 percent of U.S. public schools required their students to wear uniforms. Since then, the number of schools requiring uniforms has risen. Uniform policies are now in place at about a fifth of all public schools in the United States--but do school uniforms really level the playing field? New research has…

  12. Overlooked Threats to Respondent Driven Sampling Estimators: Peer Recruitment Reality, Degree Measures, and Random Selection Assumption.

    PubMed

    Li, Jianghong; Valente, Thomas W; Shin, Hee-Sung; Weeks, Margaret; Zelenev, Alexei; Moothi, Gayatri; Mosher, Heather; Heimer, Robert; Robles, Eduardo; Palmer, Greg; Obidoa, Chinekwu

    2017-06-28

    Intensive sociometric network data were collected from a typical respondent driven sample (RDS) of 528 people who inject drugs residing in Hartford, Connecticut in 2012-2013. This rich dataset enabled us to analyze a large number of unobserved network nodes and ties for the purpose of assessing common assumptions underlying RDS estimators. Results show that several assumptions central to RDS estimators, such as random selection, enrollment probability proportional to degree, and recruitment occurring over recruiter's network ties, were violated. These problems stem from an overly simplistic conceptualization of peer recruitment processes and dynamics. We found nearly half of participants were recruited via coupon redistribution on the street. Non-uniform patterns occurred in multiple recruitment stages related to both recruiter behavior (choosing and reaching alters, passing coupons, etc.) and recruit behavior (accepting/rejecting coupons, failing to enter study, passing coupons to others). Some factors associated with these patterns were also associated with HIV risk.

  13. Signs of universality in the structure of culture

    NASA Astrophysics Data System (ADS)

    Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego

    2017-11-01

Understanding the dynamics of opinions, preferences and of culture as a whole requires more use of empirical data than has been done so far. It is clear that an important role in driving this dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits are fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated, from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short-term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity is in-between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that this dynamics might be characterized by criticality and involve mechanisms beyond social influence.

  14. Coverage-guaranteed sensor node deployment strategies for wireless sensor networks.

    PubMed

    Fan, Gaojuan; Wang, Ruchuan; Huang, Haiping; Sun, Lijuan; Sha, Chao

    2010-01-01

Deployment quality and cost are two conflicting aspects in wireless sensor networks. Random deployment, where the monitored field is covered by randomly and uniformly deployed sensor nodes, is an appropriate approach for large-scale network applications. However, their successful applications depend considerably on the deployment quality that uses the minimum number of sensors to achieve a desired coverage. Currently, the number of sensors required to meet the desired coverage is based on asymptotic analysis, which cannot meet deployment quality due to coverage overestimation in real applications. In this paper, we first investigate the coverage overestimation and address the challenge of designing coverage-guaranteed deployment strategies. To overcome this problem, we propose two deployment strategies, namely, the Expected-area Coverage Deployment (ECD) and BOundary Assistant Deployment (BOAD). The deployment quality of the two strategies is analyzed mathematically. Under the analysis, a lower bound on the number of deployed sensor nodes is given to satisfy the desired deployment quality. We justify the correctness of our analysis through rigorous proof, and validate the effectiveness of the two strategies through extensive simulation experiments. The simulation results show that both strategies alleviate the coverage overestimation significantly. In addition, we also evaluate the two proposed strategies in the context of a target detection application. The comparison results demonstrate that if the target appears at the boundary of the monitored region in a given random deployment, the average intrusion distance of BOAD is considerably shorter than that of ECD with the same desired deployment quality. In contrast, ECD has better performance in terms of the average intrusion distance when the invasion of the intruder is from the inside of the monitored region.
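    The asymptotic estimate that the paper argues overestimates real coverage is easy to state: a uniformly placed sensor of sensing area a misses a given point of a field of area A with probability 1 − a/A, so n sensors give expected coverage 1 − (1 − a/A)^n. A sketch of the resulting sensor-count estimate (boundary effects ignored; the function name is illustrative, and ECD/BOAD themselves are not reproduced here):

```python
import math

def sensors_needed(area_side, sensing_radius, target_coverage):
    # Expected-coverage estimate for uniform random deployment on a
    # square field: solve 1 - (1 - a/A)^n >= target_coverage for n.
    A = area_side ** 2
    a = math.pi * sensing_radius ** 2
    q = 1.0 - a / A                      # per-sensor miss probability
    return math.ceil(math.log(1.0 - target_coverage) / math.log(q))
```

    For a 100 x 100 field and sensing radius 10, this estimate asks for 73 sensors to reach 90% expected coverage; the paper's point is that boundary effects make the true coverage of such a deployment lower.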

  15. Antipersistent dynamics in kinetic models of wealth exchange

    NASA Astrophysics Data System (ADS)

    Goswami, Sanchari; Chatterjee, Arnab; Sen, Parongama

    2011-11-01

    We investigate the detailed dynamics of gains and losses made by agents in some kinetic models of wealth exchange. An earlier work suggested that a walk in an abstract gain-loss space can be conceived for the agents. For models in which agents do not save, or save with uniform saving propensity, the walk has diffusive behavior. For the case in which the saving propensity λ is distributed randomly (0≤λ<1), the resultant walk showed a ballistic nature (except at a particular value of λ*≈0.47). Here we consider several other features of the walk with random λ. While some macroscopic properties of this walk are comparable to a biased random walk, at microscopic level, there are gross differences. The difference turns out to be due to an antipersistent tendency toward making a gain (loss) immediately after making a loss (gain). This correlation is in fact present in kinetic models without saving or with uniform saving as well, such that the corresponding walks are not identical to ordinary random walks. In the distributed saving case, antipersistence occurs with a simultaneous overall bias.
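    A minimal sketch of this class of kinetic exchange model (a CCM-type rule with random saving propensities; parameters and names are illustrative, and the detailed statistics reported above are not reproduced here). The per-agent sequence of gain/loss signs is the "walk" whose lag-1 correlation diagnoses antipersistence:

```python
import random

def simulate(n=100, steps=20000, seed=0):
    # Kinetic wealth exchange: in each trade, agent i keeps a fraction
    # lambda_i of its wealth; the rest is pooled with partner j's
    # non-saved wealth and split randomly. Total wealth is conserved.
    rng = random.Random(seed)
    wealth = [1.0] * n
    lam = [rng.random() for _ in range(n)]   # random saving propensity
    signs = [[] for _ in range(n)]           # +1 gain / -1 loss walks
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        eps = rng.random()
        pool = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]
        new_i = lam[i] * wealth[i] + eps * pool
        signs[i].append(1 if new_i > wealth[i] else -1)
        signs[j].append(1 if (wealth[i] + wealth[j] - new_i) > wealth[j] else -1)
        wealth[j] = wealth[i] + wealth[j] - new_i
        wealth[i] = new_i
    return wealth, signs

def lag1(seq):
    # Lag-1 correlation of the gain/loss signs; negative values
    # indicate the antipersistent tendency discussed above.
    pairs = list(zip(seq, seq[1:]))
    return sum(a * b for a, b in pairs) / len(pairs)
```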

  16. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients from each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean lambda, which is included in the coded file to repeat the codebook generation process for decoding.
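    The uniform-to-Laplacian step can be sketched with the standard inverse-CDF transform (a hypothetical software analogue; the DCT weighting and HVS filtering stages are omitted, and the names are illustrative):

```python
import math
import random

def laplacian(mean, scale, rng):
    # Inverse-CDF transform: map a uniform variate u in (0, 1) to a
    # Laplacian variate centered on `mean` with scale parameter `scale`.
    u = rng.random()
    while u == 0.0:            # avoid log(0); random() is in [0, 1)
        u = rng.random()
    if u < 0.5:
        return mean + scale * math.log(2 * u)
    return mean - scale * math.log(2 * (1 - u))

def make_vectors(mean, scale, dim, count, seed=0):
    # Group Laplacian variates into `count` vectors of length `dim`,
    # as in the codebook-generation step described above.
    rng = random.Random(seed)
    return [[laplacian(mean, scale, rng) for _ in range(dim)]
            for _ in range(count)]
```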

  17. Accelerated 1H MRSI using randomly undersampled spiral-based k-space trajectories.

    PubMed

    Chatnuntawech, Itthi; Gagoski, Borjan; Bilgic, Berkin; Cauley, Stephen F; Setsompop, Kawin; Adalsteinsson, Elfar

    2014-07-30

To develop and evaluate the performance of an acquisition and reconstruction method for accelerated MR spectroscopic imaging (MRSI) through undersampling of spiral trajectories. A randomly undersampled spiral acquisition and sensitivity encoding (SENSE) with total variation (TV) regularization, random SENSE+TV, is developed and evaluated on single-slice numerical phantom, in vivo single-slice MRSI, and in vivo three-dimensional (3D)-MRSI at 3 Tesla. Random SENSE+TV was compared with five alternative methods for accelerated MRSI. For the in vivo single-slice MRSI, random SENSE+TV yields up to 2.7 and 2 times reduction in root-mean-square error (RMSE) of reconstructed N-acetyl aspartate (NAA), creatine, and choline maps, compared with the denoised fully sampled and uniformly undersampled SENSE+TV methods with the same acquisition time, respectively. For the in vivo 3D-MRSI, random SENSE+TV yields up to 1.6 times reduction in RMSE, compared with uniform SENSE+TV. Furthermore, by using random SENSE+TV, we have demonstrated on the in vivo single-slice and 3D-MRSI that acceleration factors of 4.5 and 4 are achievable with the same quality as the fully sampled data, as measured by RMSE of reconstructed NAA map, respectively. With the same scan time, random SENSE+TV yields lower RMSEs of metabolite maps than other methods evaluated. Random SENSE+TV achieves up to 4.5-fold acceleration with comparable data quality as the fully sampled acquisition. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.

  18. TOC: Table of Contents Practices of Primary Journals--Recommendations for Monolingual, Multilingual and International Journals.

    ERIC Educational Resources Information Center

    Juhasz, Stephen; And Others

Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected. The method of randomization is described. The samples were selected from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed; its purpose was to find uniformity and…

  19. Averaging in SU(2) open quantum random walk

    NASA Astrophysics Data System (ADS)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  20. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods using samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate (using iterative methods) confidence intervals for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and the numbers of failures in the samples.
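    The random-censoring model is straightforward to sketch: failure times drawn from a Weibull distribution by inverse CDF, each censored by an independent uniform censoring time (parameters and names are illustrative, not taken from the SSME study):

```python
import math
import random

def censored_weibull_sample(n, shape, scale, censor_max, seed=0):
    # Draw n Weibull(shape, scale) failure times via the inverse CDF
    # t = scale * (-ln(1 - u))**(1/shape), and censor each with an
    # independent Uniform(0, censor_max) time, as in a random
    # censoring model with uniformly distributed censoring times.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        u = rng.random()
        t_fail = scale * (-math.log(1 - u)) ** (1.0 / shape)
        t_cens = rng.uniform(0, censor_max)
        observed = min(t_fail, t_cens)
        data.append((observed, t_fail <= t_cens))  # (time, failed?)
    return data
```

    Shrinking `censor_max` raises the fraction of censored observations, reproducing the heavily censored, few-failure samples the study is concerned with.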

  1. Mining TCGA Data Using Boolean Implications

    PubMed Central

    Sinha, Subarna; Tsang, Emily K.; Zeng, Haoyang; Meister, Michela; Dill, David L.

    2014-01-01

    Boolean implications (if-then rules) provide a conceptually simple, uniform and highly scalable way to find associations between pairs of random variables. In this paper, we propose to use Boolean implications to find relationships between variables of different data types (mutation, copy number alteration, DNA methylation and gene expression) from the glioblastoma (GBM) and ovarian serous cystadenoma (OV) data sets from The Cancer Genome Atlas (TCGA). We find hundreds of thousands of Boolean implications from these data sets. A direct comparison of the relationships found by Boolean implications and those found by commonly used methods for mining associations show that existing methods would miss relationships found by Boolean implications. Furthermore, many relationships exposed by Boolean implications reflect important aspects of cancer biology. Examples of our findings include cis relationships between copy number alteration, DNA methylation and expression of genes, a new hierarchy of mutations and recurrent copy number alterations, loss-of-heterozygosity of well-known tumor suppressors, and the hypermethylation phenotype associated with IDH1 mutations in GBM. The Boolean implication results used in the paper can be accessed at http://crookneck.stanford.edu/microarray/TCGANetworks/. PMID:25054200
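    The core idea can be sketched on binary data: an implication such as "x = 1 ⇒ y = 1" holds over a sample exactly when the corresponding cell of the 2×2 contingency table is empty (a toy sketch; the TCGA analysis applies a statistical test to the cells rather than requiring exact emptiness):

```python
def boolean_implications(x, y):
    # Tabulate the 2x2 contingency table of two binary variables and
    # report which implications hold: "x=1 => y=1" holds iff the
    # (1, 0) cell is empty, and so on for the other directions.
    cells = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(x, y):
        cells[(a, b)] += 1
    found = []
    if cells[(1, 0)] == 0:
        found.append("x=1 => y=1")
    if cells[(0, 1)] == 0:
        found.append("y=1 => x=1")
    if cells[(0, 0)] == 0:
        found.append("x=0 => y=1")
    if cells[(1, 1)] == 0:
        found.append("x=1 => y=0")
    return found
```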

  2. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log- normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. This distance is denoted by new random variable D. 
The relevant probability density function is derived; it was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964), and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization per 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means the average ET distance is about 20 times the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this statistical Fermi paradox is highly innovative and fruitful for future research.
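
    The core of the SEH argument, that a product of many independent positive factors is approximately lognormal because the CLT applies to the sum of their logarithms, is easy to check numerically. The sketch below is illustrative only: the factor count matches Dole's ten, but the unit means and the 10% relative spread are stand-in assumptions, not the paper's actual astrobiological inputs.

```python
import math
import random
import statistics

def product_of_uniform_factors(n_factors=10, mean=1.0, rel_spread=0.10, rng=random):
    """Product of n_factors independent uniform variables, each centred
    on `mean` with relative standard deviation `rel_spread` (a uniform
    on [m - h, m + h] has standard deviation h / sqrt(3))."""
    h = rel_spread * mean * math.sqrt(3.0)
    prod = 1.0
    for _ in range(n_factors):
        prod *= rng.uniform(mean - h, mean + h)
    return prod

rng = random.Random(42)
samples = [product_of_uniform_factors(rng=rng) for _ in range(20000)]

# By the CLT, log(product) = sum of the logs is approximately normal,
# so the product itself is approximately lognormal.
logs = [math.log(s) for s in samples]
mu, sigma = statistics.fmean(logs), statistics.stdev(logs)
```

    A histogram of `logs` gives the expected Gaussian bell; exponentiating recovers the lognormal shape of the product.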

  3. Some aspects of the aeroacoustics of high-speed jets

    NASA Technical Reports Server (NTRS)

    Lighthill, James

    1993-01-01

    Some of the background to contemporary jet aeroacoustics is addressed. Then scaling laws for noise generation by low-Mach-number airflows and by turbulence convected at 'not so low' Mach numbers are reviewed. These laws take into account the influence of Doppler effects associated with the convection of aeroacoustic sources. Next, a uniformly valid Doppler-effect approximation exhibits the transition, with increasing Mach number of convection, from compact-source radiation at low Mach numbers to a statistical assemblage of conical shock waves radiated by eddies convected at supersonic speed. In jets, for example, supersonic eddy convection is typically found for jet exit speeds exceeding twice the atmospheric speed of sound. The lecture continues by describing a new dynamical theory of the nonlinear propagation of such statistically random assemblages of conical shock waves. It is shown, both by a general theoretical analysis and by an illustrative computational study, how their propagation is dominated by a characteristic 'bunching' process. That process, associated with a tendency for shock waves that have already formed unions with other shock waves to acquire an increased proneness to form further unions, acts so as to enhance the high-frequency part of the spectrum of noise emission from jets at these high exit speeds.

  4. Adapting radiotherapy to hypoxic tumours

    NASA Astrophysics Data System (ADS)

    Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag

    2006-10-01

    In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. 
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.

  5. Flexible sampling large-scale social networks by self-adjustable random walk

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Ke; Zhu, Jonathan J. H.

    2016-12-01

    Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data. Sampling perhaps becomes the only feasible solution to the problems. How to draw samples that can represent the underlying OSNs has remained a formidable task because of a number of conceptual and methodological reasons. Especially, most of the empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods, including uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We try to mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method has been able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
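
    The SARW method itself is not specified in the abstract, but one of the baselines it is compared against, the Metropolis-Hastings random walk (MHRW), can be sketched compactly. A plain random walk samples nodes in proportion to their degree; MHRW reweights the acceptance step so that the stationary distribution is uniform over nodes. The toy graph below is a hypothetical example, not OSN data.

```python
import random
from collections import Counter

def mhrw(adj, start, steps, rng=random):
    """Metropolis-Hastings random walk: propose a uniform neighbour and
    accept with probability min(1, deg(current)/deg(candidate)), which
    makes the stationary distribution uniform over nodes."""
    node = start
    visits = Counter()
    for _ in range(steps):
        cand = rng.choice(adj[node])
        if rng.random() < min(1.0, len(adj[node]) / len(adj[cand])):
            node = cand
        visits[node] += 1
    return visits

# Hypothetical toy graph: hub 0 joined to a 5-cycle of leaves 1..5.
adj = {0: [1, 2, 3, 4, 5],
       1: [0, 2, 5], 2: [0, 1, 3], 3: [0, 2, 4],
       4: [0, 3, 5], 5: [0, 4, 1]}
rng = random.Random(1)
steps = 60000
visits = mhrw(adj, 0, steps, rng)
shares = {v: c / steps for v, c in visits.items()}
```

    Although the hub has degree 5 and the leaves degree 3, each node's visit share approaches 1/6; a plain random walk would over-sample the hub by a factor of 5/3.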

  6. Cost-effectiveness of multidisciplinary management of Tinnitus at a specialized Tinnitus centre

    PubMed Central

    Cima, Rilana; Joore, Manuela; Maes, Iris; Scheyen, Dyon; Refaie, Amr El; Baguley, David M; Vlaeyen, Johan WS; Anteunis, Lucien

    2009-01-01

    Background Tinnitus is a common chronic health condition that affects 10% to 20% of the general population. Among severe sufferers it causes disability in various areas. As a result of the tinnitus, quality of life is often impaired. At present there is no cure or uniformly effective treatment, leading to fragmented and costly tinnitus care. Evidence suggests that a comprehensive multidisciplinary approach in treating tinnitus is effective. The main objective of this study is to examine the effectiveness, costs, and cost-effectiveness of a comprehensive treatment provided by a specialized tinnitus center versus usual care. This paper describes the study protocol. Methods/Design In a randomized controlled clinical trial 198 tinnitus patients will be randomly assigned to a specialized tinnitus care group or a usual care group. Adult tinnitus sufferers referred to the audiological centre are eligible. Included patients will be followed for 12 months. Primary outcome measure is generic quality of life (measured with the Health Utilities Index Mark III). Secondary outcomes are severity of tinnitus, general distress, tinnitus cognitions, tinnitus specific fear, and costs. Based on health state utility outcome data the number of patients to include is 198. Economic evaluation will be performed from a societal perspective. Discussion This is, to our knowledge, the first randomized controlled trial that evaluates a comprehensive treatment of tinnitus and includes a full economic evaluation from a societal perspective. If this intervention proves to be effective and cost-effective, implementation of this intervention is considered and anticipated. Trial Registration The trial has been registered at ClinicalTrials.gov. The trial registration number is NCT00733044. PMID:19210767

  7. Kinetic market models with single commodity having price fluctuations

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Chakrabarti, B. K.

    2006-12-01

    We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as that of the money distribution for a market with agents having random saving propensities.
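
    A minimal sketch of the money sector of such a kinetic market model, assuming the standard Chakraborti-Chakrabarti trade rule with a uniform saving propensity (the commodity and price dynamics of the paper are omitted):

```python
import random

def kinetic_exchange(n_agents=200, trades=200000, saving=0.5, rng=random):
    """Chakraborti-Chakrabarti kinetic exchange with uniform saving
    propensity: in each trade a random pair keeps the fraction `saving`
    of their own money and splits the pooled remainder at random."""
    money = [1.0] * n_agents
    for _ in range(trades):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - saving) * (money[i] + money[j])
        money[i], money[j] = (saving * money[i] + eps * pool,
                              saving * money[j] + (1.0 - eps) * pool)
    return money

rng = random.Random(7)
money = kinetic_exchange(rng=rng)
```

    Total money is conserved exactly by the trade rule. With a nonzero saving propensity the steady state is gamma-like, with its mode away from zero, so nearly destitute agents are rare, unlike the exponential distribution obtained for saving = 0.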

  8. Tomographical imaging using uniformly redundant arrays

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1979-01-01

    An investigation is conducted of the behavior of two types of uniformly redundant array (URA) when used for close-up imaging. One URA pattern is a quadratic residue array whose characteristics for imaging planar sources have been simulated by Fenimore and Cannon (1978), while the second is based on m-sequences, as simulated by Gunson and Polychronopulos (1976) and by MacWilliams and Sloane (1976). Close-up imaging is necessary in order to obtain depth information for tomographical purposes. The properties of the two URA patterns are compared with a random array of equal open area. The goal considered in the investigation is to determine if a URA pattern exists which has the desirable defocus properties of the random array while maintaining artifact-free image properties for in-focus objects.

  9. Metastable dynamical patterns and their stabilization in arrays of bidirectionally coupled sigmoidal neurons

    NASA Astrophysics Data System (ADS)

    Horikawa, Yo

    2013-12-01

    Transient patterns in a bistable ring of bidirectionally coupled sigmoidal neurons were studied. When the system had a pair of spatially uniform steady solutions, the instability of unstable spatially nonuniform steady solutions decreased exponentially with the number of neurons because of the symmetry of the system. As a result, transient spatially nonuniform patterns showed dynamical metastability: Their duration increased exponentially with the number of neurons and the duration of randomly generated patterns obeyed a power-law distribution. However, these metastable dynamical patterns were easily stabilized in the presence of small variations in coupling strength. Metastable rotating waves and their pinning in the presence of asymmetry in the direction of coupling and the disappearance of metastable dynamical patterns due to asymmetry in the output function of a neuron were also examined. Further, in a two-dimensional array of neurons with nearest-neighbor coupling, intrinsically one-dimensional patterns were dominant in transients, and self-excitation in these neurons affected the metastable dynamical patterns.

  10. Numerical study of Potts models with aperiodic modulations: influence on first-order transitions

    NASA Astrophysics Data System (ADS)

    Branco, Nilton; Girardi, Daniel

    2012-02-01

    We perform a numerical study of Potts models on a rectangular lattice with aperiodic interactions along one spatial direction. The number of states q is such that the transition is a first-order one for the uniform model. The Wolff algorithm is employed, for many lattice sizes, allowing for a finite-size scaling analysis to be carried out. Three different self-dual aperiodic sequences are employed, such that the exact critical temperature is known: this leads to precise results for the exponents. We analyze models with q=6 and q=15 and show that the Harris-Luck criterion, originally introduced in the study of continuous transitions, is obeyed also for first-order ones. The new universality class that emerges for relevant aperiodic modulations depends on the number of states of the Potts model, as obtained elsewhere for random disorder, and on the aperiodic sequence. We determine the occurrence of log-periodic behavior, as expected for models with aperiodically modulated interactions.

  11. Comparing three pedagogical approaches to psychomotor skills acquisition.

    PubMed

    Willis, Ross E; Richa, Jacqueline; Oppeltz, Richard; Nguyen, Patrick; Wagner, Kelly; Van Sickle, Kent R; Dent, Daniel L

    2012-01-01

    We compared traditional pedagogical approaches such as time- and repetition-based methods with proficiency-based training. Laparoscopic novices were assigned randomly to 1 of 3 training conditions. In experiment 1, participants in the time condition practiced for 60 minutes, participants in the repetition condition performed 5 practice trials, and participants in the proficiency condition trained until reaching a predetermined proficiency goal. In experiment 2, practice time and number of trials were equated across conditions. In experiment 1, participants in the proficiency-based training conditions outperformed participants in the other 2 conditions (P < .014); however, these participants trained longer (P < .001) and performed more repetitions (P < .001). In experiment 2, despite training for similar amounts of time and number of repetitions, participants in the proficiency condition outperformed their counterparts (P < .038). In both experiments, the standard deviations for the proficiency condition were smaller than the other conditions. Proficiency-based training results in trainees who perform uniformly and at a higher level than traditional training methodologies. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, V. A.

    2008-01-01

    There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origins of downward-propagating leaders and a lognormal distribution to generate the corresponding return stroke peak currents. Leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for N years with an assumed ground flash density, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
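
    A stripped-down version of such a Monte Carlo tool can be sketched as follows. The single-mast geometry, domain size, lognormal parameters (median 31 kA, sigma of ln I = 0.66, typical first-stroke values) and the striking-distance law r = 10·I^0.65 are illustrative assumptions standing in for the unspecified details of the actual program.

```python
import math
import random

def strike_probability(mast_height=30.0, half_width=200.0, flashes=20000,
                       median_kA=31.0, sigma_ln=0.66, rng=random):
    """Monte Carlo in the spirit of the abstract: leader origins uniform
    over a square, lognormal return stroke peak currents, and the
    electrogeometric striking distance r = 10 * I**0.65 (r in metres,
    I in kA).  A vertically descending leader attaches to the mast tip
    if it enters the tip's striking-distance sphere before descending
    to within r of the ground."""
    hits = 0
    for _ in range(flashes):
        x = rng.uniform(-half_width, half_width)
        y = rng.uniform(-half_width, half_width)
        current = rng.lognormvariate(math.log(median_kA), sigma_ln)
        r = 10.0 * current ** 0.65
        d = math.hypot(x, y)
        if r <= mast_height:
            # Tip sphere is reached above height r whenever d <= r.
            struck = d <= r
        else:
            # Tip reached before ground iff d^2 <= h * (2r - h).
            struck = d * d <= mast_height * (2.0 * r - mast_height)
        hits += struck
    return hits / flashes

rng = random.Random(0)
p = strike_probability(rng=rng)
```

    The same loop can accumulate the peak currents of the attaching strokes to obtain the conditional peak current distribution mentioned in the abstract.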

  13. Statistical mechanics of monatomic liquids

    NASA Astrophysics Data System (ADS)

    Wallace, Duane C.

    1997-10-01

    Two key experimental properties of elemental liquids, together with an analysis of the condensed-system potential-energy surface, lead us logically to the dynamical theory of monatomic liquids. Experimentally, the ion motional specific heat is approximately 3Nk for N ions, implying the normal modes of motion are approximately 3N independent harmonic oscillators. This implies the potential surface contains nearly harmonic valleys. The equilibrium configuration at the bottom of each valley is a "structure." Structures are crystalline or amorphous, and amorphous structures can have a remnant of local crystal symmetry, or can be random. The random structures are by far the most numerous, and hence dominate the statistical mechanics of the liquid state, and their macroscopic properties are uniform over the structure class, for large-N systems. The Hamiltonian for any structural valley is the static structure potential, a sum of harmonic normal modes, and an anharmonic correction. Again from experiment, the constant-density entropy of melting contains a universal disordering contribution of NkΔ, suggesting the random structural valleys are of universal number w^N, where ln w = Δ. Our experimental estimate for Δ is 0.80. In quasiharmonic approximation, the liquid theory for entropy agrees with experiment, for all currently analyzable experimental data at elevated temperatures, to within 1-2% of the total entropy. Further testable predictions of the theory are mentioned.

  14. The Supermarket Model with Bounded Queue Lengths in Equilibrium

    NASA Astrophysics Data System (ADS)

    Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.

    2018-04-01

    In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^(−α) and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^(−α+(k−1)β), and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
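
    The fixed-λ, fixed-d version of the model (the classical "power of d choices" setting, rather than the paper's n-dependent scalings) can be simulated with the embedded jump chain: the next event is an arrival with probability λn/(λn + B), where B is the number of busy servers, and otherwise a departure from a uniformly chosen busy server. The parameters below are illustrative.

```python
import random
import statistics

def supermarket(n=100, lam=0.9, d=2, events=60000, rng=random):
    """Join-the-shortest-of-d-queues, simulated via the embedded jump
    chain of the continuous-time model."""
    queues = [0] * n
    busy = 0
    for _ in range(events):
        if rng.random() < lam * n / (lam * n + busy):
            # Arrival: sample d distinct servers, join the least loaded.
            i = min(rng.sample(range(n), d), key=queues.__getitem__)
            busy += queues[i] == 0
            queues[i] += 1
        else:
            # Departure from a uniformly chosen busy server (this branch
            # is unreachable while busy == 0, since the arrival
            # probability is then 1).
            i = rng.choice([j for j in range(n) if queues[j] > 0])
            queues[i] -= 1
            busy -= queues[i] == 0
    return queues

rng = random.Random(2)
queues = supermarket(rng=rng)
mean_len = statistics.fmean(queues)
```

    With d = 2 and λ = 0.9 the equilibrium tail decays doubly exponentially (the fraction of queues of length at least i behaves like λ^(2^i − 1) in the mean-field limit), so queue lengths stay bounded at small values, in line with the bounded-queue-length theme of the paper.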

  15. Generalized Nonlinear Yule Models

    NASA Astrophysics Data System (ADS)

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-11-01

    With the aim of considering models related to random graphs growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random recognizing the contribution to the asymptotics and the finite time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model accounting for a nonlinear growth.

  16. Properties of plane discrete Poisson-Voronoi tessellations on triangular tiling formed by the Kolmogorov-Johnson-Mehl-Avrami growth of triangular islands

    NASA Astrophysics Data System (ADS)

    Korobov, A.

    2011-08-01

    Discrete uniform Poisson-Voronoi tessellations of two-dimensional triangular tilings resulting from the Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth of triangular islands have been studied. This shape of tiles and islands, rarely considered in the field of random tessellations, is prompted by the birth-growth process of Ir(210) faceting. The growth mode determines a triangular metric different from the Euclidean metric. Kinetic characteristics of tessellations appear to be metric sensitive, in contrast to area distributions. The latter have been studied for the variant of nuclei growth to the first impingement in addition to the conventional case of complete growth. Kiang conjecture works in both cases. The averaged number of neighbors is six for all studied densities of random tessellations, but neighbors appear to be mainly different in triangular and Euclidean metrics. Also, the applicability of the obtained results for simulating birth-growth processes when the 2D nucleation and impingements are combined with the 3D growth in the particular case of similar shape and the same orientation of growing nuclei is briefly discussed.

  17. Properties of plane discrete Poisson-Voronoi tessellations on triangular tiling formed by the Kolmogorov-Johnson-Mehl-Avrami growth of triangular islands.

    PubMed

    Korobov, A

    2011-08-01

    Discrete uniform Poisson-Voronoi tessellations of two-dimensional triangular tilings resulting from the Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth of triangular islands have been studied. This shape of tiles and islands, rarely considered in the field of random tessellations, is prompted by the birth-growth process of Ir(210) faceting. The growth mode determines a triangular metric different from the Euclidean metric. Kinetic characteristics of tessellations appear to be metric sensitive, in contrast to area distributions. The latter have been studied for the variant of nuclei growth to the first impingement in addition to the conventional case of complete growth. Kiang conjecture works in both cases. The averaged number of neighbors is six for all studied densities of random tessellations, but neighbors appear to be mainly different in triangular and Euclidean metrics. Also, the applicability of the obtained results for simulating birth-growth processes when the 2D nucleation and impingements are combined with the 3D growth in the particular case of similar shape and the same orientation of growing nuclei is briefly discussed.

  18. Amorphous topological insulators constructed from random point sets

    NASA Astrophysics Data System (ADS)

    Mitchell, Noah P.; Nash, Lisa M.; Hexner, Daniel; Turner, Ari M.; Irvine, William T. M.

    2018-04-01

    The discovery that the band structure of electronic insulators may be topologically non-trivial has revealed distinct phases of electronic matter with novel properties [1,2]. Recently, mechanical lattices have been found to have similarly rich structure in their phononic excitations [3,4], giving rise to protected unidirectional edge modes [5-7]. In all of these cases, however, as well as in other topological metamaterials [3,8], the underlying structure was finely tuned, be it through periodicity, quasi-periodicity or isostaticity. Here we show that amorphous Chern insulators can be readily constructed from arbitrary underlying structures, including hyperuniform, jammed, quasi-crystalline and uniformly random point sets. While our findings apply to mechanical and electronic systems alike, we focus on networks of interacting gyroscopes as a model system. Local decorations control the topology of the vibrational spectrum, endowing amorphous structures with protected edge modes, with a chirality of choice. Using a real-space generalization of the Chern number, we investigate the topology of our structures numerically, analytically and experimentally. The robustness of our approach enables the topological design and self-assembly of non-crystalline topological metamaterials on the micro and macro scale.

  19. Understanding spatial connectivity of individuals with non-uniform population density.

    PubMed

    Wang, Pu; González, Marta C

    2009-08-28

    We construct a two-dimensional geometric graph connecting individuals placed in space within a given contact distance. The individuals are distributed using a measured country's density of population. We observe that while large clusters (groups of connected individuals) emerge within some regions, they are trapped in detached urban areas owing to the low population density of the regions bordering them. To understand the emergence of a giant cluster that connects the entire population, we compare the empirical geometric graph with the one generated by placing the same number of individuals randomly in space. We find that, for small contact distances, the empirical distribution of population dominates the growth of connected components, but no critical percolation transition is observed, in contrast to the graph generated by a random distribution of population. Our results show that contact distances from real-world situations, as for WiFi and Bluetooth connections, fall in a zone where a fully connected cluster is not observed, hinting that human mobility must play a crucial role in the large-scale spreading of contact-based diseases and wireless viruses.
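
    The basic construction, a random geometric graph with a fixed contact distance, is straightforward to reproduce with uniformly placed points (the empirical population density used in the paper is replaced here by a uniform one, and the point count and radii are illustrative):

```python
import random
from collections import Counter

def largest_cluster_fraction(points, radius):
    """Fraction of nodes in the largest connected component of the
    geometric graph joining points closer than `radius`, found with
    union-find (path halving)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    r2 = radius * radius
    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                parent[find(j)] = find(i)
    sizes = Counter(find(i) for i in range(len(points)))
    return max(sizes.values()) / len(points)

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(600)]
sparse = largest_cluster_fraction(pts, 0.02)  # mean degree ~0.75: subcritical
dense = largest_cluster_fraction(pts, 0.08)   # mean degree ~12: giant cluster
```

    Below the continuum percolation threshold the largest cluster stays tiny; well above it, a single component absorbs most of the points, which is the transition the paper contrasts against the empirical, density-trapped case.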

  20. A New Family of Solvable Pearson-Dirichlet Random Walks

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2011-07-01

    An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma-distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d₀ and any n ≥ 2 when q is either q = d/2 − 1 (d₀ = 3) or q = d − 1 (d₀ = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + ⌊n/2⌋ endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + ⌊n/2⌋ densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend on both d and n, and Bessel numbers independent of d.
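
    Although the paper's results are exact closed forms, the unconstrained Pearson-Gamma walk is easy to simulate, and one elementary moment identity gives a useful sanity check: because the independent uniform orientations make the cross terms vanish in expectation, E[R²] = n·E[L²] = n·q(q+1) for Gamma(q,1) step lengths. A planar (d = 2) Monte Carlo sketch:

```python
import math
import random

def pearson_gamma_endpoint(n_steps, q, rng=random):
    """Endpoint of one planar Pearson-Gamma walk: n_steps independent
    steps with Gamma(q, 1) lengths and uniform orientations."""
    x = y = 0.0
    for _ in range(n_steps):
        length = rng.gammavariate(q, 1.0)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

rng = random.Random(5)
n, q, walks = 4, 2.0, 30000
mean_r2 = sum(x * x + y * y
              for x, y in (pearson_gamma_endpoint(n, q, rng)
                           for _ in range(walks))) / walks
# Expected value: n * q * (q + 1) = 24 for n = 4, q = 2.
```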

  1. The 1/N Expansion of Tensor Models Beyond Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Gurau, Razvan

    2014-09-01

    We analyze in full mathematical rigor the most general quartically perturbed invariant probability measure for a random tensor. Using a version of the Loop Vertex Expansion (which we call the mixed expansion) we show that the cumulants can be written as explicit series in 1/N plus bounded rest terms. The mixed expansion recasts the problem of determining the subleading corrections in 1/N into a simple combinatorial problem of counting trees decorated by a finite number of loop edges. As an aside, we use the mixed expansion to show that the (divergent) perturbative expansion of the tensor models is Borel summable and to prove that the cumulants respect a uniform scaling bound. In particular the quartically perturbed measures fall, in the N → ∞ limit, into the universality class of Gaussian tensor models.

  2. Bounds on the entanglement entropy of droplet states in the XXZ spin chain

    NASA Astrophysics Data System (ADS)

    Beaud, V.; Warzel, S.

    2018-01-01

    We consider a class of one-dimensional quantum spin systems on the finite lattice Λ ⊂ ℤ, related to the XXZ spin chain in its Ising phase. It includes in particular the so-called droplet Hamiltonian. The entanglement entropy of energetically low-lying states over a bipartition Λ = B ∪ B^c is investigated and proven to satisfy a logarithmic bound in terms of min{n, |B|, |B^c|}, where n denotes the maximal number of down spins in the considered state. Upon addition of any (positive) random potential, the bound becomes uniformly constant on average, thereby establishing an area law. The proof is based on spectral methods: a deterministic bound on the local (many-body integrated) density of states is derived from an energetically motivated Combes-Thomas estimate.

  3. TU-AB-201-04: Optimizing the Number of Catheter Implants and Their Tracks for Prostate HDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riofrio, D; Luan, S; Zhou, J

    Purpose: In prostate HDR brachytherapy, interstitial implants are placed manually on the fly. The aim of this research is to develop a computer algorithm that finds optimal and reliable implant trajectories using a minimal number of implants. Methods: Our new algorithm uses these key ideas: (1) Positively charged static particles are uniformly placed on the surface of the prostate and critical structures such as the urethra, bladder, and rectum. (2) Positively charged kinetic particles are placed at a cross-section of the prostate with an initial velocity parallel to the principal implant direction. (3) The kinetic particles move through the prostate, interacting with each other and spreading out, while staying away from the prostate surface and critical structures. The initial velocity ensures that the trajectories observe the curvature constraints of typical implant procedures. (4) The final trajectories of the kinetic particles are smoothed using a third-degree polynomial regression and become the implant trajectories. (5) The dwell times and final dose distribution are calculated using least-distance programming. Results: (1) We experimented with previously treated cases. Our plan achieves all prescription goals while reducing the number of implants by 41%. Our plan also has a less uniform target dose, which implies a higher dose is delivered to the prostate. (2) We expect future implant procedures to be performed under the guidance of such pre-calculated trajectories. To assess applicability, we randomly perturbed the tracks to mimic manual implant errors. Our studies showed that the impact of these perturbations is negligible and is compensated for by the least-distance programming. Conclusions: We developed a new inverse planning system for prostate HDR therapy that can find optimal implant trajectories while minimizing the number of implants.
    For future work, we plan to integrate our new inverse planning system with an existing needle tracking system.
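    The charged-particle idea in steps (1)-(4) can be sketched in a toy 2-D form. Everything here (circle geometry, charge constant, step count) is an illustrative assumption, not the authors' implementation: fixed charges sit on a circle standing in for the prostate surface, kinetic particles launched along the principal direction repel each other and the boundary, and each track is smoothed with a cubic fit.

    ```python
    # Toy 2-D sketch of the charged-particle trajectory idea (hypothetical
    # parameters; not the authors' implementation).
    import math
    import numpy as np

    def simulate_tracks(n_kinetic=5, n_surface=40, steps=200, dt=0.02, k=0.05):
        # step (1): fixed positive charges on the boundary (unit circle)
        surface = [(math.cos(2 * math.pi * i / n_surface),
                    math.sin(2 * math.pi * i / n_surface)) for i in range(n_surface)]
        # step (2): kinetic particles start below the circle, moving along +y
        # (the principal implant direction)
        pos = [np.array([-0.5 + i / (n_kinetic - 1), -0.9]) for i in range(n_kinetic)]
        vel = [np.array([0.0, 1.0]) for _ in range(n_kinetic)]
        paths = [[p.copy()] for p in pos]
        # step (3): particles repel each other and the surface charges
        for _ in range(steps):
            forces = []
            for i in range(n_kinetic):
                f = np.zeros(2)
                sources = surface + [tuple(pos[j]) for j in range(n_kinetic) if j != i]
                for qx, qy in sources:
                    d = pos[i] - np.array([qx, qy])
                    f += k * d / (float(d @ d) + 1e-6)   # softened 1/r repulsion
                forces.append(f)
            for i in range(n_kinetic):
                vel[i] = vel[i] + dt * forces[i]
                pos[i] = pos[i] + dt * vel[i]
                paths[i].append(pos[i].copy())
        # step (4): smooth each track with a third-degree polynomial x(y)
        fits = [np.polyfit([p[1] for p in path], [p[0] for p in path], 3)
                for path in paths]
        return paths, fits
    ```

    The cubic coefficients returned for each particle play the role of the smoothed implant trajectories.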

  4. Efficient Hierarchical Quorums in Unstructured Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Henry, Kevin; Swanson, Colleen; Xie, Qi; Daudjee, Khuzaima

    Managing updates in a peer-to-peer (P2P) network can be a challenging task, especially in the unstructured setting. If one peer reads or updates a data item, then it is desirable to read the most recent version or to have the update visible to all other peers. In practice, this should be accomplished by coordinating and writing to only a small number of peers. We propose two approaches, inspired by hierarchical quorums, to solve this problem in unstructured P2P networks. Our first proposal provides uniform load balancing, while the second sacrifices full load balancing for larger average quorum intersection, and hence greater tolerance to network churn. We demonstrate that applying a random logical tree structure to peers on a per-data-item basis allows us to achieve near optimal quorum size, thus minimizing the number of peers that must be coordinated to perform a read or write operation. Unlike previous approaches, our random hierarchical quorums are always guaranteed to overlap in at least one peer when all peers are reachable and, as demonstrated through performance studies, prove more resilient to changing network conditions than previous approaches with a similar quorum size, maximizing quorum intersection. Furthermore, our two quorum approaches are interchangeable within the same network, providing adaptivity by allowing one to be swapped for the other as network conditions change.
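    A minimal sketch of the per-item random logical tree idea, under our own simplifying assumptions (a ternary tree with majority-of-children quorums; not the paper's exact construction): hashing the data-item id seeds a deterministic shuffle, so every peer derives the same tree, and any two majority quorums over that tree intersect.

    ```python
    # Sketch of random hierarchical quorums over a per-item logical tree
    # (illustrative construction, not the paper's exact algorithm).
    import hashlib
    import random

    def item_tree(peers, item, branching=3):
        # deterministic per-item layout: hash the item id to seed the shuffle,
        # so every peer computes the same tree for the same data item
        rng = random.Random(int(hashlib.sha256(item.encode()).hexdigest(), 16))
        nodes = list(peers)
        rng.shuffle(nodes)
        while len(nodes) > 1:
            nodes = [nodes[i:i + branching] for i in range(0, len(nodes), branching)]
        return nodes[0]

    def quorum(node, rng):
        # a quorum is a majority of children, chosen recursively at every level
        if isinstance(node, str):          # leaf = a peer id
            return [node]
        chosen = rng.sample(node, len(node) // 2 + 1)
        out = []
        for child in chosen:
            out.extend(quorum(child, rng))
        return out
    ```

    With 9 peers this yields quorums of 4 peers (roughly n^0.63), and any two quorums share at least one peer by the pigeonhole argument at each level.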

  5. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 2: Analysis of the ERBE integrating sphere ground calibration

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1987-01-01

    An explicit solution of the spectral radiance leaving an arbitrary point on the wall of a spherical cavity with diffuse reflectivity is obtained. The solution is applicable to spheres with an arbitrary number of openings of any size and shape, an arbitrary number of light sources with possible non-diffuse characteristics, a non-uniform sphere wall temperature distribution, non-uniform and non-diffuse sphere wall emissivity and non-uniform but diffuse sphere wall spectral reflectivity. A general measurement equation describing the output of a sensor with a given field of view, angular and spectral response measuring the sphere output is obtained. The results are applied to the Earth Radiation Budget Experiment (ERBE) integrating sphere. The sphere wall radiance uniformity, loading effects and non-uniform wall temperature effects are investigated. It is shown that using appropriate interpretation and processing, a high-accuracy short-wave calibration of the ERBE sensors can be achieved.

  6. Assessing whether black uniforms affect the decisions of Turkish soccer referees: is finding of Frank and Gilovich's study valid for Turkish culture?

    PubMed

    Tiryaki, M Sefik

    2005-02-01

    Frank and Gilovich (1988) found that teams wearing black uniforms were penalized by referees more than teams not wearing black in the U.S. National Football League (NFL) and the U.S. National Hockey League (NHL). This finding was examined for referees in the Turkish Premier Soccer League (TPSL) for soccer teams wearing or not wearing black uniforms during actual games. The decisions of 30 male referees (ages 22-45 years, M = 34.8) were analyzed across a total of 2142 TPSL games played over 7 seasons. Using the numbers of red cards, yellow cards, and penalty kicks each team drew as penalty-decision criteria, no significant differences were found between Turkish soccer teams wearing black uniforms and those not. This result, which differed from that of Frank and Gilovich's work, is discussed from the social psychological point of view of different cultures and societies.

  7. Application of random coherence order selection in gradient-enhanced multidimensional NMR

    NASA Astrophysics Data System (ADS)

    Bostock, Mark J.; Nietlispach, Daniel

    2016-03-01

    Development of multidimensional NMR is essential to many applications, for example in high resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, adding a factor of 2^(N-1) to the number of free-induction decays which must be acquired, where N is the number of dimensions. Compressed sensing (CS) ℓ1-norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor of two reduction in the number of points for each indirect dimension (Maciejewski et al. 2011 PNAS 108 16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude-modulated data, the same principle is not easily extended to phase-modulated (P-/N-type) experiments where data are acquired in the form exp(iωt) or exp(-iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ1-norm approach to allow random coherence order selection (RCS) for phase-modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD).
With this method, the power of RQD can be extended to the full suite of experiments available to modern NMR spectroscopy, allowing resolution enhancements for all indirect dimensions; alone or in combination with NUS, RQD can be used to improve experimental resolution, or shorten experiment times, of considerable benefit to the challenging applications undertaken by modern NMR.
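    The CS ℓ1-norm reconstruction underlying this family of methods can be illustrated on a toy problem: recovering a sparse spectrum from an undersampled set of DFT measurements by iterative soft-thresholding (ISTA). This is a generic ℓ1 solver under illustrative parameters, not the authors' RQD processing.

    ```python
    # Toy CS l1 reconstruction: sparse spectrum from undersampled DFT rows,
    # solved by ISTA (generic sketch, not the paper's NMR pipeline).
    import numpy as np

    def ista_partial_dft(y, rows, n, lam=0.01, steps=400):
        # measurement operator: the selected rows of the unitary DFT matrix
        F = np.fft.fft(np.eye(n), axis=0)[rows] / np.sqrt(n)
        L = np.linalg.norm(F, 2) ** 2        # Lipschitz constant of the smooth part
        x = np.zeros(n, dtype=complex)
        for _ in range(steps):
            z = x - F.conj().T @ (F @ x - y) / L    # gradient step on 0.5*||Fx-y||^2
            mag = np.abs(z)
            # complex soft-thresholding (the l1 proximal operator)
            shrink = np.maximum(1.0 - (lam / L) / np.maximum(mag, 1e-12), 0.0)
            x = shrink * z
        return x
    ```

    On a noiseless 2-sparse spectrum with half the rows sampled, the support is recovered cleanly, mirroring how NUS plus ℓ1 minimisation beats the Nyquist grid.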

  8. Regular expansion solutions for small Peclet number heat or mass transfer in concentrated two-phase particulate systems

    NASA Technical Reports Server (NTRS)

    Yaron, I.

    1974-01-01

    Steady state heat or mass transfer in concentrated ensembles of drops, bubbles or solid spheres in uniform, slow viscous motion, is investigated. Convective effects at small Peclet numbers are taken into account by expanding the nondimensional temperature or concentration in powers of the Peclet number. Uniformly valid solutions are obtained, which reflect the effects of dispersed phase content and rate of internal circulation within the fluid particles. The dependence of the range of Peclet and Reynolds numbers, for which regular expansions are valid, on particle concentration is discussed.

  9. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with a sparse matrix.
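    The componentwise randomized update described above can be sketched as multiplicative-weights mirror descent on the unit simplex, with the randomization applied to the subgradient: only one uniformly chosen component is used per step, rescaled to stay unbiased. The step size and loss vector below are illustrative assumptions, not the paper's.

    ```python
    # Sketch of randomized componentwise mirror descent on the unit simplex
    # (entropy prox / multiplicative weights); illustrative parameters.
    import math
    import random

    def randomized_mirror_descent(c, steps=5000, eta=0.05, seed=0):
        """Minimize <c, x> over the simplex using one random gradient component per step."""
        rng = random.Random(seed)
        n = len(c)
        x = [1.0 / n] * n                  # start at the simplex centre
        avg = [0.0] * n
        for _ in range(steps):
            i = rng.randrange(n)           # randomly chosen component
            g_i = n * c[i]                 # unbiased one-component gradient estimate
            x[i] *= math.exp(-eta * g_i)   # entropy-prox (multiplicative) update
            s = sum(x)
            x = [v / s for v in x]
            avg = [a + v / steps for a, v in zip(avg, x)]
        return avg
    ```

    The averaged iterate concentrates on the coordinate with the smallest loss, the behaviour exploited when weighting expert decisions.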

  10. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
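    The core coordinate hit-and-run walk (without the rounding preprocessing, and not the published implementation) can be sketched for a small polytope {x : Ax ≤ b}: pick a coordinate direction, compute the feasible chord through the current point, and sample uniformly along it.

    ```python
    # Minimal coordinate hit-and-run sketch for uniform sampling of a polytope
    # {x : A x <= b}; toy version without the rounding step.
    import random

    def coordinate_hit_and_run(A, b, x0, steps, seed=0):
        rng = random.Random(seed)
        x = list(x0)
        n = len(x0)
        samples = []
        for _ in range(steps):
            i = rng.randrange(n)                 # random coordinate direction
            lo, hi = -1e18, 1e18
            for row, bk in zip(A, b):            # intersect the line with each halfspace
                coef = row[i]
                rest = sum(row[j] * x[j] for j in range(n) if j != i)
                if coef > 1e-12:
                    hi = min(hi, (bk - rest) / coef)
                elif coef < -1e-12:
                    lo = max(lo, (bk - rest) / coef)
            x[i] = rng.uniform(lo, hi)           # uniform point on the feasible chord
            samples.append(list(x))
        return samples
    ```

    On the unit square the chain mixes immediately (each update resamples one coordinate uniformly); the anisotropy of genome-scale flux sets is exactly what makes the rounding step of CHRR necessary in practice.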

  11. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

  12. Processing of laser formed SiC powder

    NASA Technical Reports Server (NTRS)

    Haggerty, J. S.; Bowen, H. K.

    1985-01-01

    Superior SiC characteristics can be achieved through the use of ideal constituent powders and careful post-synthesis processing steps. High purity SiC powders of approx. 1000 Å uniform diameter, nonagglomerated and spherical, were produced. This required major revision of the particle formation and growth model from one based on classical nucleation and growth to one based on collision and coalescence of Si particles followed by their carburization. Dispersions based on pure organic solvents as well as steric stabilization were investigated. Although stable dispersions were formed by both, subsequent part fabrication emphasized the pure solvents since fewer problems with drying and residuals of the high purity particles were anticipated. Test parts were made by the colloidal pressing technique; both the liquid filtration and consolidation (rearrangement) stages were modeled. Green densities corresponding to a random close packed structure (approx. 63%) were achieved; this highly perfect structure has a high, uniform coordination number (greater than 11) approaching the quality of an ordered structure without introducing domain boundary effects. After drying, parts were densified at temperatures ranging from 1800 to 2100 C. Optimum densification temperatures will probably be in the 1900 to 2000 C range based on these preliminary results, which showed that 2050 C samples had experienced substantial grain growth. Although overfired, the 2050 C samples exhibited excellent mechanical properties. Biaxial tensile strengths up to 714 MPa and Vickers hardness values of 2430 kg/mm² were both more typical of hot pressed than sintered SiC. Both result from the absence of large defects and the confinement of residual porosity (less than 2.5%) to small diameter, uniformly distributed pores.

  13. Tablet splitting and weight uniformity of half-tablets of 4 medications in pharmacy practice.

    PubMed

    Tahaineh, Linda M; Gharaibeh, Shadi F

    2012-08-01

    Tablet splitting is a common practice for multiple reasons including cost savings; however, it does not necessarily result in weight-uniform half-tablets. To determine weight uniformity of half-tablets resulting from splitting 4 products available in the Jordanian market and investigate the effect of tablet characteristics on weight uniformity of half-tablets. Ten random tablets each of warfarin 5 mg, digoxin 0.25 mg, phenobarbital 30 mg, and prednisolone 5 mg were weighed and split by 6 PharmD students using a knife. The resulting half-tablets were weighed and evaluated for weight uniformity. Other relevant physical characteristics of the 4 products were measured. The average tablet hardness of the sampled tablets ranged from 40.3 N to 68.9 N. Digoxin, phenobarbital, and prednisolone half-tablets failed the weight uniformity test; however, warfarin half-tablets passed. Digoxin, warfarin, and phenobarbital tablets had a score line and warfarin tablets had the deepest score line of 0.81 mm. Splitting warfarin tablets produces weight-uniform half-tablets that may possibly be attributed to the hardness and the presence of a deep score line. Digoxin, phenobarbital, and prednisolone tablet splitting produces highly weight variable half-tablets. This can be of clinical significance in the case of the narrow therapeutic index medication digoxin.

  14. The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.

    PubMed

    de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J

    2017-05-16

    Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces. In particular, we show how the surface is affected by a non-uniform distribution of phases.
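    The construction discussed above can be sketched in one dimension: a "surface" built from a Fourier expansion whose moduli follow a power law and whose phases are drawn uniformly. The exponent convention and mode count are illustrative assumptions.

    ```python
    # Sketch: a 1-D random surface from a Fourier expansion with power-law
    # moduli and uniform random phases (illustrative parameters).
    import math
    import random

    def random_surface(n_points=256, n_modes=64, hurst=0.7, seed=0):
        rng = random.Random(seed)
        modes = []
        for k in range(1, n_modes + 1):
            amp = k ** -(hurst + 0.5)                  # power-law modulus
            phase = rng.uniform(0.0, 2.0 * math.pi)    # uniform Fourier phase
            modes.append((k, amp, phase))
        surface = []
        for j in range(n_points):
            t = j / n_points
            h = sum(a * math.cos(2 * math.pi * k * t + p) for k, a, p in modes)
            surface.append(h)
        return surface
    ```

    Replacing the uniform phase draw with a biased one is the kind of perturbation whose effect on the surface the paper investigates.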

  15. Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field

    NASA Astrophysics Data System (ADS)

    Borelli, M. E. S.; Carneiro, C. E. I.

    1996-02-01

    We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δi, with probability distribution P( Δi) = pδ( Δi - Δ) + (1 - p) δ( Δi). We analyse the effects of randomness on the first-order surfaces of the Δ- T- H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.

  16. Comparison of two leading uniform theories of edge diffraction with the exact uniform asymptotic solution

    NASA Technical Reports Server (NTRS)

    Boersma, J.; Rahmat-Samii, Y.

    1980-01-01

    The diffraction of an arbitrary cylindrical wave by a half-plane has been treated by Rahmat-Samii and Mittra who used a spectral domain approach. In this paper, their exact solution for the total field is expressed in terms of a new integral representation. For large wave number k, two rigorous procedures are described for the exact uniform asymptotic expansion of the total field solution. The uniform expansions obtained are valid in the entire space, including transition regions around the shadow boundaries. The final results are compared with the formulations of two leading uniform theories of edge diffraction, namely, the uniform asymptotic theory and the uniform theory of diffraction. Some unique observations and conclusions are made in relating the two theories.

  17. Washing and changing uniforms: is guidance being adhered to?

    PubMed

    Potter, Yvonne Camilla; Justham, David

    To allay public apprehension regarding the risk of nurses' uniforms transmitting healthcare-associated infections (HCAIs), national and local guidelines have been issued to control their use, laundering and storage. This paper aims to measure the knowledge of registered nurses (RNs) and healthcare assistants (HCAs) working within a rural NHS foundation Trust, and their adherence to the local infection prevention and control (IPC) standard regarding uniforms, through a Trust-wide audit. Stratified random sampling selected 597 nursing staff and 399 responded (67%) by completing a short questionnaire based on the local standard. Responses were coded and transferred to SPSS (v. 17) for analysis. The audit found that nursing staff generally adhere to the guidelines, changing their uniforms daily and immediately upon accidental soiling, and wearing plastic aprons where indicated. At home, staff normally machine-wash and then iron their uniforms at the hottest setting. Nevertheless, few observe the local direction to place their newly laundered uniforms in protective covers. This paper recommends a re-audit to compare compliance rates with baseline figures, and further research into the reasons for non-compliance so that interventions for improvement, such as providing relevant staff education and re-introducing appropriate changing facilities, can be put in place.

  18. Role of work uniform in alleviating perceptual strain among construction workers.

    PubMed

    Yang, Yang; Chan, Albert Ping-Chuen

    2017-02-07

    This study aims to examine the benefits of wearing a new construction work uniform in real-work settings. A field experiment was executed with a randomized assignment of an intervention group to a newly designed uniform and a control group to a commercially available trade uniform. A total of 568 sets of physical, physiological, perceptual, and microclimatological data were obtained. A linear mixed-effects model (LMM) was built to examine the cause-effect relationship between the Perceptual Strain Index (PeSI) and heat stressors including wet bulb globe temperature (WBGT), estimated workload (relative heart rate), exposure time, trade, workplace, and clothing type. An interaction effect between clothing and trade revealed that perceptual strain of workers across four trades was significantly alleviated by 1.6-6.3 units in the intervention group. Additionally, the results of a questionnaire survey assessing the subjective sensations of the two uniforms indicated that wearing comfort was improved by 1.6-1.8 units when wearing the intervention type. This study not only provides convincing evidence of the benefits of wearing the newly designed work uniform in reducing perceptual strain but also heightens the value of the field experiment in heat stress intervention studies.

  19. Role of work uniform in alleviating perceptual strain among construction workers

    PubMed Central

    YANG, Yang; CHAN, Albert Ping-chuen

    2016-01-01

    This study aims to examine the benefits of wearing a new construction work uniform in real-work settings. A field experiment was executed with a randomized assignment of an intervention group to a newly designed uniform and a control group to a commercially available trade uniform. A total of 568 sets of physical, physiological, perceptual, and microclimatological data were obtained. A linear mixed-effects model (LMM) was built to examine the cause-effect relationship between the Perceptual Strain Index (PeSI) and heat stressors including wet bulb globe temperature (WBGT), estimated workload (relative heart rate), exposure time, trade, workplace, and clothing type. An interaction effect between clothing and trade revealed that perceptual strain of workers across four trades was significantly alleviated by 1.6–6.3 units in the intervention group. Additionally, the results of a questionnaire survey assessing the subjective sensations of the two uniforms indicated that wearing comfort was improved by 1.6–1.8 units when wearing the intervention type. This study not only provides convincing evidence of the benefits of wearing the newly designed work uniform in reducing perceptual strain but also heightens the value of the field experiment in heat stress intervention studies. PMID:27666953

  20. A new approach to evaluate gamma-ray measurements

    NASA Technical Reports Server (NTRS)

    Dejager, O. C.; Swanepoel, J. W. H.; Raubenheimer, B. C.; Vandervalt, D. J.

    1985-01-01

    Misunderstandings about the term random sample and its implications may easily arise. Conditions under which the phases, obtained from arrival times, do not form a random sample, and the dangers involved, are discussed. Watson's U² test for uniformity is recommended for light curves with duty cycles larger than 10%. Under certain conditions, non-parametric density estimation may be used to determine estimates of the true light curve and its parameters.
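    Watson's U² statistic itself is short to compute. The sketch below uses the textbook form U² = W² − n(ū − 1/2)², where W² is the Cramér-von Mises statistic of the sorted phases in [0, 1); treat the exact constants as an assumption to check against a standard reference before serious use.

    ```python
    # Watson's U^2 statistic for testing uniformity of phases on the circle
    # (textbook form; verify constants against a standard reference).
    def watson_u2(phases):
        u = sorted(phases)                 # phases assumed to lie in [0, 1)
        n = len(u)
        ubar = sum(u) / n
        # Cramer-von Mises statistic W^2
        w2 = 1.0 / (12 * n) + sum((u[i] - (2 * i + 1) / (2.0 * n)) ** 2
                                  for i in range(n))
        # Watson's correction makes the statistic origin-invariant on the circle
        return w2 - n * (ubar - 0.5) ** 2
    ```

    Perfectly evenly spaced phases attain the minimum value 1/(12n); clustered phases give a much larger statistic, signalling a pulsed light curve.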

  1. Genetic structure is determined by stochastic factors in a natural population of Drosophila buzzatii in Argentina.

    PubMed

    Vilardi, J C; Hasson, E; Rodriguez, C; Fanara, J J

    1994-01-01

    D. buzzatii is a cactophilic species associated with several cactaceae in Argentina. This particular ecological niche implies that this species is faced with a non-uniform environment constituted by discrete and ephemeral breeding sites, which are colonized by a finite number of inseminated females. The genetic consequences of this population structure upon the second chromosome polymorphism were investigated by means of F-statistics in a natural endemic population of Argentina. The present study suggests that differentiation of inversion frequencies in third instar larvae among breeding sites has taken place mainly at random and selection is not operating to determine the structure of this population. The average number of parents breeding on a single pad seems to be similar to the number colonizing Opuntia ficus indica rotting cladodes in Carboneras, a derived population from Spain. There is no significant excess of heterokaryotypes within pads or in the population as a whole. The results obtained in the present study suggest that the potential role of selective versus stochastic factors relative to the among pad heterogeneity in the population here studied is different from that of the Spanish population previously reported. Potential mechanisms responsible for these differences are discussed.

  2. Number of repetitions for evaluating technological traits in cotton genotypes.

    PubMed

    Carvalho, L P; Farias, F J C; Morello, C L; Rodrigues, J I S; Teodoro, P E

    2016-08-19

    With the changes in spinning technology, technological cotton traits, such as fiber length, fiber uniformity, fiber strength, fineness, fiber maturity, percentage of fibers, and short fiber index, are of great importance for selecting cotton genotypes. However, for accurate discrimination of genotypes, it is important that these traits are evaluated with the best possible accuracy. The aim of this study was to determine the number of measurements (repetitions) needed to accurately assess technological traits of cotton genotypes. Seven experiments were conducted in four Brazilian States (Ceará, Rio Grande do Norte, Goiás, and Mato Grosso do Sul). We used nine brown and two white colored fiber lines in a randomized block design with four replications. After verifying the assumptions of residual normality and homogeneity of variances, analysis of variance was performed to estimate the repeatability coefficient and to calculate the number of repetitions. Trials with four replications were found to be sufficient to identify superior cotton genotypes for all measured traits except short fiber index, with a selective accuracy >90% and at least 81% accuracy in predicting their actual value. These findings enable more accurate and reliable results in future research evaluating technological traits in cotton genotypes.

  3. Analysis of AIRS and IASI System Performance Under Clear and Cloudy Conditions

    NASA Technical Reports Server (NTRS)

    Aumann, Hartmut H.; Strow, L. Larrabee

    2010-01-01

    The radiometric and spectral system performance of space-borne infrared radiometers is generally specified and analyzed under strictly cloud-free, spatially uniform and warm conditions, with the assumption that the observed performance applies to the full dynamic range under clear and cloudy conditions and that random noise cancels in the evaluation of the radiometric accuracy. Such clear conditions are found in only one percent of the data. Ninety-nine percent of the data include clouds, which produce spatially highly non-uniform scenes with 11 μm window brightness temperatures as low as 200 K. We use AIRS and IASI radiance spectra to compare system performance under clear and a wide range of cloudy conditions. Although the two instruments are in polar orbits with the ascending nodes separated by four hours, daily averages already reveal surprisingly similar measurements. The AIRS and IASI radiometric performance based on the mean of large numbers of observations is comparable and agrees within 200 mK over a wide range of temperatures. There are also some unexpected differences at the 200-500 mK level, which are of significance for climate applications. The results were verified with data from July 2007 through January 2010, but many can already be gleaned from the analysis of a single day of data.

  4. Momentum distribution of the uniform electron gas: Improved parametrization and exact limits of the cumulant expansion

    NASA Astrophysics Data System (ADS)

    Gori-Giorgi, Paola; Ziesche, Paul

    2002-12-01

    The momentum distribution of the unpolarized uniform electron gas in its Fermi-liquid regime, n(k,rs), with the momenta k measured in units of the Fermi wave number kF and with the density parameter rs, is constructed with the help of the convex Kulik function G(x). It is assumed that n(0,rs),n(1±,rs), the on-top pair density g(0,rs), and the kinetic energy t(rs) are known (respectively, from accurate calculations for rs=1,…,5, from the solution of the Overhauser model, and from quantum Monte Carlo calculations via the virial theorem). Information from the high- and the low-density limit, corresponding to the random-phase approximation and to the Wigner crystal limit, is used. The result is an accurate parametrization of n(k,rs), which fulfills most of the known exact constraints. It is in agreement with the effective-potential calculations of Takada and Yasuhara [Phys. Rev. B 44, 7879 (1991)], is compatible with quantum Monte Carlo data, and is valid in the density range rs≲12. The corresponding cumulant expansions of the pair density and of the static structure factor are discussed, and some exact limits are derived.

  5. MATHEMATICAL ROUTINES FOR ENGINEERS AND SCIENTISTS

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The purpose of this package is to provide the scientific and engineering community with a library of programs useful for performing routine mathematical manipulations. This collection of programs enables scientists to concentrate on their work without having to write their own routines for solving common problems, thus saving considerable amounts of time. This package contains sixteen subroutines. Each is separately documented with descriptions of the invoking subroutine call, its required parameters, and a sample test program. The functions available include: maxima, minima, and sort of vectors; factorials; random number generation (uniform or Gaussian distribution); the complementary error function; the fast Fourier transform; Simpson's rule integration; matrix determinant and inversion; Bessel functions (the J Bessel function of any order, and the modified Bessel function of zero order); roots of a polynomial; roots of a non-linear equation; and the solution of first-order ordinary differential equations using Hamming's predictor-corrector method. There is also a subroutine for using a dot matrix printer to plot a given set of y values for a uniformly increasing x value. This package is written in FORTRAN 77 (Super Soft Small System FORTRAN compiler) for batch execution and has been implemented on the IBM PC computer series under MS-DOS with a central memory requirement of approximately 28K of 8-bit bytes for all subroutines. This program was developed in 1986.
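    The uniform/Gaussian generator routine might look like the following sketch: a linear congruential generator feeding the Box-Muller transform. The constants are classic textbook LCG parameters chosen for illustration, not necessarily those in the package's FORTRAN source.

    ```python
    # Sketch of a uniform + Gaussian random number routine: a 32-bit linear
    # congruential generator plus the Box-Muller transform (illustrative
    # constants, not the package's actual FORTRAN implementation).
    import math

    class LCG:
        def __init__(self, seed=12345):
            self.state = seed

        def uniform(self):
            # x <- (a*x + c) mod 2^32, scaled to [0, 1)
            self.state = (1664525 * self.state + 1013904223) % (2 ** 32)
            return self.state / 2 ** 32

    def gaussian_pair(rng):
        # Box-Muller: two independent N(0, 1) deviates from two uniforms
        u1 = max(rng.uniform(), 1e-12)   # avoid log(0)
        u2 = rng.uniform()
        r = math.sqrt(-2.0 * math.log(u1))
        return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)
    ```

    A quick moment check (mean near 0, variance near 1 for the Gaussian stream) is the usual smoke test for such routines.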

  6. Rigorous Results for the Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
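    The uniform reshuffling model is straightforward to simulate and exhibits the exponential limit described above: two agents chosen uniformly at random pool their money and split it by a uniform random fraction. Agent and transaction counts here are illustrative.

    ```python
    # Simulation of the (global, mean-field) uniform reshuffling model:
    # two uniformly chosen agents pool and uniformly re-split their money.
    import random

    def uniform_reshuffling(n_agents=1000, transactions=200000, m0=1.0, seed=0):
        rng = random.Random(seed)
        money = [m0] * n_agents
        for _ in range(transactions):
            i, j = rng.sample(range(n_agents), 2)   # global choice of the pair
            pool = money[i] + money[j]
            eps = rng.random()                      # uniform split fraction
            money[i], money[j] = eps * pool, (1 - eps) * pool
        return money
    ```

    Money is conserved exactly, and since the limiting distribution is exponential with mean m0, roughly 1 − 1/e ≈ 63% of agents end up below the mean.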

  7. Traffic signal inventory project

    DOT National Transportation Integrated Search

    2001-06-01

    The purpose of this study was to determine the level of compliance with the "Manual on Uniform Traffic Control Devices" (MUTCD) and other industry standards of traffic signals on the Iowa state highway system. Signals were randomly selected in cities...

  8. Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2018-03-01

    A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d≥ 2 . Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
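    Random sequential adsorption itself is easy to simulate directly (the paper's contribution is the solvable graph model, not this brute-force version): disks arrive at uniform positions and are accepted only if they overlap no previously accepted disk. The radius and attempt count are illustrative.

    ```python
    # Brute-force random sequential adsorption of disks in the unit square
    # (d = 2); illustrative parameters.
    import random

    def rsa_disks(radius=0.05, attempts=2000, seed=0):
        rng = random.Random(seed)
        accepted = []
        min_d2 = (2 * radius) ** 2        # centers closer than 2r overlap
        for _ in range(attempts):
            x, y = rng.random(), rng.random()
            if all((x - a) ** 2 + (y - b) ** 2 >= min_d2 for a, b in accepted):
                accepted.append((x, y))
        return accepted
    ```

    The spatial correlations among accepted disks are exactly what makes the accepted fraction hard to characterize analytically, motivating the corrected mean-field model.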

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs in two regimes that hold with high probability: (i) the RGG is connected; (ii) the RGG contains a giant component. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
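    The push algorithm described above can be sketched in a few lines; the graph, seed, and 6-cycle example below are illustrative choices, not the RGG instances analyzed in the paper:

```python
import random

def push_broadcast_rounds(adj, start=0, seed=3):
    """Classic push algorithm: starting from one informed node, every
    informed node picks one neighbor uniformly at random each round and
    informs it. Returns the number of rounds until all nodes are informed
    (assumes the graph given by adjacency lists is connected)."""
    rng = random.Random(seed)
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        newly_informed = {rng.choice(adj[v]) for v in informed if adj[v]}
        informed |= newly_informed
        rounds += 1
    return rounds

# On a cycle the broadcast time is of the order of the diameter,
# consistent with the Theta(diam(G)) bound stated above.
cycle = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
rounds = push_broadcast_rounds(cycle)
```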

  10. Large-size, high-uniformity, random silver nanowire networks as transparent electrodes for crystalline silicon wafer solar cells.

    PubMed

    Xie, Shouyi; Ouyang, Zi; Jia, Baohua; Gu, Min

    2013-05-06

    Metal nanowire networks are emerging as next generation transparent electrodes for photovoltaic devices. We demonstrate the application of random silver nanowire networks as the top electrode on crystalline silicon wafer solar cells. The dependence of transmittance and sheet resistance on the surface coverage is measured. Superior optical and electrical properties are observed due to the large-size, highly-uniform nature of these networks. When applying the nanowire networks on the solar cells with an optimized two-step annealing process, we achieved as large as 19% enhancement on the energy conversion efficiency. The detailed analysis reveals that the enhancement is mainly caused by the improved electrical properties of the solar cells due to the silver nanowire networks. Our result reveals that this technology is a promising alternative transparent electrode technology for crystalline silicon wafer solar cells.

  11. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

    The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations.
An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor becomes known. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million±200 million, and the average distance in between any couple of nearby habitable planets should be about 88 light years±40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 
2000 light years, respectively. That is, the average distance between neighboring ET civilizations is about 20 times larger than the average distance between any couple of adjacent habitable planets.
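    The core mechanism of the SEH, that a product of many positive random factors is approximately lognormal because the sum of their logarithms is approximately Gaussian by the CLT, can be checked numerically. In the sketch below the common factor mean is an arbitrary illustrative value (not one of Dole's estimates); only the 10% relative standard deviation follows the worked example in the text:

```python
import math
import random

def log_product_moments(n_factors=10, mean=2.0, rel_sd=0.10,
                        n_samples=20_000, seed=11):
    """Monte Carlo sketch of the SEH mechanism: sample products of
    independent uniform factors and return the mean, variance, and
    skewness of log(product). Near-zero skewness of the log is the
    lognormal signature."""
    rng = random.Random(seed)
    # A uniform variable on [m - h, m + h] has sd = h / sqrt(3).
    half_width = rel_sd * mean * math.sqrt(3.0)
    logs = []
    for _ in range(n_samples):
        product = 1.0
        for _ in range(n_factors):
            product *= rng.uniform(mean - half_width, mean + half_width)
        logs.append(math.log(product))
    m = sum(logs) / n_samples
    var = sum((x - m) ** 2 for x in logs) / n_samples
    skew = sum((x - m) ** 3 for x in logs) / (n_samples * var ** 1.5)
    return m, var, skew

m, var, skew = log_product_moments()
```

    With only ten factors the Gaussian approximation to the log is already quite good, which is why the Lyapunov/Lindeberg forms of the CLT suffice for the SEH.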

  12. Rhythmogenic neuronal networks, emergent leaders, and k-cores.

    PubMed

    Schwab, David J; Bruinsma, Robijn F; Feldman, Jack L; Levine, Alex J

    2010-11-01

    Neuronal network behavior results from a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a simplified model, based on the proposal of Feldman and Del Negro (FDN) [Nat. Rev. Neurosci. 7, 232 (2006)], of the preBötzinger Complex, a small neuronal network that participates in the control of the mammalian breathing rhythm through periodic firing bursts. The dynamics of this randomly connected network of identical excitatory neurons differ from those of a uniformly connected one. Specifically, network connectivity determines the identity of emergent leader neurons that trigger the firing bursts. When neuronal desensitization is controlled by the number of input signals to the neurons (as proposed by FDN), the network's collective desensitization--required for successful burst termination--is mediated by k-core clusters of neurons.

  13. Self-correcting random number generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Pooser, Raphael C.

    2016-09-06

    A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
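    The abstract does not disclose the specific correction mechanism. As one classical example of the kind of post-processing a self-correcting RNG could apply, the von Neumann extractor below removes bias from an independent but biased bit stream; it is offered purely as an illustration, not as the patented method:

```python
import random

def von_neumann_debias(bits):
    """Von Neumann extractor: read bits in non-overlapping pairs, map
    01 -> 0 and 10 -> 1, and discard 00 and 11. If the input bits are
    independent, the output is exactly unbiased even when the source
    itself is biased (at the cost of discarding most input bits)."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# A heavily biased (80% ones) but independent source still yields an
# unbiased output stream after correction.
rng = random.Random(5)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
corrected = von_neumann_debias(raw)
```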

  14. A kinetic theory treatment of heat transfer in plane Poiseuille flow with uniform pressure

    NASA Technical Reports Server (NTRS)

    Bahrami, Parviz A.

    1992-01-01

    Plane compressible Poiseuille flow with uniform pressure (Couette flow with stationary boundaries) is revisited where the Lees two-stream method with the Enskog equation of change is applied. Single particle velocity distribution functions are chosen, which preserve the essential physical features of this flow with arbitrary but uniform plate temperatures and gas pressure. Lower moments are shown to lead to expressions for the parameter functions, molecular number densities, and temperatures which are entirely in agreement with those obtained in the analysis of Lees for compressible plane Couette flow in the limit of low Mach number and vanishing mean gas velocity. Important simplifications result, which are helpful in gaining insight into the power of kinetic theory in fluid mechanics. The temperature distribution, heat flux, as well as density, are completely determined for the whole range of Knudsen numbers from free molecular flow to the continuum regime, when the pressure level is specified.

  15. Narrow linewidth short cavity Brillouin random laser based on Bragg grating array fiber and dynamical population inversion gratings

    NASA Astrophysics Data System (ADS)

    Popov, S. M.; Butov, O. V.; Chamorovski, Y. K.; Isaev, V. A.; Mégret, P.; Korobko, D. A.; Zolotovskii, I. O.; Fotiadi, A. A.

    2018-06-01

    We report on random lasing observed with 100-m-long fiber comprising an array of weak FBGs inscribed in the fiber core and uniformly distributed over the fiber length. Extended fluctuation-free oscilloscope traces highlight power dynamics typical for lasing. An additional piece of Er-doped fiber included into the laser cavity enables a stable laser generation with a linewidth narrower than 10 kHz.

  16. Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks

    DTIC Science & Technology

    2006-09-01

    time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until...expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical...random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by

  17. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
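    For contrast with the principled approach, the naive baseline is easy to state: draw a random relation and close it under reflexivity and transitivity. The sketch below (not the paper's inductive algorithm) always yields a quasi-order, but samples them with exactly the kind of bias the paper sets out to avoid:

```python
import random
from itertools import product

def random_quasi_order(n_items=5, p=0.3, seed=9):
    """Naive quasi-order sampler: include each ordered pair independently
    with probability p, then take the reflexive-transitive closure via
    Warshall's algorithm. Always a quasi-order, but not uniform over the
    set of all quasi-orders."""
    rng = random.Random(seed)
    rel = [[i == j or rng.random() < p for j in range(n_items)]
           for i in range(n_items)]
    # Warshall closure: k (the intermediate item) varies slowest.
    for k, i, j in product(range(n_items), repeat=3):
        if rel[i][k] and rel[k][j]:
            rel[i][j] = True
    return rel

q = random_quasi_order()
```

    The closure step is what skews the sample: dense relations collapse toward the same large quasi-orders, which is the transitivity-induced bias the inner-level correction in the paper addresses.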

  18. Toward a Principled Sampling Theory for Quasi-Orders.

    PubMed

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.

  19. School-based interventions for preventing HIV, sexually transmitted infections, and pregnancy in adolescents

    PubMed Central

    Mason-Jones, Amanda J; Sinclair, David; Mathews, Catherine; Kagee, Ashraf; Hillman, Alex; Lombard, Carl

    2016-01-01

    Background School-based sexual and reproductive health programmes are widely accepted as an approach to reducing high-risk sexual behaviour among adolescents. Many studies and systematic reviews have concentrated on measuring effects on knowledge or self-reported behaviour rather than biological outcomes, such as pregnancy or prevalence of sexually transmitted infections (STIs). Objectives To evaluate the effects of school-based sexual and reproductive health programmes on sexually transmitted infections (such as HIV, herpes simplex virus, and syphilis), and pregnancy among adolescents. Search methods We searched MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials (CENTRAL) for published peer-reviewed journal articles; and ClinicalTrials.gov and the World Health Organization's (WHO) International Clinical Trials Registry Platform for prospective trials; AIDS Education and Global Information System (AEGIS) and National Library of Medicine (NLM) gateway for conference presentations; and the Centers for Disease Control and Prevention (CDC), UNAIDS, the WHO and the National Health Service (NHS) Centre for Reviews and Dissemination (CRD) websites from 1990 to 7 April 2016. We handsearched the reference lists of all relevant papers. Selection criteria We included randomized controlled trials (RCTs), both individually randomized and cluster-randomized, that evaluated school-based programmes aimed at improving the sexual and reproductive health of adolescents. Data collection and analysis Two review authors independently assessed trials for inclusion, evaluated risk of bias, and extracted data. When appropriate, we obtained summary measures of treatment effect through a random-effects meta-analysis and we reported them using risk ratios (RR) with 95% confidence intervals (CIs). We assessed the certainty of the evidence using the GRADE approach. Main results We included eight cluster-RCTs that enrolled 55,157 participants. 
Five trials were conducted in sub-Saharan Africa (Malawi, South Africa, Tanzania, Zimbabwe, and Kenya), one in Latin America (Chile), and two in Europe (England and Scotland). Sexual and reproductive health educational programmes Six trials evaluated school-based educational interventions. In these trials, the educational programmes evaluated had no demonstrable effect on the prevalence of HIV (RR 1.03, 95% CI 0.80 to 1.32, three trials; 14,163 participants; low certainty evidence), or other STIs (herpes simplex virus prevalence: RR 1.04, 95% CI 0.94 to 1.15; three trials, 17,445 participants; moderate certainty evidence; syphilis prevalence: RR 0.81, 95% CI 0.47 to 1.39; one trial, 6977 participants; low certainty evidence). There was also no apparent effect on the number of young women who were pregnant at the end of the trial (RR 0.99, 95% CI 0.84 to 1.16; three trials, 8280 participants; moderate certainty evidence). Material or monetary incentive-based programmes to promote school attendance Two trials evaluated incentive-based programmes to promote school attendance. In these two trials, the incentives used had no demonstrable effect on HIV prevalence (RR 1.23, 95% CI 0.51 to 2.96; two trials, 3805 participants; low certainty evidence). Compared to controls, the prevalence of herpes simplex virus infection was lower in young women receiving a monthly cash incentive to stay in school (RR 0.30, 95% CI 0.11 to 0.85), but not in young people given free school uniforms (Data not pooled, two trials, 7229 participants; very low certainty evidence). One trial evaluated the effects on syphilis and the prevalence was too low to detect or exclude effects confidently (RR 0.41, 95% CI 0.05 to 3.27; one trial, 1291 participants; very low certainty evidence). However, the number of young women who were pregnant at the end of the trial was lower among those who received incentives (RR 0.76, 95% CI 0.58 to 0.99; two trials, 4200 participants; low certainty evidence). 
Combined educational and incentive-based programmes The single trial that evaluated free school uniforms also included a trial arm in which participants received both uniforms and a programme of sexual and reproductive education. In this trial arm herpes simplex virus infection was reduced (RR 0.82, 95% CI 0.68 to 0.99; one trial, 5899 participants; low certainty evidence), predominantly in young women, but no effect was detected for HIV or pregnancy (low certainty evidence). Authors' conclusions There is a continued need to provide health services to adolescents that include contraceptive choices and condoms and that involve them in the design of services. Schools may be a good place in which to provide these services. There is little evidence that educational curriculum-based programmes alone are effective in improving sexual and reproductive health outcomes for adolescents. Incentive-based interventions that focus on keeping young people in secondary school may reduce adolescent pregnancy but further trials are needed to confirm this. School-based interventions for preventing HIV, sexually transmitted infections, and pregnancy in adolescents Cochrane researchers conducted a review of the effects of school-based interventions for reducing HIV, sexually transmitted infections (STIs), and pregnancy in adolescents. After searching for relevant trials up to 7 April 2016, they included eight trials that had enrolled 55,157 adolescents. Why is this important and how might school-based programmes work? Sexually active adolescents, particularly young women, are at high risk in many countries of contracting HIV and other STIs. Early unintended pregnancy can also have a detrimental impact on young people's lives. The school environment plays an important role in the development of children and young people, and curriculum-based sexuality education programmes have become popular in many regions of the world. 
While there is some evidence that these programmes improve knowledge and reduce self-reported risk taking, this review evaluated whether they have any impact on the number of young people that contracted STIs or on the number of adolescent pregnancies. What the research says Sexual and reproductive health education programmes As they are currently configured, educational programmes alone probably have no effect on the number of young people infected with HIV during adolescence (low certainty evidence). They also probably have no effect on the number of young people infected with other STIs (herpes simplex virus: moderate certainty evidence; syphilis: low certainty evidence), or the number of adolescent pregnancies (moderate certainty evidence). Material or monetary incentive-based programmes to promote school attendance Giving monthly cash, or free school uniforms, to encourage students to stay in school may have no effect on the number of young people infected with HIV during adolescence (low certainty evidence). We do not currently know whether monthly cash or free school uniforms will reduce the number of young people infected with other STIs (very low certainty evidence). However, incentives to promote school attendance may reduce the number of adolescent pregnancies (low certainty evidence). Combined educational and incentive-based programmes Based on a single included trial, giving an incentive such as a free school uniform combined with a programme of sexual and reproductive health education may reduce STIs (herpes simplex virus; low certainty evidence) in young women, but no effect was detected for HIV or pregnancy (low certainty evidence). Authors' conclusions There is currently little evidence that educational programmes alone are effective at reducing STIs or adolescent pregnancy. 
Incentive-based interventions that focus on keeping young people, especially girls, in secondary school may reduce adolescent pregnancy but further high quality trials are needed to confirm this. PMID:27824221

  20. CD uniformity control for thick resist process

    NASA Astrophysics Data System (ADS)

    Huang, Chi-hao; Liu, Yu-Lin; Wang, Weihung; Yang, Mars; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2017-03-01

    In order to meet the increasing storage capacity demand and reduce the bit cost of NAND flash memories, 3D stacked flash cell arrays have been proposed. In constructing 3D NAND flash memories, a higher bit count per area is achieved by increasing the number of stacked layers. Thus the so-called "staircase" patterning to form electrical connections between memory cells and word lines has become one of the most critical processes in 3D memory manufacture. Providing controllable critical dimension (CD) with good uniformity in thick photoresist has also been of particular concern for staircase patterning. CD uniformity control has been widely investigated for relatively thin resists at resolution-limit dimensions, but far less for thick resists coupled with wider dimensions. This study explores CD uniformity control associated with thick photoresist processing. Several critical parameters, including exposure focus, exposure dose, baking condition, pattern size and development recipe, were found to strongly correlate with the thick photoresist profile and accordingly affect the CD uniformity control. To minimize the within-wafer CD variation, a slightly tapered resist profile is proposed, obtained by tailoring the exposure focus and dose together with an optimal development recipe. Great improvements in DCD (ADI CD) and ECD (AEI CD) uniformity as well as line edge roughness were achieved through the optimization of the photoresist profile.

  1. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    PubMed

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons at their random homogeneous distribution typical for the experiments in vitro do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
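    The connectivity rule assumed in the model, connection probability decaying exponentially with interneuronal distance, can be sketched as follows; the neuron count and decay constant are illustrative values, not parameters from the paper:

```python
import math
import random

def distance_dependent_network(n=200, decay=10.0, seed=2):
    """Place n neurons uniformly at random in the unit square and form a
    directed connection for each ordered pair with probability
    exp(-decay * distance), mimicking the exponential decay of
    connection probability with distance."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(n) if i != j
             and rng.random() < math.exp(-decay * math.dist(pos[i], pos[j]))]
    return pos, edges

pos, edges = distance_dependent_network()
# Connections are strongly biased toward spatially nearby neuron pairs;
# setting decay = 0 recovers the distance-independent control case.
```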

  2. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves

    NASA Astrophysics Data System (ADS)

    Paraskevov, A. V.; Zendrikov, D. K.

    2017-04-01

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons at their random homogeneous distribution typical for the experiments in vitro do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  3. Diagnostic electrocardiography in epidemiological studies of Chagas' disease: multicenter evaluation of a standardized method.

    PubMed

    Lázzari, J O; Pereira, M; Antunes, C M; Guimarães, A; Moncayo, A; Chávez Domínguez, R; Hernández Pieretti, O; Macedo, V; Rassi, A; Maguire, J; Romero, A

    1998-11-01

    An electrocardiographic recording method with an associated reading guide, designed for epidemiological studies on Chagas' disease, was tested to assess its diagnostic reproducibility. Six cardiologists from five countries each read 100 electrocardiographic (ECG) tracings, including 30 from chronic chagasic patients, then reread them after an interval of 6 months. The readings were blind, with the tracings numbered randomly for the first reading and renumbered randomly for the second reading. The physicians, all experienced in interpreting ECGs from chagasic patients, followed printed instructions for reading the tracings. Reproducibility of the readings was evaluated using the kappa (κ) index for concordance. The results showed a high degree of interobserver concordance with respect to the diagnosis of normal vs. abnormal tracings (κ = 0.66; SE 0.02). While the interpretations of some categories of ECG abnormalities were highly reproducible, others, especially those having a low prevalence, showed lower levels of concordance. Intraobserver concordance was uniformly higher than interobserver concordance. The findings of this study justify the use by specialists of the recording and reading method proposed for epidemiological studies on Chagas' disease, but warrant caution in the interpretation of some categories of electrocardiographic alterations.
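    The kappa index used here is Cohen's kappa: observed agreement between two raters corrected for the agreement expected by chance. A minimal computation on hypothetical "normal"/"abnormal" readings (not data from the study):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters classifying the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Two raters agreeing on 9 of 10 hypothetical tracings ("N" = normal,
# "A" = abnormal): raw agreement is 0.9, but kappa discounts chance.
a = ["N", "N", "N", "A", "A", "N", "A", "N", "N", "A"]
b = ["N", "N", "N", "A", "A", "N", "A", "N", "N", "N"]
kappa = cohens_kappa(a, b)
```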

  4. Particle sensor array

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Blaes, Brent R. (Inventor); Lieneweg, Udo (Inventor)

    1994-01-01

    A particle sensor array which in a preferred embodiment comprises a static random access memory having a plurality of ion-sensitive memory cells, each such cell comprising at least one pull-down field effect transistor having a sensitive drain surface area (such as by bloating) and at least one pull-up field effect transistor having a source connected to an offset voltage. The sensitive drain surface area and the offset voltage are selected for memory cell upset by incident ions such as alpha-particles. The static random access memory of the present invention provides a means for selectively biasing the memory cells into the same state in which each of the sensitive drain surface areas is reverse biased and then selectively reducing the reversed bias on these sensitive drain surface areas for increasing the upset sensitivity of the cells to ions. The resulting selectively sensitive memory cells can be used in a number of applications. By way of example, the present invention can be used for measuring the linear energy transfer of ion particles, as well as a device for assessing the resistance of CMOS latches to Cosmic Ray induced single event upsets. The sensor of the present invention can also be used to determine the uniformity of an ion beam.

  5. The relations between network-operation and topological-property in a scale-free and small-world network with community structure

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Yao, Bing

    2017-10-01

    It is always an open, demanding and difficult task to generate workable models that simulate dynamical functions and reveal the inner principles of complex systems and networks. In this article, because many real-life and artificial networks are built from series of simple and small groups (components), we discuss some interesting and helpful network operations for generating more realistic network models. In view of community structure (modular topology), we present a class of sparse network models N(t, m). We show that N(t, 4) has not only the scale-free feature, meaning that the probability that a randomly selected vertex has degree k decays as a power law, P(k) ∼ k^(-γ), where γ is the degree exponent, but also the small-world property, which indicates that the typical distance between two uniformly randomly chosen vertices grows proportionally to the logarithm of the order of N(t, 4), i.e., a relatively shorter diameter and lower average path length, while simultaneously displaying a higher clustering coefficient. Next, as a new topological parameter correlated with the reliability, synchronization capability and diffusion properties of networks, the number of spanning trees of a network is studied in more detail, and an exact analytical solution for the number of spanning trees of N(t, 4) is obtained. Based on the network operations, linking some of the hub vertices with each other will be helpful for structuring various network models and investigating the rules related with real-life networks.
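    For any concrete graph, the number of spanning trees mentioned above can be computed exactly via Kirchhoff's matrix-tree theorem; the sketch below (a generic method, not the paper's analytical solution for N(t, 4)) evaluates a cofactor of the graph Laplacian with exact rational arithmetic:

```python
from fractions import Fraction

def count_spanning_trees(adj):
    """Matrix-tree theorem: the number of spanning trees of a graph with
    adjacency matrix adj equals any cofactor of its Laplacian L = D - A.
    We delete the last row and column and compute the determinant by
    Gaussian elimination over exact Fractions."""
    n = len(adj)
    m = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
          for j in range(n - 1)] for i in range(n - 1)]
    det = Fraction(1)
    for col in range(n - 1):
        piv = next((r for r in range(col, n - 1) if m[r][col] != 0), None)
        if piv is None:
            return 0  # singular minor: the graph is disconnected
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n - 1):
            factor = m[r][col] / m[col][col]
            for c in range(col, n - 1):
                m[r][c] -= factor * m[col][c]
    return int(det)

# The complete graph K4 has 4^(4-2) = 16 spanning trees (Cayley's formula).
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
trees = count_spanning_trees(k4)
```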

  6. Anticipated improvement in laser beam uniformity using distributed phase plates with quasirandom patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epstein, R.; Skupsky, S.

    1990-08-01

    The uniformity of focused laser beams that have been modified with randomly phased distributed phase plates [C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Kato and Mima, Appl. Phys. B 29, 186 (1982); Kato et al., Phys. Rev. Lett. 53, 1057 (1984); LLE Rev. 33, 1 (1987)] can be improved further by constructing patterns of phase elements which minimize phase correlations over small separations. Long-wavelength nonuniformities in the intensity distribution, which are relatively difficult to overcome in the target by thermal smoothing and in the laser by, e.g., spectral dispersion [Skupsky et al., J. Appl. Phys. 66, 3456 (1989); LLE Rev. 36, 158 (1989); 37, 29 (1989); 37, 40 (1989)], result largely from short-range phase correlations between phase plate elements. To reduce the long-wavelength structure, we have constructed phase patterns with smaller short-range correlations than would occur randomly. Calculations show that long-wavelength nonuniformities in single-beam intensity patterns can be reduced with these masks when the intrinsic phase error of the beam falls below certain limits. We show the effect of this improvement on uniformity for spherical irradiation by a multibeam system.

  7. 48 CFR 204.7003 - Basic PII number.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Basic PII number. 204.7003... OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Uniform Procurement Instrument Identification Numbers 204.7003 Basic PII number. (a) Elements of a number. The number consists of 13 alpha-numeric characters...

  8. 48 CFR 204.7003 - Basic PII number.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Basic PII number. 204.7003... OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Uniform Procurement Instrument Identification Numbers 204.7003 Basic PII number. (a) Elements of a number. The number consists of 13 alpha-numeric characters...

  9. Many-junction photovoltaic device performance under non-uniform high-concentration illumination

    NASA Astrophysics Data System (ADS)

    Valdivia, Christopher E.; Wilkins, Matthew M.; Chahal, Sanmeet S.; Proulx, Francine; Provost, Philippe-Olivier; Masson, Denis P.; Fafard, Simon; Hinzer, Karin

    2017-09-01

    A parameterized 3D distributed circuit model was developed to calculate the performance of III-V solar cells and photonic power converters (PPC) with a variable number of epitaxial vertically-stacked pn junctions. PPC devices are designed with many pn junctions to realize higher voltages and to operate under non-uniform illumination profiles from a laser or LED. Performance impacts of non-uniform illumination were greatly reduced with increasing number of junctions, with simulations comparing PPC devices with 3 to 20 junctions. Experimental results using Azastra Opto's 12- and 20-junction PPC illuminated by an 845 nm diode laser show high performance even with a small gap between the PPC and optical fiber output, until the local tunnel junction limit is reached.

  10. In Darwinian evolution, feedback from natural selection leads to biased mutations.

    PubMed

    Caporale, Lynn Helena; Doyle, John

    2013-12-01

    Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context).  This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.

  11. Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.

    2015-08-01

    In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.

  12. Kansas Adult Observational Safety Belt Usage Rates

    DOT National Transportation Integrated Search

    2011-07-01

    Methodology of Adult Survey - based on the federal guidelines in the Uniform Criteria manual. The Kansas survey is performed at 548 sites on 6 different road types in 20 randomly selected counties which encompass 85% of the population of Kansas. The ...

  13. A randomization approach to handling data scaling in nuclear medicine.

    PubMed

    Bai, Chuanyong; Conwell, Richard; Kindem, Joel

    2010-06-01

    In medical imaging, data scaling is sometimes desired to handle the system complexity, such as uniformity calibration. Since the data are usually saved in short integer, conventional data scaling will first scale the data in floating point format and then truncate or round the floating point data to short integer data. For example, when using truncation, scaling of 9 by 1.1 results in 9 and scaling of 10 by 1.1 results in 11. When the count level is low, such scaling may change the local data distribution and affect the intended application of the data. In this work, the authors use an example gated cardiac SPECT study to illustrate the effect of conventional scaling by factors of 1.1 and 1.2. The authors then scaled the data with the same scaling factors using a randomization approach, in which a random number evenly distributed between 0 and 1 is generated to determine how the floating point data will be saved as short integer data. If the random number is between 0 and 0.9, then 9.9 will be saved as 10, otherwise 9. In other words, the floating point value 9.9 will be saved in short integer value as 10 with 90% probability or 9 with 10% probability. For statistical analysis of the performance, the authors applied the conventional approach with rounding and the randomization approach to 50 consecutive gated studies from a clinical site. For the example study, the image reconstructed from the original data showed an apparent perfusion defect at the apex of the myocardium. The defect size was noticeably changed by scaling with 1.1 and 1.2 using the conventional approaches with truncation and rounding. Using the randomization approach, in contrast, the images from the scaled data appeared identical to the original image. Line profile analysis of the scaled data showed that the randomization approach introduced the least change to the data as compared to the conventional approaches. 
For the 50 gated data sets, significantly more studies showed quantitative differences between the original images and the images from the data scaled by 1.2 using the rounding approach than the randomization approach [46/50 (92%) versus 3/50 (6%), p < 0.05]. Likewise, significantly more studies showed visually noticeable differences between the original images and the images from the data scaled by 1.2 using the rounding approach than randomization [29/50 (58%) versus 1/50 (2%), p < 0.05]. In conclusion, the proposed randomization approach minimizes the scaling-introduced local data change as compared to the conventional approaches. It is preferred for nuclear medicine data scaling.
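
    The randomized scaling described above amounts to stochastic rounding: save the scaled value rounded up with probability equal to its fractional part. The following is an illustrative sketch, not the authors' code; the function names are hypothetical.

```python
import random

def stochastic_round(x, rng=random):
    """Round a non-negative float to an integer with probability equal to its
    fractional part, so the expected rounded value equals x exactly
    (e.g. 9.9 -> 10 with probability 0.9, -> 9 with probability 0.1)."""
    lower = int(x)               # floor for non-negative x
    frac = x - lower
    return lower + (1 if rng.random() < frac else 0)

def scale_counts(counts, factor, rng=random):
    """Scale short-integer count data by `factor` using randomized rounding."""
    return [stochastic_round(c * factor, rng) for c in counts]

random.seed(0)
scaled = scale_counts([9] * 100000, 1.1)
print(sum(scaled) / len(scaled))   # close to 9.9; truncation would give 9.0
```

    Unlike truncation or conventional rounding, the randomized approach leaves the local mean of low-count data unchanged, which is consistent with the scaled SPECT images appearing identical to the originals.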

  14. Large-area imaging reveals biologically driven non-random spatial patterns of corals at a remote reef

    NASA Astrophysics Data System (ADS)

    Edwards, Clinton B.; Eynaud, Yoan; Williams, Gareth J.; Pedersen, Nicole E.; Zgliczynski, Brian J.; Gleason, Arthur C. R.; Smith, Jennifer E.; Sandin, Stuart A.

    2017-12-01

    For sessile organisms such as reef-building corals, differences in the degree of dispersion of individuals across a landscape may result from important differences in life-history strategies or may reflect patterns of habitat availability. Descriptions of spatial patterns can thus be useful not only for the identification of key biological and physical mechanisms structuring an ecosystem, but also by providing the data necessary to generate and test ecological theory. Here, we used an in situ imaging technique to create large-area photomosaics of 16 plots at Palmyra Atoll, central Pacific, each covering 100 m2 of benthic habitat. We mapped the location of 44,008 coral colonies and identified each to the lowest taxonomic level possible. Using metrics of spatial dispersion, we tested for departures from spatial randomness. We also used targeted model fitting to explore candidate processes leading to differences in spatial patterns among taxa. Most taxa were clustered and the degree of clustering varied by taxon. A small number of taxa did not significantly depart from randomness and none revealed evidence of spatial uniformity. Importantly, taxa that readily fragment or tolerate stress through partial mortality were more clustered. With little exception, clustering patterns were consistent with models of fragmentation and dispersal limitation. In some taxa, dispersion was linearly related to abundance, suggesting density dependence of spatial patterning. The spatial patterns of stony corals are non-random and reflect fundamental life-history characteristics of the taxa, suggesting that the reef landscape may, in many cases, have important elements of spatial predictability.
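
    One standard metric of spatial dispersion for point patterns like mapped coral colonies is the Clark-Evans nearest-neighbour index; the sketch below is a generic illustration under assumed plot sizes, not necessarily the statistic used in the study.

```python
import math
import random

def clark_evans(points, area):
    """Clark-Evans aggregation index: R < 1 clustered, R ~ 1 random, R > 1 uniform."""
    n = len(points)
    nearest = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if i != j)
        nearest.append(d)
    observed = sum(nearest) / n
    expected = 0.5 / math.sqrt(n / area)   # mean NN distance under complete spatial randomness
    return observed / expected

random.seed(1)
# Complete spatial randomness in a hypothetical 10 m x 10 m plot
csr = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(400)]
# Clustered pattern: offspring scattered tightly around a few parent points
parents = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]
clustered = [(px + random.gauss(0, 0.2), py + random.gauss(0, 0.2))
             for px, py in parents for _ in range(40)]
print(clark_evans(csr, 100.0))        # near 1
print(clark_evans(clustered, 100.0))  # well below 1
```

    Edge effects bias the uncorrected index slightly upward for random patterns; corrected estimators exist but are omitted here for brevity.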

  15. Effects of Magnetic field on Peristalsis transport of a Carreau Fluid in a tapered asymmetric channel

    NASA Astrophysics Data System (ADS)

    Prakash, J.; Balaji, N.; Siva, E. P.; Kothandapani, M.; Govindarajan, A.

    2018-04-01

    The paper is concerned with effects of a uniform applied magnetic field on a Carreau fluid flow in a tapered asymmetric channel with peristalsis. The channel non-uniform & asymmetry are formed by choosing the peristaltic wave train on the tapered walls to have different amplitude and phase (ϕ). The governing equations of the Carreau model in two - dimensional peristaltic flow phenomena are constructed under assumptions of long wave length and low Reynolds number approximations. The simplified non - linear governing equations are solved by regular perturbation method. The expressions for pressure rise, frictional force, velocity and stream function are determined and the effects of different parameters like non-dimensional amplitudes walls (a and b), non - uniform parameter (m), Hartmann number (M), phase difference (ϕ),power law index (n) and Weissenberg numbers (We) on the flow characteristics are discussed. It is viewed that the rheological parameter for large (We), the curves of the pressure rise are not linear but it behaves like a Newtonian fluid for very small Weissenberg number.

  16. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.

  17. Quasirandom geometric networks from low-discrepancy sequences

    NASA Astrophysics Data System (ADS)

    Estrada, Ernesto

    2017-08-01

    We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that, in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired uniformity properties. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs, the diffusion produces clusters of concentration that slow the process; such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
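
    The construction can be illustrated in two dimensions with the Halton sequence (van der Corput sequences in coprime bases 2 and 3); this is a minimal sketch under assumed parameters, not the authors' code.

```python
import math

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def quasirandom_geometric_graph(n, radius):
    """2D quasirandom geometric network: vertices at Halton points in the
    unit square, edges between pairs closer than `radius`."""
    pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]) < radius]
    return pts, edges

pts, edges = quasirandom_geometric_graph(200, 0.1)
print(len(edges), 2 * len(edges) / len(pts))   # edge count and mean degree
```

    The mean degree is close to the Poisson expectation n·π·r² (here about 6), minus boundary losses, but the vertex positions are far more evenly spread than uniform random points.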

  18. Generation of physical random numbers by using homodyne detection

    NASA Astrophysics Data System (ADS)

    Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro

    2016-10-01

    Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, which we used to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
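
    Toeplitz hashing is multiplication of the raw bit vector by a random binary Toeplitz matrix over GF(2); an m × n Toeplitz matrix is fully determined by n + m - 1 seed bits. The sketch below uses pseudorandom stand-in bits in place of detector output and is an illustration of the technique, not the experimental pipeline.

```python
import random

def toeplitz_extract(raw_bits, m, seed_bits):
    """Multiply a raw bit vector by a binary Toeplitz matrix over GF(2).

    T[i][j] = seed_bits[i - j + n - 1], so the matrix is defined by its
    first row and first column (n + m - 1 seed bits in total)."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i - j + n - 1] & raw_bits[j]
        out.append(acc)
    return out

rng = random.Random(42)
n, m = 256, 64                                   # compress 256 raw bits to 64
raw = [rng.getrandbits(1) for _ in range(n)]     # stand-in for measured bits
seed = [rng.getrandbits(1) for _ in range(n + m - 1)]
print(toeplitz_extract(raw, m, seed))
```

    Because the map is linear over GF(2), it forms a two-universal hash family when the seed is uniform, which is what justifies its use for randomness extraction.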

  19. Effects of fixture rotation on coating uniformity for high-performance optical filter fabrication

    NASA Astrophysics Data System (ADS)

    Rubin, Binyamin; George, Jason; Singhal, Riju

    2018-04-01

    Coating uniformity is critical in fabricating high-performance optical filters by various vacuum deposition methods. Simple and planetary rotation systems with shadow masks are used to achieve the required uniformity [J. B. Oliver and D. Talbot, Appl. Optics 45, 13, 3097 (2006); O. Lyngnes, K. Kraus, A. Ode and T. Erguder, in 'Method for Designing Coating Thickness Uniformity Shadow Masks for Deposition Systems with a Planetary Fixture', 2014 Technical Conference Proceedings, Optical Coatings, August 13, 2014, DOI: 10.14332/svc14.proc.1817]. In this work, we discuss the effect of rotation pattern and speed on thickness uniformity in an ion beam sputter deposition system. Numerical modeling is used to determine the statistical distribution of random thickness errors in coating layers. The relationship between thickness tolerance and production yield is simulated theoretically and demonstrated experimentally. Production yields for different optical filters produced in an ion beam deposition system with planetary rotation are presented. Single-wavelength and broadband optical monitoring systems were used for endpoint monitoring during filter deposition. Limitations of the thickness tolerances that can be achieved in systems with planetary rotation are shown. Paths for improving production yield in an ion beam deposition system are described.

  20. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.

  1. Boundaries, kinetic properties, and final domain structure of plane discrete uniform Poisson-Voronoi tessellations with von Neumann neighborhoods.

    PubMed

    Korobov, A

    2009-03-01

    Discrete random tessellations appear not infrequently in describing nucleation and growth transformations. Generally, several non-Euclidean metrics are possible in this case. Previously [A. Korobov, Phys. Rev. B 76, 085430 (2007)] continual analogs of such tessellations have been studied. Here one of the simplest discrete varieties of the Kolmogorov-Johnson-Mehl-Avrami model, namely, the model with von Neumann neighborhoods, has been examined per se, i.e., without continualization. The tessellation is uniform in the sense that domain boundaries consist of tiles. Similarities and distinctions between discrete and continual models are discussed.

  2. Development of Flame Resistant Combat Uniform Fabrics Made from Long Staple Wool and Aramid Blend Yarn

    DTIC Science & Technology

    2013-04-15

    Kentwool recombed the wool top (wool is first combed during the production of wool top); a second combing process is an optional step sometimes used in... By Parvez Mehta, Mitchell Driggers and Carole... Contract number W911QY-11.

  3. Automated Simultaneous Assembly for Multistage Testing

    ERIC Educational Resources Information Center

    Breithaupt, Krista; Ariel, Adelaide; Veldkamp, Bernard P.

    2005-01-01

    This article offers some solutions used in the assembly of the computerized Uniform Certified Public Accountancy (CPA) licensing examination as practical alternatives for operational programs producing large numbers of forms. The Uniform CPA examination was offered as an adaptive multistage test (MST) beginning in April of 2004. Examples of…

  4. Equality of Shapley value and fair proportion index in phylogenetic trees.

    PubMed

    Fuchs, Michael; Jin, Emma Yu

    2015-11-01

    The Shapley value and the fair proportion index of phylogenetic trees have been introduced recently for the purpose of making conservation decisions in genetics. Moreover, also very recently, Hartmann (J Math Biol 67:1163-1170, 2013) has presented data which shows that there is a strong correlation between a slightly modified version of the Shapley value (which we call the modified Shapley value) and the fair proportion index. He gave an explanation of this correlation by showing that the contribution of both indices to an edge of the tree becomes identical as the number of taxa tends to infinity. In this note, we show that the Shapley value and the fair proportion index are in fact the same. Moreover, we also consider the modified Shapley value and show that its covariance with the fair proportion index in random phylogenetic trees under the Yule-Harding model and uniform model is indeed close to one.
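
    The fair proportion index itself is simple to compute: each edge's length is split equally among the leaves descending from it, and a leaf's index is the sum of its shares along the path to the root. A minimal sketch on a hypothetical three-taxon tree ((x:1, y:1)A:1, z:2):

```python
def fair_proportion(parent, length, leaves):
    """Fair proportion index for each leaf of a rooted phylogenetic tree.

    parent: dict node -> parent (root omitted); length: dict node -> length
    of the edge above that node. Each edge's length is divided equally
    among the leaves descending from it."""
    # count the leaves below the edge above each node
    below = {v: 0 for v in parent}
    for leaf in leaves:
        v = leaf
        while v in parent:
            below[v] += 1
            v = parent[v]
    fp = {}
    for leaf in leaves:
        total, v = 0.0, leaf
        while v in parent:
            total += length[v] / below[v]
            v = parent[v]
        fp[leaf] = total
    return fp

parent = {"A": "root", "x": "A", "y": "A", "z": "root"}
length = {"A": 1.0, "x": 1.0, "y": 1.0, "z": 2.0}
fp = fair_proportion(parent, length, ["x", "y", "z"])
print(fp)  # {'x': 1.5, 'y': 1.5, 'z': 2.0}
```

    Note that the indices sum to the total tree length (5.0 here), since every edge's length is fully distributed among its descendant leaves.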

  5. Leveraging ecological theory to guide natural product discovery.

    PubMed

    Smanski, Michael J; Schlatter, Daniel C; Kinkel, Linda L

    2016-03-01

    Technological improvements have accelerated natural product (NP) discovery and engineering to the point that systematic genome mining for new molecules is on the horizon. NP biosynthetic potential is not equally distributed across organisms, environments, or microbial life histories, but instead is enriched in a number of prolific clades. Also, NPs are not equally abundant in nature; some are quite common and others markedly rare. Armed with this knowledge, random 'fishing expeditions' for new NPs are increasingly harder to justify. Understanding the ecological and evolutionary pressures that drive the non-uniform distribution of NP biosynthesis provides a rational framework for the targeted isolation of strains enriched in new NP potential. Additionally, ecological theory leads to testable hypotheses regarding the roles of NPs in shaping ecosystems. Here we review several recent strain prioritization practices and discuss the ecological and evolutionary underpinnings for each. Finally, we offer perspectives on leveraging microbial ecology and evolutionary biology for future NP discovery.

  6. Anomalies in the 1D Anderson model: Beyond the band-centre and band-edge cases

    NASA Astrophysics Data System (ADS)

    Tessieri, L.; Izrailev, F. M.

    2018-03-01

    We consider the one-dimensional Anderson model with weak disorder. Using the Hamiltonian map approach, we analyse the validity of the random-phase approximation for resonant values of the energy, E = 2 cos(πr) , with r a rational number. We expand the invariant measure of the phase variable in powers of the disorder strength and we show that, contrary to what happens at the centre and at the edges of the band, for all other resonant energies the leading term of the invariant measure is uniform. When higher-order terms are taken into account, a modulation of the invariant measure appears for all resonant values of the energy. This implies that, when the localisation length is computed within the second-order approximation in the disorder strength, the Thouless formula is valid everywhere except at the band centre and at the band edges.

  7. Benford’s Law: Textbook Exercises and Multiple-Choice Testbanks

    PubMed Central

    Slepkov, Aaron D.; Ironside, Kevin B.; DiBattista, David

    2015-01-01

    Benford’s Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford’s Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford’s Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford’s Law, the testbank is nonetheless secure against such a Benford’s attack for banal reasons. PMID:25689468
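
    The expected frequencies quoted above follow directly from Benford's logarithmic law, P(d) = log10(1 + 1/d). A quick check against a classic conforming dataset (powers of 2), independent of the textbook data in the paper:

```python
import math
from collections import Counter

# Benford's Law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
print({d: round(100 * p, 1) for d, p in benford.items()})  # 1: 30.1 ... 9: 4.6

def leading_digit(x):
    """First significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

# Powers of 2 are a classic Benford-conforming sequence
counts = Counter(leading_digit(2.0 ** k) for k in range(1, 1001))
print(counts[1] / 1000)  # close to the predicted 0.301
```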

  8. Architecture and applications of a high resolution gated SPAD image sensor

    PubMed Central

    Burri, Samuel; Maruyama, Yuki; Michalet, Xavier; Regazzoni, Francesco; Bruschini, Claudio; Charbon, Edoardo

    2014-01-01

    We present the architecture and three applications of the largest resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill-factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edges location is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and generation of true random numbers. PMID:25090572

  9. Benford's Law: textbook exercises and multiple-choice testbanks.

    PubMed

    Slepkov, Aaron D; Ironside, Kevin B; DiBattista, David

    2015-01-01

    Benford's Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford's Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford's Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford's Law, the testbank is nonetheless secure against such a Benford's attack for banal reasons.

  10. Weighted re-randomization tests for minimization with unbalanced allocation.

    PubMed

    Han, Baoguang; Yu, Menggang; McEntegart, Damian

    2013-01-01

    The re-randomization test has been considered a robust alternative to traditional population-model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among the various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and thus compromised in power. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including in the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.
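
    The general shape of a Monte Carlo re-randomization test can be sketched as follows. This simplified example uses simple 2:1 randomization and a difference in means, not the minimization procedure or the weighting scheme proposed in the paper; all names and parameters are hypothetical.

```python
import random

def rerandomization_pvalue(outcomes, assignment, allocate, n_rerand=2000, rng=None):
    """Monte Carlo re-randomization test for a difference in means.

    `allocate(rng)` regenerates a treatment assignment for the same
    entry order; the observed statistic is compared with its
    re-randomization distribution."""
    rng = rng or random.Random(0)

    def stat(assign):
        treated = [y for y, a in zip(outcomes, assign) if a == 1]
        control = [y for y, a in zip(outcomes, assign) if a == 0]
        return sum(treated) / len(treated) - sum(control) / len(control)

    observed = stat(assignment)
    hits = sum(abs(stat(allocate(rng))) >= abs(observed) for _ in range(n_rerand))
    return (hits + 1) / (n_rerand + 1)   # add-one to avoid a p-value of zero

# Hypothetical trial: 2:1 allocation via simple randomization, no true effect
rng = random.Random(7)
n = 90
assign = [1 if rng.random() < 2 / 3 else 0 for _ in range(n)]
y = [rng.gauss(0, 1) for _ in range(n)]
allocate = lambda r: [1 if r.random() < 2 / 3 else 0 for _ in range(n)]
print(rerandomization_pvalue(y, assign, allocate))
```

    The paper's point is that when `allocate` replays minimization with unequal allocation in a fixed entry order, the re-allocation probabilities are non-uniform, so the unweighted version of this test is biased; the proposed fix weights the re-randomizations accordingly.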

  11. The effect of a uniform magnetic field on the onset of steady Benard-Marangoni convection in a layer of conducting fluid

    NASA Astrophysics Data System (ADS)

    Wilson, S. K.

    1993-05-01

    Analytical and numerical techniques are used to analyze the effect of a uniform vertical magnetic field on the onset of steady Benard-Marangoni convection in a horizontal layer of quiescent, electrically conducting fluid subject to a uniform vertical temperature gradient. Marangoni numbers for the onset of steady convection are found to be critically dependent on the nondimensional Crispation and Bond numbers. Two different asymptotic limits of strong surface tension and strong magnetic field are analyzed. Data obtained indicate that the presence of the magnetic field always has a stabilizing effect on the layer. Assuming that the Marangoni number is a critical parameter, it is shown that, if the free surface is nondeformable, then any particular disturbance can be stabilized with a sufficiently strong magnetic field. If the free surface is deformable and gravity waves are excluded, then the layer is always unstable to infinitely long wavelength disturbances with or without a magnetic field.

  12. Regularity of random attractors for fractional stochastic reaction-diffusion equations on Rn

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in Hs (Rn) with s ∈ (0 , 1). We prove the existence and uniqueness of the tempered random attractor that is compact in Hs (Rn) and attracts all tempered random subsets of L2 (Rn) with respect to the norm of Hs (Rn). The main difficulty is to show the pullback asymptotic compactness of solutions in Hs (Rn) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  13. Explicit equilibria in a kinetic model of gambling

    NASA Astrophysics Data System (ADS)

    Bassetti, F.; Toscani, G.

    2010-06-01

    We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is put up for gambling and randomly shared between the agents. For this equation the analytical form of the steady state is found for various realizations of the random fraction of the sum that is shared among the agents. Among others, the exponential distribution appears as the steady state for a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction that is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
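    The steady state described above is easy to check by direct Monte Carlo simulation of the interaction rule. The sketch below is illustrative only (the agent count, step count, and unit initial wealth are arbitrary choices, not taken from the paper): two randomly chosen agents pool their wealth and split it by a uniform random fraction, and the empirical distribution relaxes toward an exponential steady state with the mean preserved.

```python
import random

def gamble(wealth, steps, rng):
    """Pure gambling process: pick two agents at random, pool their wealth,
    and split the pool by a uniform random fraction (conservative in each
    interaction, so total wealth never changes)."""
    n = len(wealth)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        pool = wealth[i] + wealth[j]      # the entire sum is up for gambling
        r = rng.random()                  # uniformly distributed random fraction
        wealth[i], wealth[j] = r * pool, (1.0 - r) * pool
    return wealth

rng = random.Random(0)
w = gamble([1.0] * 2000, 200_000, rng)
mean = sum(w) / len(w)                    # conservation keeps the mean at 1
var = sum((x - mean) ** 2 for x in w) / len(w)
# For an exponential steady state with mean 1, the variance is also close to 1.
```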

  14. Development of a methodology to evaluate material accountability in pyroprocess

    NASA Astrophysics Data System (ADS)

    Woo, Seungmin

    This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and of burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), a Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are derived to explain the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. The combination of cross sections and time also generates axial non-uniformity, since the exponential term in the analytical solution consists of the neutron flux, cross section, and time. The axial concentration distribution for a nuclide with a small cross section is steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term of the analytical solution. Similarly, the non-uniformity becomes flatter as burnup increases, because the time term in the exponential increases. Based on the developed numerical recipes, and by combining the axial distributions with predetermined representative radial distributions matched by axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are then modified to represent materials processed in the head-end steps of the pyroprocess, that is, chopping, voloxidation, and granulation. 
The expectation and standard deviation of the Pu-to-244Cm ratio under single-granule sampling are calculated by the central limit theorem and the Geary-Hinkley transformation. Then, uncertainty propagation through the key pyroprocess is conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipt minus the shipment of a process, in the system. The random variable LOPu is defined for evaluating the non-detection probability at each Key Measurement Point (KMP) as the original Pu mass minus the Pu mass after a missing scenario. The number of assemblies corresponding to an LOPu of 8 kg is considered in this calculation. The probability of detecting the 8 kg LOPu is evaluated with respect to the granule and powder sizes using event tree analysis and hypothesis testing. There are cases in which the probability of detecting the 8 kg LOPu is less than 95%. In order to enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, the probability of detection increases significantly when larger granule samples are used to evaluate the Pu-to-244Cm ratio before the key pyroprocess. Based on these observations, although Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm ratio method is applied, this can be overcome by decreasing the uncertainty of the measured ratio through larger sample sizes and modified MBAs and KMPs. (Abstract shortened by ProQuest.)

  15. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that, with suitable cooperative application of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent statistical randomness, long periodicity of computer realizations, and fast random number generation. This pseudo-random number generator can also serve as an ideal synchronous and self-synchronizing stream cipher system for secure communications.
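    A minimal sketch of the idea, assuming a ring of logistic maps with one-way coupling; the choice of map, coupling strength, lattice size, and bit-extraction rule below are illustrative placeholders, not the parameters of the cited generator:

```python
def cml_prng_bytes(n_bytes, size=8, eps=0.95, seed=0.1234567):
    """One-way coupled map lattice as a toy PRNG: each site is a fully
    chaotic logistic map driven by its left neighbour on a ring."""
    f = lambda x: 4.0 * x * (1.0 - x)                       # logistic map
    step = lambda x: [(1 - eps) * f(x[i]) + eps * f(x[i - 1])
                      for i in range(len(x))]               # one-way coupling
    x = [(seed + 0.01 * i) % 1.0 for i in range(size)]
    for _ in range(100):                                    # discard transient
        x = step(x)
    out = bytearray()
    while len(out) < n_bytes:
        x = step(x)
        out.append(int(x[-1] * 2**32) & 0xFF)   # low-order bits of one site
    return bytes(out)
```

    A real implementation would of course have to pass statistical test suites; this sketch only shows the lattice iteration and a naive bit-extraction step.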

  16. Ranking and clustering of nodes in networks with smart teleportation

    NASA Astrophysics Data System (ADS)

    Lambiotte, R.; Rosvall, M.

    2012-05-01

    Random teleportation is a necessary evil for ranking and clustering directed networks based on random walks. Teleportation enables ergodic solutions, but the solutions must necessarily depend on the exact implementation and parametrization of the teleportation. For example, in the commonly used PageRank algorithm, the teleportation rate must trade off a heavily biased solution with a uniform solution. Here we show that teleportation to links rather than nodes enables a much smoother trade-off and effectively more robust results. We also show that, by not recording the teleportation steps of the random walker, we can further reduce the effect of teleportation with dramatic effects on clustering.
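    The trade-off can be illustrated with a small power-iteration sketch. Modelling "teleportation to links" simply as a jump distribution proportional to each node's in-link count is one possible reading of the idea, and the graph and damping factor below are arbitrary illustrative choices:

```python
import numpy as np

def pagerank(A, alpha=0.85, teleport=None, tol=1e-12):
    """Power iteration for PageRank on adjacency matrix A (A[i, j] = 1 for a
    link i -> j). `teleport` is the teleportation distribution; None gives
    the usual uniform jump over nodes. Dangling nodes also follow `teleport`."""
    n = A.shape[0]
    deg = A.sum(axis=1)                                   # out-degrees
    P = np.divide(A, deg[:, None], out=np.zeros_like(A),
                  where=deg[:, None] > 0)                 # row-stochastic part
    v = (np.full(n, 1.0 / n) if teleport is None
         else np.asarray(teleport, float) / np.sum(teleport))
    p = np.full(n, 1.0 / n)
    while True:
        p_new = alpha * (p @ P + p[deg == 0].sum() * v) + (1 - alpha) * v
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
uniform_pr = pagerank(A)                       # teleport uniformly to nodes
link_pr = pagerank(A, teleport=A.sum(axis=0))  # teleport proportional to in-links
```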

  17. Thin films with disordered nanohole patterns for solar radiation absorbers

    NASA Astrophysics Data System (ADS)

    Fang, Xing; Lou, Minhan; Bao, Hua; Zhao, C. Y.

    2015-06-01

    The radiation absorption in thin films with three disordered nanohole patterns, i.e., random position, non-uniform radius, and amorphous pattern, is numerically investigated by finite-difference time-domain (FDTD) simulations. Disorder can alter the absorption spectra and has an impact on the broadband absorption performance. Compared to random-position and non-uniform-radius nanoholes, the amorphous pattern induces much better integrated absorption. The power density spectra indicate that amorphous-pattern nanoholes reduce the symmetry and provide more resonance modes, which are desired for broadband absorption. The application conditions for amorphous-pattern nanoholes show that they are most appropriate for absorption enhancement in weakly absorbing materials. Amorphous silicon thin films with disordered nanohole patterns are applied as solar radiation absorbers. Four configurations of thin films with different nanohole patterns show that interference between layers in the absorbers changes the absorption performance. It is therefore necessary to optimize the whole radiation absorber, even though a single thin film with amorphous-pattern nanoholes has already reached optimal absorption.

  18. Pattern-projected schlieren imaging method using a diffractive optics element

    NASA Astrophysics Data System (ADS)

    Min, Gihyeon; Lee, Byung-Tak; Kim, Nac Woo; Lee, Munseob

    2018-04-01

    We propose a novel schlieren imaging method based on projecting a random dot pattern generated in a light source module that includes a diffractive optical element. All apparatus is located on the source side, which enables one-body sensor applications. The pattern is distorted by the deflections of schlieren objects, so that the displacement vectors of the random dots can be obtained using a particle image velocimetry (PIV) algorithm. The air turbulence induced by a burning candle, a boiling pot, a heater, and a gas torch was successfully imaged, and it was shown that imaging up to a size of 0.7 m  ×  0.57 m is possible. An algorithm to correct the non-uniform sensitivity according to the position of a schlieren object was analytically derived and applied to schlieren images of lenses. Compared to the original schlieren images, the correction improved the uniformity of sensitivity by a factor of 14.15 on average.

  19. Combining numerical simulations with time-domain random walk for pathogen risk assessment in groundwater

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Molin, S.

    2012-02-01

    We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising colloid filtration theory in a time-domain random walk (TDRW) framework. It is shown that in uniform flow, numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. It is also shown that spatial variability of the attachment rate may be significant; however, it appears to affect risk differently depending on whether the flow is uniform or radially converging. Although numerous issues remain open regarding pathogen transport in aquifers on the field scale, the methodology presented here may be useful for screening purposes, and may also serve as a basis for future studies that include greater complexity.
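    The combination can be caricatured in a few lines of Monte Carlo. In the sketch below, a pathogen's advective travel time is a sum of independent segment times (here hypothetical lognormal draws standing in for simulated advection segments), and colloid-filtration attachment removes it with probability 1 - exp(-k·t); all parameter values are illustrative placeholders, not the paper's calibrated models.

```python
import math
import random

def survival_fraction(n_particles, n_segments, k_attach, rng):
    """TDRW-style sketch: sum segment travel times along each trajectory,
    then apply an exponential (colloid-filtration) attachment survival
    probability exp(-k * t). Returns the surviving fraction."""
    survived = 0
    for _ in range(n_particles):
        t = sum(rng.lognormvariate(0.0, 1.0) for _ in range(n_segments))
        if rng.random() < math.exp(-k_attach * t):   # survives attachment
            survived += 1
    return survived / n_particles

rng = random.Random(1)
f = survival_fraction(20_000, 10, 0.05, rng)   # fraction reaching the receptor
```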

  20. 47 CFR 32.20 - Numbering convention.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Numbering convention. 32.20 Section 32.20 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES General Instructions § 32.20 Numbering convention. (a) The number “32...

  1. 47 CFR 32.20 - Numbering convention.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Numbering convention. 32.20 Section 32.20 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES General Instructions § 32.20 Numbering convention. (a) The number “32...

  2. 47 CFR 32.20 - Numbering convention.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Numbering convention. 32.20 Section 32.20 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES General Instructions § 32.20 Numbering convention. (a) The number “32...

  3. Universality of spectrum of passive scalar variance at very high Schmidt number in isotropic steady turbulence

    NASA Astrophysics Data System (ADS)

    Gotoh, Toshiyuki

    2012-11-01

    The spectrum of passive scalar variance at very high Schmidt number, up to 1000, in isotropic steady turbulence has been studied using very high resolution DNS. A Gaussian random force and scalar source, which are isotropic and white in time, are applied in a low-wavenumber band. Since the Schmidt number is very large, the system was integrated for 72 large-eddy turnover times so that it forgets the initial state. It is found that the scalar spectrum attains the asymptotic k^-1 spectrum in the viscous-convective range, and the constant CB is found to be 5.7, which is larger than the value of 4.9 obtained by DNS under a uniform mean scalar gradient. Possible reasons for the difference include Reynolds number effects, anisotropy, differences in the scalar injection, and the duration of the time average; the universality of the constant is discussed. The constant CB is also compared with the prediction of the Lagrangian statistical theory for the passive scalar. The scalar spectrum in the far diffusive range is found to be exponential, which is consistent with Kraichnan's spectrum. However, the Kraichnan spectrum was derived under the assumption that the velocity field is white in time; a theoretical explanation of the agreement therefore remains to be explored. Grant-in-Aid for Scientific Research No. 21360082, Ministry of Education, Culture, Sports, Science and Technology of Japan.

  4. Uniformity of plants regenerated from orange (Citrus sinensis Osb.) protoplasts.

    PubMed

    Kobayashi, S

    1987-05-01

    Using 25 plants (protoclones) regenerated from orange (Citrus sinensis Osb.) protoplasts, several characters, including leaf and flower morphology, leaf oil, isozyme patterns and chromosome number, were examined. No significant variations in each character were recorded among the protoclones. Uniformity observed among protoclones was identical to that of nucellar seedlings.

  5. 76 FR 13699 - Reports, Forms and Recordkeeping Requirements Information Collection Activity Under OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-14

    ...: Uniform Financial Reporting Requirements. OMB Control Number: 2133-0005. Type of Request: Extension of...(s): MA-172. Abstract: The Uniform Financial Reporting Requirements are used as a basis for preparing and filing semi-annual and annual financial statements with the Maritime Administration. Regulations...

  6. Percolation Laws of a Fractal Fracture-Pore Double Medium

    NASA Astrophysics Data System (ADS)

    Zhao, Yangsheng; Feng, Zengchao; Lv, Zhaoxing; Zhao, Dong; Liang, Weiguo

    2016-12-01

    The fracture-pore double porosity medium is one of the most common media in nature; rock mass in strata is an example. A fracture has a more significant effect on fluid flow than a pore in a fracture-pore double porosity medium. Hence, the fracture effect on percolation should be considered when studying percolation phenomena in porous media. In this paper, based on the fractal distribution law of three-dimensional (3D) fracture surfaces and two-dimensional (2D) fracture traces in rock mass, the locations of fracture surfaces or traces are determined using a random function with uniform distribution. Pores are superimposed to build a fractal fracture-pore double medium. Numerical experiments were performed to show percolation phenomena in the fracture-pore double medium. The percolation threshold can be determined from three independent variables (porosity n, fracture fractal dimension D, and initial value of the fracture number N0). Once any two of them are fixed, the percolation probability exhibits a critical point as the remaining parameter varies. When the initial value of the fracture number is greater than zero, the percolation threshold in the fracture-pore medium is much smaller than that in a pore medium. When the fracture number equals zero, the fracture-pore medium degenerates to a pore medium, and both percolation thresholds are the same.

  7. Uniform color space is not homogeneous

    NASA Astrophysics Data System (ADS)

    Kuehni, Rolf G.

    2002-06-01

    Historical data on chroma scaling and hue scaling are compared, and evidence is shown that we do not have a reliable basis in either case. Several data sets indicate, explicitly or implicitly, that the number of constant-sized hue differences between unique hues, as well as within the quadrants of the a*, b* diagram, differs, making what is commonly regarded as uniform color space inhomogeneous. This problem is also shown to affect the OSA-UCS space. A Euclidean uniform psychological or psychophysical color space appears to be impossible.

  8. Synthesis of stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Thornton, W. A.

    1974-01-01

    Computer programs for the synthesis of shells of various configurations were developed. The conditions considered are: (1) uniform shells (mainly cones) using a membrane buckling analysis, (2) completely uniform shells (cones, spheres, toroidal segments) using a linear bending prebuckling analysis, and (3) a revision of the second design process to reduce the number of design variables to about 30 by considering piecewise uniform designs. A perturbation formula was derived that allows exact derivatives of the general buckling load to be computed with little additional computer time.

  9. Stress and Coping with War: Support Providers and Casualties of Operations Desert Shield/Storm

    DTIC Science & Technology

    1992-07-01

    Department of Psychiatry, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, Maryland 20814-4799.

  10. A simple algorithm to improve the performance of the WENO scheme on non-uniform grids

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong

    2018-02-01

    This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. The technique relies on reformulating the fifth-order WENO-JS scheme (the WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1996), designed for uniform grids, in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into account by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. At the same time, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
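    For reference, the uniform-grid fifth-order WENO-JS reconstruction that is being reformulated can be sketched as follows, with the standard Jiang-Shu smoothness indicators and linear weights (ε is the usual small regularization constant; the non-uniform-grid interpolation step of the paper is not reproduced here):

```python
def weno5_interface(f):
    """Fifth-order WENO-JS reconstruction of the value at the right
    interface x_{i+1/2} from five cell averages (f_{i-2}, ..., f_{i+2})
    on a uniform grid."""
    fm2, fm1, f0, fp1, fp2 = f
    # candidate third-order reconstructions on the three sub-stencils
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    p1 = (-fm1 + 5*f0 + 2*fp1) / 6
    p2 = (2*f0 + 5*fp1 - fp2) / 6
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2
    eps, gamma = 1e-6, (0.1, 0.6, 0.3)          # linear (optimal) weights
    a = [g / (eps + b)**2 for g, b in zip(gamma, (b0, b1, b2))]
    w = [ai / sum(a) for ai in a]               # nonlinear WENO weights
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# smooth (linear) data: every sub-stencil is exact, so the reconstruction is too
u_half = weno5_interface([0.0, 1.0, 2.0, 3.0, 4.0])  # interface value at x = 2.5
```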

  11. Effect of Uniform Design on the Speed of Combat Tourniquet Application: A Simulation Study.

    PubMed

    Higgs, Andrew R; Maughon, Michael J; Ruland, Robert T; Reade, Michael C

    2016-08-01

    Tourniquets are issued to deployed members of both the United States (U.S.) military and the Australian Defence Force (ADF). The ease of removing the tourniquet from the pocket of the combat uniform may influence its time to application. The ADF uniform uses buttons to secure the pocket, whereas the U.S. uniform uses a hook-and-loop fastener system. National differences in training may also influence the time to, and effectiveness of, tourniquet application. The aim was to compare the time taken to retrieve and apply a tourniquet from the pocket of the Australian and the U.S. combat uniform, and to compare the effectiveness of tourniquet application. Twenty participants from both nations were randomly selected. Participants were timed on their ability to remove a tourniquet from their pockets and then apply it effectively. The U.S. personnel removed their tourniquets in a shorter time (median 2.5 seconds) than the Australians (median 5.72 seconds, p < 0.0001). ADF members applied the tourniquet more rapidly once it was removed from the pocket (mean 41.36 seconds vs. 58.87 seconds, p < 0.037) and tended to apply it more effectively (p = 0.1). The closure system of pockets on the combat uniform might influence the time taken to apply a tourniquet. Regular training might also reduce the time taken to apply a tourniquet effectively. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  12. Statistical metrology—measurement and modeling of variation for advanced process development and design rule generation

    NASA Astrophysics Data System (ADS)

    Boning, Duane S.; Chung, James E.

    1998-11-01

    Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments including design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal.

  13. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. 
Solution method: Use of a physical quantum random number generator and an on-line service providing access to a source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows the presented package to be used without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the source used. This increases the speed of random number generation, especially in the case of an on-line service, where it reduces the time spent establishing connections. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudo-random numbers provided in Mathematica. Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. 
The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the source of randomness used and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.

  14. 32 CFR 21.560 - Must DoD Components assign numbers uniformly to awards?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... nonprocurement instrument. (c) The 9th position must be a number: (1) “1” for grants. (2) “2” for cooperative... assigning these numbers and may create multiple series of letters and numbers to meet internal needs for...

  15. Growth of alveoli during postnatal development in humans based on stereological estimation.

    PubMed

    Herring, Matt J; Putney, Lei F; Wyatt, Gregory; Finkbeiner, Walter E; Hyde, Dallas M

    2014-08-15

    Alveolarization in humans and nonhuman primates begins during prenatal development. Advances in stereological counting techniques allow accurate assessment of alveolar number; however, these techniques have not been applied to the developing human lung. Based on the recent American Thoracic Society guidelines for stereology, lungs from human autopsies, ages 2 mo to 15 yr, were fractionated and sampled in an isotropic uniform random manner to count the number of alveoli. The number of alveoli was compared with age, weight, and height, and growth was compared between right and left lungs. The number of alveoli in the human lung increased exponentially during the first 2 yr of life but continued to increase, albeit at a reduced rate, through adolescence. Alveolar numbers also correlated with the indirect radial alveolar count technique. Growth curves for human alveolarization were compared using historical data for nonhuman primates and rats. The alveolar growth rate in nonhuman primates was nearly identical to the human growth curve. Rats were significantly different, showing more pronounced exponential growth during the first 20 days of life. This evidence indicates that the human lung may be more plastic than originally thought, with alveolarization occurring well into adolescence. The growth during the first 20 days of life in rats implies a growth curve that may relate more to prenatal growth in humans. The data suggest that nonhuman primates are a better laboratory model for studies of human postnatal lung growth than rats. Copyright © 2014 the American Physiological Society.
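    The fractionator principle behind such stereological counts can be sketched in a few lines: sample a known fraction of the material systematically after a uniformly random start, count what you see, and scale back up by the sampling fraction. The item list and sampling period below are illustrative placeholders, not values from the study.

```python
import random

def fractionator_estimate(items, period, rng):
    """Fractionator-style estimate: take every `period`-th item after a
    uniform random start, count the sample, and multiply the count by
    `period`. The random start makes the estimator unbiased."""
    start = rng.randrange(period)          # uniform random start in [0, period)
    sampled = items[start::period]         # systematic uniform random sample
    return period * len(sampled)

rng = random.Random(0)
alveoli = list(range(100_000))             # stand-ins for counted particles
est = fractionator_estimate(alveoli, 200, rng)
```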

  16. Ground States of Random Spanning Trees on a D-Wave 2X

    NASA Astrophysics Data System (ADS)

    Hall, J. S.; Hobl, L.; Novotny, M. A.; Michielsen, Kristel

    The performances of two D-Wave 2 machines (476 and 496 qubits) and of a 1097-qubit D-Wave 2X were investigated. Each chip has a Chimera interaction graph G. Problem input consists of values for the fields h_j and for the two-qubit interactions J_ij of an Ising spin-glass problem formulated on G. Output is returned in terms of a spin configuration {s_j}, with s_j = ±1. We generated random spanning trees (RSTs) uniformly distributed over all spanning trees of G. On the 476-qubit D-Wave 2, RSTs were generated on the full chip with J_ij = -1 and h_j = 0 and solved one thousand times. The distribution of solution energies and the average magnetization of each qubit were determined. On both the 476- and 1097-qubit machines, four identical spanning trees were generated on each quadrant of the chip. The statistical independence of these regions was investigated. In another study, on the D-Wave 2X, one hundred RSTs with random J_ij ∈ {-1, 1} and h_j = 0 were generated on the full chip. Each RST problem was solved one hundred times and the number of times the ground state energy was found was recorded. This procedure was repeated for square subgraphs, with dimensions ranging from 7×7 to 11×11. Supported in part by NSF Grants DGE-0947419 and DMR-1206233. D-Wave time provided by D-Wave Systems and by the USRA Quantum Artificial Intelligence Laboratory Research Opportunity.
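    Sampling a spanning tree uniformly over all spanning trees, as described above, can be done with the Aldous-Broder random walk (one standard method; the record does not state which algorithm was actually used). The small grid graph below is a stand-in for the Chimera graph:

```python
import random

def aldous_broder(nodes, neighbors, rng):
    """Aldous-Broder walk: the first-entry edges of a simple random walk
    form a spanning tree uniformly distributed over all spanning trees."""
    current = rng.choice(nodes)
    visited = {current}
    tree = []
    while len(visited) < len(nodes):
        nxt = rng.choice(neighbors[current])
        if nxt not in visited:             # first entry: edge joins the tree
            visited.add(nxt)
            tree.append((current, nxt))
        current = nxt
    return tree

# demo on a 4x4 grid graph (a small stand-in for the Chimera graph)
rng = random.Random(0)
nodes = [(i, j) for i in range(4) for j in range(4)]
node_set = set(nodes)
neighbors = {(i, j): [n for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
                      if n in node_set]
             for (i, j) in nodes}
tree = aldous_broder(nodes, neighbors, rng)
```

    Wilson's loop-erased-walk algorithm is the usual faster alternative; Aldous-Broder is shown here because it fits in a dozen lines.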

  17. Polyelectrolyte multilayer-assisted fabrication of non-periodic silicon nanocolumn substrates for cellular interface applications

    NASA Astrophysics Data System (ADS)

    Lee, Seyeong; Kim, Dongyoon; Kim, Seong-Min; Kim, Jeong-Ah; Kim, Taesoo; Kim, Dong-Yu; Yoon, Myung-Han

    2015-08-01

    Recent advances in nanostructure-based biotechnology have resulted in a growing demand for vertical nanostructure substrates with elaborate control over the nanoscale geometry and a high-throughput preparation. In this work, we report the fabrication of non-periodic vertical silicon nanocolumn substrates via polyelectrolyte multilayer-enabled randomized nanosphere lithography. Owing to layer-by-layer deposited polyelectrolyte adhesives, uniformly-separated polystyrene nanospheres were securely attached on large silicon substrates and utilized as masks for the subsequent metal-assisted silicon etching in solution. Consequently, non-periodic vertical silicon nanocolumn arrays were successfully fabricated on a wafer scale, while each nanocolumn geometric factor, such as the diameter, height, density, and spatial patterning, could be fully controlled in an independent manner. Finally, we demonstrate that our vertical silicon nanocolumn substrates support viable cell culture with minimal cell penetration and unhindered cell motility due to the blunt nanocolumn morphology. These results suggest that vertical silicon nanocolumn substrates may serve as a useful cellular interface platform for performing a statistically meaningful number of cellular experiments in the fields of biomolecular delivery, stem cell research, etc. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr02384j

  18. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components, we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices.
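The abstract does not detail the post-processing that turns raw physical bits into uniform output; as a hedged illustration of the general idea, the classic von Neumann extractor (a standard debiasing step for hardware random-number generators, not the optical scheme used in the paper) can be sketched as:

```python
import random

def von_neumann_extract(bits):
    """Classic von Neumann debiasing (standard RNG post-processing,
    shown for illustration; not the optical scheme of the paper):
    read bits in non-overlapping pairs, emit the first bit of each
    '01' or '10' pair, and discard '00' and '11' pairs. For i.i.d.
    input bits the output is exactly unbiased."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

rng = random.Random(7)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100000)]  # P(1) = 0.8
clean = von_neumann_extract(raw)
print(round(sum(raw) / len(raw), 2), round(sum(clean) / len(clean), 2))
```

The price of exact unbiasedness is throughput: at bias 0.8 only about a third of the pairs emit a bit, which is why high-rate QRNGs use more efficient extractors.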

  19. A probabilistic approach to randomness in geometric configuration of scalable origami structures

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Paulino, Glaucio; Gardoni, Paolo

    2015-03-01

    Origami, an ancient paper folding art, has inspired many solutions to modern engineering challenges. The demand for actual engineering applications motivates further investigation in this field. Although rooted in the historic art form, many applications of origami are based on newly designed origami patterns that match the specific requirements of an engineering problem. The application of origami to structural design problems ranges from the micro-structure of materials to large-scale deployable shells. For instance, some origami-inspired designs have unique properties such as a negative Poisson's ratio and flat foldability. However, origami structures are typically constrained by strict mathematical geometric relationships, which, in reality, can easily be violated due to, for example, random imperfections introduced during manufacturing, or non-uniform deformations under working conditions (e.g. due to non-uniform thermal effects). Therefore, the effects of uncertainties in origami-like structures need to be studied in further detail in order to provide a practical guide for scalable origami-inspired engineering designs. Through reliability and probabilistic analysis, we investigate the effect of randomness in origami structures on their mechanical properties. Dislocations of the vertices of an origami structure have different impacts on different mechanical properties, and different origami designs can have different sensitivities to imperfections. Thus, we aim to provide a preliminary understanding of the structural behavior of some common scalable origami structures subject to randomness in their geometric configurations, in order to help transition the technology toward practical applications of origami engineering.

  20. Bayesian bivariate meta-analysis of correlated effects: Impact of the prior distributions on the between-study correlation, borrowing of strength, and joint inferences

    PubMed Central

    Bujkiewicz, Sylwia; Riley, Richard D

    2016-01-01

    Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example are used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities. 
These recommendations are especially important with a small number of studies and missing data. PMID:26988929

  1. Identifying uniformly mutated segments within repeats.

    PubMed

    Sahinalp, S Cenk; Eichler, Evan; Goldberg, Paul; Berenbrink, Petra; Friedetzky, Tom; Ergun, Funda

    2004-12-01

    Given a long string of characters from a constant-size alphabet, we present an algorithm to determine whether its characters have been generated by a single i.i.d. random source. More specifically, consider all possible n-coin models for generating a binary string S, where each bit of S is generated via an independent toss of one of the n coins in the model. The choice of which coin to toss is decided by a random walk on the set of coins, where the probability of a coin change is much lower than the probability of using the same coin repeatedly. We present a procedure to evaluate the likelihood of an n-coin model for a given S, subject to a uniform prior distribution over the parameters of the model (which represent mutation rates and probabilities of copying events). In the absence of detailed prior knowledge of these parameters, the algorithm can be used to determine whether the a posteriori probability for n=1 is higher than for any other n>1. Our algorithm runs in time O(l^4 log l), where l is the length of S, through a dynamic programming approach that exploits the assumed convexity of the a posteriori probability as a function of n. Our test can be used in the analysis of long alignments between pairs of genomic sequences in a number of ways. For example, functional regions in genome sequences exhibit much lower mutation rates than non-functional regions. Because our test provides a means of determining variations in the mutation rate, it may be used to distinguish functional regions from non-functional ones. Another application is in determining whether two highly similar, and thus evolutionarily related, genome segments are the result of a single copy event or of a complex series of copy events. This is particularly an issue in evolutionary studies of genome regions rich in repeat segments (especially tandemly repeated segments).
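The marginal likelihood at the heart of such a test can be illustrated for the simplest case. The sketch below evaluates only the n=1 model, plus a fixed two-segment alternative, under a uniform prior on coin bias; the random walk over coins and the dynamic program over change points from the paper are omitted:

```python
from math import comb, log

def log_marginal_one_coin(s):
    """Log marginal likelihood of a binary string under one coin with a
    uniform prior on its bias p: integrating p^k (1-p)^(l-k) over [0,1]
    gives 1 / ((l+1) * C(l, k)). This is only the n=1 building block of
    the paper's evaluation."""
    l, k = len(s), sum(s)
    return -log((l + 1) * comb(l, k))

def log_marginal_two_coins(s):
    # Fixed change point at the midpoint, for illustration only.
    m = len(s) // 2
    return log_marginal_one_coin(s[:m]) + log_marginal_one_coin(s[m:])

homog = [1, 0] * 50               # a single ~0.5-bias coin throughout
switch = [1] * 50 + [0] * 50      # bias switches from 1.0 to 0.0 halfway
# The marginal likelihood automatically penalizes the extra coin:
print(log_marginal_one_coin(homog) > log_marginal_two_coins(homog))   # True
print(log_marginal_two_coins(switch) > log_marginal_one_coin(switch)) # True
```

Because the integral over the coin bias is available in closed form, no explicit parameter fitting is needed; the full algorithm additionally marginalizes over where and how often the coin changes.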

  2. Are randomly grown graphs really random?

    PubMed

    Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H

    2001-10-01

    We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
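A minimal simulation of this growth model (an illustrative sketch, not the authors' code) shows the contrast between the subcritical and supercritical regimes:

```python
import random

def grow_graph(t, delta, seed=0):
    """Simulate the minimal growing-network model: at each time step add
    a vertex, then with probability delta join two uniformly random
    vertices by an undirected edge. Returns the size of the largest
    connected component, tracked with union-find."""
    rng = random.Random(seed)
    parent, size = [], []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for step in range(t):
        parent.append(step)
        size.append(1)
        if rng.random() < delta:
            a, b = rng.randrange(step + 1), rng.randrange(step + 1)
            ra, rb = find(a), find(b)
            if ra != rb:
                if size[ra] < size[rb]:
                    ra, rb = rb, ra
                parent[rb] = ra
                size[ra] += size[rb]
    return max(size[find(i)] for i in range(t))

# Below the transition at delta = 1/8 the largest component stays small;
# above it, a giant component emerges.
small = grow_graph(20000, 0.05, seed=1)
large = grow_graph(20000, 0.50, seed=1)
print(small, large)
```

Union-find keeps the simulation linear in t (up to near-constant factors), so the transition can be explored for fairly large graphs.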

  3. Apparatus for synthesis of a solar spectrum

    DOEpatents

    Sopori, Bhushan L.

    1993-01-01

    A xenon arc lamp and a tungsten filament lamp provide light beams that together contain all the wavelengths required to accurately simulate a solar spectrum. Suitable filter apparatus selectively direct visible and ultraviolet light from the xenon arc lamp into two legs of a trifurcated randomized fiber optic cable. Infrared light selectively filtered from the tungsten filament lamp is directed into the third leg of the fiber optic cable. The individual optic fibers from the three legs are brought together in a random fashion into a single output leg. The output beam emanating from the output leg of the trifurcated randomized fiber optic cable is extremely uniform and contains wavelengths from each of the individual filtered light beams. This uniform output beam passes through suitable collimation apparatus before striking the surface of the solar cell being tested. Adjustable aperture apparatus located between the lamps and the input legs of the trifurcated fiber optic cable can be selectively adjusted to limit the amount of light entering each leg, thereby providing a means of "fine tuning" or precisely adjusting the spectral content of the output beam. Finally, an adjustable aperture apparatus may also be placed in the output beam to adjust the intensity of the output beam without changing the spectral content and distribution of the output beam.

  4. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
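The random walk underlying CHRR can be sketched in a few lines. The toy below implements plain coordinate hit-and-run on a small bounded polytope, without the rounding preprocessing or the COBRA Toolbox interface:

```python
import random

def coordinate_hit_and_run(A, b, x0, n_samples, burn=1000, seed=0):
    """Plain coordinate hit-and-run on the polytope {x : A x <= b}.
    Illustrative sketch of the random walk underlying CHRR (the paper's
    rounding step is omitted); assumes x0 is strictly feasible and the
    polytope is bounded in every coordinate direction."""
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    samples = []
    for it in range(burn + n_samples):
        i = rng.randrange(d)  # pick a random coordinate direction e_i
        lo, hi = -float("inf"), float("inf")
        # chord limits: row . (x + t*e_i) <= b_row  =>  t * row[i] <= slack
        for row, b_row in zip(A, b):
            slack = b_row - sum(r * xj for r, xj in zip(row, x))
            if row[i] > 1e-12:
                hi = min(hi, slack / row[i])
            elif row[i] < -1e-12:
                lo = max(lo, slack / row[i])
        x[i] += rng.uniform(lo, hi)  # uniform point on the chord
        if it >= burn:
            samples.append(list(x))
    return samples

# Uniform sampling of the triangle {x >= 0, y >= 0, x + y <= 1}:
A = [[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]
b = [0.0, 0.0, 1.0]
pts = coordinate_hit_and_run(A, b, [0.2, 0.2], 5000)
mean_x = sum(p[0] for p in pts) / len(pts)  # uniform triangle: E[x] = 1/3
print(round(mean_x, 2))
```

On an elongated (anisotropic) polytope this walk mixes slowly, which is exactly what the rounding preprocessing in CHRR is designed to fix.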

  5. Asteroid orbital inversion using uniform phase-space sampling

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.

    2014-07-01

    We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was set out to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows the probabilistic assessments for, e.g., object classification and ephemeris computation as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows for the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. 
for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for a computation of a difference for two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space. This is somewhat contrary to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
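The phase-space mapping idea, i.e. a symmetric random-walk proposal whose target is flat inside the pre-defined χ² threshold, can be illustrated with a one-dimensional toy χ² function (an assumption-laden sketch, not the authors' orbital-element implementation):

```python
import random

def mcmc_map_region(chi2, x0, step, n, threshold, seed=0):
    """Sketch of 'MCMC mapping': a symmetric random-walk proposal with a
    flat (indicator) target, accepting a move whenever the proposed point
    keeps chi2 below the threshold. Upon convergence this walk samples
    the acceptable region uniformly. One-dimensional toy only."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n):
        y = x + rng.uniform(-step, step)  # symmetric proposal
        if chi2(y) < threshold:
            x = y  # flat target inside the region: always accept
        chain.append(x)
    return chain

# Acceptable region chi2(x) = x^2 < 1, i.e. x in (-1, 1):
xs = mcmc_map_region(lambda x: x * x, 0.0, 0.5, 20000, 1.0)
mean = sum(xs) / len(xs)  # uniform on (-1, 1) has mean 0
print(round(mean, 2))
```

Because the proposal is symmetric and the target is constant inside the region, the Metropolis acceptance ratio is 1 for any interior proposal, so the stationary distribution is uniform over the acceptable set regardless of the shape of χ² inside it.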

  6. Multidimensional Extension of Multiple Indicators Multiple Causes Models to Detect DIF

    ERIC Educational Resources Information Center

    Lee, Soo; Bulut, Okan; Suh, Youngsuk

    2017-01-01

    A number of studies have found multiple indicators multiple causes (MIMIC) models to be an effective tool in detecting uniform differential item functioning (DIF) for individual items and item bundles. A recently developed MIMIC-interaction model is capable of detecting both uniform and nonuniform DIF in the unidimensional item response theory…

  7. Effect of jet-to-mainstream momentum flux ratio on mixing process

    NASA Astrophysics Data System (ADS)

    Gupta, Alka; Ibrahim, Mohamed Saeed; Amano, R. S.

    2016-03-01

    Temperature uniformity after a mixing process plays a very important role in many applications. Non-uniform temperature at the entrance of the turbine in gas turbine systems has an adverse effect on the life of the blades. These temperature non-uniformities cause thermal stresses in the blades, leading to higher maintenance costs. This paper presents experimental and numerical results for the mixing process in coaxial ducts. The effect of increased jet-to-mainstream momentum flux ratio on the temperature uniformity of the exit flow was analyzed. It was found that better mixing of the primary (or hot) stream and the dilution (or cold) stream was achieved at a higher flux ratio. Almost 85% of the equilibrium mixture fraction was achieved at a flux ratio of 0.85, after which no significant improvement was achieved while the exergy destruction kept increasing. A new parameter, the 'Cooling Rate Number', was defined to identify potential sites for the presence of cold zones within the mixing section. The parametric study reveals that the cooling rate numbers were higher near the dilution holes, which may result in rapid cooling of the gases.

  8. Within-wafer CD variation induced by wafer shape

    NASA Astrophysics Data System (ADS)

    Huang, Chi-hao; Yang, Mars; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2016-03-01

    In order to meet the increasing storage-capacity demand and reduce the bit cost of NAND flash memories, 3D stacked vertical flash cell arrays have been proposed. In constructing 3D NAND flash memories, the bit number per unit area increases with the number of stacked layers. However, the increased number of stacked layers has made film stress control extremely important for maintaining good process quality. The residual film stress alters the wafer shape; accordingly, several process impacts have been readily observed across the wafer, such as film deposition non-uniformity, etch rate non-uniformity, wafer chucking error on the scanner, material coating/baking defects, overlay degradation, and critical dimension (CD) non-uniformity. Residual tensile and compressive stresses on wafers result in concave and convex wafer shapes, respectively. This study investigates within-wafer CD uniformity (CDU) associated with the wafer shape change induced by 3D NAND flash memory processes. Within-wafer CDU was correlated with several critical parameters, including different wafer bow heights of concave and convex wafer shapes, photoresists with different post-exposure baking (PEB) temperature sensitivities, and DoseMapper compensation. The results indicated that the within-wafer CDU trend remains flat for convex wafer shapes with bow heights up to +230 μm and for concave wafer shapes with bow heights ranging from 0 to -70 μm, while the within-wafer CDU trends up from -70 to -246 μm wafer bow heights. To minimize the within-wafer CD distribution induced by wafer warpage, carefully tailoring the film stack and thermal budget in the process flow to maintain the wafer shape in a CDU-friendly range is indispensable, and using photoresist materials with lower PEB temperature sensitivity is also suggested. In addition, DoseMapper compensation is an alternative that can greatly suppress within-wafer CD non-uniformity, but the photoresist profile variation induced by across-wafer PEB temperature non-uniformity attributed to wafer warpage is uncorrectable, and this profile variation is believed to affect across-wafer etch bias uniformity to some degree.

  9. Using machine learning to examine medication adherence thresholds and risk of hospitalization.

    PubMed

    Lo-Ciganic, Wei-Hsuan; Donohue, Julie M; Thorpe, Joshua M; Perera, Subashan; Thorpe, Carolyn T; Marcum, Zachary A; Gellad, Walid F

    2015-08-01

    Quality improvement efforts are frequently tied to patients achieving ≥80% medication adherence. However, there is little empirical evidence that this threshold optimally predicts important health outcomes. To apply machine learning to examine how adherence to oral hypoglycemic medications is associated with avoidance of hospitalizations, and to identify adherence thresholds for optimal discrimination of hospitalization risk. A retrospective cohort study of 33,130 non-dual-eligible Medicaid enrollees with type 2 diabetes. We randomly selected 90% of the cohort (training sample) to develop the prediction algorithm and used the remaining (testing sample) for validation. We applied random survival forests to identify predictors for hospitalization and fit survival trees to empirically derive adherence thresholds that best discriminate hospitalization risk, using the proportion of days covered (PDC). Time to first all-cause and diabetes-related hospitalization. The training and testing samples had similar characteristics (mean age, 48 y; 67% female; mean PDC=0.65). We identified 8 important predictors of all-cause hospitalizations (rank in order): prior hospitalizations/emergency department visit, number of prescriptions, diabetes complications, insulin use, PDC, number of prescribers, Elixhauser index, and eligibility category. The adherence thresholds most discriminating for risk of all-cause hospitalization varied from 46% to 94% according to patient health and medication complexity. PDC was not predictive of hospitalizations in the healthiest or most complex patient subgroups. Adherence thresholds most discriminating of hospitalization risk were not uniformly 80%. Machine-learning approaches may be valuable to identify appropriate patient-specific adherence thresholds for measuring quality of care and targeting nonadherent patients for intervention.

  10. 32 CFR Table 1 to Part 855 - Purpose of Use/Verification/Approval Authority/Fees

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... change of station, etc.) or for private, non revenue flights Social security number in block 1 on DD Form... of a uniformed service member Identification card (DD Form 1173) number or social security number... Form 1173) number or social security number, identification card expiration date, sponsor's retirement...

  11. 32 CFR Table 1 to Part 855 - Purpose of Use/Verification/Approval Authority/Fees

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... change of station, etc.) or for private, non revenue flights Social security number in block 1 on DD Form... of a uniformed service member Identification card (DD Form 1173) number or social security number... Form 1173) number or social security number, identification card expiration date, sponsor's retirement...

  12. Directional, seamless, and restriction enzyme-free construction of random-primed complementary DNA libraries using phosphorothioate-modified primers.

    PubMed

    Howland, Shanshan W; Poh, Chek-Meng; Rénia, Laurent

    2011-09-01

    Directional cloning of complementary DNA (cDNA) primed by oligo(dT) is commonly achieved by appending a restriction site to the primer, whereas the second strand is synthesized through the combined action of RNase H and Escherichia coli DNA polymerase I (PolI). Although random primers provide more uniform and complete coverage, directional cloning with the same strategy is highly inefficient. We report that phosphorothioate linkages protect the tail sequence appended to random primers from the 5'→3' exonuclease activity of PolI. We present a simple strategy for constructing a random-primed cDNA library using the efficient, size-independent, and seamless In-Fusion cloning method instead of restriction enzymes. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Impact of deposition-rate fluctuations on thin-film thickness and uniformity

    DOE PAGES

    Oliver, Joli B.

    2016-11-04

    Variations in deposition rate are superimposed on a thin-film deposition model with planetary rotation to determine the impact on film thickness. Variations in the magnitude and frequency of the fluctuations relative to the speed of planetary revolution lead to thickness errors and uniformity variations of up to 3%. Sufficiently rapid oscillations in the deposition rate have a negligible impact, while slow oscillations are found to be problematic, leading to changes in the nominal film thickness. Finally, superimposing noise as random fluctuations in the deposition rate has a negligible impact, confirming the importance of any underlying harmonic oscillations in deposition rate or source operation.
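A toy version of the model (omitting planetary rotation, which the paper includes) reproduces the qualitative finding that fast harmonic fluctuations average out while slow ones bias the final thickness:

```python
import math

def final_thickness(duration, r0=1.0, eps=0.05, freq=1.0, phase=0.0, dt=1e-3):
    """Integrate a deposition rate carrying a harmonic fluctuation,
    r(t) = r0 * (1 + eps * sin(2*pi*freq*t + phase)). Toy version of
    the model: planetary rotation, included in the paper, is omitted."""
    steps = int(round(duration / dt))
    thickness = 0.0
    for k in range(steps):
        t = k * dt
        thickness += r0 * (1.0 + eps * math.sin(2.0 * math.pi * freq * t + phase)) * dt
    return thickness

nominal = 10.0                                 # duration * r0
fast = final_thickness(10.0, freq=50.0)        # 500 full cycles: averages out
slow = final_thickness(10.0, freq=0.05)        # half a cycle: net thickness bias
print(abs(fast - nominal), abs(slow - nominal))
```

When the run spans many full oscillation periods the positive and negative rate excursions cancel; when it spans a fraction of a period, the excursion has a single sign and shifts the nominal thickness.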

  14. Semiconductor laser insert with uniform illumination for use in photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Charamisinau, Ivan; Happawana, Gemunu; Evans, Gary; Rosen, Arye; Hsi, Richard A.; Bour, David

    2005-08-01

    A low-cost semiconductor red laser light delivery system for esophagus cancer treatment is presented. The system is small enough for insertion into the patient's body. Scattering elements with nanoscale particles are used to achieve uniform illumination. The scattering element optimization calculations, with Mie theory, provide scattering and absorption efficiency factors for scattering particles composed of various materials. The possibility of using randomly deformed spheres and composite particles instead of perfect spheres is analyzed using an extension to Mie theory. The measured radiation pattern from a prototype light delivery system fabricated using these design criteria shows reasonable agreement with the theoretically predicted pattern.

  15. Severity of Organized Item Theft in Computerized Adaptive Testing: A Simulation Study

    ERIC Educational Resources Information Center

    Yi, Qing; Zhang, Jinming; Chang, Hua-Hua

    2008-01-01

    Criteria had been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated potential damage caused by organized item theft in computerized adaptive…

  16. SU-E-T-76: Comparing Homogeneity Between Gafchromic Film EBT2 and EBT3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizuno, H; Sumida, I; Ogawa, K

    2014-06-01

    Purpose: In a previous study, we found that the homogeneity of EBT2 differed among lot numbers. Variation in the local homogeneity of EBT3 among several lot numbers has not been reported. In this study, we investigated the film homogeneity of Gafchromic EBT3 films compared with EBT2 films. Methods: All sheets from five lots were cut into 12 pieces to investigate film homogeneity and were irradiated at 0.5, 2, and 3 Gy. To investigate intra- and inter-sheet uniformity, five sheets from five lots were exposed to 2 Gy: intra-sheet uniformity was evaluated by the coefficient of variation of homogeneity for all pieces of a single sheet, and inter-sheet uniformity was evaluated by the coefficient of variation of homogeneity among the same piece numbers in the five sheets. To investigate the difference in ADC value at various doses, a single sheet from each of the five lots was irradiated at 0.5 Gy and 3 Gy in addition to 2 Gy. A scan resolution of 72 dots per inch (dpi) and a color depth of 48-bit RGB were used. Films were analyzed with in-house software; the average ADC value in a central ROI and profiles along the X and Y axes were measured. Results and Conclusion: Intra-sheet uniformity of non-irradiated EBT2 films ranged from 0.1% to 0.4%, whereas that of irradiated EBT2 films ranged from 0.2% to 1.5%. In contrast, intra-sheet uniformity of both irradiated and non-irradiated EBT3 films ranged from 0.2% to 0.6%. Inter-sheet uniformity of all films was less than 0.5%. It is an interesting point that the homogeneity of EBT3 was similar between non-irradiated and irradiated films, whereas EBT2 showed a dose dependence of homogeneity in the ADC evaluation. These results suggested that EBT3 homogeneity was corrected by this feature.

  17. Using Computer-Generated Random Numbers to Calculate the Lifetime of a Comet.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1991-01-01

    An educational technique to calculate the lifetime of a comet using software-generated random numbers is introduced to undergraduate physics and astronomy students. Discussed are the generation and eligibility of the required random numbers, background literature related to the problem, and the solution to the problem using random numbers.…
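One possible classroom version of such an exercise (a hypothetical toy model; the article's exact formulation may differ) treats the comet's binding energy as a random walk and counts orbits until escape:

```python
import random
import statistics

def comet_lifetime(e0, rng, max_orbits=10000):
    """Toy model (an assumed formulation for illustration; the article's
    exact setup may differ): the comet's binding energy performs a +/-1
    random walk each orbit, and the comet is lost once the energy
    reaches zero. Returns the number of orbits survived (capped)."""
    e, orbits = e0, 0
    while e > 0 and orbits < max_orbits:
        e += rng.choice((-1, 1))
        orbits += 1
    return orbits

rng = random.Random(42)
lifetimes = [comet_lifetime(5, rng) for _ in range(1000)]
print(statistics.median(lifetimes))
```

Because the symmetric random walk has a heavy-tailed first-passage time, the median (rather than the mean) is the sensible summary for students to report.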

  18. Measuring Symmetry, Asymmetry and Randomness in Neural Network Connectivity

    PubMed Central

    Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni

    2014-01-01

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity. PMID:25006663
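The paper defines its own thresholding-free measure from the weight statistics; as a simpler illustrative stand-in, a symmetry index can be built from the symmetric and antisymmetric parts of the weight matrix:

```python
import random

def symmetry_index(W):
    """A simple symmetry index (an illustrative stand-in, not the
    paper's exact measure): compare the squared Frobenius norms of the
    symmetric and antisymmetric parts of W. Returns +1 for a fully
    symmetric matrix, -1 for a fully antisymmetric one, and ~0 for a
    zero-mean i.i.d. random matrix."""
    n = len(W)
    s2 = a2 = 0.0
    for i in range(n):
        for j in range(n):
            sym = 0.5 * (W[i][j] + W[j][i])
            asym = 0.5 * (W[i][j] - W[j][i])
            s2 += sym * sym
            a2 += asym * asym
    return (s2 - a2) / (s2 + a2)

rng = random.Random(0)
n = 60
W = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]  # i.i.d. weights
S = [[0.5 * (W[i][j] + W[j][i]) for j in range(n)] for i in range(n)]  # symmetrized
print(round(symmetry_index(W), 2), symmetry_index(S))
```

Note that nonzero-mean weights (e.g. all-excitatory connections) inflate the symmetric part, which is one reason a principled measure needs the confidence analysis the paper develops.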

  19. Measuring symmetry, asymmetry and randomness in neural network connectivity.

    PubMed

    Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni

    2014-01-01

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity.
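The symmetry measure itself is only described abstractly above. As an illustrative sketch (not the authors' exact definition), a thresholding-free symmetry index can be built from the symmetric and antisymmetric parts of a weight matrix W:

```python
import numpy as np

def symmetry_measure(W):
    """Illustrative symmetry index: +1 for a perfectly symmetric matrix
    (W == W.T), 0 for a strictly unidirectional pair, negative for
    antisymmetric structure. Diagonal (self-connections) is ignored.
    This is a sketch, not the measure defined in the paper."""
    W = np.asarray(W, dtype=float).copy()
    np.fill_diagonal(W, 0.0)
    sym = np.linalg.norm(W + W.T) ** 2   # energy of the symmetric part
    asym = np.linalg.norm(W - W.T) ** 2  # energy of the antisymmetric part
    if sym + asym == 0:
        return 0.0
    return (sym - asym) / (sym + asym)

# Bidirectional motif: two neurons projecting to each other with equal strength
W_sym = np.array([[0.0, 0.5], [0.5, 0.0]])
# Unidirectional motif: only one neuron projects to the other
W_uni = np.array([[0.0, 0.5], [0.0, 0.0]])
print(symmetry_measure(W_sym))  # 1.0
print(symmetry_measure(W_uni))  # 0.0
```

Comparing such an index for a measured connectivity matrix against its distribution under randomly shuffled weights gives a confidence estimate in the spirit of the paper's approach.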

  20. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated from 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence-based predictors of protein-protein and protein-DNA interfaces.
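A 1% uniform sample of a database this large can be drawn without holding all ~50 million sequences in memory. Reservoir sampling (Algorithm R; a generic technique, not necessarily the authors' implementation) selects k items uniformly at random in a single pass:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Return k items chosen uniformly at random from an iterable of
    unknown length, in one pass (Algorithm R). Deterministic for a
    fixed seed."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)           # fill the reservoir first
        else:
            j = rng.randint(0, i)            # item i is kept with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Hypothetical sequence IDs standing in for UniRef100 entries
sample = reservoir_sample((f"seq{i}" for i in range(100_000)), k=1000)
print(len(sample))  # 1000
```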

  1. Radiation effects on the mixed convection flow induced by an inclined stretching cylinder with non-uniform heat source/sink.

    PubMed

    Hayat, Tasawar; Qayyum, Sajid; Alsaedi, Ahmed; Asghar, Saleem

    2017-01-01

    This study investigates the mixed convection flow of Jeffrey liquid by an impermeable inclined stretching cylinder. Thermal radiation and non-uniform heat source/sink are considered. The convective boundary conditions at surface are imposed. Nonlinear expressions of momentum, energy and concentration are transformed into dimensionless systems. Convergent homotopic solutions of the governing systems are worked out by employing homotopic procedure. Impact of physical variables on the velocity, temperature and concentration distributions are sketched and discussed. Numerical computations for skin friction coefficient, local Nusselt and Sherwood numbers are carried out. It is concluded that velocity field enhances for Deborah number while reverse situation is observed regarding ratio of relaxation to retardation times. Temperature and heat transfer rate are enhanced via larger thermal Biot number. Effect of Schmidt number on the concentration and local Sherwood number is quite reverse.

  2. Radiation effects on the mixed convection flow induced by an inclined stretching cylinder with non-uniform heat source/sink

    PubMed Central

    Hayat, Tasawar; Qayyum, Sajid; Alsaedi, Ahmed; Asghar, Saleem

    2017-01-01

    This study investigates the mixed convection flow of Jeffrey liquid by an impermeable inclined stretching cylinder. Thermal radiation and non-uniform heat source/sink are considered. The convective boundary conditions at surface are imposed. Nonlinear expressions of momentum, energy and concentration are transformed into dimensionless systems. Convergent homotopic solutions of the governing systems are worked out by employing homotopic procedure. Impact of physical variables on the velocity, temperature and concentration distributions are sketched and discussed. Numerical computations for skin friction coefficient, local Nusselt and Sherwood numbers are carried out. It is concluded that velocity field enhances for Deborah number while reverse situation is observed regarding ratio of relaxation to retardation times. Temperature and heat transfer rate are enhanced via larger thermal Biot number. Effect of Schmidt number on the concentration and local Sherwood number is quite reverse. PMID:28441392

  3. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
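A minimal sketch of the multiplicative (Lehmer) generator described, using the widely known Park-Miller parameters m = 2^31 - 1 (a Mersenne prime) and primitive root a = 16807 for illustration; the Sigma 5 report would instead have used the largest prime accurately representable on that machine:

```python
M = 2**31 - 1   # the Mersenne prime 2147483647
A = 16807       # a primitive root modulo M (Park-Miller "minimal standard")

def lehmer(seed, n):
    """Return n outputs of the multiplicative congruential generator
    x_{k+1} = A * x_k mod M. Divide each output by M for uniform
    variates in (0, 1)."""
    x, out = seed, []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

print(lehmer(1, 3))  # [16807, 282475249, 1622650073]
```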

  4. Performance Analysis of Modified Accelerative Preallocation MAC Protocol for Passive Star-Coupled WDMA Networks

    NASA Astrophysics Data System (ADS)

    Yun, Changho; Kim, Kiseon

    2006-04-01

    For the passive star-coupled wavelength-division multiple-access (WDMA) network, a modified accelerative preallocation WDMA (MAP-WDMA) media access control (MAC) protocol is proposed, based on AP-WDMA. To show the advantages of MAP-WDMA over AP-WDMA as a MAC protocol for this network, the channel utilization, channel-access delay, and latency of MAP-WDMA are investigated and compared with those of AP-WDMA under various data traffic patterns (uniform, quasi-uniform, disconnected, mesh, and ring), under the assumption that the number of network stations equals the number of channels, i.e. without channel sharing. The results show that the channel utilization of MAP-WDMA is competitive with that of AP-WDMA at the expense of insignificantly higher latency. Specifically, if the number of network stations is small, MAP-WDMA provides better channel utilization for uniform, quasi-uniform, and disconnected traffic at all traffic loads, and for mesh and ring traffic at low loads; otherwise, MAP-WDMA outperforms AP-WDMA only for the first three traffic patterns at higher loads. In terms of channel-access delay, MAP-WDMA outperforms AP-WDMA regardless of the traffic pattern and the number of network stations.

  5. Discrete element method (DEM) simulations of stratified sampling during solid dosage form manufacturing.

    PubMed

    Hancock, Bruno C; Ketterhagen, William R

    2011-10-14

    Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best-practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency RSD. The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help to guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.
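The qualitative finding that truncated sampling underestimates segregation can be reproduced with a toy model (illustrative only; the linear potency profile below is invented, not taken from the DEM study). A batch whose potency drifts across the discharge is sampled uniformly over the whole batch versus with the first and last 10% truncated:

```python
import statistics

def potency(x):
    """Toy potency profile of a segregating batch: drifts linearly from
    105% at the start of discharge (x = 0) to 95% at the end (x = 1)."""
    return 105.0 - 10.0 * x

def rsd(values):
    """Relative standard deviation, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def locations(n, lo, hi):
    """n evenly spaced sampling positions, as fractions of the batch."""
    return [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]

uniform = [potency(x) for x in locations(10, 0.0, 1.0)]    # whole discharge
truncated = [potency(x) for x in locations(10, 0.1, 0.9)]  # ends skipped
print(rsd(uniform), rsd(truncated))  # truncated sampling sees less of the drift
```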

  6. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  7. Security authentication with a three-dimensional optical phase code using random forest classifier: an overview

    NASA Astrophysics Data System (ADS)

    Markman, Adam; Carnicer, Artur; Javidi, Bahram

    2017-05-01

    We overview our recent work [1] on utilizing three-dimensional (3D) optical phase codes for object authentication using the random forest classifier. A simple 3D optical phase code (OPC) is generated by combining multiple diffusers and glass slides. This tag is then placed on a quick-response (QR) code, which is a barcode capable of storing information and can be scanned under non-uniform illumination conditions, rotation, and slight degradation. A coherent light source illuminates the OPC and the transmitted light is captured by a CCD to record the unique signature. Feature extraction on the signature is performed and inputted into a pre-trained random-forest classifier for authentication.

  8. The concept of entropy in landscape evolution

    USGS Publications Warehouse

    Leopold, Luna Bergere; Langbein, Walter Basil

    1962-01-01

    The concept of entropy is expressed in terms of probability of various states. Entropy treats of the distribution of energy. The principle is introduced that the most probable condition exists when energy in a river system is as uniformly distributed as may be permitted by physical constraints. From these general considerations equations for the longitudinal profiles of rivers are derived that are mathematically comparable to those observed in the field. The most probable river profiles approach the condition in which the downstream rate of production of entropy per unit mass is constant. Hydraulic equations are insufficient to determine the velocity, depths, and slopes of rivers that are themselves authors of their own hydraulic geometries. A solution becomes possible by introducing the concept that the distribution of energy tends toward the most probable. This solution leads to a theoretical definition of the hydraulic geometry of river channels that agrees closely with field observations. The most probable state for certain physical systems can also be illustrated by random-walk models. Average longitudinal profiles and drainage networks were so derived and these have the properties implied by the theory. The drainage networks derived from random walks have some of the principal properties demonstrated by the Horton analysis; specifically, the logarithms of stream length and stream numbers are proportional to stream order.

  9. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits an ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables one to quickly estimate the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrates the maturity and overall understanding of the technology. PMID:26057576
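The simplest of the standard statistical tests mentioned is the frequency (monobit) test from the NIST SP 800-22 suite, which checks whether ones and zeros occur in roughly equal proportion; a sketch:

```python
import math
import random

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test. A p-value >= 0.01 is
    conventionally taken as consistent with an unbiased bit source."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)    # +1 for each 1, -1 for each 0
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# Demo on a pseudo-random stream (stands in for hardware-generated bits)
rng = random.Random(42)
bits = [rng.getrandbits(1) for _ in range(10_000)]
print(monobit_p_value(bits))
```

A perfectly balanced sequence scores p = 1.0, while a constant sequence scores essentially 0 and fails the test.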

  10. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
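Parameter selection of the kind RANCYCLE assists with can be illustrated (this is not the program's actual algorithm) by the Hull-Dobell theorem, which gives necessary and sufficient conditions for a mixed LCG x <- (a*x + c) mod m to attain its maximal period m:

```python
from math import gcd

def prime_factors(n):
    """Set of distinct prime factors of n (trial division)."""
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def has_full_period(a, c, m):
    """Hull-Dobell theorem: the mixed LCG x <- (a*x + c) mod m has
    period m for every seed iff (1) gcd(c, m) = 1, (2) a - 1 is
    divisible by every prime factor of m, and (3) a - 1 is divisible
    by 4 whenever m is."""
    if gcd(c, m) != 1:
        return False
    if any((a - 1) % p for p in prime_factors(m)):
        return False
    if m % 4 == 0 and (a - 1) % 4:
        return False
    return True

# The classic Numerical Recipes constants pass; RANDU's (a=65539, c=0,
# m=2**31, as tested in the record at the top of this page) do not.
print(has_full_period(1664525, 1013904223, 2**32))  # True
print(has_full_period(65539, 0, 2**31))             # False
```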

  11. On the Complexity of Item Response Theory Models.

    PubMed

    Bonifay, Wes; Cai, Li

    2017-01-01

    Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models-exploratory factor analytic, bifactor, DINA, and DINO-with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had 1 more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form-and not goodness-of-fit alone-when selecting an IRT model.

  12. Spatial grain and the causes of regional diversity gradients in ants.

    PubMed

    Kaspari, Michael; Yuan, May; Alonso, Leeanne

    2003-03-01

    Gradients of species richness (S; the number of species of a given taxon in a given area and time) are ubiquitous. A key goal in ecology is to understand whether and how the many processes that generate these gradients act at different spatial scales. Here we evaluate six hypotheses for diversity gradients with 49 New World ant communities, from tundra to rain forest. We contrast their performance at three spatial grains, from S(plot), the average number of ant species nesting in a 1-m² plot, through Fisher's alpha, an index that treats our 30 1-m² plots as subsamples of a locality's diversity. At the smallest grain, S(plot) was tightly correlated (r² = 0.99) with colony abundance in a fashion indistinguishable from the packing of randomly selected individuals into a fixed space. As spatial grain increased, the coaction of two factors linked to high net rates of diversification--warm temperatures and large areas of uniform climate--accounted for 75% of the variation in Fisher's alpha. However, the mechanisms underlying these correlations (i.e., precisely how temperature and area shape the balance of speciation to extinction) remain elusive.
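Fisher's alpha, the largest-grain index used above, is defined implicitly by the log-series relation S = α ln(1 + N/α) for S species among N individuals; a minimal numerical solver (generic, not the study's code):

```python
import math

def fishers_alpha(S, N, tol=1e-10):
    """Solve S = alpha * ln(1 + N / alpha) for Fisher's log-series
    diversity index alpha by bisection (requires 0 < S < N). The
    left-hand side is increasing in alpha, so bisection applies."""
    lo, hi = 1e-9, float(N)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if mid * math.log1p(N / mid) > S:
            hi = mid          # predicted richness too high: shrink alpha
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical example: 40 ant species among 1000 nesting colonies
alpha = fishers_alpha(40, 1000)
print(round(alpha, 2))
```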

  13. Asymptotic modal analysis and statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Dowell, Earl H.

    1988-01-01

    Statistical Energy Analysis (SEA) is defined by considering the asymptotic limit of Classical Modal Analysis, an approach called Asymptotic Modal Analysis (AMA). The general approach is described for both structural and acoustical systems. The theoretical foundation is presented for structural systems, and experimental verification is presented for a structural plate responding to a random force. Work accomplished subsequent to the grant initiation focusses on the acoustic response of an interior cavity (i.e., an aircraft or spacecraft fuselage) with a portion of the wall vibrating in a large number of structural modes. First results were presented at the ASME Winter Annual Meeting in December, 1987, and accepted for publication in the Journal of Vibration, Acoustics, Stress and Reliability in Design. It is shown that asymptotically as the number of acoustic modes excited becomes large, the pressure level in the cavity becomes uniform except at the cavity boundaries. However, the mean square pressure at the cavity corner, edge and wall is, respectively, 8, 4, and 2 times the value in the cavity interior. Also it is shown that when the portion of the wall which is vibrating is near a cavity corner or edge, the response is significantly higher.

  14. Modulation aware cluster size optimisation in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation through cluster size optimisation, taking into account the effect of modulation when the nodes cannot communicate using a baseband technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it improves energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with modulation effects taken into account. Energy minimisation can be achieved by changing the modulation scheme (BPSK, QPSK, 16-QAM, 64-QAM, etc.), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are deployed uniformly at random. It is also observed that placing the base station at the centre of the scenario allows only a small number of modulation schemes to operate in an energy-efficient manner, whereas placing it at a corner of the sensing field allows a large number of schemes to do so.

  15. Overlapping-image multimode interference couplers with a reduced number of self-images for uniform and nonuniform power splitting

    NASA Astrophysics Data System (ADS)

    Bachmann, M.; Besse, P. A.; Melchior, H.

    1995-10-01

    Overlapping-image multimode interference (MMI) couplers, a new class of devices, permit uniform and nonuniform power splitting. A theoretical description directly relates coupler geometry to image intensities, positions, and phases. Among many possibilities of nonuniform power splitting, examples of 1 × 2 couplers with ratios of 15:85 and 28:72 are given. An analysis of uniform power splitters includes the well-known 2 × N and 1 × N MMI couplers. Applications of MMI couplers include mode filters, mode splitters-combiners, and mode converters.

  16. Combined action of transverse oscillations and uniform cross-flow on vortex formation and pattern of a circular cylinder

    NASA Astrophysics Data System (ADS)

    Lam, K. M.; Liu, P.; Hu, J. C.

    2010-07-01

    This paper studies the roles of lateral cylinder oscillations and a uniform cross-flow in the vortex formation and wake modes of an oscillating circular cylinder. A circular cylinder is given lateral oscillations of varying amplitudes (between 0.28 and 1.42 cylinder diameters) in a slow uniform flow stream (Reynolds number = 284) to produce the 2S, 2P and P+S wake modes. Detailed flow information is obtained with time-resolved particle-image velocimetry and phase-locked averaging techniques. In the 2S and 2P modes, the flow speeds relative to the cylinder movement are less than the uniform flow velocity, and it is found that the initial formation of a vortex is caused by shear-layer separation of the uniform flow on the cylinder. Subsequent development of the shear-layer vortices is affected by the lateral cylinder movement. At small cylinder oscillation amplitudes, vortices are shed in synchronization with the cylinder movement, resulting in the 2S mode. The 2P mode occurs at larger cylinder oscillation amplitudes, at which each shear-layer vortex is found to undergo intense stretching and eventual bifurcation into two separate vortices. The P+S mode occurs when the cylinder moving speeds are, for most of the time, higher than the speed of the uniform flow. These situations are found at fast and large-amplitude cylinder oscillations, in which the flow relative to the cylinder movement takes over from the uniform flow in governing the initial vortex formation. The formation stages of vortices from the cylinder are found to bear close resemblance to those of the vortex street pattern of a cylinder oscillating in an otherwise quiescent fluid at Keulegan-Carpenter numbers around 16. Vortices in the inclined vortex street pattern so formed are then convected downstream by the uniform flow, as are the vortex pairs in the 2P mode.

  17. 40 CFR Appendix C to Part 63 - Determination of the Fraction Biodegraded (Fbio) in a Biological Treatment Unit

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... designed and operated to approach or achieve uniform biomass distribution and organic compound... required by the rule. Unless otherwise specified, the procedures presented in this appendix are designed to... subdivided into a series of zones that have uniform characteristics within each zone. The number of zones...

  18. 40 CFR Appendix C to Part 63 - Determination of the Fraction Biodegraded (Fbio) in a Biological Treatment Unit

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... designed and operated to approach or achieve uniform biomass distribution and organic compound... required by the rule. Unless otherwise specified, the procedures presented in this appendix are designed to... subdivided into a series of zones that have uniform characteristics within each zone. The number of zones...

  19. 40 CFR Appendix C to Part 63 - Determination of the Fraction Biodegraded (Fbio) in a Biological Treatment Unit

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... designed and operated to approach or achieve uniform biomass distribution and organic compound... required by the rule. Unless otherwise specified, the procedures presented in this appendix are designed to... subdivided into a series of zones that have uniform characteristics within each zone. The number of zones...

  20. 40 CFR Appendix C to Part 63 - Determination of the Fraction Biodegraded (Fbio) in a Biological Treatment Unit

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... designed and operated to approach or achieve uniform biomass distribution and organic compound... required by the rule. Unless otherwise specified, the procedures presented in this appendix are designed to... subdivided into a series of zones that have uniform characteristics within each zone. The number of zones...

  1. 40 CFR Appendix C to Part 63 - Determination of the Fraction Biodegraded (Fbio) in a Biological Treatment Unit

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... designed and operated to approach or achieve uniform biomass distribution and organic compound... required by the rule. Unless otherwise specified, the procedures presented in this appendix are designed to... subdivided into a series of zones that have uniform characteristics within each zone. The number of zones...

  2. New reversing design method for LED uniform illumination.

    PubMed

    Wang, Kai; Wu, Dan; Qin, Zong; Chen, Fei; Luo, Xiaobing; Liu, Sheng

    2011-07-04

    In light-emitting diode (LED) applications, how to optimize the light intensity distribution curve (LIDC) and design a corresponding optical component to achieve uniform illumination at a given distance-height ratio (DHR) is becoming a significant issue. A new reversing design method is proposed to solve this problem, comprising the design and optimization of the LIDC for highly uniform illumination and a new algorithm for a freeform lens that generates the required LIDC from the LED light source. Using this method, two new LED modules integrated with freeform lenses are successfully designed for slim direct-lit LED backlighting with a thickness of 10 mm; uniformity of illuminance increases from 0.446 to 0.915 and from 0.155 to 0.887 for DHRs of 2 and 3, respectively. Moreover, the number of new LED modules decreases dramatically, to 1/9 of the traditional LED modules, while achieving similar illumination uniformity in backlighting. This new method therefore provides a practical and simple way to design optics for uniform LED illumination when the DHR is much larger than 1.

  3. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that, for each band, ground objects with similar reflectance will form uniform regions when a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method; results show that stripes in the scenes are well corrected without any significant information loss, and the residual non-uniformity is less than 0.5%. In addition, the proposed method is compared with two other regular methods, and all are evaluated in terms of their adaptability to various scenes, non-uniformity, roughness and spectral fidelity. The proposed method shows strong adaptability, high accuracy and efficiency.

  4. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed through uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin hypercube sampling (LHS) and polynomial chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
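Latin hypercube sampling, one ingredient of the non-intrusive method, splits each input dimension into equal-probability strata and hits every stratum exactly once; a minimal generic sketch (not the paper's specific implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on [0, 1)^d: each dimension is divided
    into n_samples equal strata and each stratum is sampled exactly
    once, with a random position inside the stratum."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                  # random stratum order per dimension
        for i in range(n_samples):
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 2)
# Every stratum [k/10, (k+1)/10) is hit exactly once in each dimension:
print(sorted(int(10 * p[0]) for p in pts))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The stratification guarantees good one-dimensional coverage with far fewer points than plain Monte Carlo sampling would need.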

  5. 77 FR 17897 - National Uniform Emission Standards for Storage Vessel and Transfer Operations, Equipment Leaks...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-26

    ... Protection Agency, Research Triangle Park, North Carolina 27711; Telephone number: (919) 541-3608; Fax number... (E143-01), Office of Air Quality Planning and Standards, U.S. Environmental Protection Agency, Research..., Research Triangle Park, North Carolina 27711; Telephone number: (919) 541-5372; Fax number (919) 541-0246...

  6. 18 CFR 367.4 - Numbering system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Numbering system. 367.4... NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO THE PROVISIONS OF... Instructions § 367.4 Numbering system. (a) The account numbering plan used in this part consists of a system of...

  7. 18 CFR 367.4 - Numbering system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Numbering system. 367.4... NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO THE PROVISIONS OF... Instructions § 367.4 Numbering system. (a) The account numbering plan used in this part consists of a system of...

  8. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch, and the individual photoresponse of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
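A minimal sketch of per-pixel higher-order polynomial calibration, assuming flat-field frames at several known radiance levels. The detector model, level count and pixel count are illustrative, not the study's data.

```python
import numpy as np

def fit_nuc(raw, ref, order=3):
    """Fit, for every pixel, a polynomial mapping raw counts to the
    reference radiance, using flat-field frames at several levels.
    raw: (n_levels, n_pixels); ref: (n_levels,).
    Returns coefficients of shape (order + 1, n_pixels)."""
    return np.stack([np.polyfit(raw[:, p], ref, order)
                     for p in range(raw.shape[1])], axis=1)

def apply_nuc(frame, coeffs):
    """Correct one raw frame (n_pixels,) with the per-pixel polynomials."""
    return np.array([np.polyval(coeffs[:, p], v) for p, v in enumerate(frame)])

rng = np.random.default_rng(2)
ref = np.linspace(0.1, 1.0, 6)                 # 6 calibration radiance levels
gains = rng.normal(1.0, 0.1, 8)                # 8 pixels with mismatched gains
raw = np.outer(ref + 0.05 * ref**2, gains)     # mildly nonlinear response
coeffs = fit_nuc(raw, ref, order=3)
corrected = apply_nuc(raw[3], coeffs)          # should recover ref[3] at every pixel
```

Storage cost is (order + 1) coefficients per pixel, which is the order-versus-precision trade-off the abstract analyzes.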

  9. Random numbers from vacuum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
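The randomness-extraction core can be illustrated with a classic 16-bit Fibonacci linear feedback shift register. The paper's extractor processes the digitized homodyne signal through an LFSR-based scheme; this sketch shows only the register itself, using the well-known primitive polynomial x^16 + x^14 + x^13 + x^11 + 1.

```python
def lfsr_stream(seed=0xACE1):
    """16-bit Fibonacci LFSR with taps (16, 14, 13, 11), i.e. the
    primitive polynomial x^16 + x^14 + x^13 + x^11 + 1. Maximal-length:
    the state cycles through all 2**16 - 1 nonzero values."""
    state = seed
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield bit

gen = lfsr_stream()
period = 2**16 - 1
bits = [next(gen) for _ in range(period)]
ones = sum(bits)   # an m-sequence contains exactly 2**(n-1) ones per period
```

The balance property of m-sequences (exactly 2**15 ones in one period) is one reason LFSR post-processing whitens a raw physical bit stream cheaply in hardware.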

  10. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  11. Convergence of the Full Compressible Navier-Stokes-Maxwell System to the Incompressible Magnetohydrodynamic Equations in a Bounded Domain II: Global Existence Case

    NASA Astrophysics Data System (ADS)

    Fan, Jishan; Li, Fucai; Nakamura, Gen

    2018-06-01

    In this paper we continue our study on the establishment of uniform estimates of strong solutions with respect to the Mach number and the dielectric constant to the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ R³. In Fan et al. (Kinet Relat Models 9:443-453, 2016), uniform estimates were obtained for large initial data in a short time interval. Here we shall show that the uniform estimates exist globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.

  12. Synchronization in oscillator networks with delayed coupling: a stability criterion.

    PubMed

    Earl, Matthew G; Strogatz, Steven H

    2003-03-01

    We derive a stability criterion for the synchronous state in networks of identical phase oscillators with delayed coupling. The criterion applies to any network (whether regular or random, low dimensional or high dimensional, directed or undirected) in which each oscillator receives delayed signals from k others, where k is uniform for all oscillators.

  13. Influence of tree spatial pattern and sample plot type and size on inventory

    Treesearch

    John-Pascall Berrill; Kevin L. O'Hara

    2012-01-01

    Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

  14. Probabilistic pathway construction.

    PubMed

    Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha

    2011-07-01

    Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
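The three probabilistic selection schemes can be sketched as weighted sampling over candidate reactions. The reaction names and connectivity values below are invented for illustration; the paper's algorithm additionally checks compatibility with balanced cell growth, which is omitted here.

```python
import random

def pick_reaction(candidates, connectivity, scheme, rng):
    """Choose the next reaction to extend a pathway.
    connectivity[r]: toy proxy for metabolite connectivity of reaction r.
    scheme: 'high' favours high connectivity, 'low' the opposite, and
    'uniform' ignores connectivity (the best-yielding scheme in the paper)."""
    if scheme == 'uniform':
        return rng.choice(candidates)
    weights = [connectivity[r] if scheme == 'high' else 1.0 / connectivity[r]
               for r in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(0)
conn = {'r1': 10, 'r2': 1, 'r3': 5}
picks = [pick_reaction(list(conn), conn, 'high', rng) for _ in range(2000)]
```

Under the 'high' scheme, the 10-fold connectivity gap makes 'r1' far more frequent than 'r2'; swapping in 'low' or 'uniform' changes only the weight computation.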

  15. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α -uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α =2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c =e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c =1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥3 , minimum vertex covers on α -uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c =e /(α -1 ) where the replica symmetry is broken.

  16. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems.

    PubMed

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.

  17. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    NASA Astrophysics Data System (ADS)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  18. Spatial pattern of Baccharis platypoda shrub as determined by sex and life stages

    NASA Astrophysics Data System (ADS)

    Fonseca, Darliana da Costa; de Oliveira, Marcio Leles Romarco; Pereira, Israel Marinho; Gonzaga, Anne Priscila Dias; de Moura, Cristiane Coelho; Machado, Evandro Luiz Mendonça

    2017-11-01

    Spatial patterns of dioecious species can be determined by their nutritional requirements and intraspecific competition, apart from being a response to environmental heterogeneity. The aim of the study was to evaluate the spatial pattern of populations of a dioecious shrub with respect to the sex and reproductive stage of individuals. Sampling was carried out in three areas located in the meridional portion of Serra do Espinhaço, wherein individuals of the studied species were mapped. The spatial pattern was determined through O-ring analysis and Ripley's K-function, and the distribution of individuals' frequencies was verified with the χ² test. Populations in two areas showed an aggregate spatial pattern tending towards random or uniform according to the observed scale. Male and female adults presented an aggregate pattern at smaller scales, while random and uniform patterns were verified above 20 m for individuals of both sexes in areas A2 and A3. Young individuals presented an aggregate pattern in all areas and spatial independence in relation to adult individuals, especially female plants. The interactions between individuals of both genders presented spatial independence with respect to spatial distribution. Baccharis platypoda showed characteristics in accordance with the spatial distribution of savannic and dioecious species, whereas the population was aggregated tending towards random at greater spatial scales. Young individuals showed an aggregated pattern at different scales compared to adults, without positive association between them. Female and male adult individuals presented similar characteristics, confirming that adult individuals at greater scales are randomly distributed despite their distinct preferences for environments with moisture variation.
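A naive Ripley's K estimate (without the edge correction used in real analyses) shows how aggregation is detected: K(r) well above πr² at scale r indicates clumping, while K(r) near πr² is consistent with complete spatial randomness. The point patterns below are simulated, not the study's field data.

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K at radius r (no edge correction): the mean
    number of neighbours within r of a point, divided by the intensity.
    points: (n, 2) coordinates; area: area of the study window."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = (d < r).sum() - n          # drop the n self-pairs
    lam = n / area                          # intensity (points per unit area)
    return neighbours / (n * lam)

rng = np.random.default_rng(3)
csr = rng.uniform(0, 10, size=(400, 2))     # complete spatial randomness
clustered = (rng.uniform(0, 10, size=(20, 2)).repeat(20, axis=0)
             + rng.normal(0, 0.1, size=(400, 2)))   # 20 tight clumps
```

For the clustered pattern, K(1.0) greatly exceeds the CSR value because each point sees its ~19 clump-mates within the radius.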

  19. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers are a fundamental ingredient of secure communications and numerical simulation, as well as of games and information science in general. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation in strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed highly selective randomness tests, qualifying them as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499

  20. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
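The simplest member of the NIST SP 800-22 suite mentioned above, the frequency (monobit) test, fits in a few lines: convert bits to ±1, sum, and compute the p-value with the complementary error function. The bit streams below are simulated stand-ins for hardware output.

```python
import math
import random

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the null
    hypothesis that ones and zeros are equally likely.
    p = erfc(|S_n| / sqrt(2n)), where S_n is the +/-1 partial sum."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

rng = random.Random(42)
fair = [rng.getrandbits(1) for _ in range(10000)]
biased = [1 if rng.random() < 0.6 else 0 for _ in range(10000)]
p_fair = monobit_p_value(fair)
p_biased = monobit_p_value(biased)
```

A sequence fails the test when the p-value falls below the chosen significance level (NIST uses 0.01); the 60/40-biased stream fails catastrophically, while the fair stream does not.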

  1. [The content-uniformity of dispensary-prepared rectal suppositories].

    PubMed

    Lüdde, H; Nestler, D

    1990-01-01

    Rectal suppositories dispensed by prescription in small numbers (up to N = 30) do not satisfy content-uniformity requirements when the last N/10 poured suppositories are considered. This problem, caused by sedimentation, can be solved by increasing the amount of substance by N/10, giving a total safety margin of 15%.

  2. Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets.

    PubMed

    Dubois, J; Poupon, C; Lethimonnier, F; Le Bihan, D

    2006-08-01

    A method is proposed for generating schemes of diffusion gradient orientations which allow the diffusion tensor to be reconstructed from partial data sets in clinical DT-MRI, should the acquisition be corrupted or terminated before completion because of patient motion. A general energy-minimization electrostatic model was developed in which the interactions between orientations are weighted according to their temporal order during acquisition. In this report, two corruption scenarios were specifically considered for generating relatively uniform schemes of 18 and 60 orientations, with useful subsets of 6 and 15 orientations. The sets and subsets were compared to conventional sets through their energy, condition number and rotational invariance. Schemes of 18 orientations were tested on a volunteer. The optimized sets were similar to uniform sets in terms of energy, condition number and rotational invariance, whether the complete set or only a subset was considered. Diffusion maps obtained in vivo were close to those for uniform sets whatever the acquisition time was. This was not the case with conventional schemes, whose subset uniformity was insufficient. With the proposed approach, sets of orientations responding to several corruption scenarios can be generated, which is potentially useful for imaging uncooperative patients or infants.
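The electrostatic criterion can be sketched as a pairwise 1/distance energy over the gradient directions and their antipodes (DTI gradients are sign-symmetric); lower energy means more uniform angular coverage. The paper's model additionally weights pairs by their temporal order during acquisition, which is omitted here, and the direction sets below are invented.

```python
import numpy as np

def electrostatic_energy(dirs):
    """Sum of 1/distance repulsions over all point pairs, where every
    unit direction contributes itself and its antipode. Lower energy
    corresponds to a more uniform angular distribution."""
    pts = np.vstack([dirs, -dirs])
    e = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            e += 1.0 / np.linalg.norm(pts[i] - pts[j])
    return e

def normalize(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

spread = normalize(np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1],
                             [1, 1, 0], [1, 0, 1], [0, 1, 1]]))
bunched = normalize(np.array([[0.02, 0.00, 1], [0.00, 0.02, 1],
                              [-0.02, 0.00, 1], [0.00, -0.02, 1],
                              [0.02, 0.02, 1], [0.00, 0.00, 1]]))
```

Six well-spread directions have far lower energy than six directions bunched around one axis, which is why energy minimization yields uniform schemes, and why weighting pairs by acquisition order keeps early subsets uniform too.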

  3. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

    Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.

  4. The Ciliate Paramecium Shows Higher Motility in Non-Uniform Chemical Landscapes

    PubMed Central

    Giuffre, Carl; Hinow, Peter; Vogel, Ryan; Ahmed, Tanvir; Stocker, Roman; Consi, Thomas R.; Strickler, J. Rudi

    2011-01-01

    We study the motility behavior of the unicellular protozoan Paramecium tetraurelia in a microfluidic device that can be prepared with a landscape of attracting or repelling chemicals. We investigate the spatial distribution of the positions of the individuals at different time points with methods from spatial statistics and Poisson random point fields. This quantifies the informal notion of a “uniform distribution” (or the lack thereof). Our device is characterized by the absence of large systematic biases due to gravitation and fluid flow. It has the potential to be applied to the study of other aquatic chemosensitive organisms as well. This may result in better diagnostic devices for environmental pollutants. PMID:21494596

  5. The uniform quantized electron gas revisited

    NASA Astrophysics Data System (ADS)

    Lomba, Enrique; Høye, Johan S.

    2017-11-01

    In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0 . As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.

  6. IndeCut evaluates performance of network motif discovery algorithms.

    PubMed

    Ansariola, Mitra; Megraw, Molly; Koslicki, David

    2018-05-01

    Genomic networks represent a complex map of molecular interactions which are descriptive of the biological processes occurring in living cells. Identifying the small over-represented circuitry patterns in these networks helps generate hypotheses about the functional basis of such complex processes. Network motif discovery is a systematic way of achieving this goal. However, a reliable network motif discovery outcome requires generating random background networks which are the result of a uniform and independent graph sampling method. To date, there has been no method to numerically evaluate whether any network motif discovery algorithm performs as intended on realistically sized datasets-thus it was not possible to assess the validity of resulting network motifs. In this work, we present IndeCut, the first method to date that characterizes network motif finding algorithm performance in terms of uniform sampling on realistically sized networks. We demonstrate that it is critical to use IndeCut prior to running any network motif finder for two reasons. First, IndeCut indicates the number of samples needed for a tool to produce an outcome that is both reproducible and accurate. Second, IndeCut allows users to choose the tool that generates samples in the most independent fashion for their network of interest among many available options. The open source software package is available at https://github.com/megrawlab/IndeCut. megrawm@science.oregonstate.edu or david.koslicki@math.oregonstate.edu. Supplementary data are available at Bioinformatics online.

  7. A fast ergodic algorithm for generating ensembles of equilateral random polygons

    NASA Astrophysics Data System (ADS)

    Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.

    2009-03-01

    Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons with their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by it, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a substantial improvement over existing generation methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
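For contrast with the paper's generator, the classic crankshaft move, rotating a sub-chain about the axis through two vertices, is one of the widely used moves it is compared against: it preserves both closure and every edge length, so repeated random moves wander through equilateral-polygon space. A minimal sketch:

```python
import numpy as np

def crankshaft(poly, i, j, theta):
    """Rotate the vertices strictly between indices i and j by angle
    theta about the axis through poly[i] and poly[j]. The move keeps the
    polygon closed and all edge lengths fixed (a classic ergodic move
    set; the paper's 'step-wise uniform' generator is different)."""
    k = poly[j] - poly[i]
    k = k / np.linalg.norm(k)
    c, s = np.cos(theta), np.sin(theta)
    out = poly.copy()
    for v in range(i + 1, j):
        p = poly[v] - poly[i]
        # Rodrigues' rotation formula about unit axis k
        out[v] = poly[i] + p * c + np.cross(k, p) * s + k * np.dot(k, p) * (1 - c)
    return out

# Regular hexagon with unit edges: a closed equilateral polygon.
t = np.arange(6) * np.pi / 3
hexagon = np.stack([np.cos(t), np.sin(t), np.zeros(6)], axis=1)
moved = crankshaft(hexagon, 1, 4, 0.7)
edge_lengths = np.linalg.norm(np.roll(moved, -1, axis=0) - moved, axis=1)
```

Edge lengths survive because rotation about the line through poly[i] and poly[j] preserves distances to every point on that line, including both pivot vertices.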

  8. Disease invasion risk in a growing population.

    PubMed

    Yuan, Sanling; van den Driessche, P; Willeboordse, Frederick H; Shuai, Zhisheng; Ma, Junling

    2016-09-01

    The spread of an infectious disease may depend on the population size. For simplicity, classic epidemic models assume homogeneous mixing, usually standard incidence or mass action. For standard incidence, the contact rate between any pair of individuals is inversely proportional to the population size, and so the basic reproduction number (and thus the initial exponential growth rate of the disease) is independent of the population size. For mass action, this contact rate remains constant, predicting that the basic reproduction number increases linearly with the population size, meaning that disease invasion is easiest when the population is largest. In this paper, we show that neither of these may be true on a slowly evolving contact network: the basic reproduction number of a short epidemic can reach its maximum while the population is still growing. The basic reproduction number is proportional to the spectral radius of a contact matrix, which is shown numerically to be well approximated by the average excess degree of the contact network. We base our analysis on modeling the dynamics of the average excess degree of a random contact network with constant population input, proportional deaths, and preferential attachment for contacts brought in by incoming individuals (i.e., individuals with more contacts attract more incoming contacts). In addition, we show that our result also holds for uniform attachment of incoming contacts (i.e., every individual has the same chance of attracting incoming contacts), and much more general population dynamics. Our results show that a disease spreading in a growing population may evade control if disease control planning is based on the basic reproduction number at maximum population size.

  9. Influence of "in series" elastic resistance on muscular performance during a biceps-curl set on the cable machine.

    PubMed

    García-López, David; Herrero, Azael J; González-Calvo, Gustavo; Rhea, Matthew R; Marín, Pedro J

    2010-09-01

    This study aimed to investigate the role of elastic resistance (ER) applied "in series" to a pulley-cable (PC) machine on the number of repetitions performed, kinematic parameters, and perceived exertion during a biceps-curl set to failure with a submaximal load (70% of the 1-repetition maximum). Twenty-one undergraduate students (17 men and 4 women) performed, on 2 different days, 1 biceps-curl set on the PC machine. Subjects were randomly assigned to complete 2 experimental conditions in a cross-over fashion: conventional PC mode or ER + PC mode. Results indicate that ER applied "in series" to a PC machine significantly reduces (p < 0.05) the maximal number of repetitions and results in a smooth and consistent decline in mean acceleration throughout the set, in comparison to the conventional PC mode. Although no significant differences were found concerning intrarepetition kinematics, the ER tended to reduce the peak acceleration of the load (by 18.6%). With a more uniformly distributed external resistance, a greater average muscle tension could have been achieved throughout the range of movement, leading to greater fatigue that could explain the lower number of maximal repetitions achieved. The application of force in a smooth, consistent fashion during each repetition of an exercise, while avoiding active deceleration, is expected to enhance the benefits of the resistance exercise, especially for those seeking greater increases in muscular hypertrophy.

  10. Quality of anthelminthic medicines available in Jimma Ethiopia.

    PubMed

    Belew, Sileshi; Suleman, Sultan; Wynendaele, Evelien; D'Hondt, Matthias; Kosgei, Anne; Duchateau, Luc; De Spiegeleer, Bart

    2018-01-01

    Soil-transmitted helminthiasis and schistosomiasis are major public health problems in Ethiopia. Mass deworming of the at-risk population, using a single-dose administration of 400 mg albendazole (ABZ) or 500 mg mebendazole (MBZ) for treatment of common intestinal worms and 40 mg of praziquantel (PZQ) per kg body weight for treatment of schistosomiasis, is one of the strategies recommended by the World Health Organization (WHO) to control the morbidity of soil-transmitted helminthiasis and schistosomiasis. Since storage conditions, climate, and the means of transportation and distribution can all affect the quality of medicines, regular assessment through surveys is critical to ensure the therapeutic outcome and to minimize the risk of toxicity to the patient and of parasite resistance. Therefore, this study was conducted to assess the pharmaceutical quality of ABZ, MBZ and PZQ tablet brands commonly available in Jimma town (south-west Ethiopia). Retail pharmacies (n=10) operating in Jimma town were selected using a simple random sampling method. Samples of anthelminthic medicines available in the selected pharmacies were collected. Sample information was recorded and encompassed trade name, active ingredient name, manufacturer's name and full address, labeled medicine strength, dosage form, number of units per container, dosage statement, batch/lot number, manufacturing and expiry dates, storage information and presence of leaflets/package inserts. Moreover, a first visual inspection was performed, encompassing uniformity of color, uniformity of size, breaks, cracks, splits, embedded surface spots or visual contaminations. Finally, the physico-chemical quality attributes investigated encompassed mass uniformity, quantity of active pharmaceutical ingredient (API), disintegration and dissolution, all following pharmacopoeial test methods. The physical characteristics of dosage form, packaging and labeling information of all samples complied with the criteria given in the WHO checklists. 
The mass uniformity of tablets of each brand of ABZ, MBZ and PZQ complied with the pharmacopoeial specification limits, i.e. no more than two individual masses deviating from the average tablet mass by more than 5%, and none by more than 10%. The quantity of API in all investigated tablet brands was within the 90-110% label claim (l.c.) limits, ranging between 95.05 and 110.09% l.c. Disintegration times were in line with the pharmacopoeial specification limit for immediate-release (IR) tablets, ranging between 0.5 and 13 min. However, the dissolution results (mean±SD, n=6) of one ABZ brand (Wormin®, Q=59.21±0.99% at 30 min) and two PZQ brands (Bermoxel®, Q=63.43±0.70% and Distocide®, Q=62.43±1.67%, at 75 min) showed poor dissolution, failing the United States Pharmacopeia (USP) dissolution specification limit. Copyright © 2017 Elsevier B.V. All rights reserved.
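The mass-uniformity criterion quoted above (at most two tablets deviating more than 5% from the mean, and none more than 10%) is straightforward to encode; the tablet masses below are invented for illustration.

```python
def passes_mass_uniformity(masses, limit1=0.05, limit2=0.10):
    """Pharmacopoeial-style uniformity of mass for a set of tablets:
    at most two masses may deviate from the mean by more than limit1
    (5%), and none by more than limit2 (10%)."""
    mean = sum(masses) / len(masses)
    dev = [abs(m - mean) / mean for m in masses]
    return sum(d > limit1 for d in dev) <= 2 and all(d <= limit2 for d in dev)

# 20 hypothetical tablet masses in mg, tightly grouped around 500 mg.
good = [500 + d for d in (0, 2, -3, 1, 4, -2, 0, 3, -1, 2,
                          1, -4, 2, 0, -2, 3, 1, -1, 2, 0)]
bad = good[:-1] + [560]   # one tablet more than 10% above the mean
```

The `good` batch passes (all deviations under 1%), while the single 560 mg outlier in `bad` violates the 10% limit and fails the whole batch.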

  11. Quantum random number generation for loophole-free Bell tests

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan; Abellan, Carlos; Amaya, Waldimar

    2015-05-01

    We describe the generation of quantum random numbers at multi-Gbps rates, combined with real-time randomness extraction, to give very high purity random numbers based on quantum events at most tens of ns in the past. The system satisfies the stringent requirements of quantum non-locality tests that aim to close the timing loophole. We describe the generation mechanism using spontaneous-emission-driven phase diffusion in a semiconductor laser, digitization, and extraction by parity calculation using multi-GHz logic chips. We pay special attention to experimental proof of the quality of the random numbers and analysis of the randomness extraction. In contrast to widely-used models of randomness generators in the computer science literature, we argue that randomness generation by spontaneous emission can be extracted from a single source.
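The parity-calculation step can be sketched as follows: split the raw digitized bits into blocks and output each block's parity (XOR). By the piling-up argument, the parity of many partly random bits is far less biased than the raw bits. The stuck least-significant bit below is an artificial bias for demonstration, not a property of the authors' digitizer, and real extractors size blocks from a min-entropy estimate.

```python
import random

def parity_extract(samples, bits_per_sample=8, block=64):
    """Toy parity extractor: unpack digitized samples into bits and
    emit one output bit per block, namely the block's parity (XOR)."""
    raw = []
    for s in samples:
        raw.extend((s >> k) & 1 for k in range(bits_per_sample))
    return [sum(raw[i:i + block]) % 2
            for i in range(0, len(raw) - block + 1, block)]

rng = random.Random(7)
# Biased 8-bit samples standing in for digitized phase-diffusion data:
# the least-significant bit is stuck at 1, a gross deterministic defect.
samples = [rng.getrandbits(8) | 0x01 for _ in range(1024)]
out = parity_extract(samples)
ones = sum(out) / len(out)
```

Each 64-bit block still contains 56 independent fair bits, and XOR with any fair bit yields a fair bit, so the extracted stream is balanced despite the stuck input bit.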

  12. Stochastic species abundance models involving special copulas

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2018-01-01

    Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In a first one, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In a second one, a quasi-copula problem arises in a flagged species abundance model. In a third model, we study completely random species abundance models in the hypercube as those, not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.

  13. Local Neighbourhoods for First-Passage Percolation on the Configuration Model

    NASA Astrophysics Data System (ADS)

    Dereich, Steffen; Ortgiese, Marcel

    2018-04-01

    We consider first-passage percolation on the configuration model. Once the network has been generated each edge is assigned an i.i.d. weight modeling the passage time of a message along this edge. Then independently two vertices are chosen uniformly at random, a sender and a recipient, and all edges along the geodesic connecting the two vertices are coloured in red (in the case that both vertices are in the same component). In this article we prove local limit theorems for the coloured graph around the recipient in the spirit of Benjamini and Schramm. We consider the explosive regime, in which case the random distances are of finite order, and the Malthusian regime, in which case the random distances are of logarithmic order.

  14. Calibration of the 13- by 13-inch adaptive wall test section for the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Hill, Acquilla S.

    1990-01-01

    A 13- by 13-inch adaptive wall test section was installed in the 0.3-Meter Transonic Cryogenic Tunnel circuit. This new test section is configured for 2-D airfoil testing. It has four solid walls: the top and bottom walls are flexible and movable, whereas the sidewalls are rigid and fixed. The wall adaptation strategy employed requires the test section wall shapes associated with uniform test section Mach number distributions. Calibration tests with the test section empty were conducted with the top and bottom walls linearly diverged to approach a uniform Mach number distribution. Pressure distributions were measured in the contraction cone, the test section, and the high-speed diffuser at Mach numbers from 0.20 to 0.95 and Reynolds numbers from 10 × 10^6 to 100 × 10^6 per foot.

  15. 48 CFR 204.7107 - Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI). 204.7107 Section 204.7107 Federal... ADMINISTRATIVE MATTERS Uniform Contract Line Item Numbering System 204.7107 Contract accounting classification...

  16. 48 CFR 204.7107 - Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Contract accounting classification reference number (ACRN) and agency accounting identifier (AAI). 204.7107 Section 204.7107 Federal... ADMINISTRATIVE MATTERS Uniform Contract Line Item Numbering System 204.7107 Contract accounting classification...

  17. Multiphase contrast medium injection for optimization of computed tomographic coronary angiography.

    PubMed

    Budoff, Matthew Jay; Shinbane, Jerold S; Child, Janis; Carson, Sivi; Chau, Alex; Liu, Stephen H; Mao, SongShou

    2006-02-01

    Electron beam angiography is a minimally invasive imaging technique. Adequate vascular opacification throughout the study remains a critical issue for image quality. We hypothesized that vascular image opacification and uniformity of vascular enhancement between slices can be improved using multiphase contrast medium injection protocols. We enrolled 244 consecutive patients who were randomized to three different injection protocols: single-phase contrast medium injection (Group 1), dual-phase contrast medium injection with each phase at a different injection rate (Group 2), and a three-phase injection with two phases of contrast medium injection followed by a saline injection phase (Group 3). Parameters measured were aortic opacification based on Hounsfield units and uniformity of aortic enhancement at predetermined slices (locations from top [level 1] to base [level 60]). In Group 1, contrast opacification differed across seven predetermined locations (scan levels: 1st versus 60th, P < .05), demonstrating significant nonuniformity. In Group 2, there was more uniform vascular enhancement, with no significant differences between the first 50 slices (P > .05). In Group 3, there was greater uniformity of vascular enhancement and higher mean Hounsfield units value across all 60 images, from the aortic root to the base of the heart (P < .05). The three-phase injection protocol improved vascular opacification at the base of the heart, as well as uniformity of arterial enhancement throughout the study.

  18. Formation and evolution of magnetised filaments in wind-swept turbulent clumps

    NASA Astrophysics Data System (ADS)

    Banda-Barragan, Wladimir Eduardo; Federrath, Christoph; Crocker, Roland M.; Bicknell, Geoffrey Vincent; Parkin, Elliot Ross

    2015-08-01

    Using high-resolution three-dimensional simulations, we examine the formation and evolution of filamentary structures arising from magnetohydrodynamic interactions between supersonic winds and turbulent clumps in the interstellar medium. Previous numerical studies assumed homogeneous density profiles, null velocity fields, and uniformly distributed magnetic fields as the initial conditions for interstellar clumps. Here, we have, for the first time, incorporated fractal clumps with log-normal density distributions, random velocity fields and turbulent magnetic fields (superimposed on a uniform background field). Disruptive processes, instigated by dynamical instabilities and akin to those observed in simulations with uniform media, lead to stripping of clump material and the subsequent formation of filamentary tails. The evolution of filaments in uniform and turbulent models is, however, radically different, as evidenced by comparisons of global quantities in both scenarios. We show, for example, that turbulent clumps produce tails with higher velocity dispersions, increased gas mixing, greater kinetic energy, and lower plasma beta than their uniform counterparts. We attribute the observed differences to: 1) the turbulence-driven enhanced growth of dynamical instabilities (e.g. Kelvin-Helmholtz and Rayleigh-Taylor instabilities) at fluid interfaces, and 2) the localised amplification of magnetic fields caused by the stretching of field lines trapped in the numerous surface deformations of fractal clumps. We briefly discuss the implications of this work for the physics of the optical filaments observed in the starburst galaxy M82.

  19. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principles of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698

  20. A cross-sectional investigation of the quality of selected medicines in Cambodia in 2010

    PubMed Central

    2014-01-01

    Background Access to good-quality medicines in many countries is largely hindered by the rampant circulation of spurious/falsely labeled/falsified/counterfeit (SFFC) and substandard medicines. In 2006, the Ministry of Health of Cambodia, in collaboration with Kanazawa University, Japan, initiated a project to combat SFFC medicines. Methods To assess the quality of medicines and the prevalence of SFFC medicines among selected products, a cross-sectional survey was carried out in Cambodia. Cefixime, omeprazole, co-trimoxazole, clarithromycin, and sildenafil were selected as candidate medicines. These medicines were purchased from private community drug outlets in the capital, Phnom Penh, and Svay Rieng and Kandal provinces through a stratified random sampling scheme in July 2010. Results In total, 325 medicine samples were collected from 111 drug outlets. Non-licensed outlets were more commonly encountered in rural than in urban areas (p < 0.01). Of all the samples, 93.5% were registered and 80% were foreign products. Samples without registration numbers were found more frequently among foreign-manufactured products than among domestic ones (p < 0.01). According to pharmacopeial analytical results, 14.5%, 4.6%, and 24.6% of the samples were unacceptable in the quantity, content uniformity, and dissolution tests, respectively. All the ultimately unacceptable samples in the content uniformity tests were of foreign origin. Following authenticity investigations conducted with the respective manufacturers and medicine regulatory authorities, an unregistered product of cefixime collected from a pharmacy was confirmed as an SFFC medicine. However, the sample was acceptable in the quantity, content uniformity, and dissolution tests. Conclusions The results of this survey indicate that medicine counterfeiting is not limited to essential medicines in Cambodia: newer-generation medicines are also targeted. Concerted efforts by both domestic and foreign manufacturers, wholesalers, retailers, and regulatory authorities should help improve the quality of medicines. PMID:24593851

  1. Helicon modes in uniform plasmas. III. Angular momentum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenzel, R. L.; Urrutia, J. M.

    Helicons are electromagnetic waves with helical phase fronts propagating in the whistler mode in magnetized plasmas and solids. They have properties similar to electromagnetic waves with angular momentum in free space. Helicons are circularly polarized waves carrying spin angular momentum and orbital angular momentum due to their propagation around the ambient magnetic field B_0. These properties have not been considered by the community of researchers working on helicon plasma sources, but they are the topic of the present work, which focuses on the field topology of helicons in unbounded plasmas rather than on helicon source physics. Helicons are excited in a large uniform laboratory plasma with a magnetic loop antenna whose dipole axis is aligned along or across B_0. The wave fields are measured in orthogonal planes and extended to three dimensions (3D) by interpolation. Since the density and B_0 are uniform, small-amplitude waves from loops at different locations can be superimposed to generate complex antenna patterns. With a circular array of phase-shifted loops, whistler modes with angular and axial wave propagation, i.e., helicons, are generated. Without boundaries, radial propagation also arises. The azimuthal mode number m can be positive or negative while the field polarization remains right-hand circular. The conservation of energy and momentum implies that these field quantities are transferred to matter, which causes damping or reflection. Wave-particle interactions with fast electrons are possible via Doppler-shifted resonances; the transverse Doppler shift is demonstrated. Wave-wave interactions are also demonstrated through collisions between different helicons. Whistler turbulence does not always have to be created by nonlinear wave interactions but can also be a linear superposition of waves from random sources. In helicon collisions, the linear and/or orbital angular momenta can cancel, which results in a great variety of field topologies. The work is contrasted with research on helicon plasma sources.

  2. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.
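The idea of producing customized distributions directly can be emulated classically: if the single-photon detection probability in time bin k is made proportional to a shaped temporal mode p[k], the recorded bin indices follow the programmed distribution with no postprocessing. A minimal sketch, with a pseudo-random uniform standing in for the physical detection process and an invented four-bin target distribution:

```python
# Illustrative emulation of shaped-temporal-mode sampling: detection
# probability per time bin is proportional to the target pmf, so the bin
# index of each "detection" is itself the customized random number.
import random

def shaped_samples(pmf, n, seed=3):
    """Draw n bin indices distributed according to pmf (sums to 1)."""
    rng = random.Random(seed)
    cdf, acc = [], 0.0
    for p in pmf:
        acc += p
        cdf.append(acc)
    samples = []
    for _ in range(n):
        u = rng.random()  # stand-in for the quantum arrival-time signal
        samples.append(next(k for k, c in enumerate(cdf) if u <= c))
    return samples

target = [0.5, 0.25, 0.125, 0.125]  # programmed distribution (assumed)
samples = shaped_samples(target, 40000)
freq = [samples.count(k) / len(samples) for k in range(4)]
print(freq)  # empirical frequencies close to the target pmf
```

In the actual device, the shaping is done physically by an electro-optical modulator acting on the temporal mode, so no classical transformation of the output is needed.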

  3. Experimental investigation of an axisymmetric free jet with an initially uniform velocity profile

    NASA Technical Reports Server (NTRS)

    Labus, T. L.; Symons, E. P.

    1972-01-01

    An experimental investigation was conducted to determine the flow characteristics of a circular free helium jet having an initially uniform velocity profile. Complete velocity profiles are presented at Reynolds numbers of 1027 and 4571 at 0, 3, 6, 10, 15, and 20 nozzle diameters (where possible) from the nozzle exit. Centerline velocity decay and potential core length were obtained over a range of Reynolds numbers from 155 to 5349 at distances up to and including 25 nozzle diameters from the nozzle exit. The angles of spread associated with the diffusion of the jet downstream of the nozzle are also given. Axial jet momentum flux and entrained mass flux, at various distances downstream of the nozzle, are presented as a function of the jet Reynolds number.

  4. Natural convection in square cavity filled with ferrofluid saturated porous medium in the presence of uniform magnetic field

    NASA Astrophysics Data System (ADS)

    Javed, Tariq; Mehmood, Z.; Abbas, Z.

    2017-02-01

    This article contains numerical results for free convection in a square enclosure containing a ferrofluid-saturated porous medium when a uniform magnetic field is applied to the flow along the x-axis. Heat is provided through the bottom wall and through a square blockage, placed near the left or right bottom corner of the enclosure, acting as a heat source. The left and right vertical boundaries of the cavity are considered insulated, while the upper wall is taken to be cold. The problem is modelled as a system of nonlinear partial differential equations. A finite element method has been adopted to compute numerical simulations of the mathematical problem over a wide range of pertinent flow parameters, including the Rayleigh number, Hartmann number, Darcy number and Prandtl number. Analysis of the results reveals that the strength of the streamline circulation is an increasing function of the Darcy and Prandtl numbers, with convection heat transfer dominant for large values of these parameters, whereas an increase in the Hartmann number has the opposite effect on isotherms and streamline circulations. The thermal conductivity, and hence the local heat transfer rate of the fluid, is increased when ferroparticles are introduced into the fluid. The average Nusselt number increases with increasing Darcy and Rayleigh numbers, while it decreases as the Hartmann number is increased.

  5. A hybrid-type quantum random number generator

    NASA Astrophysics Data System (ADS)

    Hai-Qiang, Ma; Wu, Zhu; Ke-Jin, Wei; Rui-Xue, Li; Hong-Wei, Liu

    2016-05-01

    This paper proposes a well-performing hybrid-type truly quantum random number generator based on the time interval between two independent single-photon detection signals; it is practical and intuitive, and derives its initial random number source from a combination of multiple existing random number sources. A time-to-amplitude converter and a multichannel analyzer are used for qualitative analysis to demonstrate that each and every step is random. Furthermore, a carefully designed data acquisition system is used to obtain a high-quality random sequence. Our scheme is simple and shows that the random number bit rate can be dramatically increased to satisfy practical requirements. Project supported by the National Natural Science Foundation of China (Grant Nos. 61178010 and 11374042), the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China, and the Fundamental Research Funds for the Central Universities of China (Grant No. bupt2014TS01).

  6. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    A true random number generator (TRNG) is a critical component in hardware security, which is increasingly important in the era of mobile computing and the internet of things. Here we demonstrate a TRNG that uses the intrinsic variation of memristors as a natural source of entropy, which is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.
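A toy model of the comparison-and-alternation idea (not the authors' circuit): two simulated off-state "resistances" with Gaussian cycle-to-cycle variation are compared each cycle, and the comparison outcome is inverted on alternate reads so that a systematic offset between the two devices cancels out. All device parameters below are invented for illustration:

```python
# Sketch of comparison-based TRNG bit generation with an alternating read
# scheme. Device A carries a deliberate systematic offset; flipping the
# comparison on odd cycles cancels the resulting bias in the bitstream.
import random

def trng_bits(n, base=10000.0, offset=50.0, sigma=200.0, seed=1):
    rng = random.Random(seed)
    bits = []
    for i in range(n):
        r_a = base + offset + rng.gauss(0.0, sigma)  # device A (biased high)
        r_b = base + rng.gauss(0.0, sigma)           # device B
        bit = 1 if r_a > r_b else 0
        if i % 2 == 1:  # alternating read: swap device roles on odd cycles
            bit ^= 1
        bits.append(bit)
    return bits

bits = trng_bits(100000)
print(abs(sum(bits) / len(bits) - 0.5))  # residual bias, near zero
```

Without the alternation step, the 50-ohm offset would push the ones-fraction visibly above 0.5; with it, only the stochastic variation contributes to each bit.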

  7. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

    A true random number generator (TRNG) is a critical component in hardware security, which is increasingly important in the era of mobile computing and the internet of things. Here we demonstrate a TRNG that uses the intrinsic variation of memristors as a natural source of entropy, which is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.

  8. Doing better by getting worse: posthypnotic amnesia improves random number generation.

    PubMed

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible, low dissociative highly suggestible, and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with repetition frequency equivalent to that of a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation.
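The repetition measure at the heart of this study can be sketched as follows; for digits 0-9 drawn uniformly at random, the expected rate of immediate repetitions is 1/10, the benchmark that human sequences, with their repetition avoidance, typically fall short of:

```python
# Sketch of the immediate-repetition rate used to quantify repetition
# avoidance. A uniform random digit source repeats its previous response
# with probability 1/10; humans generating "random" digits repeat less.
import random

def repetition_rate(seq):
    """Fraction of responses that repeat the immediately preceding one."""
    repeats = sum(a == b for a, b in zip(seq, seq[1:]))
    return repeats / (len(seq) - 1)

random.seed(42)
machine = [random.randrange(10) for _ in range(100000)]
print(repetition_rate(machine))  # close to 0.1 for a random system
```

A sequence with a rate well below 0.1 exhibits the repetition avoidance described above; the posthypnotic amnesia manipulation moved the high dissociative highly suggestible group's rate toward the random benchmark.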

  9. Doing Better by Getting Worse: Posthypnotic Amnesia Improves Random Number Generation

    PubMed Central

    Terhune, Devin Blair; Brugger, Peter

    2011-01-01

    Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible, low dissociative highly suggestible, and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with repetition frequency equivalent to that of a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation. PMID:22195022

  10. Axion-photon propagation in magnetized universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chen; Lai, Dong, E-mail: wangchen@nao.cas.cn, E-mail: dong@astro.cornell.edu

    Oscillations between photons and axion-like particles (ALPs) travelling in intergalactic magnetic fields have been invoked to explain a number of astrophysical phenomena, or used to constrain ALP properties with observations. One example is the anomalous transparency of the universe to TeV gamma rays. The intergalactic magnetic field is usually modeled as patches of coherent domains, each with a uniform magnetic field, but with the field orientation changing randomly from one domain to the next ("discrete-φ model"). We show in this paper that in more realistic situations, when the magnetic field direction varies continuously along the propagation path, the photon-to-ALP conversion probability P can be significantly different from that of the discrete-φ model. In particular, P has a distinct dependence on the photon energy and ALP mass, and can be as large as 100%. This result can affect previous constraints on ALP properties based on ALP-photon propagation in intergalactic magnetic fields, such as for TeV photons from distant Active Galactic Nuclei.

  11. Research of the Electron Cyclotron Emission with Vortex Property excited by high power high frequency Gyrotron

    NASA Astrophysics Data System (ADS)

    Goto, Yuki; Kubo, Shin; Tsujimura, Tohru; Takubo, Hidenori

    2017-10-01

    Recently, it has been shown that the radiation from a single electron in cyclotron motion has a vortex property. Although cyclotron emission exists universally in nature, the vortex property has not been featured because it is normally cancelled out by the randomness in the gyro-phase of the electrons, so the development of methods to detect it has not been well motivated. In this research, we are developing a method to generate vortex radiation from electrons in cyclotron motion with controlled gyro-phase. An electron that rotates around a uniform static magnetic field is resonantly accelerated by right-hand circularly polarized (RHCP) radiation when the cyclotron frequency coincides with the applied RHCP radiation frequency. A large number of electrons can be coherently accelerated in gyro-phase by high-power RHCP radiation, so that these electrons radiate coherent emission with a vortex feature. We will show vortex radiation created by purely rotating electrons for the first time.

  12. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method to secure information that might be in the form of a digital image. Based on past research, an encryption algorithm using the logistic function and DNA encoding was proposed to increase the security level of chaos-based and DNA-based encryption algorithms. The digital image encryption algorithm using the logistic function and DNA encoding converts the pixel values into DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function in this algorithm is used as the random number generator needed in the DNA complement and XOR operations. The test results show that the PSNR values of the cipher images are 7.98-7.99 dB, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly, and the encryption algorithm has good resistance to entropy attack and statistical attack.
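A minimal sketch of driving such a scheme with the logistic map x → r·x·(1−x) in its fully chaotic regime (r = 4); the byte-extraction step, seed value, and burn-in length are illustrative choices, not the paper's exact construction:

```python
# Sketch of a logistic-map pseudo-random byte generator of the kind used
# to drive the DNA-complement and XOR steps. The seed x0 acts as the key;
# a burn-in discards the transient before bytes are extracted.

def logistic_bytes(x0, n, r=4.0, burn_in=100):
    x = x0
    for _ in range(burn_in):        # discard the initial transient
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)       # chaotic iteration stays in [0, 1]
        out.append(int(x * 256) % 256)  # illustrative byte extraction
    return out

keystream = logistic_bytes(0.367, 16)
print(len(keystream), all(0 <= b < 256 for b in keystream))  # 16 True
```

The map's sensitivity to the seed means that a tiny change in x0 yields an entirely different keystream, which is the property such image ciphers rely on; note that the plain logistic map is known to be cryptographically weak on its own, hence its combination with DNA operations here.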

  13. Helical Turing patterns in the Lengyel-Epstein model in thin cylindrical layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bánsági, T.; Taylor, A. F., E-mail: A.F.Taylor@sheffield.ac.uk

    2015-06-15

    The formation of Turing patterns was investigated in thin cylindrical layers using the Lengyel-Epstein model of the chlorine dioxide-iodine-malonic acid reaction. The influence of the width of the layer W and the diameter D of the inner cylinder on the pattern with intrinsic wavelength l was determined in simulations with initial random noise perturbations to the uniform state, for W < l/2 and D ∼ l or lower. We show that the geometric constraints of the reaction domain may result in the formation of helical Turing patterns with parameters that give stripes (b = 0.2) or spots (b = 0.37) in two dimensions. For b = 0.2, the helices were composed of lamellae, and defects became more likely as the diameter of the cylinder increased. With b = 0.37, the helices consisted of semi-cylinders, and the orientation of stripes on the outer surface (and hence the winding number) increased with increasing diameter until a new stripe appeared.

  14. Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, V. A.

    2008-01-01

    There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origins of downward-propagating leaders and a lognormal distribution to generate return stroke peak currents. Leaders propagate vertically downward, and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that a leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distribution, and the output of the program is the probability of direct attachment to the objects of interest, with the corresponding peak current distribution.
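The two-generator scheme described above can be sketched as follows, assuming a flat ground plane, vertically descending leaders, and the common electrogeometric form r = 10·I^0.65 for the striking distance; the area size, object coordinates, and lognormal parameters are illustrative assumptions, not values from the study:

```python
# Monte Carlo sketch of lightning attachment: uniform leader origins,
# lognormal peak currents, and electrogeometric attachment to the closest
# qualifying object (ground otherwise). All parameters are illustrative.
import math
import random

rng = random.Random(7)
MU, SIGMA = math.log(31.1), 0.48       # assumed lognormal peak current (kA)

def striking_distance(i_ka):
    return 10.0 * i_ka ** 0.65         # electrogeometric form, metres

def attaches_to(dh, oh, r):
    """Vertical leader at horizontal distance dh from an object of height oh:
    it attaches if it comes within r of the object top before descending to
    within r of the flat ground plane."""
    return dh <= r and oh + math.sqrt(r * r - dh * dh) >= r

def simulate(flashes, objects, area=500.0):
    hits = {name: 0 for name, *_ in objects}
    hits["ground"] = 0
    for _ in range(flashes):
        x, y = rng.uniform(0, area), rng.uniform(0, area)  # uniform origin
        r = striking_distance(rng.lognormvariate(MU, SIGMA))
        candidates = [(math.hypot(x - ox, y - oy), name)
                      for name, ox, oy, oh in objects
                      if attaches_to(math.hypot(x - ox, y - oy), oh, r)]
        hits[min(candidates)[1] if candidates else "ground"] += 1
    return hits

towers = [("tower_A", 250.0, 250.0, 60.0), ("mast_B", 100.0, 400.0, 20.0)]
counts = simulate(10000, towers)
print(counts)  # attachment counts per object and to ground
```

Dividing each object's count by the simulated number of years and multiplying by the assumed ground flash density converts these counts into the direct-attachment probabilities the tool reports.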

  15. Exploring the effect of the spatial scale of fishery management.

    PubMed

    Takashina, Nao; Baskett, Marissa L

    2016-02-07

    For any spatially explicit management, determining the appropriate spatial scale of management decisions is critical to success at achieving a given management goal. Specifically, managers must decide how much to subdivide a given managed region: from implementing a uniform approach across the region to considering a unique approach in each of one hundred patches and everything in between. Spatially explicit approaches, such as the implementation of marine spatial planning and marine reserves, are increasingly used in fishery management. Using a spatially explicit bioeconomic model, we quantify how the management scale affects optimal fishery profit, biomass, fishery effort, and the fraction of habitat in marine reserves. We find that, if habitats are randomly distributed, the fishery profit increases almost linearly with the number of segments. However, if habitats are positively autocorrelated, then the fishery profit increases with diminishing returns. Therefore, the true optimum in management scale given cost to subdivision depends on the habitat distribution pattern. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. The influence of random indium alloy fluctuations in indium gallium nitride quantum wells on the device behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tsung-Jui; Wu, Yuh-Renn, E-mail: yrwu@ntu.edu.tw; Shivaraman, Ravi

    2014-09-21

    In this paper, we describe the influence of the intrinsic indium fluctuation in the InGaN quantum wells on the carrier transport, efficiency droop, and emission spectrum in GaN-based light emitting diodes (LEDs). Both real and randomly generated indium fluctuations were used in 3D simulations and compared to quantum wells with a uniform indium distribution. We found that without further hypothesis the simulations of electrical and optical properties in LEDs such as carrier transport, radiative and Auger recombination, and efficiency droop are greatly improved by considering natural nanoscale indium fluctuations.

  17. Quantum random number generator

    DOEpatents

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

  18. Spectrum Gaps of Spin Waves Generated by Interference in a Uniform Nanostripe Waveguide

    PubMed Central

    Wang, Qi; Zhang, Huaiwu; Ma, Guokun; Liao, Yulong; Tang, Xiaoli; Zhong, Zhiyong

    2014-01-01

    We studied spin waves excited by two or more excitation sources in a uniform nanostripe waveguide without periodic structures. Several distinct spectrum gaps formed by spin-wave interference rather than by Bragg reflection were observed. We found that the center frequency and the number of spectrum gaps of spin waves can be controlled by modulating the distance, number, and width of the excitation sources. The results obtained by micromagnetic simulations agree well with those of analytical calculations. Our work therefore paves a new way to control the spectrum gaps of spin waves, which is promising for future spin wave-based devices. PMID:25082001

  19. Trajectories of Dop Points on a Machining Wheel During Grinding of High Quality Plane Surfaces

    NASA Astrophysics Data System (ADS)

    Petrikova, I.; Vrzala, R.; Kafka, J.

    The basic requirement for plane grinding of synthetic monocrystals is uniform wear of the grinding tool. This article deals with the case where the grinding process is carried out by relative motion between the front faces of rotating wheels with parallel axes. The dop is attached to the end of a pendulous arm whose movement is controlled by a cam. Kinematic relations have been derived for the relative motion of the dop points with respect to the abrasive wheel. The aim of the work is to establish a methodology for determining the uniformity (or nonuniformity) of the motion of dop points on the abrasive wheel. The computational program was written in MATLAB. The sums of the numbers of passes were computed for transmission ratios in the range 0.4-1. The number of passes of selected points on the dop over the areas of a square mesh was computed. The density of trajectory passes depends on four factors: the speed of both wheels, the number of arm operating cycles, the angle of the arm swings, and the cam shape. All these dependencies were investigated. The uniformity of the density of passes is one of the criteria for setting up the grinding machine.

  20. The Micromechanics of the Moving Contact Line

    NASA Technical Reports Server (NTRS)

    Lichter, Seth

    1999-01-01

    A transient moving contact line is investigated experimentally. The dynamic interface shape between 20 and 800 microns from the contact line is compared with theory. A novel experiment is devised, in which the contact line is set into motion by electrically altering the solid-liquid surface tension gamma(sub SL). The contact line motion simulates that of spontaneous wetting along a vertical plate with a maximum capillary number Ca approx. = 4 x 10(exp -2). The images of the dynamic meniscus are analyzed as a function of Ca. For comparison, the steady-state hydrodynamic equation based on the creeping flow model in a wedge geometry and the three-region uniform perturbation expansion of Cox (1986) is adopted. The interface shape is well depicted by the uniform solutions for Ca <= 10(exp -3). However, for Ca > 10(exp -3), the uniform solution over-predicts the viscous bending. This over-prediction can be accounted for by modifying the slip coefficient within the intermediate solution. With this correction, the measured interface shape is seen to match the theoretical prediction for all capillary numbers. The amount of slip needed to fit the measurements does not scale with the capillary number.

  1. Design of a Mach-15 Total-Enthalpy Nozzle With Non-uniform Inflow Using Rotational MOC

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.

    2004-01-01

    A new computer program to design nozzles with non-uniform inflow has been developed using the rotational method of characteristics (MOC). This program has been used to design a nozzle for NASA's HYPULSE shock-expansion tunnel for use in scramjet engine tests at a Mach-15 flight-enthalpy condition. The nozzle has an area ratio of 9.5:1 that expands the inflow from Mach 6 along the centerline to Mach 8.7. Although the density and Mach number vary radially at the exit due to the non-uniformities of the inflow, the MOC procedure produces exit flow that is parallel and has uniform static pressure. The design has been verified with CFD, which compares favorably with the MOC solution.

  2. Fast physical-random number generation using laser diode's frequency noise: influence of frequency discriminator

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo

    2018-02-01

    Not so long ago, pseudo-random numbers generated by numerical formulae were considered adequate for encrypting important data files, because of the time needed to decode them. With today's ultra-high-speed processors, however, this is no longer true. So, in order to thwart ever-more advanced attempts to breach a system's protections, cryptologists have devised a method that is considered virtually impossible to decode and that draws on an effectively limitless supply of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photodetectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as the frequency discriminator, the pass rates were determined using the NIST FIPS 140-2 test at each bit, and the random number generation (RNG) speed was noted.
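The FIPS 140-2 statistical suite mentioned above includes a simple monobit test: in a 20,000-bit sample, the number of ones must fall strictly between 9,725 and 10,275. A minimal sketch of that single test (applied here to a software PRNG for illustration; the paper tests hardware-derived bits):

```python
import random

def monobit_test(bits):
    """FIPS 140-2 monobit test: a 20,000-bit sample passes if the
    number of ones satisfies 9725 < count < 10275."""
    assert len(bits) == 20000
    ones = sum(bits)
    return 9725 < ones < 10275

# Illustrative run on a seeded software PRNG stream.
rng = random.Random(1)
sample = [rng.getrandbits(1) for _ in range(20000)]
print(monobit_test(sample))
```

The full FIPS 140-2 suite also includes poker, runs, and long-run tests; the monobit test alone only catches gross bias.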

  3. Ultra-low power, highly uniform polymer memory by inserted multilayer graphene electrode

    NASA Astrophysics Data System (ADS)

    Jang, Byung Chul; Seong, Hyejeong; Kim, Jong Yun; Koo, Beom Jun; Kim, Sung Kyu; Yang, Sang Yoon; Gap Im, Sung; Choi, Sung-Yool

    2015-12-01

    Filament-type resistive random access memory (RRAM) based on polymer thin films is a promising device for next-generation, flexible nonvolatile memory. However, the resistive switching nonuniformity and the high power consumption found in general filament-type RRAM devices present critical issues for practical memory applications. Here, we introduce a novel approach that not only reduces the power consumption but also improves the resistive switching uniformity in RRAM devices based on poly(1,3,5-trimethyl-3,4,5-trivinyl cyclotrisiloxane), by inserting multilayer graphene (MLG) at the electrode/polymer interface. The resistive switching uniformity was thereby significantly improved, and the power consumption was markedly reduced, by a factor of 250. Furthermore, the inserted MLG film enabled a transition of the resistive switching operation from unipolar to bipolar resistive switching and induced self-compliance behavior. The findings of this study can pave the way toward a new area of application for graphene in electronic devices.

  4. Hypothesis: Impregnated school uniforms reduce the incidence of dengue infections in school children.

    PubMed

    Wilder-Smith, A; Lover, A; Kittayapong, P; Burnham, G

    2011-06-01

    Dengue infection causes a significant economic, social and medical burden in affected populations in over 100 countries in the tropics and sub-tropics. Current dengue control efforts have generally focused on vector control but have not shown major impact. School-aged children are especially vulnerable to infection, due to sustained human-vector-human transmission in the close proximity environments of schools. Infection in children has a higher rate of complications, including dengue hemorrhagic fever and shock syndromes, than infections in adults. There is an urgent need for integrated and complementary population-based strategies to protect vulnerable children. We hypothesize that insecticide-treated school uniforms will reduce the incidence of dengue in school-aged children. The hypothesis would need to be tested in a community based randomized trial. If proven to be true, insecticide-treated school uniforms would be a cost-effective and scalable community based strategy to reduce the burden of dengue in children. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Approaches to Reducing Federal Spending on Military Health Care

    DTIC Science & Technology

    2014-01-01

    medical school, the Uniformed Services University of the Health Sciences (USUHS), while expanding the number of scholarships provided to students... actuaries esti- 8. Department of Defense, Evaluation of the TRICARE Program— Access, Cost and Quality: Fiscal Year 2013 Report to Congress (February...DoD’s Uniformed Services University of the Health Sciences —would be closed. 4. See Congressional Budget Office, Lessons from Medicare’s Demonstration

  6. Application of Raman spectroscopy for on-line monitoring of low dose blend uniformity.

    PubMed

    Hausman, Debra S; Cambron, R Thomas; Sakr, Adel

    2005-07-14

    On-line Raman spectroscopy was used to evaluate the effect of blending time on low dose, 1%, blend uniformity of azimilide dihydrochloride. An 8 qt blender was used for the experiments and instrumented with a Raman probe through the I-bar port. The blender was slowed to 6.75 rpm to better illustrate the blending process (normal speed is 25 rpm). Uniformity was reached after 20 min of blending at 6.75 rpm (135 revolutions or 5.4 min at 25 rpm). On-line Raman analysis of blend uniformity provided more benefits than traditional thief sampling and off-line analysis. On-line Raman spectroscopy enabled generating data rich blend profiles, due to the ability to collect a large number of samples during the blending process (sampling every 20s). In addition, the Raman blend profile was rapidly generated, compared to the lengthy time to complete a blend profile with thief sampling and off-line analysis. The on-line Raman blend uniformity results were also significantly correlated (p-value < 0.05) to the HPLC uniformity results of thief samples.

  7. Severity of Organized Item Theft in Computerized Adaptive Testing: An Empirical Study. Research Report. ETS RR-06-22

    ERIC Educational Resources Information Center

    Yi, Qing; Zhang, Jinming; Chang, Hua-Hua

    2006-01-01

    Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated potential damage caused…

  8. Magnetic noise as the cause of the spontaneous magnetization reversal of RE–TM–B permanent magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitriev, A. I., E-mail: aid@icp.ac.ru; Talantsev, A. D., E-mail: artgtx32@mail.ru; Kunitsyna, E. I.

    2016-08-15

    The relation between the macroscopic spontaneous magnetization reversal (magnetic viscosity) of (NdDySm)(FeCo)B alloys and the spectral characteristics of magnetic noise, which is caused by the random microscopic processes of thermally activated domain wall motion in a potential landscape with uniformly distributed potential barrier heights, is found.

  9. Experimental Evaluation of Field Trips on Instruction in Vocational Agriculture.

    ERIC Educational Resources Information Center

    McCaslin, Norval L.

    To determine the effect of field trips on student achievement in each of four subject matter areas in vocational agriculture, 12 schools offering approved programs were randomly selected and divided into a treatment group and a control group. Uniform teaching outlines and reference materials were provided to each group. While no field trips were…

  10. Electrolytic plating apparatus for discrete microsized particles

    DOEpatents

    Mayer, Anton

    1976-11-30

    Method and apparatus are disclosed for electrolytically producing very uniform coatings of a desired material on discrete microsized particles. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with the powered cathode long enough for agglomeration to occur.

  11. Electroless plating apparatus for discrete microsized particles

    DOEpatents

    Mayer, Anton

    1978-01-01

    Method and apparatus are disclosed for producing very uniform coatings of a desired material on discrete microsized particles by electroless techniques. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with each other long enough for agglomeration or bridging to occur.

  12. Modeling emerald ash borer dispersal using percolation theory: estimating the rate of range expansion in a fragmented landscape

    Treesearch

    Robin A. J. Taylor; Daniel A. Herms; Louis R. Iverson

    2008-01-01

    The dispersal of organisms is rarely random, although diffusion processes can be useful models for movement in approximately homogeneous environments. However, the environments through which all organisms disperse are far from uniform at all scales. The emerald ash borer (EAB), Agrilus planipennis, is obligate on ash (Fraxinus spp...

  13. Fermilab | Science | Historic Results

    Science.gov Websites

    Snippets from Fermilab's Historic Results page: the top quark had been sought since the discovery of the bottom quark at Fermilab in fixed-target experiments in 1977; on cosmic rays, researchers previously had assumed that cosmic rays approach the Earth uniformly from random directions, whereas the highest-energy cosmic rays that impact the Earth generally come from the direction of active galactic nuclei. Many large galaxies…

  14. Random walks of colloidal probes in viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2014-04-01

    To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that the random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
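The log-uniform time sampling described above can be sketched as follows. This is a minimal illustration for a freely diffusing probe with an assumed diffusion coefficient, not the authors' MVM code: observation times are drawn uniformly in log-time, and Brownian increments are given variance proportional to each elapsed interval.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1.0                                      # diffusion coefficient (assumed)
# Sample 4000 observation times uniformly in log10(t) over 12 decades.
t = np.sort(10 ** rng.uniform(-6, 6, 4000))
dt = np.diff(t, prepend=0.0)                 # elapsed time between observations
# Brownian increments: variance 2*D*dt per interval (1D free diffusion).
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=dt.shape)
x = np.cumsum(steps)                         # probe trajectory x(t)
# For free diffusion <x^2(t)> ~ 2*D*t; a viscoelastic MVM would replace
# the independent-increment rule with a correlated one.
```

The point of the construction is that 4000 samples cover 12 decades in time, which a fixed time step cannot do at comparable cost.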

  15. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
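As a classical point of comparison for randomness extraction, the von Neumann extractor turns i.i.d. biased coin flips into exactly uniform bits by mapping 01 to 0, 10 to 1, and discarding 00 and 11. It is shown here only as a baseline; the paper's extractor is designed for far more general "big sources" than i.i.d. streams.

```python
def von_neumann_extract(bits):
    """Von Neumann extractor: for each non-overlapping pair of input
    bits, emit the first bit if the pair differs (01 -> 0, 10 -> 1)
    and drop equal pairs (00, 11). Output is uniform for i.i.d. input."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([0, 1, 1, 1, 1, 0, 0, 0, 0, 1]))  # -> [0, 1, 0]
```

Note the cost: heavily biased input yields very few output bits, which is one reason modern extractors use seeded, hash-based constructions instead.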

  16. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  17. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  18. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in the cross section, initial data, or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, combined with the micro–macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  19. Single-mode SOA-based 1kHz-linewidth dual-wavelength random fiber laser.

    PubMed

    Xu, Yanping; Zhang, Liang; Chen, Liang; Bao, Xiaoyi

    2017-07-10

    Narrow-linewidth multi-wavelength fiber lasers are of significant interest for fiber-optic sensors, spectroscopy, optical communications, and microwave generation. A novel narrow-linewidth dual-wavelength random fiber laser with single-mode operation, based on semiconductor optical amplifier (SOA) gain, is achieved in this work for the first time, to the best of our knowledge. A simplified theoretical model is established to characterize this kind of random fiber laser. The inhomogeneous gain in the SOA significantly mitigates the mode competition and alleviates the laser instability that are frequently encountered in multi-wavelength fiber lasers with Erbium-doped fiber gain. The enhanced random distributed feedback from a 5 km non-uniform fiber provides coherent feedback, acting as a mode-selection element to ensure single-mode operation with a narrow linewidth of ~1 kHz. The laser noise is also comprehensively investigated, showing that the proposed random fiber laser has suppressed intensity and frequency noise.

  20. An invariance property of generalized Pearson random walks in bounded geometries

    NASA Astrophysics Data System (ADS)

    Mazzolo, Alain

    2009-03-01

    Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, for Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence, the average length of the trajectories through the domain is independent of the random walk's characteristics and depends only on the ratio of the domain's volume to its surface. In this paper, using arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and give the conditions for its validity for isotropic unbounded stochastic processes. The analytical form of the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
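The Cauchy formula referenced above states that, for a convex body under isotropic uniform incidence, the mean chord length equals 4V/S. A quick Monte Carlo check for a sphere of radius r (where 4V/S = 4r/3): random lines hitting a sphere have impact parameter d with density 2d/r^2, so d^2 is uniform on [0, r^2] and each chord has length 2*sqrt(r^2 - d^2).

```python
import numpy as np

rng = np.random.default_rng(42)
r = 1.0
# Impact parameter: density 2d/r^2 on [0, r], i.e. d^2 ~ Uniform(0, r^2).
d2 = rng.uniform(0.0, r**2, 1_000_000)
chords = 2.0 * np.sqrt(r**2 - d2)        # chord length at impact parameter d
print(chords.mean())                     # close to 4*r/3 ~ 1.3333
```

The sphere is only the easiest case; the formula's appeal is that the same 4V/S holds for any convex body, independent of shape.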

  1. Frequency-dependent scaling from mesoscale to macroscale in viscoelastic random composites

    PubMed Central

    Zhang, Jun

    2016-01-01

    This paper investigates the scaling from a statistical volume element (SVE; i.e. mesoscale level) to representative volume element (RVE; i.e. macroscale level) of spatially random linear viscoelastic materials, focusing on the quasi-static properties in the frequency domain. Requiring the material statistics to be spatially homogeneous and ergodic, the mesoscale bounds on the RVE response are developed from the Hill–Mandel homogenization condition adapted to viscoelastic materials. The bounds are obtained from two stochastic initial-boundary value problems set up, respectively, under uniform kinematic and traction boundary conditions. The frequency and scale dependencies of mesoscale bounds are obtained through computational mechanics for composites with planar random chessboard microstructures. In general, the frequency-dependent scaling to RVE can be described through a complex-valued scaling function, which generalizes the concept originally developed for linear elastic random composites. This scaling function is shown to apply for all different phase combinations on random chessboards and, essentially, is only a function of the microstructure and mesoscale. PMID:27274689

  2. Evaluation of the effects of patient arm attenuation in SPECT cardiac perfusion imaging

    NASA Astrophysics Data System (ADS)

    Luo, Dershan; King, M. A.; Pan, Tin-Su; Xia, Weishi

    1996-12-01

    It was hypothesized that the use of attenuation correction could compensate for degradation in the uniformity of apparent localization of imaging agents seen in cardiac walls when patients are imaged with arms at their sides. Noise-free simulations of the digital MCAT phantom were employed to investigate this hypothesis. Four variations in camera size and collimation scheme were investigated. We observed that: 1) without attenuation correction, the arms had little additional influences on the uniformity of the heart for 180/spl deg/ reconstructions and caused a small increase in nonuniformity for 360/spl deg/ reconstructions, where the impact of both arms was included; 2) change in patient size had more of an impact on count uniformity than the presence of the arms, either with or without attenuation correction; 3) for a low number of iterations and large patient size, slightly better uniformity was obtained from parallel emission data than from fan-beam emission data, independent of whether parallel or fan-beam transmission data was used to reconstruct the attenuation maps; and 4) for all camera configurations, uniformity was improved with attenuation correction and, given sufficient number of iterations, it was compatible among different imaging geometry combinations. Thus, iterative algorithms can compensate for the additional attenuation imposed by larger patients or having the arms on the sides. When the arms are at the sides of the patient, however, a larger radius of rotation may be required, resulting in decreased spatial resolution.

  3. Evaluation of the effects of patient arm attenuation in SPECT cardiac perfusion imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, D.; King, M.A.; Pan, T.S.

    1996-12-01

    It was hypothesized that the use of attenuation correction could compensate for degradation in the uniformity of apparent localization of imaging agents seen in cardiac walls when patients are imaged with arms at their sides. Noise-free simulations of the digital MCAT phantom were employed to investigate this hypothesis. Four variations in camera size and collimation scheme were investigated. The authors observed that: (1) without attenuation correction, the arms had little additional influences on the uniformity of the heart for 180{degree} reconstructions and caused a small increase in nonuniformity for 360{degree} reconstructions, where the impact of both arms was included; (2) change in patient size had more of an impact on count uniformity than the presence of the arms, either with or without attenuation correction; (3) for a low number of iterations and large patient size, slightly better uniformity was obtained from parallel emission data than from fan-beam emission data, independent of whether parallel or fan-beam transmission data was used to reconstruct the attenuation maps; and (4) for all camera configurations, uniformity was improved with attenuation correction and, given sufficient number of iterations, it was compatible among different imaging geometry combinations. Thus, iterative algorithms can compensate for the additional attenuation imposed by larger patients or having the arms on the sides. When the arms are at the sides of the patient, however, a larger radius of rotation may be required, resulting in decreased spatial resolution.

  4. Effect of heterogeneity on the characterization of cell membrane compartments: I. Uniform size and permeability.

    PubMed

    Hall, Damien

    2010-03-15

    Observations of the motion of individual molecules in the membrane of a number of different cell types have led to the suggestion that the outer membrane of many eukaryotic cells may be effectively partitioned into microdomains. A major cause of this suggested partitioning is believed to be the direct/indirect association of the cytosolic face of the cell membrane with the cortical cytoskeleton. Such intimate association is thought to introduce effective hydrodynamic barriers into the membrane that are capable of frustrating molecular Brownian motion over distance scales greater than the average size of the compartment. To date, the standard analytical method for deducing compartment characteristics has relied on observing the random walk behavior of a labeled lipid or protein at various temporal frequencies and different total lengths of time. Simple theoretical arguments suggest that the presence of restrictive barriers imparts a characteristic turnover to a plot of mean squared displacement versus sampling period that can be interpreted to yield the average dimensions of the compartment, expressed as the respective side lengths of a rectangle. In the following series of articles, we used computer simulation methods to investigate how well the conventional analytical strategy coped with heterogeneity in size, shape, and barrier permeability of the cell membrane compartments. We also explored questions relating to the necessary extent of sampling required (with regard to both the recorded time of a single trajectory and the number of trajectories included in the measurement bin) for faithful representation of the actual distribution of compartment sizes found using the single-particle tracking (SPT) technique. In the current investigation, we turned our attention to the analytical characterization of diffusion through cell membrane compartments having both a uniform size and permeability. For this ideal case, we found that (i) an optimum sampling time interval existed for the analysis and (ii) the total length of time for which a trajectory was recorded was a key factor. Copyright (c) 2009 Elsevier Inc. All rights reserved.
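The turnover in mean squared displacement (MSD) described above can be illustrated with a toy simulation (hypothetical parameters, not the authors' model): a 1D walk confined to a box by reflecting walls shows free-diffusion MSD at short lags (~2*D*dt per step) and saturates near the value set by the box size at long lags, which is what is fit to estimate compartment dimensions.

```python
import numpy as np

rng = np.random.default_rng(7)
L, D, dt, n = 0.1, 0.05, 1e-4, 50_000    # box size, diffusivity, time step, steps
steps = rng.normal(0.0, np.sqrt(2 * D * dt), n)
x = np.empty(n)
x[0] = L / 2
for i in range(1, n):
    xi = x[i - 1] + steps[i]
    xi = abs(xi)          # reflect off the wall at 0
    if xi > L:            # reflect off the wall at L
        xi = 2 * L - xi
    x[i] = xi

def msd(traj, lag):
    """Time-averaged mean squared displacement at a given lag."""
    d = traj[lag:] - traj[:-lag]
    return float(np.mean(d * d))

for lag in (1, 10, 100, 1000):
    print(lag, msd(x, lag))   # grows linearly, then saturates near L**2/6
```

Short lags recover the free diffusion coefficient; the plateau height reflects the compartment size, which is the essence of the turnover analysis.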

  5. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone-search radii computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete each query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent by the timing scatter caused by the range of query sizes. At very high levels (e.g. level 20, cell size 0.0004 arcsec), the granularity of the cells becomes so fine that a large number of extraneous empty cells begins to degrade performance. Thus, for the use patterns studied here, database performance is not critically dependent on the exact choice of index or level.
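    The simulated workload described above (positions uniform on the sky, cone radii drawn on a logarithmic scale between 1 arcsec and 1 degree) can be sketched as follows; the function name and defaults are ours for illustration, not the study's code.

```python
import math
import random

def random_cone_queries(n, r_min_arcsec=1.0, r_max_arcsec=3600.0, seed=42):
    """Simulated cone-search workload: positions uniform on the sphere,
    radii log-uniform between r_min (1 arcsec) and r_max (1 degree)."""
    rng = random.Random(seed)
    queries = []
    for _ in range(n):
        ra = rng.uniform(0.0, 360.0)                           # right ascension, deg
        dec = math.degrees(math.asin(rng.uniform(-1.0, 1.0)))  # uniform in sin(dec)
        log_r = rng.uniform(math.log10(r_min_arcsec), math.log10(r_max_arcsec))
        queries.append((ra, dec, 10.0 ** log_r))               # radius in arcsec
    return queries

qs = random_cone_queries(1000)
```

    Drawing declination uniformly in sin(dec) rather than in degrees is what makes the positions uniform over the sphere rather than clustered at the poles.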

  6. Towards a high-speed quantum random number generator

    NASA Astrophysics Data System (ADS)

    Stucki, Damien; Burri, Samuel; Charbon, Edoardo; Chunnilall, Christopher; Meneghetti, Alessio; Regazzoni, Francesco

    2013-10-01

    Randomness is of fundamental importance in various fields, such as cryptography, numerical simulations, or the gaming industry. Quantum physics, being fundamentally probabilistic, is the best option for a physical random number generator. In this article, we present the work carried out in various projects in the context of developing a commercial, certified high-speed random number generator.

  7. On the chromatic number of a space with forbidden equilateral triangle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zvonarev, A E; Raigorodskii, A M; Kharlamova, A A

    2014-09-30

    We improve the Frankl-Rödl estimate for the product of the numbers of edges in uniform hypergraphs with forbidden cardinalities of the intersection of edges. By using this estimate, we obtain explicit bounds for the chromatic number of a space with forbidden monochromatic equilateral triangles. Bibliography: 31 titles.

  8. Exposure of Athletic Trainers to Potentially Infectious Bodily Fluids in the High School Setting.

    ERIC Educational Resources Information Center

    Middlemas, David A.; Jessee, K. Brian; Mulder, Diane K.; Rehberg, Robb S.

    1997-01-01

    Examined high school athletic trainers' exposure to potentially infectious bodily fluids. Data on number of potential exposures per game and practice, number of athletes removed from competition for bleeding, and number of times athletes changed uniforms indicated that trainers had significant chances of being exposed to potentially infectious…

  9. Self-balanced real-time photonic scheme for ultrafast random number generation

    NASA Astrophysics Data System (ADS)

    Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang

    2018-06-01

    We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), balanced photo-detection is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefitting from ultrafast photo-detection, our method can efficiently eliminate the generation-rate bottleneck imposed by the electronic ADCs required in nearly all available fast physical random number generators. A proof-of-principle experiment demonstrates that with our approach 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally is limited by the bandwidth of the chaotic source; the method has the potential to attain a real-time rate of 100 Gb/s.
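    As a loose software analogy to the self-balancing idea (not the authors' photonic implementation): balanced detection subtracts two signals before thresholding, so a common offset cancels. The same cancellation can be sketched in software by pairing successive samples of a biased source and keeping only the sign of their difference.

```python
import random

def balanced_bits(samples):
    """Pair successive samples (non-overlapping) and emit the sign of their
    difference; a DC offset common to both samples cancels out.
    Equal pairs are discarded."""
    bits = []
    it = iter(samples)
    for a, b in zip(it, it):
        if a != b:
            bits.append(1 if a > b else 0)
    return bits

rng = random.Random(0)
noisy = [rng.gauss(5.0, 1.0) for _ in range(10000)]  # strongly offset "analog" source
bits = balanced_bits(noisy)                          # unbiased despite the offset
```

    Despite the source's mean sitting far from zero, the extracted bit stream is statistically balanced, which is the point of differencing before quantization.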

  10. Influence of the variable thermophysical properties on the turbulent buoyancy-driven airflow inside open square cavities

    NASA Astrophysics Data System (ADS)

    Zamora, Blas; Kaiser, Antonio S.

    2012-01-01

    The effects of variable air properties (density, viscosity and thermal conductivity) on the buoyancy-driven flows established in open square cavities are investigated, as well as the influence of the boundary conditions stated at the open edges and of the differencing scheme employed. Two-dimensional laminar, transitional and turbulent simulations are obtained, considering both uniform wall temperature and uniform heat flux heating conditions. In the transitional and turbulent cases, the low-Reynolds-number k-ω turbulence model is employed. The average Nusselt number and the dimensionless mass-flow rate have been obtained for a wide, not previously covered range of Rayleigh number, varying from 10^3 to 10^16. The results obtained taking variable-property effects into account are compared with those calculated assuming constant properties and the Boussinesq approximation. For uniform heat flux heating, a correlation, not reported in previous works, is presented for the critical heating parameter above which the burnout phenomenon can occur. The effects of variable properties on the flow patterns are analyzed.

  11. Quantum random number generation

    DOE PAGES

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; ...

    2016-06-28

    Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness of the devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between trustworthiness of the devices and random number generation speed.

  12. 9 CFR 55.1 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... nervous system disease or chronic wasting condition in the herd; maintaining records of the acquisition... numbering system for the official identification of individual animals in the United States. The AIN... claiming indemnity. National Uniform Eartagging System. A numbering system for the official identification...

  13. 9 CFR 55.1 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... nervous system disease or chronic wasting condition in the herd; maintaining records of the acquisition... numbering system for the official identification of individual animals in the United States. The AIN... claiming indemnity. National Uniform Eartagging System. A numbering system for the official identification...

  14. 48 CFR 204.7101 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Definitions. 204.7101... OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Uniform Contract Line Item Numbering System 204.7101 Definitions. Accounting classification reference number (ACRN) means any combination of a two position alpha...

  15. 48 CFR 204.7101 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Definitions. 204.7101... OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Uniform Contract Line Item Numbering System 204.7101 Definitions. Accounting classification reference number (ACRN) means any combination of a two position alpha...

  16. The one-dimensional asymmetric persistent random walk

    NASA Astrophysics Data System (ADS)

    Rossetto, Vincent

    2018-04-01

    Persistent random walks are transport processes intermediate between uniform rectilinear motion and Brownian motion. They are formed by successive steps of random finite length and direction, travelled at a fixed speed. The isotropic and symmetric 1D persistent random walk is governed by the telegrapher's equation, also called the hyperbolic heat conduction equation. These equations were designed to resolve the paradox of the infinite propagation speed in the heat and diffusion equations. The finiteness of both the speed and the correlation length leads to several classes of random walks: persistent random walks in one dimension can display anomalies that cannot arise for Brownian motion, such as anisotropy and asymmetry. In this work we focus on the case where the mean free path is anisotropic, the only anomaly leading to physics different from the telegrapher's case. We derive exact expressions for its Green's function, its scattering statistics, and its distribution of first-passage times at the origin. The phenomenology of the latter shows a transition in quantities such as the escape probability and the residence time.

  17. Psi experiments: do the best parapsychological experiments justify the claims for psi?

    PubMed

    Hyman, R

    1988-04-15

    Since the founding of the Society for Psychical Research in 1882, psychical researchers have, in each generation, produced research reports which they believed justified the existence of paranormal phenomena. Throughout this period the scientific establishment has either rejected or ignored such claims. The parapsychologists, with some justification, complained that their claims were being rejected without the benefit of a fair hearing. This paper asks how well the best contemporary evidence for psi--the term used to designate ESP and psychokinetic phenomena--stands up to fair and unbiased appraisal. The scrutiny of the three most widely heralded programs of research--the remote viewing experiments, the psi ganzfeld research, and the work with random number generators--indicates that parapsychological research falls short of the professed standards of the field. In particular, the available reports indicate that randomization is often inadequate, multiple statistical testing without adjustment of significance levels is prevalent, possibilities for sensory leakage are not uniformly prevented, errors in the use of statistical tests are much too common, and documentation is typically inadequate. Although the responsible critic cannot argue that these observed departures from optimal experimental procedure have been the sole cause of the reported findings, it is reasonable to demand that parapsychologists produce consistently significant findings from methodologically adequate experiments before their claims are taken seriously.

  18. Output Beam Polarisation of X-ray Lasers with Transient Inversion

    NASA Astrophysics Data System (ADS)

    Janulewicz, K. A.; Kim, C. M.; Matouš, B.; Stiel, H.; Nishikino, M.; Hasegawa, N.; Kawachi, T.

    It is commonly accepted that X-ray lasers, as devices based on amplified spontaneous emission (ASE), do not show any specific polarization in the output beam. Theoretical analysis within the uniform (single-mode) approximation suggested that the output radiation should show a defined polarization, but one changing randomly from shot to shot. This hypothesis was verified experimentally using the traditional double-pulse scheme of transient inversion, with a membrane beam-splitter used as a polarization selector. It was found that the output radiation has a significant p-polarized component in each shot. To explain the effect and place it in line with the available but scarce data, propagation and kinetic effects in the non-uniform plasma have been analysed.

  19. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in research fields ranging from metabolic networks to neural networks to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We describe various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
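    Since the paper benchmarks against Hit-and-Run, a minimal sketch of that sampler may be useful. This is a generic textbook implementation for a polytope {x : Ax <= b}, not the authors' code: from an interior point, pick a uniform random direction, intersect the line with the polytope, and jump to a uniform point on the resulting chord.

```python
import math
import random

def hit_and_run(A, b, x0, n_steps, seed=1):
    """Hit-and-Run sampler for {x : A x <= b}; x0 must be strictly interior.
    The chain is asymptotically uniform over the polytope."""
    rng = random.Random(seed)
    dim = len(x0)
    x = list(x0)
    for _ in range(n_steps):
        d = [rng.gauss(0.0, 1.0) for _ in range(dim)]      # random direction
        norm = math.sqrt(sum(v * v for v in d))
        d = [v / norm for v in d]
        t_lo, t_hi = -math.inf, math.inf                   # chord endpoints
        for a_row, b_i in zip(A, b):
            ad = sum(ai * di for ai, di in zip(a_row, d))
            slack = b_i - sum(ai * xi for ai, xi in zip(a_row, x))
            if ad > 1e-12:
                t_hi = min(t_hi, slack / ad)
            elif ad < -1e-12:
                t_lo = max(t_lo, slack / ad)
        t = rng.uniform(t_lo, t_hi)                        # uniform on the chord
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

# unit square written as Ax <= b
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
pt = hit_and_run(A, b, [0.5, 0.5], 200)
```

    On the unit square the chain mixes quickly; for the high-dimensional random polytopes of interest, mixing time is the practical cost that the paper's message-passing approach avoids.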

  20. Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.

  1. Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Salimi, S.; Jafarizadeh, M. A.

    2009-06-01

    In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In this recipe, the probability on the direct product graph is obtained by multiplying the probabilities on the corresponding subgraphs; this method is useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete K_n, charter and n-cube). We also find that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
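    The multiplication recipe can be checked numerically for the classical walk. Assuming the product graph's Laplacian is L_G ⊗ I + I ⊗ L_H (the convention under which the walk generators add; this pure-Python sketch is ours, not the paper's code), exp(-Lt) factorizes and the occupation probabilities on the product graph are the products of the subgraph probabilities.

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, t, terms=60):
    """exp(t*M) by truncated Taylor series; adequate for these small matrices."""
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v * t / k for v in row] for row in mat_mul(term, M)]
        result = [[r + s for r, s in zip(rr, tr)] for rr, tr in zip(result, term)]
    return result

def cycle_laplacian(n):
    """Combinatorial Laplacian L = D - A of the cycle C_n."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 2.0
        L[i][(i + 1) % n] -= 1.0
        L[i][(i - 1) % n] -= 1.0
    return L

def product_laplacian(LG, LH):
    """Laplacian of the product graph: LG (x) I + I (x) LH."""
    ng, nh = len(LG), len(LH)
    L = [[0.0] * (ng * nh) for _ in range(ng * nh)]
    for u in range(ng):
        for v in range(nh):
            i = u * nh + v
            for u2 in range(ng):
                L[i][u2 * nh + v] += LG[u][u2]
            for v2 in range(nh):
                L[i][u * nh + v2] += LH[v][v2]
    return L

def neg(M):
    return [[-x for x in row] for row in M]

t = 0.7
LG, LH = cycle_laplacian(3), cycle_laplacian(4)
pG = mat_exp(neg(LG), t)[0]      # classical walk started at vertex 0 of C_3
pH = mat_exp(neg(LH), t)[0]      # classical walk started at vertex 0 of C_4
p_prod = mat_exp(neg(product_laplacian(LG, LH)), t)[0]
# p_prod[u*4 + v] should equal pG[u] * pH[v]
```

    The factorization works because L_G ⊗ I and I ⊗ L_H commute, so the matrix exponential of their sum splits into a product.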

  2. Transport properties of random media: A new effective medium theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, K.; Soukoulis, C.M.

    We present a new method for efficient, accurate calculations of transport properties of random media. It is based on the principle that the wave energy density should be uniform when averaged over length scales larger than the size of the scatterers. This scheme captures the effects of resonant scattering of the individual scatterer exactly, as well as the multiple scattering in a mean-field sense. It has been successfully applied to both "scalar" and "vector" classical wave calculations. Results for the energy transport velocity are in agreement with experiment. This approach is of general use and can be easily extended to treat different types of wave propagation in random media. © 1995 The American Physical Society.

  3. Deposition Uniformity of Coal Dust on Filters and Its Effect on the Accuracy of FTIR Analyses for Silica.

    PubMed

    Miller, Arthur L; Drake, Pamela L; Murphy, Nathaniel C; Cauda, Emanuele G; LeBouf, Ryan F; Markevicius, Gediminas

    Miners are exposed to silica-bearing dust, which can lead to silicosis, a potentially fatal lung disease. Currently, airborne silica is measured by collecting filter samples and sending them to a laboratory for analysis. Since this may take weeks, a field method is needed to inform decisions aimed at reducing exposures. This study investigates a field-portable Fourier transform infrared (FTIR) method for end-of-shift (EOS) measurement of silica on filter samples. Since the method entails localized analyses, spatial uniformity of dust deposition can affect accuracy and repeatability. The study, therefore, assesses the influence of radial deposition uniformity on the accuracy of the method. Using laboratory-generated Minusil and coal dusts and three different types of sampling systems, multiple sets of filter samples were prepared. All samples were collected in pairs to create parallel sets for training and validation. Silica was measured by FTIR at nine locations across the face of each filter and the data analyzed using a multiple regression technique that compared various models for predicting silica mass on the filters using different numbers of "analysis shots." It was shown that deposition uniformity is independent of particle type (kaolin vs. silica), which suggests the role of aerodynamic separation is negligible. Results also reflected the correlation between the location and number of shots and the predictive accuracy of the models. The coefficient of variation (CV) for the models when predicting the mass of validation samples was 4%-51%, depending on the number of points analyzed and the type of sampler used, which affected the uniformity of radial deposition on the filters. It was shown that using a single shot at the center of the filter yielded predictivity adequate for a field method (93% return, CV approximately 15%) for samples collected with 3-piece cassettes.

  4. Details of Exact Low Prandtl Number Boundary-Layer Solutions for Forced and For Free Convection

    NASA Technical Reports Server (NTRS)

    Sparrow, E. M.; Gregg, J. L.

    1959-01-01

    A detailed report is given of exact (numerical) solutions of the laminar-boundary-layer equations for the Prandtl number range appropriate to liquid metals (0.003 to 0.03). Consideration is given to the following situations: (1) forced convection over a flat plate for the conditions of uniform wall temperature and uniform wall heat flux, and (2) free convection over an isothermal vertical plate. Tabulations of the new solutions are given in detail. Results are presented for the heat-transfer and shear-stress characteristics; temperature and velocity distributions are also shown. The heat-transfer results are correlated in terms of dimensionless parameters that vary only slightly over the entire liquid-metal range. Previous analytical and experimental work on low Prandtl number boundary layers is surveyed and compared with the new exact solutions.

  5. Methods for the calculation of axial wave numbers in lined ducts with mean flow

    NASA Technical Reports Server (NTRS)

    Eversman, W.

    1981-01-01

    A survey is made of the methods available for the calculation of axial wave numbers in lined ducts. Rectangular and circular ducts with both uniform and non-uniform flow are considered as are ducts with peripherally varying liners. A historical perspective is provided by a discussion of the classical methods for computing attenuation when no mean flow is present. When flow is present these techniques become either impractical or impossible. A number of direct eigenvalue determination schemes which have been used when flow is present are discussed. Methods described are extensions of the classical no-flow technique, perturbation methods based on the no-flow technique, direct integration methods for solution of the eigenvalue equation, an integration-iteration method based on the governing differential equation for acoustic transmission, Galerkin methods, finite difference methods, and finite element methods.

  6. Military Review. Volume 81, Number 1, January-February 2001

    DTIC Science & Technology

    2001-02-01

    Military personnel brought extra malaria pills, carried mosquito netting and wore permethrin-impregnated uniforms. Through these efforts, and good fortune, only one US soldier contracted a... Bierre for providing much of this overview of the Army's transition planning during a roundtable interview on 16 June 2000. 7. COL Joseph Rodriguez

  7. Effects of nonuniform Mach-number entrance on scramjet nozzle flowfield and performance

    NASA Astrophysics Data System (ADS)

    Zhang, Pu; Xu, Jinglei; Quan, Zhibin; Mo, Jianwei

    2016-12-01

    Considering the non-uniformities at the nozzle entrance caused by the upstream flow, the effects of a nonuniform Mach number coupled with shock and expansion waves on the flowfield and performance of a single expansion ramp nozzle (SERN) are numerically studied using the Reynolds-averaged Navier-Stokes equations. The methodology is validated by comparing the numerical results with cold-flow experimental data, and the averaging method used in this paper is discussed. Uniform and nonuniform facility nozzles are designed to generate different Mach-number profiles for the SERN inlet, which is direct-connected to each facility nozzle, and the whole flowfield is simulated. Because of the coupling of shock and expansion waves, the flow direction at the nonuniform SERN entrance is distorted. Compared with the Mach contours of the uniform case, the contour lines are more curved for the shock-wave entrance (SWE) case and flatter for the expansion-wave entrance (EWE) case. The wall pressure distribution shows a rising region in the SWE case, whereas it decreases in stair-like steps in the EWE case. The numerical results reveal that the coupled shock and expansion waves play a significant role in nozzle performance. Compared with the SERN performance for the uniform entrance case at the same operating conditions, the thrust of the nonuniform entrance cases is reduced by 3-6% and the pitch moment by 2.5-7%. The negative lift shows an incremental trend with EWE, while the opposite holds with SWE. These results confirm that it is necessary to consider the entrance flow nonuniformities of a scramjet nozzle coupled with shock or expansion waves from the upstream.

  8. The effect of stimulus intensity on response time and accuracy in dynamic, temporally constrained environments.

    PubMed

    Causer, J; McRobert, A P; Williams, A M

    2013-10-01

    The ability to make accurate judgments and execute effective skilled movements under severe temporal constraints are fundamental to elite performance in a number of domains including sport, military combat, law enforcement, and medicine. In two experiments, we examine the effect of stimulus strength on response time and accuracy in a temporally constrained, real-world, decision-making task. Specifically, we examine the effect of low stimulus intensity (black) and high stimulus intensity (sequin) uniform designs, worn by teammates, to determine the effect of stimulus strength on the ability of soccer players to make rapid and accurate responses. In both field- and laboratory-based scenarios, professional soccer players viewed developing patterns of play and were required to make a penetrative pass to an attacking player. Significant differences in response accuracy between uniform designs were reported in laboratory- and field-based experiments. Response accuracy was significantly higher in the sequin compared with the black uniform condition. Response times only differed between uniform designs in the laboratory-based experiment. These findings extend the literature into a real-world environment and have significant implications for the design of clothing wear in a number of domains. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    PubMed

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare it with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create an index for each attribute; all metric values needed to evaluate an association rule can then be acquired from the attribute indices without any further database scans. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption.
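    The attribute-index idea (one scan to build an inverted index, then rule metrics from set operations alone) can be sketched as follows; the function names and the toy database are ours, not the paper's implementation.

```python
def build_attribute_index(transactions):
    """One database scan: map each attribute (item) to the set of
    transaction ids that contain it."""
    index = {}
    for tid, items in enumerate(transactions):
        for item in items:
            index.setdefault(item, set()).add(tid)
    return index

def rule_metrics(index, n_transactions, antecedent, consequent):
    """Support and confidence of antecedent -> consequent, computed from
    the index alone: no further scans of the database."""
    def cover(itemset):
        tids = set(range(n_transactions))
        for item in itemset:
            tids &= index.get(item, set())
        return tids
    ant = cover(antecedent)
    both = ant & cover(consequent)
    support = len(both) / n_transactions
    confidence = len(both) / len(ant) if ant else 0.0
    return support, confidence

db = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]
idx = build_attribute_index(db)
s, c = rule_metrics(idx, len(db), {"a"}, {"c"})   # support 0.5, confidence 2/3
```

    Every candidate rule an evolutionary search generates can be scored this way by intersecting pre-built id sets, which is the source of the reported reduction in comparisons.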

  10. Attribute Index and Uniform Design Based Multiobjective Association Rule Mining with Evolutionary Algorithm

    PubMed Central

    Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare it with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create an index for each attribute; all metric values needed to evaluate an association rule can then be acquired from the attribute indices without any further database scans. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption. PMID:23766683

  11. Uniform Atmospheric Retrievals of Ultracool Late-T and Early-Y dwarfs

    NASA Astrophysics Data System (ADS)

    Garland, Ryan; Irwin, Patrick

    2017-10-01

    A significant number of ultracool (<600 K) extrasolar objects have been discovered in the past decade thanks to wide-field surveys such as WISE. These objects present a perfect testbed for examining the evolution of atmospheric structure as we transition from typically hot extrasolar temperatures to the temperatures found within our Solar System. By examining these types of objects with a uniform retrieval method, we hope to elucidate trends and (dis)similarities in atmospheric parameters, such as chemical abundances, temperature-pressure profile, and cloud structure, for a sample of 7 ultracool brown dwarfs as we transition from hotter (~700 K) to colder (~450 K) objects. We perform atmospheric retrievals on two late-T and five early-Y dwarfs, using the NEMESIS atmospheric retrieval code coupled to a nested-sampling algorithm, along with a standard uniform model for all of our retrievals. The uniform model assumes the atmosphere is described by a gray radiative-convective temperature profile, (optionally) a gray cloud, and a number of relevant gases. We first verify our methods against a benchmark retrieval for Gliese 570D and find them consistent. Furthermore, we present the retrieved gaseous composition, temperature structure, spectroscopic mass and radius, and cloud structure, and the trends associated with decreasing temperature found in this small sample of objects.

  12. Dynamical properties of the S =1/2 random Heisenberg chain

    NASA Astrophysics Data System (ADS)

    Shu, Yu-Rong; Dupont, Maxime; Yao, Dao-Xin; Capponi, Sylvain; Sandvik, Anders W.

    2018-03-01

    We study dynamical properties at finite temperature (T) of Heisenberg spin chains with random antiferromagnetic exchange couplings, which realize the random singlet phase in the low-energy limit, using three complementary numerical methods: exact diagonalization, matrix-product-state algorithms, and stochastic analytic continuation of quantum Monte Carlo results in imaginary time. Specifically, we investigate the dynamic spin structure factor S(q,ω) and its ω → 0 limit, which are closely related to inelastic neutron scattering and nuclear magnetic resonance (NMR) experiments (through the spin-lattice relaxation rate 1/T1). Our study reveals a continuous narrow band of low-energy excitations in S(q,ω), extending throughout q space, instead of being restricted to q ≈ 0 and q ≈ π as in the uniform system. Close to q = π, the scaling properties of these excitations are well captured by the random-singlet theory, but disagreements also exist with some aspects of the predicted q dependence further away from q = π. Furthermore, we find spin diffusion effects close to q = 0 that are not contained within the random-singlet theory but give non-negligible contributions to the mean 1/T1. To compare with NMR experiments, we consider the distribution of the local relaxation rates 1/T1. We show that the local 1/T1 values are broadly distributed, approximately according to a stretched exponential. The mean 1/T1 first decreases with T, but below a crossover temperature it starts to increase and likely diverges in the limit of a small nuclear resonance frequency ω0. Although a similar divergent behavior has been predicted and experimentally observed for the static uniform susceptibility, this divergent behavior of the mean 1/T1 has never been observed experimentally. Indeed, we show that the divergence of the mean 1/T1 is due to rare events in the disordered chains and is concealed in experiments, where the typical 1/T1 value is accessed.

  13. Terahertz imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Chan, Wai Lam

    Most existing terahertz imaging systems are limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to the powerful CS algorithms which enable the reconstruction of N-by-N pixel images with far fewer than N^2 measurements. The first CS Fourier imaging approach successfully reconstructs a 64x64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels which define the image in the Fourier plane. Only about 12% of the pixels are required for reassembling the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition with a reduced number of measurements, the single-pixel system can further cut down acquisition time by electrical or optical spatial modulation of the random patterns. In order to switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate and is independently controlled by applying an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency.
The second-generation spatial terahertz modulator, also based on metamaterials but with a higher resolution (32x32), is under development. An FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
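
    The measurement model shared by both schemes, recovering an N-pixel scene from M << N random projections when the scene is sparse, can be illustrated with a toy sparse-recovery sketch. The sizes are illustrative, and orthogonal matching pursuit stands in here for the thesis's unspecified CS reconstruction algorithms:

```python
import numpy as np

# Toy single-pixel CS measurement model (illustrative sizes):
# an N-pixel scene with k nonzero pixels, M << N random projections.
rng = np.random.default_rng(0)
N, M, k = 64, 32, 3

x_true = np.zeros(N)
idx = rng.choice(N, size=k, replace=False)
x_true[idx] = rng.normal(size=k)

Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # random measurement masks
y = Phi @ x_true                             # M single-pixel measurements

# Orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then refit by least squares.
residual = y.copy()
support = []
for _ in range(k):
    j = int(np.argmax(np.abs(Phi.T @ residual)))
    support.append(j)
    A = Phi[:, support]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coef

x_hat = np.zeros(N)
x_hat[support] = coef
```

With these sizes the scene is recovered from half as many measurements as pixels; the sketch only illustrates why M << N random measurements can suffice for a sparse image, not the reconstruction pipeline actually used in the thesis.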

  14. 75 FR 65405 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ... current terminology, records usage, policies, managing office location, and storage and retrieval... include the veteran's name, address, Social Security number, date of birth, phone number, medical history... including medical or beneficiary related information, to the veteran's or uniformed services member's legal...

  15. Leveraging Random Number Generation for Mastery of Learning in Teaching Quantitative Research Courses via an E-Learning Method

    ERIC Educational Resources Information Center

    Boonsathorn, Wasita; Charoen, Danuvasin; Dryver, Arthur L.

    2014-01-01

    E-Learning brings access to a powerful but often overlooked teaching tool: random number generation. Using random number generation, a practically infinite number of quantitative problem-solution sets can be created. In addition, within the e-learning context, in the spirit of the mastery of learning, it is possible to assign online quantitative…

  16. Benchmarking the x-ray phase contrast imaging for ICF DT ice characterization using roughened surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dewald, E; Kozioziemski, B; Moody, J

    2008-06-26

    We use x-ray phase contrast imaging to characterize the inner surface roughness of DT ice layers in capsules planned for future ignition experiments. It is therefore important to quantify how well the x-ray data correlate with the actual ice roughness. We benchmarked the accuracy of our system using surrogates with fabricated roughness characterized with high-precision standard techniques. Cylindrical artifacts with azimuthally uniform sinusoidal perturbations of 100 µm period and 1 µm amplitude demonstrated 0.02 µm accuracy, limited by the resolution of the imager and the source size of our phase contrast system. Spherical surrogates with random roughness close to that required of the DT ice for a successful ignition experiment were used to correlate the actual surface roughness to that obtained from the x-ray measurements. When comparing average power spectra of individual measurements, the accuracy mode number limits of the x-ray phase contrast system, benchmarked against surface characterization performed by atomic force microscopy, are 60 and 90 for surrogates smoother and rougher than the required ice roughness, respectively. These agreement mode number limits are >100 when comparing matching individual measurements. We will discuss the implications for interpreting DT ice roughness data derived from phase-contrast x-ray imaging.

  17. Problems with the random number generator RANF implemented on the CDC cyber 205

    NASA Astrophysics Data System (ADS)

    Kalle, Claus; Wansleben, Stephan

    1984-10-01

    We show that using RANF may lead to wrong results when lattice models are simulated by Monte Carlo methods. We present a shift-register sequence random number generator which generates two random numbers per cycle on a two-pipe CDC Cyber 205.
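
    The shift-register family of generators proposed as a replacement can be sketched as a two-tap generalized feedback shift-register (GFSR) of the form x[i] = x[i-p] XOR x[i-q]. The tap values and the LCG-based seeding below are illustrative choices, not necessarily those used by the authors, and the sketch is scalar rather than the two-numbers-per-cycle vectorized form described for the Cyber 205:

```python
# Two-tap generalized feedback shift-register (GFSR) generator:
#   x[i] = x[i-p] XOR x[i-q]
# Taps (p, q) and the LCG seeding are illustrative assumptions.
def gfsr(n, p=250, q=103, bits=32, seed=12345):
    state = []
    s = seed
    for _ in range(p):                       # fill the initial register
        s = (1664525 * s + 1013904223) % (1 << bits)
        state.append(s)
    out = []
    for _ in range(n):
        x = state[-p] ^ state[-q]            # shift-register recurrence
        state.append(x)
        out.append(x / float(1 << bits))     # map to a uniform in [0, 1)
    return out

u = gfsr(1000)   # 1000 uniform variates in [0, 1)
```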

  18. The premixed flame in uniform straining flow

    NASA Technical Reports Server (NTRS)

    Durbin, P. A.

    1982-01-01

    Characteristics of the premixed flame in uniform straining flow are investigated by the technique of activation-energy asymptotics. An inverse method is used, which avoids some of the restrictions of previous analyses. It is shown that this method recovers known results for adiabatic flames. New results for flames with heat loss are obtained, and it is shown that, in the presence of finite heat loss, straining can extinguish flames. A stability analysis shows that straining can suppress the cellular instability of flames with Lewis number less than unity. Strain can produce instability of flames with Lewis number greater than unity. A comparison shows quite good agreement between theoretical deductions and experimental observations of Ishizuka, Miyasaka & Law (1981).

  19. Report on hard red spring wheat varieties grown in cooperative plot and nursery experiments in the spring wheat region in 2016

    USDA-ARS's Scientific Manuscript database

    The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 86th year in 2016. The nursery contained 26 entries submitted by 8 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ...

  20. Sample-based estimation of tree species richness in a wet tropical forest compartment

    Treesearch

    Steen Magnussen; Raphael Pelissier

    2007-01-01

    Petersen's capture-recapture ratio estimator and the well-known bootstrap estimator are compared across a range of simulated low-intensity simple random sampling designs with fixed-area plots of 100 m² in a rich wet tropical forest compartment with 93 tree species in the Western Ghats of India. Petersen's ratio estimator was uniformly superior to the bootstrap...

  1. Electromagnetic properties of material coated surfaces

    NASA Technical Reports Server (NTRS)

    Beard, L.; Berrie, J.; Burkholder, R.; Dominek, A.; Walton, E.; Wang, N.

    1989-01-01

    The electromagnetic properties of material coated conducting surfaces were investigated. The coating geometries consist of uniform layers over a planar surface, irregularly shaped formations near edges and randomly positioned, electrically small, irregularly shaped formations over a surface. Techniques to measure the scattered field and constitutive parameters from these geometries were studied. The significance of the scattered field from these geometries warrants further study.

  2. Effects of Model Characteristics on Observational Learning of Inmates in a Pre-Release Center.

    ERIC Educational Resources Information Center

    Fliegel, Alan B.

    Subjects were 138 inmates from the pre-release unit of a Southwestern prison system, randomly divided into three groups of 46 each. Each group viewed a video-taped model delivering a speech. The independent variable had three levels: (1) lecturer attired in a shirt and tie; (2) lecturer attired in a correctional officer's uniform; and (3) model…

  3. Report on hard red spring wheat varieties grown in cooperative plot and nursery experiments in the spring wheat region in 2014

    USDA-ARS's Scientific Manuscript database

    The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 84th year in 2014. The nursery contained 26 entries submitted by 6 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ex...

  4. Effect of feed supplement containing earthworm meal (Lumbricus rubellus) on production performance of quail (Coturnix coturnix japonica)

    NASA Astrophysics Data System (ADS)

    Istiqomah, L.; Sakti, A. A.; Suryani, A. E.; Karimy, M. F.; Anggraeni, A. S.; Herdian, H.

    2017-12-01

    The objective of this study was to evaluate the effect of a feed supplement (FS) containing earthworm meal (EWM) on the production performance of laying quails. Three hundred sixty 20-week-old Coturnix coturnix japonica quails were used in a completely randomized design (CRD) with three dietary treatments, A = CD (control without FS), B = CD + 0.250% FS, and C = CD + 0.375% FS, over a 6-week experimental period. Each treatment had 4 equal replicates, with 30 quails randomly allocated to each of 12 cage units. Variables measured were feed intake, feed conversion ratio, feed efficiency, mortality rate, hen-day production, egg weight, and egg uniformity. Data were statistically analyzed by one-way ANOVA, and differences among treatment means were analyzed using Duncan's Multiple Range Test (DMRT). The results showed that administration of 0.375% FS based on earthworm meal, fermented rice bran, and skim milk lowered the feed conversion ratio and increased feed efficiency. The experimental treatments had no effect on feed intake, mortality, hen-day production, egg weight, or egg uniformity of quail. It is concluded that administration of the feed supplement improved the growth performance of quail.

  5. Electroforming free controlled bipolar resistive switching in Al/CoFe2O4/FTO device with self-compliance effect

    NASA Astrophysics Data System (ADS)

    Munjal, Sandeep; Khare, Neeraj

    2018-02-01

    Controlled bipolar resistive switching (BRS) has been observed in nanostructured CoFe2O4 (CFO) films using an Al (aluminum)/CoFe2O4/FTO (fluorine-doped tin oxide) device. The fabricated device shows electroforming-free uniform BRS with two clearly distinguished and stable resistance states without any application of compliance current, with a resistance ratio of the high resistance state (HRS) to the low resistance state (LRS) of >10². A small switching voltage (<1 V) and low current in both resistance states confirm the fabrication of a low-power-consumption device. In the LRS, the conduction mechanism was found to be Ohmic in nature, while the HRS (OFF state) was governed by the space-charge-limited conduction mechanism, which indicates the presence of an interfacial layer with an imperfect microstructure near the top Al/CFO interface. The device shows nonvolatile behavior with good endurance properties, an acceptable resistance ratio, uniform resistive switching due to stable, less random filament formation/rupture, and control over the resistive switching properties through the choice of stop voltage, which makes the device suitable for application in future nonvolatile resistive random access memory.

  6. Generating and using truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2012-01-01

    The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing.
    Program summary. Program title: TRQS. Catalogue identifier: AEKA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 7924. No. of bytes in distributed program, including test data, etc.: 88651. Distribution format: tar.gz. Programming languages: Mathematica, C. Computer: any machine equipped with a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and supporting a recent version of Mathematica. Operating system: any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: case dependent. Classification: 4.15. Nature of problem: generation of random density matrices. Solution method: use of a physical quantum random number generator. Running time: generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.

  7. Source-Independent Quantum Random Number Generation

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bits. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10³ bit/s.

  8. 19 CFR 24.17 - Reimbursable services of CBP employees.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... regular pay is computed as follows: Hours Hours Gross number of working hours in 52 40-hour weeks 2,080... Sick Leave—13 days 104 384 Net number of working hours 1,696 Gross number of working hours in 52 40-hour weeks 2,080 Working hour equivalent of Government contributions for employee uniform allowance...

  9. On numbers of clones needed for managing risks in clonal forestry

    Treesearch

    J. Bishir; J.H. Roberds

    1999-01-01

    An important question in clonal forestry concerns the number of clones needed in plantations to protect against catastrophic failure while at the same time achieving the uniform stands, high yields, and ease of management associated with this management system. This paper looks at how the number of clones needed to achieve a predetermined maximum acceptable...

  10. Friction factor and heat transfer of nanofluids containing cylindrical nanoparticles in laminar pipe flow

    NASA Astrophysics Data System (ADS)

    Lin, Jianzhong; Xia, Yi; Ku, Xiaoke

    2014-10-01

    Numerical simulations of polyalphaolefins-Al2O3 nanofluids containing cylindrical nanoparticles in a laminar pipe flow are performed by solving the Navier-Stokes equations with a term accounting for cylindrical nanoparticles, the general dynamic equation for cylindrical nanoparticles, and an equation for nanoparticle orientation. The distributions of particle number and volume concentration, the friction factor, and the heat transfer are obtained and analyzed. The results show that the distributions of nanoparticle number and volume concentration are non-uniform across the section, with larger values in the region near the pipe center and smaller values near the wall. The non-uniformity becomes significant with increasing axial distance from the inlet. The friction factor decreases with increasing Reynolds number. The relationships between the friction factor and the nanoparticle volume concentration, as well as the particle aspect ratio, depend on the Reynolds number. The Nusselt number of the nanofluids, directly proportional to the Reynolds number, particle volume concentration, and particle aspect ratio, is higher near the pipe entrance than at downstream locations. The rate of increase in Nusselt number at lower particle volume concentration is greater than that at higher concentration. Finally, expressions for the friction factor and Nusselt number as functions of particle volume concentration, particle aspect ratio, and Reynolds number are derived from the numerical data.

  11. On the limiting characteristics of quantum random number generators at various clusterings of photocounts

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2017-03-01

    Various methods for the clustering of photocounts constituting a sequence of random numbers are considered. It is shown that the clustering of photocounts resulting in the Fermi-Dirac distribution makes it possible to achieve the theoretical limit of the random number generation rate.

  12. Anonymous authenticated communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Cheryl L; Schroeppel, Richard C; Snyder, Lillian A

    2007-06-19

    A method of performing electronic communications between members of a group wherein the communications are authenticated as being from a member of the group and have not been altered, comprising: generating a plurality of random numbers; distributing in a digital medium the plurality of random numbers to the members of the group; publishing a hash value of contents of the digital medium; distributing to the members of the group public-key-encrypted messages each containing a same token comprising a random number; and encrypting a message with a key generated from the token and the plurality of random numbers.
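
    The data flow enumerated in the abstract can be sketched as a toy Python implementation. Everything here is an illustrative assumption: SHA-256 and HMAC stand in for the patent's unspecified hash and authentication primitives, the helper names are hypothetical, and the public-key encryption of the token is omitted. This is a sketch of the structure, not production cryptography:

```python
import hashlib, hmac, secrets

# 1. Generate and distribute a pool of random numbers on a digital medium.
group_randoms = [secrets.token_bytes(32) for _ in range(8)]
medium = b"".join(group_randoms)

# 2. Publish a hash of the medium's contents so members can verify it.
published_digest = hashlib.sha256(medium).hexdigest()

# 3. Distribute the same random token to every member
#    (public-key encrypted in the actual scheme; omitted here).
token = secrets.token_bytes(32)

# 4. Derive a shared group key from the token and the random-number pool.
def derive_key(token, randoms):
    h = hashlib.sha256(token)
    for r in randoms:
        h.update(r)
    return h.digest()

key = derive_key(token, group_randoms)

# 5. Authenticate a message as coming from *some* group member,
#    without identifying which one: every member holds the same key.
def authenticate(message, key):
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"status: all clear"
tag = authenticate(msg, key)
```

Because every member derives the same key, a valid tag proves group membership and message integrity while keeping the individual sender anonymous within the group.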

  13. Deployment-Related Factors, Mental Health, and Suicide: Review of the Literature

    DTIC Science & Technology

    2011-04-01

    [Standard Form 298 documentation-page residue; recoverable detail: performing organization is the Department of Medical & Clinical Psychology, Uniformed Services University of the Health Sciences.]

  14. The current impact flux on Mars and its seasonal variation

    NASA Astrophysics Data System (ADS)

    JeongAhn, Youngmin; Malhotra, Renu

    2015-12-01

    We calculate the present-day impact flux on Mars and its variation over the Martian year, using current data on the orbital distribution of known Mars-crossing minor planets. We adapt the Öpik-Wetherill formulation for calculating collision probabilities, paying careful attention to the non-uniform distributions of the perihelion longitude and the argument of perihelion owing to secular planetary perturbations. We find that, at the current epoch, the Mars crossers have an axial distribution of the argument of perihelion, and the mean direction of their eccentricity vectors is nearly aligned with Mars' eccentricity vector. These previously neglected angular non-uniformities have the effect of depressing the mean annual impact flux by a factor of about 2 compared to the estimate based on a uniform random distribution of the angular elements of Mars crossers; the amplitude of the seasonal variation of the impact flux is likewise depressed by a factor of about 4-5. We estimate that the flux of large impactors (of absolute magnitude H < 16) within ±30° of Mars' aphelion is about three times larger than when the planet is near perihelion. Extrapolation of our results to a model population of meter-size Mars crossers shows that if these small impactors have a uniform distribution of their angular elements, then their aphelion-to-perihelion impact flux ratio would be 11-15, but if they track the orbital distribution of the large impactors, including their non-uniform angular elements, then this ratio would be about 3. Comparison of our results with the current dataset of fresh impact craters on Mars (detected with Mars-orbiting spacecraft) appears to rule out the uniform distribution of angular elements.

  15. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
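
    The recommended Fourier-synthesis method, constructing a sequence from a prescribed amplitude spectrum and random phases, can be sketched as follows. The details below (a flat unit-amplitude spectrum, i.i.d. uniform phases, inversion by FFT) are illustrative assumptions, not necessarily the exact spectra prescribed in the paper:

```python
import numpy as np

def fourier_white_noise(n, rng):
    """Gaussian white noise via Fourier synthesis: a flat amplitude
    spectrum with i.i.d. uniform random phases, inverted by real FFT."""
    nfreq = n // 2 + 1
    phases = rng.uniform(0.0, 2.0 * np.pi, nfreq)
    spectrum = np.exp(1j * phases)   # unit amplitudes, random phases
    spectrum[0] = 0.0                # zero out DC for a zero-mean signal
    x = np.fft.irfft(spectrum, n=n)
    return x / x.std()               # normalize to unit variance

rng = np.random.default_rng(1)
x = fourier_white_noise(4096, rng)
```

The flat amplitude spectrum makes the sequence white, and summing many cosines with independent random phases makes the samples approximately Gaussian by the central limit theorem.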

  16. A distributed scheduling algorithm for heterogeneous real-time systems

    NASA Technical Reports Server (NTRS)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While random task allocation is very sensitive to heterogeneities, the proposed algorithm is shown to be robust to such non-uniformities in system components and load.

  17. Phase transition in nonuniform Josephson arrays: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lozovik, Yu. E.; Pomirchy, L. M.

    1994-01-01

    A disordered 2D system with Josephson interactions is considered. The disordered XY-model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, where the Josephson coupling constants J_ij are equal to J with probability p or to zero (the bond percolation problem); (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function, and helicity modulus is analyzed. The phase diagram of the diluted system in the T_c-p plane is obtained.

  18. An integral formulation for wave propagation on weakly non-uniform potential flows

    NASA Astrophysics Data System (ADS)

    Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel

    2016-12-01

    An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.

  19. Jet Velocity Profile Effects on Spray Characteristics of Impinging Jets at High Reynolds and Weber Numbers

    NASA Astrophysics Data System (ADS)

    Rodrigues, Neil S.; Kulkarni, Varun; Sojka, Paul E.

    2014-11-01

    While like-on-like doublet impinging jet atomization has been extensively studied in the literature, there is poor agreement between experimentally observed spray characteristics and theoretical predictions (Ryan et al. 1995, Anderson et al. 2006). Recent works (Bremond and Villermaux 2006, Choo and Kang 2007) have introduced a non-uniform jet velocity profile, which leads to a deviation from the standard assumptions for the sheet velocity and the sheet thickness parameter. These works have assumed a parabolic profile to serve as another limit to the traditional uniform jet velocity profile assumption. Incorporating a non-uniform jet velocity profile results in the sheet velocity and the sheet thickness parameter depending on the sheet azimuthal angle. In this work, the 1/7th power-law turbulent velocity profile is assumed to provide a closer match to the flow behavior of jets at high Reynolds and Weber numbers, which correspond to the impact wave regime. Predictions for the maximum wavelength, sheet breakup length, ligament diameter, and drop diameter are compared with experimental observations. The results demonstrate better agreement between experimentally measured values and predictions compared to previous models. This work was supported by the U.S. Army Research Office under the Multi-University Research Initiative, Grant Number W911NF-08-1-0171.

  20. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    PubMed

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated at the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The approach reported in this paper significantly shortens acquisition time and improves the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
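
    A non-uniform Fourier-space sampling pattern of the kind described, denser near zero frequency where biological-sample energy concentrates, can be sketched as follows. The sampling distribution and its parameters are illustrative assumptions, not the authors' actual illuminator design:

```python
import numpy as np

rng = np.random.default_rng(2)
n_leds = 68   # the reduced acquisition count quoted in the abstract

# Draw LED positions in normalized Fourier space with radii clustered
# near zero frequency (half-normal radius; sigma is an illustrative choice).
r = np.abs(rng.normal(0.0, 0.35, n_leds))
theta = rng.uniform(0.0, 2.0 * np.pi, n_leds)
kx, ky = r * np.cos(theta), r * np.sin(theta)

# Fraction of samples falling in the low-frequency half of the band.
low_freq_fraction = float(np.mean(np.hypot(kx, ky) < 0.5))
```

Compared with a uniform grid of the same size, most of these samples land where the signal energy is, which is the intuition behind needing fewer raw images.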

  1. Impact of recombination on polymorphism of genes encoding Kunitz-type protease inhibitors in the genus Solanum.

    PubMed

    Speranskaya, Anna S; Krinitsina, Anastasia A; Kudryavtseva, Anna V; Poltronieri, Palmiro; Santino, Angelo; Oparina, Nina Y; Dmitriev, Alexey A; Belenikin, Maxim S; Guseva, Marina A; Shevelev, Alexei B

    2012-08-01

    The group of Kunitz-type protease inhibitors (KPI) from potato is encoded by a polymorphic family of multiple allelic and non-allelic genes. Previous explanations of KPI variability were based on the hypothesis of random mutagenesis as a key factor in KPI polymorphism. KPI-A genes from the genomes of Solanum tuberosum cv. Istrinskii and the wild species Solanum palustre were amplified by PCR with subsequent cloning in plasmids. True KPI sequences were derived from comparison of the cloned copies. "Hot spots" of recombination in KPI genes were independently identified by DnaSP 4.0 and TOPALi v2.5 software. The KPI-A sequence from potato cv. Istrinskii was found to be 100% identical to the gene from Solanum nigrum. This fact illustrates the high degree of similarity of KPI genes in the genus Solanum. Pairwise comparison of KPI A and B genes unambiguously showed a non-uniform extent of polymorphism at different nucleotide positions. Moreover, the occurrence of substitutions was not random along the strand. Taken together, these facts contradict the traditional hypothesis of random mutagenesis as the principal source of KPI gene polymorphism. The experimentally found mosaic structure of KPI genes in both plants studied is consistent with the hypothesis of recombination of ancestral genes. The same mechanism was proposed earlier for other resistance-conferring genes in the nightshade family (Solanaceae). Based on the data obtained, we searched for potential motifs of site-specific binding with plant DNA recombinases. During this work, we analyzed the sequencing data reported by the Potato Genome Sequencing Consortium (PGSC) in 2011 and found considerable inconsistency in their data concerning the number, location, and orientation of KPI genes of groups A and B. The key role of recombination, rather than random point mutagenesis, in KPI polymorphism was demonstrated for the first time.

  2. Inconsistencies in emergency instructions on common household product labels.

    PubMed

    Cantrell, F Lee; Nordt, Sean Patrick; Krauss, Jamey R

    2013-10-01

    Human exposures to non-pharmaceutical products often result in serious injury and death annually in the United States. Studies performed more than 25 years ago described inadequate first aid advice on the majority of household products. The current study evaluates contemporary non-pharmaceutical products with respect to the location, uniformity, and type of their first aid and emergency contact instructions. A random convenience sample of commercial product label information was obtained from local retail stores over an 8-month period. Twelve common non-pharmaceutical product categories with large numbers of annual human exposures were identified from National Poison Data System data. A minimum of 10 unique products for each category was utilized. The following information was identified: product name and manufacturer, location on container, presence and type of route-specific treatment, and medical assistance referral information. A total of 259 product labels were examined. First aid/contact information was located on the container as follows: rear 162 (63%), side 28 (11%), front 3 (1%), bottom 2 (0.77%), behind label 14 (5%), missing entirely 50 (19%). Fifty-five products (21%) lacked any first aid instructions. Suggested contacts for accidental poisoning: none listed 75 (29%), physician 144 (56%), poison control centers 102 (39%), manufacturer 44 (17%), "Call 911" 10 (4%). Suggested contacts for unintentional exposure and the content of first aid instructions on household products were inconsistent, frequently incomplete, and at times absent. Instruction locations similarly lacked uniformity. Household product labels need to provide concise, accurate first aid and emergency contact instructions in easy-to-understand language in a universal format.

  3. Turbulent transport with intermittency: Expectation of a scalar concentration.

    PubMed

    Rast, Mark Peter; Pinton, Jean-François; Mininni, Pablo D

    2016-04-01

    Scalar transport by turbulent flows is best described in terms of Lagrangian parcel motions. Here we measure the Eulerian distance traveled along Lagrangian trajectories in a simple point vortex flow to determine the probabilistic impulse response function for scalar transport in the absence of molecular diffusion. As expected, the mean squared Eulerian displacement scales ballistically at very short times and diffusively at very long times, with the displacement distribution at any given time approximating that of a random walk. However, significant deviations of the displacement distributions from the Rayleigh distribution are found. The probability of long-distance transport is reduced over inertial-range time scales due to spatial and temporal intermittency. This can be modeled as a series of trapping events with durations uniformly distributed below the Eulerian integral time scale. The probability of long-distance transport is, on the other hand, enhanced beyond that of the random walk for both times shorter than the Lagrangian integral time and times longer than the Eulerian integral time. The very short-time enhancement reflects the underlying Lagrangian velocity distribution, while that at very long times results from the spatial and temporal variation of the flow at the largest scales. The probabilistic impulse response function, and with it the expectation value of the scalar concentration at any point in space and time, can be modeled using only the evolution of the lowest spatial wave number modes (the mean and the lowest harmonic) and an eddy-based constrained random walk that captures the essential velocity phase relations associated with advection by vortex motions. Preliminary examination of Lagrangian tracers in three-dimensional homogeneous isotropic turbulence suggests that transport in that setting can be similarly modeled.
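
    The ballistic-to-diffusive crossover of the mean squared displacement described above can be illustrated with a minimal sketch: an Ornstein-Uhlenbeck velocity process stands in for the Lagrangian parcel velocities (the paper's point-vortex flow, trapping events, and intermittency are not modeled; the name `tau_L` is a hypothetical stand-in for the Lagrangian integral time).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Lagrangian model: parcels with Ornstein-Uhlenbeck velocities.
# MSD ~ t^2 for t << tau_L (ballistic), MSD ~ t for t >> tau_L (diffusive).
def msd(n_walkers=2000, n_steps=400, dt=0.05, tau_L=1.0):
    v = rng.standard_normal((n_walkers, 2))   # stationary unit-variance start
    x = np.zeros((n_walkers, 2))
    out = np.empty(n_steps)
    for k in range(n_steps):
        x += v * dt
        v += (-v / tau_L) * dt + np.sqrt(2 * dt / tau_L) * rng.standard_normal((n_walkers, 2))
        out[k] = np.mean(np.sum(x**2, axis=1))  # mean squared displacement
    return out

m = msd()
# early ratio m[1]/m[0] is near 4 (t doubled, ballistic t^2 scaling);
# late ratio m[-1]/m[199] is near 2 (t doubled, diffusive t scaling)
```

    The deviations from Rayleigh statistics that the paper attributes to intermittency would require adding the trapping-event model on top of this baseline.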

  4. Applying a Family-Level Economic Strengthening Intervention to Improve Education and Health-Related Outcomes of School-Going AIDS-Orphaned Children: Lessons from a Randomized Experiment in Southern Uganda.

    PubMed

    Ssewamala, Fred M; Karimli, Leyla; Neilands, Torsten; Wang, Julia Shu-Huah; Han, Chang-Keun; Ilic, Vilma; Nabunya, Proscovia

    2016-01-01

    Children comprise the largest proportion of the population in sub-Saharan Africa. Of these, millions are orphaned. Orphanhood increases the likelihood of growing up in poverty, dropping out of school, and becoming infected with HIV. Therefore, programs aimed at securing a healthy developmental trajectory for these orphaned children are desperately needed. We conducted a two-arm cluster-randomized controlled trial to evaluate the effectiveness of a family-level economic strengthening intervention with regard to school attendance, school grades, and self-esteem in AIDS-orphaned adolescents aged 12-16 years from 10 public rural primary schools in southern Uganda. Children were randomly assigned to receive usual care (counseling, school uniforms, school lunch, notebooks, and textbooks), "bolstered" with mentorship from a near-peer (control condition, n = 167), or to receive bolstered usual care plus a family-level economic strengthening intervention in the form of a matched Child Savings Account (Suubi-Maka treatment arm, n = 179). The two groups did not differ at baseline, but 24 months later, children in the Suubi-Maka treatment arm reported significantly better educational outcomes, lower levels of hopelessness, and higher levels of self-concept compared to participants in the control condition. Our study contributes to the ongoing debate on how to address the developmental impacts of the increasing numbers of orphaned and vulnerable children and adolescents in sub-Saharan Africa, especially those affected by HIV/AIDS. Our findings indicate that innovative family-level economic strengthening programs, over and above bolstered usual care that includes psychosocial interventions for young people, may have positive developmental impacts related to education, health, and psychosocial functioning.

  5. bFGF-containing electrospun gelatin scaffolds with controlled nano-architectural features for directed angiogenesis

    PubMed Central

    Montero, Ramon B.; Vial, Ximena; Nguyen, Dat Tat; Farhand, Sepehr; Reardon, Mark; Pham, Si M.; Tsechpenakis, Gavriil; Andreopoulos, Fotios M.

    2011-01-01

    Current therapeutic angiogenesis strategies are focused on the development of biologically responsive scaffolds that can deliver multiple angiogenic cytokines and/or cells in ischemic regions. Herein, we report on a novel electrospinning approach to fabricate cytokine-containing nanofibrous scaffolds with tunable architecture to promote angiogenesis. Fiber diameter and uniformity were controlled by varying the concentration of the polymeric (i.e. gelatin) solution, the feed rate, the needle-to-collector distance, and the electric field potential between the collector plate and injection needle. Scaffold fiber orientation (random vs. aligned) was achieved by alternating the polarity of two parallel electrodes placed on the collector plate, thus dictating fiber deposition patterns. Basic fibroblast growth factor (bFGF) was physically immobilized within the gelatin scaffolds at variable concentrations, and human umbilical vein endothelial cells (HUVEC) were seeded on top of the scaffolds. Cell proliferation and migration were assessed as a function of growth factor loading and scaffold architecture. HUVECs successfully adhered onto gelatin B scaffolds, and cell proliferation was directly proportional to the loading concentration of the growth factor (0–100 ng/mL bFGF). Fiber orientation had a pronounced effect on cell morphology and orientation. Cells spread along the fibers of the electrospun scaffolds with the aligned orientation and developed a spindle-like morphology parallel to the scaffold's fibers. In contrast, cells seeded onto the scaffolds with random fiber orientation did not demonstrate any directionality and appeared to have a rounder shape. Capillary formation (i.e. sprout length and number of sprouts per bead), assessed in a 3-D in vitro angiogenesis assay, was a function of bFGF loading concentration (0 ng, 50 ng, and 100 ng per scaffold) for both types of electrospun scaffolds (i.e. with aligned or random fiber orientation). PMID:22200610

  6. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point that has not been visited in the preceding mu steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial non-periodic part of t steps (the transient) and a final periodic part of p steps (the attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^{(mu,d)}(t,p) is the relevant quantity, and the marginal distributions studied previously are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_inf^{(1,d)}(t,p) = [Gamma(1 + I_d^{-1}) (t + I_d^{-1}) / Gamma(t + p + I_d^{-1})] delta_{p,2}, where t = 0, 1, 2, ..., Gamma(z) is the gamma function and delta_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d -> infinity, and the random map model which, even for mu = 0, presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) proportional to p^{-1}]: S_N^{(0,rm)}(t,p) = Gamma(N) / {Gamma[N + 1 - (t + p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
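
    A minimal sketch of the tourist-walk dynamics (assuming N uniform points in the unit square and the convention that the walker simply moves to the nearest point other than the one it currently occupies): each trajectory's transient t and attractor p can be read off directly, and the attractor is always a pair of mutually nearest neighbours, matching the delta_{p,2} factor above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Deterministic tourist walk sketch: from each point, jump to its nearest
# neighbour (excluding the current point). With continuous random coordinates,
# distances strictly decrease along a trajectory until it locks onto a pair of
# mutually nearest neighbours, so the cycle length is p = 2.
def tourist_walk(points, start):
    d2 = np.sum((points[None, :, :] - points[:, None, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbour
    nearest = np.argmin(d2, axis=1)
    seen = {}                             # point index -> step first visited
    i, step = start, 0
    while i not in seen:
        seen[i] = step
        i = nearest[i]
        step += 1
    p = step - seen[i]                    # attractor (cycle) length
    t = seen[i]                           # transient length
    return t, p

pts = rng.random((200, 2))                # N = 200 uniform points, d = 2
t, p = tourist_walk(pts, 0)               # p == 2 for every starting point
```

    Tabulating (t, p) over many realizations is how one would check the joint distribution quoted in the abstract numerically.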

  8. Not all numbers are equal: preferences and biases among children and adults when generating random sequences.

    PubMed

    Towse, John N; Loetscher, Tobias; Brugger, Peter

    2014-01-01

    We investigate the number preferences of children and adults when generating random digit sequences. Previous research has shown convincingly that adults prefer smaller numbers when randomly choosing between responses 1-6. We analyze randomization choices made by both children and adults, considering a range of experimental studies and task configurations. Children - most of whom are between 8 and 11 years old - show a preference for relatively large numbers when choosing numbers 1-10. Adults show a preference for small numbers with the same response set. We report a modest association between children's age and numerical bias. However, children also exhibit a small-number bias when a smaller response set is available, and they show a preference specifically for the numbers 1-3 across many datasets. We argue that number space demonstrates both continuities (numbers 1-3 have a distinct status) and change (a developmentally emerging bias toward the left side of representational space or lower numbers).

  9. Length scale hierarchy and spatiotemporal change of alluvial morphologies over the Selenga River delta, Russia

    NASA Astrophysics Data System (ADS)

    Dong, T. Y.; Nittrouer, J.; McElroy, B. J.; Ma, H.; Czapiga, M. J.; Il'icheva, E.; Pavlov, M.; Parker, G.

    2017-12-01

    The movement of water and sediment in natural channels creates various types of alluvial morphologies that span length scales from dunes to deltas. The behavior of these morphologies is controlled microscopically by hydrodynamic conditions and bed material size, and macroscopically by hydrologic and geological settings. Alluvial morphologies can be modeled as either diffusive or kinematic waves, in accordance with their respective boundary conditions. Recently, it has been shown that the difference between these two dynamic behaviors of alluvial morphologies can be characterized by the backwater number, a dimensionless value normalizing the length scale of a morphological feature to its local hydrodynamic condition. Application of the backwater number has proven useful for evaluating the size of morphologies, including deltas (e.g., by assessing the preferential avulsion location of a lobe), and for comparing bedform types across different fluvial systems. Yet two critical questions emerge when applying the backwater number: First, how do different types of alluvial morphologies compare within a single deltaic system, where there is a hydrodynamic transition from uniform to non-uniform flow? Second, how do different types of morphologies evolve temporally within a system as a function of changing water discharge? This study addresses these questions by compiling and analyzing field data from the Selenga River delta, Russia, which include measurements of flow velocity, channel geometry, bed material grain size, and channel slope, as well as length scales of various morphologies, including dunes, island bars, meanders, bifurcations, and delta lobes. Data analyses reveal that the length scales of morphologies decrease and the backwater number increases as flow transitions from uniform to non-uniform conditions progressing downstream. It is shown that the evaluated length-scale hierarchy and planform distribution of different morphologies can be used to estimate slope, shear velocity, and sediment flux within this depositional system. The findings from this research can be applied to evaluate spatially and temporally varying morphodynamic conditions, based on structures measured from both modern systems and ancient sedimentary records.

  10. Breast Cancer Screening by Physical Examination: Randomized Trial in the Philippines

    DTIC Science & Technology

    2005-10-01

    Breast Cancer Screening by Physical Examination: Randomized Trial in the Philippines (Grant DAMD17-94-J-4327).

  11. Sequence requirement of the ade6-4095 meiotic recombination hotspot in Schizosaccharomyces pombe.

    PubMed

    Foulis, Steven J; Fowler, Kyle R; Steiner, Walter W

    2018-02-01

    Homologous recombination occurs at a greatly elevated frequency in meiosis compared to mitosis and is initiated by programmed double-strand DNA breaks (DSBs). DSBs do not occur at uniform frequency throughout the genome in most organisms, but occur preferentially at a limited number of sites referred to as hotspots. The locations of hotspots have been determined at nucleotide-level resolution in both the budding and fission yeasts, and while several patterns have emerged regarding preferred locations for DSB hotspots, it remains unclear why particular sites experience DSBs at much higher frequency than other sites with seemingly similar properties. Short sequence motifs, which are often sites for binding of transcription factors, are known to be responsible for a number of hotspots. In this study we identified the minimum sequence required for activity of one such motif, identified in a screen of random sequences capable of producing recombination hotspots. The experimentally determined sequence, GGTCTRGACC, closely matches the previously inferred sequence. Full hotspot activity requires an effective sequence length of 9.5 bp, whereas moderate activity requires an effective sequence length of approximately 8.2 bp and shows significant association with DSB hotspots. In combination with our previous work, this result is consistent with a large number of different sequence motifs capable of producing recombination hotspots, and supports a model in which hotspots can be rapidly regenerated by mutation as they are lost through recombination.

  12. A high-throughput method for generating uniform microislands for autaptic neuronal cultures

    PubMed Central

    Sgro, Allyson E.; Nowak, Amy L.; Austin, Naola S.; Custer, Kenneth L.; Allen, Peter B.; Chiu, Daniel T.; Bajjalieh, Sandra M.

    2013-01-01

    Generating microislands of culture substrate on coverslips by spray application of poly-D-lysine is a commonly used method for culturing isolated neurons that form self (autaptic) synapses. This preparation has multiple advantages for studying synaptic transmission in isolation; however, generating microislands by spraying produces islands of non-uniform size, and thus cultures vary widely in the number of islands containing single neurons. To address these problems, we developed a high-throughput method for reliably generating uniformly shaped microislands of culture substrate. Stamp molds formed of poly(dimethylsiloxane) (PDMS) were fabricated with arrays of circles and used to generate stamps made of 9.2% agarose. The agarose stamps were capable of loading sufficient poly-D-lysine and collagen dissolved in acetic acid to rapidly generate coverslips containing at least 64 microislands per coverslip. When hippocampal neurons were cultured on these coverslips, there were significantly more single-neuron islands per coverslip. We noted that single neurons tended to form one of three distinct neurite-arbor morphologies, which varied with island size and the location of the cell body on the island. To our surprise, the number of synapses per autaptic neuron did not correlate with arbor shape or island size, suggesting that other factors regulate the number of synapses formed by isolated neurons. The stamping method we report can be used to increase the number of single-neuron islands per culture and aid in the rapid visualization of microislands. PMID:21515305

  13. Engineered surface scatterers in edge-lit slab waveguides to improve light delivery in algae cultivation.

    PubMed

    Ahsan, Syed Saad; Pereyra, Brandon; Jung, Erica E; Erickson, David

    2014-10-20

    Most existing photobioreactors do a poor job of distributing light uniformly due to shading effects. One method by which this could be improved is through the use of internal wave-guiding structures incorporating engineered light scattering schemes. By varying the density of these scatterers, one can control the spatial distribution of light inside the reactor enabling better uniformity of illumination. Here, we compare a number of light scattering schemes and evaluate their ability to enhance biomass accumulation. We demonstrate a design for a gradient distribution of surface scatterers with uniform lateral scattering intensity that is superior for algal biomass accumulation, resulting in a 40% increase in the growth rate.

  14. A forebody design technique for highly integrated bottom-mounted scramjets with application to a hypersonic research airplane

    NASA Technical Reports Server (NTRS)

    Edwards, C. L. W.

    1974-01-01

    An inviscid technique for designing forebodies which produce uniformly precompressed flows at the inlet entrance for bottom-mounted scramjets has been developed so that geometric constraints resulting from design trade-offs can be effectively evaluated. The flow fields resulting from several forebody designs generated in support of a hypersonic research airplane conceptual design study have been analyzed in detail with three-dimensional characteristics calculations to verify the uniform flow conditions. For the designs analyzed, uniform flow is maintained over a wide range of flight conditions (Mach number equals 4 to 10; angle of attack equals 6 deg to 10 deg) corresponding to scramjet operation flight envelope of the research airplane.

  15. Heat Transfer to Longitudinal Laminar Flow Between Cylinders

    NASA Technical Reports Server (NTRS)

    Sparrow, Ephraim M.; Loeffler, Albert L. Jr.; Hubbard, H. A.

    1960-01-01

    Consideration is given to the fully developed heat transfer characteristics for longitudinal laminar flow between cylinders arranged in an equilateral triangular array. The analysis is carried out for the condition of uniform heat transfer per unit length. Solutions are obtained for the temperature distribution, and from these, Nusselt numbers are derived for a wide range of spacing-to-diameter ratios. It is found that as the spacing ratio increases, so also does the wall-to-bulk temperature difference for a fixed heat transfer per unit length. Corresponding to a uniform surface temperature around the circumference of a cylinder, the circumferential variation of the local heat flux is computed. For spacing ratios of 1.5 - 2.0 and greater, uniform peripheral wall temperature and uniform peripheral heat flux are simultaneously achieved. A simplified analysis which neglects circumferential variations is also carried out, and the results are compared with those from the more exact formulation.

  16. An improved algorithm for de-striping of ocean colour monitor imageries aided by measured sensor characteristics

    NASA Astrophysics Data System (ADS)

    Dutt, Ashutosh; Mishra, Ashish; Goswami, D. R.; Kumar, A. S. Kiran

    2016-05-01

    Push-broom sensors, in the bands meant to study oceans, generally suffer from residual non-uniformity even after radiometric correction. The in-orbit data from OCM-2 show pronounced striping in the lower bands. There have been many attempts and different approaches to solve the problem using the image data itself. The success of each algorithm hinges on the quality of the uniform region identified. In this paper, an image-based destriping algorithm is presented with constraints derived from a ground calibration exercise. The basis of the methodology is the determination of pixel-to-pixel non-uniformity from uniform segments identified and collected from a large number of images covering the dynamic range of the sensor. The results show the effectiveness of the algorithm over different targets. The performance is qualitatively evaluated by visual inspection and quantitatively measured by two parameters.

  17. Research on Creep Relaxation Non-uniformity and Effect on Performance of Combined Rotor

    NASA Astrophysics Data System (ADS)

    Liu, Qingya; He, Jingfei; Zhao, Lijia

    2017-11-01

    The combined rotor of a gas turbine is connected by a number of rod bolts. Because the rotor operates in a high-temperature environment for long periods, the rod bolts creep and relax. Under the influence of elastic interaction, the loss of pretightening force of rod bolts at different positions is non-uniform, which causes the connection of the combined rotor to become mistuned. In this paper, a creep relaxation non-uniformity model for a class F heavy-duty gas turbine is established. On this basis, the performance degradation and structural strength change of the combined rotor resulting from creep relaxation non-uniformity of the rod bolts are studied. The results show that the ratio of preload mistuning increases with time and then converges, with a threshold inflection point at about seven thousand hours.

  18. Generation of pseudo-random numbers

    NASA Technical Reports Server (NTRS)

    Howell, L. W.; Rheinfurth, M. H.

    1982-01-01

    Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
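
    One standard method in this family is inverse-transform sampling: a Uniform(0,1) draw is mapped through the inverse CDF of the target distribution. This is a generic sketch of the technique, not code from the NASA report; for an Exponential(lam) target, F^-1(u) = -ln(1-u)/lam.

```python
import math
import random

# Inverse-transform sampling for an Exponential(lam) distribution:
# if U ~ Uniform(0,1), then -ln(1-U)/lam ~ Exponential(lam).
def exponential_sample(lam, u=None):
    u = random.random() if u is None else u
    return -math.log(1.0 - u) / lam

random.seed(42)
xs = [exponential_sample(2.0) for _ in range(100_000)]
mean = sum(xs) / len(xs)   # should be close to the theoretical mean 1/lam = 0.5
```

    The same pattern covers any distribution with a tractable inverse CDF; for distributions without one, rejection sampling is the usual alternative surveyed in reports of this kind.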

  19. 77 FR 39687 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ..., Form, and OMB Number: Defense Sexual Assault Incident Database (DSAID); OMB Control Number 0704-0482... sexual assault data collected by the Military Services. This database shall be a centralized, case-level database for the uniform collection of data regarding incidence of sexual assaults involving persons...

  20. 78 FR 46325 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-31

    ... to facilitate informed decision making regarding deposit accounts offered at depository institutions... information, such as account numbers or social security numbers, should not be included. FOR FURTHER... consumer ability to make informed decisions regarding deposit accounts by requiring uniformity in the...

  1. 77 FR 59372 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-27

    ... program objectives and most responsive to the solicitation. The selection criteria will be contained in... it displays a currently valid OMB control number. Food and Nutrition Service Title: Uniform Grant...: The Food and Nutrition Service (FNS) has a number of non-entitlement discretionary grant [[Page 59373...

  2. Direct generation of all-optical random numbers from optical pulse amplitude chaos.

    PubMed

    Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong

    2012-02-13

    We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from pulse amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can undergo a quasi-periodic route to chaos. Such chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. In this method, we require neither a sampling procedure nor externally triggered clocks; instead, we directly quantize the chaotic pulse stream into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse amplitude chaos has no periodicity and possesses a highly symmetric amplitude distribution. Thus, in theory, the obtained random number sequence has high-quality randomness without post-processing, as verified by industry-standard statistical tests.
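
    A software analogue of the quantization step may make the idea concrete. This is a hedged toy model, not the paper's system: a logistic map stands in for the laser's chaotic pulse amplitudes, and a simple threshold comparison stands in for the all-optical flip-flop.

```python
# Threshold quantization of a chaotic amplitude stream into bits.
# Logistic map at r = 3.99 is a common software stand-in for a chaotic source;
# its amplitude distribution is roughly symmetric about 0.5, so a mid-range
# threshold yields an approximately balanced raw bit stream.
def chaotic_bits(n, x0=0.3, r=3.99, threshold=0.5):
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)              # next chaotic amplitude
        bits.append(1 if x > threshold else 0)
    return bits

b = chaotic_bits(10_000)
ones = sum(b) / len(b)                     # fraction of 1s in the raw stream
```

    A real evaluation would then run the stream through industry-standard statistical test suites, as the paper does for its simulated pulse chaos.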

  3. Symptoms related to new flight attendant uniforms.

    PubMed

    McNeely, Eileen; Staffa, Steven J; Mordukhovich, Irina; Coull, Brent

    2018-01-03

    Flight attendants at Alaska Airlines reported health symptoms after the introduction of new uniforms in 2011. The airline replaced the uniforms in 2014 without acknowledging harm. To understand possible uniform-related health effects, we analyzed self-reported health symptoms in crew who participated in the Harvard Flight Attendant Health Study between 2007 and 2015, the period before, during, and after the introduction of new uniforms. We calculated a standardized prevalence of respiratory, dermatological and allergic symptoms at baseline, as well as during and after uniform changes in 684 flight attendants with a varying number of surveys completed across each time point. We used Generalized Estimating Equations (GEE) to model the association between symptoms at baseline versus the exposure period after adjusting for age, gender and smoking status and weighting respondents for the likelihood of attrition over the course of the study period. We found the following symptom prevalence (per 100) increased after the introduction of new uniforms: multiple chemical sensitivity (10 vs 5), itchy/irritated skin (25 vs 13), rash/hives (23 vs 13), itchy eyes (24 vs 14), blurred vision (14 vs 6), sinus congestion (28 vs 24), ear pain (15 vs 12), sore throat (9 vs 5), cough (17 vs 7), hoarseness/loss of voice (12 vs 3), and shortness of breath (8 vs 3). The odds of several symptoms significantly increased compared to baseline after adjusting for potential confounders. This study found a relationship between health complaints and the introduction of new uniforms in this longitudinal occupational cohort.

  4. Differences between wafer and bake plate temperature uniformity in proximity bake: a theoretical and experimental study

    NASA Astrophysics Data System (ADS)

    Ramanan, Natarajan; Kozman, Austin; Sims, James B.

    2000-06-01

    As the lithography industry moves toward finer features, specifications on temperature uniformity of the bake plates are expected to become more stringent. Consequently, aggressive improvements are needed to conventional bake station designs to make them perform significantly better than current market requirements. To this end, we have conducted a rigorous study that combines state-of-the-art simulation tools and experimental methods to predict the impact of the parameters that influence the uniformity of the wafer in proximity bake. The key observation from this detailed study is that the temperature uniformity of the wafer in proximity mode depends on a number of parameters in addition to the uniformity of the bake plate itself. These parameters include the lid design, the air flow distribution around the bake chamber, bake plate design and flatness of the bake plate and wafer. By performing careful experimental studies that were guided by extensive numerical simulations, we were able to understand the relative importance of each of these parameters. In an orderly fashion, we made appropriate design changes to curtail or eliminate the nonuniformity caused by each of these parameters. After implementing all these changes, we have now been able to match or improve the temperature uniformity of the wafer in proximity with that of a contact measurement on the bake plate. The wafer temperature uniformity is also very close to the theoretically predicted uniformity of the wafer.

  5. Recommendations and illustrations for the evaluation of photonic random number generators

    NASA Astrophysics Data System (ADS)

    Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi

    2017-09-01

    The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
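
    The flavor of a pattern-entropy rate estimate can be sketched in a few lines. This is a simplified illustration in the spirit of the Cohen-Procaccia approach, not the authors' estimator: the coarse-graining into symbols plays the role of ε, and the increment of the block entropy H(L) over block length L estimates the entropy rate.

```python
import math
import random
from collections import Counter

# Block (pattern) entropy H(L): Shannon entropy of length-L symbol blocks.
def block_entropy(symbols, L):
    counts = Counter(tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1))
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Entropy-rate estimate as the block-entropy increment H(L+1) - H(L).
def entropy_rate(symbols, L=8):
    return block_entropy(symbols, L + 1) - block_entropy(symbols, L)

random.seed(0)
fair = [random.getrandbits(1) for _ in range(20_000)]
h = entropy_rate(fair)   # close to 1 bit/sample for an i.i.d. fair bit stream
```

    Applying such an estimate to the raw (pre-post-processing) digitized output is exactly the kind of source-level evaluation the abstract advocates; finite-sample bias grows quickly with L, which is why careful estimators matter.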

  6. Scalable randomized benchmarking of non-Clifford gates

    NASA Astrophysics Data System (ADS)

    Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.

  7. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
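
    As a language-agnostic sketch of the additive-random-sampling idea (not the φ-OTDR instrument itself, and with illustrative signal and rate values), the following example samples a tone above the mean-rate Nyquist limit at randomly modulated intervals and recovers its frequency with a nonuniform-DFT periodogram; with uniform sampling at the same mean rate, the tone would alias.

```python
import cmath
import math
import random

rng = random.Random(1)
f_sig = 7.3        # Hz: above the 5 Hz Nyquist limit of uniform 10 Sa/s sampling
mean_dt = 0.1      # mean sampling interval (10 Sa/s on average)

# Additive random sampling: each new interval is drawn afresh around the mean
times, now = [], 0.0
for _ in range(2000):
    now += rng.uniform(0.2 * mean_dt, 1.8 * mean_dt)
    times.append(now)
signal = [math.sin(2 * math.pi * f_sig * t) for t in times]

def power(f):
    """Magnitude of the nonuniform DFT of the samples at frequency f."""
    return abs(sum(x * cmath.exp(-2j * math.pi * f * t)
                   for x, t in zip(signal, times)))

# Scan well past the mean-rate Nyquist limit; random sampling smears the aliases
freqs = [k * 0.05 for k in range(1, 400)]    # 0.05 .. 19.95 Hz
f_est = max(freqs, key=power)
print(f_est)    # peaks near 7.3 Hz, not at an alias
```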

  8. Directed Random Markets: Connectivity Determines Money

    NASA Astrophysics Data System (ADS)

    Martínez-Martínez, Ismael; López-Ruiz, Ricardo

    2013-12-01

    The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and we find that these stationary probability distributions are robust and they are not affected by the topology of the underlying network. We introduce a new family of interactions: random but directed ones. In this case, the topology is found to be determinant, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear.
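
    The undirected exchange rule that produces the BG distribution can be sketched in a few lines. This is a generic kinetic-exchange toy model consistent with the description above (agent count and money values are arbitrary), not the authors' network implementation:

```python
import random

rng = random.Random(42)
N, M = 500, 500.0          # number of agents, initial money per agent
money = [M] * N

# Random undirected exchanges: a random pair pools its money and re-splits it
for _ in range(200_000):
    i, j = rng.sample(range(N), 2)
    pool = money[i] + money[j]
    r = rng.random()
    money[i], money[j] = r * pool, (1.0 - r) * pool

mean = sum(money) / N
frac_below_mean = sum(1 for m in money if m < mean) / N
print(round(mean, 1))             # total money is conserved
print(round(frac_below_mean, 2))  # near 1 - 1/e ≈ 0.63, as for an exponential law
```

    The fraction of agents below the mean approaching 1 − 1/e is a quick signature of the exponential (BG) equilibrium.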

  9. Visible digital watermarking system using perceptual models

    NASA Astrophysics Data System (ADS)

    Cheng, Qiang; Huang, Thomas S.

    2001-03-01

    This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image for the purposes of immediate claim of copyright, instantaneous recognition of owner or creator, or deterrence to piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models so that the watermark is visually uniform. The resulting watermarked image is visually pleasing and unobtrusive. The location, size and strength of the watermark vary randomly with the underlying image. The randomization makes automatic removal of the watermark difficult even when the algorithm is publicly known, as long as the key to the random sequence generator is kept secret. The experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.

  10. Hazard Function Estimation with Cause-of-Death Data Missing at Random.

    PubMed

    Wang, Qihua; Dinse, Gregg E; Liu, Chunling

    2012-04-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.

  11. Linear instability of compound liquid threads in the presence of surfactant

    NASA Astrophysics Data System (ADS)

    Ye, Han-yu; Yang, Li-jun; Fu, Qing-fei

    2017-08-01

    This paper investigates the linear instability of compound liquid threads in the presence of surfactant. The limitation of the one-dimensional approximation in previous work [Craster, Matar, and Papageorgiou, Phys. Fluids 15, 3409 (2003), 10.1063/1.1611879] is removed; hence the radial dependence of the axial velocity can be taken into account. Therefore both the stretching and the squeezing modes can be investigated. The disturbance growth rate is reduced with an increase of the dimensionless surface-tension gradient (whether in the stretching or squeezing mode). For the parameter range investigated, it is found that the squeezing mode is much more sensitive to the Marangoni effect than the stretching mode. The disturbance axial velocity and disturbance surfactant concentration for a typical case are investigated. It is found that the disturbance axial velocity is close to uniform in the stretching mode when the dimensionless surface-tension gradient and the wave number are small. In contrast, for wave numbers close to cutoff, or a large dimensionless surface-tension gradient, or in the squeezing mode, the disturbance axial velocity is not uniform. Analytical relations between growth rate and wave number valid in the long-wave limit are derived. In the stretching mode, the flow moves from an extension-dominated regime to a shear-dominated regime when β1 + Rσβ2 increases through 1 + Rσ, where β1 and β2 are the dimensionless surface-tension gradients of the inner and outer interfaces, respectively, R is the radius ratio, and σ is the surface tension ratio. In the squeezing mode, whatever the values of β1 and β2, the flow is always in the shear-dominated regime. The expressions of the leading-order axial perturbation velocity in the long-wave limit are derived and they explain the applicability of one-dimensional models.
It is found that the leading-order axial velocity in the extension-dominated regime is always uniform and one-dimensional models work well in this regime. For the shear-dominated regime, the leading-order axial velocity can be either nonuniform or close to uniform, depending on the ratio between the dimensionless surfactant diffusivity d1 and the Laplace number La: when d1 ≫ La the velocity profile is close to uniform and one-dimensional models work well; otherwise the velocity profile is nonuniform and one-dimensional models fail.

  12. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
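
    A minimal sketch of the general idea, assuming a streaming update of N fixed frequency bins (this is an illustrative nonuniform-DFT accumulator with O(N) work per sample and a made-up heart-rate signal, not the authors' exact RFT recursion):

```python
import cmath
import math
import random

class RunningNDFT:
    """Streaming nonuniform DFT: each new sample updates N bins in O(N)."""
    def __init__(self, freqs):
        self.freqs = list(freqs)
        self.bins = [0j] * len(self.freqs)

    def update(self, t, x):
        # No interpolation to a uniform grid: the sample time enters directly
        for k, f in enumerate(self.freqs):
            self.bins[k] += x * cmath.exp(-2j * math.pi * f * t)

# Irregular beat times whose RR interval is modulated at 0.1 Hz
rng = random.Random(3)
ndft = RunningNDFT([0.02 * k for k in range(1, 26)])   # 0.02 .. 0.50 Hz grid
t = 0.0
for _ in range(600):
    rr = 0.8 + 0.1 * math.sin(2 * math.pi * 0.1 * t) + rng.gauss(0.0, 0.01)
    t += rr
    ndft.update(t, rr - 0.8)   # remove the nominal RR so the modulation dominates

mags = [abs(b) for b in ndft.bins]
peak = ndft.freqs[mags.index(max(mags))]
print(round(peak, 2))   # the 0.1 Hz modulation is recovered without interpolation
```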

  13. KMCLib 1.1: Extended random number support and technical updates to the KMCLib general framework for kinetic Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-11-01

    We here present a revised version, v1.1, of the KMCLib general framework for kinetic Monte-Carlo (KMC) simulations. The generation of random numbers in KMCLib now relies on the C++11 standard library implementation, and support has been added for the user to choose from a set of C++11 implemented random number generators. The Mersenne-twister, the 24 and 48 bit RANLUX and a 'minimal-standard' PRNG are supported. We have also included the possibility to use true random numbers via the C++11 std::random_device generator. This release also includes technical updates to support the use of an extended range of operating systems and compilers.
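
    KMCLib's generator selection lives in C++; as a language-neutral illustration of the two regimes it exposes, the Python snippet below contrasts a seeded Mersenne-Twister stream (reproducible, the same algorithm as C++11 std::mt19937) with an OS-entropy stream (non-reproducible, analogous to std::random_device):

```python
import random

# Reproducible stream: Python's default generator is the Mersenne Twister,
# the same algorithm available in C++11 as std::mt19937.
seeded = random.Random(2015)
run_a = [seeded.random() for _ in range(3)]
seeded.seed(2015)
run_b = [seeded.random() for _ in range(3)]
print(run_a == run_b)    # True: a fixed seed reproduces the sequence exactly

# Non-reproducible stream: SystemRandom draws from OS entropy, analogous to
# the C++11 std::random_device option.
hw = random.SystemRandom()
draws = [hw.random() for _ in range(3)]
print(all(0.0 <= u < 1.0 for u in draws))    # True
```

    Seeded generators make a KMC trajectory repeatable for debugging, while the entropy-backed source trades reproducibility for unpredictability.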

  14. Generalized Effective Medium Theory for Particulate Nanocomposite Materials

    PubMed Central

    Siddiqui, Muhammad Usama; Arif, Abul Fazal M.

    2016-01-01

    The thermal conductivity of particulate nanocomposites is strongly dependent on the size, shape, orientation and dispersion uniformity of the inclusions. To correctly estimate the effective thermal conductivity of the nanocomposite, all these factors should be included in the prediction model. In this paper, the formulation of a generalized effective medium theory for the determination of the effective thermal conductivity of particulate nanocomposites with multiple inclusions is presented. The formulated methodology takes into account all the factors mentioned above and can be used to model nanocomposites with multiple inclusions that are randomly oriented or aligned in a particular direction. The effect of inclusion dispersion non-uniformity is modeled using a two-scale approach. The applications of the formulated effective medium theory are demonstrated using previously published experimental and numerical results for several particulate nanocomposites. PMID:28773817

  15. Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1991-01-01

    The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.
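
    The direction of this enhancement follows from the convexity of the gravitational-focusing term in the two-body rate. A toy Monte Carlo (two-body rate only, with assumed illustrative values for escape speed and dispersion; it does not reproduce the paper's factor of ~3, which requires the full three-body treatment) makes the point:

```python
import math
import random

rng = random.Random(7)

def maxwellian_speed(sigma):
    """Speed of a 3D Gaussian velocity with per-axis dispersion sigma."""
    return math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))

v_esc = 5.0    # protoplanet escape speed (illustrative units)
sigma = 0.5    # per-axis dispersion, so v_rms = sqrt(3) * sigma

def rate(v):
    """Two-body accretion rate: flux ~ v times focusing factor (1 + v_esc^2/v^2)."""
    return v * (1.0 + (v_esc / v) ** 2)

v_rms = math.sqrt(3.0) * sigma
samples = 100_000
avg_rate = sum(rate(maxwellian_speed(sigma)) for _ in range(samples)) / samples
enhancement = avg_rate / rate(v_rms)
print(round(enhancement, 2))   # > 1: averaging over the distribution beats the RMS shortcut
```

    Because the 1/v² focusing term is convex, slow planetesimals contribute disproportionately, so the distribution-averaged rate exceeds the rate evaluated at the single RMS speed.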

  16. The Effect of Laziness in Group Chase and Escape

    NASA Astrophysics Data System (ADS)

    Masuko, Makoto; Hiraoka, Takayuki; Ito, Nobuyasu; Shimada, Takashi

    2017-08-01

    The effect of laziness in the group chase and escape problem is studied using a simple model. Laziness is introduced as random walks in two ways: uniformly and in a "division of labor" way. It is shown that while the former is always ineffective, the latter can improve the efficiency of catching through the formation of a pincer-attack configuration by diligent and lazy chasers.

  17. Concurrent infection with sibling Trichinella species in a natural host.

    PubMed

    Pozio, E; Bandi, C; La Rosa, G; Järvis, T; Miller, I; Kapel, C M

    1995-10-01

    Random amplified polymorphic DNA (RAPD) analysis of individual Trichinella muscle larvae, collected from several sylvatic and domestic animals in Estonia, revealed concurrent infection of a raccoon dog with Trichinella nativa and Trichinella britovi. This finding provides strong support for their taxonomic ranking as sibling species. These 2 species appear uniformly distributed among sylvatic animals throughout Estonia, while Trichinella spiralis appears restricted to the domestic habitat.

  18. Computationally Efficient Resampling of Nonuniform Oversampled SAR Data

    DTIC Science & Technology

    2010-05-01

    noncoherently. The resampled data is calculated using both a simple average and a weighted average of the demodulated data. The average nonuniform...trials with randomly varying accelerations. The results are shown in Fig. 5 for the noncoherent power difference and Fig. 6 for the coherent power...simple average. Figure 5. Noncoherent difference between SAR imagery generated with uniform sampling and nonuniform sampling that was resampled

  19. Numerical investigation of the heat transfer of a ferrofluid inside a tube in the presence of a non-uniform magnetic field

    NASA Astrophysics Data System (ADS)

    Hariri, Saman; Mokhtari, Mojtaba; Gerdroodbary, M. Barzegar; Fallah, Keivan

    2017-02-01

    In this article, a three-dimensional numerical investigation is performed to study the effect of a magnetic field on a ferrofluid inside a tube. This study comprehensively analyzes the influence of a non-uniform magnetic field on the heat transfer of a tube through which a ferrofluid (water with 0.86 vol% Fe3O4 nanoparticles) flows. The SIMPLEC algorithm is used for obtaining the flow and heat transfer inside the tube. The influence of various parameters, such as concentration of nanoparticles, intensity of the magnetic field, wire distance and Reynolds number, on the heat transfer is investigated. According to the obtained results, the presence of a non-uniform magnetic field significantly increases the Nusselt number (more than 300%) inside the tube. Also, the magnetic field induced by the parallel wire affects the average velocity of the ferrofluid and forms two strong eddies in the tube. Our findings show that the diffusion also rises as the concentration of the nanoparticles is increased.

  20. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.

  1. Helicon waves in uniform plasmas. IV. Bessel beams, Gendrin beams, and helicons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrutia, J. M.; Stenzel, R. L.

    Electromagnetic waves in the low frequency whistler mode regime are investigated experimentally and by digital data superposition. The radiation from a novel circular antenna array is shown to produce highly collimated helicon beams in a uniform unbounded plasma. The differences to Bessel beams in free space are remarked upon. Low divergence beams arise from the parallel group velocity of whistlers with phase velocity either along the guide field or at the Gendrin angle. Waves with angular momentum are produced by phasing the array in the circular direction. The differences in the field topologies for positive and negative mode numbers are shown. It is also shown that in uniform plasmas, the radial amplitude profile of the waves depends on the antenna field topology. Thus, there are no helicon “eigenmodes” with radial Bessel function profiles in uniform plasmas. It is pointed out that phase measurements in helicon devices indicate radial wave propagation which is inconsistent with helicon eigenmode theory based on paraxial wave propagation. Trivelpiece-Gould modes also exist in uniform unbounded plasmas.

  2. Influence of movable test section elements configuration on its drag and flow field uniformity at transonic speeds

    NASA Astrophysics Data System (ADS)

    Glazkov, S. A.; Gorbushin, A. R.; Osipova, S. L.; Semenov, A. V.

    2016-10-01

    The report describes the results of flow field experimental research in the TsAGI T-128 transonic wind tunnel. During the tests, the Mach number, stagnation pressure, test section wall perforation ratio, and angles between the test section panels and mixing chamber flaps were varied. Based on the test results, we determined corrections to the free-stream Mach number (related to the flow speed difference between the model location and the zone of static pressure measurement on the test section walls), the nonuniformity of the longitudinal velocity component at the model location, and the optimal position of the movable test section elements for providing flow field uniformity in the test section and minimizing test leg drag.

  3. Influence of Soret-Dufour and thermophoresis on hydromagnetic mixed convection heat and mass transfer over an inclined flat plate with non-uniform heat source/sink and chemical reaction

    NASA Astrophysics Data System (ADS)

    Pal, Dulal; Mondal, Hiranmoy

    2018-03-01

    The paper is devoted to the study of thermophoresis and Soret-Dufour effects on magnetohydrodynamic mixed convective heat and mass transfer over an inclined flat plate with a non-uniform heat source/sink. The governing non-linear coupled ordinary differential equations are solved numerically using the Runge-Kutta-Fehlberg technique with a shooting scheme. The effects of various physical parameters on the velocity, temperature, and concentration profiles are depicted graphically. The values of the skin-friction coefficient, Nusselt number and Sherwood number are presented in tabular form. It is found that increases in the thermophoretic and chemical reaction parameters retard the velocity and concentration distributions in the boundary layer.

  4. An Assessment of the Uniform Funding Policy of DoD Directive 3200.11.

    DTIC Science & Technology

    1980-09-01

    Unpublished master's thesis. GSM/SM/73-10, AFIT/EN, Wright-Patterson AFB OH 45433, 7 January 1974. Horngren, Charles T. Cost Accounting: A Management...reverse side if necessary and identify by block number) Uniform Funding Policy Test Facilities Test and Evaluation Cost Accounting Accounting 20...segregated from overhead as a cost accounting device in both Government and industry. Historically, this distinction has merely aided distribution of total

  5. Image enhancement of optical images for binary system of melanocytes and keratinocytes

    NASA Astrophysics Data System (ADS)

    Takanezawa, S.; Baba, A.; Sako, Y.; Ozaki, Y.; Date, A.; Toyama, K.; Morita, S.

    2013-05-01

    Automatic determination of the cell shapes of large numbers of melanocytes from optical images of human skin models has been largely unsuccessful, owing to the complexities introduced by dendrites and to melanin pigmentation over the keratinocytes, which gives unclear outlines. Here, we present an image enhancement procedure for enhancing the contrast of images while removing the non-uniformity of the background. The brightness is also normalized for the non-uniform population density of melanocytes.

  6. Naval War College Review. Volume 60, Number 1, Winter 2007

    DTIC Science & Technology

    2007-01-01

    hospital system predates World War II, when each service provided for all of its own health care.1 In the sixty years since the conclusion of that conflict...Department of Military and Emergency Medicine at the Uniformed Services University of the Health Sciences in Bethesda, Maryland. He is also professor...and former president of the Uniformed Services University of the Health Sciences. Naval War College Review, Winter 2007, Vol. 60, No. 1

  7. Influence of beam efficiency through the patient-specific collimator on secondary neutron dose equivalent in double scattering and uniform scanning modes of proton therapy.

    PubMed

    Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K

    2010-06-01

    Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): Double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when a single circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam. 
The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.

  8. Uniform Atmospheric Retrievals of Ultracool Late-T and Early-Y dwarfs

    NASA Astrophysics Data System (ADS)

    Garland, Ryan; Irwin, Patrick

    2018-01-01

    A significant number of ultracool (<600K) extrasolar objects have been unearthed in the past decade thanks to wide-field surveys such as WISE. These objects present a perfect testbed for examining the evolution of atmospheric structure as we transition from typically hot extrasolar temperatures to the temperatures found within our Solar System. By examining these types of objects with a uniform retrieval method, we hope to elucidate any trends and (dis)similarities found in atmospheric parameters, such as chemical abundances, temperature-pressure profile, and cloud structure, for a sample of 7 ultracool brown dwarfs as we transition from hotter (~700K) to colder objects (~450K). We perform atmospheric retrievals on two late-T and five early-Y dwarfs. We use the NEMESIS atmospheric retrieval code coupled to a Nested Sampling algorithm, along with a standard uniform model for all of our retrievals. The uniform model assumes the atmosphere is described by a gray radiative-convective temperature profile, (optionally) a self-consistent Mie scattering cloud, and a number of relevant gases. We first verify our methods by comparing it to a benchmark retrieval for Gliese 570D, which is found to be consistent. Furthermore, we present the retrieved gaseous composition, temperature structure, spectroscopic mass and radius, cloud structure and the trends associated with decreasing temperature found in this small sample of objects.

  9. Optimization of the inter-tablet coating uniformity for an active coating process at lab and pilot scale.

    PubMed

    Just, Sarah; Toschkoff, Gregor; Funke, Adrian; Djuric, Dejan; Scharrer, Georg; Khinast, Johannes; Knop, Klaus; Kleinebudde, Peter

    2013-11-30

    The objective of this study was to enhance the inter-tablet coating uniformity in an active coating process at lab and pilot scale by statistical design of experiments. The API candesartan cilexetil was applied onto gastrointestinal therapeutic systems containing the API nifedipine to obtain fixed dose combinations of these two drugs with different release profiles. At lab scale, the parameters pan load, pan speed, spray rate and number of spray nozzles were examined. At pilot scale, the parameters pan load, pan speed, spray rate, spray time, and spray pressure were investigated. A low spray rate and a high pan speed improved the coating uniformity at both scales. The number of spray nozzles was identified as the most influential variable at lab scale. With four spray nozzles, the highest CV value was equal to 6.4%, compared to 13.4% obtained with two spray nozzles. The lowest CV of 4.5% obtained with two spray nozzles was further reduced to 2.3% when using four spray nozzles. At pilot scale, CV values between 2.7% and 11.1% were achieved. Since the test of uniformity of dosage units accepts CV values of up to 6.25%, this active coating process is well suited to comply with the pharmacopoeial requirements. Copyright © 2013 Elsevier B.V. All rights reserved.
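
    The acceptance criterion quoted above is a simple coefficient-of-variation computation. The sketch below uses hypothetical per-tablet API amounts, not the study's data:

```python
import math

def coating_cv(api_mg):
    """Coefficient of variation (%) of per-tablet coated API mass."""
    n = len(api_mg)
    mean = sum(api_mg) / n
    sd = math.sqrt(sum((m - mean) ** 2 for m in api_mg) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical per-tablet API amounts (mg) from one coating run
batch = [8.1, 7.9, 8.3, 8.0, 7.8, 8.2, 8.0, 7.9, 8.1, 8.0]
cv = coating_cv(batch)
print(round(cv, 2), cv <= 6.25)    # a tight batch passes the 6.25% CV criterion
```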

  10. Mutual coupling, channel model, and BER for curvilinear antenna arrays

    NASA Astrophysics Data System (ADS)

    Huang, Zhiyong

    This dissertation introduces a wireless communications system with an adaptive beam-former and investigates its performance with different antenna arrays. Mutual coupling, real antenna elements and channel models are included to examine the system performance. In a beamforming system, mutual coupling (MC) among the elements can significantly degrade the system performance. However, MC effects can be compensated if an accurate model of mutual coupling is available. A mutual coupling matrix model is utilized to compensate mutual coupling in the beamforming of a uniform circular array (UCA). Its performance is compared with other models in uplink and downlink beamforming scenarios. In addition, the predictions are compared with measurements and verified with results from full-wave simulations. In order to accurately investigate the minimum mean-square-error (MSE) of an adaptive array in MC, two different noise models, the environmental and the receiver noise, are modeled. The minimum MSEs with and without data domain MC compensation are analytically compared. The influence of mutual coupling on the convergence is also examined. In addition, the weight compensation method is proposed to attain the desired array pattern. Adaptive arrays with different geometries are implemented with the minimum MSE algorithm in the wireless communications system to combat interference at the same frequency. The bit-error-rate (BER) of systems with UCA, uniform rectangular array (URA) and UCA with center element are investigated in additive white Gaussian noise plus well-separated signals or random direction signals scenarios. The output SINR of an adaptive array with multiple interferers is analytically examined. The influence of the adaptive algorithm convergence on the BER is investigated. The UCA is then investigated in a narrowband Rician fading channel. The channel model is built and the space correlations are examined. 
The influence of the number of signal paths, number of interferers, Doppler spread and convergence is investigated. The tracking mode is introduced to the adaptive array system, and it further improves the BER. The benefit of using a faster data rate (wider bandwidth) is discussed. In order to have better performance in a 3D space, the geometries of uniform spherical arrays (USAs) are presented and different configurations of USAs are discussed. The LMS algorithm based on temporal a priori information is applied to UCAs and USAs to beamform the patterns. Their performances are compared based on simulation results. Based on the analytical and simulation results, it can be concluded that mutual coupling slightly influences the performance of the adaptive array in communication systems. In addition, arrays with curvilinear geometries perform well in AWGN and fading channels.

  11. 34 CFR 300.816 - Allocations to LEAs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... numbers of children enrolled in public and private elementary schools and secondary schools within the LEA... data. For the purpose of making grants under this section, States must apply on a uniform basis across... private elementary and secondary schools and the numbers of children living in poverty. (Authority: 20 U.S...

  12. 34 CFR 300.816 - Allocations to LEAs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... numbers of children enrolled in public and private elementary schools and secondary schools within the LEA... data. For the purpose of making grants under this section, States must apply on a uniform basis across... private elementary and secondary schools and the numbers of children living in poverty. (Authority: 20 U.S...

  13. 34 CFR 300.816 - Allocations to LEAs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... numbers of children enrolled in public and private elementary schools and secondary schools within the LEA... data. For the purpose of making grants under this section, States must apply on a uniform basis across... private elementary and secondary schools and the numbers of children living in poverty. (Authority: 20 U.S...

  14. 34 CFR 300.816 - Allocations to LEAs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... numbers of children enrolled in public and private elementary schools and secondary schools within the LEA... data. For the purpose of making grants under this section, States must apply on a uniform basis across... private elementary and secondary schools and the numbers of children living in poverty. (Authority: 20 U.S...

  15. Measuring attitudes about women's recourse after exposure to intimate partner violence: the ATT-RECOURSE scale.

    PubMed

    Yount, Kathryn M; VanderEnde, Kristin; Zureick-Brown, Sarah; Minh, Tran Hung; Schuler, Sidney Ruth; Anh, Hoang Tu

    2014-06-01

    Attitudes about intimate partner violence (IPV) against women are widely surveyed, but attitudes about women's recourse after exposure to IPV are understudied, despite their importance for intervention. Designed through qualitative research and administered in a probability sample of 1,054 married men and women aged 18 to 50 years in My Hao District, Vietnam, the ATT-RECOURSE scale measures men's and women's attitudes about a wife's recourse after exposure to physical IPV. Data were initially collected for nine items. Exploratory factor analysis (EFA) with one random split-half sample (N1 = 526) revealed a one-factor model with significant loadings (0.316-0.686) for six items capturing a wife's silence, informal recourse, and formal recourse. A confirmatory factor analysis (CFA) with the other random split-half sample (N2 = 528) showed adequate fit for the six-item model and significant factor loadings of similar magnitude to the EFA results (0.412-0.669). For the six items retained, men consistently favored recourse more often than did women (52.4%-66.0% of men vs. 41.9%-55.2% of women). Tests for uniform differential item functioning (DIF) by gender revealed one item with significant uniform DIF, and adjusting for it revealed an even larger gap in men's and women's attitudes, with men favoring recourse, on average, more than women. The six-item ATT-RECOURSE scale is reliable across independent samples and exhibits little uniform DIF by gender, supporting its use in surveys of men and women. Further research is needed in Vietnam on why women report less favorable attitudes than men regarding women's recourse after physical IPV.

  16. Generalized Entanglement Entropies of Quantum Designs.

    PubMed

    Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun

    2018-03-30

    The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.
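
    The near-maximal average entanglement of Haar-random states can be checked numerically. This minimal sketch uses the Rényi-2 entropy and the known Haar average purity E[Tr ρ_A²] = (dA + dB)/(dA·dB + 1); the dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_state(d):
    """Haar-random pure state on a d-dimensional Hilbert space."""
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

def renyi2(psi, dA, dB):
    """Renyi-2 entanglement entropy S_2 = -log2 Tr(rho_A^2) of a bipartite state."""
    m = psi.reshape(dA, dB)
    rho_A = m @ m.conj().T                   # reduced density operator of subsystem A
    return -np.log2(np.real(np.trace(rho_A @ rho_A)))

dA, dB = 4, 64
avg = np.mean([renyi2(haar_state(dA * dB), dA, dB) for _ in range(200)])

# Known Haar average purity: E[Tr rho_A^2] = (dA + dB) / (dA*dB + 1),
# so the average S_2 sits just below the maximum log2(dA) = 2 bits.
print(avg)
```

    A 2-design reproduces exactly this second-moment average, which is why designs of sufficient order already look maximally entangled at the level of Rényi entropies.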

  17. Random isotropic one-dimensional XY-model

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = ½ XY model (N sites), with random exchange interaction in a random transverse field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results over the different samples. For the dilute case, in the uniform-field limit, the magnetization exhibits various discontinuities, which are a consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field varies from Γ_A to Γ_B, where Γ_A (Γ_B) is the saturation field for the pure case with exchange constant equal to J_A (J_B).
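
    The free-fermion reduction can be sketched numerically. The parameter values below (J_A, J_B, the probability p, and the field values) are hypothetical, a smaller N is used for speed, and only the isotropic chain's single-particle matrix is kept: the transverse field on the diagonal and hopping J/2 off it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameters: bimodal couplings J_A, J_B, with probability p of J_A.
# N is smaller than the paper's 1000 sites to keep the sketch fast.
N, JA, JB, p = 400, 1.0, 0.5, 0.7

def magnetization(gamma, samples=10):
    """T = 0 transverse magnetization per site of the random isotropic XY chain,
    disorder-averaged, in the Jordan-Wigner free-fermion picture."""
    m = 0.0
    for _ in range(samples):
        J = np.where(rng.random(N - 1) < p, JA, JB)      # quenched bimodal exchange
        # N x N single-particle Hamiltonian: field on the diagonal, hopping J/2 off it.
        h = np.diag(-gamma * np.ones(N)) + np.diag(-J / 2, 1) + np.diag(-J / 2, -1)
        eps = np.linalg.eigvalsh(h)
        m += np.sum(eps < 0) / N - 0.5                   # filled negative-energy modes
    return m / samples

# Below the saturation field the chain is partially polarized; for gamma > J_A
# every single-particle mode has negative energy and m saturates at 1/2.
m_low, m_sat = magnetization(0.7), magnetization(1.2)
print(m_low, m_sat)
```

    Counting filled negative-energy modes is all that is needed at T = 0, which is what makes N = 1000 chains and sample averaging tractable in the paper.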

  18. Generalized Entanglement Entropies of Quantum Designs

    NASA Astrophysics Data System (ADS)

    Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun

    2018-03-01

    The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.

  19. Unbiased All-Optical Random-Number Generator

    NASA Astrophysics Data System (ADS)

    Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja

    2017-10-01

    The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers, which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability of the entropy estimation. Furthermore, the generated output has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator based on the bistable output of an optical parametric oscillator. Detector noise plays no role, and postprocessing is reduced to a minimum. Upon entering the bistable regime, the output phase initially depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined relative to a pulse train derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is further confirmed by an analysis of the resulting conditional entropies.
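
    One of the simplest standard randomness tests applied to such bit streams is the NIST frequency (monobit) test. A sketch, with pseudorandom bits standing in for the optical bit stream (the 2% bias is an illustrative value, not a measured one):

```python
import math
import random

def monobit_pvalue(bits):
    """NIST-style frequency (monobit) test: p-value for the hypothesis that
    the stream is an unbiased coin toss."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))
    return math.erfc(s / math.sqrt(2 * n))

# Pseudorandom bits stand in for the optical bit stream.
random.seed(42)
fair = [random.random() < 0.5 for _ in range(100_000)]
biased = [random.random() < 0.52 for _ in range(100_000)]   # illustrative 2% bias

print(monobit_pvalue(fair), monobit_pvalue(biased))
```

    Even a 2% bias is decisively rejected at this stream length, which is why hardware generators with residual bias require the postprocessing that the optical scheme above largely avoids.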

  20. Random numbers certified by Bell's theorem.

    PubMed

    Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C

    2010-04-15

    Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near-perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory.
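
    The Bell inequality underlying this kind of certification is usually taken in its CHSH form, and for the singlet state the correlations are E(a, b) = -cos(a - b). The sampling below mimics a finite-trial estimate with standard optimal settings; detection and locality loopholes, central to the actual experiment, are ignored in this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Singlet-state correlations: E(a, b) = -cos(a - b). Finite trials per setting
# mimic an experimental estimate; detection and locality loopholes are ignored.
def sample_E(a, b, trials=100_000):
    p_same = (1 - np.cos(a - b)) / 2        # P(outcomes agree) for the singlet
    agree = rng.random(trials) < p_same
    return np.mean(np.where(agree, 1.0, -1.0))

# Standard optimal CHSH settings.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = sample_E(a, b) + sample_E(a, bp) + sample_E(ap, b) - sample_E(ap, bp)

# Any |S| > 2 violates the CHSH Bell inequality; the quantum maximum is the
# Tsirelson bound 2*sqrt(2) ~ 2.83.
print(abs(S))
```

    Any estimate with |S| > 2 cannot be reproduced by a deterministic local model, which is what allows the violation itself, rather than trust in the device, to certify fresh randomness.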
