Sample records for uniform random variables

  1. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume an arbitrary n-dimensional probability distribution F is given. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
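    The construction described above is, in essence, a chain of conditional inverse-CDF maps. The sketch below is a minimal illustration, not the paper's exact recursion: it works out the n = 2 case for the joint density f(x, y) = x + y on the unit square, where both the marginal and the conditional CDFs invert in closed form; the function and variable names are ours.

```python
import random

def sample_bivariate(n_samples=5):
    """Sample (X1, X2) with joint density f(x, y) = x + y on the unit square via the
    recursive construction X1 = F1^{-1}(U1), X2 = F_{2|1}^{-1}(U2 | X1)."""
    out = []
    for _ in range(n_samples):
        u1, u2 = random.random(), random.random()
        # Marginal CDF of X1: F1(x) = x^2/2 + x/2; solve x^2 + x - 2*u1 = 0 for x in [0, 1].
        x1 = (-1.0 + (1.0 + 8.0 * u1) ** 0.5) / 2.0
        # Conditional CDF of X2 given X1 = x1: F(y | x1) = (x1*y + y^2/2) / (x1 + 1/2);
        # solve y^2/2 + x1*y - u2*(x1 + 1/2) = 0 for y in [0, 1].
        x2 = -x1 + (x1 ** 2 + 2.0 * u2 * (x1 + 0.5)) ** 0.5
        out.append((x1, x2))
    return out

if __name__ == "__main__":
    print(sample_bivariate())
```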

  2. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.

  3. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)

  4. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We strengthen previous results on the convergence of quantum diffusion in a random band matrix model, from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.

  5. Some limit theorems for ratios of order statistics from uniform random variables.

    PubMed

    Xu, Shou-Fang; Miao, Yu

    2017-01-01

    In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.

  6. Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.

    2015-08-01

    In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.

  7. CDC6600 subroutine for normal random variables. [RVNORM (RMU, SIG)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amos, D.E.

    1977-04-01

    A value y for a uniform variable on (0,1) is generated, and a table of percentage points of the (0,1) normal distribution (covering the central 96%) is interpolated to obtain a value of the normal variable x(0,1) for 0.02 ≤ y ≤ 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = xσ + μ gives the X(μ,σ) variable.
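    A minimal modern analogue of this routine is sketched below (Python, illustrative only): the table interpolation and the rational Chebyshev tail approximation of the original CDC6600 code are replaced here by a library inverse normal CDF, but the overall structure — draw y uniform on (0,1), invert to a standard normal x, then scale as X = xσ + μ — is the same.

```python
from random import random
from statistics import NormalDist

def rvnorm(mu, sigma):
    """Normal X(mu, sigma) variate from one uniform draw via the inverse normal CDF.
    The library call stands in for the report's table interpolation (central region)
    and rational Chebyshev approximation (tails)."""
    y = random()
    while y == 0.0:                  # inv_cdf is undefined at exactly 0
        y = random()
    x = NormalDist().inv_cdf(y)      # standard normal x(0, 1)
    return x * sigma + mu

if __name__ == "__main__":
    print([round(rvnorm(10.0, 2.0), 3) for _ in range(5)])
```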

  8. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
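    A bare-bones version of the sampling side of such a non-intrusive scheme is sketched below (illustrative only; the ANCF solver and PC expansion are not shown, and the parameter names and bounds are placeholders): a Latin Hypercube design on the unit cube is mapped to one uniform geometric input and one Gaussian material input.

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n_samples, n_dims, seed=None):
    """Basic Latin Hypercube design on [0, 1]^d: one point per stratum in every dimension."""
    rng = np.random.default_rng(seed)
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + rng.random((n_samples, n_dims))) / n_samples

if __name__ == "__main__":
    u = latin_hypercube(100, 2, seed=0)
    # Column 0 -> uniform geometric parameter on [0.95, 1.05] (illustrative bounds)
    length = 0.95 + 0.10 * u[:, 0]
    # Column 1 -> Gaussian material property via the inverse normal CDF (illustrative moments)
    inv = NormalDist(mu=2.0e11, sigma=1.0e10).inv_cdf
    youngs = np.array([inv(v) for v in u[:, 1]])
    print(length[:3], youngs[:3])
```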

  9. Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport

    NASA Astrophysics Data System (ADS)

    Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike

    2017-04-01

    Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we provided evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this we used a 2-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability, and includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for wheel track and undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; (iii) randomly distributed soil cells with uniformly distributed variability within the strata. All three types of small-scale variability were tested for (j) isotropic gas diffusivity and (jj) reduced horizontal gas diffusivity (constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability combined with anisotropy towards higher vertical diffusivity. The simple simulation approach clearly showed the relevance of anisotropy and spatial variability for identical central-tendency measures of gas diffusivity. It did not, however, consider spatial dependence of the variability, which could aggravate the effects even further. To account for anisotropy and spatial variability in gas transport models we recommend (a) measuring soil-gas transport parameters in a spatially explicit way, including different directions, and (b) using random-field stochastic models to assess the possible effects for gas-exchange models.

  10. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
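    The difference between plain weighted NUS and a jittered (stratified) variant can be sketched in a few lines (illustrative only; the envelope, grid size and decay constant are placeholders, and this is not the exact scheme of the paper): the jittered version draws one point per equal-probability stratum of the envelope CDF, so different random seeds produce much more similar schedules.

```python
import numpy as np

def weighted_nus(n_grid, n_samples, tau, seed=None, jittered=True):
    """Choose n_samples indirect-dimension increments out of an n_grid Nyquist grid,
    weighted by an exponentially decaying signal envelope exp(-t/tau).
    jittered=True draws one jittered quantile per equal-probability stratum of the
    envelope CDF, which reduces the seed-to-seed spread of the selected schedules."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_grid)
    w = np.exp(-t / tau)
    cdf = np.cumsum(w) / w.sum()
    if jittered:
        q = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        idx = np.searchsorted(cdf, q)
    else:
        idx = rng.choice(n_grid, size=n_samples, replace=False, p=w / w.sum())
    return np.unique(idx)

if __name__ == "__main__":
    print(weighted_nus(256, 32, tau=64.0, seed=1))
    print(weighted_nus(256, 32, tau=64.0, seed=2))
```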

  11. CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties

    DTIC Science & Technology

    2017-03-01

    inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for Kernel function implementation. By varying bias...cost function/constraint variables are generated based on inverse transform on CDF. In Fig. 5, F-1(u) for uniformly distributed random number u [0, 1...extracts random samples of x varying with CDF of F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse

  12. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    inverse transform method to obtain unit-mean exponential random variables, where Vi is the jth random number in the sequence of a stream of uniform random...numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = - P(b,c,d...Defender ,C * P(b,c,d) We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
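    The inverse transform step referred to in this snippet amounts to one line: for U uniform on (0, 1), X = -ln(1 - U) has the unit-mean exponential distribution. A minimal sketch in our notation (not the thesis code):

```python
import math
import random

def unit_mean_exponential(u=None):
    """Inverse transform: if U ~ Uniform(0, 1), then X = -ln(1 - U) is exponential
    with unit mean, since F(x) = 1 - exp(-x) and X = F^{-1}(U)."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u)

if __name__ == "__main__":
    xs = [unit_mean_exponential() for _ in range(100000)]
    print(sum(xs) / len(xs))   # should be close to 1.0
```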

  13. Influence of tree spatial pattern and sample plot type and size on inventory

    Treesearch

    John-Pascall Berrill; Kevin L. O'Hara

    2012-01-01

    Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

  14. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cube root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available to scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
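    The numerical exercise described at the end of this abstract is easy to reproduce approximately by Monte Carlo. The sketch below uses placeholder factor means and half-widths (not the paper's inputs, and not the author's MathCad computation): draw each of the seven factors uniformly around its mean, multiply, and check that log N is close to Gaussian, i.e. that N is approximately lognormal.

```python
import numpy as np

def drake_monte_carlo(n_draws=200_000, seed=0):
    """Monte Carlo sketch of the Statistical Drake Equation: each of the seven factors
    is uniform around an assumed mean (means/half-widths below are illustrative only);
    the product N is then approximately lognormal, by the CLT applied to
    log N = sum of the log-factors."""
    rng = np.random.default_rng(seed)
    means = np.array([3.5e11, 0.5, 0.3, 0.3, 0.3, 0.3, 1.0e4])   # placeholders
    half_widths = 0.2 * means                                     # placeholders
    factors = rng.uniform(means - half_widths, means + half_widths,
                          size=(n_draws, len(means)))
    N = factors.prod(axis=1)
    logN = np.log(N)
    # skewness of log N should be near 0 if log N is close to Gaussian
    skew = np.mean(((logN - logN.mean()) / logN.std()) ** 3)
    return N.mean(), np.exp(logN.mean()), skew

if __name__ == "__main__":
    print(drake_monte_carlo())
```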

  15. Effects of Model Characteristics on Observational Learning of Inmates in a Pre-Release Center.

    ERIC Educational Resources Information Center

    Fliegel, Alan B.

    Subjects were 138 inmates from the pre-release unit of a Southwestern prison system, randomly divided into three groups of 46 each. Each group viewed a video-taped model delivering a speech. The independent variable had three levels: (1) lecturer attired in a shirt and tie; (2) lecturer attired in a correctional officer's uniform; and (3) model…

  16. Signs of universality in the structure of culture

    NASA Astrophysics Data System (ADS)

    Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego

    2017-11-01

    Understanding the dynamics of opinions, preferences and of culture as a whole requires more use of empirical data than has been done so far. It is clear that an important role in driving these dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits are fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity is in between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that these dynamics might be characterized by criticality and involve mechanisms beyond social influence.

  17. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST)-based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
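    For orientation, the brute-force nonuniform-time Fourier sum that the recursive transform updates iteratively is shown below (a sketch in our notation, not the authors' RFT or its O(N)-per-update recursion):

```python
import numpy as np

def nonuniform_dft(t, x, freqs):
    """Direct Fourier sum at arbitrary sample times t (seconds), with no interpolation to
    a uniform grid: X(f) = sum_n x[n] * exp(-2*pi*i*f*t[n]).  This is the O(N*M) brute-force
    version; the paper's recursive transform updates such sums iteratively per new sample."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in freqs])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.cumsum(0.8 + 0.1 * rng.standard_normal(300))   # irregular "beat" times
    x = np.sin(2 * np.pi * 0.1 * t)                        # 0.1 Hz heart-rate oscillation
    freqs = np.linspace(0.01, 0.5, 200)
    psd = np.abs(nonuniform_dft(t, x, freqs)) ** 2
    print(freqs[np.argmax(psd)])                           # expect a peak near 0.1 Hz
```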

  18. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
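    The sampling step of the uniform random polygon model is simple to state and to code (a minimal sketch in our naming; computing the determinant or colorings of the resulting knot is a separate task not shown here): draw n vertices i.i.d. uniformly in a confined cube and join them in the order generated, closing the loop.

```python
import numpy as np

def uniform_random_polygon(n_vertices, seed=None):
    """Uniform random polygon in the unit cube: n vertices drawn i.i.d. uniformly,
    joined in the order generated and closed back to the first vertex."""
    rng = np.random.default_rng(seed)
    vertices = rng.random((n_vertices, 3))
    edges = [(i, (i + 1) % n_vertices) for i in range(n_vertices)]
    return vertices, edges

if __name__ == "__main__":
    v, e = uniform_random_polygon(10, seed=42)
    print(v.shape, len(e))
```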

  19. Condensation with two constraints and disorder

    NASA Astrophysics Data System (ADS)

    Barré, J.; Mangeolle, L.

    2018-04-01

    We consider a set of positive random variables obeying two additive constraints, a linear and a quadratic one; these constraints mimic the conservation laws of a dynamical system. In the simplest setting, without disorder, it is known that such a system may undergo a ‘condensation’ transition, whereby one random variable becomes much larger than the others; this transition has been related to the spontaneous appearance of nonlinear localized excitations in certain nonlinear chains, called breathers. Motivated by the study of breathers in a disordered discrete nonlinear Schrödinger equation, we study different instances of this problem in the presence of quenched disorder. Unless the disorder is too strong, the phase diagram looks like the one without disorder, with a transition separating a fluid phase, where all variables have the same order of magnitude, and a condensed phase, where one variable is much larger than the others. We then show that the condensed phase exhibits various degrees of ‘intermediate symmetry breaking’: the site hosting the condensate is chosen neither uniformly at random, nor is it fixed by the disorder realization. Throughout the article, our heuristic arguments are complemented with direct Monte Carlo simulations.

  20. Linking of uniform random polygons in confined spaces

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.

    2007-03-01

    In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of n vertices is at least 1 - O(1/√n). Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of m and n vertices respectively, is bounded below by 1 - O(1/√(mn)). In particular, the linking probability between two uniform random polygons, both of n vertices, is bounded below by 1 - O(1/n).

  1. Exact Markov chains versus diffusion theory for haploid random mating.

    PubMed

    Tyvand, Peder A; Thorvaldsen, Steinar

    2010-05-01

    Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck.
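    For reference, the single-step transition rules of the two exact Markov chains compared above are shown in the sketch below (neutral case, no selection or mutation; a minimal illustration in our notation, with 2N gene copies and j copies of the focal allele):

```python
import numpy as np

def wright_fisher_step(j, two_n, rng):
    """One Wright-Fisher generation: all 2N gene copies are resampled at once,
    so the new count of the focal allele is Binomial(2N, j / 2N)."""
    return rng.binomial(two_n, j / two_n)

def moran_step(j, two_n, rng):
    """One Moran event: one copy reproduces and one copy dies, so the count
    changes by at most 1 per step (about 2N Moran events per generation)."""
    p = j / two_n
    birth_focal = rng.random() < p
    death_focal = rng.random() < p
    return j + int(birth_focal) - int(death_focal)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    j, two_n = 10, 40
    for _ in range(100):
        j = wright_fisher_step(j, two_n, rng)
    print("Wright-Fisher allele count after 100 generations:", j)
```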

  2. Analysis of Uniform Random Numbers Generated by RANDU and URN Using Ten Different Seeds.

    DTIC Science & Technology

    The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds are...The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds. (Author)
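    RANDU itself is the well-known congruential generator x_{k+1} = 65539 x_k mod 2^31 (URN is not reproduced here). A minimal sketch of the kind of per-seed equidistribution check the report describes, on 50,000 numbers per seed, is given below; the seeds and bin count are illustrative.

```python
def randu(seed, n):
    """The classic RANDU congruential generator: x_{k+1} = 65539 * x_k mod 2^31,
    returned as uniform values on (0, 1)."""
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x / 2**31)
    return out

def chi_square_uniformity(values, n_bins=20):
    """Simple equidistribution check: chi-square statistic of bin counts
    against the expected uniform count (df = n_bins - 1)."""
    counts = [0] * n_bins
    for v in values:
        counts[min(int(v * n_bins), n_bins - 1)] += 1
    expected = len(values) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

if __name__ == "__main__":
    for seed in (1, 65539, 12345, 2**16 + 3):          # a few odd seeds
        stat = chi_square_uniformity(randu(seed, 50000))
        print(seed, round(stat, 1))   # compare with the chi-square(19) 5% critical value, ~30.1
```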

  3. Tablet splitting and weight uniformity of half-tablets of 4 medications in pharmacy practice.

    PubMed

    Tahaineh, Linda M; Gharaibeh, Shadi F

    2012-08-01

    Tablet splitting is a common practice for multiple reasons including cost savings; however, it does not necessarily result in weight-uniform half-tablets. To determine weight uniformity of half-tablets resulting from splitting 4 products available in the Jordanian market and investigate the effect of tablet characteristics on weight uniformity of half-tablets. Ten random tablets each of warfarin 5 mg, digoxin 0.25 mg, phenobarbital 30 mg, and prednisolone 5 mg were weighed and split by 6 PharmD students using a knife. The resulting half-tablets were weighed and evaluated for weight uniformity. Other relevant physical characteristics of the 4 products were measured. The average tablet hardness of the sampled tablets ranged from 40.3 N to 68.9 N. Digoxin, phenobarbital, and prednisolone half-tablets failed the weight uniformity test; however, warfarin half-tablets passed. Digoxin, warfarin, and phenobarbital tablets had a score line and warfarin tablets had the deepest score line of 0.81 mm. Splitting warfarin tablets produces weight-uniform half-tablets that may possibly be attributed to the hardness and the presence of a deep score line. Digoxin, phenobarbital, and prednisolone tablet splitting produces highly weight variable half-tablets. This can be of clinical significance in the case of the narrow therapeutic index medication digoxin.

  4. Combining numerical simulations with time-domain random walk for pathogen risk assessment in groundwater

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Molin, S.

    2012-02-01

    We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising the colloid filtration theory in a time-domain random walk framework. It is shown that in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. It is also shown that spatial variability of the attachment rate may be significant; however, it appears to affect risk in a different manner depending on whether the flow is uniform or radially converging. In spite of the fact that numerous issues remain open regarding pathogen transport in aquifers on the field scale, the methodology presented here may be useful for screening purposes, and may also serve as a basis for future studies that would include greater complexity.

  5. Stochastic species abundance models involving special copulas

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2018-01-01

    Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In a first one, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In a second one, a quasi-copula problem arises in a flagged species abundance model. In a third model, we study completely random species abundance models in the hypercube as those, not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.

  6. Double Ramp Loss Based Reject Option Classifier

    DTIC Science & Technology

    2015-05-22

    choose 10% of these points uniformly at random and flip their labels. 2. Ionosphere Dataset [2] : This dataset describes the problem of discrimi- nating...good versus bad radars based on whether they send some useful infor- mation about the Ionosphere . There are 34 variables and 351 observations. 3... Ionosphere dataset (nonlinear classifiers using RBF kernel for both the approaches) d LDR (C = 2, γ = 0.125) LDH (C = 16, γ = 0.125) Risk RR Acc(unrej

  7. Generalized radiative transfer theory for scattering by particles in an absorbing gas: Addressing both spatial and spectral integration in multi-angle remote sensing of optically thin aerosol layers

    NASA Astrophysics Data System (ADS)

    Davis, Anthony B.; Xu, Feng; Diner, David J.

    2018-01-01

    We demonstrate the computational advantage gained by introducing non-exponential transmission laws into radiative transfer theory for two specific situations. One is the problem of spatial integration over a large domain where the scattering particles cluster randomly in a medium uniformly filled with an absorbing gas, and only a probabilistic description of the variability is available. The increasingly important application here is passive atmospheric profiling using oxygen absorption in the visible/near-IR spectrum. The other scenario is spectral integration over a region where the absorption cross-section of a spatially uniform gas varies rapidly and widely and, moreover, there are scattering particles embedded in the gas that are distributed uniformly, or not. This comes up in many applications, O2 A-band profiling being just one instance. We bring a common framework to solve these problems both efficiently and accurately that is grounded in the recently developed theory of Generalized Radiative Transfer (GRT). In GRT, the classic exponential law of transmission is replaced by one with a slower power-law decay that accounts for the unresolved spectral or spatial variability. Analytical results are derived in the single-scattering limit that applies to optically thin aerosol layers. In spectral integration, a modest gain in accuracy is obtained. As for spatial integration of near-monochromatic radiance, we find that, although both continuum and in-band radiances are affected by moderate levels of sub-pixel variability, only extreme variability will affect in-band/continuum ratios.

  8. Improved high-dimensional prediction with Random Forests by the use of co-data.

    PubMed

    Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A

    2017-12-28

    Prediction in high dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities that are used to draw candidate variables by co-data moderated sampling probabilities. Co-data here are defined as any type of information that is available on the variables of the primary data, but that does not use its response labels. These moderated sampling probabilities are, inspired by empirical Bayes, learned from the data at hand. We demonstrate the co-data moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis with gene expression data. We demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve the predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer with methylation data can be improved by including the location of the probe relative to the known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
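    The core mechanism — replacing the uniform draw of candidate split variables by a co-data weighted draw — can be sketched in isolation (illustrative only; this is not the CoRF implementation, and the weights shown are made up):

```python
import numpy as np

def draw_candidate_variables(n_vars, mtry, co_data_weights=None, seed=None):
    """Draw the candidate split variables for one tree node.  With co_data_weights=None
    this is the usual uniform draw; otherwise variables are drawn with probability
    proportional to the (non-negative) co-data moderated weights."""
    rng = np.random.default_rng(seed)
    if co_data_weights is None:
        p = None                                    # uniform sampling
    else:
        w = np.clip(np.asarray(co_data_weights, dtype=float), 0.0, None)
        p = w / w.sum()
    return rng.choice(n_vars, size=mtry, replace=False, p=p)

if __name__ == "__main__":
    # illustrative co-data: e.g. -log10 p-values from an external study
    weights = np.array([0.2, 3.0, 0.1, 1.5, 0.4, 2.2, 0.3, 0.9])
    print(draw_candidate_variables(8, mtry=3, co_data_weights=weights, seed=0))
```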

  9. Optimal hash arrangement of tentacles in jellyfish

    NASA Astrophysics Data System (ADS)

    Okabe, Takuya; Yoshimura, Jin

    2016-06-01

    At first glance, the trailing tentacles of a jellyfish appear to be randomly arranged. However, close examination of medusae has revealed that the arrangement and developmental order of the tentacles obey a mathematical rule. Here, we show that medusa jellyfish adopt the best strategy to achieve the most uniform distribution of a variable number of tentacles. The observed order of tentacles is a real-world example of an optimal hashing algorithm known as Fibonacci hashing in computer science.

  10. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515

  11. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    NASA Astrophysics Data System (ADS)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  12. Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1991-01-01

    The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.

  13. Correlations and path analysis among agronomic and technological traits of upland cotton.

    PubMed

    Farias, F J C; Carvalho, L P; Silva Filho, J L; Teodoro, P E

    2016-08-12

    To date, path analysis has been used with the aim of breeding different cultures. However, for cotton, there have been few studies using this analysis, and all of these have used fiber productivity as the primary dependent variable. Therefore, the aim of the present study was to identify agronomic and technological properties that can be used as criteria for direct and indirect phenotypes in selecting cotton genotypes with better fibers. We evaluated 16 upland cotton genotypes in eight trials conducted during the harvest 2008/2009 in the State of Mato Grosso, using a randomized block design with four replicates. The evaluated traits were: plant height, average boll weight, percentage of fiber, cotton seed yield, fiber length, uniformity of fiber, short fiber index, fiber strength, elongation, maturity of the fibers, micronaire, reflectance, and the degree of yellowing. Phenotypic correlations between the traits and cotton fiber yield (main dependent variable) were unfolded in direct and indirect effects through path analysis. Fiber strength, uniformity of fiber, and reflectance were found to influence fiber length, and therefore, these traits are recommended for both direct and indirect selection of cotton genotypes.

  14. A statistical, task-based evaluation method for three-dimensional x-ray breast imaging systems using variable-background phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Subok; Jennings, Robert; Liu Haimo

    Purpose: For the last few years, development and optimization of three-dimensional (3D) x-ray breast imaging systems, such as digital breast tomosynthesis (DBT) and computed tomography, have drawn much attention from the medical imaging community, either academia or industry. However, there is still much room for understanding how to best optimize and evaluate the devices over a large space of many different system parameters and geometries. Current evaluation methods, which work well for 2D systems, do not incorporate the depth information from the 3D imaging systems. Therefore, it is critical to develop a statistically sound evaluation method to investigate the usefulness of inclusion of depth and background-variability information into the assessment and optimization of the 3D systems. Methods: In this paper, we present a mathematical framework for a statistical assessment of planar and 3D x-ray breast imaging systems. Our method is based on statistical decision theory, in particular, making use of the ideal linear observer called the Hotelling observer. We also present a physical phantom that consists of spheres of different sizes and materials for producing an ensemble of randomly varying backgrounds to be imaged for a given patient class. Lastly, we demonstrate our evaluation method in comparing laboratory mammography and three-angle DBT systems for signal detection tasks using the phantom's projection data. We compare the variable phantom case to that of a phantom of the same dimensions filled with water, which we call the uniform phantom, based on the performance of the Hotelling observer as a function of signal size and intensity. Results: Detectability trends calculated using the variable and uniform phantom methods are different from each other for both mammography and DBT systems. Conclusions: Our results indicate that measuring the system's detection performance with consideration of background variability may lead to differences in system performance estimates and comparisons. For the assessment of 3D systems, to accurately determine trade offs between image quality and radiation dose, it is critical to incorporate randomness arising from the imaging chain including background variability into system performance calculations.

  15. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  16. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  17. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  18. Descriptive parameter for photon trajectories in a turbid medium

    NASA Astrophysics Data System (ADS)

    Gandjbakhche, Amir H.; Weiss, George H.

    2000-06-01

    In many applications of laser techniques for diagnostic or therapeutic purposes it is necessary to be able to characterize photon trajectories to know which parts of the tissue are being interrogated. In this paper, we consider the cw reflectance experiment on a semi-infinite medium with uniform optical parameters and having a planar interface. The analysis is carried out in terms of a continuous-time random walk, and the relation between the occupancy of a plane parallel to the surface and the maximum depth reached by the random walker is studied. The first moment of the ratio of average depth to the average maximum depth yields information about the volume of tissue interrogated, as well as giving some indication of the region of tissue that gets the most light. We have also calculated the standard deviation of this random variable. It is not large enough to qualitatively affect the information contained in the first moment.

  19. Random isotropic one-dimensional XY-model

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = ½ XY-model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).

  20. Prospective, randomized, blinded evaluation of donor semen quality provided by seven commercial sperm banks.

    PubMed

    Carrell, Douglas T; Cartmill, Deborah; Jones, Kirtly P; Hatasaka, Harry H; Peterson, C Matthew

    2002-07-01

    To evaluate variability in donor semen quality between seven commercial donor sperm banks, within sperm banks, and between intracervical insemination and intrauterine insemination. Prospective, randomized, blind evaluation of commercially available donor semen samples. An academic andrology laboratory. Seventy-five cryopreserved donor semen samples were evaluated. Samples were coded, then blindly evaluated for semen quality. Standard semen quality parameters, including concentration, motility parameters, World Health Organization criteria morphology, and strict criteria morphology. Significant differences were observed between donor semen banks for most semen quality parameters analyzed in intracervical insemination samples. In general, the greatest variability observed between banks was in percentage progressive sperm motility (range, 8.8 +/- 5.8 to 42.4 +/- 5.5) and normal sperm morphology (strict criteria; range, 10.1 +/- 3.3 to 26.6 +/- 4.7). Coefficients of variation within sperm banks were generally high. These data demonstrate the variability of donor semen quality provided by commercial sperm banks, both between banks and within a given bank. No relationship was observed between the size or type of sperm bank and the degree of variability. The data demonstrate the lack of uniformity in the criteria used to screen potential semen donors and emphasize the need for more stringent screening criteria and strict quality control in processing samples.

  1. Secure uniform random-number extraction via incoherent strategies

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Zhu, Huangjun

    2018-01-01

    To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by a third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of measurement in the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.
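    For reference, the relative entropy of coherence appearing in this result has the standard closed form below (quoting the textbook expression, not the paper's derivation), with Δ the full dephasing map in the computational basis and S the von Neumann entropy:

```latex
C_{\mathrm{rel}}(\rho) \;=\; \min_{\sigma \in \mathcal{I}} S(\rho \,\|\, \sigma)
\;=\; S\bigl(\Delta(\rho)\bigr) - S(\rho),
\qquad
\Delta(\rho) \;=\; \sum_{i} |i\rangle\langle i|\,\rho\,|i\rangle\langle i| ,
```

    where \mathcal{I} denotes the set of incoherent (diagonal) states.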

  2. Probability in High Dimension

    DTIC Science & Technology

    2014-06-30

    b_1, . . . , b'_m, b_m) ≤ f_m(b') + Σ_{i=1}^m 1_{b_i ≠ b'_i} 1_{b_i ≠ b_j for j < i}. 4.8 (Travelling salesman problem). Let X_1, . . . , X_n be i.i.d. points that...are uniformly distributed in the unit square [0, 1]^2. We think of X_i as the location of city i. The goal of the travelling salesman problem is to find... salesman problem, . . . • Probability in Banach spaces: probabilistic limit theorems for Banach-valued random variables, empirical processes, local

  3. Effect of feed supplement containing earthworm meal (Lumbricus rubellus) on production performance of quail (Coturnix coturnix japonica)

    NASA Astrophysics Data System (ADS)

    Istiqomah, L.; Sakti, A. A.; Suryani, A. E.; Karimy, M. F.; Anggraeni, A. S.; Herdian, H.

    2017-12-01

    The objective of this study was to evaluate the effect of a feed supplement (FS) containing earthworm meal (EWM) on the production performance of laying quails. Three hundred and sixty 20-week-old Coturnix coturnix japonica quails were used in a Completely Randomized Design (CRD) with three dietary treatments, A = CD (control without FS), B = CD + 0.250% FS, and C = CD + 0.375% FS, during a 6-week experimental period. Each treatment had 4 equal replicates, with 30 quails randomly allocated to each of 12 cage units. Variables measured were feed intake, feed conversion ratio, feed efficiency, mortality rate, hen day production, egg weight, and egg uniformity. Data were statistically analyzed by one-way ANOVA and the differences among treatment means were analyzed using Duncan's Multiple Range Test (DMRT). The results showed that administration of 0.375% FS based on earthworm meal, fermented rice bran, and skim milk improved the feed conversion ratio and increased the feed efficiency. The experimental treatments had no effect on feed intake, mortality, hen day production, egg weight, and egg uniformity of quail. It is concluded that administration of the feed supplement improved the growth performance of quail.

  4. Dynamic interactions between musical, cardiovascular, and cerebral rhythms in humans.

    PubMed

    Bernardi, Luciano; Porta, Cesare; Casucci, Gaia; Balsamo, Rossella; Bernardi, Nicolò F; Fogari, Roberto; Sleight, Peter

    2009-06-30

    Reactions to music are considered subjective, but previous studies suggested that cardiorespiratory variables increase with faster tempo independent of individual preference. We tested whether compositions characterized by variable emphasis could produce parallel instantaneous cardiovascular/respiratory responses and whether these changes mirrored music profiles. Twenty-four young healthy subjects, 12 musicians (choristers) and 12 nonmusician control subjects, listened (in random order) to music with vocal (Puccini's "Turandot") or orchestral (Beethoven's 9th Symphony adagio) progressive crescendos, more uniform emphasis (Bach cantata), 10-second period (i.e., similar to Mayer waves) rhythmic phrases (Giuseppe Verdi's arias "Va pensiero" and "Libiam nei lieti calici"), or silence while heart rate, respiration, blood pressures, middle cerebral artery flow velocity, and skin vasomotion were recorded. Common responses were recognized by averaging instantaneous cardiorespiratory responses regressed against changes in music profiles and by coherence analysis during rhythmic phrases. Vocal and orchestral crescendos produced significant (P=0.05 or better) correlations between cardiovascular or respiratory signals and music profile, particularly skin vasoconstriction and blood pressures, proportional to crescendo, in contrast to uniform emphasis, which induced skin vasodilation and reduction in blood pressures. Correlations were significant both in individual and group-averaged signals. Phrases at 10-second periods by Verdi entrained the cardiovascular autonomic variables. No qualitative differences in recorded measurements were seen between musicians and nonmusicians. Music emphasis and rhythmic phrases are tracked consistently by physiological variables. Autonomic responses are synchronized with music, which might therefore convey emotions through autonomic arousal during crescendos or rhythmic phrases.

  5. Pattern Selection and Super-Patterns in Opinion Dynamics

    NASA Astrophysics Data System (ADS)

    Ben-Naim, Eli; Scheel, Arnd

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
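
    The update rule described above is easy to simulate. The minimal Python sketch below assumes the standard Deffuant-style compromise-to-the-mean rule, a unit interaction threshold, and opinions initially uniform on [0, 10]; all parameter values are illustrative choices, not taken from the paper, and the histogram step is only a crude way to read off the surviving clusters.

      import numpy as np

      def bounded_confidence(n_agents=1000, threshold=1.0, support=10.0,
                             n_steps=200_000, seed=0):
          """Simulate a bounded confidence (Deffuant-type) opinion model.

          Opinions start uniformly distributed on [0, support]. At each step two
          randomly chosen agents compromise to their mean, but only if their
          opinions differ by less than `threshold` (the interaction range)."""
          rng = np.random.default_rng(seed)
          opinions = rng.uniform(0.0, support, n_agents)
          for _ in range(n_steps):
              i, j = rng.integers(0, n_agents, size=2)
              if i != j and abs(opinions[i] - opinions[j]) < threshold:
                  mean = 0.5 * (opinions[i] + opinions[j])
                  opinions[i] = opinions[j] = mean
          return opinions

      final = bounded_confidence()
      # Surviving clusters show up as sharp peaks in a histogram of opinions.
      hist, edges = np.histogram(final, bins=100, range=(0.0, 10.0))
      peaks = edges[:-1][hist > 0.02 * len(final)]
      print("approximate cluster locations:", np.round(peaks, 2))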

  6. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
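
    The localized sampling rule lends itself to a compact sketch. In the Python fragment below, each compressive measurement is built by picking a centre pixel uniformly at random and including nearby pixels with a probability that decays with distance; the Gaussian fall-off, the patch size, and the number of measurements are illustrative assumptions, since the abstract only specifies that the inclusion probability depends on distance from the selected pixel.

      import numpy as np

      def localized_sampling_matrix(height, width, n_measurements, sigma=2.0, seed=0):
          """Build a CS measurement matrix using localized random sampling.

          Each row is one measurement: a centre pixel is chosen uniformly at
          random, and every other pixel is included with probability
          exp(-d^2 / (2 sigma^2)), where d is its distance from the centre.
          (The Gaussian fall-off is an illustrative choice.)"""
          rng = np.random.default_rng(seed)
          ys, xs = np.mgrid[0:height, 0:width]
          A = np.zeros((n_measurements, height * width))
          for row in range(n_measurements):
              cy, cx = rng.integers(0, height), rng.integers(0, width)
              d2 = (ys - cy) ** 2 + (xs - cx) ** 2
              keep = rng.random((height, width)) < np.exp(-d2 / (2.0 * sigma ** 2))
              keep[cy, cx] = True                  # centre pixel is always measured
              A[row] = keep.ravel().astype(float)
          return A

      # Example: 64 localized measurements of a 32x32 image patch.
      A = localized_sampling_matrix(32, 32, 64)
      x = np.random.default_rng(1).random(32 * 32)   # stand-in image
      y = A @ x                                      # compressed measurements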

  7. Does the central limit theorem always apply to phase noise? Some implications for radar problems

    NASA Astrophysics Data System (ADS)

    Gray, John E.; Addison, Stephen R.

    2017-05-01

    The phase noise problem, or Rayleigh problem, occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniform, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the characteristic function (CF) of a sum of identically distributed random variables becomes the product of the individual CFs. This product of CFs can then be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof, as discussed in Papoulis, to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
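
    The CF argument can be checked numerically. Taking each phase-noise component to be cos(Θ) with Θ uniform on [0, 2π), a common illustrative model rather than the specific distributions treated by Gray and Addison, the CF of one component is the Bessel function J0(ω), so the CF of a sum of N components is J0(ω)^N; comparing it with the Gaussian CF exp(-Nω²/4) shows how slowly the CLT approximation becomes accurate for small N.

      import numpy as np
      from scipy.special import j0

      # One phase-noise component is modelled here as X = cos(Theta), Theta uniform
      # on [0, 2*pi); its characteristic function is J0(omega).  The CF of a sum of
      # N i.i.d. components is then J0(omega)**N, which the CLT approximates by the
      # Gaussian CF exp(-N*omega**2/4), since Var[cos(Theta)] = 1/2.
      omega = np.linspace(0.0, 6.0, 121)
      for n in (2, 5, 20):
          exact = j0(omega) ** n
          gauss = np.exp(-n * omega ** 2 / 4.0)
          print(f"N = {n:2d}: max |exact CF - Gaussian CF| = "
                f"{np.max(np.abs(exact - gauss)):.3f}")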

  8. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    of uniform- versus nonuniform -pattern reconstruction, of transform function used, and of minimum randomly distributed measurements needed to...the radiation-frequency pattern’s reconstruction using uniform and nonuniform randomly distributed samples even though the pattern error manifests...5 Fig. 3 The nonuniform compressive-sensing reconstruction of the radiation

  9. Stochastic effects in EUV lithography: random, local CD variability, and printing failures

    NASA Astrophysics Data System (ADS)

    De Bisschop, Peter

    2017-10-01

    Stochastic effects in lithography are usually quantified through local CD variability metrics, such as line-width roughness or local CD uniformity (LCDU), and these quantities have been measured and studied intensively, both in EUV and optical lithography. In addition to CD variability, stochastic effects can also give rise to local, random printing failures, such as missing contacts or microbridges in spaces. When these occur, there often is no (reliable) CD to be measured locally, and then such failures cannot be quantified with the usual CD-measuring techniques. We have developed algorithms to detect such stochastic printing failures in regular line/space (L/S) or contact- or dot-arrays from SEM images, leading to a stochastic failure metric that we call NOK (not OK), which we consider a complementary metric to the CD-variability metrics. This paper will show how both types of metrics can be used to experimentally quantify dependencies of stochastic effects on, e.g., CD, pitch, resist, and exposure dose. As it is also important to be able to predict upfront (in the OPC verification stage of a production-mask tape-out) whether certain structures in the layout are likely to have a high sensitivity to stochastic effects, we look into the feasibility of constructing simple predictors, for both stochastic CD-variability and printing failure, that can be calibrated for the process and exposure conditions used and integrated into the standard OPC verification flow. Finally, we briefly discuss the options to reduce stochastic variability and failure, considering the entire patterning ecosystem.

  10. Impact of Uniform Methods on Interlaboratory Antibody Titration Variability: Antibody Titration and Uniform Methods.

    PubMed

    Bachegowda, Lohith S; Cheng, Yan H; Long, Thomas; Shaz, Beth H

    2017-01-01

    Substantial variability between different antibody titration methods prompted the development and introduction of uniform methods in 2008. The objective was to determine whether uniform methods consistently decrease interlaboratory variation in proficiency testing. Proficiency testing data for antibody titration between 2009 and 2013 were obtained from the College of American Pathologists. Each laboratory was supplied plasma and red cells to determine anti-A and anti-D antibody titers by their standard method: gel or tube by uniform or other methods at different testing phases (immediate spin and/or room temperature [anti-A], and/or anti-human globulin [AHG: anti-A and anti-D]) with different additives. Interlaboratory variations were compared by analyzing the distribution of titer results by method and phase. A median of 574 and 1100 responses were reported for anti-A and anti-D antibody titers, respectively, during a 5-year period. The 3 most frequent (median) methods performed for anti-A antibody were uniform tube room temperature (147.5; range, 119-159), uniform tube AHG (143.5; range, 134-150), and other tube AHG (97; range, 82-116); for anti-D antibody, the methods were other tube (451; range, 431-465), uniform tube (404; range, 382-462), and uniform gel (137; range, 121-153). Of the larger reported methods, the uniform gel AHG phase for anti-A and anti-D antibodies had the most participants with the same result (mode). For anti-A antibody, 0 of 8 (uniform versus other tube room temperature) and 1 of 8 (uniform versus other tube AHG), and for anti-D antibody, 0 of 8 (uniform versus other tube) and 0 of 8 (uniform versus other gel) proficiency tests showed significant titer variability reduction. In conclusion, uniform methods harmonize laboratory techniques but rarely reduce interlaboratory titer variance in comparison with other methods.

  11. Arctic storms simulated in atmospheric general circulation models under uniform high, uniform low, and variable resolutions

    NASA Astrophysics Data System (ADS)

    Roesler, E. L.; Bosler, P. A.; Taylor, M.

    2016-12-01

    The impact of strong extratropical storms on coastal communities is large, and the extent to which storms will change with a warming Arctic is unknown. Understanding storms in reanalysis and in climate models is important for future predictions. We know that the number of detected Arctic storms in reanalysis is sensitive to grid resolution. To understand Arctic storm sensitivity to resolution in climate models, we describe simulations designed to identify and compare Arctic storms at uniform low resolution (1 degree), at uniform high resolution (1/8 degree), and at variable resolution (1 degree to 1/8 degree). High-resolution simulations resolve more fine-scale structure and extremes, such as storms, in the atmosphere than a uniform low-resolution simulation. However, the computational cost of running a globally uniform high-resolution simulation is often prohibitive. The variable resolution tool in atmospheric general circulation models permits regional high-resolution solutions at a fraction of the computational cost. The storms are identified using the open-source search algorithm, Stride Search. The uniform high-resolution simulation has over 50% more storms than the uniform low-resolution and over 25% more storms than the variable resolution simulations. Storm statistics from each of the simulations are presented and compared with reanalysis. We propose variable resolution as a cost-effective means of investigating physics/dynamics coupling in the Arctic environment. Future work will include comparisons with observed storms to investigate tuning parameters for high resolution models. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2016-7402 A

  12. The Newcomb-Benford law in its relation to some common distributions.

    PubMed

    Formann, Anton K

    2010-05-07

    An often reported, but nevertheless persistently striking observation, formalized as the Newcomb-Benford law (NBL), is that the frequencies with which the leading digits of numbers occur in a large variety of data are far from uniform. Most spectacular is the fact that in many data sets the leading digit 1 occurs in nearly one third of all cases. Explanations offered for this uneven distribution of the leading digits include, among others, scale- and base-invariance. Little attention, however, has been paid to the interrelation between the distribution of the significant digits and the distribution of the observed variable. It is shown here by simulation that long right-tailed distributions of a random variable are compatible with the NBL, and that for distributions of the ratio of two random variables the fit generally improves. Distributions not putting most mass on small values of the random variable (e.g. symmetric distributions) fail to fit. Hence, the validity of the NBL requires a predominance of small values and, when thinking of real-world data, a majority of small entities. Analyses of data on stock prices, the areas and numbers of inhabitants of countries, and the starting page numbers of papers from a bibliography sustain this conclusion. In all, these findings may help to understand the mechanisms behind the NBL and the conditions needed for its validity. That this law is not only of scientific interest per se but also has substantial practical implications can be seen from the fields in which its use has been suggested. These fields range from the detection of irregularities in data (e.g. economic fraud) to optimizing the architecture of computers regarding number representation, storage, and round-off errors.
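
    The simulation finding reported above, that long right-tailed distributions are compatible with the NBL while symmetric distributions concentrated away from small values are not, can be reproduced in a few lines. The Python sketch below compares leading-digit frequencies of lognormal and normal samples with the Benford probabilities log10(1 + 1/d); the specific distribution parameters are illustrative choices, not those of the paper.

      import numpy as np

      def leading_digit(x):
          """First significant (decimal) digit of each positive value in x."""
          exponent = np.floor(np.log10(x))
          return (x / 10.0 ** exponent).astype(int)

      rng = np.random.default_rng(0)
      benford = np.log10(1.0 + 1.0 / np.arange(1, 10))    # P(d) = log10(1 + 1/d)

      # Long right-tailed data (lognormal) follow the NBL closely; a symmetric
      # distribution concentrated away from small values (normal around 50) does not.
      samples = {
          "lognormal":     rng.lognormal(mean=0.0, sigma=2.0, size=100_000),
          "normal(50, 5)": np.abs(rng.normal(50.0, 5.0, size=100_000)),
      }
      for name, x in samples.items():
          digits = leading_digit(x)
          freq = np.array([(digits == d).mean() for d in range(1, 10)])
          print(f"{name:14s} max deviation from Benford: "
                f"{np.max(np.abs(freq - benford)):.3f}")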

  13. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for "Statistical Equation for Habitables". The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
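
    The mechanism behind the SEH, that a product of many independent positive factors is approximately lognormal because its logarithm is a sum to which the CLT applies, is easy to illustrate numerically. The Python sketch below uses ten uniform factors with roughly 10% spread around arbitrary illustrative means (not Dole's astrobiological values) and checks that the sample mean of the product matches the product of the factor means, as the statistical Dole equation requires.

      import numpy as np
      from scipy import stats

      # Ten independent positive factors, each uniform within +/-10% of an
      # arbitrary illustrative mean.  Their product plays the role of N_hab.
      rng = np.random.default_rng(0)
      means = np.array([0.5, 0.3, 2.0, 1.5, 0.1, 0.4, 3.0, 0.2, 0.8, 5.0])
      factors = rng.uniform(0.9, 1.1, size=(100_000, means.size)) * means
      n_hab = factors.prod(axis=1)

      # log(N_hab) is a sum of independent terms, so it should be nearly
      # symmetric (approximately normal), making N_hab approximately lognormal.
      log_n = np.log(n_hab)
      print("skewness of log(N_hab):", round(float(stats.skew(log_n)), 3))
      print("mean of N_hab:", round(float(n_hab.mean()), 4),
            "vs product of factor means:", round(float(means.prod()), 4))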

  14. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
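
    The quantity discussed above can also be estimated by direct simulation and compared with the standard closed-form result for a homogeneous Poisson process, E[r_n] = Γ(n + 1/D) / [Γ(n) (ρ V_D)^(1/D)], where ρ is the point density and V_D the volume of the unit D-ball. The Python sketch below does this for D = 2; the closed form is a textbook Poisson-process result quoted here as a reference value, not necessarily the exact expression derived by the authors.

      import numpy as np
      from scipy.special import gamma

      def mean_nth_neighbour_mc(n, dim, density=1.0, n_trials=2000, box=10.0, seed=0):
          """Monte Carlo mean distance to the n-th neighbour of a reference point
          at the centre of a box filled with uniformly random points of the given
          density (box chosen large enough that edge effects are negligible)."""
          rng = np.random.default_rng(seed)
          n_points = rng.poisson(density * box ** dim, size=n_trials)
          centre = np.full(dim, box / 2.0)
          dists = []
          for k in n_points:
              pts = rng.uniform(0.0, box, size=(k, dim))
              d = np.sort(np.linalg.norm(pts - centre, axis=1))
              dists.append(d[n - 1])
          return float(np.mean(dists))

      def mean_nth_neighbour_exact(n, dim, density=1.0):
          """Poisson-process formula: Gamma(n + 1/D) / (Gamma(n) * (rho*V_D)^(1/D))."""
          v_d = np.pi ** (dim / 2.0) / gamma(dim / 2.0 + 1.0)
          return gamma(n + 1.0 / dim) / (gamma(n) * (density * v_d) ** (1.0 / dim))

      for n in (1, 3, 5):
          print(n, round(mean_nth_neighbour_mc(n, dim=2), 3),
                round(mean_nth_neighbour_exact(n, dim=2), 3))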

  15. Feedback shift register sequences versus uniformly distributed random sequences for correlation chromatography

    NASA Technical Reports Server (NTRS)

    Kaljurand, M.; Valentin, J. R.; Shao, M.

    1996-01-01

    Two alternative input sequences are commonly employed in correlation chromatography (CC): sequences derived according to the feedback shift register algorithm (i.e., pseudo-random binary sequences, PRBS) and uniformly distributed random binary sequences (URBS). These two types of sequences are compared. By applying the "cleaning" data processing technique to the correlograms that result from these sequences, we show that the S/N of the correlogram obtained with a PRBS is much higher than that obtained with a URBS.
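
    A PRBS of the kind referred to above is produced by a linear feedback shift register. The Python sketch below implements a generic Fibonacci LFSR; the 7-stage register with taps at stages 7 and 6 (period 2^7 − 1 = 127) is an illustrative choice, not the sequence length actually used in the chromatographic experiments.

      def prbs7(length=127, seed=0b1111111):
          # Maximal-length pseudo-random binary sequence from a 7-stage Fibonacci
          # LFSR (feedback taps at stages 7 and 6, period 2**7 - 1 = 127).
          # Illustrative register length only.
          state = seed & 0x7F
          out = []
          for _ in range(length):
              newbit = ((state >> 6) ^ (state >> 5)) & 1   # XOR of stages 7 and 6
              out.append(newbit)
              state = ((state << 1) | newbit) & 0x7F       # shift left, feed back
          return out

      seq = prbs7()
      print(sum(seq), "ones in one period of", len(seq), "bits")   # 64 ones, 63 zeros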

  16. Pattern selection and super-patterns in the bounded confidence model

    DOE PAGES

    Ben-Naim, E.; Scheel, A.

    2015-10-26

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. Furthermore, the spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.

  17. Role of corticosteroid as a prophylactic measure in fat embolism syndrome: a literature review.

    PubMed

    Sen, Ramesh K; Tripathy, Sujit K; Krishnan, Vibhu

    2012-06-01

    Despite a number of studies on steroid therapy as a prophylactic measure in fat embolism syndrome (FES), there is no universal agreement about its role in this critical situation. The present article surveys the available literature and provides a clearer picture of this issue for readers. Seven articles (483 patients in total) were reviewed and analyzed. A total of 223 patients received steroid (methylprednisolone sodium succinate), while the remaining 260 patients formed the control population. Among these subjects, 9 patients in the steroid-receiving group and 60 patients in the control group developed FES (P < 0.05). The lack of uniformity among these studies, variable dosing, and single-center designs are the principal limitations and prevent surgeons from drawing a definite conclusion. Large-scale, more uniformly designed, multi-centered, randomized, prospective trials are needed to determine the situations and dosages in which steroids provide the maximum benefit (with the least possible risk).

  18. Pattern selection and super-patterns in the bounded confidence model

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Scheel, A.

    2015-10-01

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.

  19. Turbulent, Extreme Multi-zone Model for Simulating Flux and Polarization Variability in Blazars

    NASA Astrophysics Data System (ADS)

    Marscher, Alan P.

    2014-01-01

    The author presents a model for variability of the flux and polarization of blazars in which turbulent plasma flowing at a relativistic speed down a jet crosses a standing conical shock. The shock compresses the plasma and accelerates electrons to energies up to γmax ≳ 10^4 times their rest-mass energy, with the value of γmax determined by the direction of the magnetic field relative to the shock front. The turbulence is approximated in a computer code as many cells, each with a uniform magnetic field whose direction is selected randomly. The density of high-energy electrons in the plasma changes randomly with time in a manner consistent with the power spectral density of flux variations derived from observations of blazars. The variations in flux and polarization are therefore caused by continuous noise processes rather than by singular events such as explosive injection of energy at the base of the jet. Sample simulations illustrate the behavior of flux and linear polarization versus time that such a model produces. The variations in γ-ray flux generated by the code are often, but not always, correlated with those at lower frequencies, and many of the flares are sharply peaked. The mean degree of polarization of synchrotron radiation is higher and its timescale of variability shorter toward higher frequencies, while the polarization electric vector sometimes randomly executes apparent rotations. The slope of the spectral energy distribution exhibits sharper breaks than can arise solely from energy losses. All of these results correspond to properties observed in blazars.

  20. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

    PubMed

    Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

    2011-03-01

    During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n that partially overlaps a fixed polygon forms a link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.

  1. Stochastic transport in the presence of spatial disorder: Fluctuation-induced corrections to homogenization

    NASA Astrophysics Data System (ADS)

    Russell, Matthew J.; Jensen, Oliver E.; Galla, Tobias

    2016-10-01

    Motivated by uncertainty quantification in natural transport systems, we investigate an individual-based transport process involving particles undergoing a random walk along a line of point sinks whose strengths are themselves independent random variables. We assume particles are removed from the system via first-order kinetics. We analyze the system using a hierarchy of approaches when the sinks are sparsely distributed, including a stochastic homogenization approximation that yields explicit predictions for the extrinsic disorder in the stationary state due to sink strength fluctuations. The extrinsic noise induces long-range spatial correlations in the particle concentration, unlike fluctuations due to the intrinsic noise alone. Additionally, the mean concentration profile, averaged over both intrinsic and extrinsic noise, is elevated compared with the corresponding profile from a uniform sink distribution, showing that the classical homogenization approximation can be a biased estimator of the true mean.

  2. Interpretation of Time Series from Nonlinear Systems. Volume 58. Proceedings of the IUTAM Symposium and NATO Advanced Research Workshop on the Interpretation of Time Series from Nonlinear Mechanical Systems Held in England on 26 - 30 August 1991,

    DTIC Science & Technology

    1992-01-01

    VM and the correlation entropy K,(M) versus the embedding dimension M for both the linear and non-linear signals. Crosses refer to the linear signal...mensions, leading to a correlation dimension v=2.7. A similar structure was observed bv Voges et al. [461 in the analysis of the X-ray variability of...0 + 7 1j, and its recurrence plots often indicates whether a where A 0 = 10 and 71, is uniformly random dis- meaningful correlation integral analysis

  3. Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1992-01-01

    In the present growth-rate calculations for a protoplanet that is embedded in a disk of planetesimals with triaxial Gaussian velocity dispersion and uniform surface density, the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are equal to the rms values of those disk variables having locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.

  4. The contribution of simple random sampling to observed variations in faecal egg counts.

    PubMed

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasite diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided.
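
    The width of the confidence intervals implied by the Poisson argument above is easy to compute. The Python sketch below gives the exact (Garwood) Poisson interval for a raw McMaster slide count and converts it to eggs per gram; the multiplication factor of 50 is a commonly used value and stands in here for whatever dilution factor a given protocol actually uses.

      from scipy import stats

      def mcmaster_epg_ci(eggs_counted, multiplication_factor=50, confidence=0.95):
          """Exact Poisson confidence interval for a McMaster faecal egg count.

          `eggs_counted` is the raw number of eggs seen on the slide;
          `multiplication_factor` converts that count to eggs per gram (50 is a
          common value, but it depends on the dilution actually used)."""
          alpha = 1.0 - confidence
          if eggs_counted == 0:
              lower = 0.0
          else:
              lower = stats.chi2.ppf(alpha / 2.0, 2 * eggs_counted) / 2.0
          upper = stats.chi2.ppf(1.0 - alpha / 2.0, 2 * (eggs_counted + 1)) / 2.0
          return (eggs_counted * multiplication_factor,
                  lower * multiplication_factor,
                  upper * multiplication_factor)

      # Ten eggs on the slide -> 500 epg, but the 95% interval is roughly 240-920 epg.
      print(mcmaster_epg_ci(10))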

  5. Emergence of an optimal search strategy from a simple random walk

    PubMed Central

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-01-01

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445
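
    As a point of reference for the result above, the baseline walker, a fixed step length with a fresh uniformly random heading at every step, can be simulated directly; its mean squared displacement grows linearly in time (normal diffusion), which is what the authors' adaptive rule is reported to improve upon. The Python sketch below shows only this baseline, not the experience-dependent rule itself.

      import numpy as np

      def simple_random_walk(n_steps=10_000, step_length=1.0, seed=0):
          """Baseline walker: at every step a new heading is drawn uniformly at
          random and a fixed-length step is taken in that direction."""
          rng = np.random.default_rng(seed)
          angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
          steps = step_length * np.column_stack((np.cos(angles), np.sin(angles)))
          return np.cumsum(steps, axis=0)

      # Mean squared displacement grows linearly in time for this baseline
      # (normal diffusion); the adaptive walker is reported to super-diffuse.
      paths = np.stack([simple_random_walk(seed=s) for s in range(200)])
      msd = (paths ** 2).sum(axis=2).mean(axis=0)
      print("MSD ratio t=10000 vs t=1000:", round(float(msd[-1] / msd[999]), 2))  # ~10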

  6. Emergence of an optimal search strategy from a simple random walk.

    PubMed

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.

  7. Impact of Temporally Variable and Uniform Pumping Regimes on Contaminant Transport in Heterogeneous Aquifers

    NASA Astrophysics Data System (ADS)

    Libera, A.; de Barros, F.; Guadagnini, A.

    2015-12-01

    We study and compare the effect of temporally variable and uniform pumping regimes on key features of contaminant transport in a randomly heterogeneous aquifer. Pumping wells are used for groundwater supply in the context of urban, agricultural, and industrial activities. Groundwater management agencies typically schedule groundwater extraction through a predefined sequence of pumping periods to balance benefits to anthropogenic activities and environmental needs. The impact of the spatial variability of aquifer hydraulic properties, such as hydraulic conductivity, on contaminant transport and associated solute residence times is widely studied. Only a limited number of studies address the way a given pumping schedule affects contaminant plume behavior in heterogeneous aquifers. In this context, the feedback between a transient pumping regime and contaminant breakthrough curves is largely unexplored. Our goal is to investigate the way diverse groundwater extraction strategies affect the history of solute concentration recovered at the well while accounting for the natural variability of the geological system, in the presence of incomplete information on hydraulic conductivity distribution. Considering the joint effects of spatially heterogeneous hydraulic conductivity and temporally varying well pumping rates, this work offers a realistic evaluation of groundwater contamination risk. The latter is here considered in the context of human health and is quantified in terms of the probability that harm will result from exposure to a contaminant found in groundwater. Two scenarios are considered: a pumping well that extracts a given amount of water operating (a) at a constant pumping rate and (b) under transient conditions. The analysis is performed within a numerical Monte Carlo framework. We probe the impact of diverse geostatistical structures used to describe aquifer heterogeneity on solute breakthrough curves and the statistics of target environmental performance metrics, including, e.g., peak concentration and the time at which peak breakthrough at the well occurs.

  8. The Association between Tax Structure and Cigarette Price Variability: Findings from the International Tobacco Control Policy Evaluation (ITC) Project

    PubMed Central

    Shang, Ce; Chaloupka, Frank J.; Fong, Geoffrey T; Thompson, Mary; O’Connor, Richard J

    2015-01-01

    Background Recent studies have shown that more opportunities exist for tax avoidance when cigarette excise tax structure departs from a uniform specific structure. However, the association between tax structure and cigarette price variability has not been thoroughly studied in the existing literature. Objective To examine how cigarette tax structure is associated with price variability. The variability of self-reported prices is measured using the ratios of differences between higher and lower prices to the median price such as the IQR-to-median ratio. Methods We used survey data taken from the International Tobacco Control Policy Evaluation (ITC) Project in 17 countries to conduct the analysis. Cigarette prices were derived using individual purchase information and aggregated to price variability measures for each surveyed country and wave. The effect of tax structures on price variability was estimated using Generalised Estimating Equations after adjusting for year and country attributes. Findings Our study provides empirical evidence of a relationship between tax structure and cigarette price variability. We find that, compared to the specific uniform tax structure, mixed uniform and tiered (specific, ad valorem or mixed) structures are associated with greater price variability (p≤0.01). Moreover, while a greater share of the specific component in total excise taxes is associated with lower price variability (p≤0.05), a tiered tax structure is associated with greater price variability (p≤0.01). The results suggest that a uniform and specific tax structure is the most effective tax structure for reducing tobacco consumption and prevalence by limiting price variability and decreasing opportunities for tax avoidance. PMID:25855641

  9. Expert Assessment of Stigmergy: A Report for the Department of National Defence

    DTIC Science & Technology

    2005-10-01

    pheromone table may be reduced by implementing a clustering scheme. Termite can take advantage of the wireless broadcast medium, since it is possible for...comparing it with any other routing scheme. The Termite scheme [RW] differs from the source routing [ITT] by applying pheromone trails or random walks...rather than uniform or probabilistic ones. Random walk ants differ from uniform ants since they follow pheromone trails, if any. Termite [RW] also

  10. Influence of travel speed on spray deposition uniformity from an air-assisted variable-rate sprayer

    USDA-ARS?s Scientific Manuscript database

    A newly developed LiDAR-guided air-assisted variable-rate sprayer for nursery and orchard applications was tested at various travel speeds to compare its spray deposition and coverage uniformity with constant-rate applications. Spray samplers, including nylon screens and water-sensitive papers (WSP)...

  11. Assessing application uniformity of a variable rate irrigation system in a windy location

    USDA-ARS?s Scientific Manuscript database

    Variable rate irrigation (VRI) systems are commercially available and can easily be retrofitted onto moving sprinkler systems. However, there are few reports on the application performance of such equipment. In this study, application uniformity of two center pivots equipped with a commercial VRI sy...

  12. Bubble Detachment in Variable Gravity Under the Influence of a Non-Uniform Electric Field

    NASA Technical Reports Server (NTRS)

    Chang, Shinan; Herman, Cila; Iacona, Estelle

    2002-01-01

    The objective of the study reported in this paper is to investigate the effects of variable, reduced gravity on the formation and detachment behavior of individual air bubbles under the influence of a non-uniform electric field. For this purpose, variable gravity experiments were carried out in parabolic flights. The non-uniform electric field was generated by a spherical electrode and a plate electrode. The effect of the magnitude of the non-uniform electric field and gravity level on bubble formation, development and detachment at an orifice was investigated. An image processing code was developed that allows the measurement of bubble volume, dimensions and contact angle at detachment. The results of this research can be used to explore the possibility of enhancing boiling heat transfer in the variable and low gravity environments by substituting the buoyancy force with a force induced by the electric field. The results of experiments and measurements indicate that the level of gravity significantly affects bubble shape, size and frequency. The electric field magnitude also influences bubble detachment, however, its impact is not as profound as that of variable gravity for the range of electric field magnitudes investigated in the present study.

  13. Impact of AlOx layer on resistive switching characteristics and device-to-device uniformity of bilayered HfOx-based resistive random access memory devices

    NASA Astrophysics Data System (ADS)

    Chuang, Kai-Chi; Chung, Hao-Tung; Chu, Chi-Yan; Luo, Jun-Dao; Li, Wei-Shuo; Li, Yi-Shao; Cheng, Huang-Chung

    2018-06-01

    An AlOx layer was deposited on HfOx, and the bilayered dielectric films were found to confine the formation locations of conductive filaments (CFs) during the forming process and thereby improve device-to-device uniformity. In addition, a Ti interposing layer was adopted to facilitate the formation of oxygen vacancies. As a result, the resistive random access memory (RRAM) device with TiN/Ti/AlOx (1 nm)/HfOx (6 nm)/TiN stack layers demonstrated excellent device-to-device uniformity, although it required slightly larger switching voltages, namely a forming voltage (VForming) of 2.08 V, a set voltage (VSet) of 1.96 V, and a reset voltage (VReset) of −1.02 V, than the device with TiN/Ti/HfOx (6 nm)/TiN stack layers. However, the device with a thicker 2-nm AlOx layer showed worse uniformity than the 1-nm one. This was attributed to the increased oxygen atomic percentage in the bilayered dielectric films of the 2-nm device: the higher oxygen content implies fewer oxygen vacancies available to form CFs, so the growth of CFs becomes more random and the device-to-device uniformity degrades.

  14. The association between tax structure and cigarette price variability: findings from the ITC Project.

    PubMed

    Shang, Ce; Chaloupka, Frank J; Fong, Geoffrey T; Thompson, Mary; O'Connor, Richard J

    2015-07-01

    Recent studies have shown that more opportunities exist for tax avoidance when cigarette excise tax structure departs from a uniform specific structure. However, the association between tax structure and cigarette price variability has not been thoroughly studied in the existing literature. To examine how cigarette tax structure is associated with price variability. The variability of self-reported prices is measured using the ratios of differences between higher and lower prices to the median price such as the IQR-to-median ratio. We used survey data taken from the International Tobacco Control Policy Evaluation (ITC) Project in 17 countries to conduct the analysis. Cigarette prices were derived using individual purchase information and aggregated to price variability measures for each surveyed country and wave. The effect of tax structures on price variability was estimated using Generalised Estimating Equations after adjusting for year and country attributes. Our study provides empirical evidence of a relationship between tax structure and cigarette price variability. We find that, compared to the specific uniform tax structure, mixed uniform and tiered (specific, ad valorem or mixed) structures are associated with greater price variability (p≤0.01). Moreover, while a greater share of the specific component in total excise taxes is associated with lower price variability (p≤0.05), a tiered tax structure is associated with greater price variability (p≤0.01). The results suggest that a uniform and specific tax structure is the most effective tax structure for reducing tobacco consumption and prevalence by limiting price variability and decreasing opportunities for tax avoidance.

  15. Response of moderately thick laminated cross-ply composite shells subjected to random excitation

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaak; Cederbaum, Gabriel; Librescu, Liviu

    1989-01-01

    This study deals with the dynamic response of transverse shear deformable laminated shells subjected to random excitation. The analysis encompasses the following problems: (1) the dynamic response of circular cylindrical shells of finite length excited by an axisymmetric uniform ring loading, stationary in time, and (2) the response of spherical and cylindrical panels subjected to stationary random loadings with uniform spatial distribution. The associated equations governing the structural theory of shells are derived upon discarding the classical Love-Kirchhoff (L-K) assumptions. In this sense, the theory is formulated in the framework of the first-order transverse shear deformation theory (FSDT).

  16. A statistical model for radar images of agricultural scenes

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.

  17. How do formulation and process parameters impact blend and unit dose uniformity? Further analysis of the product quality research institute blend uniformity working group industry survey.

    PubMed

    Hancock, Bruno C; Garcia-Munoz, Salvador

    2013-03-01

    Responses from the second Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) survey of industry have been reanalyzed to identify potential links between formulation and processing variables and the measured uniformity of blends and unit dosage forms. As expected, the variability of the blend potency and tablet potency data increased with a decrease in the loading of the active pharmaceutical ingredient (API). There was also an inverse relationship between the nominal strength of the unit dose and the blend uniformity data. The data from the PQRI industry survey do not support the commonly held viewpoint that granulation processes are necessary to create and sustain tablet and capsule formulations with a high degree of API uniformity. There was no correlation between the blend or tablet potency variability and the type of process used to manufacture the product. Although it is commonly believed that direct compression processes should be avoided for low API loading formulations because of blend and tablet content uniformity concerns, the data for direct compression processes reported by the respondents to the PQRI survey suggest that such processes are being used routinely to manufacture solid dosage forms of acceptable quality even when the drug loading is quite low.

  18. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient inversion algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.

  19. Cluster pattern analysis of energy deposition sites for the brachytherapy sources 103Pd, 125I, 192Ir, 137Cs, and 60Co.

    PubMed

    Villegas, Fernanda; Tilly, Nina; Bäckström, Gloria; Ahnesjö, Anders

    2014-09-21

    Analysing the pattern of energy depositions may help elucidate differences in the severity of radiation-induced DNA strand breakage for different radiation qualities. It is often claimed that energy deposition (ED) sites from photon radiation form a uniform random pattern, but there is indication of differences in RBE values among different photon sources used in brachytherapy. The aim of this work is to analyse the spatial patterns of EDs from 103Pd, 125I, 192Ir, 137Cs sources commonly used in brachytherapy and a 60Co source as a reference radiation. The results suggest that there is both a non-uniform and a uniform random component to the frequency distribution of distances to the nearest neighbour ED. The closest neighbouring EDs show high spatial correlation for all investigated radiation qualities, whilst the uniform random component dominates for neighbours with longer distances for the three higher mean photon energy sources (192Ir, 137Cs, and 60Co). The two lower energy photon emitters (103Pd and 125I) present a very small uniform random component. The ratio of frequencies of clusters with respect to 60Co differs up to 15% for the lower energy sources and less than 2% for the higher energy sources when the maximum distance between each pair of EDs is 2 nm. At distances relevant to DNA damage, cluster patterns can be differentiated between the lower and higher energy sources. This may be part of the explanation to the reported difference in RBE values with initial DSB yields as an endpoint for these brachytherapy sources.
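
    The kind of nearest-neighbour analysis described above can be prototyped on synthetic point sets. The Python sketch below mixes a clustered component (parent points with Gaussian-scattered offspring) with a uniform random component and compares the observed mean nearest-neighbour distance with the Poisson-process expectation 0.554 ρ^(−1/3); all geometry and mixture parameters are illustrative stand-ins, not the track-structure data analysed in the paper.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      box = 100.0                                              # illustrative cube edge, nm
      n_uniform, n_parents, kids, spread = 1000, 100, 10, 2.0  # hypothetical mixture

      # Clustered component (offspring scattered around parents) plus a uniform
      # random component -- a toy stand-in for the two contributions the paper
      # identifies in the energy-deposition patterns.
      parents = rng.uniform(0.0, box, size=(n_parents, 3))
      clustered = (np.repeat(parents, kids, axis=0)
                   + rng.normal(0.0, spread, size=(n_parents * kids, 3)))
      uniform = rng.uniform(0.0, box, size=(n_uniform, 3))
      points = np.vstack((clustered, uniform))

      # Nearest-neighbour distances vs the fully uniform (Poisson) expectation.
      nn = cKDTree(points).query(points, k=2)[0][:, 1]
      rho = len(points) / box ** 3
      expected_mean_nn = 0.554 / rho ** (1.0 / 3.0)   # mean NN distance, Poisson process
      print(f"observed mean NN distance {nn.mean():.2f} "
            f"vs Poisson expectation {expected_mean_nn:.2f}")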

  20. Cluster pattern analysis of energy deposition sites for the brachytherapy sources 103Pd, 125I, 192Ir, 137Cs, and 60Co

    NASA Astrophysics Data System (ADS)

    Villegas, Fernanda; Tilly, Nina; Bäckström, Gloria; Ahnesjö, Anders

    2014-09-01

    Analysing the pattern of energy depositions may help elucidate differences in the severity of radiation-induced DNA strand breakage for different radiation qualities. It is often claimed that energy deposition (ED) sites from photon radiation form a uniform random pattern, but there is indication of differences in RBE values among different photon sources used in brachytherapy. The aim of this work is to analyse the spatial patterns of EDs from 103Pd, 125I, 192Ir, 137Cs sources commonly used in brachytherapy and a 60Co source as a reference radiation. The results suggest that there is both a non-uniform and a uniform random component to the frequency distribution of distances to the nearest neighbour ED. The closest neighbouring EDs show high spatial correlation for all investigated radiation qualities, whilst the uniform random component dominates for neighbours with longer distances for the three higher mean photon energy sources (192Ir, 137Cs, and 60Co). The two lower energy photon emitters (103Pd and 125I) present a very small uniform random component. The ratio of frequencies of clusters with respect to 60Co differs up to 15% for the lower energy sources and less than 2% for the higher energy sources when the maximum distance between each pair of EDs is 2 nm. At distances relevant to DNA damage, cluster patterns can be differentiated between the lower and higher energy sources. This may be part of the explanation to the reported difference in RBE values with initial DSB yields as an endpoint for these brachytherapy sources.

  1. Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps

    NASA Astrophysics Data System (ADS)

    Brooks, E. M.; Stein, S. A.; Spencer, B. D.

    2015-12-01

    The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered by arguing that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well these maps actually perform is unknown. We explore this hotly-contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform and why would be valuable in making more effective policy.

  2. Radio Occultation Investigation of the Rings of Saturn and Uranus

    NASA Technical Reports Server (NTRS)

    Marouf, Essam A.

    1997-01-01

    The proposed work addresses two main objectives: (1) to pursue the development of the random diffraction screen model for analytical/computational characterization of the extinction and near-forward scattering by ring models that include particle crowding, uniform clustering, and clustering along preferred orientations (anisotropy). The characterization is crucial for proper interpretation of past (Voyager) and future (Cassini) ring occultation observations in terms of physical ring properties, and is needed to address outstanding puzzles in the interpretation of the Voyager radio occultation data sets; (2) to continue the development of spectral analysis techniques to identify and characterize the power scattered by all features of Saturn's rings that can be resolved in the Voyager radio occultation observations, and to use the results to constrain the maximum particle size and its abundance. Characterization of the variability of surface mass density among the main ring features and within individual features is important for constraining the ring mass and is relevant to investigations of ring dynamics and origin. We completed the development of the stochastic geometry (random screen) model for the interaction of electromagnetic waves with planetary ring models, and used the model to relate the oblique optical depth and the angular spectrum of the near-forward scattered signal to statistical averages of the stochastic geometry of the randomly blocked area. We developed analytical results based on the assumption of Poisson statistics for particle positions, and investigated the dependence of the oblique optical depth and angular spectrum on the fractional area blocked, vertical ring profile, and incidence angle when the volume fraction is small, demonstrating agreement with the classical radiative transfer predictions for oblique incidence. We also developed simulation procedures to generate statistical realizations of random screens corresponding to uniformly packed ring models, and used the results to characterize the dependence of the extinction and near-forward scattering on ring thickness, packing fraction, and the ring opening angle.

  3. A Bayesian Approach to the Paleomagnetic Conglomerate Test

    NASA Astrophysics Data System (ADS)

    Heslop, David; Roberts, Andrew P.

    2018-02-01

    The conglomerate test has served the paleomagnetic community for over 60 years as a means to detect remagnetizations. The test states that if a suite of clasts within a bed have uniformly random paleomagnetic directions, then the conglomerate cannot have experienced a pervasive event that remagnetized the clasts in the same direction. The current form of the conglomerate test is based on null hypothesis testing, which results in a binary "pass" (uniformly random directions) or "fail" (nonrandom directions) outcome. We have recast the conglomerate test in a Bayesian framework with the aim of providing more information concerning the level of support a given data set provides for a hypothesis of uniformly random paleomagnetic directions. Using this approach, we place the conglomerate test in a fully probabilistic framework that allows for inconclusive results when insufficient information is available to draw firm conclusions concerning the randomness or nonrandomness of directions. With our method, sample sets larger than those typically employed in paleomagnetism may be required to achieve strong support for a hypothesis of random directions. Given the potentially detrimental effect of unrecognized remagnetizations on paleomagnetic reconstructions, it is important to provide a means to draw statistically robust data-driven inferences. Our Bayesian analysis provides a means to do this for the conglomerate test.

  4. An investigation of the internal and external aerodynamics of cattle trucks

    NASA Technical Reports Server (NTRS)

    Muirhead, V. U.

    1983-01-01

    Wind tunnel tests were conducted on a one-tenth scale model of a conventional tractor trailer livestock hauler to determine the air flow through the trailer and the drag of the vehicle. These tests were conducted with the trailer empty and with a full load of simulated cattle. Additionally, the drag was determined for six configurations, of which details for three are documented herein. These are: (1) conventional livestock trailer empty, (2) conventional trailer with smooth sides (i.e., without ventilation openings), and (3) a streamlined tractor with a modified livestock trailer (cab streamlining and gap fairing). The internal flow of the streamlined modification with simulated cattle was determined with two different ducting systems: a ram air inlet over the cab and NACA submerged inlets between the cab and trailer. The air flow within the conventional trailer was random and variable. The streamlined vehicle with the ram air inlet provided a nearly uniform air flow which could be controlled. The streamlined vehicle with NACA submerged inlets provided better flow conditions than the conventional livestock trailer but not as uniform or controllable as the ram inlet configuration.

  5. Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators

    NASA Astrophysics Data System (ADS)

    Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.

    2015-11-01

    A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of 3D fast Fourier transform (FFT), which does not scale well for RF bigger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step and is further optimized with loop unrolling and blocking. RAFT can easily generate RF on non-regular (non-uniform) meshes and efficiently produce fields with mesh sizes bigger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RF with the correct statistical behavior and is tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Phi. RAFT is faster than the traditional methods on regular meshes and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.

  6. Investigating the origins of high multilevel resistive switching in forming free Ti/TiO2-x-based memory devices through experiments and simulations

    NASA Astrophysics Data System (ADS)

    Bousoulas, P.; Giannopoulos, I.; Asenov, P.; Karageorgiou, I.; Tsoukalas, D.

    2017-03-01

    Although multilevel capability is probably the most important property of resistive random access memory (RRAM) technology, it is vulnerable to reliability issues due to the stochastic nature of conducting filament (CF) creation. As a result, the various resistance states cannot be clearly distinguished, which leads to memory capacity failure. In this work, due to the gradual resistance switching pattern of TiO2-x-based RRAM devices, we demonstrate at least six resistance states with distinct memory margin and promising temporal variability. It is shown that the formation of small CFs with high density of oxygen vacancies enhances the uniformity of the switching characteristics in spite of the random nature of the switching effect. Insight into the origin of the gradual resistance modulation mechanisms is gained by the application of a trap-assisted-tunneling model together with numerical simulations of the filament formation physical processes.

  7. Improved Results for Route Planning in Stochastic Transportation Networks

    NASA Technical Reports Server (NTRS)

    Boyan, Justin; Mitzenmacher, Michael

    2000-01-01

    In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms in the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent, geometrically distributed discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.

  8. Evidentiary Pluralism as a Strategy for Research and Evidence-Based Practice in Rehabilitation Psychology

    PubMed Central

    Tucker, Jalie A.; Reed, Geoffrey M.

    2008-01-01

    This paper examines the utility of evidentiary pluralism, a research strategy that selects methods in service of content questions, in the context of rehabilitation psychology. Hierarchical views that favor randomized controlled clinical trials (RCTs) over other evidence are discussed, and RCTs are considered as they intersect with issues in the field. RCTs are vital for establishing treatment efficacy, but whether they are uniformly the best evidence to inform practice is critically evaluated. We argue that because treatment is only one of several variables that influence functioning, disability, and participation over time, an expanded set of conceptual and data analytic approaches should be selected in an informed way to support an expanded research agenda that investigates therapeutic and extra-therapeutic influences on rehabilitation processes and outcomes. The benefits of evidentiary pluralism are considered, including helping close the gap between the narrower clinical rehabilitation model and a public health disability model. KEY WORDS: evidence-based practice, evidentiary pluralism, rehabilitation psychology, randomized controlled trials PMID:19649150

  9. Enhanced hyperuniformity from random reorganization.

    PubMed

    Hexner, Daniel; Chaikin, Paul M; Levine, Dov

    2017-04-25

    Diffusion relaxes density fluctuations toward a uniform random state whose variance in regions of volume [Formula: see text] scales as [Formula: see text] Systems whose fluctuations decay faster, [Formula: see text] with [Formula: see text], are called hyperuniform. The larger [Formula: see text], the more uniform, with systems like crystals achieving the maximum value: [Formula: see text] Although finite temperature equilibrium dynamics will not yield hyperuniform states, driven, nonequilibrium dynamics may. Such is the case, for example, in a simple model where overlapping particles are each given a small random displacement. Above a critical particle density [Formula: see text], the system evolves forever, never finding a configuration where no particles overlap. Below [Formula: see text], however, it eventually finds such a state, and stops evolving. This "absorbing state" is hyperuniform up to a length scale [Formula: see text], which diverges at [Formula: see text] An important question is whether hyperuniformity survives noise and thermal fluctuations. We find that hyperuniformity of the absorbing state is not only robust against noise, diffusion, or activity, but that such perturbations reduce fluctuations toward their limiting behavior, [Formula: see text], a uniformity similar to random close packing and early universe fluctuations, but with arbitrary controllable density.

  10. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) by variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
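
    The role of the long-tailed perturbations can be illustrated with a toy version of MBH. The sketch below is a simplification, not the trajectory-optimization code of the paper: the Rastrigin test function, the crude local search, and all settings are stand-ins. It hops from the incumbent best solution with either uniform or Cauchy perturbations and accepts only improvements, which is the "monotonic" part of MBH.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    # Multimodal test function standing in for a trajectory-optimization cost
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def local_search(f, x0, step=0.05, iters=200):
    # Crude stochastic hill-climber standing in for the inner NLP solver of MBH
    x, fx = x0.copy(), f(x0)
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, size=x.size)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def mbh(f, dim=5, hops=300, perturb="uniform", scale=0.5):
    best_x, best_f = local_search(f, rng.uniform(-5, 5, dim))
    for _ in range(hops):
        if perturb == "uniform":
            step = rng.uniform(-scale, scale, dim)
        else:                                   # long-tailed Cauchy perturbation
            step = scale * rng.standard_cauchy(dim)
        xc, fc = local_search(f, best_x + step)
        if fc < best_f:                         # monotonic acceptance: keep only improvements
            best_x, best_f = xc, fc
    return best_f

print("best cost, uniform hops:", mbh(rastrigin, perturb="uniform"))
print("best cost, Cauchy hops :", mbh(rastrigin, perturb="cauchy"))
```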

  11. Improving Unipolar Resistive Switching Uniformity with Cone-Shaped Conducting Filaments and Its Logic-In-Memory Application.

    PubMed

    Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei

    2018-02-21

    Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential to construct beyond von-Neumann computers. Particularly, unipolar RRAM is more promising because its single polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of random activation of conducting filaments and consequently cannot meet the strict uniformity requirement for logic-in-memory application. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a peculiar metal cathode will react spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself can act as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments can be strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing their random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.

  12. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

    The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance in between any couple of nearby habitable planets should be about 88 light years ± 40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation.
As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. That is, the average distance between neighboring ET civilizations is about 20 times larger than the average distance between adjacent habitable planets.
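
    A minimal numerical check of the CLT argument behind the SEH can be written in a few lines. In the sketch below, the ten factor means and the 10% spread are purely illustrative (they are not Dole's actual values): ten independent uniform factors are multiplied, the log of the product comes out approximately Gaussian (so the product itself is approximately lognormal), and the mean of the product equals the product of the factor means, as in the statistical Dole equation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Ten positive factors, each uniformly distributed about an illustrative mean with ~10%
# relative standard deviation, standing in for the factors of the statistical Dole equation.
means = np.array([0.8, 2.0, 1.5, 0.5, 3.0, 1.2, 0.9, 2.5, 0.7, 1.1])
half_width = np.sqrt(3) * 0.10 * means        # uniform half-width giving a 10% std dev

n_draws = 100_000
factors = rng.uniform(means - half_width, means + half_width, size=(n_draws, means.size))
n_hab = factors.prod(axis=1)                  # analogue of the N_Hab random variable

# Independence makes E[product] equal to the product of the means
print("mean of product :", n_hab.mean())
print("product of means:", means.prod())

# CLT applied to log(N_Hab) = sum of independent log-factors -> roughly Gaussian,
# i.e. N_Hab itself is approximately lognormal
log_n = np.log(n_hab)
print("skew / excess kurtosis of log(N_Hab):", stats.skew(log_n), stats.kurtosis(log_n))
print("skew of N_Hab itself                :", stats.skew(n_hab))
```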

  13. Variability in seeds: biological, ecological, and agricultural implications.

    PubMed

    Mitchell, Jack; Johnston, Iain G; Bassel, George W

    2017-02-01

    Variability is observed in biology across multiple scales, ranging from populations, individuals, and cells to the molecular components within cells. This review explores the sources and roles of this variability across these scales, focusing on seeds. From a biological perspective, the role and the impact this variability has on seed behaviour and adaptation to the environment is discussed. The consequences of seed variability on agricultural production systems, which demand uniformity, are also examined. We suggest that by understanding the basis and underlying mechanisms of variability in seeds, strategies to increase seed population uniformity can be developed, leading to enhanced agricultural production across variable climatic conditions. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  14. Random myosin loss along thick-filaments increases myosin attachment time and the proportion of bound myosin heads to mitigate force decline in skeletal muscle

    PubMed Central

    Tanner, Bertrand C.W.; McNabb, Mark; Palmer, Bradley M.; Toth, Michael J.; Miller, Mark S.

    2014-01-01

    Diminished skeletal muscle performance with aging, disuse, and disease may be partially attributed to the loss of myofilament proteins. Several laboratories have found a disproportionate loss of myosin protein content relative to other myofilament proteins, but due to methodological limitations, the structural manifestation of this protein loss is unknown. To investigate how variations in myosin content affect ensemble cross-bridge behavior and force production we simulated muscle contraction in the half-sarcomere as myosin was removed either i) uniformly, from the Z-line end of thick-filaments, or ii) randomly, along the length of thick-filaments. Uniform myosin removal decreased force production, showing a slightly steeper force-to-myosin content relationship than the 1:1 relationship that would be expected from the loss of cross-bridges. Random myosin removal also decreased force production, but this decrease was less than observed with uniform myosin loss, largely due to increased myosin attachment time (ton) and fractional cross-bridge binding with random myosin loss. These findings support our prior observations that prolonged ton may augment force production in single fibers with randomly reduced myosin content from chronic heart failure patients. These simulations also illustrate that the pattern of myosin loss along thick-filaments influences ensemble cross-bridge behavior and maintenance of force throughout the sarcomere. PMID:24486373

  15. Development of a methodology to evaluate material accountability in pyroprocess

    NASA Astrophysics Data System (ADS)

    Woo, Seungmin

    This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and of burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are built to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Combined cross sections and time also generate axial non-uniformity, as the exponential term in the analytical solution consists of the neutron flux, cross section and time. The axial concentration distribution for a nuclide with a small cross section is steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term of the analytical solution. Similarly, the non-uniformity becomes flatter with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes and on decoupling the axial distributions from predetermined representative radial distributions by matching the axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are appropriately modified to depict processing of materials in the head-end stage of the pyroprocess, that is, chopping, voloxidation and granulation. The expectation and standard deviation of the Pu-to-244Cm ratio obtained by single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Then, uncertainty propagation through the key-pyroprocess is conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipt minus the shipment of a process, in the system. The random variable LOPu, defined as the original Pu mass minus the Pu mass after a missing scenario, is used to evaluate the non-detection probability at each Key Measurement Point (KMP). The number of assemblies for which LOPu reaches 8 kg is considered in this calculation. The probability of detection for the 8 kg LOPu is evaluated with respect to the granule and powder sizes using event tree analysis and hypothesis testing. There are cases in which the probability of detection for the 8 kg LOPu is less than 95%. In order to enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key-pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, the probability of detection increases significantly when larger granule samples are used to evaluate the Pu-to-244Cm ratio before the key-pyroprocess.
Based on these observations, even though Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm ratio method is applied, this can be overcome by decreasing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.)
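
    As a hedged illustration of the ratio statistic used above, the sketch below applies the Geary-Hinkley transformation to a simulated Pu-to-244Cm mass ratio with uncorrelated normal measurement errors. The masses and uncertainties are invented for illustration and are not from the dissertation; the point is only that, when the denominator is almost surely positive, the transformed ratio is close to standard normal, which is what allows the uncertainty of the measured ratio to be propagated analytically.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical measurement model: Pu and 244Cm masses in a sample treated as
# uncorrelated normal random variables; all values are purely illustrative.
mu_pu, sd_pu = 5.0, 0.15       # grams
mu_cm, sd_cm = 0.020, 0.0008   # grams

n = 200_000
pu = rng.normal(mu_pu, sd_pu, n)
cm = rng.normal(mu_cm, sd_cm, n)
ratio = pu / cm                # the Pu-to-244Cm ratio used for accountancy

# Geary-Hinkley transformation (uncorrelated case, denominator almost surely positive):
# t = (mu_cm*w - mu_pu) / sqrt(sd_cm^2*w^2 + sd_pu^2) is approximately standard normal.
t = (mu_cm * ratio - mu_pu) / np.sqrt(sd_cm**2 * ratio**2 + sd_pu**2)
print("mean, std of transformed ratio:", t.mean(), t.std())
print("KS distance to N(0,1):", stats.kstest(t, "norm").statistic)
```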

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of an RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
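
    The randomized push broadcast analysed in this record is straightforward to simulate. The following sketch (not the authors' code; n, r, and the seed are arbitrary) builds an RGG on the unit square, restricts attention to the component containing the initially informed node, and repeats the "each informed node informs a uniformly random neighbor" round until that component is fully informed, printing the round count next to the √n/r scale.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(4)

n, r = 500, 0.12                      # number of nodes and connection radius
pos = rng.uniform(0.0, 1.0, size=(n, 2))

# Adjacency lists of the random geometric graph: edge iff Euclidean distance <= r
dist2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
adj = [np.flatnonzero((dist2[i] <= r * r) & (np.arange(n) != i)) for i in range(n)]

# Component containing the initially informed node (broadcast can only reach this set)
start = 0
component, queue = {start}, deque([start])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in component:
            component.add(v)
            queue.append(v)

# Push protocol: each round, every informed node informs one uniformly random neighbor
informed, rounds = {start}, 0
while len(informed) < len(component):
    for u in list(informed):
        if len(adj[u]):
            informed.add(rng.choice(adj[u]))
    rounds += 1

print(f"component size {len(component)}, informed in {rounds} rounds "
      f"(sqrt(n)/r ~ {np.sqrt(n) / r:.0f})")
```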

  17. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007, and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained altogether for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk whose q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
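
    One of the quoted results, uniformity of the endpoint for a three-step planar walk with q = 1, can be checked directly by simulation. The sketch below (not the authors' derivation; sample size and seed are arbitrary) draws Dirichlet(1,1,1) step lengths and uniform orientations and tests whether the squared distance of the endpoint from the origin is uniform on [0,1], which is equivalent to the endpoint being uniform in the unit disc.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Three-step planar Pearson-Dirichlet walk with q = 1: step lengths ~ Dirichlet(1,1,1)
# (total length fixed to 1), orientations uniform on the circle.
n_walks, n_steps = 100_000, 3
lengths = rng.dirichlet(np.ones(n_steps), size=n_walks)        # each row sums to 1
angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_walks, n_steps))
x = (lengths * np.cos(angles)).sum(axis=1)
y = (lengths * np.sin(angles)).sum(axis=1)

# If the endpoint is uniform in the unit disc, its squared radius is uniform on [0, 1].
r2 = x**2 + y**2
print("KS test of r^2 against U(0,1):", stats.kstest(r2, "uniform"))
```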

  18. Systematic and random variations in digital Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Duggin, M. J. (Principal Investigator); Sakhavat, H.

    1985-01-01

    Radiance recorded by any remote sensing instrument will contain noise which will consist of both systematic and random variations. Systematic variations may be due to sun-target-sensor geometry, atmospheric conditions, and the interaction of the spectral characteristics of the sensor with those of upwelling radiance. Random variations in the data may be caused by variations in the nature and in the heterogeneity of the ground cover, by variations in atmospheric transmission, and by the interaction of these variations with the sensing device. It is important to be aware of the extent of random and systematic errors in recorded radiance data across ostensibly uniform ground areas in order to assess the impact on quantitative image analysis procedures for both the single date and the multidate cases. It is the intention here to examine the systematic and the random variations in digital radiance data recorded in each band by the thematic mapper over crop areas which are ostensibly uniform and which are free from visible cloud.

  19. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested for use as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
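
    One reading of the suggested construction is the probability integral transform: once the detrended, scaled counts behave like standard normal variates, applying the normal CDF turns them into uniform variates, which can then drive any inverse-transform sampler. The sketch below uses ordinary standard normal draws as a stand-in for the neutron-monitor data and illustrates this chain.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Stand-in for detrended, scaled neutron-monitor counts: standard normal variates.
# (The paper extracts these by spline-fitting the raw counts and scaling the residuals.)
z = rng.normal(0.0, 1.0, size=50_000)

# Probability integral transform: if Z ~ N(0,1), then U = Phi(Z) is uniform on (0,1).
u = stats.norm.cdf(z)
print("KS test of U against U(0,1):", stats.kstest(u, "uniform"))

# These uniforms can feed standard inverse-transform samplers, e.g. an exponential(1):
exp_samples = -np.log(1.0 - u)
print("mean of exponential(1) samples:", exp_samples.mean())
```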

  20. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
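
    The central observation, that the inter-impulse distribution is nearly independent of the noise distribution, can be reproduced with a stripped-down integrate-and-fire simulation. The sketch below is a perfect integrator without leak, so it is simpler than the model in the paper, and all constants are arbitrary; it drives the same threshold-and-reset mechanism with normal, first-order gamma, and uniform noise matched in mean and variance and compares the resulting interval statistics.

```python
import numpy as np

rng = np.random.default_rng(7)

def inter_impulse_intervals(draw, n_steps=500_000, threshold=20.0):
    """Perfect (non-leaky) integrate-and-fire: accumulate noisy input, emit an impulse
    and reset when the threshold is crossed, and return the intervals between impulses."""
    x, last_spike, intervals = 0.0, 0, []
    noise = draw(n_steps)
    for t in range(n_steps):
        x += noise[t]
        if x >= threshold:
            intervals.append(t - last_spike)
            last_spike, x = t, 0.0
    return np.array(intervals)

mean, sd = 1.0, 1.0          # identical first two moments for every noise distribution
noises = {
    "normal":  lambda n: rng.normal(mean, sd, n),
    "gamma":   lambda n: rng.gamma(1.0, 1.0, n),                          # first-order gamma
    "uniform": lambda n: rng.uniform(mean - np.sqrt(3) * sd, mean + np.sqrt(3) * sd, n),
}
for name, draw in noises.items():
    isi = inter_impulse_intervals(draw)
    print(f"{name:8s}: mean interval {isi.mean():6.2f}, CV {isi.std() / isi.mean():.3f}")
```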

  1. Population pharmacokinetics of valnemulin in swine.

    PubMed

    Zhao, D H; Zhang, Z; Zhang, C Y; Liu, Z C; Deng, H; Yu, J J; Guo, J P; Liu, Y H

    2014-02-01

    This study was carried out in 121 pigs to develop a population pharmacokinetic (PPK) model by oral (p.o.) administration of valnemulin at a single dose of 10 mg/kg. Serum biochemistry parameters of each pig were determined prior to drug administration. Three to five blood samples were collected at random time points, but uniformly distributed in the absorption, distribution, and elimination phases of drug disposition. Plasma concentrations of valnemulin were determined by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The concentration-time data were fitted to PPK models using nonlinear mixed effect modeling (NONMEM) with G77 FORTRAN compiler. NONMEM runs were executed using Wings for NONMEM. Fixed effects of weight, age, sex as well as biochemistry parameters, which may influence the PK of valnemulin, were investigated. The drug concentration-time data were adequately described by a one-compartmental model with first-order absorption. A random effect model of valnemulin revealed a pattern of log-normal distribution, and it satisfactorily characterized the observed interindividual variability. The distribution of random residual errors, however, suggested an additive model for the initial phase (<12 h) followed by a combined model that consists of both proportional and additive features (≥ 12 h), so that the intra-individual variability could be sufficiently characterized. Covariate analysis indicated that body weight had a conspicuous effect on valnemulin clearance (CL/F). The featured population PK values of Ka , V/F and CL/F were 0.292/h, 63.0 L and 41.3 L/h, respectively. © 2013 John Wiley & Sons Ltd.

  2. Pathwise upper semi-continuity of random pullback attractors along the time axis

    NASA Astrophysics Data System (ADS)

    Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke

    2018-07-01

    The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically having the form {A_t(·)}_{t ∈ R}, with each A_t(·) a random set. This paper is concerned with the nature of such time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω) for each fixed ω is equivalent to the uniform compactness of the local union ∪_{s ∈ I} A_s(ω), where I ⊂ R is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that no additional conditions are required to prove the above locally uniform compactness and upper semi-continuity, in which sense the two properties appear to be general properties satisfied by a large number of real models.

  3. Influence of the variable thermophysical properties on the turbulent buoyancy-driven airflow inside open square cavities

    NASA Astrophysics Data System (ADS)

    Zamora, Blas; Kaiser, Antonio S.

    2012-01-01

    The effects of the variable properties of air (density, viscosity and thermal conductivity) on the buoyancy-driven flows established in open square cavities are investigated, as well as the influence of the stated boundary conditions at open edges and the employed differencing scheme. Two-dimensional, laminar, transitional and turbulent simulations are obtained, considering both uniform wall temperature and uniform heat flux heating conditions. In transitional and turbulent cases, the low-Reynolds-number k-ω turbulence model is employed. The average Nusselt number and the dimensionless mass-flow rate have been obtained for a wide and not yet covered range of the Rayleigh number, varying from 10^3 to 10^16. The results obtained taking into account variable-property effects are compared with those calculated assuming constant properties and the Boussinesq approximation. For uniform heat flux heating, a correlation, not reported in previous works, is presented for the critical heating parameter above which the burnout phenomenon can occur. The effects of variable properties on the flow patterns are analyzed.

  4. Automatic identification of variables in epidemiological datasets using logic regression.

    PubMed

    Lorenz, Matthias W; Abdi, Negin Ashtiani; Scheckenbach, Frank; Pflug, Anja; Bülbül, Alpaslan; Catapano, Alberico L; Agewall, Stefan; Ezhov, Marat; Bots, Michiel L; Kiechl, Stefan; Orth, Andreas

    2017-04-13

    For an individual participant data (IPD) meta-analysis, multiple datasets must be transformed into a consistent format, e.g. using uniform variable names. When large numbers of datasets have to be processed, this can be a time-consuming and error-prone task. Automated or semi-automated identification of variables can help to reduce the workload and improve the data quality. For semi-automation, high sensitivity in the recognition of matching variables is particularly important, because it allows creating software that, for a target variable, presents a choice of source variables from which a user can choose the matching one, with only low risk of having missed a correct source variable. For each variable in a set of target variables, a number of simple rules were manually created. With logic regression, an optimal Boolean combination of these rules was sought for every target variable, using a random subset of a large database of epidemiological and clinical cohort data (construction subset). In a second subset of this database (validation subset), these optimal rule combinations were validated. In the construction sample, 41 target variables were allocated on average with a positive predictive value (PPV) of 34%, and a negative predictive value (NPV) of 95%. In the validation sample, PPV was 33%, whereas NPV remained at 94%. In the construction sample, PPV was 50% or less for 63% of all variables; in the validation sample, for 71% of all variables. We demonstrated that the application of logic regression in a complex data management task in large epidemiological IPD meta-analyses is feasible. However, the performance of the algorithm is poor, which may require backup strategies.

  5. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
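
    The article generates these values with Excel macros; the sketch below shows analogous schedule-value logic in Python rather than the article's exact macros (the recipes, e.g. uniform integers for the variable-ratio schedule, are one common choice and are assumptions): variable schedules vary around a programmed mean, while random schedules apply a constant probability per response (geometric ratios) or per unit time (exponential intervals).

```python
import numpy as np

rng = np.random.default_rng(8)

def variable_ratio(mean_ratio, n_values):
    # One simple recipe: integers drawn uniformly from 1..(2*mean_ratio - 1),
    # whose long-run average equals the programmed mean ratio.
    return rng.integers(1, 2 * mean_ratio, size=n_values)

def random_ratio(mean_ratio, n_values):
    # Constant probability 1/mean_ratio per response -> geometric number of responses
    return rng.geometric(1.0 / mean_ratio, size=n_values)

def random_interval(mean_interval, n_values):
    # Constant probability per unit time -> exponential waiting times
    return rng.exponential(mean_interval, size=n_values)

vr = variable_ratio(10, 1000)
rr = random_ratio(10, 1000)
ri = random_interval(30.0, 1000)
print("VR-10 mean:", vr.mean(), " RR-10 mean:", rr.mean(), " RI-30 mean:", ri.mean())
```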

  6. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  7. Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei

    2018-04-01

    We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.

  8. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286

  9. Effect of heterogeneous investments on the evolution of cooperation in spatial public goods game.

    PubMed

    Huang, Keke; Wang, Tao; Cheng, Yuan; Zheng, Xiaoping

    2015-01-01

    Understanding the emergence of cooperation in the spatial public goods game remains a grand challenge across disciplines. In most previous studies, it is assumed that the investments of all the cooperators are identical, and often equal to 1. However, players are diverse and heterogeneous when choosing actions in the rapidly developing modern society, and researchers have recently shown more interest in the heterogeneity of players. To model heterogeneous players without loss of generality, it is assumed in this work that the investment of a cooperator is a random variable with uniform distribution, the mean value of which is equal to 1. The results of extensive numerical simulations convincingly indicate that heterogeneous investments can promote cooperation. Specifically, a large value of the variance of the random variable can effectively decrease the two critical values that determine the outcome of behavioral evolution. Moreover, the larger the variance is, the better the promotion effect will be. In addition, this article discusses the impact of heterogeneous investments when the coevolution of both strategy and investment is taken into account. Comparing the promotion effect of coevolution of strategy and investment with that of strategy imitation only, we can conclude that the coevolution of strategy and investment decreases the asymptotic fraction of cooperators by weakening the heterogeneity of investments, which further demonstrates that heterogeneous investments can promote cooperation in the spatial public goods game.

  10. Geometric evolution of complex networks with degree correlations

    NASA Astrophysics Data System (ADS)

    Murphy, Charles; Allard, Antoine; Laurence, Edward; St-Onge, Guillaume; Dubé, Louis J.

    2018-03-01

    We present a general class of geometric network growth mechanisms by homogeneous attachment in which the links created at a given time t are distributed homogeneously between a new node and the existing nodes selected uniformly. This is achieved by creating links between nodes uniformly distributed in a homogeneous metric space according to a Fermi-Dirac connection probability with inverse temperature β and general time-dependent chemical potential μ(t). The chemical potential limits the spatial extent of newly created links. Using a hidden variable framework, we obtain an analytical expression for the degree sequence and show that μ(t) can be fixed to yield any given degree distribution, including a scale-free degree distribution. Additionally, we find that depending on the order in which nodes appear in the network (its history), the degree-degree correlations can be tuned to be assortative or disassortative. The effect of the geometry on the structure is investigated through the average clustering coefficient ⟨c⟩. In the thermodynamic limit, we identify a phase transition between a random regime where ⟨c⟩ → 0 when β < β_c and a geometric regime where ⟨c⟩ > 0 when β > β_c.

  11. Vertical uniformity of cells and nuclei in epithelial monolayers.

    PubMed

    Neelam, Srujana; Hayes, Peter Robert; Zhang, Qiao; Dickinson, Richard B; Lele, Tanmay P

    2016-01-22

    Morphological variability in cytoskeletal organization, organelle position and cell boundaries is a common feature of cultured cells. Remarkable uniformity and reproducibility in structure can be accomplished by providing cells with defined geometric cues. Cells in tissues can also self-organize in the absence of directing extracellular cues; however the mechanical principles for such self-organization are not understood. We report that unlike horizontal shapes, the vertical shapes of the cell and nucleus in the z-dimension are uniform in cells in cultured monolayers compared to isolated cells. Apical surfaces of cells and their nuclei in monolayers were flat and heights were uniform. In contrast, isolated cells, or cells with disrupted cell-cell adhesions had nuclei with curved apical surfaces and variable heights. Isolated cells cultured within micron-sized square wells displayed flat cell and nuclear shapes similar to cells in monolayers. Local disruption of nuclear-cytoskeletal linkages resulted in spatial variation in vertical uniformity. These results suggest that competition between cell-cell pulling forces that expand and shorten the vertical cell cross-section, thereby widening and flattening the nucleus, and the resistance of the nucleus to further flattening results in uniform cell and nuclear cross-sections. Our results reveal the mechanical principles of self-organized vertical uniformity in cell monolayers.

  12. Optimizing the LSST Dither Pattern for Survey Uniformity

    NASA Astrophysics Data System (ADS)

    Awan, Humna; Gawiser, Eric J.; Kurczynski, Peter; Carroll, Christopher M.; LSST Dark Energy Science Collaboration

    2015-01-01

    The Large Synoptic Survey Telescope (LSST) will gather detailed data of the southern sky, enabling unprecedented study of Baryonic Acoustic Oscillations, which are an important probe of dark energy. These studies require a survey with highly uniform depth, and we aim to find an observation strategy that optimizes this uniformity. We have shown that in the absence of dithering (large telescope-pointing offsets), the LSST survey will vary significantly in depth. Hence, we implemented various dithering strategies, including random and repulsive random pointing offsets and spiral patterns with the spiral reaching completion in either a few months or the entire ten-year run. We employed three different implementations of dithering strategies: a single offset assigned to all fields observed on each night, offsets assigned to each field independently whenever the field is observed, and offsets assigned to each field only when the field is observed on a new night. Our analysis reveals that large dithers are crucial to guarantee survey uniformity and that assigning dithers to each field independently whenever the field is observed significantly increases this uniformity. These results suggest paths towards an optimal observation strategy that will enable LSST to achieve its science goals. We gratefully acknowledge support from the National Science Foundation REU program at Rutgers, PHY-1263280, and the Department of Energy, DE-SC0011636.

  13. Dosage variability of topical ocular hypotensive products: a densitometric assessment.

    PubMed

    Gaynes, Bruce I; Singa, Ramesh M; Cao, Ying

    2009-02-01

    To ascertain consequence of variability in drop volume obtained from multiuse topical ocular hypotensive products in terms of uniformity of product dosage. Densitometric assessment of drop volume dispensed from 2 alternative bottle positions. All except one product demonstrated a statistically significant difference in drop volume when administered at either a 45-degree or 90-degree bottle angle (Student t test, P<0.001). Product-specific drop volume ranged from a nadir of 22.36 microL to a high of 53.54 microL depending on bottle angle of administration. Deviation in drop dose was directly proportional to variability in drop volume. Variability in per drop dosage was conspicuous among products with a coefficient of variation from 1.49% to 15.91%. In accordance with drop volume, all products demonstrated a statistically significant difference in drop dose at 45-degree versus 90-degree administration angles. Drop volume was found unrelated to drop uniformity (Spearman r=0.01987 and P=0.9463). Variability and lack of uniformity in drop dosage is clearly evident among select ocular hypotensive products and is related to angle of drop administration. Erratic dosage of topical ocular hypotensive therapy may contribute in part to therapeutic failure and/or toxicity.

  14. A new approach for the description of discharge extremes in small catchments

    NASA Astrophysics Data System (ADS)

    Pavia Santolamazza, Daniela; Lebrenz, Henning; Bárdossy, András

    2017-04-01

    Small catchment basins in Northwestern Switzerland, characterized by small concentration times, are frequently affected by floods. The peak and the volume of these floods are commonly estimated by a frequency analysis of occurrence and described by a random variable, assuming a uniformly distributed probability and stationary input drivers (e.g. precipitation, temperature). For these small catchments, we attempt to describe and identify the underlying mechanisms and dynamics at the occurrence of extremes by means of available high temporal resolution (10 min) observations, and to explore the possibility of regionalizing hydrological parameters for short intervals. Therefore, we investigate new concepts for the flood description, such as entropy as a measure of disorder and dispersion of precipitation. First findings and conclusions of this ongoing research are presented.

  15. Type-curve estimation of statistical heterogeneity

    NASA Astrophysics Data System (ADS)

    Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica

    2004-04-01

    The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.

  16. The Supermarket Model with Bounded Queue Lengths in Equilibrium

    NASA Astrophysics Data System (ADS)

    Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.

    2018-04-01

    In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^{−α} and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^{−α+(k−1)β}, and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
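
    For intuition, the classical d = 2 supermarket model (a fixed d rather than the paper's d(n) = ⌊n^β⌋ regime) can be simulated as a continuous-time Markov chain. The sketch below, with purely illustrative parameters, embeds the chain at its jump times: each event is an arrival with probability λn/(λn + busy), in which case the customer joins the shorter of d uniformly chosen queues, and otherwise a uniformly chosen busy server completes service.

```python
import numpy as np

rng = np.random.default_rng(9)

def supermarket(n=200, lam=0.95, d=2, n_events=200_000):
    """Jump-chain simulation of the supermarket model: n exponential(1) servers,
    Poisson arrivals at rate lam*n, each arrival joining the least loaded of d
    queues chosen uniformly at random."""
    q = np.zeros(n, dtype=int)                       # current queue lengths
    for _ in range(n_events):
        busy = np.count_nonzero(q)
        # next event is an arrival with probability lam*n / (lam*n + busy)
        if rng.random() < lam * n / (lam * n + busy):
            choices = rng.choice(n, size=d, replace=False)
            q[choices[np.argmin(q[choices])]] += 1   # join a least-loaded chosen queue
        else:
            q[rng.choice(np.flatnonzero(q))] -= 1    # a uniformly chosen busy server finishes
    return q

q = supermarket()                                    # snapshot after a long run (~equilibrium)
lengths, counts = np.unique(q, return_counts=True)
print({int(k): round(c / q.size, 3) for k, c in zip(lengths, counts)})
```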

  17. Measured acoustic properties of variable and low density bulk absorbers

    NASA Technical Reports Server (NTRS)

    Dahl, M. D.; Rice, E. J.

    1985-01-01

    Experimental data were taken to determine the acoustic absorbing properties of uniform low density and layered variable density samples using a bulk absorber with a perforated plate facing to hold the material in place. In the layered variable density case, the bulk absorber was packed such that the lowest density layer began at the surface of the sample and progressed to higher density layers deeper inside. The samples were placed in a rectangular duct and measurements were taken using the two microphone method. The data were used to calculate specific acoustic impedances and normal incidence absorption coefficients. Results showed that for uniform density samples the absorption coefficient at low frequencies decreased with increasing density and resonances occurred in the absorption coefficient curve at lower densities. These results were confirmed by a model for uniform density bulk absorbers. Results from layered variable density samples showed that low frequency absorption was the highest when the lowest density possible was packed in the first layer near the exposed surface. The layers of increasing density within the sample had the effect of damping the resonances.

  18. Variable area fuel cell process channels

    DOEpatents

    Kothmann, Richard E.

    1981-01-01

    A fuel cell arrangement having a non-uniform distribution of fuel and oxidant flow paths, on opposite sides of an electrolyte matrix, sized and positioned to provide approximately uniform fuel and oxidant utilization rates, and cell conditions, across the entire cell.

  19. 1RXS J180834.7+101041 is a new cataclysmic variable with non-uniform disc

    NASA Astrophysics Data System (ADS)

    Yakin, D. G.; Suleimanov, V. F.; Shimansky, V. V.; Borisov, N. V.; Bikmaev, I. F.; Sakhibullin, N. A.

    2010-11-01

    Results of photometric and spectroscopic investigations of the recently discovered disc cataclysmic variable star 1RXS J180834.7+101041 are presented. Emission spectra of the system show broad double-peaked hydrogen and helium emission lines. Doppler maps for the hydrogen lines demonstrate a strongly non-uniform emissivity distribution in the disc, similar to that found in IP Peg. This means that the system is a new cataclysmic variable with a spiral density wave in the disc. Masses of the components (MWD = 0.8 ± 0.22 Msolar and MRD = 0.14 ± 0.02 Msolar) and the orbital inclination (i = 78° ± 1.5°) were estimated using the various well-known relations for cataclysmic variables.

  20. Development of uniform and predictable battery materials for nickel-cadmium aerospace cells

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Battery materials and manufacturing methods were analyzed with the aim of developing uniform and predictable battery plates for nickel-cadmium aerospace cells. A study is presented for the high temperature electrochemical impregnation process for the preparation of nickel-cadmium battery plates. This comparative study is set up as a factorially designed experiment to examine both manufacturing and operational variables and any interaction that might exist between them. The manufacturing variables in the factorial design include plaque preparative method, plaque porosity and thickness, impregnation method, and loading. The operational variables are type of duty cycle, charge and discharge rate, extent of overcharge, and depth of discharge.

  1. The effect of uniform color on judging athletes' aggressiveness, fairness, and chance of winning.

    PubMed

    Krenn, Bjoern

    2015-04-01

    In the current study we questioned the impact of uniform color in boxing, taekwondo and wrestling. On 18 photos showing two athletes competing, the hue of each uniform was modified to blue, green or red. For each photo, six color conditions were generated (blue-red, blue-green, green-red and vice versa). In three experiments these 108 photos were randomly presented. Participants (N = 210) had to select the athlete that seemed to be more aggressive, fairer or more likely to win the fight. Results revealed that athletes wearing red in boxing and wrestling were judged more aggressive and more likely to win than athletes wearing blue or green uniforms. In addition, athletes wearing green were judged fairer in boxing and wrestling than athletes wearing red. In taekwondo we did not find any significant impact of uniform color. Results suggest that uniform color in combat sports carries specific meanings that affect others' judgments.

  2. Mining TCGA Data Using Boolean Implications

    PubMed Central

    Sinha, Subarna; Tsang, Emily K.; Zeng, Haoyang; Meister, Michela; Dill, David L.

    2014-01-01

    Boolean implications (if-then rules) provide a conceptually simple, uniform and highly scalable way to find associations between pairs of random variables. In this paper, we propose to use Boolean implications to find relationships between variables of different data types (mutation, copy number alteration, DNA methylation and gene expression) from the glioblastoma (GBM) and ovarian serous cystadenocarcinoma (OV) data sets from The Cancer Genome Atlas (TCGA). We find hundreds of thousands of Boolean implications from these data sets. A direct comparison of the relationships found by Boolean implications and those found by commonly used methods for mining associations shows that existing methods would miss relationships found by Boolean implications. Furthermore, many relationships exposed by Boolean implications reflect important aspects of cancer biology. Examples of our findings include cis relationships between copy number alteration, DNA methylation and expression of genes, a new hierarchy of mutations and recurrent copy number alterations, loss-of-heterozygosity of well-known tumor suppressors, and the hypermethylation phenotype associated with IDH1 mutations in GBM. The Boolean implication results used in the paper can be accessed at http://crookneck.stanford.edu/microarray/TCGANetworks/. PMID:25054200
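
    The pairwise idea behind Boolean implications can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' published pipeline or thresholds): it binarizes two variables, builds the 2x2 contingency table, and reports an implication such as "a=1 => b=1" when the contradicting cell is nearly empty and the overall association is significant.

    ```python
    import numpy as np
    from scipy.stats import fisher_exact

    def boolean_implications(a, b, max_frac=0.05, alpha=1e-6):
        """Report which of the four Boolean implications hold between binary vectors a, b.

        An implication such as 'a=1 => b=1' is flagged when the (a=1, b=0) cell of the
        2x2 contingency table is nearly empty and the association is significant
        (Fisher's exact test). Thresholds are illustrative, not the published criteria.
        """
        table = np.array([[np.sum((a == i) & (b == j)) for j in (0, 1)] for i in (0, 1)])
        _, p = fisher_exact(table)
        sparse_cell_to_name = {(1, 0): "a=1 => b=1", (1, 1): "a=1 => b=0",
                               (0, 0): "a=0 => b=1", (0, 1): "a=0 => b=0"}
        found = []
        for (i, j), name in sparse_cell_to_name.items():
            row_total = table[i].sum()
            if row_total > 0 and table[i, j] / row_total < max_frac and p < alpha:
                found.append(name)
        return found

    # Toy data: a copy-number-like score x and an expression-like score y that is high whenever x is high
    rng = np.random.default_rng(0)
    x = rng.random(2000)
    y = np.where(x > 0.7, 0.7 + 0.3 * rng.random(2000), rng.random(2000))
    print(boolean_implications((x > 0.7).astype(int), (y > 0.7).astype(int)))
    ```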

  3. Do simple screening statistical tools help to detect reporting bias?

    PubMed

    Pirracchio, Romain; Resche-Rigon, Matthieu; Chevret, Sylvie; Journois, Didier

    2013-09-02

    As a result of reporting bias, or frauds, false or misunderstood findings may represent the majority of published research claims. This article provides simple methods that might help to appraise the quality of the reporting of randomized, controlled trials (RCT). The evaluation roadmap proposed herein relies on four steps: evaluation of the distribution of the reported variables; evaluation of the distribution of the reported p values; data simulation using parametric bootstrap; and explicit computation of the p values. Such an approach was illustrated using published data from a retracted RCT comparing a hydroxyethyl starch versus albumin-based priming for cardiopulmonary bypass. Despite obvious nonnormal distributions, several variables are presented as if they were normally distributed. The set of 16 p values testing for differences in baseline characteristics across randomized groups did not follow a Uniform distribution on [0,1] (p = 0.045). The p values obtained by explicit computations were different from the results reported by the authors for the two following variables: urine output at 5 hours (calculated p value < 10⁻⁶, reported p ≥ 0.05); packed red blood cells (PRBC) during surgery (calculated p value = 0.08; reported p < 0.05). Finally, for urine output 5 hours after surgery, the parametric bootstrap yielded p values > 0.05 in only 5 of the 10,000 simulated datasets. Concerning PRBC transfused during surgery, the parametric bootstrap showed that the corresponding p value had less than a 50% chance of being below 0.05 (3,920/10,000 simulations with p value < 0.05). Such simple evaluation methods might offer some warning signals. However, it should be emphasized that such methods do not allow one to conclude that error or fraud is present, but should rather be used to justify requesting access to the raw data.
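
    Two of these screening steps are easy to reproduce in a generic form. The snippet below is an illustrative sketch with made-up numbers (not the retracted trial's data): it tests a set of reported baseline p values against a Uniform[0,1] distribution with a Kolmogorov-Smirnov test, and recomputes a between-group p value explicitly from reported summary statistics under an approximate-normality assumption.

    ```python
    import numpy as np
    from scipy import stats

    # Step 2: are the reported baseline p values compatible with Uniform[0,1]?
    reported_p = np.array([0.62, 0.94, 0.88, 0.71, 0.99, 0.95, 0.97, 0.90,
                           0.85, 0.93, 0.96, 0.89, 0.98, 0.91, 0.87, 0.92])  # made-up values
    ks_stat, ks_p = stats.kstest(reported_p, "uniform")
    print(f"KS test against Uniform[0,1]: p = {ks_p:.4f}")   # small p flags non-uniformity

    # Step 4: explicit recomputation of a p value from reported group summaries
    # (hypothetical means/SDs/sizes; Welch's t-test assuming approximate normality)
    m1, s1, n1 = 1.35, 0.40, 50    # group 1: mean, SD, n
    m2, s2, n2 = 1.10, 0.42, 50    # group 2
    t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
    print(f"recomputed two-sided p value = {p:.4f}")
    ```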

  4. Mean Field Analysis of Stochastic Neural Network Models with Synaptic Depression

    NASA Astrophysics Data System (ADS)

    Yasuhiko Igarashi; Masafumi Oizumi; Masato Okada

    2010-08-01

    We investigated the effects of synaptic depression on the macroscopic behavior of stochastic neural networks. Dynamical mean field equations were derived for such networks by taking the average of two stochastic variables: a firing-state variable and a synaptic variable. In these equations, the average product of these variables is decoupled as the product of their averages because the two stochastic variables are independent. We proved the independence of these two stochastic variables assuming that the synaptic weight Jij is of the order of 1/N with respect to the number of neurons N. Using these equations, we derived macroscopic steady-state equations for a network with uniform connections and for a ring attractor network with Mexican hat type connectivity and investigated the stability of the steady-state solutions. An oscillatory uniform state was observed in the network with uniform connections owing to a Hopf instability. For the ring network, high-frequency perturbations were shown not to affect system stability. Two mechanisms destabilize the inhomogeneous steady state, leading to two oscillatory states. A Turing instability leads to a rotating bump state, while a Hopf instability leads to an oscillatory bump state, which was previously unreported. Various oscillatory states take place in a network with synaptic depression depending on the strength of the interneuron connections.

  5. Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.

    2018-05-01

    Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
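
    The classical benchmark mentioned above (uniform nodes on a square, Poisson-distributed isolated-node count) is easy to probe numerically. The sketch below is an illustrative Monte Carlo, not the paper's self-similar constructions; the Poisson rate used for comparison is a bulk approximation that ignores boundary effects.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, r, trials = 500, 0.06, 200          # nodes, linking range, Monte Carlo trials
    iso_counts = []
    for _ in range(trials):
        pts = rng.random((n, 2))           # uniform node positions on the unit square
        d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
        np.fill_diagonal(d2, np.inf)
        degree = np.sum(d2 < r * r, axis=1)
        iso_counts.append(np.sum(degree == 0))

    iso_counts = np.array(iso_counts)
    # crude Poisson prediction ignoring boundary effects: a node is isolated
    # with probability roughly exp(-n * pi * r^2)
    lam = n * np.exp(-n * np.pi * r * r)
    print("mean isolated nodes (simulated):", iso_counts.mean())
    print("Poisson prediction (bulk approximation):", lam)
    print("variance/mean ratio (close to 1 for Poisson):", iso_counts.var() / iso_counts.mean())
    ```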

  6. Dose uniformity of scored and unscored tablets: Application of the FDA Tablet Scoring Guidance for Industry.

    PubMed

    Ciavarella, Anthony; Khan, Mansoor; Gupta, Abhay; Faustino, Patrick

    2016-06-20

    This FDA laboratory study examines the impact of tablet splitting, the effect of tablet splitters, and the presence of a tablet score on the dose uniformity of two model drugs. Whole tablets were purchased from five manufacturers for amlodipine and six for gabapentin. Two splitters were used for each drug product and the gabapentin tablets were also split by hand. Whole and split amlodipine tablets were tested for content uniformity following the general chapter of the United States Pharmacopeia (USP) Uniformity of Dosage Units <905>, which is a requirement of the new FDA Guidance for Industry on tablet scoring. The USP weight variation method was used for gabapentin split tablets based on the recommendation of the guidance. All whole tablets met the USP acceptance criteria for the Uniformity of Dosage Units. Variation in whole tablet content ranged from 0.5-2.1 standard deviation (SD) of the % label claim. Splitting the unscored amlodipine tablets resulted in a significant increase in dose variability of 6.5-25.4 SD when compared to whole tablets. Split tablets from all amlodipine drug products did not meet the USP acceptance criteria for content uniformity. Variation in the weight for gabapentin split tablets was greater than the whole tablets, ranging from 1.3-9.3 SD. All fully scored gabapentin products met the USP acceptance criteria for weight variation. Size, shape, and the presence or absence of a tablet score can affect the content uniformity and weight variation of amlodipine and gabapentin tablets. Tablet splitting produced higher variability. Differences in dose variability and fragmentation were observed between tablet splitters and hand splitting. These results are consistent with the FDA's concerns that tablet splitting "can affect how much drug is present in the split tablet and available for absorption" as stated in the guidance (1). Copyright © 2016, Parenteral Drug Association.

  7. Effect of Heterogeneous Investments on the Evolution of Cooperation in Spatial Public Goods Game

    PubMed Central

    Huang, Keke; Wang, Tao; Cheng, Yuan; Zheng, Xiaoping

    2015-01-01

    Understanding the emergence of cooperation in spatial public goods game remains a grand challenge across disciplines. In most previous studies, it is assumed that the investments of all the cooperators are identical, and often equal to 1. However, it is worth mentioning that players are diverse and heterogeneous when choosing actions in the rapidly developing modern society and researchers have shown more interest in the heterogeneity of players recently. For modeling the heterogeneous players without loss of generality, it is assumed in this work that the investment of a cooperator is a random variable with uniform distribution, the mean value of which is equal to 1. The results of extensive numerical simulations convincingly indicate that heterogeneous investments can promote cooperation. Specifically, a large value of the variance of the random variable can decrease the two critical values for the result of behavioral evolution effectively. Moreover, the larger the variance is, the better the promotion effect will be. In addition, this article has discussed the impact of heterogeneous investments when the coevolution of both strategy and investment is taken into account. Comparing the promotion effect of coevolution of strategy and investment with that of strategy imitation only, we can conclude that the coevolution of strategy and investment decreases the asymptotic fraction of cooperators by weakening the heterogeneity of investments, which further demonstrates that heterogeneous investments can promote cooperation in spatial public goods game. PMID:25781345
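
    To make the setting concrete, here is a minimal toy version of a spatial public goods game in which each cooperator's investment is drawn once from a uniform distribution with mean 1 and half-width sigma. It is an illustrative sketch only: the lattice size, payoff accounting, and Fermi imitation rule are standard textbook choices, not the authors' exact protocol or parameter values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    L, r, K, sigma, sweeps = 20, 3.5, 0.5, 0.8, 100   # lattice size, synergy factor, noise, investment spread
    coop = rng.integers(0, 2, size=(L, L))             # 1 = cooperator, 0 = defector
    invest = 1.0 + sigma * (2 * rng.random((L, L)) - 1.0)  # Uniform(1-sigma, 1+sigma), mean 1

    def neighbours(x, y):
        return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

    def payoff(x, y):
        """Total payoff of player (x, y) over the 5 overlapping groups it belongs to."""
        total = 0.0
        for gx, gy in [(x, y)] + neighbours(x, y):         # groups centred on self and on each neighbour
            members = [(gx, gy)] + neighbours(gx, gy)
            pool = sum(invest[i, j] for i, j in members if coop[i, j])
            total += r * pool / len(members)
            if coop[x, y]:
                total -= invest[x, y]                       # a cooperator pays its investment in each group
        return total

    for _ in range(sweeps * L * L):                         # random sequential strategy updates
        x, y = rng.integers(0, L, size=2)
        nx, ny = neighbours(x, y)[rng.integers(0, 4)]
        if coop[x, y] != coop[nx, ny]:
            p_adopt = 1.0 / (1.0 + np.exp((payoff(x, y) - payoff(nx, ny)) / K))  # Fermi rule
            if rng.random() < p_adopt:
                coop[x, y] = coop[nx, ny]

    print("final fraction of cooperators:", coop.mean())
    ```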

  8. Thermoelastic analysis of non-uniform pressurized functionally graded cylinder with variable thickness using first order shear deformation theory(FSDT) and perturbation method

    NASA Astrophysics Data System (ADS)

    Khoshgoftar, M. J.; Mirzaali, M. J.; Rahimi, G. H.

    2015-11-01

    Recently, applications of functionally graded materials (FGMs) have attracted a great deal of interest. These materials are composed of various constituents with different micro-structures which can vary spatially within the FGM. Such composites with varying thickness and non-uniform pressure can be used in aerospace engineering. Therefore, analysis of such composites is of high importance in engineering problems. Thermoelastic analysis of a functionally graded cylinder with variable thickness under non-uniform pressure is considered. First order shear deformation theory and the total potential energy approach are applied to obtain the governing equations of the non-homogeneous cylinder. Perturbation series are applied to solve the governing equations, with an outer solution valid away from the boundaries and an inner solution capturing the more rapidly varying behavior at the boundaries. Combining the inner and outer solutions for points near and far from the boundaries leads to a highly accurate displacement field distribution. The main aim of this paper is to show the capability of the matched asymptotic solution for different non-homogeneous cylinders with different shapes and different non-uniform pressures. The results can be used to design the optimum thickness of the cylinder and also some properties such as high temperature resistance by applying non-homogeneous material.

  9. Automated support tool for variable rate irrigation prescriptions

    USDA-ARS?s Scientific Manuscript database

    Variable rate irrigation (VRI) enables center pivot management to better meet non-uniform water and fertility needs. This is accomplished through correctly matching system water application with spatial and temporal variability within the field. A computer program was modified to accommodate GIS dat...

  10. Comparison of crop stress and soil maps to enhance variable rate irrigation prescriptions

    USDA-ARS?s Scientific Manuscript database

    Soil textural variability within many irrigated fields diminishes the effectiveness of conventional irrigation management, and scheduling methods that assume uniform soil conditions may produce less than satisfactory results. Furthermore, benefits of variable-rate application of agrochemicals, seeds...

  11. Zone edge effects with variable rate irrigation

    USDA-ARS?s Scientific Manuscript database

    Variable rate irrigation (VRI) systems may offer solutions to enhance water use efficiency by addressing variability within a field. However, the design of VRI systems should be considered to maximize application uniformity within sprinkler zones, while minimizing edge effects between such zones alo...

  12. Spatial analysis of factors influencing long-term stress in the grizzly bear (Ursus arctos) population of Alberta, Canada.

    PubMed

    Bourbonnais, Mathieu L; Nelson, Trisalyn A; Cattet, Marc R L; Darimont, Chris T; Stenhouse, Gordon B

    2013-01-01

    Non-invasive measures for assessing long-term stress in free ranging mammals are an increasingly important approach for understanding physiological responses to landscape conditions. Using a spatially and temporally expansive dataset of hair cortisol concentrations (HCC) generated from a threatened grizzly bear (Ursus arctos) population in Alberta, Canada, we quantified how variables representing habitat conditions and anthropogenic disturbance impact long-term stress in grizzly bears. We characterized spatial variability in male and female HCC point data using kernel density estimation and quantified variable influence on spatial patterns of male and female HCC stress surfaces using random forests. Separate models were developed for regions inside and outside of parks and protected areas to account for substantial differences in anthropogenic activity and disturbance within the study area. Variance explained in the random forest models ranged from 55.34% to 74.96% for males and 58.15% to 68.46% for females. Predicted HCC levels were higher for females compared to males. Generally, high spatially continuous female HCC levels were associated with parks and protected areas while low-to-moderate levels were associated with increased anthropogenic disturbance. In contrast, male HCC levels were low in parks and protected areas and low-to-moderate in areas with increased anthropogenic disturbance. Spatial variability in gender-specific HCC levels reveals that the type and intensity of external stressors are not uniform across the landscape and that male and female grizzly bears may be exposed to, or perceive, potential stressors differently. We suggest observed spatial patterns of long-term stress may be the result of the availability and distribution of foods related to disturbance features, potential sexual segregation in available habitat selection, and may not be influenced by sources of mortality which represent acute traumas. In this wildlife system and others, conservation and management efforts can benefit by understanding spatial- and gender-based stress responses to landscape conditions.

  14. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…

  15. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.

  16. Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space

    PubMed Central

    Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred

    2016-01-01

    Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112

  17. Antipersistent dynamics in kinetic models of wealth exchange

    NASA Astrophysics Data System (ADS)

    Goswami, Sanchari; Chatterjee, Arnab; Sen, Parongama

    2011-11-01

    We investigate the detailed dynamics of gains and losses made by agents in some kinetic models of wealth exchange. An earlier work suggested that a walk in an abstract gain-loss space can be conceived for the agents. For models in which agents do not save, or save with uniform saving propensity, the walk has diffusive behavior. For the case in which the saving propensity λ is distributed randomly (0≤λ<1), the resultant walk showed a ballistic nature (except at a particular value of λ*≈0.47). Here we consider several other features of the walk with random λ. While some macroscopic properties of this walk are comparable to a biased random walk, at microscopic level, there are gross differences. The difference turns out to be due to an antipersistent tendency toward making a gain (loss) immediately after making a loss (gain). This correlation is in fact present in kinetic models without saving or with uniform saving as well, such that the corresponding walks are not identical to ordinary random walks. In the distributed saving case, antipersistence occurs with a simultaneous overall bias.
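
    The gain-loss walk is straightforward to reproduce in a toy simulation. The sketch below uses standard kinetic-exchange update rules with quenched random saving propensities (an illustration, not the authors' code): it tags one agent, records whether each of its trades is a gain or a loss, and estimates the lag-1 correlation of these signs; a negative value reflects the antipersistent tendency described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, T = 200, 200_000
    money = np.ones(N)
    lam = rng.random(N)                  # quenched saving propensities, uniform on [0, 1)
    tagged, signs = 0, []                # follow agent 0 and record the sign of each change

    for _ in range(T):
        i, j = rng.integers(0, N, size=2)
        if i == j:
            continue
        eps = rng.random()
        pool = (1 - lam[i]) * money[i] + (1 - lam[j]) * money[j]
        new_i = lam[i] * money[i] + eps * pool
        new_j = lam[j] * money[j] + (1 - eps) * pool
        if tagged in (i, j):
            old = money[tagged]
            new = new_i if i == tagged else new_j
            if new != old:
                signs.append(1.0 if new > old else -1.0)
        money[i], money[j] = new_i, new_j

    s = np.array(signs)
    lag1 = np.corrcoef(s[:-1], s[1:])[0, 1]
    print(f"trades involving the tagged agent: {len(s)}")
    print(f"lag-1 correlation of gain/loss signs: {lag1:.3f}  (negative => antipersistent)")
    ```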

  18. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
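
    The fixed-step case in dimension p = 3 with exponential step lengths can be checked directly by Monte Carlo, which is also how the saddlepoint formulae are assessed in the paper. The following is an independent illustrative simulation (not the saddlepoint computation itself): it draws uniformly distributed step directions on the unit sphere and exponential step lengths, and estimates the distribution of the particle's distance from the origin.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_steps, n_walks, mean_len = 10, 50_000, 1.0

    # Uniform directions on the unit sphere (normalised Gaussian vectors) and exponential lengths
    dirs = rng.normal(size=(n_walks, n_steps, 3))
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    lengths = rng.exponential(mean_len, size=(n_walks, n_steps, 1))

    end = np.sum(dirs * lengths, axis=1)        # end position of each walk
    dist = np.linalg.norm(end, axis=1)          # distance from the origin

    print("mean distance:", dist.mean())
    print("P(distance > 5):", np.mean(dist > 5))
    print("50/90/99th percentiles:", np.percentile(dist, [50, 90, 99]))
    ```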

  19. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  20. Continuous-variable quantum key distribution in uniform fast-fading channels

    NASA Astrophysics Data System (ADS)

    Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano

    2018-03-01

    We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.

  1. Circular Data Images for Directional Data

    NASA Technical Reports Server (NTRS)

    Morpet, William J.

    2004-01-01

    Directional data includes vectors, points on a unit sphere, axis orientation, angular direction, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale for the measurement of linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described, which affords high resolution without color discontinuity from "cross-over". It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Some examples of the circular data image include direction of earth winds on a global scale, rocket motor internal flow, earth global magnetic field direction, and rocket motor nozzle vector direction vs. time.
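
    The cross-over problem and the circular-scale fix can be illustrated in a few lines, assuming matplotlib is available. In the sketch below, a linear colormap makes the same direction (±180 degrees) appear in two very different colors, while a cyclic colormap ("hsv") assigns the same color to -180 and +180 degrees, in the spirit of the circular data image.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # A synthetic direction field: angle of the position vector at each grid point
    y, x = np.mgrid[-1:1:200j, -1:1:200j]
    angle = np.degrees(np.arctan2(y, x))          # values in (-180, 180]

    fig, axes = plt.subplots(1, 2, figsize=(9, 4))
    for ax, cmap, title in [(axes[0], "viridis", "linear colour scale (cross-over at ±180°)"),
                            (axes[1], "hsv", "cyclic colour scale (no cross-over)")]:
        im = ax.imshow(angle, cmap=cmap, vmin=-180, vmax=180, origin="lower",
                       extent=[-1, 1, -1, 1])
        ax.set_title(title)
        fig.colorbar(im, ax=ax, label="direction (degrees)")
    plt.tight_layout()
    plt.show()
    ```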

  2. Comparing two-zone models of dust exposure.

    PubMed

    Jones, Rachael M; Simmons, Catherine E; Boelter, Fred W

    2011-09-01

    The selection and application of mathematical models to work tasks is challenging. Previously, we developed and evaluated a semi-empirical two-zone model that predicts time-weighted average (TWA) concentrations (Ctwa) of dust emitted during the sanding of drywall joint compound. Here, we fit the emission rate and random air speed variables of a mechanistic two-zone model to testing event data and apply and evaluate the model using data from two field studies. We found that the fitted random air speed values and emission rate were sensitive to (i) the size of the near-field and (ii) the objective function used for fitting, but this did not substantially impact predicted dust Ctwa. The mechanistic model predictions were lower than the semi-empirical model predictions and measured respirable dust Ctwa at Site A but were within an acceptable range. At Site B, a 10.5 m3 room, the mechanistic model did not capture the observed difference between PBZ and area Ctwa. The model predicted uniform mixing and predicted dust Ctwa up to an order of magnitude greater than was measured. We suggest that applications of the mechanistic model be limited to contexts where the near-field volume is very small relative to the far-field volume.
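
    For orientation, the snippet below integrates a generic near-field/far-field (two-zone) mass-balance model of the kind discussed; the equation form is the commonly used textbook one, and the emission rate, interzonal airflow, and volumes are illustrative numbers, not the values fitted in this study.

    ```python
    import numpy as np

    # Illustrative parameters (not the study's fitted values)
    G = 5.0          # emission rate into the near field, mg/min
    beta = 5.0       # interzonal airflow between near and far field, m^3/min
    Q = 20.0         # general ventilation flow through the far field, m^3/min
    V_nf, V_ff = 2.0, 100.0    # near-field and far-field volumes, m^3
    dt, t_end = 0.01, 60.0     # time step and task duration, minutes

    times = np.arange(0.0, t_end + dt, dt)
    C_nf = np.zeros_like(times)    # near-field concentration, mg/m^3
    C_ff = np.zeros_like(times)    # far-field concentration, mg/m^3

    for k in range(1, len(times)):
        dC_nf = (G + beta * (C_ff[k-1] - C_nf[k-1])) / V_nf
        dC_ff = (beta * (C_nf[k-1] - C_ff[k-1]) - Q * C_ff[k-1]) / V_ff
        C_nf[k] = C_nf[k-1] + dt * dC_nf       # simple forward-Euler integration
        C_ff[k] = C_ff[k-1] + dt * dC_ff

    print("TWA near-field concentration (mg/m^3):", C_nf.mean())
    print("TWA far-field  concentration (mg/m^3):", C_ff.mean())
    ```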

  3. Statistical analysis of tire treadwear data

    DOT National Transportation Integrated Search

    1985-03-01

    This report describes the results of a statistical analysis of the treadwear variability of radial tires subjected to the Uniform Tire Quality Grading (UTQG) standard. Because unexplained variability in the treadwear portion of the standard cou...

  4. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
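
    For two binary content-sharing variables, the maximal coupling mentioned above has a simple closed form. The sketch below computes it; this is only the building block, not the full Contextuality-by-Default test of whether those couplings are compatible with the observed context-sharing joint distributions.

    ```python
    import numpy as np

    def maximal_coupling(p, q):
        """Joint pmf of two Bernoulli variables with marginals p and q that maximises P(X = Y).

        For binary variables the maximal probability of coincidence is 1 - |p - q|.
        Returns a 2x2 array indexed as joint[x, y].
        """
        joint = np.zeros((2, 2))
        joint[1, 1] = min(p, q)
        joint[0, 0] = min(1 - p, 1 - q)
        joint[1, 0] = p - joint[1, 1]
        joint[0, 1] = q - joint[1, 1]
        return joint

    # Two content-sharing binary measurements recorded in different contexts
    p, q = 0.7, 0.4
    joint = maximal_coupling(p, q)
    print(joint)
    print("max P(X = Y):", joint[0, 0] + joint[1, 1], "= 1 - |p - q| =", 1 - abs(p - q))
    ```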

  5. Effects of spatial constraints on channel network topology: Implications for geomorphological inference

    NASA Astrophysics Data System (ADS)

    Cabral, Mariza Castanheira De Moura Da Costa

    In the fifty-two years since Robert Horton's 1945 pioneering quantitative description of channel network planform (or plan view morphology), no conclusive findings have been presented that permit inference of geomorphological processes from any measures of network planform. All measures of network planform studied exhibit limited geographic variability across different environments. Horton (1945), Langbein et al. (1947), Schumm (1956), Hack (1957), Melton (1958), and Gray (1961) established various "laws" of network planform, that is, statistical relationships between different variables which have limited variability. A wide variety of models which have been proposed to simulate the growth of channel networks in time over a land surface are generally also in agreement with the above planform laws. An explanation is proposed for the generality of the channel network planform laws. Channel networks must be space filling, that is, they must extend over the landscape to drain every hillslope, leaving no large undrained areas, and with no crossing of channels, often achieving a roughly uniform drainage density in a given environment. It is shown that the space-filling constraint can reduce the sensitivity of planform variables to different network growth models, and it is proposed that this constraint may determine the planform laws. The "Q model" of network growth of Van Pelt and Verwer (1985) is used to generate samples of networks. Sensitivity to the model parameter Q is markedly reduced when the networks generated are required to be space filling. For a wide variety of Q values, the space-filling networks are in approximate agreement with the various channel network planform laws. Additional constraints, including energy efficiency, were not studied but may further reduce the variability of planform laws. Inference of model parameter Q from network topology is successful only in networks not subject to spatial constraints. In space-filling networks, for a wide range of Q values, the maximum-likelihood Q parameter value is generally in the vicinity of 1/2, which yields topological randomness. It is proposed that space filling gives rise to the appearance of randomness in channel network topology and may hinder geomorphological inference from network planform.

  6. Accelerated 1H MRSI using randomly undersampled spiral-based k-space trajectories.

    PubMed

    Chatnuntawech, Itthi; Gagoski, Borjan; Bilgic, Berkin; Cauley, Stephen F; Setsompop, Kawin; Adalsteinsson, Elfar

    2014-07-30

    To develop and evaluate the performance of an acquisition and reconstruction method for accelerated MR spectroscopic imaging (MRSI) through undersampling of spiral trajectories. A randomly undersampled spiral acquisition and sensitivity encoding (SENSE) with total variation (TV) regularization, random SENSE+TV, is developed and evaluated on single-slice numerical phantom, in vivo single-slice MRSI, and in vivo three-dimensional (3D)-MRSI at 3 Tesla. Random SENSE+TV was compared with five alternative methods for accelerated MRSI. For the in vivo single-slice MRSI, random SENSE+TV yields up to 2.7 and 2 times reduction in root-mean-square error (RMSE) of reconstructed N-acetyl aspartate (NAA), creatine, and choline maps, compared with the denoised fully sampled and uniformly undersampled SENSE+TV methods with the same acquisition time, respectively. For the in vivo 3D-MRSI, random SENSE+TV yields up to 1.6 times reduction in RMSE, compared with uniform SENSE+TV. Furthermore, by using random SENSE+TV, we have demonstrated on the in vivo single-slice and 3D-MRSI that acceleration factors of 4.5 and 4 are achievable with the same quality as the fully sampled data, as measured by RMSE of reconstructed NAA map, respectively. With the same scan time, random SENSE+TV yields lower RMSEs of metabolite maps than other methods evaluated. Random SENSE+TV achieves up to 4.5-fold acceleration with comparable data quality as the fully sampled acquisition. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.

  7. Synthesis of stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Thornton, W. A.

    1974-01-01

    Computer programs for the synthesis of shells of various configurations were developed. The conditions considered are: (1) uniform shells (mainly cones) using a membrane buckling analysis, (2) completely uniform shells (cones, spheres, toroidal segments) using linear bending prebuckling analysis, and (3) revision of the second design process to reduce the number of design variables to about 30 by considering piecewise uniform designs. A perturbation formula was derived and this allows exact derivatives of the general buckling load to be computed with little additional computer time.

  8. TOC: Table of Contents Practices of Primary Journals--Recommendations for Monolingual, Multilingual and International Journals.

    ERIC Educational Resources Information Center

    Juhasz, Stephen; And Others

    Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected. The method of randomization is described. The samples were selected from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed; its purpose was to find uniformity and…

  9. Averaging in SU(2) open quantum random walk

    NASA Astrophysics Data System (ADS)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  10. English Non-Uniformity: A Non-Adult Form of Ethnic English.

    ERIC Educational Resources Information Center

    Stout, Steven Owen

    The paper examines interpretive aspects of English non-uniformity among fifth and sixth grade Native Americans at Laguna Elementary School, Laguna, New Mexico. Speaker assessments of instances of uninflected "be" are ordered to form an implicational scale. The variability in the students' assessment pattern is compared to previous inter-ethnic…

  11. Model of a thin film optical fiber fluorosensor

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1991-01-01

    The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques by Marcuse (1988) in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions are developed based on an infinite cladding approximation which yield the power efficiency for a fiber coated with fluorescent sources in the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant for variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.

  12. Anomalies in the 1D Anderson model: Beyond the band-centre and band-edge cases

    NASA Astrophysics Data System (ADS)

    Tessieri, L.; Izrailev, F. M.

    2018-03-01

    We consider the one-dimensional Anderson model with weak disorder. Using the Hamiltonian map approach, we analyse the validity of the random-phase approximation for resonant values of the energy, E = 2 cos(πr) , with r a rational number. We expand the invariant measure of the phase variable in powers of the disorder strength and we show that, contrary to what happens at the centre and at the edges of the band, for all other resonant energies the leading term of the invariant measure is uniform. When higher-order terms are taken into account, a modulation of the invariant measure appears for all resonant values of the energy. This implies that, when the localisation length is computed within the second-order approximation in the disorder strength, the Thouless formula is valid everywhere except at the band centre and at the band edges.

  13. Spatial and temporal variability of microgeographic genetic structure in white-tailed deer

    USGS Publications Warehouse

    Scribner, Kim T.; Smith, Michael H.; Chesser, Ronald K.

    1997-01-01

    Techniques are described that define contiguous genetic subpopulations of white-tailed deer (Odocoileus virginianus) based on the spatial dispersion of 4,749 individuals that possessed discrete character values (alleles or genotypes) during each of 6 years (1974-1979). White-tailed deer were not uniformly distributed in space, but exhibited considerable spatial genetic structuring. Significant non-random clusters of individuals were documented during each year based on specific alleles and genotypes at the Sdh locus. Considerable temporal variation was observed in the position and genetic composition of specific clusters, which reflected changes in allele frequency in small geographic areas. The position of clusters did not consistently correspond with traditional management boundaries based on major discontinuities in habitat (swamp versus upland) and hunt compartments that were defined by roads and streams. Spatio-temporal stability of observed genetic contiguous clusters was interpreted relative to method and intensity of harvest, movements, and breeding ecology.

  14. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal and the Monte Carlo method is used to solve the stochastic equations allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function. In general, the likelihood is predefined using a known pdf; here, it is constructed in a different manner, using the modeled system itself.
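
    The updating strategy can be illustrated on a toy surrogate. In the sketch below everything is hypothetical (a made-up scalar map from the tension parameter to fundamental frequency and made-up observations, standing in for the mechanical voice model): a uniform prior on the tension parameter is combined with a likelihood estimated from Monte Carlo runs of the surrogate, giving a grid-based posterior in the spirit of the Bayes update described.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def simulate_f0(tension, n, noise_sd=4.0):
        """Hypothetical surrogate for the voice model: fundamental frequency (Hz) as a
        function of a dimensionless tension parameter, with random variation standing
        in for the other uncertain parameters."""
        return 100.0 + 60.0 * tension + noise_sd * rng.normal(size=n)

    observed_f0 = np.array([131.0, 135.5, 128.9, 133.7, 130.2])   # made-up "experimental" data

    grid = np.linspace(0.0, 1.0, 201)          # uniform prior on the tension parameter
    prior = np.ones_like(grid) / len(grid)

    # Likelihood by Monte Carlo: approximate the simulated f0 distribution at each grid
    # value by a Gaussian and evaluate it at the observed fundamental frequencies
    likelihood = np.empty_like(grid)
    for k, t in enumerate(grid):
        sims = simulate_f0(t, 2000)
        mu, sd = sims.mean(), sims.std()
        likelihood[k] = np.prod(np.exp(-0.5 * ((observed_f0 - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi)))

    posterior = prior * likelihood
    posterior /= posterior.sum()
    print("posterior mean tension:", np.sum(grid * posterior))
    print("MAP tension:", grid[np.argmax(posterior)])
    ```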

  15. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
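
    The straight-line relationship for unconstrained continuous random variables can be checked numerically. The sketch below evaluates, by quadrature, the differential entropy and the L sub p norm of members of the exponential-power (generalized Gaussian) family, which maximizes differential entropy for a fixed pth absolute moment; across the scale family the difference between the entropy and the natural logarithm of the L sub p norm stays constant, i.e., the maximum entropy is a straight line in the logarithm of the norm.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def entropy_and_lp_norm(p, s):
        """Differential entropy and L_p norm of the exponential-power density
        f(x) proportional to exp(-|x/s|**p), evaluated by numerical quadrature."""
        norm_const, _ = quad(lambda x: np.exp(-abs(x / s) ** p), -np.inf, np.inf)
        f = lambda x: np.exp(-abs(x / s) ** p) / norm_const

        def neg_f_log_f(x):
            fx = f(x)
            return 0.0 if fx <= 0.0 else -fx * np.log(fx)

        h, _ = quad(neg_f_log_f, -np.inf, np.inf)
        pth_moment, _ = quad(lambda x: f(x) * abs(x) ** p, -np.inf, np.inf)
        return h, pth_moment ** (1.0 / p)

    p = 3
    for s in (0.5, 1.0, 2.0, 4.0, 8.0):
        h, lp = entropy_and_lp_norm(p, s)
        # For this entropy-maximizing family, h - ln(L_p norm) is constant in s
        print(f"s = {s:4.1f}   entropy = {h:7.4f}   ln(Lp norm) = {np.log(lp):7.4f}   "
              f"difference = {h - np.log(lp):.4f}")
    ```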

  16. Spatial and temporal variability of interhemispheric transport times

    NASA Astrophysics Data System (ADS)

    Wu, Xiaokang; Yang, Huang; Waugh, Darryn W.; Orbe, Clara; Tilmes, Simone; Lamarque, Jean-Francois

    2018-05-01

    The seasonal and interannual variability of transport times from the northern midlatitude surface into the Southern Hemisphere is examined using simulations of three idealized age tracers: an ideal age tracer that yields the mean transit time from northern midlatitudes and two tracers with uniform 50- and 5-day decay. For all tracers the largest seasonal and interannual variability occurs near the surface within the tropics and is generally closely coupled to movement of the Intertropical Convergence Zone (ITCZ). There are, however, notable differences in variability between the different tracers. The largest seasonal and interannual variability in the mean age is generally confined to latitudes spanning the ITCZ, with very weak variability in the southern extratropics. In contrast, for tracers subject to spatially uniform exponential loss the peak variability tends to be south of the ITCZ, and there is a smaller contrast between tropical and extratropical variability. These differences in variability occur because the distribution of transit times from northern midlatitudes is very broad and tracers with more rapid loss are more sensitive to changes in fast transit times than the mean age tracer. These simulations suggest that the seasonal-interannual variability in the southern extratropics of trace gases with predominantly NH midlatitude sources may differ depending on the gases' chemical lifetimes.

  17. Adapting radiotherapy to hypoxic tumours

    NASA Astrophysics Data System (ADS)

    Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag

    2006-10-01

    In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.

  18. Variable gamma-ray sky at 1 GeV

    NASA Astrophysics Data System (ADS)

    Pshirkov, M. S.; Rubtsov, G. I.

    2013-01-01

    We search for the long-term variability of the gamma-ray sky in the energy range E > 1 GeV with 168 weeks of the gamma-ray telescope Fermi-LAT data. We perform a full sky blind search for regions with variable flux looking for deviations from uniformity. We bin the sky into 12288 pixels using the HEALPix package and use the Kolmogorov-Smirnov test to compare weekly photon counts in each pixel with the constant flux hypothesis. The weekly exposure of Fermi-LAT for each pixel is calculated with the Fermi-LAT tools. We consider flux variations in a pixel significant if the statistical probability of uniformity is less than 4 × 10⁻⁶, which corresponds to 0.05 false detections in the whole set. We identified 117 variable sources, 27 of which have not been reported variable before. The sources with previously unidentified variability contain 25 active galactic nuclei (AGN) belonging to the blazar class (11 BL Lacs and 14 FSRQs), one AGN of an uncertain type, and one pulsar PSR J0633+1746 (Geminga).

  19. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X(sub 1)), E(Z(sub 1)), and E(Y(sub 1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimums.
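
    The comparison is easy to reproduce by simulation. The sketch below (an illustration, not the paper's derivations) draws minimums of n independent geometric and exponential random variables with matching expectations and estimates E(X(sub 1)), E(Y(sub 1)), their ratio, and the expected number of ties at the geometric minimum.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, q, reps = 5, 0.2, 200_000          # variables per replication, geometric parameter, replications
    mean = 1.0 / q                        # matching expectation for the exponential variables

    X = rng.geometric(q, size=(reps, n))          # geometric on {1, 2, ...}, mean 1/q
    Y = rng.exponential(mean, size=(reps, n))     # exponential with the same mean

    x_min = X.min(axis=1)
    y_min = Y.min(axis=1)
    ties = (X == x_min[:, None]).sum(axis=1)      # number of variables attaining the minimum

    print("E[min of geometrics]   ~", x_min.mean())
    print("E[min of exponentials] ~", y_min.mean())
    print("ratio E(X(sub 1))/E(Y(sub 1)) ~", x_min.mean() / y_min.mean())
    print("expected number of ties at the geometric minimum ~", ties.mean())
    ```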

  20. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  1. Kinetic market models with single commodity having price fluctuations

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Chakrabarti, B. K.

    2006-12-01

    We numerically study the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as the money distribution for a market with agents having random saving propensities.

  2. Tomographical imaging using uniformly redundant arrays

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1979-01-01

    An investigation is conducted of the behavior of two types of uniformly redundant array (URA) when used for close-up imaging. One URA pattern is a quadratic residue array whose characteristics for imaging planar sources have been simulated by Fenimore and Cannon (1978), while the second is based on m sequences that have been simulated by Gunson and Polychronopulos (1976) and by MacWilliams and Sloan (1976). Close-up imaging is necessary in order to obtain depth information for tomographical purposes. The properties of the two URA patterns are compared with a random array of equal open area. The goal considered in the investigation is to determine if a URA pattern exists which has the desirable defocus properties of the random array while maintaining artifact-free image properties for in-focus objects.
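
    The defining property of such arrays, a periodic correlation between the aperture and its decoding array that approaches a delta function, can be illustrated in one dimension with a quadratic-residue (Legendre) sequence. The sketch below is a 1D illustration of the principle only, not the specific 2D URA and m-sequence patterns compared in the paper; with a prime length p ≡ 3 (mod 4) and this sign convention, the off-peak correlation vanishes exactly.

    ```python
    import numpy as np

    def legendre_aperture(p):
        """1D quadratic-residue aperture of prime length p: a[k] = 1 if k is a
        non-zero quadratic residue mod p, else 0, with a[0] = 1 by convention."""
        residues = {(k * k) % p for k in range(1, p)}
        return np.array([1] + [1 if k in residues else 0 for k in range(1, p)])

    p = 31                               # prime aperture length, p = 3 (mod 4)
    a = legendre_aperture(p)
    g = 2 * a - 1                        # balanced (+1 / -1) decoding array

    # Periodic cross-correlation of the aperture with the decoding array
    corr = np.array([np.sum(a * np.roll(g, -s)) for s in range(p)])
    print("open fraction of the aperture:", a.mean())
    print("correlation at zero shift:", corr[0])
    print("correlation values at non-zero shifts:", np.unique(corr[1:]))
    ```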

  3. Origin of the OFF state variability in ReRAM cells

    NASA Astrophysics Data System (ADS)

    Salaoru, Iulia; Khiat, Ali; Li, Qingjiang; Berdan, Radu; Papavassiliou, Christos; Prodromakis, Themistoklis

    2014-04-01

    This work exploits the switching dynamics of nanoscale resistive random access memory (ReRAM) cells with particular emphasis on the origin of the observed variability when cells are consecutively cycled/programmed at distinct memory states. It is demonstrated that this variance is a common feature of all ReRAM elements and is ascribed to the formation and rupture of conductive filaments that expand across the active core, independently of the material employed as the active switching core, the causal physical switching mechanism, the switching mode (bipolar/unipolar) or even the unit cells' dimensions. Our hypothesis is supported through both experimental and theoretical studies on TiO2 and In2O3 : SnO2 (ITO) based ReRAM cells programmed at three distinct resistive states. Our prototypes employed TiO2 or ITO active cores over 5 × 5 µm2 and 100 × 100 µm2 cell areas, with all tested devices demonstrating both unipolar and bipolar switching modalities. In the case of TiO2-based cells, the underlying switching mechanism is based on the non-uniform displacement of ionic species that foster the formation of conductive filaments. On the other hand, the resistive switching observed in the ITO-based devices is considered to be due to a phase change mechanism. The selected experimental parameters allowed us to demonstrate that the observed programming variance is a common feature of all ReRAM devices, proving that its origin is dependent upon randomly oriented local disorders within the active core that have a substantial impact on the overall state variance, particularly for high-resistive states.

  4. Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers

    NASA Astrophysics Data System (ADS)

    Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan

    2018-04-01

    The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi random (NU-QR) reordering of the phase encoding (ky–kz) lines was incorporated into 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of the k-space. Respiratory self-gating in combination with SPIRiT-reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan times (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile of a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times. Therefore, within the same scan time 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion artifact reduced 4D data sets with isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm3 in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed. NU-QR sampling enables stable 4D-MRI with high temporal and spatial resolution within a short scan time for visualization of organ or tumor motion during free breathing. Further studies, e.g. the application of the method for radiotherapy planning, are needed to investigate the clinical applicability and diagnostic value of the approach.

  5. Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling

    NASA Astrophysics Data System (ADS)

    Bierhorst, Peter; Shalm, Lynden K.; Mink, Alan; Jordan, Stephen; Liu, Yi-Kai; Rommal, Andrea; Glancy, Scott; Christensen, Bradley; Nam, Sae Woo; Knill, Emanuel

    Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment to obtain data containing randomness that cannot be predicted by any theory that does not also allow the sending of signals faster than the speed of light. To certify and quantify the randomness, we develop a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtain 256 new random bits, uniform to within 10⁻³.

  6. Materials characterization of propellants using ultrasonics

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Jones, David

    1993-01-01

    Propellant characteristics for solid rocket motors were not completely determined for their use as processing variables in today's production facilities. A major effort to determine propellant characteristics obtainable through ultrasonic measurement techniques was performed in this task. The information obtained was then used to determine the uniformity of manufacturing methods and/or the ability to detect non-uniformity in processes.

  7. A simple algorithm to improve the performance of the WENO scheme on non-uniform grids

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong

    2018-02-01

    This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS (WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1995) scheme designed on uniform grids in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. In the meantime, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for the non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.

  8. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
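
    A minimal sketch of such a test, assuming the criterion statistic is the largest eigenvalue of the correlation matrix and that k-1 of the k variables are independently re-ordered in each permutation; the toy data and permutation count are illustrative.

```python
import numpy as np

def largest_eig_of_corr(X):
    """Largest eigenvalue of the correlation matrix of the columns of X."""
    return np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)).max()

def randomization_test(X, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = largest_eig_of_corr(X)
    count = 0
    for _ in range(n_perm):
        Xp = X.copy()
        # Randomly re-order k-1 of the k variables; the first column is left fixed.
        for col in range(1, X.shape[1]):
            Xp[:, col] = rng.permutation(Xp[:, col])
        count += largest_eig_of_corr(Xp) >= observed
    return observed, (count + 1) / (n_perm + 1)   # permutation p-value

# Toy data: three correlated "cognitive scores" for 40 subjects (illustrative).
rng = np.random.default_rng(42)
base = rng.normal(size=(40, 1))
X = np.hstack([base + 0.5 * rng.normal(size=(40, 1)) for _ in range(3)])
stat, p = randomization_test(X)
print(f"largest eigenvalue = {stat:.3f}, p = {p:.4f}")
```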

  9. Genetic parameters for uniformity of harvest weight and body size traits in the GIFT strain of Nile tilapia.

    PubMed

    Marjanovic, Jovana; Mulder, Han A; Khaw, Hooi L; Bijma, Piter

    2016-06-10

    Animal breeding programs have been very successful in improving the mean levels of traits through selection. However, in recent decades, reducing the variability of trait levels between individuals has become a highly desirable objective. Reaching this objective through genetic selection requires that there is genetic variation in the variability of trait levels, a phenomenon known as genetic heterogeneity of environmental (residual) variance. The aim of our study was to investigate the potential for genetic improvement of uniformity of harvest weight and body size traits (length, depth, and width) in the genetically improved farmed tilapia (GIFT) strain. In order to quantify the genetic variation in uniformity of traits and estimate the genetic correlations between level and variance of the traits, double hierarchical generalized linear models were applied to individual trait values. Our results showed substantial genetic variation in uniformity of all analyzed traits, with genetic coefficients of variation for residual variance ranging from 39 to 58 %. Genetic correlation between trait level and variance was strongly positive for harvest weight (0.60 ± 0.09), moderate and positive for body depth (0.37 ± 0.13), but not significantly different from 0 for body length and width. Our results on the genetic variation in uniformity of harvest weight and body size traits show good prospects for the genetic improvement of uniformity in the GIFT strain. A high and positive genetic correlation was estimated between level and variance of harvest weight, which suggests that selection for heavier fish will also result in more variation in harvest weight. Simultaneous improvement of harvest weight and its uniformity will thus require index selection.

  10. Field comparison of analytical results from discrete-depth ground water samplers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Delfino, T.A.; Gallinatti, J.D.

    1995-07-01

    Discrete-depth ground water samplers are used during environmental screening investigations to collect ground water samples in lieu of installing and sampling monitoring wells. Two of the most commonly used samplers are the BAT Enviroprobe and the QED HydroPunch I, which rely on differing sample collection mechanics. Although these devices have been on the market for several years, it was unknown what, if any, effect the differences would have on analytical results for ground water samples containing low to moderate concentrations of chlorinated volatile organic compounds (VOCs). This study investigated whether the discrete-depth ground water sampler used introduces statistically significant differences in analytical results. The goal was to provide a technical basis for allowing the two devices to be used interchangeably during screening investigations. Because this study was based on field samples, it included several sources of potential variability. It was necessary to separate differences due to sampler type from variability due to sampling location, sample handling, and laboratory analytical error. To statistically evaluate these sources of variability, the experiment was arranged in a nested design. Sixteen ground water samples were collected from eight random locations within a 15-foot by 15-foot grid. The grid was located in an area where shallow ground water was believed to be uniformly affected by VOCs. The data were evaluated using analysis of variance.

  11. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of ATLAS detector and LHC accelerator. Targeting the feature of the output signal of TGC detector, we have designed a simulation signal source. The core of the design is based on field programmable gate array, randomly outputting 256-channel simulation signals. The signal is generated by true random number generator. The source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random number is uniform in histogram, and the whole system has high reliability.

  12. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Kun; Wang, Xu; Li, Feng

    The Thin Gap Chamber (TGC) is an important part of ATLAS detector and LHC accelerator. Targeting the feature of the output signal of TGC detector, we have designed a simulation signal source. The core of the design is based on field programmable gate array, randomly outputting 256-channel simulation signals. The signal is generated by true random number generator. The source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random number is uniform in histogram, and the whole system has high reliability.

  13. Development and first use of a novel cylindrical ball bearing phantom for 9-DOF geometric calibrations of flat panel imaging devices used in image-guided ion beam therapy

    NASA Astrophysics Data System (ADS)

    Zechner, A.; Stock, M.; Kellner, D.; Ziegler, I.; Keuschnigg, P.; Huber, P.; Mayer, U.; Sedlmayer, F.; Deutschmann, H.; Steininger, P.

    2016-11-01

    Image guidance during highly conformal radiotherapy requires accurate geometric calibration of the moving components of the imager. Due to limited manufacturing accuracy and gravity-induced flex, an x-ray imager’s deviation from the nominal geometrical definition has to be corrected for. For this purpose a ball bearing phantom applicable for nine degrees of freedom (9-DOF) calibration of a novel cone-beam computed tomography (CBCT) scanner was designed and validated. In order to ensure accurate automated marker detection, as many uniformly distributed markers as possible should be used with a minimum projected inter-marker distance of 10 mm. Three different marker distributions on the phantom cylinder surface were simulated. First, a fixed number of markers are selected and their coordinates are randomly generated. Second, the quasi-random method is represented by setting a constraint on the marker distances in the projections. The third approach generates the ball coordinates helically based on the Golden ratio, ϕ. Projection images of the phantom incorporating the CBCT scanner’s geometry were simulated and analysed with respect to uniform distribution and intra-marker distance. Based on the evaluations a phantom prototype was manufactured and validated by a series of flexmap calibration measurements and analyses. The simulation with randomly distributed markers as well as the quasi-random approach showed an insufficient uniformity of the distribution over the detector area. The best compromise between uniform distribution and a high packing fraction of balls is provided by the Golden section approach. A prototype was manufactured accordingly. The phantom was validated for 9-DOF geometric calibrations of the CBCT scanner with independently moveable source and detector arms. A novel flexmap calibration phantom intended for 9-DOF was developed. The ball bearing distribution based on the Golden section was found to be highly advantageous. The phantom showed satisfying results for calibrations of the CBCT scanner and provides the basis for further flexmap correction and reconstruction developments.
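
    As a rough illustration of the Golden-section idea, markers can be placed helically on a cylinder with uniform axial spacing and golden-angle azimuthal increments; the radius, length, marker count, and parameterization below are assumptions of this sketch, not the phantom's actual specification.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2                   # golden ratio
GOLDEN_ANGLE = 2 * np.pi * (1 - 1 / PHI)     # about 137.5 degrees

def golden_helix_markers(n_markers=60, radius=40.0, length=200.0):
    """Place markers helically on a cylinder using golden-angle azimuthal steps.

    Consecutive markers advance uniformly along the cylinder axis and by the
    golden angle in azimuth, which spreads them quasi-uniformly and avoids the
    clustering typical of purely random placement.
    """
    k = np.arange(n_markers)
    z = (k + 0.5) / n_markers * length       # uniform axial spacing
    theta = k * GOLDEN_ANGLE                 # golden-angle azimuthal steps
    x = radius * np.cos(theta)
    y = radius * np.sin(theta)
    return np.column_stack([x, y, z])

markers = golden_helix_markers()
print(markers.shape, markers[:3].round(2))
```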

  14. Development and first use of a novel cylindrical ball bearing phantom for 9-DOF geometric calibrations of flat panel imaging devices used in image-guided ion beam therapy.

    PubMed

    Zechner, A; Stock, M; Kellner, D; Ziegler, I; Keuschnigg, P; Huber, P; Mayer, U; Sedlmayer, F; Deutschmann, H; Steininger, P

    2016-11-21

    Image guidance during highly conformal radiotherapy requires accurate geometric calibration of the moving components of the imager. Due to limited manufacturing accuracy and gravity-induced flex, an x-ray imager's deviation from the nominal geometrical definition has to be corrected for. For this purpose a ball bearing phantom applicable for nine degrees of freedom (9-DOF) calibration of a novel cone-beam computed tomography (CBCT) scanner was designed and validated. In order to ensure accurate automated marker detection, as many uniformly distributed markers as possible should be used with a minimum projected inter-marker distance of 10 mm. Three different marker distributions on the phantom cylinder surface were simulated. First, a fixed number of markers are selected and their coordinates are randomly generated. Second, the quasi-random method is represented by setting a constraint on the marker distances in the projections. The third approach generates the ball coordinates helically based on the Golden ratio, ϕ. Projection images of the phantom incorporating the CBCT scanner's geometry were simulated and analysed with respect to uniform distribution and intra-marker distance. Based on the evaluations a phantom prototype was manufactured and validated by a series of flexmap calibration measurements and analyses. The simulation with randomly distributed markers as well as the quasi-random approach showed an insufficient uniformity of the distribution over the detector area. The best compromise between uniform distribution and a high packing fraction of balls is provided by the Golden section approach. A prototype was manufactured accordingly. The phantom was validated for 9-DOF geometric calibrations of the CBCT scanner with independently moveable source and detector arms. A novel flexmap calibration phantom intended for 9-DOF was developed. The ball bearing distribution based on the Golden section was found to be highly advantageous. The phantom showed satisfying results for calibrations of the CBCT scanner and provides the basis for further flexmap correction and reconstruction developments.

  15. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2014-12-01

    The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally- refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  16. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2013-12-01

    The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using NCEP/NCAR re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  17. A Search Model for Imperfectly Detected Targets

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert

    2012-01-01

    Under the assumptions that 1) the search region can be divided up into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and the kurtosis of the distribution as a function of the two parameters. The model was fit to data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
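
    A quick simulation of the stated decomposition (N times a geometric number of completed, detection-free sweeps plus a uniform position within the detecting sweep) reproduces the moments numerically; the values of N and P below are illustrative.

```python
import numpy as np
from scipy import stats

def search_time_moments(N=20, P=0.6, n_trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Completed (detection-free) full sweeps before the detecting sweep:
    # geometric with success probability P, counted from zero.
    full_sweeps = rng.geometric(P, size=n_trials) - 1
    # Position of the target within the detecting sweep: uniform on 1..N.
    within = rng.integers(1, N + 1, size=n_trials)
    searches = N * full_sweeps + within
    return searches.mean(), searches.std(), stats.kurtosis(searches)

mean, sd, kurt = search_time_moments()
print(f"mean={mean:.2f}, sd={sd:.2f}, excess kurtosis={kurt:.2f}")
```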

  18. Educators' Perceptions of the Effects of School Uniforms on School Climate in a Selected Metropolitan Disciplinary Alternative Education Program

    ERIC Educational Resources Information Center

    Chime, Emmanuel Onoh

    2010-01-01

    The purpose of this study was to examine educators' perceptions regarding the effects of school uniforms on school climate in a selected metropolitan disciplinary alternative education program. More specifically, this study investigated the influence of the variables group status, gender, ethnicity, age and years of experience on the perceptions…

  19. Aspect-related Vegetation Differences Amplify Soil Moisture Variability in Semiarid Landscapes

    NASA Astrophysics Data System (ADS)

    Yetemen, O.; Srivastava, A.; Kumari, N.; Saco, P. M.

    2017-12-01

    Soil moisture variability (SMV) in semiarid landscapes is affected by vegetation, soil texture, climate, aspect, and topography. The heterogeneity in vegetation cover that results from the effects of microclimate, terrain attributes (slope gradient, aspect, drainage area, etc.), soil properties, and spatial variability in precipitation has been reported to act as the dominant factor modulating SMV in semiarid ecosystems. However, the role of hillslope aspect in SMV, though reported in many field studies, has not received the same degree of attention, probably due to the lack of extensive large datasets. Numerical simulations can then be used to elucidate the contribution of aspect-driven vegetation patterns to this variability. In this work, we perform a sensitivity analysis to study the variables driving SMV using the CHILD landscape evolution model equipped with a spatially-distributed solar-radiation component that couples vegetation dynamics and surface hydrology. To explore how aspect-driven vegetation heterogeneity contributes to the SMV, CHILD was run using a range of parameters selected to reflect different scenarios (from uniform to heterogeneous vegetation cover). Throughout the simulations, the spatial distributions of soil moisture and vegetation cover are computed to estimate the corresponding coefficients of variation. Under uniform spatial precipitation forcing and uniform soil properties, the factors affecting the spatial distribution of solar insolation are found to play a key role in the SMV through the emergence of aspect-driven vegetation patterns. Hence, factors such as catchment gradient, aspect, and latitude define water stress and vegetation growth, and in turn affect the available soil moisture content. Interestingly, changes in soil properties (porosity, root depth, and pore-size distribution) over the domain are not as effective as the other factors. These findings show that the factors associated with aspect-related vegetation differences amplify the soil moisture variability of semi-arid landscapes.

  20. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
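
    One standard fact behind scoring with single order statistics is that the k-th order statistic of n i.i.d. Uniform(0,1) values follows a Beta(k, n-k+1) distribution; the sketch below uses this to score a trial CDF after a probability integral transform. It is a generic illustration of the idea, not the paper's scoring function, and the data and candidate models are illustrative.

```python
import numpy as np
from scipy import stats

def order_statistic_scores(data, trial_cdf):
    """Score a trial CDF by how typical each transformed order statistic is.

    Applying the trial CDF to the data (probability integral transform) and
    sorting gives values that, under a correct model, behave like uniform
    order statistics, i.e. the k-th value follows Beta(k, n-k+1).
    """
    u = np.sort(trial_cdf(data))
    n = u.size
    k = np.arange(1, n + 1)
    return stats.beta.logpdf(u, k, n - k + 1)   # per-order-statistic log-density

rng = np.random.default_rng(3)
sample = rng.normal(loc=1.0, scale=2.0, size=500)

good = order_statistic_scores(sample, lambda x: stats.norm.cdf(x, 1.0, 2.0)).sum()
bad = order_statistic_scores(sample, lambda x: stats.norm.cdf(x, 0.0, 1.0)).sum()
print(f"score under correct model: {good:.1f}; under wrong model: {bad:.1f}")
```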

  1. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  2. DG-IMEX Stochastic Galerkin Schemes for Linear Transport Equation with Random Inputs and Diffusive Scalings

    DOE PAGES

    Chen, Zheng; Liu, Liu; Mu, Lin

    2017-05-03

    In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects will be addressed. Additionally, a uniform numerical stability with respect to the Knudsen number ϵ, and a uniform in ϵ error estimate is given. For temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and sAP of the method are conducted.

  3. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
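
    As a concrete example of one of the techniques named above, the following sketch runs parallel tempering on a bimodal one-dimensional density, with within-replica Metropolis moves and occasional Metropolis-accepted swaps between neighbouring temperatures; the target density, temperature ladder, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
# Unnormalized log-density with two well-separated modes at x = -4 and x = +4.
log_p = lambda x: np.logaddexp(-0.5 * ((x - 4) / 0.5) ** 2,
                               -0.5 * ((x + 4) / 0.5) ** 2)

temps = np.array([1.0, 2.0, 4.0, 8.0])   # replica "temperatures"
x = np.zeros(len(temps))                 # current state of each replica
samples = []

for step in range(50_000):
    # Within-replica Metropolis moves (larger proposals at higher temperature).
    for k, T in enumerate(temps):
        prop = x[k] + rng.normal(scale=np.sqrt(T))
        if np.log(rng.random()) < (log_p(prop) - log_p(x[k])) / T:
            x[k] = prop
    # Occasionally attempt to swap neighbouring replicas (Metropolis criterion).
    if step % 10 == 0:
        k = rng.integers(len(temps) - 1)
        delta = (1 / temps[k] - 1 / temps[k + 1]) * (log_p(x[k + 1]) - log_p(x[k]))
        if np.log(rng.random()) < delta:
            x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])                 # keep the T = 1 chain

samples = np.array(samples[10_000:])
print("fraction of samples in the right-hand mode:", (samples > 0).mean())
```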

  4. An analysis of tire tread wear groove patterns and the effect of heteroscedasticity on tire tread wear statistics

    DOT National Transportation Integrated Search

    1985-09-01

    This report examines the groove wear variability among tires subjected to the Uniform Tire Quality Grading (UTQG) test procedure for determining tire tread wear. The effects of heteroscedasticity (variable variance) on a previously reported sta...

  5. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with sparse matrix.
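
    A toy version of such a scheme is sketched below: entropic (multiplicative) mirror descent on the unit simplex in which only one randomly chosen component of the gradient is used per step. The objective, step size, and iteration count are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_mirror_descent(grad_full, x0, n_steps=5000, step=0.1):
    """Entropic mirror descent on the unit simplex.

    Each step uses a single randomly chosen component of the gradient,
    scaled by the dimension to keep the estimate unbiased.
    """
    x = x0.copy()
    n = x.size
    avg = np.zeros_like(x)
    for t in range(1, n_steps + 1):
        i = rng.integers(n)                  # random coordinate
        g = np.zeros(n)
        g[i] = n * grad_full(x)[i]           # unbiased componentwise estimate
        x = x * np.exp(-step * g)            # multiplicative (entropic) update
        x /= x.sum()                         # re-normalize onto the simplex
        avg += (x - avg) / t                 # running average of the iterates
    return avg

# Toy problem: minimize a convex quadratic x'Ax over the unit simplex.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])
f_grad = lambda x: 2 * A @ x
x_star = randomized_mirror_descent(f_grad, np.ones(3) / 3)
print(x_star, x_star @ A @ x_star)
```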

  6. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective and the design variables. Subsequently, genetic algorithm is adopted and applied to the surrogate model to acquire the optimal solution in the case of satisfying some constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method of modeling and optimization design performs well in improving the duct aerodynamic performance and can be also applied to wider fields of mechanical design and seen as a useful tool for engineering designers, by reducing the design time and computation consumption.

  7. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

  8. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
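
    A toy version of the underlying hit-and-run random walk, applied to a small polytope {x : Ax <= b}, is sketched below; it uses random directions rather than coordinate directions and omits the rounding preprocessing, and the polytope, starting point, and chain length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Polytope: x >= 0 and x1 + x2 + x3 <= 1, written as A x <= b.
A = np.vstack([-np.eye(3), np.ones((1, 3))])
b = np.concatenate([np.zeros(3), [1.0]])

def hit_and_run(x, n_samples=20_000):
    """Uniform sampling of {x : A x <= b} by the hit-and-run random walk."""
    out = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                  # random direction on the sphere
        # Feasible segment x + t*d: for each row, a_i.(x + t*d) <= b_i.
        ad, ax = A @ d, A @ x
        with np.errstate(divide="ignore"):
            t = (b - ax) / ad
        t_max = t[ad > 1e-12].min()             # constraints limiting forward motion
        t_min = t[ad < -1e-12].max()            # constraints limiting backward motion
        x = x + rng.uniform(t_min, t_max) * d   # uniform point on the feasible chord
        out.append(x)
    return np.array(out)

samples = hit_and_run(np.full(3, 0.2))
print("sample mean (should approach [0.25, 0.25, 0.25]):", samples.mean(axis=0).round(3))
```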

  9. Effects of Spatial Variability of Soil Properties on the Triggering of Rainfall-Induced Shallow Landslides

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2015-04-01

    Naturally-occurring spatial variations in soil properties (e.g., soil depth, moisture, and texture) affect key hydrological processes and potentially the mechanical response of soil to hydromechanical loading (relative to the commonly-assumed uniform soil mantle). We quantified the effects of soil spatial variability on the triggering of rainfall-induced shallow landslides at the hillslope- and catchment-scales, using a physically-based landslide triggering model that considers interacting soil columns with mechanical strength thresholds (represented by the Fiber Bundle Model). The spatial variations in soil properties are represented as Gaussian random distributions and the level of variation is characterized by the coefficient of variation and correlation lengths of soil properties (i.e., soil depth, soil texture and initial water content in this study). The impacts of these spatial variations on landslide triggering characteristics were measured by comparing the times to triggering and landslide volumes for heterogeneous soil properties and homogeneous cases. Results at hillslope scale indicate that for spatial variations of an individual property (without cross correlation), the increasing of coefficient of variation introduces weak spots where mechanical damage is accelerated and leads to earlier onset of landslide triggering and smaller volumes. Increasing spatial correlation length of soil texture and initial water content also induces early landslide triggering and small released volumes due to the transition of failure mode from brittle to ductile failure. In contrast, increasing spatial correlation length of soil depth "reduces" local steepness and postpones landslide triggering. Cross-correlated soil properties generally promote landslide initiation, but depending on the internal structure of spatial distribution of each soil property, landslide triggering may be reduced. The effects of cross-correlation between initial water content and soil texture were investigated in detail at the catchment scale by incorporating correlations of both variables with topography. Results indicate that the internal structure of the spatial distribution of each soil property together with their interplays determine the overall performance of the coupled spatial variability. This study emphasizes the importance of both the randomness and spatial structure of soil properties on landslide triggering and characteristics.

  10. Misinterpretation of statistical distance in security of quantum key distribution shown by simulation

    NASA Astrophysics Data System (ADS)

    Iwakoshi, Takehisa; Hirota, Osamu

    2014-10-01

    This study will test an interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is the maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both key uniformity in the context of universal composability and the operational meaning of the failure probability of the key extraction. However, this proposal had not been verified concretely for many years, while H. P. Yuen and O. Hirota have cast doubt on this interpretation since 2009. To ascertain this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. In this way, we calculated the statistical distance, which corresponds to the trace distance in quantum theory after a quantum measurement, and compared it with the failure probability to determine whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why the trace distance is not suitable for guaranteeing security in QKD from the viewpoint of quantum binary decision theory.
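
    For reference, the statistical (total-variation) distance between an empirical distribution of fixed-length bit blocks and the ideal uniform distribution can be computed as below; the block length, sample size, and generator are illustrative assumptions. Note that a finite sample yields a nonzero distance even for an ideal generator.

```python
import numpy as np

def statistical_distance_to_uniform(bits, block_len=8):
    """Half the L1 distance between the empirical distribution of
    block_len-bit blocks and the ideal uniform distribution."""
    n_blocks = len(bits) // block_len
    blocks = bits[: n_blocks * block_len].reshape(n_blocks, block_len)
    values = blocks @ (1 << np.arange(block_len))   # interpret each block as an integer
    counts = np.bincount(values, minlength=2 ** block_len)
    empirical = counts / n_blocks
    ideal = 1.0 / 2 ** block_len
    return 0.5 * np.abs(empirical - ideal).sum()

rng = np.random.default_rng(5)
bits = rng.integers(0, 2, size=200_000)             # stand-in for a physical RNG
print("distance to uniform:", statistical_distance_to_uniform(bits))
```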

  11. Probabilistic homogenization of random composite with ellipsoidal particle reinforcement by the iterative stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Sokołowski, Damian; Kamiński, Marcin

    2018-01-01

    This study proposes a framework for determination of basic probabilistic characteristics of the orthotropic homogenized elastic properties of the periodic composite reinforced with ellipsoidal particles and a high stiffness contrast between the reinforcement and the matrix. Homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM) is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of cubic Representative Volume Element (RVE) of this composite containing single particle. The given input Gaussian random variable is Young modulus of the matrix, while 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. The entire series of several deterministic solutions with varying Young modulus of the matrix serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions finally used in stochastic Taylor expansions inherent for the ISFEM. A numerical example consists of the High Density Polyurethane (HDPU) reinforced with the Carbon Black particle. It is numerically investigated (1) if the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in matrix Young modulus affects the effective stiffness tensor components and their PDF (Probability Density Function).

  12. The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.

    PubMed

    de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J

    2017-05-16

    Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces. In particular, we show how the surface is affected by a non-uniform distribution of phases.
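
    A compact way to generate such a surface is to impose a power-law decay on the moduli of the Fourier coefficients (which fixes the Hurst exponent) and draw the phases independently and uniformly; the grid size, exponent convention, and normalization below are illustrative assumptions.

```python
import numpy as np

def gaussian_surface(n=256, hurst=0.7, seed=0):
    """Random Gaussian-like surface with a power-law spectrum and uniform random phases."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = np.inf                          # remove the zero-frequency mode
    amplitude = k ** (-(hurst + 1.0))         # power-law decay of the Fourier moduli
    phases = rng.uniform(0, 2 * np.pi, size=(n, n))
    spectrum = amplitude * np.exp(1j * phases)
    surface = np.fft.ifft2(spectrum).real     # many independent modes give a Gaussian-like field
    return (surface - surface.mean()) / surface.std()

z = gaussian_surface()
print(z.shape, z.std())
```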

  13. Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field

    NASA Astrophysics Data System (ADS)

    Borelli, M. E. S.; Carneiro, C. E. I.

    1996-02-01

    We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δi, with probability distribution P(Δi) = pδ(Δi - Δ) + (1 - p)δ(Δi). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.

  14. Biologically-variable rhythmic auditory cues are superior to isochronous cues in fostering natural gait variability in Parkinson's disease.

    PubMed

    Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S

    2017-01-01

    Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's<0.05). In contrast, cueing with isochronous or randomly varying inter-stimulus/beat intervals removed the LRC in the stride cycle. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's<0.05) but not the positive effect of biological variability. Stimulus variability and patients' propensity to synchronize play a critical role in fostering healthier gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. A UNIFORM VERSUS AN AGGREGATED WATER BALANCE OF A SEMI-ARID WATERSHED. (R824784)

    EPA Science Inventory

    Hydrologists have long struggled with the problem of how to account for the effects of spatial variability in precipitation, vegetation and soils. This problem is particularly acute in snow-fed, semi-arid watersheds, which typically have considerable variability in snow distribut...

  16. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  17. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
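
    For a single uniform random parameter, Legendre polynomials form the orthogonal PCE basis mentioned above; the sketch below fits such an expansion to a toy response by regression and reads off the mean and variance from the coefficients. The model function, expansion order, and regression-based fitting are assumptions of this sketch, not the paper's procedure.

```python
import numpy as np
from numpy.polynomial import legendre

def pce_moments(model, order=6, n_train=200, seed=0):
    """Legendre PCE of model(xi) with xi ~ Uniform(-1, 1): return mean and variance."""
    rng = np.random.default_rng(seed)
    xi = rng.uniform(-1, 1, size=n_train)
    coeffs = legendre.legfit(xi, model(xi), order)   # regression on the Legendre basis
    # Orthogonality of Legendre polynomials under the uniform density on [-1, 1]:
    # E[P_n(xi) P_m(xi)] = delta_nm / (2n + 1).
    norms = 1.0 / (2 * np.arange(order + 1) + 1)
    mean = coeffs[0]                                 # P_0 = 1, so c_0 is the mean
    variance = np.sum(coeffs[1:] ** 2 * norms[1:])
    return mean, variance

model = lambda xi: np.exp(0.5 * xi)                  # toy "dynamical response"
mean, var = pce_moments(model)
mc = model(np.random.default_rng(1).uniform(-1, 1, size=200_000))
print(f"PCE         mean={mean:.4f} var={var:.5f}")
print(f"Monte Carlo mean={mc.mean():.4f} var={mc.var():.5f}")
```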

  18. Washing and changing uniforms: is guidance being adhered to?

    PubMed

    Potter, Yvonne Camilla; Justham, David

    To allay public apprehension regarding the risk of nurses' uniforms transmitting healthcare-associated infections (HCAI), national and local guidelines have been issued to control use, laundry and storage. This paper aims to measure the knowledge of registered nurses (RNs) and healthcare assistants (HCAs) working within a rural NHS foundation Trust and their adherence to the local infection prevention and control (IPC) standard regarding uniforms through a Trust-wide audit. Stratified random sampling selected 597 nursing staff and 399 responded (67%) by completing a short questionnaire based on the local standard. Responses were coded and transferred to SPSS (v. 17) for analysis. The audit found that nursing staff generally adhere to the guidelines, changing their uniforms daily and immediately upon accidental soiling, and wearing plastic aprons where indicated. At home, staff normally machine-wash and then iron their uniforms at the hottest setting. Nevertheless, few observe the local direction to place their newly-laundered uniforms in protective covers. This paper recommends a re-audit to compare compliance rates with baseline figures and further research into the reasons why compliance is lacking to sanction interventions for improvement, such as providing relevant staff education and re-introducing appropriate changing facilities.

  19. Role of work uniform in alleviating perceptual strain among construction workers.

    PubMed

    Yang, Yang; Chan, Albert Ping-Chuen

    2017-02-07

    This study aims to examine the benefits of wearing a new construction work uniform in real-work settings. A field experiment with a randomized assignment of an intervention group to a newly designed uniform and a control group to a commercially available trade uniform was executed. A total of 568 sets of physical, physiological, perceptual, and microclimatological data were obtained. A linear mixed-effects model (LMM) was built to examine the cause-effect relationship between the Perceptual Strain Index (PeSI) and heat stressors including wet bulb globe temperature (WBGT), estimated workload (relative heart rate), exposure time, trade, workplace, and clothing type. An interaction effect between clothing and trade revealed that perceptual strain of workers across four trades was significantly alleviated by 1.6-6.3 units in the intervention group. Additionally, the results of a questionnaire survey on assessing the subjective sensations on the two uniforms indicated that wearing comfort was improved by 1.6-1.8 units when wearing the intervention type. This study not only provides convincing evidences on the benefits of wearing the newly designed work uniform in reducing perceptual strain but also heightens the value of the field experiment in heat stress intervention studies.

  20. Role of work uniform in alleviating perceptual strain among construction workers

    PubMed Central

    YANG, Yang; CHAN, Albert Ping-chuen

    2016-01-01

    This study aims to examine the benefits of wearing a new construction work uniform in real-work settings. A field experiment with a randomized assignment of an intervention group to a newly designed uniform and a control group to a commercially available trade uniform was executed. A total of 568 sets of physical, physiological, perceptual, and microclimatological data were obtained. A linear mixed-effects model (LMM) was built to examine the cause-effect relationship between the Perceptual Strain Index (PeSI) and heat stressors including wet bulb globe temperature (WBGT), estimated workload (relative heart rate), exposure time, trade, workplace, and clothing type. An interaction effect between clothing and trade revealed that perceptual strain of workers across four trades was significantly alleviated by 1.6–6.3 units in the intervention group. Additionally, the results of a questionnaire survey on assessing the subjective sensations on the two uniforms indicated that wearing comfort was improved by 1.6–1.8 units when wearing the intervention type. This study not only provides convincing evidences on the benefits of wearing the newly designed work uniform in reducing perceptual strain but also heightens the value of the field experiment in heat stress intervention studies. PMID:27666953

  1. A new approach to evaluate gamma-ray measurements

    NASA Technical Reports Server (NTRS)

    Dejager, O. C.; Swanepoel, J. W. H.; Raubenheimer, B. C.; Vandervalt, D. J.

    1985-01-01

    Misunderstandings about the term random sample and its implications may easily arise. Conditions under which the phases, obtained from arrival times, do not form a random sample and the dangers involved are discussed. Watson's U² test for uniformity is recommended for light curves with duty cycles larger than 10%. Under certain conditions, non-parametric density estimation may be used to determine estimates of the true light curve and its parameters.
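
    Watson's U² statistic for testing uniformity of phases on [0, 1) can be computed as below; the sample data are illustrative, and the observed value should be compared against critical values from published tables for the chosen significance level.

```python
import numpy as np

def watson_u2(phases):
    """Watson's U^2 statistic for uniformity of values on [0, 1)."""
    u = np.sort(np.asarray(phases) % 1.0)
    n = u.size
    i = np.arange(1, n + 1)
    # Cramer-von Mises statistic W^2, then Watson's rotation-invariant correction.
    w2 = np.sum((u - (2 * i - 1) / (2 * n)) ** 2) + 1.0 / (12 * n)
    return w2 - n * (u.mean() - 0.5) ** 2

rng = np.random.default_rng(2)
print("uniform phases:   U^2 =", round(watson_u2(rng.random(500)), 4))
print("clustered phases: U^2 =", round(watson_u2(0.1 * rng.random(500)), 4))
```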

  2. Investigation of certain wing shapes with sections varying progressively along the span

    NASA Technical Reports Server (NTRS)

    Arsandaux, L

    1931-01-01

    This investigation has a double object: 1) the calculation of the general characteristics of certain wings with progressively varying sections; 2) the determination of data furnishing, in certain cases, some information on the actual distribution of the external forces acting on a wing. We shall try to show certain advantages belonging to the few wing types of variable section which we shall study and that, even if the general aerodynamic coefficients of these wings are not often clearly superior to those of certain wings of uniform section, the wings of variable section nevertheless have certain advantages over those of uniform section in the distribution of the attainable stresses.

  3. Microstructure Filled Hohlraums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, A. S.; Thomas, C. A.; Reese, T. M.

    2017-02-24

    We propose replacing the gas fill in a hohlraum with a low average density, variable uniformity 3D printed structure. This creates a bimodal hohlraum which acts like a vacuum hohlraum initially during the picket, but could protect the capsule from glint or direct illumination, and then once expanded, homogenizes to behave like a variable-Z gas fill during the peak portion of the drive. This is motivated by two main aims: 1) reduction of the Au bubble velocity to improve inner beam propagation, and 2) the introduction of a low density, high-Z, x-ray converter to improve x-ray production in the hohlraum and uniformity of the radiation field seen by the capsule.

  4. Shape optimization using a NURBS-based interface-enriched generalized FEM

    DOE PAGES

    Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...

    2016-11-26

    This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. As a result, verification and illustrative problems are solved to demonstrate the precision and capability of the method.

  5. The APOGEE-2 Survey of the Orion Star-forming Complex. I. Target Selection and Validation with Early Observations

    NASA Astrophysics Data System (ADS)

    Cottle, J.’Neil; Covey, Kevin R.; Suárez, Genaro; Román-Zúñiga, Carlos; Schlafly, Edward; Downes, Juan Jose; Ybarra, Jason E.; Hernandez, Jesus; Stassun, Keivan; Stringfellow, Guy S.; Getman, Konstantin; Feigelson, Eric; Borissova, Jura; Kim, J. Serena; Roman-Lopes, A.; Da Rio, Nicola; De Lee, Nathan; Frinchaboy, Peter M.; Kounkel, Marina; Majewski, Steven R.; Mennickent, Ronald E.; Nidever, David L.; Nitschelm, Christian; Pan, Kaike; Shetrone, Matthew; Zasowski, Gail; Chambers, Ken; Magnier, Eugene; Valenti, Jeff

    2018-06-01

    The Orion Star-forming Complex (OSFC) is a central target for the APOGEE-2 Young Cluster Survey. Existing membership catalogs span limited portions of the OSFC, reflecting the difficulty of selecting targets homogeneously across this extended, highly structured region. We have used data from wide-field photometric surveys to produce a less biased parent sample of young stellar objects (YSOs) with infrared (IR) excesses indicative of warm circumstellar material or photometric variability at optical wavelengths across the full 420 square degree extent of the OSFC. When restricted to YSO candidates with H < 12.4, to ensure S/N ∼ 100 for a six-visit source, this uniformly selected sample includes 1307 IR excess sources selected using criteria vetted by Koenig & Liesawitz (2014) and 990 optical variables identified in the Pan-STARRS1 3π survey: 319 sources exhibit both optical variability and evidence of circumstellar disks through IR excess. Objects from this uniformly selected sample received the highest priority for targeting, but required fewer than half of the fibers on each APOGEE-2 plate. We filled the remaining fibers with previously confirmed and new color–magnitude selected candidate OSFC members. Radial velocity measurements from APOGEE-1 and new APOGEE-2 observations taken in the survey’s first year indicate that ∼90% of the uniformly selected targets have radial velocities consistent with Orion membership. The APOGEE-2 Orion survey will include >1100 bona fide YSOs whose uniform selection function will provide a robust sample for comparative analyses of the stellar populations and properties across all sub-regions of Orion.

  6. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-01-07

    The random variable of interest is viewed in concert with an auxiliary random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved...

  7. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
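
    The article derives its formulae recursively via Stirling numbers of the first kind; those formulae are not reproduced above, so the sketch below instead illustrates the closely related classical identity E[X^m] = Σ_k S(m,k)·(n)_k·p^k, with S(m,k) Stirling numbers of the second kind and (n)_k the falling factorial, checked against a brute-force sum. It conveys the flavour of the computation rather than the article's exact recursion.

        # Raw moments of a Binomial(n, p) variable via the classical identity
        # E[X^m] = sum_k S(m, k) * (n)_k * p^k, with S(m, k) Stirling numbers of the
        # second kind and (n)_k the falling factorial, checked against a direct sum.
        # This illustrates the flavour of the computation, not the article's recursion.
        from math import comb

        def stirling2(m, k):
            # Stirling numbers of the second kind via the standard recurrence
            if m == k:
                return 1
            if k == 0 or k > m:
                return 0
            return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

        def falling_factorial(n, k):
            out = 1
            for i in range(k):
                out *= n - i
            return out

        def binomial_raw_moment(n, p, m):
            return sum(stirling2(m, k) * falling_factorial(n, k) * p ** k
                       for k in range(m + 1))

        def brute_force_moment(n, p, m):
            return sum(x ** m * comb(n, x) * p ** x * (1 - p) ** (n - x)
                       for x in range(n + 1))

        n, p = 10, 0.3
        for m in range(1, 5):
            print(m, binomial_raw_moment(n, p, m), brute_force_moment(n, p, m))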

  8. Variability in research ethics review of cluster randomized trials: a scenario-based survey in three countries

    PubMed Central

    2014-01-01

    Background Cluster randomized trials (CRTs) present unique ethical challenges. In the absence of a uniform standard for their ethical design and conduct, problems such as variability in procedures and requirements by different research ethics committees will persist. We aimed to assess the need for ethics guidelines for CRTs among research ethics chairs internationally, investigate variability in procedures for research ethics review of CRTs within and among countries, and elicit research ethics chairs’ perspectives on specific ethical issues in CRTs, including the identification of research subjects. The proper identification of research subjects is a necessary requirement in the research ethics review process, to help ensure, on the one hand, that subjects are protected from harm and exploitation, and on the other, that reviews of CRTs are completed efficiently. Methods A web-based survey with closed- and open-ended questions was administered to research ethics chairs in Canada, the United States, and the United Kingdom. The survey presented three scenarios of CRTs involving cluster-level, professional-level, and individual-level interventions. For each scenario, a series of questions was posed with respect to the type of review required (full, expedited, or no review) and the identification of research subjects at cluster and individual levels. Results A total of 189 (35%) of 542 chairs responded. Overall, 144 (84%, 95% CI 79 to 90%) agreed or strongly agreed that there is a need for ethics guidelines for CRTs and 158 (92%, 95% CI 88 to 96%) agreed or strongly agreed that research ethics committees could be better informed about distinct ethical issues surrounding CRTs. There was considerable variability among research ethics chairs with respect to the type of review required, as well as the identification of research subjects. The cluster-cluster and professional-cluster scenarios produced the most disagreement. Conclusions Research ethics committees identified a clear need for ethics guidelines for CRTs and education about distinct ethical issues in CRTs. There is disagreement among committees, even within the same countries, with respect to key questions in the ethics review of CRTs. This disagreement reflects variability of opinion and practices pointing toward possible gaps in knowledge, and supports the need for explicit guidelines for the ethical conduct and review of CRTs. PMID:24495542

  9. The segmented non-uniform dielectric module design for uniformity control of plasma profile in a capacitively coupled plasma chamber

    NASA Astrophysics Data System (ADS)

    Xia, Huanxiong; Xiang, Dong; Yang, Wang; Mou, Peng

    2014-12-01

    Low-temperature plasma processing is one of the critical techniques in IC manufacturing, used for steps such as etching and thin-film deposition, and plasma uniformity greatly affects process quality, so designing for plasma uniformity control is important but difficult. It is hard to regulate the spatial distribution of the plasma in the chamber finely and flexibly by controlling the discharge parameters or modifying the structure, since such zero-dimensional adjustments can only shift the overall level of the process factors. In view of this problem, a segmented non-uniform dielectric module design is proposed for regulating the plasma profile in a CCP chamber. The solution achieves refined and flexible regulation of the plasma profile in the radial direction by configuring the relative permittivity and the width of each segment. To solve this design problem, a novel simulation-based auto-design approach is proposed, which automatically designs the positional sequences of multiple independent variables so that the output target profile of the parameterized simulation model approximates the profile the user presets. The approach employs the idea of a quasi-closed-loop control system and works iteratively: it starts from initial values of the design-variable sequences, predicts better sequences from the feedback of the profile error between the output profile and the expected one, and stops only when the profile error is narrowed to within the preset tolerance.
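
    A toy sketch of the quasi-closed-loop iterative design idea described above: a sequence of per-segment design values is corrected from the profile error until a simulated output profile matches a preset target. The simulate() function is a crude stand-in surrogate invented for illustration; the paper couples this kind of loop to a parameterized plasma simulation.

        # Toy sketch of a quasi-closed-loop iterative design: adjust a sequence of
        # per-segment design values until a simulated output profile matches a preset
        # target within tolerance. simulate() is an invented surrogate, not a plasma model.
        import numpy as np

        def simulate(segment_values):
            # Stand-in response: mostly local, with some smoothing spillover between
            # neighbouring segments (mimicking how a local change spreads radially).
            kernel = np.array([0.25, 0.5, 0.25])
            smoothed = np.convolve(segment_values, kernel, mode="same")
            return 0.7 * segment_values + 0.3 * smoothed

        def iterative_design(target_profile, gain=0.8, tol=1e-3, max_iter=200):
            design = np.full_like(target_profile, target_profile.mean())  # initial guess
            for _ in range(max_iter):
                profile = simulate(design)
                error = target_profile - profile
                if np.max(np.abs(error)) < tol:
                    break
                design = design + gain * error  # feedback correction from the profile error
            return design, profile

        target = np.linspace(1.0, 1.2, 16)  # preset target profile (arbitrary numbers)
        design, achieved = iterative_design(target)
        print(np.max(np.abs(achieved - target)))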

  10. The distribution of cigarette prices under different tax structures: findings from the International Tobacco Control Policy Evaluation (ITC) Project

    PubMed Central

    Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T

    2013-01-01

    Background The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. Objective This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. Methods We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Findings Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax. PMID:23792324

  11. The distribution of cigarette prices under different tax structures: findings from the International Tobacco Control Policy Evaluation (ITC) Project.

    PubMed

    Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T

    2014-03-01

    The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax.

  12. Automating variable rate irrigation management prescriptions for center pivots from field data maps

    USDA-ARS?s Scientific Manuscript database

    Variable rate irrigation (VRI) enables center pivot systems to match irrigation application to non-uniform field needs. This technology has potential to improve application and water-use efficiency while reducing environmental impacts from excess runoff and poor water quality. Proper management of V...

  13. Performance evaluation of a center pivot variable rate irrigation system

    USDA-ARS?s Scientific Manuscript database

    Variable Rate Irrigation (VRI) for center pivots offers potential to match specific application rates to non-uniform soil conditions along the length of the lateral. The benefit of such systems is influenced by the areal extent of these variations and the smallest scale to which the irrigation syste...

  14. Stress and Performance: Effects of Subjective Work Load and Time Urgency.

    ERIC Educational Resources Information Center

    Friend, Kenneth E.

    1982-01-01

    Measured subjective work load, time urgency, and other stress/motivation variables for management personnel taking a demanding problem-solving exam. Data suggest that increases in psychological stresses such as subjectively high work load and time urgency uniformly impair performance across the whole range of these variables. (Author)

  15. Traffic signal inventory project

    DOT National Transportation Integrated Search

    2001-06-01

    The purpose of this study was to determine the level of compliance with the "Manual on Uniform Traffic Control Devices" (MUTCD) and other industry standards of traffic signals on the Iowa state highway system. Signals were randomly selected in cities...

  16. Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2018-03-01

    A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d≥ 2 . Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
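
    For illustration, a brute-force Monte Carlo sketch of the adsorption process itself (not the paper's clustered-random-graph approximation): disks of a fixed radius arrive at uniform random locations in the unit square and are accepted only if they do not overlap previously accepted disks. The attempt count and radius are arbitrary choices.

        # Brute-force Monte Carlo sketch of RSA of congruent disks in the unit square:
        # attempts land at uniform random locations and are accepted only if they do
        # not overlap previously accepted disks. Parameters are arbitrary.
        import math
        import random

        def rsa_fraction_accepted(n_attempts=5000, radius=0.02, seed=0):
            rng = random.Random(seed)
            accepted = []
            for _ in range(n_attempts):
                x, y = rng.random(), rng.random()
                # overlap if the centre-to-centre distance is below one diameter
                if all(math.hypot(x - ax, y - ay) >= 2 * radius for ax, ay in accepted):
                    accepted.append((x, y))
            return len(accepted) / n_attempts

        print(rsa_fraction_accepted())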

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability, (i) the RGG is connected, or (ii) the RGG contains a giant component. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
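
    A small sketch of the push model on a random geometric graph as defined above: n nodes at uniform random positions in the unit square, an edge between pairs closer than a radius r, and in each round every informed node informs one uniformly chosen neighbor. The parameters are arbitrary and the code simply counts rounds until the component containing the source is fully informed.

        # Sketch of the push broadcast model on a random geometric graph (RGG).
        import math
        import random

        def random_geometric_graph(n, r, rng):
            pos = [(rng.random(), rng.random()) for _ in range(n)]
            adj = [[] for _ in range(n)]
            for i in range(n):
                for j in range(i + 1, n):
                    if math.dist(pos[i], pos[j]) <= r:
                        adj[i].append(j)
                        adj[j].append(i)
            return adj

        def push_broadcast_rounds(adj, rng, start=0):
            # nodes reachable from the start (the component to be informed)
            reachable, stack = {start}, [start]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v not in reachable:
                        reachable.add(v)
                        stack.append(v)
            informed, rounds = {start}, 0
            while informed != reachable:
                newly = {rng.choice(adj[u]) for u in informed if adj[u]}
                informed |= newly
                rounds += 1
            return rounds, len(informed)

        rng = random.Random(1)
        adj = random_geometric_graph(n=300, r=0.12, rng=rng)
        print(push_broadcast_rounds(adj, rng))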

  18. Impact of Beamforming on the Path Connectivity in Cognitive Radio Ad Hoc Networks

    PubMed Central

    Dung, Le The; Hieu, Tran Dinh; Choi, Seong-Gon; Kim, Byung-Seo; An, Beongku

    2017-01-01

    This paper investigates the impact of using directional antennas and beamforming schemes on the connectivity of cognitive radio ad hoc networks (CRAHNs). Specifically, considering that secondary users use two kinds of directional antennas, i.e., uniform linear array (ULA) and uniform circular array (UCA) antennas, and two different beamforming schemes, i.e., randomized beamforming and center-directed beamforming, to communicate with each other, we study the connectivity of all combination pairs of directional antennas and beamforming schemes and compare their performances to those of omnidirectional antennas. The results obtained in this paper show that, compared with omnidirectional transmission, beamforming transmission only benefits the connectivity when the density of secondary users is moderate. Moreover, the combination of the UCA antenna and the randomized beamforming scheme gives the highest path connectivity in all evaluated scenarios. Finally, the number of antenna elements and the degree of path loss greatly affect path connectivity in CRAHNs. PMID:28346377

  19. Large-size, high-uniformity, random silver nanowire networks as transparent electrodes for crystalline silicon wafer solar cells.

    PubMed

    Xie, Shouyi; Ouyang, Zi; Jia, Baohua; Gu, Min

    2013-05-06

    Metal nanowire networks are emerging as next generation transparent electrodes for photovoltaic devices. We demonstrate the application of random silver nanowire networks as the top electrode on crystalline silicon wafer solar cells. The dependence of transmittance and sheet resistance on the surface coverage is measured. Superior optical and electrical properties are observed due to the large-size, highly-uniform nature of these networks. When applying the nanowire networks on the solar cells with an optimized two-step annealing process, we achieved as large as 19% enhancement on the energy conversion efficiency. The detailed analysis reveals that the enhancement is mainly caused by the improved electrical properties of the solar cells due to the silver nanowire networks. Our result reveals that this technology is a promising alternative transparent electrode technology for crystalline silicon wafer solar cells.

  20. Effects of asymmetric rolling process on ridging resistance of ultra-purified 17%Cr ferritic stainless steel

    NASA Astrophysics Data System (ADS)

    Lu, Cheng-zhuang; Li, Jing-yuan; Fang, Zhi

    2018-02-01

    In ferritic stainless steels, a significant non-uniform recrystallization orientation and a substantial texture gradient usually occur, which can degrade the ridging resistance of the final sheets. To improve the homogeneity of the recrystallization orientation and reduce the texture gradient in ultra-purified 17%Cr ferritic stainless steel, in this work we performed conventional and asymmetric rolling processes and conducted macro- and micro-texture analyses to investigate texture evolution under different cold-rolling conditions. In the conventionally rolled specimens, the deformation was not uniform in the thickness direction, whereas the asymmetrically rolled specimens showed homogeneous shear deformation as well as the formation of uniform recrystallized grains with random orientations in the final annealed sheets. As such, the ridging resistance of the final sheets was significantly improved by employing the asymmetric rolling process. This result indicates that the texture gradient and orientation inhomogeneity can be attributed to non-uniform deformation, whereas the uniform orientation gradient in the thickness direction is explained by the increased number of shear bands obtained in the asymmetric rolling process.

  1. A Random Variable Related to the Inversion Vector of a Partial Random Permutation

    ERIC Educational Resources Information Center

    Laghate, Kavita; Deshpande, M. N.

    2005-01-01

    In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.
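
    A small sketch of the inversion vector itself and the sum of its entries; the article's specific 'partial random permutation' construction is not spelled out above, so the example simply uses an ordinary uniform random permutation and one common convention for the inversion vector, both stated as assumptions.

        # Inversion vector of a permutation and the sum of its entries, using an
        # ordinary uniform random permutation and one common convention for the
        # inversion vector (the article's partial-random-permutation construction
        # is not reproduced here).
        import random

        def inversion_vector(perm):
            # entry for value v counts how many larger elements precede v in perm
            inv = [0] * len(perm)
            for idx, value in enumerate(perm):
                inv[value - 1] = sum(1 for other in perm[:idx] if other > value)
            return inv

        rng = random.Random(42)
        perm = list(range(1, 9))
        rng.shuffle(perm)
        inv = inversion_vector(perm)
        print(perm, inv, sum(inv))  # the sum equals the total number of inversions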

  2. A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables

    ERIC Educational Resources Information Center

    Vernizzi, Graziano; Nakai, Miki

    2015-01-01

    It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…

  3. Dose Uniformity of Scored and Unscored Tablets: Application of the FDA Tablet Scoring Guidance for Industry.

    PubMed

    Ciavarella, Anthony B; Khan, Mansoor A; Gupta, Abhay; Faustino, Patrick J

    This U.S. Food and Drug Administration (FDA) laboratory study examines the impact of tablet splitting, the effect of tablet splitters, and the presence of a tablet score on the dose uniformity of two model drugs. Whole tablets were purchased from five manufacturers for amlodipine and six for gabapentin. Two splitters were used for each drug product, and the gabapentin tablets were also split by hand. Whole and split amlodipine tablets were tested for content uniformity following the general chapter of the United States Pharmacopeia (USP) Uniformity of Dosage Units <905>, which is a requirement of the new FDA Guidance for Industry on tablet scoring. The USP weight variation method was used for gabapentin split tablets based on the recommendation of the guidance. All whole tablets met the USP acceptance criteria for the Uniformity of Dosage Units. Variation in whole tablet content ranged from 0.5 to 2.1 standard deviation (SD) of the percent label claim. Splitting the unscored amlodipine tablets resulted in a significant increase in dose variability of 6.5-25.4 SD when compared to whole tablets. Split tablets from all amlodipine drug products did not meet the USP acceptance criteria for content uniformity. Variation in the weight for gabapentin split tablets was greater than the whole tablets, ranging from 1.3 to 9.3 SD. All fully scored gabapentin products met the USP acceptance criteria for weight variation. Size, shape, and the presence or absence of a tablet score can affect the content uniformity and weight variation of amlodipine and gabapentin tablets. Tablet splitting produced higher variability. Differences in dose variability and fragmentation were observed between tablet splitters and hand splitting. These results are consistent with the FDA's concerns that tablet splitting can have an effect on the amount of drug present in a split tablet and available for absorption. Tablet splitting has become a very common practice in the United States and throughout the world. Tablets are often split to modify dose strength, make swallowing easier, and reduce cost to the consumer. To better address product quality for this widely used practice, the U.S. Food and Drug Administration (FDA) published a Guidance for Industry that addresses tablet splitting. The guidance provides testing criteria for scored tablets, which is a part of the FDA review process for drugs. The model drugs selected for this study were amlodipine and gabapentin, which have different sizes, shapes, and tablet scores. Whole and split amlodipine tablets were tested for drug content because of a concern that the low-dose strength may cause greater variability. Whole and split gabapentin tablets were tested for weight variation because of their higher dosage strength of 600 mg. All whole tablets met the acceptance criteria for the Uniformity of Dosage Units based on the guidance recommendations. When unscored amlodipine tablets were split by a splitter, all formulations did not meet the acceptance criteria. When fully scored gabapentin tablets were split by hand and by splitter, they met the acceptance criteria. The findings of this FDA study indicated physical characteristics such as size, shape, and tablet score can affect the uniformity of split tablets. © PDA, Inc. 2016.
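
    For orientation, a sketch of the USP <905> content-uniformity acceptance value as it is commonly summarized for a first-stage sample of 10 units with a 100% label-claim target: AV = |M − x̄| + k·s with k = 2.4 and acceptance when AV ≤ L1 = 15.0. The data are made up, and the chapter's second stage (n = 30) and other special cases are omitted, so treat this purely as an illustration rather than the compendial procedure.

        # Sketch of the first-stage USP <905> acceptance value, AV = |M - xbar| + k*s,
        # with k = 2.4 for n = 10 and acceptance when AV <= L1 = 15.0, as the chapter
        # is commonly summarized for a 100% label-claim target. Data are made up; the
        # second stage (n = 30) and other special cases are omitted.
        import statistics

        def acceptance_value(percent_label_claim, k=2.4):
            xbar = statistics.mean(percent_label_claim)
            s = statistics.stdev(percent_label_claim)
            if xbar < 98.5:
                m = 98.5
            elif xbar > 101.5:
                m = 101.5
            else:
                m = xbar
            return abs(m - xbar) + k * s

        # hypothetical assay results for 10 split-tablet halves, in % of label claim
        halves = [96.2, 104.8, 91.5, 108.3, 99.0, 101.1, 93.7, 106.4, 97.8, 102.2]
        av = acceptance_value(halves)
        print(round(av, 1), "pass" if av <= 15.0 else "fail")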

  4. Narrow linewidth short cavity Brillouin random laser based on Bragg grating array fiber and dynamical population inversion gratings

    NASA Astrophysics Data System (ADS)

    Popov, S. M.; Butov, O. V.; Chamorovski, Y. K.; Isaev, V. A.; Mégret, P.; Korobko, D. A.; Zolotovskii, I. O.; Fotiadi, A. A.

    2018-06-01

    We report on random lasing observed in a 100-m-long fiber comprising an array of weak FBGs inscribed in the fiber core and uniformly distributed over the fiber length. Extended fluctuation-free oscilloscope traces highlight power dynamics typical of lasing. An additional piece of Er-doped fiber included in the laser cavity enables stable laser generation with a linewidth narrower than 10 kHz.

  5. Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks

    DTIC Science & Technology

    2006-09-01

    ... time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until ... expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical ... random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS: We model the process of track-before-detect by ...

  6. The Chaotic Light Curves of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Kazanas, Demosthenes

    2007-01-01

    We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process in converting these soft photons to the observed high energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding Power Spectral Densities (PSD), autocorrelation functions, time skewness of the light curves and time lags between the light curves of the sources at different photon energies and compare our results to observation. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifest by the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales ≳ a few seconds, while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model along with spectral and phase lag information can be used to probe the structure of this class of high energy sources.

  7. Solute plumes mean velocity in aquifer transport: Impact of injection and detection modes

    NASA Astrophysics Data System (ADS)

    Dagan, Gedeon

    2017-08-01

    Flow of mean velocity U takes place in a heterogeneous aquifer of random spatially variable conductivity K. A solute plume is injected instantaneously along a plane normal to U, over a large area relative to the logconductivity integral scale I (ergodic plume). Transport is by advection by the spatially variable Eulerian velocity. The study is focused on the derivation of the mean plume velocity in the four modes set forth by Kreft and Zuber [1978] for one-dimensional flow in a homogeneous medium. In the resident injection mode the mass is initially distributed uniformly in space, while in the flux mode it is proportional to the local velocity. In the resident detection mode the mean velocity pertains to the plume centroid, whereas in flux detection it is quantified with the aid of the BTC and the corresponding mean arrival time. In agreement with the literature, it is shown that U_RR and U_FF, pertaining to the same injection and detection modes, either resident or flux, are equal to U. In contrast, in the mixed modes the solute velocity may differ significantly from U near the injection plane, approaching it at large distances relative to I. These effects are explained qualitatively with the aid of the exact solution for stratified aquifers.

  8. [Prediction model of health workforce and beds in county hospitals of Hunan by multiple linear regression].

    PubMed

    Ling, Ru; Liu, Jiawang

    2011-12-01

    To construct a prediction model for health workforce and hospital beds in county hospitals of Hunan by multiple linear regression. We surveyed 16 counties in Hunan with stratified random sampling according to uniform questionnaires, and performed multiple linear regression analysis with 20 indicators selected by literature review. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10 000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction models show good explanatory power and fit, and may be used for short- and mid-term forecasting.
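
    A minimal sketch of a multiple linear regression of this kind with statsmodels; the synthetic data, coefficients, and column names are hypothetical stand-ins for a few of the predictors listed above, not the surveyed Hunan data.

        # Hypothetical multiple linear regression sketch with statsmodels; synthetic
        # data and column names stand in for a few of the predictors listed above.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 64
        df = pd.DataFrame({
            "urban_income": rng.uniform(15, 40, n),        # arbitrary units
            "medical_beds": rng.integers(200, 1200, n),
            "outpatient_visits": rng.uniform(50, 400, n),
            "bed_utilization": rng.uniform(0.6, 1.0, n),
        })
        df["medical_personnel"] = (
            2.0 * df["urban_income"]
            + 0.5 * df["medical_beds"]
            + 0.3 * df["outpatient_visits"]
            + 100.0 * df["bed_utilization"]
            + rng.normal(0.0, 30.0, n)
        )

        model = smf.ols(
            "medical_personnel ~ urban_income + medical_beds"
            " + outpatient_visits + bed_utilization",
            data=df,
        ).fit()
        print(model.params)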

  9. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
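
    As a point of contrast, a naive way to generate a quasi-order on a small item set is to draw a random relation, add the diagonal, and take the transitive closure; samplers of this kind are exactly the sort of biased baseline the article's doubly inductive procedure is designed to improve upon. The sketch below implements only that naive baseline, not the paper's algorithm.

        # Naive baseline sampler for a quasi-order (reflexive + transitive relation)
        # on a small item set: draw a random relation, add the diagonal, and take the
        # transitive closure. This is NOT the paper's doubly inductive procedure;
        # such naive samplers are biased, which is the problem the article addresses.
        import random

        def naive_random_quasi_order(n_items, edge_prob=0.2, seed=None):
            rng = random.Random(seed)
            rel = {(i, i) for i in range(n_items)}                      # reflexivity
            rel |= {(i, j) for i in range(n_items) for j in range(n_items)
                    if i != j and rng.random() < edge_prob}
            for k in range(n_items):                                    # Warshall closure
                for i in range(n_items):
                    if (i, k) in rel:
                        for j in range(n_items):
                            if (k, j) in rel:
                                rel.add((i, j))
            return rel

        print(sorted(naive_random_quasi_order(6, seed=3)))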

  10. Toward a Principled Sampling Theory for Quasi-Orders.

    PubMed

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.

  11. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  12. Shade tree spatial structure and pod production explain frosty pod rot intensity in cacao agroforests, Costa Rica.

    PubMed

    Gidoin, Cynthia; Avelino, Jacques; Deheuvels, Olivier; Cilas, Christian; Bieng, Marie Ange Ngo

    2014-03-01

    Vegetation composition and plant spatial structure affect disease intensity through resource and microclimatic variation effects. The aim of this study was to evaluate the independent effect and relative importance of host composition and plant spatial structure variables in explaining disease intensity at the plot scale. For that purpose, frosty pod rot intensity, a disease caused by Moniliophthora roreri on cacao pods, was monitored in 36 cacao agroforests in Costa Rica in order to assess the vegetation composition and spatial structure variables conducive to the disease. Hierarchical partitioning was used to identify the most causal factors. Firstly, pod production, cacao tree density and shade tree spatial structure had significant independent effects on disease intensity. In our case study, the amount of susceptible tissue was the most relevant host composition variable for explaining disease intensity by resource dilution. Indeed, cacao tree density probably affected disease intensity more by the creation of self-shading rather than by host dilution. Lastly, only regularly distributed forest trees, and not aggregated or randomly distributed forest trees, reduced disease intensity in comparison to plots with a low forest tree density. A regular spatial structure is probably crucial to the creation of moderate and uniform shade as recommended for frosty pod rot management. As pod production is an important service expected from these agroforests, shade tree spatial structure may be a lever for integrated management of frosty pod rot in cacao agroforests.

  13. On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density

    NASA Astrophysics Data System (ADS)

    Dorini, F. A.; Cecconello, M. S.; Dorini, L. B.

    2016-04-01

    It is recognized that handling uncertainty is essential to obtain more reliable results in modeling and computer simulation. This paper aims to discuss the logistic equation subject to uncertainties in two parameters: the environmental carrying capacity, K, and the initial population density, N0. We first provide the closed-form results for the first probability density function of time-population density, N(t), and its inflection point, t*. We then use the Maximum Entropy Principle to determine both K and N0 density functions, treating such parameters as independent random variables and considering fluctuations of their values for a situation that commonly occurs in practice. Finally, closed-form results for the density functions and statistical moments of N(t), for a fixed t > 0, and of t* are provided, considering the uniform distribution case. We carried out numerical experiments to validate the theoretical results and compared them against that obtained using Monte Carlo simulation.
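
    A Monte Carlo sketch of the setting described above, using the closed-form logistic solution N(t) = K·N0 / (N0 + (K − N0)·e^(−rt)) with K and N0 drawn from uniform distributions; the growth rate and parameter ranges are arbitrary illustrative choices, and the simulation only mimics the kind of check the authors use to validate their closed-form results.

        # Monte Carlo sketch for logistic growth with uniformly distributed carrying
        # capacity K and initial density N0, using the closed-form logistic solution.
        # The growth rate r and parameter ranges are arbitrary illustrative choices.
        import math
        import random
        import statistics

        def logistic(t, r, K, N0):
            return K * N0 / (N0 + (K - N0) * math.exp(-r * t))

        def monte_carlo_moments(t, r=0.5, K_range=(80.0, 120.0), N0_range=(5.0, 15.0),
                                n_samples=50_000, seed=0):
            rng = random.Random(seed)
            samples = [logistic(t, r, rng.uniform(*K_range), rng.uniform(*N0_range))
                       for _ in range(n_samples)]
            return statistics.mean(samples), statistics.variance(samples)

        print(monte_carlo_moments(t=5.0))  # mean and variance of N(5)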

  14. A Q-GERT Model for Determining the Maintenance Crew Size for the SAC command Post Upgrade

    DTIC Science & Technology

    1983-12-01

    ... time that an equipment fails. DAY3: a real variable corresponding to the day that an LRU is removed from the equipment. DAY4: a real variable ... variable corresponding to the time that an LRU is repaired. TIM5: a real variable corresponding to the time that an equipment returns to service. TNOW: the current time. UF(IFN): user function IFN. UN(I): a sample from the uniform distribution defined by parameter set I. YIlN1: a real variable ...

  15. Variable Stiffness Panel Structural Analyses With Material Nonlinearity and Correlation With Tests

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Gurdal, Zafer

    2006-01-01

    Results from structural analyses of three tow-placed AS4/977-3 composite panels with both geometric and material nonlinearities are presented. Two of the panels have variable stiffness layups where the fiber orientation angle varies as a continuous function of location on the panel planform. One variable stiffness panel has overlapping tow bands of varying thickness, while the other has a theoretically uniform thickness. The third panel has a conventional uniform-thickness [plus or minus 45](sub 5s) layup with straight fibers, providing a baseline for comparing the performance of the variable stiffness panels. Parametric finite element analyses including nonlinear material shear are first compared with material characterization test results for two orthotropic layups. This nonlinear material model is incorporated into structural analysis models of the variable stiffness and baseline panels with applied end shortenings. Measured geometric imperfections and mechanical prestresses, generated by forcing the variable stiffness panels from their cured anticlastic shapes into their flatter test configurations, are also modeled. Results of these structural analyses are then compared to the measured panel structural response. Good correlation is observed between the analysis results and displacement test data throughout deep postbuckling up to global failure, suggesting that nonlinear material behavior is an important component of the actual panel structural response.

  16. Method for curing polymers using variable-frequency microwave heating

    DOEpatents

    Lauf, R.J.; Bible, D.W.; Paulauskas, F.L.

    1998-02-24

    A method for curing polymers incorporating a variable frequency microwave furnace system designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity is disclosed. By varying the frequency of the microwave signal, non-uniformities within the cavity are minimized, thereby achieving a more uniform cure throughout the workpiece. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. The furnace cavity may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing. 15 figs.

  17. Structure and dynamics of an upland old- growth forest at Redwood National Park, California

    USGS Publications Warehouse

    van Mantgem, Philip J.; Stuart, John D.

    2011-01-01

    Many current redwood forest management targets are based on old-growth conditions, so it is critical that we understand the variability and range of conditions that constitute these forests. Here we present information on the structure and dynamics from six one-hectare forest monitoring plots in an upland old-growth forest at Redwood National Park, California. We surveyed all stems ≥20 cm DBH in 1995 and 2010, allowing us to estimate any systematic changes in these stands. Stem size distributions for all species and for redwood (Sequoia sempervirens (D. Don) Endl.) alone did not appreciably change over the 15 year observation interval. Recruitment and mortality rates were roughly balanced, as were basal area dynamics (gains from recruitment and growth versus losses from mortality). Similar patterns were found for Sequoia alone. The spatial structure of stems at the plots suggested a random distribution of trees, though the pattern for Sequoia alone was found to be significantly clumped at small scales (< 5 m) at three of the six plots. These results suggest that these forests, including populations of Sequoia, have been generally stable over the past 15 years at this site, though it is possible that fire exclusion may be affecting recruitment of smaller Sequoia (< 20 cm DBH). The non-uniform spatial arrangement of stems also suggests that restoration prescriptions for second-growth redwood forests that encourage uniform spatial arrangements do not appear to mimic current upland old-growth conditions.

  18. Borehole measurement of the hydraulic properties of low-permeability rock

    NASA Astrophysics Data System (ADS)

    Novakowski, Kentner S.; Bickerton, Gregory S.

    1997-11-01

    Hydraulic tests conducted in low-permeability media are subject to numerous influences and processes, many of which manifest in a nonunique fashion. To explore the accuracy and meaning of the interpretation of hydraulic tests conducted under such conditions, two semianalytical models are developed in which variable well bore storage, variable temperature, and test method are considered. The formation is assumed to be of uniform permeability and uniform storativity in both models. To investigate uncertainty in the use of these models, a comparison is conducted to similar models that account for nonuniform formation properties such as finite skin, double porosity, and fractional flow. Using the models for a finite skin and double porosity as baseline cases, results show that the interpretation of slug tests are normally nonunique when tests are conducted in material of low permeability. Provided that a lower bound is defined for storativity, the uncertainty in a given interpretation conducted with the model for a uniform medium can be established by comparison with a fit to the data obtained using the model incorporating finite skin. It was also found that the degree of uncertainty can be diminished by conducting the test using an open hole period followed by a shut-in period (similar to a drill stem test). Determination of the degree of uncertainty was found to be case specific and must be defined by using at least a comparison between the model for uniform media and that for finite skin. To illustrate the use of the slug test model and determine the degree of uncertainty that will accrue with the use of that model, a field example, potentially influenced by variable well bore storage, is presented and interpreted.

  19. Anticipated improvement in laser beam uniformity using distributed phase plates with quasirandom patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epstein, R.; Skupsky, S.

    1990-08-01

    The uniformity of focused laser beams that has been modified with randomly phased distributed phase plates (C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Kato and Mima, Appl. Phys. B 29, 186 (1982); Kato et al., Phys. Rev. Lett. 53, 1057 (1984); LLE Rev. 33, 1 (1987)) can be improved further by constructing patterns of phase elements which minimize phase correlations over small separations. Long-wavelength nonuniformities in the intensity distribution, which are relatively difficult to overcome in the target by thermal smoothing and in the laser by, e.g., spectral dispersion (Skupsky et al., J. Appl. Phys. 66, 3456 (1989); LLE Rev. 36, 158 (1989); 37, 29 (1989); 37, 40 (1989)), result largely from short-range phase correlations between phase plate elements. To reduce the long-wavelength structure, we have constructed phase patterns with smaller short-range correlations than would occur randomly. Calculations show that long-wavelength nonuniformities in single-beam intensity patterns can be reduced with these masks when the intrinsic phase error of the beam falls below certain limits. We show the effect of this improvement on uniformity for spherical irradiation by a multibeam system.

  20. In Darwinian evolution, feedback from natural selection leads to biased mutations.

    PubMed

    Caporale, Lynn Helena; Doyle, John

    2013-12-01

    Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context).  This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.

  1. Kansas Adult Observational Safety Belt Usage Rates

    DOT National Transportation Integrated Search

    2011-07-01

    Methodology of Adult Survey - based on the federal guidelines in the Uniform Criteria manual. The Kansas survey is performed at 548 sites on 6 different road types in 20 randomly selected counties which encompass 85% of the population of Kansas. The ...

  2. Analysis of Basis Weight Uniformity of Microfiber Nonwovens and Its Impact on Permeability and Filtration Properties

    NASA Astrophysics Data System (ADS)

    Amirnasr, Elham

    It is widely recognized that nonwoven basis weight non-uniformity affects various properties of nonwovens; however, few studies can be found on this topic. The development of uniformity definitions and measurement methods, and the study of their impact on web properties such as filtration performance and air permeability, would be beneficial both in industrial applications and in academia: they can serve as quality control tools and provide insights into nonwoven behaviors that cannot be explained by average values alone. We therefore pursue the development of an optical analytical tool for quantifying nonwoven web basis weight uniformity. The quadrant method and clustering analysis were utilized in an image analysis scheme to help define "uniformity" and its spatial variation. Implementing the quadrant method in an image analysis system allows the establishment of a uniformity index that can be used to quantify the degree of uniformity. Clustering analysis was also modified and verified using uniform and random simulated images with known parameters, and the number of clusters and cluster properties such as cluster size, membership, and density were determined. We also utilized this new measurement method to evaluate the uniformity of nonwovens produced with different processes and investigated the impacts of uniformity on filtration and permeability. The results show that the uniformity index computed from the quadrant method provides a good range for describing the non-uniformity of nonwoven webs. Clustering analysis was also applied to reference nonwovens with known visual uniformity; among the cluster properties, cluster size is the most promising uniformity parameter, as non-uniform nonwovens were shown to produce larger cluster sizes than uniform ones. To relate web properties to the uniformity index as a web characteristic, the filtration properties, air permeability, solidity, and uniformity index of meltblown and spunbond samples were measured. The filtration tests show some deviation between theoretical and experimental filtration efficiency when different definitions of fiber diameter are considered; this deviation can arise from variation in basis weight non-uniformity, so an appropriate theory is required to predict the variation of filtration efficiency with respect to the non-uniformity of nonwoven filter media. The air permeability tests showed that the uniformity index determined by the quadrant method and the measured properties are related; in other words, air permeability decreases as the uniformity index of the nonwoven web increases.
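
    A rough sketch of a quadrant-style uniformity measure: split a two-dimensional basis-weight map into equal quadrats and compute the coefficient of variation of the quadrat means. The exact uniformity-index formula used in the work above is not reproduced here, so this stand-in definition and the synthetic 'webs' are assumptions for illustration only.

        # Quadrant-style uniformity sketch: split a 2-D basis-weight map into equal
        # quadrats and report the coefficient of variation of the quadrat means
        # (lower = more uniform). The index definition and synthetic "webs" are
        # illustrative assumptions, not the thesis's exact formulation.
        import numpy as np

        def quadrant_uniformity_index(image, n_rows=8, n_cols=8):
            h, w = image.shape
            qh, qw = h // n_rows, w // n_cols
            means = np.array([
                image[i * qh:(i + 1) * qh, j * qw:(j + 1) * qw].mean()
                for i in range(n_rows) for j in range(n_cols)
            ])
            return means.std() / means.mean()

        rng = np.random.default_rng(0)
        uniform_web = rng.normal(100.0, 2.0, size=(256, 256))      # fine-scale noise only
        x = np.linspace(0.0, 1.0, 256)
        drifting_web = uniform_web + 10.0 * np.outer(x, x)         # added low-frequency drift
        print(quadrant_uniformity_index(uniform_web),
              quadrant_uniformity_index(drifting_web))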

  3. Qualitatively Assessing Randomness in SVD Results

    NASA Astrophysics Data System (ADS)

    Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.

    2012-12-01

    Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application for SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of random co-variance relationships being identified. On the other hand, a climate region that is too small may ignore climate signals which have more than a statistical relationship to a hydrologic response variable. The proposed research seeks to identify a qualitative method of identifying random co-variability relationships between two data sets. The research identifies the heterogeneous correlation maps from several past results and compares these results with correlation maps produced using purely random and quasi-random climate data. The comparison identifies a methodology to determine if a particular region on a correlation map may be explained by a physical mechanism or is simply statistical chance.
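
    A compact sketch of the SVD (maximum covariance) analysis described above on synthetic data: the SVD of the cross-covariance matrix between two anomaly fields yields paired spatial patterns, the leading expansion coefficients are obtained by projection, and a heterogeneous correlation map follows by correlating one field's expansion coefficient with the grid points of the other. The field sizes and random data are placeholders.

        # SVD (maximum covariance) analysis sketch on synthetic anomaly fields.
        import numpy as np

        rng = np.random.default_rng(0)
        n_time, n_x, n_y = 60, 40, 25                  # e.g. years x grid points
        X = rng.standard_normal((n_time, n_x))         # climate field anomalies
        Y = rng.standard_normal((n_time, n_y))         # hydrologic response anomalies
        X -= X.mean(axis=0)
        Y -= Y.mean(axis=0)

        C = X.T @ Y / (n_time - 1)                     # cross-covariance matrix
        U, s, Vt = np.linalg.svd(C, full_matrices=False)

        a1 = X @ U[:, 0]                               # expansion coefficient, mode 1 of X
        scf1 = s[0] ** 2 / np.sum(s ** 2)              # squared covariance fraction, mode 1
        # heterogeneous correlation map: correlate a1 with each grid point of Y
        het_corr = np.array([np.corrcoef(a1, Y[:, j])[0, 1] for j in range(n_y)])
        print(round(scf1, 3), het_corr.round(2))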

  4. Second thoughts on the final rule: An analysis of baseline participant characteristics reports on ClinicalTrials.gov.

    PubMed

    Cahan, Amos; Anand, Vibha

    2017-01-01

    ClinicalTrials.gov is valuable for aggregate-level analysis of trials. The recently published final rule aims to improve reporting of trial results. We aimed to assess variability in ClinicalTrials.gov records reporting participants' baseline measures. The September 2015 edition of the database for Aggregate Analysis of ClinicalTrials.gov (AACT) was used in this study. To date, AACT contains 186,941 trials, of which 16,660 trials reporting baseline (participant) measures were analyzed. We also analyzed a subset of 13,818 Highly Likely Applicable Clinical Trials (HLACT), for which reporting of results is likely mandatory, and compared a random sample of 30 trial records to their journal articles. We report counts for each mandatory baseline measure and the variability in their reporting formats. The AACT dataset contains 8,161 baseline measures with 1,206 unique measurement units. However, of these, 6,940 (85%) variables appear only once in the dataset. Age and Gender are reported using many different formats (178 and 49, respectively); "Age" as the variable name is reported in 60 different formats. The HLACT subset reports measures using 3,931 variables. The most frequent Age format (i.e., mean (years) ± sd) is found in only 45% of trials. Overall, only 4 baseline measures (Region of Enrollment, Age, Number of Participants, and Gender) are reported by > 10% of trials. Discrepancies are found in both the types and formats of ClinicalTrials.gov records and their corresponding journal articles. On average, journal articles include twice the number of baseline measures (13.6±7.1 (sd) vs. 6.6±7.6) when compared to the ClinicalTrials.gov records that report any results. We found marked variability in baseline measures reporting. This is not addressed by the final rule. To support secondary use of ClinicalTrials.gov, a uniform format for baseline measures reporting is warranted.

  5. Long-lasting permethrin impregnated uniforms: A randomized-controlled trial for tick bite prevention.

    PubMed

    Vaughn, Meagan F; Funkhouser, Sheana Whelan; Lin, Feng-Chang; Fine, Jason; Juliano, Jonathan J; Apperson, Charles S; Meshnick, Steven R

    2014-05-01

    Because of frequent exposure to tick habitats, outdoor workers are at high risk for tick-borne diseases. Adherence to National Institute for Occupational Safety and Health-recommended tick bite prevention methods is poor. A factory-based method for permethrin impregnation of clothing that provides long-lasting insecticidal and repellent activity is commercially available, and studies are needed to assess the long-term effectiveness of this clothing under field conditions. To evaluate the protective effectiveness of long-lasting permethrin impregnated uniforms among a cohort of North Carolina outdoor workers. A double-blind RCT was conducted between March 2011 and September 2012. Subjects included outdoor workers from North Carolina State Divisions of Forestry, Parks and Recreation, and Wildlife who worked in eastern or central North Carolina. A total of 159 volunteer subjects were randomized, and 127 and 101 subjects completed the first and second years of follow-up, respectively. Uniforms of participants in the treatment group were factory-impregnated with long-lasting permethrin whereas control group uniforms received a sham treatment. Participants continued to engage in their usual tick bite prevention activities. Incidence of work-related tick bites reported on weekly tick bite logs. Study subjects reported 1,045 work-related tick bites over 5,251 person-weeks of follow-up. The mean number of reported tick bites in the year prior to enrollment was similar for both the treatment and control groups, but markedly different during the study period. In our analysis conducted in 2013, the effectiveness of long-lasting permethrin impregnated uniforms for the prevention of work-related tick bites was 0.82 (95% CI=0.66, 0.91) and 0.34 (95% CI=-0.67, 0.74) for the first and second years of follow-up. These results indicate that long-lasting permethrin impregnated uniforms are highly effective for at least 1 year in deterring tick bites in the context of typical tick bite prevention measures employed by outdoor workers. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  6. Quasirandom geometric networks from low-discrepancy sequences

    NASA Astrophysics Data System (ADS)

    Estrada, Ernesto

    2017-08-01

    We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in the d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that slow the process down. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
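
    A minimal sketch of the construction described above, assuming scipy (for its scipy.stats.qmc low-discrepancy generators) and networkx are available; the number of vertices and the connection radius are illustrative choices, not values from the paper.

        import numpy as np
        import networkx as nx
        from scipy.stats import qmc

        def geometric_graph(points, r):
            # Connect every pair of vertices closer than the connection radius r.
            g = nx.Graph()
            g.add_nodes_from(range(len(points)))
            for i in range(len(points)):
                for j in range(i + 1, len(points)):
                    if np.linalg.norm(points[i] - points[j]) < r:
                        g.add_edge(i, j)
            return g

        n, r = 500, 0.08
        halton_pts = qmc.Halton(d=2, scramble=True, seed=1).random(n)  # low-discrepancy vertices
        random_pts = np.random.default_rng(1).random((n, 2))           # i.i.d. uniform vertices

        g_quasi = geometric_graph(halton_pts, r)
        g_rand = geometric_graph(random_pts, r)
        print("quasirandom mean degree:", 2 * g_quasi.number_of_edges() / n)
        print("random mean degree:     ", 2 * g_rand.number_of_edges() / n)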

  7. Simultaneous recording of t-tubular electrical activity and Ca2+-release in heart failure

    NASA Astrophysics Data System (ADS)

    Crocini, C.; Coppini, R.; Ferrantini, C.; Yan, P.; Loew, L.; Tesi, C.; Poggesi, C.; Cerbai, E.; Pavone, F. S.; Sacconi, L.

    2014-05-01

    T-tubules (TT) are invaginations of the surface sarcolemma (SS) that mediate the rapid propagation of the action potential (AP) to the cardiomyocyte core. We employed the advantages of an ultrafast random access multi-photon (RAMP) microscope (Sacconi et al., PNAS 2012) with a double staining approach to optically record t-tubular AP and, simultaneously, the corresponding local Ca2+-release in different positions across the cardiomyocytes. Despite a uniform AP between SS and TT at steady-state stimulation, in control cardiomyocytes we observed non-negligible variability of local Ca2+-transient amplitude and kinetics. This variability was significantly reduced by applying 0.1μM Isoproterenol, which increases the opening probability of Ca2+-release units. In the rat heart failure model (HF), we previously demonstrated that some tubular elements fail to propagate AP. We found that the tubules unable to propagate AP displayed a reduced corresponding Ca2+-transient amplitude as well as a slower Ca2+ rise compared to electrically coupled tubules. Moreover, the variability of Ca2+-transient kinetics was increased in HF. Finally, TT that did not show AP occasionally exhibited spontaneous depolarizations that were never accompanied by local Ca2+-release in the absence of any pro-arrhythmogenic stimulation. Simultaneous recording of AP and Ca2+-transient allows us to probe the spatio-temporal variability of Ca2+-release, whereas the investigation of Ca2+-transient in HF discloses an unexpected uncoupling between t-tubular depolarization and Ca2+-release in remodeled tubules. This work was funded by the European Union 7th Framework Program (FP7/2007- 2013) under grant agreement n° 284464, 241526, by the Italian Ministry of University and Research (NANOMAX), and by Telethon-Italy (GGP13162).

  8. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  9. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.

  10. Increased uniformity by planting clones will likely have a minimal effect on inventory costs

    Treesearch

    Curtis L. VanderSchaaf; Dean W. Coble; David B. South

    2012-01-01

    When conducting inventories, reducing variability among tree diameters, heights, and ultimately volumes or biomass, can reduce the number of points/plots needed to obtain a desired level of precision. We present a simple analysis examining the potential reduction in discounted inventory costs when stand variability is decreased (via improved genetics and intensive...

  11. Effects of fixture rotation on coating uniformity for high-performance optical filter fabrication

    NASA Astrophysics Data System (ADS)

    Rubin, Binyamin; George, Jason; Singhal, Riju

    2018-04-01

    Coating uniformity is critical in fabricating high-performance optical filters by various vacuum deposition methods. Simple and planetary rotation systems with shadow masks are used to achieve the required uniformity [J. B. Oliver and D. Talbot, Appl. Optics 45, 13, 3097 (2006); O. Lyngnes, K. Kraus, A. Ode and T. Erguder, in `Method for Designing Coating Thickness Uniformity Shadow Masks for Deposition Systems with a Planetary Fixture', 2014 Technical Conference Proceedings, Optical Coatings, August 13, 2014, DOI: 10.14332/svc14.proc.1817.]. In this work, we discuss the effect of rotation pattern and speed on thickness uniformity in an ion beam sputter deposition system. Numerical modeling is used to determine the statistical distribution of random thickness errors in coating layers. The relationship between thickness tolerance and production yield is simulated theoretically and demonstrated experimentally. Production yields for different optical filters produced in an ion beam deposition system with planetary rotation are presented. Single-wavelength and broadband optical monitoring systems were used for endpoint monitoring during filter deposition. Limitations of the thickness tolerances that can be achieved in systems with planetary rotation are shown. Paths for improving production yield in an ion beam deposition system are described.

  12. Boundaries, kinetic properties, and final domain structure of plane discrete uniform Poisson-Voronoi tessellations with von Neumann neighborhoods.

    PubMed

    Korobov, A

    2009-03-01

    Discrete random tessellations appear not infrequently in describing nucleation and growth transformations. Generally, several non-Euclidean metrics are possible in this case. Previously [A. Korobov, Phys. Rev. B 76, 085430 (2007)] continual analogs of such tessellations have been studied. Here one of the simplest discrete varieties of the Kolmogorov-Johnson-Mehl-Avrami model, namely, the model with von Neumann neighborhoods, has been examined per se, i.e., without continualization. The tessellation is uniform in the sense that domain boundaries consist of tiles. Similarities and distinctions between discrete and continual models are discussed.

  13. Method for curing polymers using variable-frequency microwave heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lauf, R.J.; Bible, D.W.; Paulauskas, F.L.

    1998-02-24

    A method for curing polymers incorporating a variable frequency microwave furnace system designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity is disclosed. By varying the frequency of the microwave signal, non-uniformities within the cavity are minimized, thereby achieving a more uniform cure throughout the workpiece. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. The furnace cavity may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing. 15 figs.

  14. Method for curing polymers using variable-frequency microwave heating

    DOEpatents

    Lauf, Robert J.; Bible, Don W.; Paulauskas, Felix L.

    1998-01-01

    A method for curing polymers (11) incorporating a variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34). By varying the frequency of the microwave signal, non-uniformities within the cavity (34) are minimized, thereby achieving a more uniform cure throughout the workpiece (36). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. The furnace cavity (34) may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing.

  15. Randomized pilot trial of a cognitive-behavioral alcohol, self-harm, and HIV prevention program for teens in mental health treatment.

    PubMed

    Esposito-Smythers, Christianne; Hadley, Wendy; Curby, Timothy W; Brown, Larry K

    2017-02-01

    Adolescents with mental health conditions represent a high-risk group for substance use, deliberate self-harm (DSH), and risky sexual behavior. Mental health treatment does not uniformly decrease these risks. Effective prevention efforts are needed to offset the developmental trajectory from mental health problems to these behaviors. This study tested an adjunctive cognitive-behavioral family-based alcohol, DSH, and HIV prevention program (ASH-P) for adolescents in mental healthcare. A two-group randomized design was used to compare ASH-P to an assessment-only control (AO-C). Participants included 81 adolescents and a parent. Assessments were completed at pre-intervention as well as 1, 6, and 12 months post-enrollment, and included measures of family-based mechanisms and high-risk behaviors. ASH-P relative to AO-C was associated with greater improvements in most family process variables (perceptions of communication and parental disapproval of alcohol use and sexual behavior) as well as less DSH and greater refusal of sex to avoid a sexually transmitted infection. It also had a moderate (but non-significant) effect on odds of binge drinking. No differences were found in suicidal ideation, alcohol use, or sexual intercourse. ASH-P showed initial promise in preventing multiple high-risk behaviors. Further testing of prevention protocols that target multiple high-risk behaviors in clinical samples is warranted. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    NASA Astrophysics Data System (ADS)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  17. Weighted re-randomization tests for minimization with unbalanced allocation.

    PubMed

    Han, Baoguang; Yu, Menggang; McEntegart, Damian

    2013-01-01

    The re-randomization test has been considered a robust alternative to the traditional population model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, which is a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, fixed-entry-order re-randomization is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and thus compromised in power. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Compliance-Effect Correlation Bias in Instrumental Variables Estimators

    ERIC Educational Resources Information Center

    Reardon, Sean F.

    2010-01-01

    Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…

  19. Dissolving variables in connectionist combinatory logic

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    A connectionist system which can represent and execute combinator expressions to elegantly solve the variable binding problem in connectionist networks is presented. This system is a graph reduction machine utilizing graph representations and traversal mechanisms similar to ones described in the BoltzCONS system of Touretzky (1986). It is shown that, as combinators eliminate variables by introducing special functions, these functions can be connectionistically implemented without reintroducing variable binding. This approach 'dissolves' an important part of the variable binding problem, in that a connectionist system still has to manipulate complex data structures, but those structures and their manipulations are rendered more uniform.

  20. Regularity of random attractors for fractional stochastic reaction-diffusion equations on R^n

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L^2(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  1. Explicit equilibria in a kinetic model of gambling

    NASA Astrophysics Data System (ADS)

    Bassetti, F.; Toscani, G.

    2010-06-01

    We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is up for gambling and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum which is shared between the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
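
    A rough Monte Carlo sketch (in Python, with arbitrary agent and interaction counts) of the pure-gambling process described above: two agents pool their wealth and a uniform random fraction goes to one of them, so the stationary wealth distribution should be close to exponential, with the sample variance approaching the squared mean.

        import numpy as np

        rng = np.random.default_rng(1)
        n_agents, n_steps = 5_000, 500_000
        w = np.ones(n_agents)                      # start with equal wealth, mean 1

        for _ in range(n_steps):
            i, j = rng.integers(n_agents, size=2)
            if i == j:
                continue
            total = w[i] + w[j]                    # the entire pooled sum is up for gambling
            share = rng.random()                   # uniform random fraction for agent i
            w[i], w[j] = share * total, (1.0 - share) * total

        # For an exponential distribution with mean 1, the variance is also 1.
        print("mean wealth:", w.mean(), "variance:", w.var())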

  2. A tuneable approach to uniform light distribution for artificial daylight photodynamic therapy.

    PubMed

    O'Mahoney, Paul; Haigh, Neil; Wood, Kenny; Brown, C Tom A; Ibbotson, Sally; Eadie, Ewan

    2018-06-16

    Implementation of daylight photodynamic therapy (dPDT) is somewhat limited by variable weather conditions. Light sources have been employed to provide artificial dPDT indoors, with low irradiances and longer treatment times. Uniform light distribution across the target area is key to ensuring effective treatment, particularly for large areas. A novel light source is developed with tuneable direction of light emission in order to meet this challenge. The wavelength composition of the novel light source is controlled such that the protoporphyrin-IX (PpIX) weighted spectra of both the light source and daylight match. The uniformity of the light source is characterised on a flat surface, a model head and a model leg. For context, a typical conventional PDT light source is also characterised. Additionally, the wavelength uniformity across the treatment site is characterised. The PpIX-weighted spectrum of the novel light source matches the PpIX-weighted daylight spectrum, with irradiance values within the bounds for effective dPDT. By tuning the direction of light emission, improvements are seen in the uniformity across large anatomical surfaces. Wavelength uniformity is discussed. We have developed a light source that addresses the challenges in uniform, multiwavelength light distribution for large area artificial dPDT across curved anatomical surfaces. Copyright © 2018. Published by Elsevier B.V.

  3. Reading sentences of uniform word length: Evidence for the adaptation of the preferred saccade length during reading.

    PubMed

    Cutter, Michael G; Drieghe, Denis; Liversedge, Simon P

    2017-11-01

    In the current study, the effect of removing word length variability within sentences on spatial aspects of eye movements during reading was investigated. Participants read sentences that were uniform in terms of word length, with each sentence consisting entirely of three-, four-, or five-letter words, or a combination of these word lengths. Several interesting findings emerged. Adaptation of the preferred saccade length occurred for sentences with different uniform word length; participants would be more accurate at making short saccades while reading uniform sentences of three-letter words, while they would be more accurate at making long saccades while reading uniform sentences of five-letter words. Furthermore, word skipping was affected such that three- and four-letter words were more likely, and five-letter words less likely, to be directly fixated in uniform compared to non-uniform sentences. It is argued that saccadic targeting during reading is highly adaptable and flexible toward the characteristics of the text currently being read, as opposed to the idea implemented in most current models of eye movement control during reading that readers develop a preference for making saccades of a certain length across a lifetime of experience with a given language. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
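
    A hedged sketch of one standard way to do this, not necessarily the routine described above: the Box-Muller transform turns pairs of uniform variates into independent standard normals, and a linear combination then imposes the desired means, standard deviations, and correlation coefficient.

        import numpy as np

        def bivariate_normal_pairs(n, mu1, mu2, sd1, sd2, rho, rng=None):
            rng = rng or np.random.default_rng()
            u1 = 1.0 - rng.random(n)               # uniform on (0, 1], avoids log(0)
            u2 = rng.random(n)
            # Box-Muller: two independent standard normal variates from two uniforms.
            z1 = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
            z2 = np.sqrt(-2.0 * np.log(u1)) * np.sin(2.0 * np.pi * u2)
            x = mu1 + sd1 * z1
            y = mu2 + sd2 * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)  # correlation rho with x
            return x, y

        x, y = bivariate_normal_pairs(100_000, 0.0, 5.0, 1.0, 2.0, 0.7, np.random.default_rng(42))
        print("sample correlation:", np.corrcoef(x, y)[0, 1])          # should be close to 0.7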

  5. Ranking and clustering of nodes in networks with smart teleportation

    NASA Astrophysics Data System (ADS)

    Lambiotte, R.; Rosvall, M.

    2012-05-01

    Random teleportation is a necessary evil for ranking and clustering directed networks based on random walks. Teleportation enables ergodic solutions, but the solutions must necessarily depend on the exact implementation and parametrization of the teleportation. For example, in the commonly used PageRank algorithm, the teleportation rate must trade off a heavily biased solution with a uniform solution. Here we show that teleportation to links rather than nodes enables a much smoother trade-off and effectively more robust results. We also show that, by not recording the teleportation steps of the random walker, we can further reduce the effect of teleportation with dramatic effects on clustering.
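
    An illustrative sketch only, not the authors' exact algorithm: a power-iteration PageRank in which the teleportation vector is proportional to each node's weighted in-link mass, i.e. the walker teleports to a randomly chosen link and lands on the node it points to; the paper's refinement of not recording teleportation steps is not reproduced here.

        import numpy as np

        def link_teleport_pagerank(adj, alpha=0.85, tol=1e-12, max_iter=1000):
            adj = np.asarray(adj, dtype=float)            # adj[i, j]: weight of the link i -> j
            n = adj.shape[0]
            outdeg = adj.sum(axis=1, keepdims=True)
            trans = np.zeros_like(adj)
            np.divide(adj, outdeg, out=trans, where=outdeg > 0)   # row-stochastic where possible
            v = adj.sum(axis=0)                           # teleport weight: total in-link weight
            v = v / v.sum()
            p = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                dangling = p[outdeg[:, 0] == 0].sum()     # mass sitting on nodes with no out-links
                p_new = alpha * (p @ trans + dangling * v) + (1.0 - alpha) * v
                if np.abs(p_new - p).sum() < tol:
                    break
                p = p_new
            return p

        adj = [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
        print(link_teleport_pagerank(adj))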

  6. Pellicle transmission uniformity requirements

    NASA Astrophysics Data System (ADS)

    Brown, Thomas L.; Ito, Kunihiro

    1998-12-01

    Controlling critical dimensions of devices is a constant battle for the photolithography engineer. Current DUV lithographic process exposure latitude is typically 12 to 15% of the total dose. A third of this exposure latitude budget may be used up by a variable related to masking that has not previously received much attention. The emphasis on pellicle transmission has been focused on increasing the average transmission. Much less attention has been paid to transmission uniformity. This paper explores the total demand on the photospeed latitude budget and the causes of pellicle transmission nonuniformity, and examines reasonable expectations for pellicle performance. Modeling is used to examine how the two primary errors in pellicle manufacturing contribute to nonuniformity in transmission. World-class pellicle transmission uniformity standards are discussed and a comparison is made with specifications of other components in the photolithographic process. Specifications for other materials or parameters are used as benchmarks to develop a proposed industry standard for pellicle transmission uniformity.

  7. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables to these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases and that for the cohesion of the foundation soil (c_2) decreases with an increase of variation of φ_1, while R_f for the unit weights (γ_1 and γ_2) of both soils and for the friction angle of the foundation soil (φ_2) remains almost constant under variation of the soil properties. The results compared well with some of the existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ_1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
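
    A generic Monte Carlo sketch of estimating a failure probability P_f from random soil variables, in the spirit of the approach above; the sliding limit-state function and all distribution parameters below are toy illustrations, not the paper's retaining-wall model.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 200_000

        # Illustrative random variables (normal, with assumed means and coefficients of variation).
        phi1 = np.radians(rng.normal(32.0, 0.07 * 32.0, n))   # backfill friction angle
        phi2 = np.radians(rng.normal(28.0, 0.07 * 28.0, n))   # foundation friction angle
        gamma1 = rng.normal(18.0, 0.05 * 18.0, n)             # backfill unit weight (kN/m^3)
        c2 = rng.normal(10.0, 0.10 * 10.0, n)                 # foundation cohesion (kPa)

        def sliding_margin(phi1, phi2, gamma1, c2, wall_weight=120.0, height=5.0, base=1.5):
            ka = np.tan(np.pi / 4.0 - phi1 / 2.0) ** 2          # Rankine active earth-pressure coefficient
            driving = 0.5 * ka * gamma1 * height ** 2           # active thrust per unit length of wall
            resisting = wall_weight * np.tan(phi2) + c2 * base  # base friction plus base adhesion
            return resisting - driving                          # failure when the margin is negative

        pf = np.mean(sliding_margin(phi1, phi2, gamma1, c2) < 0.0)
        print("estimated failure probability against sliding:", pf)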

  8. Thin films with disordered nanohole patterns for solar radiation absorbers

    NASA Astrophysics Data System (ADS)

    Fang, Xing; Lou, Minhan; Bao, Hua; Zhao, C. Y.

    2015-06-01

    The radiation absorption in thin films with three disordered nanohole patterns, i.e., random position, non-uniform radius, and amorphous pattern, is numerically investigated by finite-difference time-domain (FDTD) simulations. Disorder can alter the absorption spectra and has an impact on the broadband absorption performance. Compared to random-position and non-uniform-radius nanoholes, the amorphous pattern can induce much better integrated absorption. The power density spectra indicate that amorphous-pattern nanoholes reduce the symmetry and provide more resonance modes that are desired for broadband absorption. The application conditions for amorphous-pattern nanoholes show that they are much more appropriate for absorption enhancement in weakly absorbing materials. Amorphous silicon thin films with disordered nanohole patterns are applied in solar radiation absorbers. Four configurations of thin films with different nanohole patterns show that interference between layers in absorbers will change the absorption performance. Therefore, it is necessary to optimize the whole radiation absorber even though a single thin film with an amorphous nanohole pattern has reached optimal absorption.

  9. Pattern-projected schlieren imaging method using a diffractive optics element

    NASA Astrophysics Data System (ADS)

    Min, Gihyeon; Lee, Byung-Tak; Kim, Nac Woo; Lee, Munseob

    2018-04-01

    We propose a novel schlieren imaging method by projecting a random dot pattern, which is generated in a light source module that includes a diffractive optical element. All apparatuses are located on the source side, which leads to one-body sensor applications. This pattern is distorted by the deflections of schlieren objects such that the displacement vectors of random dots in the pixels can be obtained using the particle image velocimetry (PIV) algorithm. The air turbulences induced by a burning candle, boiling pot, heater, and gas torch were successfully imaged, and it was shown that imaging up to a size of 0.7 m × 0.57 m is possible. An algorithm to correct the non-uniform sensitivity according to the position of a schlieren object was analytically derived. This algorithm was applied to schlieren images of lenses. Comparing the corrected versions to the original schlieren images, we showed that the sensitivity was made uniform, with a correction of 14.15 times on average.

  10. A new definition of pharmaceutical quality: assembly of a risk simulation platform to investigate the impact of manufacturing/product variability on clinical performance.

    PubMed

    Short, Steven M; Cogdill, Robert P; D'Amico, Frank; Drennen, James K; Anderson, Carl A

    2010-12-01

    The absence of a unanimous, industry-specific definition of quality is, to a certain degree, impeding the progress of ongoing efforts to "modernize" the pharmaceutical industry. This work was predicated on requests by Dr. Woodcock (FDA) to re-define pharmaceutical quality in terms of risk by linking production characteristics to clinical attributes. A risk simulation platform that integrates population statistics, drug delivery system characteristics, dosing guidelines, patient compliance estimates, production metrics, and pharmacokinetic, pharmacodynamic, and in vitro-in vivo correlation models to investigate the impact of manufacturing variability on clinical performance of a model extended-release theophylline solid oral dosage system was developed. Manufacturing was characterized by inter- and intra-batch content uniformity and dissolution variability metrics, while clinical performance was described by a probabilistic pharmacodynamic model that expressed the probability of inefficacy and toxicity as a function of plasma concentrations. Least-squares regression revealed that both patient compliance variables, percent of doses taken and dosing time variability, significantly impacted efficacy and toxicity. Additionally, intra-batch content uniformity variability elicited a significant change in risk scores for the two adverse events and, therefore, was identified as a critical quality attribute. The proposed methodology demonstrates that pharmaceutical quality can be recast to explicitly reflect clinical performance. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association

  11. Anderson localization for radial tree-like random quantum graphs

    NASA Astrophysics Data System (ADS)

    Hislop, Peter D.; Post, Olaf

    We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.

  12. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
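
    A bare-bones sketch of the classical spectral representation method for a single stationary process, which the paper extends to the multivariate, dimension-reduced setting; the one-sided target spectrum S(w) is a toy choice, and the FFT acceleration and random-function constraints described above are not shown.

        import numpy as np

        def srm_sample(S, w_max, n_freq, dt, n_t, rng=None):
            # X(t) = sum_k sqrt(2 * S(w_k) * dw) * cos(w_k * t + phi_k), with phi_k ~ U(0, 2*pi).
            rng = rng or np.random.default_rng()
            dw = w_max / n_freq
            w = (np.arange(n_freq) + 0.5) * dw
            amp = np.sqrt(2.0 * S(w) * dw)
            phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)
            t = np.arange(n_t) * dt
            return t, (amp * np.cos(np.outer(t, w) + phi)).sum(axis=1)

        S = lambda w: 1.0 / (1.0 + w ** 2)                    # toy one-sided power spectrum
        t, x = srm_sample(S, w_max=20.0, n_freq=1024, dt=0.05, n_t=4000,
                          rng=np.random.default_rng(3))
        print("sample variance:", x.var(), "target:", np.arctan(20.0))  # integral of S over [0, 20]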

  14. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    ERIC Educational Resources Information Center

    Bancroft, Stacie L.; Bourret, Jason C.

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…
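
    A tiny illustrative sketch in Python rather than the article's Excel macros: a variable-ratio schedule samples response requirements around a mean ratio, while a random-ratio schedule reinforces each response with a constant probability; all numbers are arbitrary.

        import random

        random.seed(0)

        def variable_ratio(mean_ratio, n_reinforcers):
            # Response counts required for successive reinforcers, averaging mean_ratio.
            return [random.randint(1, 2 * mean_ratio - 1) for _ in range(n_reinforcers)]

        def random_ratio(p, n_responses):
            # Each response is reinforced with constant probability p.
            return [random.random() < p for _ in range(n_responses)]

        print(variable_ratio(mean_ratio=5, n_reinforcers=10))
        print(sum(random_ratio(p=0.2, n_responses=100)), "reinforcers in 100 responses")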

  15. Effect of Uniform Design on the Speed of Combat Tourniquet Application: A Simulation Study.

    PubMed

    Higgs, Andrew R; Maughon, Michael J; Ruland, Robert T; Reade, Michael C

    2016-08-01

    Tourniquets are issued to deployed members of both the United States (U.S.) military and the Australian Defence Force (ADF). The ease of removing the tourniquet from the pocket of the combat uniform may influence its time to application. The ADF uniform uses buttons to secure the pocket, whereas the U.S. uniform uses a hook and loop fastener system. National differences in training may influence the time to and effectiveness of tourniquet application. To compare the time taken to retrieve and apply a tourniquet from the pocket of the Australian and the U.S. combat uniform and compare the effectiveness of tourniquet application. Twenty participants from both nations were randomly selected. Participants were timed on their ability to remove a tourniquet from their pockets and then apply it effectively. The U.S. personnel removed their tourniquets in a shorter time (median 2.5 seconds) than the Australians (median 5.72 seconds, p < 0.0001). ADF members applied the tourniquet more rapidly once it was removed from the pocket (mean 41.36 seconds vs. 58.87 seconds, p < 0.037) and trended toward applying it more effectively (p = 0.1). The closure system of pockets on the combat uniform might influence the time taken to apply a tourniquet. Regular training might also reduce the time taken to apply a tourniquet effectively. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  16. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables which are similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  17. Random Variables: Simulations and Surprising Connections.

    ERIC Educational Resources Information Center

    Quinn, Robert J.; Tomlinson, Stephen

    1999-01-01

    Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)

  18. Statistical metrology—measurement and modeling of variation for advanced process development and design rule generation

    NASA Astrophysics Data System (ADS)

    Boning, Duane S.; Chung, James E.

    1998-11-01

    Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments including design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal.

  19. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when a larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
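
    A minimal sketch of a binomial leap step for the single decay reaction A -> B with rate constant c, assuming c*tau <= 1: the firing count in one leap is drawn as Binomial(A, c*tau), so it can never exceed the molecules available, unlike a Poisson draw; all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)

        def binomial_tau_leap_decay(a0, c, tau, t_end):
            a, t, path = a0, 0.0, [(0.0, a0)]
            while t < t_end and a > 0:
                p = min(1.0, c * tau)            # per-molecule firing probability in one leap
                k = rng.binomial(a, p)           # number of A -> B conversions, bounded by a
                a -= k
                t += tau
                path.append((t, a))
            return path

        path = binomial_tau_leap_decay(a0=1000, c=0.5, tau=0.1, t_end=5.0)
        # Mean final population is 1000 * (1 - c*tau)**50 ~= 77, approximating 1000 * exp(-2.5).
        print(path[-1])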

  20. Potential effect of stand structure on belowground allocation

    Treesearch

    Thomas J. Dean

    2001-01-01

    Stand structure affects two key variables that affect biomass allocation to the stem: leaf area and height to the center of the crown. By translating wind forces into bending moment, these variables generate bending stress within a stem. The uniform stress axiom of stem formation can be used to calculate current stem mass for a given bending moment and stem allocation...

  1. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full physics configurations to evaluate variable mesh simulations against uniform high and uniform low resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full physics model produces a climate comparable to the global low resolution mesh and similar high frequency statistics in the high resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high resolution results, making them an effective tool for regional climate studies; they are available in CESM2.

  2. Impact of wall thickness and saccular geometry on the computational wall stress of descending thoracic aortic aneurysms.

    PubMed

    Shang, Eric K; Nathan, Derek P; Sprinkle, Shanna R; Fairman, Ronald M; Bavaria, Joseph E; Gorman, Robert C; Gorman, Joseph H; Jackson, Benjamin M

    2013-09-10

    Wall stress calculated using finite element analysis has been used to predict rupture risk of aortic aneurysms. Prior models often assume uniform aortic wall thickness and fusiform geometry. We examined the effects of including local wall thickness, intraluminal thrombus, calcifications, and saccular geometry on peak wall stress (PWS) in finite element analysis of descending thoracic aortic aneurysms. Computed tomographic angiography of descending thoracic aortic aneurysms (n=10 total, 5 fusiform and 5 saccular) underwent 3-dimensional reconstruction with custom algorithms. For each aneurysm, an initial model was constructed with uniform wall thickness. Experimental models explored the addition of variable wall thickness, calcifications, and intraluminal thrombus. Each model was loaded with 120 mm Hg pressure, and von Mises PWS was computed. The mean PWS of uniform wall thickness models was 410 ± 111 kPa. The imposition of variable wall thickness increased PWS (481 ± 126 kPa, P<0.001). Although the addition of calcifications was not statistically significant (506 ± 126 kPa, P=0.07), the addition of intraluminal thrombus to variable wall thickness (359 ± 86 kPa, P ≤ 0.001) reduced PWS. A final model incorporating all features also reduced PWS (368 ± 88 kPa, P<0.001). Saccular geometry did not increase diameter-normalized stress in the final model (77 ± 7 versus 67 ± 12 kPa/cm, P=0.22). Incorporation of local wall thickness can significantly increase PWS in finite element analysis models of thoracic aortic aneurysms. Incorporating variable wall thickness, intraluminal thrombus, and calcifications significantly impacts computed PWS of thoracic aneurysms; sophisticated models may, therefore, be more accurate in assessing rupture risk. Saccular aneurysms did not demonstrate a significantly higher normalized PWS than fusiform aneurysms.

  3. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  4. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738

  5. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  6. Data splitting for artificial neural networks using SOM-based stratified sampling.

    PubMed

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
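
    A hedged sketch of data splitting by stratified sampling with Neyman allocation; for brevity it uses k-means clustering as a stand-in for the self-organizing map described above, and the toy dataset and sample size are arbitrary.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 3))                        # toy inputs
        y = X[:, 0] + 0.5 * rng.normal(size=2000)             # toy target

        strata = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)  # stand-in for SOM units

        def neyman_split(y, strata, n_sample, rng):
            # Allocate n_sample draws across strata proportionally to N_h * sigma_h (Neyman allocation).
            labels = np.unique(strata)
            sizes = np.array([np.sum(strata == h) for h in labels])
            sigmas = np.array([np.std(y[strata == h]) for h in labels])
            alloc = np.maximum(1, np.round(n_sample * sizes * sigmas / np.sum(sizes * sigmas)).astype(int))
            chosen = [rng.choice(np.flatnonzero(strata == h), size=min(k, s), replace=False)
                      for h, k, s in zip(labels, alloc, sizes)]
            return np.concatenate(chosen)

        test_idx = neyman_split(y, strata, n_sample=400, rng=rng)
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        print("train size:", len(train_idx), "test size:", len(test_idx))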

  7. A unified approach for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-09-01

    Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
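
    The first step of the framework, discretizing fuzziness with the α-cut technique, can be illustrated with a triangular fuzzy number; the sketch below is only a minimal example of that step, with an invented fuzzy parameter, not the full squeal-instability analysis.

      # Sketch of the alpha-cut discretization step for a triangular fuzzy number.
      # A fuzzy parameter (a, m, b) is reduced to a family of intervals, one per alpha
      # level, which is how random-fuzzy models are simplified to random-interval models.
      import numpy as np

      def alpha_cut(a, m, b, alpha):
          """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
          lo = a + alpha * (m - a)
          hi = b - alpha * (b - m)
          return lo, hi

      # illustrative fuzzy friction coefficient (support [0.3, 0.5], core 0.4)
      for alpha in np.linspace(0.0, 1.0, 5):
          lo, hi = alpha_cut(0.3, 0.4, 0.5, alpha)
          print(f"alpha = {alpha:.2f}: interval [{lo:.3f}, {hi:.3f}]")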

  8. A probabilistic approach to randomness in geometric configuration of scalable origami structures

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Paulino, Glaucio; Gardoni, Paolo

    2015-03-01

    Origami, an ancient paper folding art, has inspired many solutions to modern engineering challenges. The demand for actual engineering applications motivates further investigation in this field. Although rooted in the historic art form, many applications of origami are based on newly designed origami patterns to match the specific requirements of an engineering problem. The application of origami to structural design problems ranges from the micro-structure of materials to large-scale deployable shells. For instance, some origami-inspired designs have unique properties such as negative Poisson's ratio and flat foldability. However, origami structures are typically constrained by strict mathematical geometric relationships, which, in reality, can easily be violated due to, for example, random imperfections introduced during manufacturing or non-uniform deformations under working conditions (e.g. due to non-uniform thermal effects). Therefore, the effects of uncertainties in origami-like structures need to be studied in further detail in order to provide a practical guide for scalable origami-inspired engineering designs. Through reliability and probabilistic analysis, we investigate the effect of randomness in origami structures on their mechanical properties. Dislocations of vertices of an origami structure have different impacts on different mechanical properties, and different origami designs could have different sensitivities to imperfections. Thus we aim to provide a preliminary understanding of the structural behavior of some common scalable origami structures subject to randomness in their geometric configurations in order to help transition the technology toward practical applications of origami engineering.

  9. Are randomly grown graphs really random?

    PubMed

    Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H

    2001-10-01

    We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
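
    A minimal simulation of the grown-graph model described above is sketched below, using a union-find structure to track the largest component as a function of delta; the number of steps and the delta values are illustrative.

      # Sketch of the grown-graph model: at each step add a vertex, then with
      # probability delta join two uniformly chosen vertices by an edge.
      # A union-find structure tracks component sizes.
      import random

      def largest_component_fraction(t, delta, seed=0):
          random.seed(seed)
          parent = list(range(t))

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]
                  i = parent[i]
              return i

          for n in range(t):
              # vertices 0..n now exist; add an edge with probability delta
              if n >= 1 and random.random() < delta:
                  a, b = random.randrange(n + 1), random.randrange(n + 1)
                  ra, rb = find(a), find(b)
                  if ra != rb:
                      parent[ra] = rb
          sizes = {}
          for i in range(t):
              r = find(i)
              sizes[r] = sizes.get(r, 0) + 1
          return max(sizes.values()) / t

      for delta in (0.05, 0.125, 0.25, 0.5):
          print(delta, round(largest_component_fraction(200000, delta), 4))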

  10. Apparatus for synthesis of a solar spectrum

    DOEpatents

    Sopori, Bhushan L.

    1993-01-01

    A xenon arc lamp and a tungsten filament lamp provide light beams that together contain all the wavelengths required to accurately simulate a solar spectrum. Suitable filter apparatus selectively direct visible and ultraviolet light from the xenon arc lamp into two legs of a trifurcated randomized fiber optic cable. Infrared light selectively filtered from the tungsten filament lamp is directed into the third leg of the fiber optic cable. The individual optic fibers from the three legs are brought together in a random fashion into a single output leg. The output beam emanating from the output leg of the trifurcated randomized fiber optic cable is extremely uniform and contains wavelengths from each of the individual filtered light beams. This uniform output beam passes through suitable collimation apparatus before striking the surface of the solar cell being tested. Adjustable aperture apparatus located between the lamps and the input legs of the trifurcated fiber optic cable can be selectively adjusted to limit the amount of light entering each leg, thereby providing a means of "fine tuning" or precisely adjusting the spectral content of the output beam. Finally, an adjustable aperture apparatus may also be placed in the output beam to adjust the intensity of the output beam without changing the spectral content and distribution of the output beam.

  11. Many-junction photovoltaic device performance under non-uniform high-concentration illumination

    NASA Astrophysics Data System (ADS)

    Valdivia, Christopher E.; Wilkins, Matthew M.; Chahal, Sanmeet S.; Proulx, Francine; Provost, Philippe-Olivier; Masson, Denis P.; Fafard, Simon; Hinzer, Karin

    2017-09-01

    A parameterized 3D distributed circuit model was developed to calculate the performance of III-V solar cells and photonic power converters (PPC) with a variable number of epitaxial vertically-stacked pn junctions. PPC devices are designed with many pn junctions to realize higher voltages and to operate under non-uniform illumination profiles from a laser or LED. Performance impacts of non-uniform illumination were greatly reduced with increasing number of junctions, with simulations comparing PPC devices with 3 to 20 junctions. Experimental results using Azastra Opto's 12- and 20-junction PPC illuminated by an 845 nm diode laser show high performance even with a small gap between the PPC and optical fiber output, until the local tunnel junction limit is reached.

  12. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability and implementation: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
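
    A bare-bones version of coordinate hit-and-run, without the rounding preprocessing that gives CHRR its efficiency, can be sketched as follows for a small hand-made polytope; this is not the COBRA Toolbox implementation.

      # Sketch of coordinate hit-and-run sampling from a polytope {x : A x <= b}.
      # The rounding step of CHRR is omitted; the polytope and starting point are toy.
      import numpy as np

      rng = np.random.default_rng(1)

      # illustrative 2-D polytope: 0 <= x, y <= 1 and x + y <= 1.5
      A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.], [1., 1.]])
      b = np.array([1., 0., 1., 0., 1.5])

      def coordinate_hit_and_run(x0, n_samples, thin=10):
          x = x0.copy()
          out = []
          for step in range(n_samples * thin):
              i = rng.integers(len(x))               # random coordinate direction
              slack = b - A @ x                      # >= 0 for a feasible point
              lo, hi = -np.inf, np.inf
              for a_ji, s in zip(A[:, i], slack):
                  if a_ji > 0:
                      hi = min(hi, s / a_ji)
                  elif a_ji < 0:
                      lo = max(lo, s / a_ji)
              x[i] += rng.uniform(lo, hi)            # uniform point on the chord
              if step % thin == 0:
                  out.append(x.copy())
          return np.array(out)

      samples = coordinate_hit_and_run(np.array([0.2, 0.2]), 1000)
      print(samples.mean(axis=0))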

  13. Ancestral state reconstruction, rate heterogeneity, and the evolution of reptile viviparity.

    PubMed

    King, Benedict; Lee, Michael S Y

    2015-05-01

    Virtually all models for reconstructing ancestral states for discrete characters make the crucial assumption that the trait of interest evolves at a uniform rate across the entire tree. However, this assumption is unlikely to hold in many situations, particularly as ancestral state reconstructions are being performed on increasingly large phylogenies. Here, we show how failure to account for such variable evolutionary rates can cause highly anomalous (and likely incorrect) results, while three methods that accommodate rate variability yield the opposite, more plausible, and more robust reconstructions. The random local clock method, implemented in BEAST, estimates the position and magnitude of rate changes on the tree; split BiSSE estimates separate rate parameters for pre-specified clades; and the hidden rates model partitions each character state into a number of rate categories. Simulations show the inadequacy of traditional models when characters evolve with both asymmetry (different rates of change between states within a character) and heterotachy (different rates of character evolution across different clades). The importance of accounting for rate heterogeneity in ancestral state reconstruction is highlighted empirically with a new analysis of the evolution of viviparity in squamate reptiles, which reveals a predominance of forward (oviparous to viviparous) transitions and very few reversals. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
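
    As a minimal illustration of rate asymmetry for a binary character, the sketch below builds an asymmetric two-state rate matrix and evaluates transition probabilities over a branch via the matrix exponential; the rates are invented, not estimates from the squamate analysis.

      # Sketch of asymmetric rates for a binary character (e.g. oviparous/viviparous):
      # the transition-probability matrix over a branch of length t is P(t) = expm(Q t).
      # Rates are illustrative, not estimates from the paper.
      import numpy as np
      from scipy.linalg import expm

      q01, q10 = 1.0, 0.1        # forward rate much larger than reversal rate
      Q = np.array([[-q01, q01],
                    [ q10, -q10]])

      for t in (0.1, 1.0, 10.0):
          P = expm(Q * t)
          print(f"t = {t}:")
          print(P.round(3))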

  14. Non-uniform multivariate embedding to assess the information transfer in cardiovascular and cardiorespiratory variability series.

    PubMed

    Faes, Luca; Nollo, Giandomenico; Porta, Alberto

    2012-03-01

    The complexity of the short-term cardiovascular control prompts for the introduction of multivariate (MV) nonlinear time series analysis methods to assess directional interactions reflecting the underlying regulatory mechanisms. This study introduces a new approach for the detection of nonlinear Granger causality in MV time series, based on embedding the series by a sequential, non-uniform procedure, and on estimating the information flow from one series to another by means of the corrected conditional entropy. The approach is validated on short realizations of linear stochastic and nonlinear deterministic processes, and then evaluated on heart period, systolic arterial pressure and respiration variability series measured from healthy humans in the resting supine position and in the upright position after head-up tilt. Copyright © 2011 Elsevier Ltd. All rights reserved.
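
    A crude, histogram-based stand-in for the information-transfer idea is sketched below: the reduction in conditional entropy of a target series when a candidate driver's past is added to the conditioning set. It uses plain binning and a single lag, not the corrected conditional entropy or the sequential non-uniform embedding developed in the paper.

      # Sketch of a binned estimate of information transfer from X to Y:
      # H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1}).
      import numpy as np

      rng = np.random.default_rng(2)
      n = 5000
      x = rng.normal(size=n)
      y = np.zeros(n)
      for t in range(1, n):                        # y is partly driven by past x
          y[t] = 0.5 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

      def entropy(*cols, bins=8):
          """Joint Shannon entropy (nats) of discretized columns."""
          codes = [np.digitize(c, np.histogram_bin_edges(c, bins)[1:-1]) for c in cols]
          _, counts = np.unique(np.vstack(codes), axis=1, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log(p))

      yt, y1, x1 = y[1:], y[:-1], x[:-1]
      ce_without_x = entropy(yt, y1) - entropy(y1)
      ce_with_x = entropy(yt, y1, x1) - entropy(y1, x1)
      print("information transfer X -> Y (nats):", ce_without_x - ce_with_x)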

  15. Frequency and distribution of incidental findings deemed appropriate for S modifier designation on low-dose CT in a lung cancer screening program.

    PubMed

    Reiter, Michael J; Nemesure, Allison; Madu, Ezemonye; Reagan, Lisa; Plank, April

    2018-06-01

    To describe the frequency, distribution and reporting patterns of incidental findings receiving the Lung-RADS S modifier on low-dose chest computed tomography (CT) among lung cancer screening participants. This retrospective investigation included 581 individuals who received baseline low-dose chest CT for lung cancer screening between October 2013 and June 2017 at a single center. Incidental findings resulting in assignment of Lung-RADS S modifier were recorded as were incidental abnormalities detailed within the body of the radiology report only. A subset of 60 randomly selected CTs was reviewed by a second (blinded) radiologist to evaluate inter-rater variability of Lung-RADS reporting. A total of 261 (45%) participants received the Lung-RADS S modifier on baseline CT with 369 incidental findings indicated as potentially clinically significant. Coronary artery calcification was most commonly reported, accounting for 182 of the 369 (49%) findings. An additional 141 incidentalomas of the same types as these 369 findings were described in reports but were not labelled with the S modifier. Therefore, as high as 69% (402 of 581) of participants could have received the S modifier if reporting was uniform. Inter-radiologist concordance of S modifier reporting in a subset of 60 participants was poor (42% agreement, kappa = 0.2). Incidental findings are commonly identified on chest CT for lung cancer screening, yet reporting of the S modifier within Lung-RADS is inconsistent. Specific guidelines are necessary to better define potentially clinically significant abnormalities and to improve reporting uniformity. Copyright © 2018 Elsevier B.V. All rights reserved.
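
    For reference, the inter-rater statistic quoted above (Cohen's kappa) can be computed from a 2x2 agreement table as sketched below; the counts in the example are invented, not the study's data.

      # Sketch of Cohen's kappa for two readers assigning the S modifier (yes/no).
      import numpy as np

      def cohens_kappa(table):
          table = np.asarray(table, dtype=float)
          n = table.sum()
          p_obs = np.trace(table) / n
          p_exp = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2
          return (p_obs - p_exp) / (1.0 - p_exp)

      # rows: reader 1 (S yes / S no), columns: reader 2 -- illustrative counts only
      table = [[20, 15],
               [10, 15]]
      print("kappa =", round(cohens_kappa(table), 2))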

  16. A Thermoelastic Damping Model for the Cone Microcantilever Resonator with Circular Cross-section

    NASA Astrophysics Data System (ADS)

    Li, Pu; Zhou, Hongyue

    2017-07-01

    Microbeams with variable cross-section have been applied in Microelectromechanical Systems (MEMS) resonators. The quality factor (Q-factor) is an important metric for evaluating the performance of MEMS resonators, and a high Q-factor indicates excellent performance. Thermoelastic damping (TED), which has been verified as a fundamental energy loss mechanism for microresonators, determines the upper limit of the Q-factor. TED can be calculated by Zener's model and the Lifshitz and Roukes (LR) model. However, for microbeam resonators with variable cross-sections, these two models become invalid in some cases. In this work, we derived a TED model for the cone microcantilever with circular cross-section, which is a representative non-uniform microbeam. The comparison of results obtained by the present model and a Finite Element Method (FEM) model proves that the present model is valid for predicting the TED value for a cone microcantilever with circular cross-section. The results suggest that the first-order natural frequencies and TED values of the cone microcantilever are larger than those of a uniform microbeam for large aspect ratios (l/r0). In addition, the Debye peak value of a uniform microcantilever is equal to 0.5ΔE, while that of the cone microcantilever is about 0.438ΔE.
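
    For background, the classical Zener estimate for a uniform rectangular-section beam, which the paper extends to the cone geometry, can be sketched as follows; the silicon-like material constants and dimensions are only illustrative.

      # Sketch of the classical Zener estimate of thermoelastic damping for a
      # rectangular-section beam (background only; the paper treats the cone /
      # circular-section case). Material constants are rough silicon values.
      import numpy as np

      E, alpha, T0 = 170e9, 2.6e-6, 300.0       # Young's modulus, CTE, temperature
      rho, cp, k = 2330.0, 700.0, 150.0         # density, specific heat, conductivity

      delta_E = E * alpha**2 * T0 / (rho * cp)  # relaxation strength
      chi = k / (rho * cp)                      # thermal diffusivity

      def q_inverse(freq_hz, h):
          """Zener TED: Q^-1 = delta_E * (w*tau) / (1 + (w*tau)^2), tau = h^2/(pi^2*chi)."""
          tau = h**2 / (np.pi**2 * chi)
          wt = 2 * np.pi * freq_hz * tau
          return delta_E * wt / (1.0 + wt**2)

      for f in (1e5, 1e6, 1e7):
          print(f"f = {f:.0e} Hz, Q^-1 = {q_inverse(f, h=10e-6):.2e}")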

  17. Qualification of security printing features

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.; Aronoff, Jason S.; Arnabat, Jordi

    2006-02-01

    This paper describes the statistical and hardware processes involved in qualifying two related printing features for their deployment in product (e.g. document and package) security. The first is a multi-colored tiling feature that can also be combined with microtext to provide additional forms of security protection. The color information is authenticated automatically with a variety of handheld, desktop and production scanners. The microtext is authenticated either following magnification or manually by a field inspector. The second security feature can also be tile-based. It involves the use of two inks that provide the same visual color, but differ in their transparency to infrared (IR) wavelengths. One of the inks is effectively transparent to IR wavelengths, allowing emitted IR light to pass through. The other ink is effectively opaque to IR wavelengths. These inks allow the printing of a seemingly uniform, or spot, color over a (truly) uniform IR emitting ink layer. The combination converts a uniform covert ink and a spot color to a variable data region capable of encoding identification sequences with high density. Also, it allows the extension of variable data printing for security to ostensibly static printed regions, affording greater security protection while meeting branding and marketing specifications.

  18. Behavior of dusty real gas on adiabatic propagation of cylindrical imploding strong shock waves

    NASA Astrophysics Data System (ADS)

    Gangwar, P. K.

    2018-05-01

    In this paper, the CCW method has been used to study the behavior of a dusty real gas on the adiabatic propagation of cylindrical imploding strong shock waves. The strength of overtaking waves is estimated under the assumption that both C+ and C- disturbances propagate in a non-uniform region with the same density distribution. It is assumed that the dusty gas is a mixture of a real gas and a large number of small spherical solid particles of uniform size. The solid particles are uniformly distributed in the medium. Maintaining equilibrium flow conditions, the expressions for shock strength have been derived both for free propagation and under the effect of overtaking disturbances. The variation of all flow variables with propagation distance, mass concentration of solid particles in the mixture and the ratio of solid particles to the initial density of the gas have been computed and discussed through graphs. It is found that the presence of dust particles in the gaseous medium has significant effects on the variation of flow variables and the shock is strengthened under the influence of overtaking disturbances. The results obtained here have been compared with those for an ideal gas.

  19. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  20. Carbon nanotube bundles with tensile strength over 80 GPa.

    PubMed

    Bai, Yunxiang; Zhang, Rufan; Ye, Xuan; Zhu, Zhenxing; Xie, Huanhuan; Shen, Boyuan; Cai, Dali; Liu, Bofei; Zhang, Chenxi; Jia, Zhao; Zhang, Shenli; Li, Xide; Wei, Fei

    2018-05-14

    Carbon nanotubes (CNTs) are one of the strongest known materials. When assembled into fibres, however, their strength becomes impaired by defects, impurities, random orientations and discontinuous lengths. Fabricating CNT fibres with strength reaching that of a single CNT has been an enduring challenge. Here, we demonstrate the fabrication of CNT bundles (CNTBs) that are centimetres long with tensile strength over 80 GPa using ultralong defect-free CNTs. The tensile strength of CNTBs is controlled by the Daniels effect owing to the non-uniformity of the initial strains in the components. We propose a synchronous tightening and relaxing strategy to release these non-uniform initial strains. The fabricated CNTBs, consisting of a large number of components with parallel alignment, defect-free structures, continuous lengths and uniform initial strains, exhibit a tensile strength of 80 GPa (corresponding to an engineering tensile strength of 43 GPa), which is far higher than that of any other strong fibre.
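
    The role of strain dispersion can be illustrated with the classical equal-load-sharing (Daniels) bundle model, sketched below with invented strength distributions; this is a simplification of the analysis in the paper.

      # Sketch of the Daniels (equal-load-sharing) bundle strength for fibres with
      # dispersed strengths, illustrating why non-uniform initial strains lower the
      # bundle strength relative to a single fibre; parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)

      def bundle_strength(strengths):
          """Equal load sharing: peak of sigma_(k) * (n - k) / n over sorted strengths."""
          s = np.sort(strengths)
          n = len(s)
          return np.max(s * (n - np.arange(n)) / n)

      n = 10000
      tight = rng.normal(100.0, 1.0, n)      # GPa, nearly uniform effective strengths
      broad = rng.normal(100.0, 20.0, n)     # GPa, broad dispersion

      print("mean fibre strength:", 100.0)
      print("bundle strength, tight dispersion:", round(bundle_strength(tight), 1))
      print("bundle strength, broad dispersion:", round(bundle_strength(broad), 1))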

  1. bFGF-containing electrospun gelatin scaffolds with controlled nano-architectural features for directed angiogenesis

    PubMed Central

    Montero, Ramon B.; Vial, Ximena; Nguyen, Dat Tat; Farhand, Sepehr; Reardon, Mark; Pham, Si M.; Tsechpenakis, Gavriil; Andreopoulos, Fotios M.

    2011-01-01

    Current therapeutic angiogenesis strategies are focused on the development of biologically responsive scaffolds that can deliver multiple angiogenic cytokines and/or cells in ischemic regions. Herein, we report on a novel electrospinning approach to fabricate cytokine-containing nanofibrous scaffolds with tunable architecture to promote angiogenesis. Fiber diameter and uniformity were controlled by varying the concentration of the polymeric (i.e. gelatin) solution, the feed rate, needle to collector distance, and electric field potential between the collector plate and injection needle. Scaffold fiber orientation (random vs. aligned) was achieved by alternating the polarity of two parallel electrodes placed on the collector plate, thus dictating fiber deposition patterns. Basic fibroblast growth factor (bFGF) was physically immobilized within the gelatin scaffolds at variable concentrations and human umbilical vein endothelial cells (HUVEC) were seeded on the top of the scaffolds. Cell proliferation and migration were assessed as a function of growth factor loading and scaffold architecture. HUVECs successfully adhered onto gelatin B scaffolds and cell proliferation was directly proportional to the loading concentrations of the growth factor (0–100 ng/mL bFGF). Fiber orientation had a pronounced effect on cell morphology and orientation. Cells were spread along the fibers of the electrospun scaffolds with the aligned orientation and developed a spindle-like morphology parallel to the scaffold's fibers. In contrast, cells seeded onto the scaffolds with random fiber orientation did not demonstrate any directionality and appeared to have a rounder shape. Capillary formation (i.e. sprout length and number of sprouts per bead), assessed in a 3-D in vitro angiogenesis assay, was a function of bFGF loading concentration (0 ng, 50 ng and 100 ng per scaffold) for both types of electrospun scaffolds (i.e. with aligned or random fiber orientation). PMID:22200610

  2. Modeling the motion and orientation of various pharmaceutical tablet shapes in a film coating pan using DEM.

    PubMed

    Ketterhagen, William R

    2011-05-16

    Film coating uniformity is an important quality attribute of pharmaceutical tablets. Large variability in coating thickness can limit process efficiency or cause significant variation in the amount or delivery rate of the active pharmaceutical ingredient to the patient. In this work, the discrete element method (DEM) is used to computationally model the motion and orientation of several novel pharmaceutical tablet shapes in a film coating pan in order to predict coating uniformity. The model predictions are first confirmed with experimental data obtained from an equivalent film coating pan using a machine vision system. The model is then applied to predict coating uniformity for various tablet shapes, pan speeds, and pan loadings. The relative effects of these parameters on both inter- and intra-tablet film coating uniformity are assessed. The DEM results show intra-tablet coating uniformity is strongly influenced by tablet shape, and the extent of this can be predicted by a measure of the tablet shape. The tablet shape is shown to have little effect on the mixing of tablets, and thus, the inter-tablet coating uniformity. The pan rotation speed and pan loading are shown to have a small effect on intra-tablet coating uniformity but a more significant impact on inter-tablet uniformity. These results demonstrate the usefulness of modeling in guiding drug product development decisions such as selection of tablet shape and process operating conditions. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the ...

  4. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration is discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values, the economic magnitudes of interest, net return and unit production cost, are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
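
    A minimal version of the repeated-trial procedure described above might look like the sketch below, with entirely invented distributions and cost figures standing in for the postulated probability functions.

      # Sketch of a Monte Carlo exploration-program trial: draw the number of
      # successes and reservoir sizes, compute the net return, and repeat.
      import numpy as np

      rng = np.random.default_rng(4)
      n_trials, wells, p_success = 10000, 20, 0.15
      price, dev_cost_per_well, find_cost_per_well = 3.0, 40.0, 10.0   # arbitrary units

      net_returns = np.empty(n_trials)
      for i in range(n_trials):
          successes = rng.binomial(wells, p_success)                   # independent wells
          sizes = rng.lognormal(mean=3.0, sigma=1.0, size=successes)   # reservoir sizes
          revenue = price * sizes.sum()
          cost = wells * find_cost_per_well + successes * dev_cost_per_well
          net_returns[i] = revenue - cost

      print("mean net return:", round(net_returns.mean(), 1))
      print("P(net return < 0):", round((net_returns < 0).mean(), 3))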

  5. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
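
    For orientation, the inner reliability loop alone, a basic first order reliability method (HL-RF) iteration with the interval variables frozen at one realization, is sketched below with an invented limit-state function; the paper's contribution is to fold the interval loop into this search through the KKT conditions, which the sketch does not attempt.

      # Sketch of an HL-RF iteration in standard normal space; the reliability index
      # beta is the distance from the origin to the most probable failure point.
      import numpy as np

      def hlrf(g, grad_g, n_dim, iters=30):
          u = np.zeros(n_dim)
          for _ in range(iters):
              gv, gr = g(u), grad_g(u)
              u = (gr @ u - gv) / (gr @ gr) * gr      # HL-RF update
          return np.linalg.norm(u)                    # reliability index beta

      # illustrative limit state g(u) = 3 - u1 - 0.5*u2**2 (failure when g < 0)
      g = lambda u: 3.0 - u[0] - 0.5 * u[1]**2
      grad_g = lambda u: np.array([-1.0, -u[1]])

      print("beta =", round(hlrf(g, grad_g, n_dim=2), 3))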

  6. Directional, seamless, and restriction enzyme-free construction of random-primed complementary DNA libraries using phosphorothioate-modified primers.

    PubMed

    Howland, Shanshan W; Poh, Chek-Meng; Rénia, Laurent

    2011-09-01

    Directional cloning of complementary DNA (cDNA) primed by oligo(dT) is commonly achieved by appending a restriction site to the primer, whereas the second strand is synthesized through the combined action of RNase H and Escherichia coli DNA polymerase I (PolI). Although random primers provide more uniform and complete coverage, directional cloning with the same strategy is highly inefficient. We report that phosphorothioate linkages protect the tail sequence appended to random primers from the 5'→3' exonuclease activity of PolI. We present a simple strategy for constructing a random-primed cDNA library using the efficient, size-independent, and seamless In-Fusion cloning method instead of restriction enzymes. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Breast MRI at 7 Tesla with a bilateral coil and robust fat suppression.

    PubMed

    Brown, Ryan; Storey, Pippa; Geppert, Christian; McGorty, KellyAnne; Klautau Leite, Ana Paula; Babb, James; Sodickson, Daniel K; Wiggins, Graham C; Moy, Linda

    2014-03-01

    To develop a bilateral coil and fat suppressed T1-weighted sequence for 7 Tesla (T) breast MRI. A dual-solenoid coil and three-dimensional (3D) T1w gradient echo sequence with B1+ insensitive fat suppression (FS) were developed. T1w FS image quality was characterized through image uniformity and fat-water contrast measurements in 11 subjects. Signal-to-noise ratio (SNR) and flip angle maps were acquired to assess the coil performance. Bilateral contrast-enhanced and unilateral high resolution (0.6 mm isotropic, 6.5 min acquisition time) imaging highlighted the 7T SNR advantage. Reliable and effective FS and high image quality was observed in all subjects at 7T, indicating that the custom coil and pulse sequence were insensitive to high-field obstacles such as variable tissue loading. 7T and 3T image uniformity was similar (P=0.24), indicating adequate 7T B1+ uniformity. High 7T SNR and fat-water contrast enabled 0.6 mm isotropic imaging and visualization of a high level of fibroglandular tissue detail. 7T T1w FS bilateral breast imaging is feasible with a custom radiofrequency (RF) coil and pulse sequence. Similar image uniformity was achieved at 7T and 3T, despite different RF field behavior and variable coil-tissue interaction due to anatomic differences that might be expected to alter magnetic field patterns. Copyright © 2013 Wiley Periodicals, Inc.

  8. Breast MRI at 7 Tesla with a Bilateral Coil and Robust Fat Suppression

    PubMed Central

    Brown, Ryan; Storey, Pippa; Geppert, Christian; McGorty, KellyAnne; Leite, Ana Paula Klautau; Babb, James; Sodickson, Daniel K.; Wiggins, Graham C.; Moy, Linda

    2013-01-01

    Purpose To develop a bilateral coil and optimized fat suppressed T1-weighted sequence for 7T breast MRI. Materials and Methods A dual-solenoid coil and 3D T1w gradient echo sequence with B1+ insensitive fat suppression (FS) were developed for 7T. T1w FS image quality was characterized through image uniformity and fat/water contrast measurements in 11 subjects. Signal-to-noise ratio (SNR) and flip angle maps were acquired to assess the coil performance. Bilateral contrast-enhanced and unilateral high resolution (0.6 mm isotropic, 6.5 min acquisition time) imaging highlighted the 7 T SNR advantage. Results Reliable and effective FS and high image quality was observed in all subjects at 7T, indicating that the custom coil and pulse sequence were insensitive to high-field obstacles such as variable tissue loading. 7T and 3T T1w FS image uniformity was similar (P=0.24), indicating adequate 7T B1+ uniformity. High 7T SNR and fat/water contrast enabled 0.6 mm isotropic imaging and visualization of a high level of fibroglandular tissue detail. Conclusion 7T T1w FS bilateral breast imaging is feasible with a custom RF coil and pulse sequence. Similar image uniformity was achieved at 7T and 3T, despite different RF field behavior and variable coil-tissue interaction due to anatomic differences that might be expected to alter magnetic field patterns. PMID:24123517

  9. Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies.

    PubMed

    Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary

    2018-04-29

    Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
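
    The dependence of the ICC on subject spread can be illustrated with a one-way random-effects ICC computed from simulated rater scores, as sketched below; the distributions and sample sizes are illustrative rather than those used in the simulation study.

      # Sketch of a one-way random-effects ICC: the study-design component
      # (spread of true subject scores) drives ICC while rater error is held fixed.
      import numpy as np

      rng = np.random.default_rng(5)

      def icc_oneway(scores):
          """scores: (n_subjects, k_raters). ICC(1) = (MSB - MSW) / (MSB + (k-1)*MSW)."""
          n, k = scores.shape
          grand = scores.mean()
          msb = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
          msw = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
          return (msb - msw) / (msb + (k - 1) * msw)

      n, k, sigma_err = 80, 2, 1.0
      for label, sigma_true in (("narrow subject spread", 1.0), ("wide subject spread", 3.0)):
          true = rng.normal(5.0, sigma_true, n)
          scores = true[:, None] + rng.normal(0, sigma_err, (n, k))
          print(label, "ICC =", round(icc_oneway(scores), 2))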

  10. Impact of deposition-rate fluctuations on thin-film thickness and uniformity

    DOE PAGES

    Oliver, Joli B.

    2016-11-04

    Variations in deposition rate are superimposed on a thin-film–deposition model with planetary rotation to determine the impact on film thickness. Variations in magnitude and frequency of the fluctuations relative to the speed of planetary revolution lead to thickness errors and uniformity variations up to 3%. Sufficiently rapid oscillations in the deposition rate have a negligible impact, while slow oscillations are found to be problematic, leading to changes in the nominal film thickness. Finally, superimposing noise as random fluctuations in the deposition rate has a negligible impact, confirming the importance of any underlying harmonic oscillations in deposition rate or source operation.

  11. Semiconductor laser insert with uniform illumination for use in photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Charamisinau, Ivan; Happawana, Gemunu; Evans, Gary; Rosen, Arye; Hsi, Richard A.; Bour, David

    2005-08-01

    A low-cost semiconductor red laser light delivery system for esophagus cancer treatment is presented. The system is small enough for insertion into the patient's body. Scattering elements with nanoscale particles are used to achieve uniform illumination. The scattering element optimization calculations, with Mie theory, provide scattering and absorption efficiency factors for scattering particles composed of various materials. The possibility of using randomly deformed spheres and composite particles instead of perfect spheres is analyzed using an extension to Mie theory. The measured radiation pattern from a prototype light delivery system fabricated using these design criteria shows reasonable agreement with the theoretically predicted pattern.

  12. Severity of Organized Item Theft in Computerized Adaptive Testing: A Simulation Study

    ERIC Educational Resources Information Center

    Yi, Qing; Zhang, Jinming; Chang, Hua-Hua

    2008-01-01

    Criteria had been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated potential damage caused by organized item theft in computerized adaptive…

  13. Explanatory Variables Associated with Campylobacter and Escherichia coli Concentrations on Broiler Chicken Carcasses during Processing in Two Slaughterhouses.

    PubMed

    Pacholewicz, Ewa; Swart, Arno; Wagenaar, Jaap A; Lipman, Len J A; Havelaar, Arie H

    2016-12-01

    This study aimed at identifying explanatory variables that were associated with Campylobacter and Escherichia coli concentrations throughout processing in two commercial broiler slaughterhouses. Quantitative data on Campylobacter and E. coli along the processing line were collected. Moreover, information on batch characteristics, slaughterhouse practices, process performance, and environmental variables was collected through questionnaires, observations, and measurements, resulting in data on 19 potential explanatory variables. Analysis was conducted separately in each slaughterhouse to identify which variables were related to changes in concentrations of Campylobacter and E. coli during the processing steps: scalding, defeathering, evisceration, and chilling. Associations with explanatory variables were different in the slaughterhouses studied. In the first slaughterhouse, there was only one significant association: poorer uniformity of the weight of carcasses within a batch with less decrease in E. coli concentrations after defeathering. In the second slaughterhouse, significant statistical associations were found with variables including age, uniformity, average weight of carcasses, Campylobacter concentrations in excreta and ceca, and E. coli concentrations in excreta. Bacterial concentrations in excreta and ceca were found to be the most prominent variables, because they were associated with concentration on carcasses at various processing points. Although the slaughterhouses produced specific products and had different batch characteristics and processing parameters, the effect of the significant variables was not always the same for each slaughterhouse. Therefore, each slaughterhouse needs to determine its particular relevant measures for hygiene control and process management. This identification could be supported by monitoring changes in bacterial concentrations during processing in individual slaughterhouses. In addition, the possibility that management and food handling practices in slaughterhouses contribute to the differences in bacterial contamination between slaughterhouses needs further investigation.

  14. Impact of recombination on polymorphism of genes encoding Kunitz-type protease inhibitors in the genus Solanum.

    PubMed

    Speranskaya, Anna S; Krinitsina, Anastasia A; Kudryavtseva, Anna V; Poltronieri, Palmiro; Santino, Angelo; Oparina, Nina Y; Dmitriev, Alexey A; Belenikin, Maxim S; Guseva, Marina A; Shevelev, Alexei B

    2012-08-01

    The group of Kunitz-type protease inhibitors (KPI) from potato is encoded by a polymorphic family of multiple allelic and non-allelic genes. The previous explanations of the KPI variability were based on the hypothesis of random mutagenesis as a key factor of KPI polymorphism. KPI-A genes from the genomes of Solanum tuberosum cv. Istrinskii and the wild species Solanum palustre were amplified by PCR with subsequent cloning in plasmids. True KPI sequences were derived from comparison of the cloned copies. "Hot spots" of recombination in KPI genes were independently identified by DnaSP 4.0 and TOPALi v2.5 software. The KPI-A sequence from potato cv. Istrinskii was found to be 100% identical to the gene from Solanum nigrum. This fact illustrates a high degree of similarity of KPI genes in the genus Solanum. Pairwise comparison of KPI A and B genes unambiguously showed a non-uniform extent of polymorphism at different nt positions. Moreover, the occurrence of substitutions was not random along the strand. Taken together, these facts contradict the traditional hypothesis of random mutagenesis as a principal source of KPI gene polymorphism. The experimentally found mosaic structure of KPI genes in both plants studied is consistent with the hypothesis suggesting recombination of ancestral genes. The same mechanism was proposed earlier for other resistance-conferring genes in the nightshade family (Solanaceae). Based on the data obtained, we searched for potential motifs of site-specific binding with plant DNA recombinases. During this work, we analyzed the sequencing data reported by the Potato Genome Sequencing Consortium (PGSC), 2011 and found considerable inconsistence of their data concerning the number, location, and orientation of KPI genes of groups A and B. The key role of recombination rather than random point mutagenesis in KPI polymorphism was demonstrated for the first time. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  15. Measuring Symmetry, Asymmetry and Randomness in Neural Network Connectivity

    PubMed Central

    Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni

    2014-01-01

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity. PMID:25006663

  16. Measuring symmetry, asymmetry and randomness in neural network connectivity.

    PubMed

    Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni

    2014-01-01

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity.
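
    One simple way to reduce a weight matrix to a single symmetry number, sketched below, is the correlation between the off-diagonal entries of W and its transpose; this is only an illustration of the idea and is not necessarily the measure defined by the authors.

      # Sketch of a single-parameter symmetry index for a connectivity matrix:
      # +1 for fully bidirectional (symmetric) weights, -1 for anti-symmetric ones,
      # near 0 for independent random weights.
      import numpy as np

      rng = np.random.default_rng(6)

      def symmetry_index(W):
          mask = ~np.eye(W.shape[0], dtype=bool)      # ignore self-connections
          a, b = W[mask], W.T[mask]
          a, b = a - a.mean(), b - b.mean()
          return float(a @ b / np.sqrt((a @ a) * (b @ b)))

      n = 200
      W_uniform = rng.uniform(0, 1, (n, n))
      W_gauss = rng.normal(0.5, 0.2, (n, n))
      W_sym = (W_gauss + W_gauss.T) / 2               # fully bidirectional motifs
      print("uniform random:", round(symmetry_index(W_uniform), 3))
      print("gaussian random:", round(symmetry_index(W_gauss), 3))
      print("symmetrized:", round(symmetry_index(W_sym), 3))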

  17. A recurrence matrix method for the analysis of longitudinal and torsional vibrations in non-uniform multibranch beams with variable boundary conditions

    NASA Technical Reports Server (NTRS)

    Davis, R. B.; Stephens, M. V.

    1974-01-01

    An approximate method for calculating the longitudinal and torsional natural frequencies and associated modal data of a beamlike, variable cross section multibranch structure is presented. The procedure described is the numerical integration of the first order differential equations that characterize the beam element in longitudinal motion and that satisfy the appropriate boundary conditions.

  18. Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    Miller, L.

    1974-01-01

    A two-year study of the major process variables associated with the manufacturing process for sealed, nickel-cadmium, aerospace cells is summarized. Effort was directed toward identifying the major process variables associated with a manufacturing process, experimentally assessing each variable's effect, and imposing the necessary changes (optimization) and controls for the critical process variables to improve results and uniformity. A critical process variable associated with the sintered nickel plaque manufacturing process was identified as the manual forming operation. Critical process variables identified with the positive electrode impregnation/polarization process were impregnation solution temperature, free acid content, vacuum impregnation, and sintered plaque strength. Positive and negative electrodes were identified as a major source of carbonate contamination in sealed cells.

  19. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  20. Security authentication with a three-dimensional optical phase code using random forest classifier: an overview

    NASA Astrophysics Data System (ADS)

    Markman, Adam; Carnicer, Artur; Javidi, Bahram

    2017-05-01

    We overview our recent work [1] on utilizing three-dimensional (3D) optical phase codes for object authentication using the random forest classifier. A simple 3D optical phase code (OPC) is generated by combining multiple diffusers and glass slides. This tag is then placed on a quick-response (QR) code, which is a barcode capable of storing information and can be scanned under non-uniform illumination conditions, rotation, and slight degradation. A coherent light source illuminates the OPC and the transmitted light is captured by a CCD to record the unique signature. Feature extraction on the signature is performed and inputted into a pre-trained random-forest classifier for authentication.
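
    The classification stage can be sketched roughly as follows: a few summary features are extracted from simulated optical signatures and passed to scikit-learn's random forest; the signature model and the features here are toy stand-ins for the recorded CCD data.

      # Sketch of random-forest authentication on features of simulated signatures.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)
      speckle = rng.random((32, 32))           # fixed pattern added by the genuine OPC

      def features(img):
          return [img.mean(), img.std(), np.abs(np.fft.fft2(img)).mean(),
                  (img > img.mean()).mean()]

      def make_signature(authentic):
          base = rng.normal(0.5, 0.05, (32, 32))
          if authentic:
              base += 0.2 * speckle
          return np.clip(base + rng.normal(0, 0.02, (32, 32)), 0, 1)

      X, y = [], []
      for label in (0, 1):
          for _ in range(300):
              X.append(features(make_signature(bool(label))))
              y.append(label)

      X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("hold-out accuracy:", clf.score(X_te, y_te))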

  1. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  2. Travtek Global Evaluation And Executive Summary

    DOT National Transportation Integrated Search

    2000-09-01

    Several measures have been carried out in the Long Term Pavement Performance (LTPP) Program to ensure uniform distress data collection and interpretation. However, no systematic evaluation has been done to quantify the variability (bias and precision...

  3. Age and choice in health insurance: evidence from a discrete choice experiment.

    PubMed

    Becker, Karolin; Zweifel, Peter

    2008-01-01

    A uniform package of benefits and uniform cost sharing are elements of regulation inherent in most social health insurance systems. Both elements risk burdening the population with a welfare loss if preferences for risk and insurance attributes differ. This suggests the introduction of more choice in social health insurance packages may be advantageous; however, it is widely believed that this would not benefit the elderly. The objective of this study was to examine the relationship between age and willingness to pay (WTP) for additional options in Swiss social health insurance. A representative telephone survey of 1000 people aged >24 years living in the German- and French-speaking parts of Switzerland was conducted. Participants were asked to compare the status quo (i.e. their current insurance contract) with ten hypothetical alternatives. In addition, participants were asked questions concerning utilization of healthcare services; overall satisfaction with the healthcare system, insurer and insurance policy; and a general preference for new elements in the insurance package. Socioeconomic variables surveyed were age, sex, total household income, education (seven categories ranging from primary school to university degree), place of residence, occupation, and marital status. A discrete choice experiment was developed using six attributes (deductibles, co-payment, access to alternative medicines, medication choice, access to innovation, and monthly premium) that are currently in debate within the context of Swiss health insurance. These attributes have been shown to be important in the choice of insurance contract. Using statistical design optimization procedures, the number of choice sets was reduced to 27 and randomly split into three groups. One choice was included twice to test for consistency. Two random effects probit models were developed: a simple model where marginal utilities and WTP values were not allowed to vary according to socioeconomic characteristics, and a more complex model where the values were permitted to depend on socioeconomic variables. All chosen elements proved relevant for choice in the simple model. Accounting for socioeconomic characteristics in the comprehensive model reveals preference heterogeneity for contract attributes, but also for the propensity to consider deviating from the status quo and choosing an alternative health insurance contract. The findings suggest that while the elderly do exhibit a stronger status quo bias than younger age groups, they require less rather than more specific compensation for selected cutbacks, indicating a potential for contracts that induce self-rationing in return for lower premiums.

  4. Trimming a hazard logic tree with a new model-order-reduction technique

    USGS Publications Warehouse

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
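
    The tornado-diagram screening step can be sketched as below: each branch choice is swung between its extremes while the others sit at a base case, and branches are ranked by the induced change in loss; the parameter names and the loss surrogate are invented for illustration.

      # Sketch of tornado-diagram screening of logic-tree branch choices.
      import numpy as np

      base = {"slip_model": 0.0, "mag_scaling": 0.0, "prob_model": 0.0, "gmpe": 0.0}
      extremes = {k: (-1.0, 1.0) for k in base}       # normalized low/high branch choices

      def portfolio_loss(p):                          # toy loss surrogate
          return (100 + 25 * p["gmpe"] + 10 * p["mag_scaling"]
                  + 3 * p["slip_model"] + 1 * p["prob_model"])

      swings = {}
      for name, (lo, hi) in extremes.items():
          p_lo, p_hi = dict(base), dict(base)
          p_lo[name], p_hi[name] = lo, hi
          swings[name] = abs(portfolio_loss(p_hi) - portfolio_loss(p_lo))

      for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
          print(f"{name:12s} swing = {swing:.1f}")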

  5. Cellulose Fibre-Reinforced Biofoam for Structural Applications

    PubMed Central

    Obradovic, Jasmina; Voutilainen, Mikko; Virtanen, Pasi; Lassila, Lippo; Fardim, Pedro

    2017-01-01

    Traditionally, polymers and macromolecular components used in the foam industry are mostly derived from petroleum. The current transition to a bio-economy creates demand for the use of more renewable feedstocks. Soybean oil is a vegetable oil, composed mainly of triglycerides, that is a suitable material for foam production. In this study, acrylated epoxidized soybean oil and variable amounts of cellulose fibres were used in the production of bio-based foam. The developed macroporous bio-based architectures were characterised by several techniques, including porosity measurements, nanoindentation testing, scanning electron microscopy, and thermogravimetric analysis. It was found that the introduction of cellulose fibres during the foaming process was necessary to create the three-dimensional polymer foams. Cellulose fibres have potential as a foam stabiliser because they obstruct the drainage of liquid from the film region at these gas-oil interfaces while simultaneously acting as a reinforcing agent in the polymer foam. The resulting foams possessed a porosity of approximately 56%, and the incorporation of cellulose fibres did not affect thermal behaviour. Scanning electron micrographs showed randomly oriented pores with irregular shapes and non-uniform pore size throughout the samples. PMID:28772981

  6. Modelling opinion formation driven communities in social networks

    NASA Astrophysics Data System (ADS)

    Iñiguez, Gerardo; Barrio, Rafael A.; Kertész, János; Kaski, Kimmo K.

    2011-09-01

    In a previous paper we proposed a model to study the dynamics of opinion formation in human societies by a co-evolution process involving two distinct time scales of fast transaction and slower network evolution dynamics. In the transaction dynamics we take into account short range interactions as discussions between individuals and long range interactions to describe the attitude to the overall mood of society. The latter is handled by a uniformly distributed parameter α, assigned randomly to each individual, as quenched personal bias. The network evolution dynamics is realised by rewiring the societal network due to state variable changes as a result of transaction dynamics. The main consequence of this complex dynamics is that communities emerge in the social network for a range of values in the ratio between time scales. In this paper we focus our attention on the attitude parameter α and its influence on the conformation of opinion and the size of the resulting communities. We present numerical studies and extract interesting features of the model that can be interpreted in terms of social behaviour.

  7. Insights into the latent multinomial model through mark-resight data on female grizzly bears with cubs-of-the-year

    USGS Publications Warehouse

    Higgs, Megan D.; Link, William; White, Gary C.; Haroldson, Mark A.; Bjornlie, Daniel D.

    2013-01-01

    Mark-resight designs for estimation of population abundance are common and attractive to researchers. However, inference from such designs is very limited when faced with sparse data, either from a low number of marked animals, a low probability of detection, or both. In the Greater Yellowstone Ecosystem, yearly mark-resight data are collected for female grizzly bears with cubs-of-the-year (FCOY), and inference suffers from both limitations. To overcome difficulties due to sparseness, we assume homogeneity in sighting probabilities over 16 years of bi-annual aerial surveys. We model counts of marked and unmarked animals as multinomial random variables, using the capture frequencies of marked animals for inference about the latent multinomial frequencies for unmarked animals. We discuss undesirable behavior of the commonly used discrete uniform prior distribution on the population size parameter and provide OpenBUGS code for fitting such models. The application provides valuable insights into subtleties of implementing Bayesian inference for latent multinomial models. We tie the discussion to our application, though the insights are broadly useful for applications of the latent multinomial model.
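
    The effect of the prior on population size can be illustrated with a toy grid-based posterior, which is much simpler than the paper's latent multinomial model: the number of sighted animals is treated as Binomial(N, p) with p known, and a discrete uniform prior on N is contrasted with a 1/N prior. All numbers are illustrative.

```python
# Toy sketch (not the paper's latent multinomial model): posterior over
# abundance N when the sighting count y is Binomial(N, p) with p known,
# comparing a discrete uniform prior on N with a 1/N prior.
import numpy as np
from scipy.stats import binom

y, p = 12, 0.3                       # hypothetical sightings and detection prob.
N_grid = np.arange(y, 501)           # N cannot be smaller than the count

like = binom.pmf(y, N_grid, p)
post_uniform = like / like.sum()                 # discrete uniform prior
post_recip = like / N_grid
post_recip /= post_recip.sum()                   # 1/N prior

print("posterior mean of N, uniform prior:", (N_grid * post_uniform).sum())
print("posterior mean of N, 1/N prior:    ", (N_grid * post_recip).sum())
# The heavier right tail under the uniform prior illustrates the kind of
# undesirable behaviour the authors discuss for sparse data.
```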

  8. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value for the independent variable T was found (T=1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.

  9. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean [mu] and variance [sigma][superscript 2], the expected value of the sample variance is [sigma][superscript 2]. The generalization justifies the use of the usual standard error of the sample mean in possibly…

  10. Improving global CD uniformity by optimizing post-exposure bake and develop sequences

    NASA Astrophysics Data System (ADS)

    Osborne, Stephen P.; Mueller, Mark; Lem, Homer; Reyland, David; Baik, KiHo

    2003-12-01

    Improvements in the final uniformity of masks can be shrouded by error contributions from many sources. The final Global CD Uniformity (GCDU) of a mask is degraded by individual contributions of the writing tool, the Post Applied Bake (PAB), the Post Exposure Bake (PEB), the Develop sequence and the Etch step. Final global uniformity will improve by isolating and minimizing the variability of the PEB and Develop. We achieved this de-coupling of the PEB and Develop process from the whole process stream by using "dark loss" which is the loss of unexposed resist during the develop process. We confirmed a correspondence between Angstroms of dark loss and nanometer sized deviations in the chrome CD. A plate with a distinctive dark loss pattern was related to a nearly identical pattern in the chrome CD. This pattern was verified to have originated during the PEB process and displayed a [Δ(Final CD)/Δ(Dark Loss)] ratio of 6 for TOK REAP200 resist. Previous papers have reported a sensitive linkage between Angstroms of dark loss and nanometers in the final uniformity of the written plate. These initial studies reported using this method to improve the PAB of resists for greater uniformity of sensitivity and contrast. Similarly, this paper demonstrates an outstanding optimization of PEB and Develop processes.

  11. Cell-centered high-order hyperbolic finite volume method for diffusion equation on unstructured grids

    NASA Astrophysics Data System (ADS)

    Lee, Euntaek; Ahn, Hyung Taek; Luo, Hong

    2018-02-01

    We apply a hyperbolic cell-centered finite volume method to solve a steady diffusion equation on unstructured meshes. This method, originally proposed by Nishikawa using a node-centered finite volume method, reformulates the elliptic nature of viscous fluxes into a set of augmented equations that makes the entire system hyperbolic. We introduce an efficient and accurate solution strategy for the cell-centered finite volume method. To obtain high-order accuracy for both solution and gradient variables, we use a successive order solution reconstruction: constant, linear, and quadratic (k-exact) reconstruction with an efficient reconstruction stencil, a so-called wrapping stencil. By virtue of the cell-centered scheme, the source term evaluation was greatly simplified regardless of the solution order. For uniform schemes, we obtain the same order of accuracy, i.e., first, second, and third orders, for both the solution and its gradient variables. For hybrid schemes, reusing the gradient-variable information in the solution-variable reconstruction yields one additional order of accuracy, i.e., second, third, and fourth orders, for the solution variable with less computational work than the uniform schemes require. In general, the hyperbolic method can be an effective solution technique for diffusion problems, but instability is also observed for cases with discontinuous diffusion coefficients, which calls for further investigation of monotonicity-preserving hyperbolic diffusion methods.

  12. Simulation of Ground-Water Flow in the Shenandoah Valley, Virginia and West Virginia, Using Variable-Direction Anisotropy in Hydraulic Conductivity to Represent Bedrock Structure

    USGS Publications Warehouse

    Yager, Richard M.; Southworth, Scott C.; Voss, Clifford I.

    2008-01-01

    Ground-water flow was simulated using variable-direction anisotropy in hydraulic conductivity to represent the folded, fractured sedimentary rocks that underlie the Shenandoah Valley in Virginia and West Virginia. The anisotropy is a consequence of the orientations of fractures that provide preferential flow paths through the rock, such that the direction of maximum hydraulic conductivity is oriented within bedding planes, which generally strike N30 deg E; the direction of minimum hydraulic conductivity is perpendicular to the bedding. The finite-element model SUTRA was used to specify variable directions of the hydraulic-conductivity tensor in order to represent changes in the strike and dip of the bedding throughout the valley. The folded rocks in the valley are collectively referred to as the Massanutten synclinorium, which contains about a 5-km thick section of clastic and carbonate rocks. For the model, the bedrock was divided into four units: a 300-m thick top unit with 10 equally spaced layers through which most ground water is assumed to flow, and three lower units each containing 5 layers of increasing thickness that correspond to the three major rock units in the valley: clastic, carbonate and metamorphic rocks. A separate zone in the carbonate rocks that is overlain by colluvial gravel - called the western-toe carbonate unit - was also distinguished. Hydraulic-conductivity values were estimated through model calibration for each of the four rock units, using data from 354 wells and 23 streamflow-gaging stations. Conductivity tensors for metamorphic and western-toe carbonate rocks were assumed to be isotropic, while conductivity tensors for carbonate and clastic rocks were assumed to be anisotropic. The directions of the conductivity tensor for carbonate and clastic rocks were interpolated for each mesh element from a stack of 'form surfaces' that provided a three-dimensional representation of bedrock structure. Model simulations were run with (1) variable strike and dip, in which conductivity tensors were aligned with the strike and dip of the bedding, and (2) uniform strike in which conductivity tensors were assumed to be horizontally isotropic with the maximum conductivity direction parallel to the N30 deg E axis of the valley and the minimum conductivity direction perpendicular to the horizontal plane. Simulated flow penetrated deeper into the aquifer system with the uniform-strike tensor than with the variable-strike-and-dip tensor. Sensitivity analyses suggest that additional information on recharge rates would increase confidence in the estimated parameter values. Two applications of the model were conducted - the first, to determine depth of recent ground-water flow by simulating the distribution of ground-water ages, showed that most shallow ground water is less than 10 years old. Ground-water age distributions computed by variable-strike-and-dip and uniform-strike models were similar, but differed beneath Massanutten Mountain in the center of the valley. The variable-strike-and-dip model simulated flow from west to east parallel to the bedding of the carbonate rocks beneath Massanutten Mountain, while the uniform-strike model, in which flow was largely controlled by topography, simulated this same area as an east-west ground-water divide. The second application, which delineated capture zones for selected well fields in the valley, showed that capture zones delineated with both models were similar in plan view, but differed in vertical extent. 
Capture zones simulated by the variable-strike-and-dip model extended downdip with the bedding of carbonate rock and were relatively shallow, while those simulated by the uniform-strike model extended to the bottom of the flow system, which is unrealistic. These results suggest that simulations of ground-water flow through folded fractured rock can be constructed using SUTRA to represent variable orientations of the hydraulic-conductivity tensor and produce a

  13. Do little interactions get lost in dark random forests?

    PubMed

    Wright, Marvin N; Ziegler, Andreas; König, Inke R

    2016-03-31

    Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions we mean the ability to identify a variable that acts through an interaction with another one, while detection is the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, these effects were masked by marginal effects of other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. Pairwise importance measures performed about equally, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low. In almost all scenarios the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are generally capable of capturing gene-gene interactions, but current variable importance measures are unable to detect them as interactions. In most of the cases, interactions are masked by marginal effects and interactions cannot be differentiated from marginal effects. Consequently, caution is warranted when claiming that random forests uncover interactions.
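
    A small simulation in the spirit of the study, assuming scikit-learn: generate an outcome driven purely by a two-variable (XOR-like) interaction plus noise variables, fit a random forest, and inspect the Gini importances. The importances can rank the two interacting variables highly (the interaction is "captured") without indicating that they interact.

```python
# Small simulation in the spirit of the study: a pure two-variable interaction
# (XOR-like) with noise variables. Gini importance may rank x0 and x1 highly
# ("captured") without revealing that they act through an interaction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, p = 2000, 10
X = rng.integers(0, 2, size=(n, p)).astype(float)    # binary "genotypes"
y = np.logical_xor(X[:, 0], X[:, 1]).astype(int)      # effect only via interaction

rf = RandomForestClassifier(n_estimators=500, max_features=3, random_state=0)
rf.fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]
for j in order[:4]:
    print(f"x{j}: Gini importance = {rf.feature_importances_[j]:.3f}")
```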

  14. The influence of an uncertain force environment on reshaping trial-to-trial motor variability.

    PubMed

    Izawa, Jun; Yoshioka, Toshinori; Osu, Rieko

    2014-09-10

    Motor memory is updated to generate ideal movements in a novel environment. When the environment changes every trial randomly, how does the brain incorporate this uncertainty into motor memory? To investigate how the brain adapts to an uncertain environment, we considered a reach adaptation protocol where individuals practiced moving in a force field where a noise was injected. After they had adapted, we measured the trial-to-trial variability in the temporal profiles of the produced hand force. We found that the motor variability was significantly magnified by the adaptation to the random force field. Temporal profiles of the motor variance were significantly dissociable between two different types of random force fields experienced. A model-based analysis suggests that the variability is generated by noise in the gains of the internal model. It further suggests that the trial-to-trial motor variability magnified by the adaptation in a random force field is generated by the uncertainty of the internal model formed in the brain as a result of the adaptation.

  15. Superconducting matrix fault current limiter with current-driven trigger mechanism

    DOEpatents

    Yuan; Xing

    2008-04-15

    A modular and scalable Matrix-type Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. An inductor is connected in series with the trigger superconductor in the trigger matrix and physically surrounds the superconductor. The current surge during a fault will generate a trigger magnetic field in the series inductor to cause fast and uniform quenching of the trigger superconductor to significantly reduce burnout risk due to superconductor material non-uniformity.

  16. Dynamical system modeling to simulate donor T cell response to whole exome sequencing-derived recipient peptides: Understanding randomness in alloreactivity incidence following stem cell transplantation.

    PubMed

    Koparde, Vishal; Abdul Razzaq, Badar; Suntum, Tara; Sabo, Roy; Scalora, Allison; Serrano, Myrna; Jameson-Lee, Max; Hall, Charles; Kobulnicky, David; Sheth, Nihar; Feltz, Juliana; Contaifer, Daniel; Wijesinghe, Dayanjan; Reed, Jason; Roberts, Catherine; Qayyum, Rehan; Buck, Gregory; Neale, Michael; Toor, Amir

    2017-01-01

    Quantitative relationship between the magnitude of variation in minor histocompatibility antigens (mHA) and graft versus host disease (GVHD) pathophysiology in stem cell transplant (SCT) donor-recipient pairs (DRP) is not established. In order to elucidate this relationship, whole exome sequencing (WES) was performed on 27 HLA matched related (MRD), & 50 unrelated donors (URD), to identify nonsynonymous single nucleotide polymorphisms (SNPs). An average 2,463 SNPs were identified in MRD, and 4,287 in URD DRP (p<0.01); resulting peptide antigens that may be presented on HLA class I molecules in each DRP were derived in silico (NetMHCpan ver2.0) and the tissue expression of proteins these were derived from determined (GTex). MRD DRP had an average 3,670 HLA-binding-alloreactive peptides, putative mHA (pmHA) with an IC50 of <500 nM, and URD, had 5,386 (p<0.01). To simulate an alloreactive donor cytotoxic T cell response, the array of pmHA in each patient was considered as an operator matrix modifying a hypothetical cytotoxic T cell clonal vector matrix; each responding T cell clone's proliferation was determined by the logistic equation of growth, accounting for HLA binding affinity and tissue expression of each alloreactive peptide. The resulting simulated organ-specific alloreactive T cell clonal growth revealed marked variability, with the T cell count differences spanning orders of magnitude between different DRP. Despite an estimated, uniform set of constants used in the model for all DRP, and a heterogeneously treated group of patients, higher total and organ-specific T cell counts were associated with cumulative incidence of moderate to severe GVHD in recipients. In conclusion, exome wide sequence differences and the variable alloreactive peptide binding to HLA in each DRP yields a large range of possible alloreactive donor T cell responses. Our findings also help understand the apparent randomness observed in the development of alloimmune responses.
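
    The clonal simulation step can be sketched as follows, assuming only that each clone grows by a discrete-time logistic update with a rate scaled by HLA binding affinity and tissue expression; all constants here are placeholders, not the values used in the paper.

```python
# Minimal sketch of the clonal simulation idea: each alloreactive T cell clone
# grows logistically, with a rate scaled by HLA binding affinity (IC50) and
# tissue expression. All constants are placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(2)
n_clones = 1000
ic50 = rng.uniform(1, 500, n_clones)          # nM; lower = stronger binding
expression = rng.uniform(0.1, 1.0, n_clones)  # relative tissue expression

K = 1e6                                        # carrying capacity per clone
r = 0.5 * expression / ic50 * 1.0              # affinity- and expression-scaled rate
N = np.ones(n_clones)                          # start from a single cell per clone

for _ in range(200):                           # discrete-time logistic update
    N = N + r * N * (1.0 - N / K)

print(f"total simulated T cell count: {N.sum():.3e}")
print(f"range across clones: {N.min():.1f} - {N.max():.3e}")
```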

  17. Dynamical system modeling to simulate donor T cell response to whole exome sequencing-derived recipient peptides: Understanding randomness in alloreactivity incidence following stem cell transplantation

    PubMed Central

    Suntum, Tara; Sabo, Roy; Scalora, Allison; Serrano, Myrna; Jameson-Lee, Max; Hall, Charles; Kobulnicky, David; Sheth, Nihar; Feltz, Juliana; Contaifer, Daniel; Wijesinghe, Dayanjan; Reed, Jason; Roberts, Catherine; Qayyum, Rehan; Buck, Gregory; Neale, Michael

    2017-01-01

    Quantitative relationship between the magnitude of variation in minor histocompatibility antigens (mHA) and graft versus host disease (GVHD) pathophysiology in stem cell transplant (SCT) donor-recipient pairs (DRP) is not established. In order to elucidate this relationship, whole exome sequencing (WES) was performed on 27 HLA matched related (MRD), & 50 unrelated donors (URD), to identify nonsynonymous single nucleotide polymorphisms (SNPs). An average 2,463 SNPs were identified in MRD, and 4,287 in URD DRP (p<0.01); resulting peptide antigens that may be presented on HLA class I molecules in each DRP were derived in silico (NetMHCpan ver2.0) and the tissue expression of proteins these were derived from determined (GTex). MRD DRP had an average 3,670 HLA-binding-alloreactive peptides, putative mHA (pmHA) with an IC50 of <500 nM, and URD, had 5,386 (p<0.01). To simulate an alloreactive donor cytotoxic T cell response, the array of pmHA in each patient was considered as an operator matrix modifying a hypothetical cytotoxic T cell clonal vector matrix; each responding T cell clone’s proliferation was determined by the logistic equation of growth, accounting for HLA binding affinity and tissue expression of each alloreactive peptide. The resulting simulated organ-specific alloreactive T cell clonal growth revealed marked variability, with the T cell count differences spanning orders of magnitude between different DRP. Despite an estimated, uniform set of constants used in the model for all DRP, and a heterogeneously treated group of patients, higher total and organ-specific T cell counts were associated with cumulative incidence of moderate to severe GVHD in recipients. In conclusion, exome wide sequence differences and the variable alloreactive peptide binding to HLA in each DRP yields a large range of possible alloreactive donor T cell responses. Our findings also help understand the apparent randomness observed in the development of alloimmune responses. PMID:29194460

  18. INDUCTION OF RABBIT ANTIBODY WITH MOLECULAR UNIFORMITY AFTER IMMUNIZATION WITH GROUP C STREPTOCOCCI

    PubMed Central

    Eichmann, Klaus; Lackland, Henry; Hood, Leroy; Krause, Richard M.

    1970-01-01

    Antibodies with uniform properties may occur in rabbits after immunization with Group C streptococci. These precipitating antibodies possess specificity for the group-specific carbohydrate. Not uncommonly, their concentration is between 20 and 40 mg/ml of antiserum. Evidence for molecular uniformity in the case of one of these antibodies, described in detail here, includes: individual antigenic specificity; monodisperse distribution of the light chains by alkaline urea polyacrylamide disc electrophoresis; and a single amino acid in each of the first three N-terminal positions of the light chains. When the amino acid sequence of rabbit antibody b+ light chains (κ type) are aligned against their human κ counterparts, a definite homology is observed between the N-terminus of the human and the rabbit variable region. PMID:5409946

  19. Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding

    PubMed Central

    2018-01-01

    Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669

  20. Development of an efficient computer code to solve the time-dependent Navier-Stokes equations. [for predicting viscous flow fields about lifting bodies

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.; Oatway, T. P.

    1975-01-01

    A research effort was conducted with the goal of reducing computer time of a Navier Stokes Computer Code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform timestep procedure. The non-uniform time-step requires updating of a zone only as often as required by its own stability criteria or that of its immediate neighbors. In the uniform timestep scheme each zone is updated as often as required by the least stable zone of the finite difference mesh. Because of less frequent update of program variables it was expected that the nonuniform timestep would result in a reduction of execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.

  1. Evaluation of fuel preparation systems for lean premixing-prevaporizing combustors

    NASA Technical Reports Server (NTRS)

    Dodds, W. J.; Ekstedt, E. E.

    1985-01-01

    A series of experiments was carried out in order to produce design data for a premixing prevaporizing fuel-air mixture preparation system for aircraft gas turbine engine combustors. The fuel-air mixture uniformity of four different system design concepts was evaluated over a range of conditions representing the cruise operation of a modern commercial turbofan engine. Operating conditions including pressure, temperature, fuel-to-air ratio, and velocity, exhibited no clear effect on mixture uniformity of systems using pressure-atomizing fuel nozzles and large-scale mixing devices. However, the performance of systems using atomizing fuel nozzles and large-scale mixing devices was found to be sensitive to operating conditions. Variations in system design variables were also evaluated and correlated. Mixing uniformity was found to improve with system length, pressure drop, and the number of fuel injection points per unit area. A premixing system capable of providing mixing uniformity to within 15 percent over a typical range of cruise operating conditions is demonstrated.

  2. Subgrade variability on the Ohio SHRP test road

    DOT National Transportation Integrated Search

    1999-01-01

    This paper documents the extent to which subgrade uniformity was achieved on the Ohio SHRP test road. As construction proceeded, considerably more subgrade undercutting was required than originally anticipated. Much of the undercutting was due to the...

  3. Timing in a Variable Interval Procedure: Evidence for a Memory Singularity

    PubMed Central

    Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn

    2013-01-01

    Rats were trained in either a 30s peak-interval procedure, or a 15–45s variable interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak-shaped response functions centered around 30s, with the uniform group having an earlier and broader peak response function and rats in the ramping group having a later peak function as compared to the single duration group. The changes in these mean functions, as well as the statistics from single-trial analyses, can be better captured by a model of timing in which memory is represented by a single, average delay to reinforcement than by one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783

  4. Evaluation of acoustic telemetry grids for determining aquatic animal movement and survival

    USGS Publications Warehouse

    Kraus, Richard T.; Holbrook, Christopher; Vandergoot, Christopher; Stewart, Taylor R.; Faust, Matthew D.; Watkinson, Douglas A.; Charles, Colin; Pegg, Mark; Enders, Eva C.; Krueger, Charles C.

    2018-01-01

    Acoustic telemetry studies have frequently prioritized linear configurations of hydrophone receivers, such as perpendicular from shorelines or across rivers, to detect the presence of tagged aquatic animals. This approach introduces unknown bias when receivers are stationed for convenience at geographic bottlenecks (e.g., at the mouth of an embayment or between islands) as opposed to deployments following a statistical sampling design. We evaluated two-dimensional acoustic receiver arrays (grids: receivers spread uniformly across space) as an alternative approach to provide estimates of survival, movement, and habitat use. Performance of variably-spaced receiver grids (5–25 km spacing) was evaluated by simulating (1) animal tracks as correlated random walks (speed: 0.1–0.9 m/s; turning angle standard deviation: 5–30 degrees); (2) variable tag transmission intervals along each track (nominal delay: 15–300 seconds); and (3) probability of detection of each transmission based on logistic detection range curves (midpoint: 200–1500 m). From simulations, we quantified i) time between successive detections on any receiver (detection time), ii) time between successive detections on different receivers (transit time), and iii) distance between successive detections on different receivers (transit distance). In the most restrictive detection range scenario (200 m), the 95th percentile of transit time was 3.2 days at 5 km grid spacing, 5.7 days at 7 km, and 15.2 days at 25 km; for the 1500 m detection range scenario, it was 0.1 days at 5 km, 0.5 days at 7 km, and 10.8 days at 25 km. These values represented upper bounds on the expected maximum time that an animal could go undetected. Comparison of the simulations with pilot studies on three fishes (walleye Sander vitreus, common carp Cyprinus carpio, and channel catfish Ictalurus punctatus) from two independent large lake ecosystems (lakes Erie and Winnipeg) revealed shorter detection and transit times than what simulations predicted. By spreading effort uniformly across space, grids can improve understanding of fish migration over the commonly employed receiver line approach, but at increased time cost for maintaining grids.
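
    One ingredient of the simulation can be sketched as follows: a correlated random walk with fixed speed and turning-angle standard deviation, a transmission per time step, and detection by the nearest receiver on a uniform grid via a logistic range curve. All parameter values below are illustrative, not those of the study.

```python
# Minimal sketch: correlated random walk, one transmission per step, and
# detection via a logistic range curve on a uniform receiver grid.
# Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
speed, turn_sd = 0.5, np.deg2rad(15)       # m/s, turning-angle SD per step
dt, n_steps = 60.0, 5000                    # one transmission per 60 s step

# 5 km receiver grid over a 50 km x 50 km area
gx, gy = np.meshgrid(np.arange(0, 50001, 5000), np.arange(0, 50001, 5000))
receivers = np.column_stack([gx.ravel(), gy.ravel()])

heading = rng.uniform(0, 2 * np.pi)
pos = np.array([25000.0, 25000.0])
detections = 0
for _ in range(n_steps):
    heading += rng.normal(0, turn_sd)
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    d = np.min(np.linalg.norm(receivers - pos, axis=1))
    p_detect = 1.0 / (1.0 + np.exp((d - 1000.0) / 200.0))   # midpoint 1000 m
    detections += rng.random() < p_detect

print(f"detected {detections} of {n_steps} transmissions "
      f"({100 * detections / n_steps:.1f}%)")
```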

  5. Synchronization in oscillator networks with delayed coupling: a stability criterion.

    PubMed

    Earl, Matthew G; Strogatz, Steven H

    2003-03-01

    We derive a stability criterion for the synchronous state in networks of identical phase oscillators with delayed coupling. The criterion applies to any network (whether regular or random, low dimensional or high dimensional, directed or undirected) in which each oscillator receives delayed signals from k others, where k is uniform for all oscillators.

  6. Probabilistic pathway construction.

    PubMed

    Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha

    2011-07-01

    Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
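
    The uniformly random selection scheme can be illustrated with a toy sketch: starting from a target metabolite, repeatedly pick a producing reaction uniformly at random until every remaining precursor is a native host metabolite. The reaction set and metabolite names below are hypothetical and much simpler than a genome-scale network.

```python
# Toy sketch of uniformly random pathway construction (one of the three
# selection schemes discussed): resolve the target back to native metabolites
# by picking producing reactions uniformly at random. Reactions are hypothetical.
import random

random.seed(4)
native = {"glucose", "pyruvate", "acetyl-CoA"}
# reaction name -> (set of substrates, product)
reactions = {
    "R1": ({"glucose"}, "intermediate_A"),
    "R2": ({"pyruvate"}, "intermediate_A"),
    "R3": ({"intermediate_A", "acetyl-CoA"}, "intermediate_B"),
    "R4": ({"intermediate_B"}, "target_metabolite"),
}

def build_pathway(target, max_iter=100):
    pathway, unresolved = [], [target]
    for _ in range(max_iter):
        if not unresolved:
            return pathway
        met = unresolved.pop()
        producers = [r for r, (subs, prod) in reactions.items() if prod == met]
        if not producers:
            return None                      # dead end: no known producer
        r = random.choice(producers)         # uniformly random selection
        pathway.append(r)
        unresolved.extend(s for s in reactions[r][0] if s not in native)
    return None

print(build_pathway("target_metabolite"))    # e.g. ['R4', 'R3', 'R1']
```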

  7. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.

  8. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems.

    PubMed

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
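
    For the α=2 case, the LP relaxation itself is easy to set up directly, for example with scipy's linprog: minimise the sum of vertex variables subject to x_i + x_j ≥ 1 for every edge and 0 ≤ x_i ≤ 1 on an Erdős–Rényi graph. The graph size and mean degree below are illustrative.

```python
# Minimal sketch of the alpha=2 case: LP relaxation of minimum vertex cover on
# an Erdos-Renyi graph. LP vertex-cover solutions are half-integral; below the
# critical mean degree they are typically (mostly) integral.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, mean_deg = 200, 2.0                           # vertices, mean degree
p = mean_deg / (n - 1)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

A_ub = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    A_ub[k, i] = A_ub[k, j] = -1.0               # -(x_i + x_j) <= -1
b_ub = -np.ones(len(edges))

res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n,
              method="highs")
print(f"LP optimum: {res.fun:.2f} on {len(edges)} edges")
print(f"half-integral variables: {np.sum(np.isclose(res.x, 0.5))}")
```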

  9. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.

  10. A comparison of microscopic ink characteristics of 35 commercially available surgical margin inks.

    PubMed

    Milovancev, Milan; Löhr, Christiane V; Bildfell, Robert J; Gelberg, Howard B; Heidel, Jerry R; Valentine, Beth A

    2013-11-01

    To compare microscopic characteristics of commercially available surgical margin inks used for surgical pathology specimens. Prospective in vitro study. Thirty-five different surgical margin inks (black, blue, green, orange, red, violet, and yellow from 5 different manufacturers). Inks were applied to uniform, single-source, canine cadaveric full-thickness ventral abdominal tissue blocks. Tissue blocks and ink manufacturers were randomly paired and each color was applied to a length of the cut tissue margin. After drying, tissues were fixed in formalin, and 3 radial slices were obtained from each color section and processed for routine histologic evaluation, yielding 105 randomly numbered slides with each manufacturer's color represented in triplicate. Slides were evaluated by 5 blinded, board-certified veterinary anatomic pathologists using a standardized scoring scheme. Statistical analyses were performed to evaluate for ink manufacturer effects on scores, correlation among different subjective variables, and pathologist agreement. Black and blue had the most consistently high scores whereas red and violet had the most consistently low overall scores, across all manufacturers. All colors tested, except yellow, had statistically significant differences in overall scores among individual manufacturers. Overall score was significantly correlated to all other subjective microscopic scores evaluated. The average Spearman correlation coefficient across the 10 pairwise comparisons of pathologists' overall ink scores was 0.60. There are statistically significant differences in microscopic ink characteristics among manufacturers, with a notable degree of inter-pathologist agreement. © Copyright 2013 by The American College of Veterinary Surgeons.
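
    The agreement statistic can be sketched as follows, assuming scipy and a placeholder score matrix (five raters scoring 35 inks): compute the Spearman correlation for each of the 10 rater pairs and average.

```python
# Minimal sketch of the agreement statistic: average Spearman correlation over
# all pairwise combinations of raters' overall ink scores. The score matrix is
# random placeholder data, not the study's ratings.
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
scores = rng.integers(1, 6, size=(35, 5))        # 35 inks scored 1-5 by 5 raters

rhos = [spearmanr(scores[:, a], scores[:, b])[0]
        for a, b in combinations(range(scores.shape[1]), 2)]
print(f"{len(rhos)} pairwise comparisons, mean Spearman rho = {np.mean(rhos):.2f}")
```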

  11. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ=0) and the pointwise measure concentrated at the maximum entropy distribution (Γ→∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α=(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤10).

  12. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through α-levels. We present a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients. We solve the problem by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. Using a weighting method, the problem reduces to a linear program with fuzzy coefficients, which we solve with a simplex method for fuzzy linear programming.

  13. Spatial pattern of Baccharis platypoda shrub as determined by sex and life stages

    NASA Astrophysics Data System (ADS)

    Fonseca, Darliana da Costa; de Oliveira, Marcio Leles Romarco; Pereira, Israel Marinho; Gonzaga, Anne Priscila Dias; de Moura, Cristiane Coelho; Machado, Evandro Luiz Mendonça

    2017-11-01

    Spatial patterns of dioecious species can be determined by their nutritional requirements and intraspecific competition, apart from being a response to environmental heterogeneity. The aim of the study was to evaluate the spatial pattern of populations of a dioecious shrub with respect to the sex and reproductive stage of individuals. Sampling was carried out in three areas located in the meridional portion of Serra do Espinhaço, wherein individuals of the studied species were mapped. The spatial pattern was determined through O-ring analysis and Ripley's K-function and the distribution of individuals' frequencies was verified through a chi-squared test. Populations in two areas showed an aggregate spatial pattern tending towards random or uniform according to the observed scale. Male and female adults presented an aggregate pattern at smaller scales, while random and uniform patterns were verified above 20 m for individuals of both sexes of the areas A2 and A3. Young individuals presented an aggregate pattern in all areas and spatial independence in relation to adult individuals, especially female plants. The interactions between individuals of both genders presented spatial independence with respect to spatial distribution. Baccharis platypoda showed characteristics in accordance with the spatial distribution of savannic and dioecious species, whereas the population was aggregated tending towards random at greater spatial scales. Young individuals showed an aggregated pattern at different scales compared to adults, without positive association between them. Female and male adult individuals presented similar characteristics, confirming that adult individuals at greater scales are randomly distributed despite their distinct preferences for environments with moisture variation.
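
    Ripley's K can be sketched with a naive estimator (no edge correction): compare K(r) with the Poisson expectation πr², where values above the expectation indicate aggregation and values below indicate uniformity. The coordinates below are simulated, not the shrub data.

```python
# Minimal sketch of Ripley's K without edge correction: K(r) above pi*r^2
# suggests aggregation, below suggests uniformity. Simulated coordinates only.
import numpy as np

rng = np.random.default_rng(7)
side, n = 100.0, 150                              # 100 m x 100 m plot
pts = rng.uniform(0, side, size=(n, 2))           # complete spatial randomness

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
lam = n / (side * side)                           # point intensity

for r in (5, 10, 20):
    K = np.sum(d <= r) / (lam * n)                # naive estimator, no edge correction
    print(f"r = {r:4.1f} m   K(r) = {K:8.1f}   pi r^2 = {np.pi * r**2:8.1f}")
```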

  14. Exploiting Data Missingness in Bayesian Network Modeling

    NASA Astrophysics Data System (ADS)

    Rodrigues de Morais, Sérgio; Aussem, Alex

    This paper proposes a framework built on the use of Bayesian networks (BN) for representing statistical dependencies between the existing random variables and additional dummy Boolean variables, which represent the presence/absence of the respective random variable value. We show how augmenting the BN with these additional variables helps pinpoint the mechanism through which missing data contributes to the classification task. The missing data mechanism is thus explicitly taken into account to predict the class variable using the data at hand. Extensive experiments on synthetic and real-world incomplete data sets reveal that the missingness information improves classification accuracy.

  15. Observed and simulated hydrologic response for a first-order catchment during extreme rainfall 3 years after wildfire disturbance

    USGS Publications Warehouse

    Ebel, Brian A.; Rengers, Francis K.; Tucker, Gregory E.

    2016-01-01

    Hydrologic response to extreme rainfall in disturbed landscapes is poorly understood because of the paucity of measurements. A unique opportunity presented itself when extreme rainfall in September 2013 fell on a headwater catchment (i.e., <1 ha) in Colorado, USA that had previously been burned by a wildfire in 2010. We compared measurements of soil-hydraulic properties, soil saturation from subsurface sensors, and estimated peak runoff during the extreme rainfall with numerical simulations of runoff generation and subsurface hydrologic response during this event. The simulations were used to explore differences in runoff generation between the wildfire-affected headwater catchment, a simulated unburned case, and for uniform versus spatially variable parameterizations of soil-hydraulic properties that affect infiltration and runoff generation in burned landscapes. Despite 3 years of elapsed time since the 2010 wildfire, observations and simulations pointed to substantial surface runoff generation in the wildfire-affected headwater catchment by the infiltration-excess mechanism while no surface runoff was generated in the unburned case. The surface runoff generation was the result of incomplete recovery of soil-hydraulic properties in the burned area, suggesting recovery takes longer than 3 years. Moreover, spatially variable soil-hydraulic property parameterizations produced longer duration but lower peak-flow infiltration-excess runoff, compared to uniform parameterization, which may have important hillslope sediment export and geomorphologic implications during long duration, extreme rainfall. The majority of the simulated surface runoff in the spatially variable cases came from connected near-channel contributing areas, which was a substantially smaller contributing area than the uniform simulations.

  16. Optimization of Heat Exchangers with Dimpled Surfaces to Improve the Performance in Thermoelectric Generators Using a Kriging Model

    NASA Astrophysics Data System (ADS)

    Li, Shuai; Wang, Yiping; Wang, Tao; Yang, Xue; Deng, Yadong; Su, Chuqi

    2017-05-01

    Thermoelectric generators (TEGs) have become a topic of interest for vehicle exhaust energy recovery. Electrical power generation is deeply influenced by temperature differences, temperature uniformity and topological structures of TEGs. When the dimpled surfaces are adopted in heat exchangers, the heat transfer rates can be augmented with a minimal pressure drop. However, the temperature distribution shows a large gradient along the flow direction which has adverse effects on the power generation. In the current study, the heat exchanger performance was studied in a computational fluid dynamics (CFD) model. The dimple depth, dimple print diameter, and channel height were chosen as design variables. The objective function was defined as a combination of average temperature, temperature uniformity and pressure loss. The optimal Latin hypercube method was used to determine the experiment points as a method of design of the experiment in order to analyze the sensitivity of the design variables. A Kriging surrogate model was built and verified according to the database resulting from the CFD simulation. A multi-island genetic algorithm was used to optimize the structure in the heat exchanger based on the surrogate model. The results showed that the average temperature of the heat exchanger was most sensitive to the dimple depth. The pressure loss and temperature uniformity were most sensitive to the channel rear height parameter, h2. With an optimal design of channel structure, the temperature uniformity can be greatly improved compared with the initial exchanger, and the additional pressure loss also increased.
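
    The surrogate-model step can be sketched with scikit-learn's Gaussian process regressor standing in for the Kriging model: fit it to a handful of CFD-like samples and search the prediction on a grid. The objective function and variable ranges are made up, and the paper used a multi-island genetic algorithm rather than a grid search.

```python
# Minimal sketch of the Kriging (Gaussian process) surrogate step: fit to a few
# CFD-like samples, then search the surrogate. Objective and ranges are made up.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(8)

def cfd_objective(x):               # stand-in for an expensive CFD evaluation
    depth, height = x[..., 0], x[..., 1]
    return (depth - 1.5) ** 2 + 2.0 * (height - 8.0) ** 2 + 0.1 * depth * height

X_train = np.column_stack([rng.uniform(0.5, 3.0, 40),    # dimple depth (mm)
                           rng.uniform(5.0, 12.0, 40)])  # channel height (mm)
y_train = cfd_objective(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X_train, y_train)

dd, hh = np.meshgrid(np.linspace(0.5, 3.0, 60), np.linspace(5.0, 12.0, 60))
grid = np.column_stack([dd.ravel(), hh.ravel()])
best = grid[np.argmin(gp.predict(grid))]
print(f"surrogate optimum: depth = {best[0]:.2f} mm, height = {best[1]:.2f} mm")
```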

  17. Aposematism and crypsis are not enough to explain dorsal polymorphism in the Iberian adder

    NASA Astrophysics Data System (ADS)

    Martínez-Freiría, Fernando; Pérez i de Lanuza, Guillem; Pimenta, António A.; Pinto, Tiago; Santos, Xavier

    2017-11-01

    Aposematic organisms can show phenotypic variability across their distributional ranges. The ecological advantages of this variability have been scarcely studied in vipers. We explored this issue in Vipera seoanei, a species that exhibits five geographically structured dorsal colour phenotypes across Northern Iberia: two zigzag patterned (Classic and Cantabrica), one dorsal-strip patterned (Bilineata), one even grey (Uniform), and one melanistic (Melanistic). We compared predation rates (raptors and mammals) on plasticine models resembling each colour phenotype in three localities. Visual modelling techniques were used to infer detectability (i.e. conspicuousness) of each model type for visually guided predators (i.e. diurnal raptors). We hypothesize that predation rates will be lower for the two zigzag models (aposematism hypothesis) and that models with higher detectability would show higher predation rates (detectability hypothesis). Classic and Bilineata models were the most conspicuous, while Cantabrica and Uniform were the least conspicuous. Melanistic models showed intermediate conspicuousness. Predation rate was low (3.24% of models) although there was variation in attack frequency among models. Zigzag models were scarcely predated, supporting the aposematic role of the zigzag pattern in European vipers to reduce predation (aposematism hypothesis). Among the non-zigzag models, predation was high on Bilineata and Melanistic models and low on Uniform models, partially supporting our detectability hypothesis. These results suggest particular evolutionary advantages for non-zigzag phenotypes such as better performance of Melanistic phenotypes in cold environments or better crypsis of Uniform phenotypes. Polymorphism in V. seoanei may respond to a complex set of forces acting differentially across an environmental gradient.

  18. The Ciliate Paramecium Shows Higher Motility in Non-Uniform Chemical Landscapes

    PubMed Central

    Giuffre, Carl; Hinow, Peter; Vogel, Ryan; Ahmed, Tanvir; Stocker, Roman; Consi, Thomas R.; Strickler, J. Rudi

    2011-01-01

    We study the motility behavior of the unicellular protozoan Paramecium tetraurelia in a microfluidic device that can be prepared with a landscape of attracting or repelling chemicals. We investigate the spatial distribution of the positions of the individuals at different time points with methods from spatial statistics and Poisson random point fields. This makes quantitative the informal notion of “uniform distribution” (or lack thereof). Our device is characterized by the absence of large systematic biases due to gravitation and fluid flow. It has the potential to be applied to the study of other aquatic chemosensitive organisms as well. This may result in better diagnostic devices for environmental pollutants. PMID:21494596

  19. The uniform quantized electron gas revisited

    NASA Astrophysics Data System (ADS)

    Lomba, Enrique; Høye, Johan S.

    2017-11-01

    In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0 . As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.

  20. Aligning land use with land potential

    USDA-ARS?s Scientific Manuscript database

    Current agricultural land use is dominated by an emphasis on provisioning services by applying energy-intensive inputs through relatively uniform production systems across variable landscapes. This approach to agricultural land use is not sustainable. Integrated agricultural systems (IAS) are uphe...

  1. Demographics, Affect, and Adolescents' Health Behaviors.

    ERIC Educational Resources Information Center

    Terre, Lisa; And Others

    1992-01-01

    Examined relationship between affect, demographics, and health-related lifestyle among 139 public high school students. Data analyses revealed distinctive demographic and affective correlates of different health behaviors. No one variable uniformly predicted adolescents' health behaviors. Demographics and affect showed differential relationships…

  2. A fast ergodic algorithm for generating ensembles of equilateral random polygons

    NASA Astrophysics Data System (ADS)

    Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.

    2009-03-01

    Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method and we show that the time needed to generate an equilateral random polygon of length n is linear in terms of n. These two properties make this algorithm a big improvement over the existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
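
    For contrast with the authors' step-wise uniform algorithm, one of the widely used alternative samplers mentioned in such comparisons is the "crankshaft" move, which rotates a sub-chain about the axis through two chosen vertices and therefore preserves closure and all edge lengths. The sketch below is that move, not the paper's method.

```python
# Minimal sketch of a different, widely used sampler for equilateral polygons:
# the "crankshaft" move rotates a sub-chain about the axis through two chosen
# vertices, preserving all edge lengths and closure. Not the authors' method.
import numpy as np

rng = np.random.default_rng(9)

def regular_polygon(n):
    """Planar regular n-gon with unit edges, as a starting configuration."""
    t = 2 * np.pi * np.arange(n) / n
    r = 0.5 / np.sin(np.pi / n)
    return np.column_stack([r * np.cos(t), r * np.sin(t), np.zeros(n)])

def crankshaft(poly, rng):
    n = len(poly)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    if j - i < 2:
        return poly                              # nothing strictly between i and j
    axis = poly[j] - poly[i]
    axis /= np.linalg.norm(axis)
    theta = rng.uniform(0, 2 * np.pi)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues
    out = poly.copy()
    out[i + 1:j] = (R @ (poly[i + 1:j] - poly[i]).T).T + poly[i]
    return out

poly = regular_polygon(50)
for _ in range(10_000):
    poly = crankshaft(poly, rng)

edges = np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1)
print("edge lengths stay at 1:", np.allclose(edges, 1.0))
```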

  3. Nonrecurrence and Bell-like inequalities

    NASA Astrophysics Data System (ADS)

    Danforth, Douglas G.

    2017-12-01

    The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = ∅. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. There are uncountably many reals. Every instance of a real random variable is unique. The probability of two instances being equal is zero, exactly zero. ΛN induces sample independence. All correlations are context dependent but not in the usual sense. There is no "spooky action at a distance". Random variables belonging to ΛN are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.

  4. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the linear attenuation coefficient, mass attenuation coefficient and cross-section behavior with random components in the basic variables such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression as the inverse exponential power law (Beer-Lambert's law) is adopted to illustrate the perturbation method. Perturbed results are presented not only in terms of the mean but also in terms of the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
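
    The idea can be illustrated with a brute-force Monte Carlo counterpart of the perturbation expressions: give the linear attenuation coefficient a small random component and propagate it through the Beer-Lambert law I = I0·exp(-μx), reporting the mean, standard deviation, and correlation with μ. The numbers are illustrative only.

```python
# Minimal Monte Carlo sketch of the perturbation idea: a random component in
# the linear attenuation coefficient propagated through the Beer-Lambert law
# I = I0 * exp(-mu * x). Values are illustrative only.
import numpy as np

rng = np.random.default_rng(10)
I0, x = 1.0, 2.0                         # incident intensity, thickness (cm)
mu_mean, mu_sd = 0.15, 0.015             # 1/cm, ~10% random perturbation

mu = rng.normal(mu_mean, mu_sd, 100_000)
I = I0 * np.exp(-mu * x)

print(f"mean intensity     : {I.mean():.4f}")
print(f"unperturbed value  : {I0 * np.exp(-mu_mean * x):.4f}")
print(f"std of intensity   : {I.std():.4f}")
print(f"corr(mu, I)        : {np.corrcoef(mu, I)[0, 1]:.3f}")
```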

  5. Variability-induced transition in a net of neural elements: From oscillatory to excitable behavior.

    PubMed

    Glatt, Erik; Gassel, Martin; Kaiser, Friedemann

    2006-06-01

    Starting with an oscillatory net of neural elements, increasing variability induces a phase transition to excitability. This transition is explained by a systematic effect of the variability, which stabilizes the formerly unstable, spatially uniform, temporally constant solution of the net. Multiplicative noise may also influence the net in a systematic way and may thus induce a similar transition. Adding noise into the model, the interplay of noise and variability with respect to the reported transition is investigated. Finally, pattern formation in a diffusively coupled net is studied, because excitability implies the ability of pattern formation and information transmission.

  6. The First Pan-Starrs Medium Deep Field Variable Star Catalog

    NASA Astrophysics Data System (ADS)

    Flewelling, Heather

    2013-01-01

    We present the first Pan-Starrs 1 Medium Deep Field Variable Star Catalog (PS1-MDF-VSC). The Pan-Starrs 1 (PS1) telescope is a 1.8 meter survey telescope with a 1.4 Gigapixel camera, and is located in Haleakala, Hawaii. The Medium Deep survey, which consists of 10 fields located uniformly across the sky, totalling 70 square degrees, is observed each night, in 2-3 filters per field, with 8 exposures per filter. We have located and classified several hundred periodic variable stars within the Medium Deep fields, and we present the first catalog listing the properties of these variable stars.

  7. Deployment-based lifetime optimization model for homogeneous Wireless Sensor Network under retransmission.

    PubMed

    Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning

    2014-12-10

    Sensor-deployment-based lifetime optimization is one of the most effective methods used to prolong the lifetime of Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in the previous work, is considered. For a homogeneous WSN, monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed by constraining on coverage, connectivity and success transmission rate. Based on the data transmission analysis in a data gathering cycle, the WSN lifetime in the model can be obtained through quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in the lifetime optimization. In particular, our investigations indicate that, with the same lifetime requirement, the number of sensors needed in a non-uniform topology is much less than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.

  8. Local Neighbourhoods for First-Passage Percolation on the Configuration Model

    NASA Astrophysics Data System (ADS)

    Dereich, Steffen; Ortgiese, Marcel

    2018-04-01

    We consider first-passage percolation on the configuration model. Once the network has been generated each edge is assigned an i.i.d. weight modeling the passage time of a message along this edge. Then independently two vertices are chosen uniformly at random, a sender and a recipient, and all edges along the geodesic connecting the two vertices are coloured in red (in the case that both vertices are in the same component). In this article we prove local limit theorems for the coloured graph around the recipient in the spirit of Benjamini and Schramm. We consider the explosive regime, in which case the random distances are of finite order, and the Malthusian regime, in which case the random distances are of logarithmic order.

  9. Variability of single bean coffee volatile compounds of Arabica and robusta roasted coffees analysed by SPME-GC-MS.

    PubMed

    Caporaso, Nicola; Whitworth, Martin B; Cui, Chenhao; Fisk, Ian D

    2018-06-01

    We report on the analysis of volatile compounds by SPME-GC-MS for individual roasted coffee beans. The aim was to understand the relative abundance and variability of volatile compounds between individual roasted coffee beans at constant roasting conditions. Twenty-five batches of Arabica and robusta species were sampled from 13 countries, and 10 single coffee beans randomly selected from each batch were individually roasted in a fluidised-bed roaster at 210 °C for 3 min. High variability (CV = 14.0-53.3%) of 50 volatile compounds in roasted coffee was obtained within batches (10 beans per batch). Phenols and heterocyclic nitrogen compounds generally had higher intra-batch variation, while ketones were the most uniform compounds (CV < 20%). The variation between batches was much higher, with the CV ranging from 15.6 to 179.3%. The highest variation was observed for 2,3-butanediol, 3-ethylpyridine and hexanal. It was also possible to build classification models based on geographical origin, obtaining 99.5% and 90.8% accuracy using LDA or MLR classifiers respectively, and classification between Arabica and robusta beans. These results give further insight into natural variation of coffee aroma and could be used to obtain higher quality and more consistent final products. Our results suggest that coffee volatile concentration is also influenced by other factors than simply the roasting degree, especially green coffee composition, which is in turn influenced by the coffee species, geographical origin, ripening stage and pre- and post-harvest processing. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.

  10. Impact of Reproductive History and Exogenous Hormone Use on Cognitive Function in Midlife and Late Life

    PubMed Central

    Karim, Roksana; Dang, Ha; Henderson, Victor W.; Hodis, Howard N.; St John, Jan; Brinton, Roberta D.; Mack, Wendy J.

    2016-01-01

    Background/Objectives: Given the potent role of sex hormones on brain chemistry and function, we investigated the association of reproductive history indicators of hormonal exposures, including reproductive period, pregnancy, and use of hormonal contraceptives, on mid- and late-life cognition in postmenopausal women. Design: Analysis of baseline data from two randomized clinical trials, the Women’s Isoflavone Soy Health (WISH) and the Early vs Late Intervention Trial of Estradiol (ELITE). Setting: University academic research center. Participants: 830 naturally menopausal women. Measurements: Participants were uniformly evaluated with a cognitive battery and a structured reproductive history. Outcomes were composite scores for verbal episodic memory, executive functions, and global cognition. Reproductive variables included ages at pregnancies, menarche, and menopause, reproductive period, number of pregnancies, and use of hormones for contraception and menopausal symptoms. Multivariable linear regression evaluated associations between cognitive scores (dependent variable) and reproductive factors (independent variables), adjusting for age, race/ethnicity, income and education. Results: On multivariable modeling, age at menarche ≥ 13 years was inversely associated with global cognition (p = 0.05). Last pregnancy after age 35 was positively associated with verbal memory (p = 0.03). Use of hormonal contraceptives was positively associated with global cognition (p trend = 0.04) and verbal memory (p trend = 0.007). The association between hormonal contraceptive use and verbal memory and executive functions was strongest for more than 10 years of use. Reproductive period was positively associated with global cognition (p = 0.04) and executive functions (p = 0.04). Conclusion: In this sample of healthy postmenopausal women, reproductive life events related to sex hormones, including earlier age at menarche, later age at last pregnancy, length of reproductive period, and use of oral contraceptives, are positively related to aspects of cognition in later life. PMID:27996108

  11. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  12. Multiphase contrast medium injection for optimization of computed tomographic coronary angiography.

    PubMed

    Budoff, Matthew Jay; Shinbane, Jerold S; Child, Janis; Carson, Sivi; Chau, Alex; Liu, Stephen H; Mao, SongShou

    2006-02-01

    Electron beam angiography is a minimally invasive imaging technique. Adequate vascular opacification throughout the study remains a critical issue for image quality. We hypothesized that vascular image opacification and uniformity of vascular enhancement between slices can be improved using multiphase contrast medium injection protocols. We enrolled 244 consecutive patients who were randomized to three different injection protocols: single-phase contrast medium injection (Group 1), dual-phase contrast medium injection with each phase at a different injection rate (Group 2), and a three-phase injection with two phases of contrast medium injection followed by a saline injection phase (Group 3). Parameters measured were aortic opacification based on Hounsfield units and uniformity of aortic enhancement at predetermined slices (locations from top [level 1] to base [level 60]). In Group 1, contrast opacification differed across seven predetermined locations (scan levels: 1st versus 60th, P < .05), demonstrating significant nonuniformity. In Group 2, there was more uniform vascular enhancement, with no significant differences between the first 50 slices (P > .05). In Group 3, there was greater uniformity of vascular enhancement and higher mean Hounsfield units value across all 60 images, from the aortic root to the base of the heart (P < .05). The three-phase injection protocol improved vascular opacification at the base of the heart, as well as uniformity of arterial enhancement throughout the study.

  13. Formation and evolution of magnetised filaments in wind-swept turbulent clumps

    NASA Astrophysics Data System (ADS)

    Banda-Barragan, Wladimir Eduardo; Federrath, Christoph; Crocker, Roland M.; Bicknell, Geoffrey Vincent; Parkin, Elliot Ross

    2015-08-01

    Using high-resolution three-dimensional simulations, we examine the formation and evolution of filamentary structures arising from magnetohydrodynamic interactions between supersonic winds and turbulent clumps in the interstellar medium. Previous numerical studies assumed homogenous density profiles, null velocity fields, and uniformly distributed magnetic fields as the initial conditions for interstellar clumps. Here, we have, for the first time, incorporated fractal clumps with log-normal density distributions, random velocity fields and turbulent magnetic fields (superimposed on top of a uniform background field). Disruptive processes, instigated by dynamical instabilities and akin to those observed in simulations with uniform media, lead to stripping of clump material and the subsequent formation of filamentary tails. The evolution of filaments in uniform and turbulent models is, however, radically different as evidenced by comparisons of global quantities in both scenarios. We show, for example, that turbulent clumps produce tails with higher velocity dispersions, increased gas mixing, greater kinetic energy, and lower plasma beta than their uniform counterparts. We attribute the observed differences to: 1) the turbulence-driven enhanced growth of dynamical instabilities (e.g. Kelvin-Helmholtz and Rayleigh-Taylor instabilities) at fluid interfaces, and 2) the localised amplification of magnetic fields caused by the stretching of field lines trapped in the numerous surface deformations of fractal clumps. We briefly discuss the implications of this work to the physics of the optical filaments observed in the starburst galaxy M82.

  14. Benford's law and continuous dependent random variables

    NASA Astrophysics Data System (ADS)

    Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine

    2018-01-01

    Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates as well as introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
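
    A toy illustration of the fragmentation idea described above: repeatedly splitting a unit quantity at independent uniform proportions (a simple binary stick-breaking model, assumed here for illustration rather than taken from the paper) produces piece lengths whose leading digits approach the Benford frequencies log10(1 + 1/d).

      import numpy as np

      def leading_digit(x):
          # First significant (base-10) digit of each positive entry of x.
          e = np.floor(np.log10(x))
          return np.floor(x / 10.0 ** e).astype(int)

      rng = np.random.default_rng(0)

      # Fragmentation of a conserved quantity: split every piece at an
      # independent uniform proportion, repeatedly.
      pieces = np.array([1.0])
      for _ in range(14):                          # 2**14 pieces after 14 stages
          u = rng.uniform(size=pieces.size)
          pieces = np.concatenate([pieces * u, pieces * (1.0 - u)])

      digits = leading_digit(pieces)
      empirical = np.bincount(digits, minlength=10)[1:10] / pieces.size
      benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

      for d in range(1, 10):
          print(d, f"{empirical[d - 1]:.3f}", f"{benford[d - 1]:.3f}")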

  15. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach can be used to perform dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
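
    The sketch below shows only the classical spectral-representation step that the hybrid approach starts from: a stationary scalar process is synthesised as a sum of cosines with amplitudes set by a power spectral density and independent uniform random phases. The random-function constraint that reduces the randomness to two elementary variables is not reproduced here, and the spectral density davenport_like_psd is a hypothetical placeholder shape.

      import numpy as np

      # Classical spectral representation:
      # X(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k), phi_k ~ U(0, 2*pi).
      rng = np.random.default_rng(42)

      def davenport_like_psd(w):
          # Hypothetical one-sided power spectral density (illustrative shape only).
          return 1.0 / (1.0 + w**2) ** (5.0 / 6.0)

      n_freq = 512
      w = np.linspace(0.1, 5.0, n_freq)                   # rad/s
      dw = w[1] - w[0]
      phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)

      t = np.arange(0.0, 600.0, 0.2)                      # s
      amp = np.sqrt(2.0 * davenport_like_psd(w) * dw)
      X = (amp * np.cos(np.outer(t, w) + phi)).sum(axis=1)

      # The sample variance of a long record should approach the spectral integral.
      print("sample variance    :", X.var())
      print("integral of S(w) dw:", (davenport_like_psd(w) * dw).sum())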

  16. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as long as the scientific learning increases. The adjective "Living" refers just to this continuous enrichment of the Drake equation and is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, this distance now becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle. It should be noticed that any positive number of random variables in the statistical Drake equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle", and regard as the key to more profound, future results in Astrobiology and SETI.
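
    A quick Monte Carlo check of the lognormal claim, with seven purely illustrative factor distributions (they are not calibrated astrophysical estimates): the logarithm of a product of several independent positive random variables is close to normal, so the product itself is close to lognormal.

      import numpy as np

      # Product of seven independent, positive random variables; its log should be
      # approximately normal by the CLT, hence the product approximately lognormal.
      rng = np.random.default_rng(7)
      n = 200_000

      factors = [
          rng.uniform(1.0, 10.0, n),       # illustrative factor 1
          rng.uniform(0.1, 1.0, n),        # illustrative factor 2
          rng.uniform(0.5, 5.0, n),        # illustrative factor 3
          rng.uniform(0.01, 1.0, n),       # illustrative factor 4
          rng.uniform(0.01, 1.0, n),       # illustrative factor 5
          rng.uniform(0.01, 1.0, n),       # illustrative factor 6
          rng.uniform(1e2, 1e4, n),        # illustrative factor 7
      ]
      N = np.prod(factors, axis=0)

      logN = np.log(N)
      z = (logN - logN.mean()) / logN.std()
      print("skewness of log N  :", (z**3).mean())        # near 0 for a normal law
      print("excess kurtosis    :", (z**4).mean() - 3.0)  # near 0 for a normal law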

  17. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, a mechanistic Poisson model, and model-building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.

  18. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
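
    Under the binormal assumption, the c-statistic of a single continuous predictor equals Phi of the difference in means divided by the square root of the sum of the two variances. The sketch below compares that closed form with an empirical AUC on simulated data; it is a minimal check of the binormal identity, not a reproduction of the paper's full set of expressions, and it assumes scipy and scikit-learn are available.

      import numpy as np
      from scipy.stats import norm
      from sklearn.metrics import roc_auc_score

      # Binormal c-statistic: AUC = Phi((mu1 - mu0) / sqrt(sigma0**2 + sigma1**2)).
      rng = np.random.default_rng(3)

      mu0, sigma0 = 0.0, 1.0        # predictor among those without the condition
      mu1, sigma1 = 0.8, 1.3        # predictor among those with the condition
      n = 50_000

      x = np.concatenate([rng.normal(mu0, sigma0, n), rng.normal(mu1, sigma1, n)])
      y = np.concatenate([np.zeros(n), np.ones(n)])

      analytic = norm.cdf((mu1 - mu0) / np.sqrt(sigma0**2 + sigma1**2))
      empirical = roc_auc_score(y, x)
      print("analytic  c-statistic:", round(float(analytic), 4))
      print("empirical c-statistic:", round(float(empirical), 4))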

  19. The influence of random indium alloy fluctuations in indium gallium nitride quantum wells on the device behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tsung-Jui; Wu, Yuh-Renn, E-mail: yrwu@ntu.edu.tw; Shivaraman, Ravi

    2014-09-21

    In this paper, we describe the influence of the intrinsic indium fluctuation in the InGaN quantum wells on the carrier transport, efficiency droop, and emission spectrum in GaN-based light emitting diodes (LEDs). Both real and randomly generated indium fluctuations were used in 3D simulations and compared to quantum wells with a uniform indium distribution. We found that without further hypothesis the simulations of electrical and optical properties in LEDs such as carrier transport, radiative and Auger recombination, and efficiency droop are greatly improved by considering natural nanoscale indium fluctuations.

  20. Small violations of Bell inequalities for multipartite pure random states

    NASA Astrophysics Data System (ADS)

    Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.

    2018-05-01

    For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.

  1. Standard random number generation for MBASIC

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a_{m+532} = a_{m+37} + a_m (mod 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
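
    A minimal sketch of a linear recurrence modulo 2 of the kind quoted above, with addition mod 2 implemented as XOR and nonoverlapping 28-bit words read off the bit stream; the seeding shown is arbitrary and the code is illustrative, not the MBASIC implementation.

      from random import Random

      # Linear recurrence modulo 2: a[m + 532] = a[m + 37] + a[m] (mod 2),
      # i.e. each new bit is a[i] = a[i - 495] XOR a[i - 532].
      def tausworthe_bits(seed_bits, n_bits):
          bits = list(seed_bits)                  # needs at least 532 seed bits
          for i in range(len(bits), n_bits):
              bits.append(bits[i - 495] ^ bits[i - 532])
          return bits

      def words28(bits):
          # Pack nonoverlapping adjacent 28-bit words into integers in [0, 2**28).
          return [int("".join(map(str, bits[i:i + 28])), 2)
                  for i in range(0, len(bits) - 27, 28)]

      rng = Random(2024)                          # arbitrary seeding, illustrative only
      seed = [rng.randint(0, 1) for _ in range(532)]
      stream = tausworthe_bits(seed, 532 + 28 * 10)
      print(words28(stream[532:]))                # ten 28-bit pseudorandom words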

  2. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…

  3. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  4. Detection of sinkholes or anomalies using full seismic wave fields : phase II [summary].

    DOT National Transportation Integrated Search

    2016-09-01

    Florida geology with its non-uniform rock and soil layers, variable deposits of poor soils (clay, organics, etc.), and weathered (and possibly voided) limestone is a major concern for design engineers, contractors, and maintenance personnel. However,...

  5. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of Rr2 apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects—the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
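
    For a random-intercept model, one natural reading of "the proportion of the conditional variance of the dependent variable explained by random effects" is the ratio of the random-intercept variance to the sum of the random-intercept and residual variances. The sketch below fits such a model with statsmodels on synthetic data and reports that ratio; treat it as an illustrative interpretation, not the paper's exact formula for Rr2.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic random-intercept data: y_ij = b0 + u_i + e_ij.
      rng = np.random.default_rng(11)
      groups = np.repeat(np.arange(60), 8)
      u = rng.normal(0.0, 1.5, 60)[groups]        # group-level random intercepts
      y = 2.0 + u + rng.normal(0.0, 1.0, groups.size)
      df = pd.DataFrame({"y": y, "g": groups})

      fit = smf.mixedlm("y ~ 1", df, groups=df["g"]).fit()
      var_u = float(fit.cov_re.iloc[0, 0])        # random-intercept variance
      var_e = float(fit.scale)                    # residual variance
      print("proportion of variance due to random effects:", var_u / (var_u + var_e))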

  6. Correlated resistive/capacitive state variability in solid TiO2 based memory devices

    NASA Astrophysics Data System (ADS)

    Li, Qingjiang; Salaoru, Iulia; Khiat, Ali; Xu, Hui; Prodromakis, Themistoklis

    2017-05-01

    In this work, we experimentally demonstrated the correlated resistive/capacitive switching and state variability in practical TiO2-based memory devices. Based on the filamentary operating mechanism, we argue that the impedance state variability stems from randomly distributed defects inside the oxide bulk. Finally, our assumption was verified via a current percolation circuit model that takes into account the random distribution of defects and the coexistence of memristor and memcapacitor elements.

  7. Algebraic Functions of H-Functions with Specific Dependency Structure.

    DTIC Science & Technology

    1984-05-01

    ... a study of its characteristic function. Such analysis is reproduced in books by Springer (17), Anderson (23), Feller (34, 35), Mood and Graybill (52) ... The following linearity property for expectations of jointly distributed random variables is derived. Theorem 1.1: If X and Y are real random variables ... appear in American Journal of Mathematical and Management Science. 13. Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables"

  8. Ultra-low power, highly uniform polymer memory by inserted multilayer graphene electrode

    NASA Astrophysics Data System (ADS)

    Jang, Byung Chul; Seong, Hyejeong; Kim, Jong Yun; Koo, Beom Jun; Kim, Sung Kyu; Yang, Sang Yoon; Gap Im, Sung; Choi, Sung-Yool

    2015-12-01

    Filament type resistive random access memory (RRAM) based on polymer thin films is a promising device for next generation, flexible nonvolatile memory. However, the resistive switching nonuniformity and the high power consumption found in the general filament type RRAM devices present critical issues for practical memory applications. Here, we introduce a novel approach not only to reduce the power consumption but also to improve the resistive switching uniformity in RRAM devices based on poly(1,3,5-trimethyl-3,4,5-trivinyl cyclotrisiloxane) by inserting multilayer graphene (MLG) at the electrode/polymer interface. The resistive switching uniformity was thereby significantly improved, and the power consumption was markedly reduced by 250 times. Furthermore, the inserted MLG film enabled a transition of the resistive switching operation from unipolar resistive switching to bipolar resistive switching and induced self-compliance behavior. The findings of this study can pave the way toward a new area of application for graphene in electronic devices.

  9. Hypothesis: Impregnated school uniforms reduce the incidence of dengue infections in school children.

    PubMed

    Wilder-Smith, A; Lover, A; Kittayapong, P; Burnham, G

    2011-06-01

    Dengue infection causes a significant economic, social and medical burden in affected populations in over 100 countries in the tropics and sub-tropics. Current dengue control efforts have generally focused on vector control but have not shown major impact. School-aged children are especially vulnerable to infection, due to sustained human-vector-human transmission in the close proximity environments of schools. Infection in children has a higher rate of complications, including dengue hemorrhagic fever and shock syndromes, than infections in adults. There is an urgent need for integrated and complementary population-based strategies to protect vulnerable children. We hypothesize that insecticide-treated school uniforms will reduce the incidence of dengue in school-aged children. The hypothesis would need to be tested in a community based randomized trial. If proven to be true, insecticide-treated school uniforms would be a cost-effective and scalable community based strategy to reduce the burden of dengue in children. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. A novel application of t-statistics to objectively assess the quality of IC50 fits for P-glycoprotein and other transporters.

    PubMed

    O'Connor, Michael; Lee, Caroline; Ellens, Harma; Bentz, Joe

    2015-02-01

    Current USFDA and EMA guidance for drug transporter interactions is dependent on IC50 measurements as these are utilized in determining whether a clinical interaction study is warranted. It is therefore important not only to standardize transport inhibition assay systems but also to develop uniform statistical criteria with associated probability statements for generation of robust IC50 values, which can be easily adopted across the industry. The current work provides a quantitative examination of critical factors affecting the quality of IC50 fits for P-gp inhibition through simulations of perfect data with randomly added error as commonly observed in the large data set collected by the P-gp IC50 initiative. The types of errors simulated were (1) variability in replicate measures of transport activity; (2) transformations of error-contaminated transport activity data prior to IC50 fitting (such as performed when determining an IC50 for inhibition of P-gp based on efflux ratio); and (3) the lack of well defined "no inhibition" and "complete inhibition" plateaus. The effect of the algorithm used in fitting the inhibition curve (e.g., two or three parameter fits) was also investigated. These simulations provide strong quantitative support for the recommendations provided in Bentz et al. (2013) for the determination of IC50 values for P-gp and demonstrate the adverse effect of data transformation prior to fitting. Furthermore, the simulations validate uniform statistical criteria for robust IC50 fits in general, which can be easily implemented across the industry. A calibration of the t-statistic is provided through calculation of confidence intervals associated with the t-statistic.

  11. Rise and Shock: Optimal Defibrillator Placement in a High-rise Building.

    PubMed

    Chan, Timothy C Y

    2017-01-01

    Out-of-hospital cardiac arrests (OHCA) in high-rise buildings experience lower survival and longer delays until paramedic arrival. Use of publicly accessible automated external defibrillators (AED) can improve survival, but "vertical" placement has not been studied. We aim to determine whether elevator-based or lobby-based AED placement results in shorter vertical distance travelled ("response distance") to OHCAs in a high-rise building. We developed a model of a single-elevator, n-floor high-rise building. We calculated and compared the average distance from AED to floor of arrest for the two AED locations. We modeled OHCA occurrences using floor-specific Poisson processes, with risk of OHCA on the ground floor (λ1) and risk on any above-ground floor (λ). The elevator was modeled with an override function enabling direct travel to the target floor. The elevator location upon override was modeled as a discrete uniform random variable. Calculations used the laws of probability. Elevator-based AED placement had shorter average response distance if the number of floors (n) in the building exceeded three quarters of the ratio of ground-floor OHCA risk to above-ground floor risk (λ1/λ) plus one half (n ≥ 3λ1/(4λ) + 1/2). Otherwise, a lobby-based AED had shorter average response distance. If OHCA risk on each floor was equal, an elevator-based AED had shorter average response distance. Elevator-based AEDs travel less vertical distance to OHCAs in tall buildings or those with uniform vertical risk, while lobby-based AEDs travel less vertical distance in buildings with substantial lobby, underground, and nearby street-level traffic and OHCA risk.
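
    A small helper that encodes the decision rule quoted above (elevator placement preferred when n ≥ 3λ1/(4λ) + 1/2); the function name and the example risk ratio are hypothetical and purely illustrative.

      def better_aed_location(n_floors, lam_ground, lam_floor):
          # Decision rule from the record above: elevator-based placement gives the
          # shorter average vertical response distance when
          # n >= 3 * lam_ground / (4 * lam_floor) + 0.5.
          threshold = 3.0 * lam_ground / (4.0 * lam_floor) + 0.5
          return "elevator" if n_floors >= threshold else "lobby"

      # Illustrative risk ratio only: ground-floor OHCA risk 20 times an above-ground floor.
      for n in (5, 10, 20, 30):
          print(n, better_aed_location(n, lam_ground=20.0, lam_floor=1.0))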

  12. A novel application of t-statistics to objectively assess the quality of IC50 fits for P-glycoprotein and other transporters

    PubMed Central

    O'Connor, Michael; Lee, Caroline; Ellens, Harma; Bentz, Joe

    2015-01-01

    Current USFDA and EMA guidance for drug transporter interactions is dependent on IC50 measurements as these are utilized in determining whether a clinical interaction study is warranted. It is therefore important not only to standardize transport inhibition assay systems but also to develop uniform statistical criteria with associated probability statements for generation of robust IC50 values, which can be easily adopted across the industry. The current work provides a quantitative examination of critical factors affecting the quality of IC50 fits for P-gp inhibition through simulations of perfect data with randomly added error as commonly observed in the large data set collected by the P-gp IC50 initiative. The types of errors simulated were (1) variability in replicate measures of transport activity; (2) transformations of error-contaminated transport activity data prior to IC50 fitting (such as performed when determining an IC50 for inhibition of P-gp based on efflux ratio); and (3) the lack of well defined “no inhibition” and “complete inhibition” plateaus. The effect of the algorithm used in fitting the inhibition curve (e.g., two or three parameter fits) was also investigated. These simulations provide strong quantitative support for the recommendations provided in Bentz et al. (2013) for the determination of IC50 values for P-gp and demonstrate the adverse effect of data transformation prior to fitting. Furthermore, the simulations validate uniform statistical criteria for robust IC50 fits in general, which can be easily implemented across the industry. A calibration of the t-statistic is provided through calculation of confidence intervals associated with the t-statistic. PMID:25692007

  13. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep post seismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.

  14. Stochastic Growth of Ion Cyclotron And Mirror Waves In Earth's Magnetosheath

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.; Grubits, K. A.

    2001-01-01

    Electromagnetic ion cyclotron and mirror waves in Earth's magnetosheath are bursty, have widely variable fields, and are unexpectedly persistent, properties difficult to reconcile with uniform secular growth. Here it is shown for specific periods that stochastic growth theory (SGT) quantitatively accounts for the functional form of the wave statistics and qualitatively explains the wave properties. The wave statistics are inconsistent with uniform secular growth or self-organized criticality, but nonlinear processes sometimes play a role at high fields. The results show SGT's relevance near marginal stability and suggest that it is widely relevant to space and astrophysical plasmas.

  15. Analytical solution for heat transfer in three-dimensional porous media including variable fluid properties

    NASA Technical Reports Server (NTRS)

    Siegel, R.; Goldstein, M. E.

    1972-01-01

    An analytical solution is obtained for flow and heat transfer in a three-dimensional porous medium. Coolant from a reservoir at constant pressure and temperature enters one portion of the boundary of the medium and exits through another portion of the boundary which is at a specified uniform temperature and uniform pressure. The variations of coolant density and viscosity with temperature are both taken into account. A general solution is found that provides the temperature distribution in the medium and the mass and heat fluxes along the portion of the surface through which the coolant is exiting.

  16. Buckling analysis of variable thickness nanoplates using nonlocal continuum mechanics

    NASA Astrophysics Data System (ADS)

    Farajpour, Ali; Danesh, Mohammad; Mohammadi, Moslem

    2011-12-01

    This paper presents an investigation of the buckling characteristics of nanoscale rectangular plates under bi-axial compression considering non-uniformity in the thickness. Based on the nonlocal continuum mechanics, governing differential equations are derived. Numerical solutions for the buckling loads are obtained using the Galerkin method. The present study shows that the buckling behaviors of single-layered graphene sheets (SLGSs) are strongly sensitive to the nonlocal and non-uniform parameters. The influence of the percentage change of thickness on the stability of SLGSs is more significant in the strip-type nanoplates (nanoribbons) than in the square-type nanoplates.

  17. Severity of Organized Item Theft in Computerized Adaptive Testing: An Empirical Study. Research Report. ETS RR-06-22

    ERIC Educational Resources Information Center

    Yi, Qing; Zhang, Jinming; Chang, Hua-Hua

    2006-01-01

    Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated potential damage caused…

  18. Magnetic noise as the cause of the spontaneous magnetization reversal of RE–TM–B permanent magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitriev, A. I., E-mail: aid@icp.ac.ru; Talantsev, A. D., E-mail: artgtx32@mail.ru; Kunitsyna, E. I.

    2016-08-15

    The relation between the macroscopic spontaneous magnetization reversal (magnetic viscosity) of (NdDySm)(FeCo)B alloys and the spectral characteristics of magnetic noise, which is caused by the random microscopic processes of thermally activated domain wall motion in a potential landscape with uniformly distributed potential barrier heights, is found.

  19. Experimental Evaluation of Field Trips on Instruction in Vocational Agriculture.

    ERIC Educational Resources Information Center

    McCaslin, Norval L.

    To determine the effect of field trips on student achievement in each of four subject matter areas in vocational agriculture, 12 schools offering approved programs were randomly selected and divided into a treatment group and a control group. Uniform teaching outlines and reference materials were provided to each group. While no field trips were…

  20. Electrolytic plating apparatus for discrete microsized particles

    DOEpatents

    Mayer, Anton

    1976-11-30

    Method and apparatus are disclosed for electrolytically producing very uniform coatings of a desired material on discrete microsized particles. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with a powered cathode for a time sufficient for such to occur.

  1. Electroless plating apparatus for discrete microsized particles

    DOEpatents

    Mayer, Anton

    1978-01-01

    Method and apparatus are disclosed for producing very uniform coatings of a desired material on discrete microsized particles by electroless techniques. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with each other for a time sufficient for such to occur.

  2. Modeling emerald ash borer dispersal using percolation theory: estimating the rate of range expansion in a fragmented landscape

    Treesearch

    Robin A. J. Taylor; Daniel A. Herms; Louis R. Iverson

    2008-01-01

    The dispersal of organisms is rarely random, although diffusion processes can be useful models for movement in approximately homogeneous environments. However, the environments through which all organisms disperse are far from uniform at all scales. The emerald ash borer (EAB), Agrilus planipennis, is obligate on ash (Fraxinus spp...

  3. Fermilab | Science | Historic Results

    Science.gov Websites

    ... quark since the discovery of the bottom quark at Fermilab through fixed-target experiments in 1977. ... Researchers previously had assumed that cosmic rays approach the Earth uniformly from random directions; the cosmic rays that impact the Earth generally come from the direction of active galactic nuclei. Many large galaxies ...

  4. Random walks of colloidal probes in viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2014-04-01

    To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
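
    The key idea of sampling a trajectory at times that are uniformly distributed in logarithmic time can be sketched for the simplest purely diffusive case (not the Maxwell-Voigt viscoelastic model of the paper): the time grid spans twelve decades with a modest number of points, and the ensemble mean-square displacement recovers the expected 2Dt scaling.

      import numpy as np

      # Brownian trajectories sampled at times that are uniform in log(t), so that a
      # modest number of points spans twelve decades in time.
      rng = np.random.default_rng(5)

      D = 0.5                                         # diffusion coefficient (arbitrary units)
      t = np.sort(10.0 ** rng.uniform(-6, 6, 2000))   # sampling times over 12 decades
      dt = np.diff(t, prepend=0.0)

      n_traj = 500
      dx = rng.normal(0.0, 1.0, (n_traj, t.size)) * np.sqrt(2.0 * D * dt)
      x = np.cumsum(dx, axis=1)

      # For pure diffusion the ensemble mean-square displacement is <x^2(t)> = 2*D*t.
      msd = (x**2).mean(axis=0)
      print("MSD/(2 D t) at the earliest and latest times:",
            msd[0] / (2.0 * D * t[0]), msd[-1] / (2.0 * D * t[-1]))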

  5. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
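
    A minimal sketch of least-significant-bits concatenation entropy harvesting, using a simulated sensor read in place of the on-board sensors studied above; keeping two low-order bits per reading is an arbitrary illustrative choice, and the result is scored with the Shannon entropy per byte of harvested data.

      import math
      from collections import Counter
      from random import Random

      rng = Random(99)

      def read_sensor():
          # Stand-in for a real ADC reading: a drifting value plus measurement jitter.
          return int(512 + 40 * rng.random() + rng.gauss(0, 3))

      def harvest_bytes(n_bytes, lsb=2):
          # Concatenate the lsb least-significant bits of successive readings into bytes.
          bits = []
          while len(bits) < 8 * n_bytes:
              value = read_sensor()
              for k in range(lsb):
                  bits.append((value >> k) & 1)
          return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                       for i in range(0, 8 * n_bytes, 8))

      def shannon_entropy(data):
          counts = Counter(data)
          n = len(data)
          return -sum(c / n * math.log2(c / n) for c in counts.values())

      sample = harvest_bytes(4096)
      print("Shannon entropy (bits per byte):", round(shannon_entropy(sample), 3))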

  6. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.

  7. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability of the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  8. Parametric Study of Urban-Like Topographic Statistical Moments Relevant to a Priori Modelling of Bulk Aerodynamic Parameters

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William

    2017-02-01

    For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
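
    The two-coefficient idea attributed above to Flack and Schultz can be sketched as a roughness scale proportional to the height standard deviation times a function of skewness; in the sketch below the functional form and the coefficients alpha and beta are placeholders to be fitted, not the published values, and the synthetic height field is purely illustrative.

      import numpy as np

      def roughness_length(elevations, alpha, beta):
          # Two-moment roughness model of the type discussed above:
          # z0 = alpha * std(h) * (1 + skewness(h))**beta,
          # with alpha and beta as placeholder coefficients to be fitted.
          h = np.asarray(elevations, dtype=float).ravel()
          sigma = h.std()
          skew = ((h - h.mean()) ** 3).mean() / sigma**3
          return alpha * sigma * (1.0 + skew) ** beta

      # Synthetic urban-like height field (metres, illustrative only).
      rng = np.random.default_rng(17)
      heights = rng.lognormal(mean=2.5, sigma=0.4, size=(64, 64))
      print("z0 estimate:", roughness_length(heights, alpha=0.1, beta=1.0))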

  9. Understanding the relative role of dispersion mechanisms across basin scales

    NASA Astrophysics Data System (ADS)

    Di Lazzaro, M.; Zarlenga, A.; Volpi, E.

    2016-05-01

    Different mechanisms are understood to represent the primary sources of the variance of travel time distribution in natural catchments. To quantify the fraction of variance introduced by each component, dispersion coefficients have been earlier defined in the framework of geomorphology-based rainfall-runoff models. In this paper we compare over a wide range of basin sizes and for a variety of runoff conditions the relative role of geomorphological dispersion, related to the heterogeneity of path lengths, and hillslope kinematic dispersion, generated by flow processes within the hillslopes. Unlike previous works, our approach does not focus on a specific study case; instead, we try to generalize results already obtained in previous literature stemming from the definition of a few significant parameters related to the metrics of the catchment and flow dynamics. We further extend this conceptual framework considering the effects of two additional variance-producing processes: the first covers the random variability of hillslope velocities (i.e. of travel times over hillslopes); the second deals with non-uniform production of runoff over the basin (specifically related to drainage density). Results are useful to clarify the role of hillslope kinematic dispersion and define under which conditions it counteracts or reinforces geomorphological dispersion. We show how its sign is ruled by the specific spatial distribution of hillslope lengths within the basin, as well as by flow conditions. Interestingly, while negative in a wide range of cases, kinematic dispersion is expected to become invariantly positive when the variability of hillslope velocity is large.

  10. A preliminary investigation of the relationships between historical crash and naturalistic driving.

    PubMed

    Pande, Anurag; Chand, Sai; Saxena, Neeraj; Dixit, Vinayak; Loy, James; Wolshon, Brian; Kent, Joshua D

    2017-04-01

    This paper describes a project that was undertaken using naturalistic driving data collected via Global Positioning System (GPS) devices to demonstrate a proof-of-concept for proactive safety assessments of crash-prone locations. The main hypothesis for the study is that the segments where drivers have to apply hard braking (higher jerks) more frequently might be the "unsafe" segments with more crashes over the long term. The linear referencing methodology in ArcMap was used to link the GPS data with roadway characteristic data of US Highway 101 northbound (NB) and southbound (SB) in San Luis Obispo, California. The process used to merge GPS data with quarter-mile freeway segments for traditional crash frequency analysis is also discussed in the paper. A negative binomial regression analysis showed that the proportion of high-magnitude jerks while decelerating on freeway segments (from the driving data) was significantly related to the long-term crash frequency of those segments. A random parameter negative binomial model with a uniformly distributed parameter for ADT and a fixed parameter for jerk provided a statistically significant estimate for quarter-mile segments. The results also indicated that roadway curvature and the presence of an auxiliary lane are not significantly related to crash frequency for the highway segments under consideration. The results from this exploration are promising since the data used to derive the explanatory variable(s) can be collected using most off-the-shelf GPS devices, including many smartphones. Copyright © 2017 Elsevier Ltd. All rights reserved.
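
    A sketch of the kind of count-data regression described above, using a standard negative binomial GLM from statsmodels on synthetic segment data; the paper's random-parameter specification is not reproduced, and the variable names and coefficients are illustrative assumptions.

      import numpy as np
      import statsmodels.api as sm

      # Negative binomial regression of synthetic segment crash counts on the share of
      # high-magnitude deceleration jerks and log average daily traffic.
      rng = np.random.default_rng(8)
      n_seg = 300

      jerk_prop = rng.uniform(0.0, 0.2, n_seg)          # share of hard-braking events
      log_adt = rng.normal(9.0, 0.5, n_seg)             # log of average daily traffic
      mu = np.exp(-6.0 + 8.0 * jerk_prop + 0.6 * log_adt)
      crashes = rng.poisson(mu * rng.gamma(2.0, 0.5, n_seg))   # overdispersed counts

      X = sm.add_constant(np.column_stack([jerk_prop, log_adt]))
      fit = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
      print(fit.summary())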

  11. Virtual Learning Environment in Continuing Education for Nursing in Oncology: an Experimental Study.

    PubMed

    das Graças Silva Matsubara, Maria; De Domenico, Edvane Birelo Lopes

    2016-12-01

    Nurses working in oncology require continuing education, and distance education is now a possibility. The objectives were to compare learning outcomes of professionals participating in classroom learning versus distance learning; to describe the sociodemographic characteristics and digital fluency of participants; to compare learning outcomes with independent variables; and to assess the adequacy of educational practices in the Moodle virtual learning environment through the constructivist online learning environment survey. This was an experimental, randomized controlled study conducted at the A C Camargo Cancer Center in São Paulo, SP, Brazil. The study included 97 nurses with an average training time of 1 to 2 years. A control group (n = 44) had face-to-face training and the experimental group (n = 53) had training by distance learning, both with identical program content. The dependent variable was the learning outcome, measured by applying a pre-assessment and a post-intervention questionnaire to both groups. The sociodemographic and digital fluency data were uniform among the groups. The performance of both groups was statistically significant (p 0.005), and the control group had a greater advantage (40.4 %). Distance education proved to be an effective alternative for training nurses, especially when they have more complex knowledge, more experience in the area and institutional time. Distance education may be an option for the training of nurses working in oncology.

  12. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
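
    As a plain numerical counterpart to the Meijer G / Fox H expressions described above, the sketch below simply builds the empirical CDF of a product of N standard Gaussians by Monte Carlo (and of its absolute value and square), which a series expansion of the analytical CDF could be checked against. N, the sample size, and the evaluation points are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, samples = 4, 1_000_000

    Z = rng.standard_normal((samples, N))
    prod = Z.prod(axis=1)                 # product of N standard Gaussians

    def ecdf(x, t):
        """Empirical CDF of the sample x evaluated at the points t."""
        return np.searchsorted(np.sort(x), t, side="right") / x.size

    t = np.linspace(-2, 2, 9)
    print(ecdf(prod, t))                  # CDF of the product
    print(ecdf(np.abs(prod), t))          # CDF of its absolute value
    print(ecdf(prod**2, t))               # CDF of its square
    ```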

  13. Distinguishability of generic quantum states

    NASA Astrophysics Data System (ADS)

    Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol

    2016-06-01

    Properties of random mixed states of dimension N distributed uniformly with respect to the Hilbert-Schmidt measure are investigated. We show that for large N, due to the concentration of measure, the trace distance between two random states tends to a fixed number D̃ = 1/4 + 1/π, which yields the Helstrom bound on their distinguishability. To arrive at this result, we apply free random calculus and derive the symmetrized Marchenko-Pastur distribution, which is shown to describe numerical data for the model of coupled quantum kicked tops. The asymptotic value for the root fidelity between two random states, √F = 3/4, can serve as a universal reference value for further theoretical and experimental studies. Analogous results for the quantum relative entropy and the Chernoff quantity provide other bounds on the distinguishability of both states in a multiple-measurement setup due to the quantum Sanov theorem. We also study the mean entropy of coherence of random pure and mixed states and the entanglement of a generic mixed state of a bipartite system.
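
    A minimal numerical check of the concentration result is easy to set up: sample pairs of Hilbert-Schmidt random density matrices via the standard Ginibre construction ρ = GG†/Tr(GG†) and compare the average trace distance with 1/4 + 1/π ≈ 0.568. The dimension and number of trials below are arbitrary; agreement improves as N grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def hs_random_state(n):
        """Random density matrix from the Hilbert-Schmidt measure (Ginibre construction)."""
        g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        rho = g @ g.conj().T
        return rho / np.trace(rho).real

    def trace_distance(a, b):
        # half the trace norm of the (Hermitian) difference
        return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

    N, trials = 64, 200
    d = np.mean([trace_distance(hs_random_state(N), hs_random_state(N)) for _ in range(trials)])
    print(d, 0.25 + 1 / np.pi)   # should be close for large N
    ```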

  14. Single-mode SOA-based 1kHz-linewidth dual-wavelength random fiber laser.

    PubMed

    Xu, Yanping; Zhang, Liang; Chen, Liang; Bao, Xiaoyi

    2017-07-10

    Narrow-linewidth multi-wavelength fiber lasers are of significant interest for fiber-optic sensors, spectroscopy, optical communications, and microwave generation. A novel narrow-linewidth dual-wavelength random fiber laser with single-mode operation, based on semiconductor optical amplifier (SOA) gain, is achieved in this work for the first time, to the best of our knowledge. A simplified theoretical model is established to characterize this kind of random fiber laser. The inhomogeneous gain in the SOA significantly mitigates the mode competition and alleviates the laser instability that are frequently encountered in multi-wavelength fiber lasers with Erbium-doped fiber gain. The enhanced random distributed feedback from a 5 km non-uniform fiber provides coherent feedback, acting as a mode-selection element to ensure single-mode operation with a narrow linewidth of ~1 kHz. The laser noise is also comprehensively investigated, showing that the proposed random fiber laser exhibits suppressed intensity and frequency noise.

  15. An invariance property of generalized Pearson random walks in bounded geometries

    NASA Astrophysics Data System (ADS)

    Mazzolo, Alain

    2009-03-01

    Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, for Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence the average length of the trajectories through the domain is independent of the characteristics of the random walk and depends only on the ratio of the domain's volume to its surface. In this paper, using arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form of the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalized Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
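
    A quick way to see the Cauchy-formula invariance at work is the ballistic limiting case of straight chords through a sphere: under isotropic uniform incidence (entry points uniform on the surface, directions cosine-weighted about the inward normal) the mean traversed length equals 4V/S = 4R/3. The sketch below is only this simplest special case, not the generalized processes treated in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    R, n = 1.0, 1_000_000

    # Isotropic uniform incidence on a sphere: by symmetry only the angle theta between
    # the inward normal and the direction matters, with cos(theta) distributed as sqrt(U)
    # (cosine-weighted incidence). The straight-line chord length is then 2 R cos(theta).
    cos_theta = np.sqrt(rng.random(n))
    chords = 2 * R * cos_theta

    V, S = 4 / 3 * np.pi * R**3, 4 * np.pi * R**2
    print(chords.mean(), 4 * V / S)    # both ~ 4R/3
    ```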

  16. Frequency-dependent scaling from mesoscale to macroscale in viscoelastic random composites

    PubMed Central

    Zhang, Jun

    2016-01-01

    This paper investigates the scaling from a statistical volume element (SVE; i.e. mesoscale level) to representative volume element (RVE; i.e. macroscale level) of spatially random linear viscoelastic materials, focusing on the quasi-static properties in the frequency domain. Requiring the material statistics to be spatially homogeneous and ergodic, the mesoscale bounds on the RVE response are developed from the Hill–Mandel homogenization condition adapted to viscoelastic materials. The bounds are obtained from two stochastic initial-boundary value problems set up, respectively, under uniform kinematic and traction boundary conditions. The frequency and scale dependencies of mesoscale bounds are obtained through computational mechanics for composites with planar random chessboard microstructures. In general, the frequency-dependent scaling to RVE can be described through a complex-valued scaling function, which generalizes the concept originally developed for linear elastic random composites. This scaling function is shown to apply for all different phase combinations on random chessboards and, essentially, is only a function of the microstructure and mesoscale. PMID:27274689

  17. Effect of spatial variability of storm on the optimal placement of best management practices (BMPs).

    PubMed

    Chang, C L; Chiueh, P T; Lo, S L

    2007-12-01

    It is important to design best management practices (BMPs) and determine their proper placement so that water quantity and water quality standards are satisfied while the total cost of the BMPs is kept low. Spatial rainfall variability can strongly affect runoff and non-point source pollution (NPSP), and consequently the optimal design and placement of BMPs can differ as well. The objective of this study was to examine the relationship between the spatial variability of rainfall and the optimal BMP placements. Three synthetic rainfall storms with varied spatial distributions, including uniform rainfall, downstream rainfall and upstream rainfall, were designed. The WinVAST model was applied to predict runoff and NPSP. Additionally, a detention pond and a swale were selected as structural BMPs. Scatter search was applied to find the optimal BMP placement. The results show that the total cost of BMPs is mostly higher under downstream rainfall than under upstream or uniform rainfall. Moreover, the cost of a detention pond is much higher than that of a swale. Thus, even though a detention pond is more efficient at lowering peak flow and pollutant exports, it is not always the selected option in each subbasin.

  18. Random parameter models for accident prediction on two-lane undivided highways in India.

    PubMed

    Dinu, R R; Veeraragavan, A

    2011-02-01

    Generalized linear modeling (GLM), with the assumption of a Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified, and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways comes with a lot of variability within it, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, the proportions of cars, motorized two-wheelers and trucks in traffic, driveway density, and horizontal and vertical curvature are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
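
    For reference, the fixed-parameter GLM baseline that such random-parameter models are compared against can be fitted in a few lines. The sketch below is not the authors' model: it uses statsmodels' negative binomial family on synthetic segment data with made-up covariates and coefficients, purely to illustrate the estimation step.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 300

    # Synthetic segment data (illustrative covariates, not the study's data)
    log_aadt = rng.normal(8.0, 0.5, n)
    curvature = rng.exponential(1.0, n)
    mu = np.exp(-5.0 + 0.7 * log_aadt + 0.15 * curvature)
    lam = rng.gamma(shape=2.0, scale=mu / 2.0)       # gamma heterogeneity -> overdispersion
    accidents = rng.poisson(lam)

    # Fixed-parameter negative binomial GLM
    X = sm.add_constant(np.column_stack([log_aadt, curvature]))
    model = sm.GLM(accidents, X, family=sm.families.NegativeBinomial(alpha=0.5))
    print(model.fit().summary())
    ```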

  19. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from a specific population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due to not only innate human variability but also purely chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable is a factor associated with both the exposure of interest and the outcome of interest. A confounding variable (confounding factor or confounder) is a variable that correlates (positively or negatively) with both the exposure and outcome. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.

  20. Random dopant fluctuations and statistical variability in n-channel junctionless FETs

    NASA Astrophysics Data System (ADS)

    Akhavan, N. D.; Umana-Membreno, G. A.; Gu, R.; Antoszewski, J.; Faraone, L.

    2018-01-01

    The influence of random dopant fluctuations on the statistical variability of the electrical characteristics of n-channel silicon junctionless nanowire transistors (JNTs) has been studied using three-dimensional quantum simulations based on the non-equilibrium Green's function (NEGF) formalism. Average randomly distributed body doping densities of 2 × 10^19, 6 × 10^19 and 1 × 10^20 cm^-3 have been considered, employing an atomistic model for JNTs with gate lengths of 5, 10 and 15 nm. We demonstrate that by properly adjusting the doping density in the JNT, near-ideal statistical variability and electrical performance can be achieved, which can pave the way for the continuation of scaling in silicon CMOS technology.
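
    A back-of-envelope sketch of why such devices are sensitive to random dopant fluctuations: if dopants are placed independently, the number in the channel is approximately Poisson distributed, so its relative fluctuation is 1/√N. The 5 nm × 5 nm cross-section assumed below is hypothetical (the abstract only states doping densities and gate lengths).

    ```python
    densities_cm3 = [2e19, 6e19, 1e20]       # from the abstract
    gate_lengths_nm = [5, 10, 15]
    cross_section_nm2 = 5 * 5                # assumed nanowire cross-section (hypothetical)

    for n_d in densities_cm3:
        for L in gate_lengths_nm:
            volume_cm3 = cross_section_nm2 * L * 1e-21   # 1 nm^3 = 1e-21 cm^3
            mean_dopants = n_d * volume_cm3
            rel_fluct = mean_dopants ** -0.5             # Poisson: sigma/mean = 1/sqrt(N)
            print(f"N_D={n_d:.0e} cm^-3, Lg={L:2d} nm: "
                  f"<N>={mean_dopants:5.1f}, sigma/<N>={rel_fluct:.2f}")
    ```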

  1. Development of low-altitude remote sensing systems for crop production management

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture accounts for within-field variability for targeted treatment rather than uniform treatment of an entire field. Precision agriculture is built on agricultural mechanization and state-of-the-art technologies of geographical information systems (GIS), global positioning systems (G...

  2. Aligning land use with land potential: The role of integrated agriculture

    USDA-ARS?s Scientific Manuscript database

    Contemporary agricultural land use is dominated by an emphasis on provisioning services by applying energy-intensive inputs through relatively uniform production systems across variable landscapes. This approach to agricultural land use is not sustainable. Achieving sustainable use of agricultural...

  3. Lack of association between allozyme heterozygosity and juvenile traits in Eucalyptus

    USDA-ARS?s Scientific Manuscript database

    Genetic variability for juvenile traits, which included basal diameter, height, biomass accumulation, and growth increment, was studied in eight provenances involving four species, Eucalyptus grandis, E. saligna, E. camaldulensis and E. urophylla, under uniform greenhouse conditions. The species diff...

  4. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-04-15

    ... the random variable of interest is viewed in concert with a related random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved situation faced ...

  5. Chapter 18: Variable Frequency Drive Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romberger, Jeff

    An adjustable-speed drive (ASD) includes all devices that vary the speed of a rotating load, including those that vary the motor speed and linkage devices that allow constant motor speed while varying the load speed. The Variable Frequency Drive Evaluation Protocol presented here addresses evaluation issues for variable-frequency drives (VFDs) installed on commercial and industrial motor-driven centrifugal fans and pumps for which torque varies with speed. Constant torque load applications, such as those for positive displacement pumps, are not covered by this protocol.

  6. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    NASA Astrophysics Data System (ADS)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.

  7. Sums and Products of Jointly Distributed Random Variables: A Simplified Approach

    ERIC Educational Resources Information Center

    Stein, Sheldon H.

    2005-01-01

    Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…
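
    The three theorems referred to here are the standard identities for means and variances of sums and products. A Monte Carlo check, with correlated bivariate normal draws as an illustrative example, can make them concrete:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 1_000_000

    # Two correlated random variables (bivariate normal, covariance 0.6)
    cov = [[1.0, 0.6], [0.6, 2.0]]
    X, Y = rng.multivariate_normal([1.0, 2.0], cov, size=n).T

    # E[X + Y] = E[X] + E[Y]
    print((X + Y).mean(), X.mean() + Y.mean())
    # Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)
    print((X + Y).var(), X.var() + Y.var() + 2 * np.cov(X, Y)[0, 1])
    # E[XY] = E[X] E[Y] + Cov(X, Y)
    print((X * Y).mean(), X.mean() * Y.mean() + np.cov(X, Y)[0, 1])
    ```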

  8. A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.

    ERIC Educational Resources Information Center

    Kraemer, Helena Chmura; Thiemann, Sue

    1989-01-01

    Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…

  9. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    ERIC Educational Resources Information Center

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  10. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.

  11. The one-dimensional asymmetric persistent random walk

    NASA Astrophysics Data System (ADS)

    Rossetto, Vincent

    2018-04-01

    Persistent random walks are transport processes intermediate between uniform rectilinear motion and Brownian motion. They are formed by successive steps of random finite lengths and directions travelled at a fixed speed. The isotropic and symmetric 1D persistent random walk is governed by the telegrapher's equation, also called the hyperbolic heat conduction equation. These equations were designed to resolve the paradox of the infinite speed in the heat and diffusion equations. The finiteness of both the speed and the correlation length leads to several classes of random walks: persistent random walks in one dimension can display anomalies that cannot arise for Brownian motion, such as anisotropy and asymmetries. In this work we focus on the case where the mean free path is anisotropic, the only anomaly leading to physics that differs from the telegrapher's case. We derive exact expressions for its Green's function, its scattering statistics and the distribution of the first-passage time at the origin. The phenomenology of the latter shows a transition for quantities such as the escape probability and the residence time.
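
    A minimal simulation of the asymmetric 1D persistent random walk described above: the walker moves at fixed speed, travels exponentially distributed free paths whose mean depends on the direction of motion, and reverses direction after each path; here the first-passage time at the origin is sampled empirically. The speed, mean free paths, and starting point are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def first_passage(x0=1.0, v=1.0, mfp_right=0.5, mfp_left=1.5, t_max=1e3):
        """First-passage time at the origin for an asymmetric persistent random walk."""
        x, t, direction = x0, 0.0, rng.choice([-1, 1])
        while t < t_max:
            mfp = mfp_right if direction > 0 else mfp_left
            step = rng.exponential(mfp)
            # does the walker cross the origin during this straight segment?
            if direction < 0 and x - step <= 0.0:
                return t + x / v
            x += direction * step
            t += step / v
            direction *= -1          # reverse direction at each scattering event
        return np.nan                # did not return within t_max

    times = np.array([first_passage() for _ in range(5_000)])
    print(np.nanmedian(times), np.isnan(times).mean())
    ```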

  12. Field development planning using simulated annealing - optimal economic well scheduling and placement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckner, B.L.; Xong, X.

    1995-12-31

    A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic "travelling salesman problem" is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, nonuniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.

  13. Determining clinical practice of expert physiotherapy for patients undergoing lumbar spinal fusion: a cross-sectional survey study.

    PubMed

    Janssen, Esther R C; Scheijen, Elle E M; van Meeteren, Nico L U; de Bie, Rob A; Lenssen, Anton F; Willems, Paul C; Hoogeboom, Thomas J

    2016-05-01

    To determine the content of current Dutch expert hospital physiotherapy practice for patients undergoing lumbar spinal fusion (LSF), to gain insight into expert-based clinical practice. At each hospital where LSF is performed, one expert physiotherapist received an e-mailed questionnaire, about pre- and postoperative physiotherapy and discharge after LSF. The level of uniformity in goals and interventions was graded on a scale from no uniformity (50-60 %) to very strong uniformity (91-100 %). LSF was performed at 34 of the 67 contacted hospitals. From those 34 hospitals, 28 (82 %) expert physiotherapists completed the survey. Twenty-one percent of the respondents saw patients preoperatively, generally to provide information. Stated postoperative goals and administered interventions focused mainly on performing transfers safely and keeping the patient informed. Outcome measures were scarcely used. There was no uniformity regarding advice on the activities of daily living. Dutch perioperative expert physiotherapy for patients undergoing LSF is variable and lacks structural outcome assessment. Studies evaluating the effectiveness of best-practice physiotherapy are warranted.

  14. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
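
    The Bayesian conditioning step, in its simplest linear-Gaussian form, amounts to the standard Gaussian conditional (kriging) update. The sketch below conditions a 1D stand-in field with a prescribed exponential covariance on a few point measurements; the covariance model, measurement values, and noise level are assumed for illustration and are not the covariances derived from the flow equations in the paper.

    ```python
    import numpy as np

    x = np.linspace(0.0, 10.0, 200)                    # estimation grid

    def cov(a, b, var=1.0, corr_len=2.0):
        """Exponential covariance model (prescribed, not derived from the flow equations)."""
        return var * np.exp(-np.abs(a[:, None] - b[None, :]) / corr_len)

    x_obs = np.array([1.0, 4.0, 7.5])                  # measurement locations (illustrative)
    y_obs = np.array([0.8, -0.3, 0.5])                 # measured field values (illustrative)
    noise = 0.05

    C_oo = cov(x_obs, x_obs) + noise * np.eye(len(x_obs))
    C_go = cov(x, x_obs)

    # Gaussian conditional mean and variance, prior mean zero
    post_mean = C_go @ np.linalg.solve(C_oo, y_obs)
    post_var = cov(x, x).diagonal() - np.einsum('ij,ji->i', C_go, np.linalg.solve(C_oo, C_go.T))
    print(post_mean[:5], post_var[:5])
    ```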

  15. Random Effects: Variance Is the Spice of Life.

    PubMed

    Jupiter, Daniel C

    Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
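
    A toy simulation of the hospital example may help fix ideas: hospitals contribute random intercepts drawn from a normal distribution, patients within hospitals add residual noise, and the intraclass correlation estimates the share of outcome variance attributable to the random hospital effect. All numbers below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_hosp, n_per = 30, 50
    tau, sigma = 0.8, 1.5                      # sd of hospital effects and of patient noise

    hospital_effect = rng.normal(0.0, tau, n_hosp)
    y = hospital_effect[:, None] + rng.normal(0.0, sigma, (n_hosp, n_per))

    # Method-of-moments variance decomposition
    between = y.mean(axis=1).var(ddof=1) - y.var(axis=1, ddof=1).mean() / n_per
    within = y.var(axis=1, ddof=1).mean()
    icc = between / (between + within)
    print(icc, tau**2 / (tau**2 + sigma**2))   # estimate vs true intraclass correlation
    ```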

  16. General Investigation of Tidal Inlets: Stability of Selected United States Tidal Inlets

    DTIC Science & Technology

    1991-09-01

    ... characteristics in relation to the variability of the hydraulic parameters. An inlet can fall into any of four "stability" classes. ... If a fairly straight coast with uniform offshore slopes and a regionally homogeneous wave climate is considered, a reasonable expectation is that the longshore transport quantities and directions are homogeneous. Given a long-term variability in wave climate, a corresponding ...

  17. A Simulation Model of Issue Processing at Naval Supply Depot Yokosuka, Japan.

    DTIC Science & Technology

    1986-03-01

    [Excerpt of the GPSS simulation model code: VARIABLE definitions for the demands received during the workday AM and PM periods (e.g., AMDD and PMDD, computed from V$DDMND and V$NITDD), SPLIT blocks that create the number of requisitions received during each workday period, and ADVANCE/TRANSFER blocks that spread the requisition flow uniformly throughout the workday before transferring the requisitions to PRIAS.]

  18. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    PubMed

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often limited number of available animals of these models, the establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for the standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for the generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include the determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, the determination of the extent of tissue shrinkage related to histological embedding of samples, and the generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.

  19. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the total flow variability in the response of the urban drainage systems to heavy rainfall events.

  20. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    PubMed

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
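
    The "random sampling + ratio reweighting" idea can be sketched with synthetic data: each field location has an auxiliary value (a stand-in for the gene-flow model output), the true adventitious presence rate is correlated with it, and the ratio estimator rescales the sample mean by the known population mean of the auxiliary variable. The distributions and sample sizes below are assumptions, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic field: auxiliary variable from a stand-in gene-flow model, true rate correlated with it
    n_locations = 5000
    aux = rng.gamma(2.0, 0.002, n_locations)             # modelled cross-pollination
    true_rate = 0.8 * aux + rng.gamma(1.0, 0.0005, n_locations)

    def estimator_spread(sample_size=50, reps=2000):
        srs, ratio = [], []
        for _ in range(reps):
            idx = rng.choice(n_locations, sample_size, replace=False)
            srs.append(true_rate[idx].mean())                                  # random sampling
            ratio.append(true_rate[idx].mean() / aux[idx].mean() * aux.mean()) # ratio reweighting
        return np.std(srs), np.std(ratio)

    print(true_rate.mean())                  # target: field-level transgene presence rate
    print(estimator_spread())                # ratio estimator has the smaller spread here
    ```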

  1. Time to rehabilitation in the burn population: incidence of zero onset days in the UDSMR national dataset.

    PubMed

    Schneider, Jeffrey C; Tan, Wei-Han; Goldstein, Richard; Mix, Jacqueline M; Niewczyk, Paulette; Divita, Margaret A; Ryan, Colleen M; Gerrard, Paul B; Kowalske, Karen; Zafonte, Ross

    2013-01-01

    A preliminary investigation of the burn rehabilitation population found large variability in the frequency of zero onset days between facilities. Onset days is defined as the time from injury to inpatient rehabilitation admission; this variable has not been investigated in burn patients previously. This study explored whether this finding was a facility-based phenomenon or a characteristic of burn inpatient rehabilitation patients. This study was a secondary analysis of Uniform Data System for Medical Rehabilitation (UDSmr) data from 2002 to 2007 examining inpatient rehabilitation characteristics among patients with burn injuries. Exclusion criteria were age less than 18 years and discharge against medical advice. Comparisons of demographic, medical and functional data were made between facilities with a high frequency of zero onset days and facilities with a low frequency of zero onset days. A total of 4738 patients from 455 inpatient rehabilitation facilities were included. Twenty-three percent of the population exhibited zero onset days (n = 1103). Sixteen facilities contained zero onset patients; two facilities accounted for 97% of the zero onset subgroup. Facilities with a high frequency of zero onset day patients demonstrated significant differences in demographic, medical, and functional variables compared to the remainder of the study population. There were significantly more zero onset day admissions among burn patients (23%) than among other diagnostic groups (0.5-3.6%) in the Uniform Data System for Medical Rehabilitation database, but the majority (97%) came from two inpatient rehabilitation facilities. It is unexpected for patients with significant burn injury to be admitted to a rehabilitation facility on the day of injury. Future studies investigating burn rehabilitation outcomes using the Uniform Data System for Medical Rehabilitation database should exclude facilities with a high percentage of zero onset days, which are not representative of the burn inpatient rehabilitation population.

  2. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation of the input stochastic process (the extinction function of the medium) is applied. This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. The solution is then used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to derive complete analytical averages for some interesting physical quantities, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with an internal source of radiation is obtained and represented graphically.
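
    A toy illustration of the RVT step, under the strongly simplified assumption of a Beer-Lambert relation T = exp(-L) between transmissivity and a random optical thickness L (the paper's actual solution functions include anisotropic scattering and reflecting boundaries, which this sketch does not attempt): the change-of-variables formula p_T(t) = p_L(-ln t)/t is checked against a Monte Carlo histogram.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Random optical thickness L uniform on [a, b]; toy transmissivity T = exp(-L)
    a, b = 0.5, 2.0
    L = rng.uniform(a, b, 1_000_000)
    T = np.exp(-L)

    # RVT: for T = g(L) with g monotone, p_T(t) = p_L(g^{-1}(t)) |d g^{-1}/dt| = (1/(b-a)) / t
    t = np.linspace(np.exp(-b) + 1e-3, np.exp(-a) - 1e-3, 5)
    pdf_rvt = 1.0 / ((b - a) * t)

    hist, edges = np.histogram(T, bins=200, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    pdf_mc = np.interp(t, centers, hist)
    print(np.c_[t, pdf_rvt, pdf_mc])         # analytical RVT density vs Monte Carlo histogram
    ```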

  3. Output Beam Polarisation of X-ray Lasers with Transient Inversion

    NASA Astrophysics Data System (ADS)

    Janulewicz, K. A.; Kim, C. M.; Matouš, B.; Stiel, H.; Nishikino, M.; Hasegawa, N.; Kawachi, T.

    It is commonly accepted that X-ray lasers, as devices based on amplified spontaneous emission (ASE), do not show any specific polarization in the output beam. Theoretical analysis within the uniform (single-mode) approximation suggested that the output radiation should show a well-defined polarization feature, but one randomly changing from shot to shot. This hypothesis has been verified by experiment using the traditional double-pulse scheme of transient inversion. A membrane beam-splitter was used as a polarization selector. It was found that the output radiation has a significant component of p-polarisation in each shot. To explain the effect and place it in line with the available but scarce data, propagation and kinetic effects in the non-uniform plasma have been analysed.

  4. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  5. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. As opposed to the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings by showing concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, known to produce uniform sampling.
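
    The Hit-and-Run algorithm used as the reference above is itself simple to sketch: from an interior point of {x : Ax ≤ b}, pick a random direction, find the feasible segment along it, and jump to a uniform point on that segment. The polytope, starting point, and number of steps below are arbitrary; this is not the authors' message-passing method.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def hit_and_run(A, b, x0, n_steps=20_000):
        """Approximately uniform sampling from {x : A x <= b}, starting from an interior point x0."""
        x, samples = x0.astype(float), []
        for _ in range(n_steps):
            d = rng.standard_normal(x.size)
            d /= np.linalg.norm(d)
            # the line x + t d stays feasible for t in [t_lo, t_hi]
            Ad, slack = A @ d, b - A @ x
            t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
            t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
            x = x + rng.uniform(t_lo, t_hi) * d
            samples.append(x.copy())
        return np.array(samples)

    # Random polytope: unit box intersected with a few random half-spaces
    dim = 5
    A = np.vstack([np.eye(dim), -np.eye(dim), rng.standard_normal((10, dim))])
    b = np.concatenate([np.ones(dim), np.zeros(dim), np.ones(10)])
    pts = hit_and_run(A, b, x0=np.full(dim, 1e-3))
    print(pts.mean(axis=0))     # estimated centroid of the random polytope
    ```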

  6. Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on an iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
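
    A tiny numerical sketch of the iteration described above: the unperturbed stiffness K0 is factorized once and reused as a preconditioner to solve the perturbed system (K0 + ΔK) u = f without forming derivatives of the element matrices. The matrices here are random stand-ins for a finite element model, and convergence assumes the perturbation is small relative to K0.

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(12)
    n = 50

    # Unperturbed stiffness (symmetric positive definite) and a small symmetric perturbation
    M = rng.standard_normal((n, n))
    K0 = M @ M.T + n * np.eye(n)
    S = rng.standard_normal((n, n))
    dK = 0.05 * (S + S.T)
    f = rng.standard_normal(n)

    c = cho_factor(K0)                     # factorize once, reuse every iteration
    u = cho_solve(c, f)                    # initial guess: unperturbed solution
    for _ in range(30):
        u = cho_solve(c, f - dK @ u)       # iterative perturbation (preconditioned Richardson)

    print(np.linalg.norm((K0 + dK) @ u - f))   # residual of the perturbed system
    ```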

  7. Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Salimi, S.; Jafarizadeh, M. A.

    2009-06-01

    In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In the recipe, the probability of observing the particle on the direct product of graphs is obtained by multiplying the probabilities on the corresponding sub-graphs, a method which is useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probabilities of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also show that for the classical walk the stationary uniform distribution is reached as t → ∞, but for the quantum walk this is not always the case.
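
    The closing statement can be checked numerically on a single cycle: the classical continuous-time walk generated by the graph Laplacian relaxes to the uniform distribution, while the quantum walk (generated here by the adjacency matrix; sign and generator conventions vary) keeps oscillating. The graph size and times are arbitrary.

    ```python
    import numpy as np
    from scipy.linalg import expm

    N = 8
    A = np.zeros((N, N))
    for i in range(N):                       # adjacency matrix of the cycle C_N
        A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

    p0 = np.zeros(N); p0[0] = 1.0
    for t in [1.0, 10.0, 100.0]:
        p_classical = expm(-L * t) @ p0                      # classical CTRW probabilities
        psi = expm(-1j * A * t) @ p0.astype(complex)         # quantum CTQW amplitudes
        p_quantum = np.abs(psi) ** 2
        print(t, p_classical.round(3), p_quantum.round(3))
    # classical -> uniform 1/N; quantum does not settle to a stationary distribution
    ```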

  8. Transport properties of random media: A new effective medium theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, K.; Soukoulis, C.M.

    We present a new method for efficient, accurate calculations of transport properties of random media. It is based on the principle that the wave energy density should be uniform when averaged over length scales larger than the size of the scatterers. This scheme captures the effects of resonant scattering of the individual scatterer exactly, as well as the multiple scattering in a mean-field sense. It has been successfully applied to both "scalar" and "vector" classical wave calculations. Results for the energy transport velocity are in agreement with experiment. This approach is of general use and can be easily extended to treat different types of wave propagation in random media. Copyright 1995 The American Physical Society.

  9. Variation of Ciliary Beat Pattern in Three Different Beating Planes in Healthy Subjects.

    PubMed

    Kempeneers, Celine; Seaton, Claire; Chilvers, Mark A

    2017-05-01

    Digital high-speed video microscopy (DHSV) allows analysis of ciliary beat frequency (CBF) and ciliary beat pattern (CBP) of respiratory cilia in three planes. Normal reference data use a sideways edge to evaluate ciliary dyskinesia and calculate CBF using the time needed for a cilium to complete 10 beat cycles. Variability in CBF within the respiratory epithelium has been described, but data concerning variation of CBP is limited in healthy epithelium. This study aimed to document variability of CBP in normal samples, to compare ciliary function in three profiles, and to compare CBF calculated over five or 10 beat cycles. Nasal brushing samples from 13 healthy subjects were recorded using DHSV in three profiles. CBP and CBF over a 10-beat cycle were evaluated in all profiles, and CBF was reevaluated over five-beat cycles in the sideways edges. A uniform CBP was seen in 82.1% of edges. In the sideways profile, uniformity within the edge was lower (uniform normal CBP, 69.1% [sideways profile]; 97.1% [toward the observer], 92.0% [from above]), and dyskinesia was higher. Interobserver agreement for dyskinesia was poor. CBF was not different between profiles (P = .8097) or between 10 and five beat cycles (P = .1126). Our study demonstrates a lack of uniformity and consistency in manual CBP analysis of samples from healthy subjects, emphasizing the risk of automated CBP analysis in limited regions of interest and of single and limited manual CBP analysis. The toward the observer and from above profiles may be used to calculate CBF but may be less sensitive for evaluation of ciliary dyskinesia and CBP. CBF can be measured reliably by evaluation of only five-beat cycles. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  10. Study on the optimization of the deposition rate of planetary GaN-MOCVD films based on CFD simulation and the corresponding surface model

    PubMed Central

    Fei, Ze-yuan; Xu, Yi-feng; Wang, Jie; Fan, Bing-feng; Ma, Xue-jin; Wang, Gang

    2018-01-01

    Metal-organic chemical vapour deposition (MOCVD) is a key technique for fabricating GaN thin-film structures for light-emitting and semiconductor laser diodes. Film uniformity is an important index of equipment performance and chip processes. This paper introduces a method to improve thin-film quality by optimizing the rotation speeds of the different substrates in a planetary GaN-MOCVD model carrying seven 6-inch wafers. A numerical solution for the transient state at low pressure is obtained using computational fluid dynamics. To evaluate the role of the different zone speeds on the growth uniformity, single-factor analysis is introduced. The results show that the growth rate and uniformity are strongly related to the rotational speed. Next, a response surface model was constructed using the variables and the corresponding simulation results. The optimized combination of the different speeds, obtained with the response surface model and a genetic algorithm balancing growth rate against growth uniformity, is also proposed as a useful reference for industrial applications. This method saves time, and the optimization yields the most uniform and highest-quality thin films. PMID:29515883

  11. Study on the optimization of the deposition rate of planetary GaN-MOCVD films based on CFD simulation and the corresponding surface model.

    PubMed

    Li, Jian; Fei, Ze-Yuan; Xu, Yi-Feng; Wang, Jie; Fan, Bing-Feng; Ma, Xue-Jin; Wang, Gang

    2018-02-01

    Metal-organic chemical vapour deposition (MOCVD) is a key technique for fabricating GaN thin-film structures for light-emitting and semiconductor laser diodes. Film uniformity is an important index of equipment performance and chip processes. This paper introduces a method to improve thin-film quality by optimizing the rotation speeds of the different substrates in a planetary GaN-MOCVD model carrying seven 6-inch wafers. A numerical solution for the transient state at low pressure is obtained using computational fluid dynamics. To evaluate the role of the different zone speeds on the growth uniformity, single-factor analysis is introduced. The results show that the growth rate and uniformity are strongly related to the rotational speed. Next, a response surface model was constructed using the variables and the corresponding simulation results. The optimized combination of the different speeds, obtained with the response surface model and a genetic algorithm balancing growth rate against growth uniformity, is also proposed as a useful reference for industrial applications. This method saves time, and the optimization yields the most uniform and highest-quality thin films.

  12. Study on the optimization of the deposition rate of planetary GaN-MOCVD films based on CFD simulation and the corresponding surface model

    NASA Astrophysics Data System (ADS)

    Li, Jian; Fei, Ze-yuan; Xu, Yi-feng; Wang, Jie; Fan, Bing-feng; Ma, Xue-jin; Wang, Gang

    2018-02-01

    Metal-organic chemical vapour deposition (MOCVD) is a key technique for fabricating GaN thin-film structures for light-emitting and semiconductor laser diodes. Film uniformity is an important index of equipment performance and chip processes. This paper introduces a method to improve thin-film quality by optimizing the rotation speeds of the different substrates in a planetary GaN-MOCVD model carrying seven 6-inch wafers. A numerical solution for the transient state at low pressure is obtained using computational fluid dynamics. To evaluate the role of the different zone speeds on the growth uniformity, single-factor analysis is introduced. The results show that the growth rate and uniformity are strongly related to the rotational speed. Next, a response surface model was constructed using the variables and the corresponding simulation results. The optimized combination of the different speeds, obtained with the response surface model and a genetic algorithm balancing growth rate against growth uniformity, is also proposed as a useful reference for industrial applications. This method saves time, and the optimization yields the most uniform and highest-quality thin films.
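
    The response-surface-plus-genetic-algorithm step described in the three records above can be sketched generically: fit quadratic response surfaces to a handful of "simulation" outputs and then run a global evolutionary optimizer over the speeds. Everything below (the two-speed parameterization, the synthetic rate and non-uniformity functions, the weighting) is a hypothetical stand-in for the CFD outputs, and scipy's differential evolution is used in place of the authors' genetic algorithm.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(14)

    # Hypothetical "simulation" results: deposition rate and non-uniformity vs two zone speeds
    speeds = rng.uniform(0.0, 1.0, (40, 2))
    rate = 1.0 + 0.8 * speeds[:, 0] + 0.5 * speeds[:, 1] - 0.6 * speeds[:, 0] * speeds[:, 1]
    nonunif = 0.3 + (speeds[:, 0] - 0.6) ** 2 + 0.5 * (speeds[:, 1] - 0.4) ** 2

    def quad_features(s):
        s1, s2 = s[..., 0], s[..., 1]
        return np.stack([np.ones_like(s1), s1, s2, s1 * s2, s1**2, s2**2], axis=-1)

    # Quadratic response surfaces fitted by least squares
    coef_rate, *_ = np.linalg.lstsq(quad_features(speeds), rate, rcond=None)
    coef_nonunif, *_ = np.linalg.lstsq(quad_features(speeds), nonunif, rcond=None)

    # Evolutionary global optimizer balancing growth rate against non-uniformity
    def objective(s):
        f = quad_features(np.asarray(s))
        return -(f @ coef_rate) + 2.0 * (f @ coef_nonunif)

    result = differential_evolution(objective, bounds=[(0, 1), (0, 1)], seed=0)
    print(result.x, result.fun)
    ```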

  13. Comparison of Random Forest and Parametric Imputation Models for Imputing Missing Data Using MICE: A CALIBER Study

    PubMed Central

    Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-01-01

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914

  14. Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    PubMed

    Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-03-15

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
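
    The two records above describe a random-forest variant of MICE (implemented in R). A rough Python analogue of the idea, single imputation rather than proper multiple imputation, is scikit-learn's IterativeImputer with a RandomForestRegressor as the per-variable model; the synthetic data and missingness below are illustrative, not the CALIBER data.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(13)
    n = 1000

    # Synthetic data with a nonlinear dependence, then values set missing at random
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    x3 = np.sin(x1) * x2 + 0.1 * rng.normal(size=n)   # nonlinear in x1 and x2
    X = np.column_stack([x1, x2, x3])
    mask = rng.random(X.shape) < 0.2
    X_missing = np.where(mask, np.nan, X)

    imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                               max_iter=5, random_state=0)
    X_imputed = imputer.fit_transform(X_missing)
    print(np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2)))   # RMSE of the imputed entries
    ```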

  15. "Congratulations, you have been randomized into the control group!(?)": issues to consider when recruiting schools for matched-pair randomized control trials of prevention programs.

    PubMed

    Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa

    2008-03-01

    Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization, and in the other (match/randomize and recruitment), we would match schools and randomize them prior to recruitment. We considered how each procedure impacted the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, the equivalence of both treatment and control group schools and the participating and nonparticipating schools on school demographic variables was evaluated. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (eg, readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness to nonparticipating schools.

  16. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the previously suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] away from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model combining 13 studies on tuberculosis vaccine.
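
    For the random-intercept case, the idea of measuring how strongly random effects reduce the conditional variance can be illustrated with a simple variance partition; the ratio computed below is in the spirit of the coefficient described but is not necessarily the paper's exact formula, and the data and variable names are made up.

        # Sketch: how strongly random effects account for conditional variance in a
        # random-intercept model (illustrative variance partition, not the paper's formula).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        clusters, per = 40, 25
        u = rng.normal(0, 2.0, clusters)                 # random intercepts
        rows = []
        for c in range(clusters):
            x = rng.uniform(0, 1, per)
            y = 1.0 + 0.5 * x + u[c] + rng.normal(0, 1.0, per)
            rows += [(c, xi, yi) for xi, yi in zip(x, y)]
        df = pd.DataFrame(rows, columns=["cluster", "x", "y"])

        fit = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
        var_u = float(fit.cov_re.iloc[0, 0])             # random-intercept variance
        var_e = float(fit.scale)                         # residual variance
        print("proportion of conditional variance from random effects:",
              round(var_u / (var_u + var_e), 3))         # ~ 2^2 / (2^2 + 1) = 0.8 here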

  17. Dynamical properties of the S =1/2 random Heisenberg chain

    NASA Astrophysics Data System (ADS)

    Shu, Yu-Rong; Dupont, Maxime; Yao, Dao-Xin; Capponi, Sylvain; Sandvik, Anders W.

    2018-03-01

    We study dynamical properties at finite temperature (T) of Heisenberg spin chains with random antiferromagnetic exchange couplings, which realize the random singlet phase in the low-energy limit, using three complementary numerical methods: exact diagonalization, matrix-product-state algorithms, and stochastic analytic continuation of quantum Monte Carlo results in imaginary time. Specifically, we investigate the dynamic spin structure factor S(q,ω) and its ω→0 limit, which are closely related to inelastic neutron scattering and nuclear magnetic resonance (NMR) experiments (through the spin-lattice relaxation rate 1/T1). Our study reveals a continuous narrow band of low-energy excitations in S(q,ω), extending throughout the q space, instead of being restricted to q≈0 and q≈π as found in the uniform system. Close to q=π, the scaling properties of these excitations are well captured by the random-singlet theory, but disagreements also exist with some aspects of the predicted q dependence further away from q=π. Furthermore we also find spin diffusion effects close to q=0 that are not contained within the random-singlet theory but give non-negligible contributions to the mean 1/T1. To compare with NMR experiments, we consider the distribution of the local relaxation rates 1/T1. We show that the local 1/T1 values are broadly distributed, approximately according to a stretched exponential. The mean 1/T1 first decreases with T, but below a crossover temperature it starts to increase and likely diverges in the limit of a small nuclear resonance frequency ω0. Although a similar divergent behavior has been predicted and experimentally observed for the static uniform susceptibility, this divergent behavior of the mean 1/T1 has never been experimentally observed. Indeed, we show that the divergence of the mean 1/T1 is due to rare events in the disordered chains and is concealed in experiments, where the typical 1/T1 value is accessed.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  19. Random vectors and spatial analysis by geostatistics for geotechnical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, D.S.

    1987-08-01

    Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are vectorial rather than scalar, and their structural analysis requires interpolation of the sampled variables to construct and characterize structural models. A better local estimator results in higher-quality input models, and geostatistics provides such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to the spatial analysis of random vectors in geoscience as well as in various geotechnical fields, including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block cavings, petroleum engineering, and hydrologic and hydraulic modeling.

  20. Self-triggering superconducting fault current limiter

    DOEpatents

    Yuan, Xing [Albany, NY; Tekletsadik, Kasegn [Rexford, NY

    2008-10-21

    A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor which is electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network will cause the superconductor to transition to its resistive state, generate a uniform magnetic field in the trigger coil, and simultaneously limit the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors, significantly reducing the burnout risk associated with the non-uniformity that often exists within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

  1. Simulation to coating weight control for galvanizing

    NASA Astrophysics Data System (ADS)

    Wang, Junsheng; Yan, Zhang; Wu, Kunkui; Song, Lei

    2013-05-01

    Zinc coating weight control is one of the most critical issues for a continuous galvanizing line. The process is characterized by large and variable time delays, nonlinearity, and multiple variables, which can result in serious coating weight errors and non-uniform coating. We develop a control system that automatically adjusts the air knife pressure and position to give a constant and uniform zinc coating, in accordance with the customer-order specification, through an auto-adaptive empirical model-based feedforward controller and two model-free adaptive feedback controllers. The proposed models and controllers were applied to a continuous galvanizing line (CGL) at Angang Steel Works. Production results show that the precision and stability of the control model reduce over-coating weight and improve coating uniformity. The product of this hot dip galvanizing line not only satisfies the customers' quality requirements but also reduces zinc consumption.

  2. Design and analysis of reflector for uniform light-emitting diode illuminance.

    PubMed

    Tsai, Chung-Yu

    2013-05-01

    A light-emitting diode (LED) projection system is proposed, composed of an LED chip and a variable-focus-parabolic (VFP) reflector, in which the focal length varies as a function of the vertical displacement of the incidence point relative to the horizontal centerline of the LED chip. The light-ray paths within the projection system are analyzed using an exact analytical model and a skew-ray tracing approach. The profile of the proposed VFP reflector and the position of the LED chip are then optimized in such a way as to enhance the uniformity of the illuminance distribution on the target region of the image plane. The validity of the optimized design is demonstrated by means of ZEMAX simulations. It is shown that the optimized VFP projector system yields a significant improvement in illuminance uniformity compared to conventional spherical and parabolic projectors and therefore minimizes the glare effect.

  3. From School Choice to Student Voice.

    ERIC Educational Resources Information Center

    Heckman, Paul E.; Montera, Viki L.

    2001-01-01

    Educational mass marketing approaches are like fast-food franchises; they offer homogeneous, standardized products that cannot satisfy every consumer's needs. A niche market looks inside the masses to address more individual, specialized choices missing from the menu. Variability, not uniformity, should guide development of public schooling. (MLH)

  4. Spatial distribution visualization of PWM continuous variable-rate spray

    USDA-ARS?s Scientific Manuscript database

    Chemical application is a dynamic spatial distribution process, during which spray liquid covers the targets with certain thickness and uniformity. Therefore, it is important to study the 2-D and 3-D (dimensional) spray distribution to evaluate spraying quality. The curve-surface generation methods ...

  5. The evolution of flowering strategies in US weedy rice

    USDA-ARS?s Scientific Manuscript database

    Local adaptation in plants often involves changes in flowering time in response to day length and temperature differences. Many crop varieties have been selected for uniformity in flowering time. In contrast, variable flowering may be important for increased competitiveness in weed species invading ...

  6. Modeling and analysis of LWIR signature variability associated with 3D and BRDF effects

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Less, David; Jin, Xuemin; Rynes, Peter

    2016-05-01

    Algorithms for retrieval of surface reflectance, emissivity or temperature from a spectral image almost always assume uniform illumination across the scene and horizontal surfaces with Lambertian reflectance. When these algorithms are used to process real 3-D scenes, the retrieved "apparent" values contain the strong, spatially dependent variations in illumination as well as surface bidirectional reflectance distribution function (BRDF) effects. This is especially problematic with horizontal or near-horizontal viewing, where many observed surfaces are vertical, and where horizontal surfaces can show strong specularity. The goals of this study are to characterize long-wavelength infrared (LWIR) signature variability in a HSI 3-D scene and develop practical methods for estimating the true surface values. We take advantage of synthetic near-horizontal imagery generated with the high-fidelity MultiService Electro-optic Signature (MuSES) model, and compare retrievals of temperature and directional-hemispherical reflectance using standard sky downwelling illumination and MuSES-based non-uniform environmental illumination.

  7. A life cycle cost economics model for projects with uniformly varying operating costs. [management planning

    NASA Technical Reports Server (NTRS)

    Remer, D. S.

    1977-01-01

    A mathematical model is developed for calculating the life cycle costs for a project where the operating costs increase or decrease in a linear manner with time. The life cycle cost is shown to be a function of the investment costs, initial operating costs, operating cost gradient, project life time, interest rate for capital and salvage value. The results show that the life cycle cost for a project can be grossly underestimated (or overestimated) if the operating costs increase (or decrease) uniformly over time rather than being constant as is often assumed in project economic evaluations. The following range of variables is examined: (1) project life from 2 to 30 years; (2) interest rate from 0 to 15 percent per year; and (3) operating cost gradient from 5 to 90 percent of the initial operating costs. A numerical example plus tables and graphs is given to help calculate project life cycle costs over a wide range of variables.
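
    The size of the effect is easy to see with the standard present-worth factors for a uniform series and an arithmetic gradient; the figures below are illustrative values, not the report's, and salvage value is omitted for brevity.

        # Sketch: present worth of project costs when operating costs grow linearly,
        # versus assuming they stay constant (illustrative numbers, not the report's).
        def pw_uniform(a, i, n):          # present worth of a uniform series A per year
            return a * ((1 + i) ** n - 1) / (i * (1 + i) ** n)

        def pw_gradient(g, i, n):         # present worth of an arithmetic gradient G per year
            return g * ((1 + i) ** n - i * n - 1) / (i ** 2 * (1 + i) ** n)

        investment = 1_000_000.0          # initial investment cost
        c0 = 100_000.0                    # first-year operating cost
        i, n = 0.08, 20                   # interest rate, project life (years)
        g = 0.10 * c0                     # operating costs rise by 10% of C0 each year

        lcc_constant = investment + pw_uniform(c0, i, n)
        lcc_gradient = investment + pw_uniform(c0, i, n) + pw_gradient(g, i, n)
        print(f"LCC assuming constant operating cost: {lcc_constant:,.0f}")
        print(f"LCC with linearly increasing cost:    {lcc_gradient:,.0f}")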

  8. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  9. Report on hard red spring wheat varieties grown in cooperative plot and nursery experiments in the spring wheat region in 2016

    USDA-ARS?s Scientific Manuscript database

    The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 86th year in 2016. The nursery contained 26 entries submitted by 8 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ...

  10. Sample-based estimation of tree species richness in a wet tropical forest compartment

    Treesearch

    Steen Magnussen; Raphael Pelissier

    2007-01-01

    Petersen's capture-recapture ratio estimator and the well-known bootstrap estimator are compared across a range of simulated low-intensity simple random sampling with fixed-area plots of 100 m² in a rich wet tropical forest compartment with 93 tree species in the Western Ghats of India. Petersen's ratio estimator was uniformly superior to the bootstrap...

  11. Electromagnetic properties of material coated surfaces

    NASA Technical Reports Server (NTRS)

    Beard, L.; Berrie, J.; Burkholder, R.; Dominek, A.; Walton, E.; Wang, N.

    1989-01-01

    The electromagnetic properties of material coated conducting surfaces were investigated. The coating geometries consist of uniform layers over a planar surface, irregularly shaped formations near edges and randomly positioned, electrically small, irregularly shaped formations over a surface. Techniques to measure the scattered field and constitutive parameters from these geometries were studied. The significance of the scattered field from these geometries warrants further study.

  12. Report on hard red spring wheat varieties grown in cooperative plot and nursery experiments in the spring wheat region in 2014

    USDA-ARS?s Scientific Manuscript database

    The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 84th year in 2014. The nursery contained 26 entries submitted by 6 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ex...

  13. Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.

    PubMed

    Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E

    2000-10-01

    To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.

  14. Electroforming free controlled bipolar resistive switching in Al/CoFe2O4/FTO device with self-compliance effect

    NASA Astrophysics Data System (ADS)

    Munjal, Sandeep; Khare, Neeraj

    2018-02-01

    Controlled bipolar resistive switching (BRS) has been observed in nanostructured CoFe2O4 (CFO) films using an Al (aluminum)/CoFe2O4/FTO (fluorine-doped tin oxide) device. The fabricated device shows electroforming-free uniform BRS with two clearly distinguished and stable resistance states without any application of compliance current, with a resistance ratio of the high resistance state (HRS) and the low resistance state (LRS) of >10². Small switching voltage (<1 volt) and lower current in both the resistance states confirm the fabrication of a low power consumption device. In the LRS, the conduction mechanism was found to be Ohmic in nature, while the high-resistance state (HRS/OFF state) was governed by the space charge-limited conduction mechanism, which indicates the presence of an interfacial layer with an imperfect microstructure near the top Al/CFO interface. The device shows nonvolatile behavior with good endurance properties, an acceptable resistance ratio, uniform resistive switching due to stable, less random filament formation/rupture, and a control over the resistive switching properties by choosing different stop voltages, which makes the device suitable for its application in future nonvolatile resistive random access memory.

  15. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
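
    The undersampling effect and the (n + 1)/(n - 1) correction for a uniform random distribution are straightforward to confirm by Monte Carlo simulation, as in the illustrative sketch below.

        # Sketch: the observed age range of n dates drawn uniformly from a true
        # interval underestimates that interval; multiplying by (n+1)/(n-1) corrects it.
        import numpy as np

        rng = np.random.default_rng(0)
        true_interval = 1.0               # true duration (arbitrary units)
        for n in (5, 10):
            samples = rng.uniform(0.0, true_interval, size=(100_000, n))
            observed = samples.max(axis=1) - samples.min(axis=1)
            print(f"n={n:2d}  mean observed/true = {observed.mean():.3f}"
                  f"  corrected = {observed.mean() * (n + 1) / (n - 1):.3f}")
        # Expected: raw ratios ~0.667 and ~0.818; ~1.0 after the correction.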

  16. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    PubMed

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.

  17. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    PubMed Central

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-01

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042

  18. Wave Propagation inside Random Media

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaojun

    This thesis presents results of studies of wave scattering within and transmission through random and periodic systems. The main focus is on energy profiles inside quasi-1D and 1D random media. The connection between transport and the states of the medium is manifested in the equivalence of the dimensionless conductance, g, and the Thouless number, which is the ratio of the average linewidth and spacing of energy levels. This equivalence and theories regarding the energy profiles inside random media are based on the assumption that the LDOS is uniform throughout the samples. We have conducted microwave measurements of the longitudinal energy profiles within disordered samples contained in a copper tube supporting multiple waveguide channels, with an antenna moving along a slit on the tube. These measurements allow us to determine the local density of states (LDOS) at a location, which is the sum of energy from all incoming channels on both sides. For diffusive samples, the LDOS is uniform and the energy profile decays linearly as expected. However, for localized samples, we find that the LDOS drops sharply towards the middle of the sample and the energy profile does not follow the result of the local diffusion theory, where the LDOS is assumed to be uniform. We analyze the field spectra in terms of quasi-normal modes and find that the mode linewidth and the number of modes saturate as the sample length increases. Thus the Thouless number saturates while the dimensionless conductance g continues to fall with increasing length, indicating that the modes are localized near the boundaries. This is in contrast to the general belief that g and the Thouless number follow the same scaling behavior. Previous measurements show that single parameter scaling (SPS) still holds in the same sample where the LDOS is suppressed (Shi et al., 2014). We explore the extension of SPS to the interior of the sample by analyzing statistics of the logarithm of the energy density, ln W(x), and find that <ln W(x)> = -x/l, where l is the transport mean free path. The result does not depend on the sample length, which is counterintuitive yet remarkably simple. More surprisingly, the linear fall-off of the energy profile holds for totally disordered random 1D layered samples in simulations, where the LDOS is uniform, as well as for single-mode random waveguide experiments and 1D nearly periodic samples, where the LDOS is suppressed in the middle of the sample. The generalization of the transmission matrix to the interior of quasi-1D random samples, which is defined as the field matrix, and its eigenvalue statistics are also discussed. The maximum energy deposition at a location is not given by the intensity of the first transmission eigenchannel but by the eigenvalue of the first energy density eigenchannel at that cross section, which can be much greater than the average value. The contrast in optimal focusing, which is the ratio of the intensity at the focused point to the background intensity, is determined by the participation number of the energy density eigenvalues, and its inverse gives the variance of the energy density at that cross section in a single configuration. We have also studied topological states in photonic structures. We have demonstrated robust propagation of electromagnetic waves along reconfigurable pathways within a topological photonic metacrystal.
Since the wave is confined within the domain wall, which is the boundary between two distinct topological insulating systems, we can freely steer the wave by reconstructing the photonic structure. Other topics, such as speckle pattern evolutions and the effects of boundary conditions on the statistics of transmission eigenvalues and energy profiles are also discussed.

  19. Filtering and Gridding Satellite Observations of Cloud Variables to Compare with Climate Model Output

    NASA Astrophysics Data System (ADS)

    Pitts, K.; Nasiri, S. L.; Smith, N.

    2013-12-01

    Global climate models have improved considerably over the years, yet clouds still represent a large factor of uncertainty for these models. Comparisons of model-simulated cloud variables with equivalent satellite cloud products are the best way to start diagnosing the differences between model output and observations. Gridded (level 3) cloud products from many different satellites and instruments are required for a full analysis, but these products are created by different science teams using different algorithms and filtering criteria to create similar, but not directly comparable, cloud products. This study makes use of a recently developed uniform space-time gridding algorithm to create a new set of gridded cloud products from each satellite instrument's level 2 data of interest which are each filtered using the same criteria, allowing for a more direct comparison between satellite products. The filtering is done via several variables such as cloud top pressure/height, thermodynamic phase, optical properties, satellite viewing angle, and sun zenith angle. The filtering criteria are determined based on the variable being analyzed and the science question at hand. Each comparison of different variables may require different filtering strategies as no single approach is appropriate for all problems. Beyond inter-satellite data comparison, these new sets of uniformly gridded satellite products can also be used for comparison with model-simulated cloud variables. Of particular interest to this study are the differences in the vertical distributions of ice and liquid water content between the satellite retrievals and model simulations, especially in the mid-troposphere where there are mixed-phase clouds to consider. This presentation will demonstrate the proof of concept through comparisons of cloud water path from Aqua MODIS retrievals and NASA GISS-E2-[R/H] model simulations archived in the CMIP5 data portal.
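
    A uniform gridding step of the kind described, filtering level-2 pixels by common criteria and then aggregating them onto a regular latitude-longitude grid, might look like the sketch below; the column names, thresholds, and 1-degree resolution are assumptions for illustration, not the study's actual algorithm.

        # Sketch: filter level-2 cloud retrievals and aggregate them onto a uniform
        # 1-degree grid (illustrative column names and thresholds only).
        import numpy as np
        import pandas as pd

        def grid_level2(df, lat_col="lat", lon_col="lon", value_col="cloud_water_path",
                        max_view_angle=40.0, max_sza=80.0, phase="liquid", res=1.0):
            # Apply the same filtering criteria to every instrument's level-2 data.
            ok = (
                (df["view_angle"] <= max_view_angle)
                & (df["sun_zenith_angle"] <= max_sza)
                & (df["phase"] == phase)
            )
            d = df.loc[ok].copy()
            # Snap each pixel to the centre of its grid cell.
            d["lat_bin"] = np.floor(d[lat_col] / res) * res + res / 2
            d["lon_bin"] = np.floor(d[lon_col] / res) * res + res / 2
            return (d.groupby(["lat_bin", "lon_bin"])[value_col]
                      .agg(["mean", "std", "count"])
                      .reset_index())

        # Usage (with a hypothetical level-2 DataFrame `l2`):
        # l3 = grid_level2(l2)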

  20. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are the random sampling of observations and the restricted set of input variables available at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
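
    The article's examples use R packages (conditional inference trees, random forests, model-based recursive partitioning); the sketch below shows the same two ideas, recursive binary partitioning and a forest whose diversity comes from bootstrap resampling plus a restricted set of candidate split variables, using scikit-learn in Python as a stand-in rather than the R functions discussed.

        # Sketch: a single decision tree versus a random forest (Python analogue of
        # the R workflow described; not the article's code).
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)

        tree = DecisionTreeClassifier(random_state=0)     # recursive binary partitioning
        forest = RandomForestClassifier(
            n_estimators=500,
            max_features="sqrt",      # restricted set of candidate variables per split
            random_state=0)           # diversity also comes from bootstrap resampling

        print("single tree  :", round(cross_val_score(tree, X, y, cv=5).mean(), 3))
        print("random forest:", round(cross_val_score(forest, X, y, cv=5).mean(), 3))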

  1. Influence of process parameters on content uniformity of a low dose active pharmaceutical ingredient in a tablet formulation according to GMP.

    PubMed

    Muselík, Jan; Franc, Aleš; Doležel, Petr; Goněc, Roman; Krondlová, Anna; Lukášová, Ivana

    2014-09-01

    The article describes the development and production of tablets using direct compression of powder mixtures. The aim was to describe the impact of filler particle size and the time of lubricant addition during mixing on content uniformity according to the Good Manufacturing Practice (GMP) process validation requirements. Processes are regulated by complex directives, forcing the producers to validate, using sophisticated methods, the content uniformity of intermediates as well as final products. Cutting down production time and material, shortening analyses, and fast and reliable statistical evaluation of results can reduce the final price without affecting product quality. The manufacturing process of directly compressed tablets containing the low dose active pharmaceutical ingredient (API) warfarin, with content uniformity passing validation criteria, is used as a model example. Statistical methods have proved that the manufacturing process is reproducible. Methods suitable for elucidation of various properties of the final blend, e.g., measurement of electrostatic charge by Faraday pail and evaluation of the mutual influences of the studied variables by partial least squares (PLS) regression, were used. Using these methods, it was shown that the filler with larger particle size increased the content uniformity of both the blends and the ensuing tablets. Addition of the lubricant, magnesium stearate, during the blending process improved the content uniformity of blends containing the filler with larger particles. This seems to be caused by reduced sampling error due to the suppression of electrostatic charge.

  2. The current impact flux on Mars and its seasonal variation

    NASA Astrophysics Data System (ADS)

    JeongAhn, Youngmin; Malhotra, Renu

    2015-12-01

    We calculate the present-day impact flux on Mars and its variation over the martian year, using the current data on the orbital distribution of known Mars-crossing minor planets. We adapt the Öpik-Wetherill formulation for calculating collision probabilities, paying careful attention to the non-uniform distribution of the perihelion longitude and the argument of perihelion owed to secular planetary perturbations. We find that, at the current epoch, the Mars crossers have an axial distribution of the argument of perihelion, and the mean direction of their eccentricity vectors is nearly aligned with Mars' eccentricity vector. These previously neglected angular non-uniformities have the effect of depressing the mean annual impact flux by a factor of about 2 compared to the estimate based on a uniform random distribution of the angular elements of Mars-crossers; the amplitude of the seasonal variation of the impact flux is likewise depressed by a factor of about 4-5. We estimate that the flux of large impactors (of absolute magnitude H < 16) within ±30° of Mars' aphelion is about three times larger than when the planet is near perihelion. Extrapolation of our results to a model population of meter-size Mars-crossers shows that if these small impactors have a uniform distribution of their angular elements, then their aphelion-to-perihelion impact flux ratio would be 11-15, but if they track the orbital distribution of the large impactors, including their non-uniform angular elements, then this ratio would be about 3. Comparison of our results with the current dataset of fresh impact craters on Mars (detected with Mars-orbiting spacecraft) appears to rule out the uniform distribution of angular elements.

  3. A distributed scheduling algorithm for heterogeneous real-time systems

    NASA Technical Reports Server (NTRS)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.

  4. Phase transition in nonuniform Josephson arrays: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lozovik, Yu. E.; Pomirchy, L. M.

    1994-01-01

    A disordered 2D system with Josephson interactions is considered. The disordered XY model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants J_ij are equal to J with probability p and zero otherwise (bond percolation problem); (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function, and helicity modulus is analyzed. The phase diagram of the diluted system in the T_c-p plane is obtained.
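
    Both disorder types described above (bond dilution with probability p, and couplings drawn uniformly from an interval) are simple to set up in a Metropolis simulation of the 2D XY model; the sketch below is a minimal illustration of such a simulation, not the authors' code.

        # Sketch: Metropolis Monte Carlo of a 2D XY model with random Josephson
        # couplings J_ij (minimal illustration; not the paper's implementation).
        import numpy as np

        rng = np.random.default_rng(0)
        L = 16                                      # lattice size (L x L, periodic)

        def make_couplings(kind="diluted", p=0.7, lo=0.2, hi=1.0):
            # Jx[i, j] couples (i, j)-(i+1, j); Jy[i, j] couples (i, j)-(i, j+1).
            if kind == "diluted":                   # J = 1 with probability p, else 0
                Jx = np.where(rng.random((L, L)) < p, 1.0, 0.0)
                Jy = np.where(rng.random((L, L)) < p, 1.0, 0.0)
            else:                                   # J uniform in [lo, hi]
                Jx = rng.uniform(lo, hi, (L, L))
                Jy = rng.uniform(lo, hi, (L, L))
            return Jx, Jy

        def local_energy(theta, Jx, Jy, i, j):
            # Energy of the four bonds attached to site (i, j): -J cos(theta_i - theta_j)
            e = -Jx[i, j] * np.cos(theta[i, j] - theta[(i + 1) % L, j])
            e -= Jx[(i - 1) % L, j] * np.cos(theta[i, j] - theta[(i - 1) % L, j])
            e -= Jy[i, j] * np.cos(theta[i, j] - theta[i, (j + 1) % L])
            e -= Jy[i, (j - 1) % L] * np.cos(theta[i, j] - theta[i, (j - 1) % L])
            return e

        def metropolis_sweep(theta, Jx, Jy, T):
            for _ in range(L * L):
                i, j = rng.integers(L), rng.integers(L)
                old = theta[i, j]
                e_old = local_energy(theta, Jx, Jy, i, j)
                theta[i, j] = rng.uniform(0, 2 * np.pi)
                dE = local_energy(theta, Jx, Jy, i, j) - e_old
                if dE > 0 and rng.random() >= np.exp(-dE / T):
                    theta[i, j] = old               # reject the move
            return theta

        Jx, Jy = make_couplings("diluted", p=0.7)
        theta = rng.uniform(0, 2 * np.pi, (L, L))
        for sweep in range(200):                    # short run for illustration
            theta = metropolis_sweep(theta, Jx, Jy, T=0.5)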

  5. Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod

    USGS Publications Warehouse

    Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.

    2008-01-01

    To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate that units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and practically implemented than the others. © 2008 The Society of Population Ecology and Springer.

  6. Seven Deadly Sins in Trauma Outcomes Research: An Epidemiologic Post-Mortem for Major Causes of Bias

    PubMed Central

    del Junco, Deborah J.; Fox, Erin E.; Camp, Elizabeth A.; Rahbar, Mohammad H.; Holcomb, John B.

    2013-01-01

    Background Because randomized clinical trials (RCTs) in trauma outcomes research are expensive and complex, they have rarely been the basis for the clinical care of trauma patients. Most published findings are derived from retrospective and occasionally prospective observational studies that may be particularly susceptible to bias. The sources of bias include some common to other clinical domains, such as heterogeneous patient populations with competing and interdependent short- and long-term outcomes. Other sources of bias are unique to trauma, such as rapidly changing multi-system responses to injury that necessitate highly dynamic treatment regimes like blood product transfusion. The standard research design and analysis strategies applied in published observational studies are often inadequate to address these biases. Methods Drawing on recent experience in the design, data collection, monitoring and analysis of the 10-site observational PROMMTT study, seven common and sometimes overlapping biases are described through examples and resolution strategies. Results Sources of bias in trauma research include ignoring 1) variation in patients’ indications for treatment (indication bias), 2) the dependency of intervention delivery on patient survival (survival bias), 3) time-varying treatment, 4) time-dependent confounding, 5) non-uniform intervention effects over time, 6) non-random missing data mechanisms, and 7) imperfectly defined variables. This list is not exhaustive. Conclusion The mitigation strategies to overcome these threats to validity require epidemiologic and statistical vigilance. Minimizing the highlighted types of bias in trauma research will facilitate clinical translation of more accurate and reproducible findings and improve the evidence-base that clinicians apply in their care of injured patients. PMID:23778519

  7. Medicaid Waivers and Public Sector Mental Health Service Penetration Rates for Youth.

    PubMed

    Graaf, Genevieve; Snowden, Lonnie

    2018-01-22

    To assist families of youth with serious emotional disturbance in financing youth's comprehensive care, some states have sought and received Medicaid waivers. Medicaid waivers waive or relax the Medicaid means test for eligibility to provide insurance coverage to nonpoor families for expensive, otherwise out-of-reach treatment for youth with Serious Emotional Disturbance (SED). Waivers promote treatment access for the most troubled youth, and the present study investigated whether any of several Medicaid waiver options, and in particular those that completely omit the means test, are associated with higher state-wide public sector treatment penetration rates. The investigators obtained data from the U.S. Census, SAMHSA's Uniform Reporting System, and the Centers for Medicare and Medicaid Services. Analysis employed random intercept and random slope linear regression models, controlling for a variety of state demographic and fiscal variables, to determine whether a relationship between Medicaid waiver policies and state-level public sector penetration rates could be observed. Findings indicate that, whether relaxing or completely waiving Medicaid's qualifying income limits, waivers increase public sector penetration rates, particularly for youth under age 17. However, completely waiving Medicaid income limits did not uniquely contribute to penetration rate increases. States offering Medicaid waivers that either relax or completely waive Medicaid's means test to qualify for health coverage present higher public sector treatment rates for youth with behavioral health care needs. There is no evidence that restricting the program to waiving the means test for accessing Medicaid would increase treatment access. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. PERIODOGRAMS FOR MULTIBAND ASTRONOMICAL TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VanderPlas, Jacob T.; Ivezic, Željko

    This paper introduces the multiband periodogram, a general extension of the well-known Lomb–Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb–Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST’s bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
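
    The multiband implementation is available from the authors (see the GitHub reference above); the sketch below uses the generic single-band Lomb-Scargle periodogram from astropy on an irregularly sampled synthetic light curve, to illustrate the basic period search that the multiband method generalizes.

        # Sketch: single-band Lomb-Scargle period search on irregularly sampled data
        # (the basic building block that the multiband periodogram generalizes).
        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(42)
        true_period = 0.61                            # days, an RR Lyrae-like signal
        t = np.sort(rng.uniform(0, 180, 120))         # random observation epochs
        y = 15.0 + 0.4 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.05, 120)
        dy = np.full_like(y, 0.05)                    # per-point uncertainties

        frequency, power = LombScargle(t, y, dy).autopower(
            minimum_frequency=0.1, maximum_frequency=10.0)
        best_period = 1.0 / frequency[np.argmax(power)]
        print(f"recovered period: {best_period:.3f} d (true {true_period} d)")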

  9. Methods and systems for fabricating high quality superconducting tapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majkic, Goran; Selvamanickam, Venkat

    An MOCVD system fabricates high quality superconductor tapes with variable thicknesses. The MOCVD system can include a gas flow chamber between two parallel channels in a housing. A substrate tape is heated and then passed through the MOCVD housing such that the gas flow is perpendicular to the tape's surface. Precursors are injected into the gas flow for deposition on the substrate tape. In this way, superconductor tapes can be fabricated with variable thicknesses, uniform precursor deposition, and high critical current densities.

  10. Optical, near, infrared and ultraviolet monitoring of the Seyfert 1 galaxy Markarian 335

    NASA Technical Reports Server (NTRS)

    Shrader, Chris R.; Sun, W.-H.; Turner, T. J.; Hintzen, P. M.

    1990-01-01

    Preliminary results of a multifrequency monitoring campaign for the bright Seyfert 1 galactic nucleus Mkn 335 are presented. Nearly uniform sampling at 3-day intervals is achieved quasi-simultaneously in each wavelength band. Wavelength-dependent variability is seen at the 20 to 30 percent level. Interpretation of the variability in terms of geometrically thin, optically thick accretion disk models is discussed, as are the inferred black hole masses and accretion rates. A possible correlation between continuum and emission line variations is also discussed.

  11. A new variable interval schedule with constant hazard rate and finite time range.

    PubMed

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
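
    One linear rule consistent with this description (an assumption for illustration, not a transcription of the schedule) is to reinforce a trial of duration t with probability 1 - t/(2T); under that rule the momentary reinforcement probability works out to 1/(2T) at every instant, so inter-reinforcement intervals are approximately exponential with mean 2T even though every programmed trial is shorter than 2T. The sketch below checks this by simulation.

        # Sketch: trial durations drawn from Uniform(0, 2T) with a reinforcement
        # probability linear in duration, p(t) = 1 - t/(2T) (an assumed form of the
        # rule). The resulting reinforcement hazard is constant, so inter-reinforcement
        # intervals come out approximately exponential with mean 2T.
        import numpy as np

        rng = np.random.default_rng(0)
        T = 30.0                                  # schedule parameter (seconds)
        n_trials = 200_000

        durations = rng.uniform(0.0, 2.0 * T, n_trials)
        reinforced = rng.random(n_trials) < (1.0 - durations / (2.0 * T))

        # Time between successive reinforcements (summing over unreinforced trials).
        times = np.cumsum(durations)
        iri = np.diff(times[reinforced])

        print("fraction of trials reinforced:", round(reinforced.mean(), 3))   # ~0.5
        print("mean inter-reinforcement interval:", round(iri.mean(), 1))      # ~2T = 60 s
        print("std  inter-reinforcement interval:", round(iri.std(), 1))       # ~2T if exponential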

  12. Transverse Motion of a Particle with an Oscillating Charge and Variable Mass in a Magnetic Field

    NASA Astrophysics Data System (ADS)

    Alisultanov, Z. Z.; Ragimkhanov, G. B.

    2018-03-01

    The problem of the motion of a particle with an oscillating electric charge and variable mass in a uniform magnetic field has been solved. Three laws of mass variation have been considered: linear growth, oscillations, and stepwise growth. Analytical expressions for the particle velocity for different time dependences of the particle mass are obtained. It is established that simultaneous consideration of changes in the mass and charge leads to a significant change in the particle trajectory.

  13. Modeling MHD Stagnation Point Flow of Thixotropic Fluid with Non-uniform Heat Absorption/Generation

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Shah, Faisal; Khan, Muhammad Ijaz; Alsaedi, Ahmed; Yasmeen, Tabassum

    2017-12-01

    Here, magnetohydrodynamic (MHD) stagnation point flow over a nonlinearly stretching sheet is discussed. Variable sheet thickness is accounted for. In addition, a non-uniform heat generation/absorption concept is retained. A numerical treatment of the arising nonlinear system is presented, using a shooting procedure. Graphs and tables provide a physical description of the results. It is observed that skin friction is enhanced for increasing (Ha) and decays for rising values of (K1), (K2), and (n). Further, the temperature gradient increases for higher values of (Pr) and decreases for larger (Ha). The major findings of the present analysis are summarized.
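
    The shooting procedure mentioned can be illustrated on a classical boundary-layer problem; the sketch below solves the Blasius equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(infinity) = 1 by adjusting the unknown wall value f''(0). It illustrates the numerical idea only; the paper's governing equations are different.

        # Sketch: a shooting method for a boundary-layer ODE (Blasius equation),
        # illustrating the numerical procedure mentioned; the paper's own equations differ.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def blasius(eta, y):                      # y = [f, f', f'']
            return [y[1], y[2], -0.5 * y[0] * y[2]]

        def shoot(fpp0, eta_max=10.0):
            sol = solve_ivp(blasius, (0.0, eta_max), [0.0, 0.0, fpp0],
                            rtol=1e-8, atol=1e-10)
            return sol.y[1, -1] - 1.0             # want f'(eta_max) -> 1

        fpp0 = brentq(shoot, 0.1, 1.0)            # adjust the unknown wall curvature
        print(f"f''(0) = {fpp0:.5f}")             # classical value is about 0.33206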

  14. A uniform geometrical optics and an extended uniform geometrical theory of diffraction for evaluating high frequency EM fields near smooth caustics and composite shadow boundaries

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1994-01-01

    A uniform geometrical optics (UGO) and an extended uniform geometrical theory of diffraction (EUTD) are developed for evaluating high frequency electromagnetic (EM) fields within transition regions associated with a two and three dimensional smooth caustic of reflected rays and a composite shadow boundary formed by the caustic termination or the confluence of the caustic with the reflection shadow boundary (RSB). The UGO is a uniform version of the classic geometrical optics (GO). It retains the simple ray optical expressions of classic GO and employs a new set of uniform reflection coefficients. The UGO also includes a uniform version of the complex GO ray field that exists on the dark side of the smooth caustic. The EUTD is an extension of the classic uniform geometrical theory of diffraction (UTD) and accounts for the non-ray optical behavior of the UGO reflected field near caustics by using a two-variable transition function in the expressions for the edge diffraction coefficients. It also uniformly recovers the classic UTD behavior of the edge diffracted field outside the composite shadow boundary transition region. The approach employed for constructing the UGO/EUTD solution is based on a spatial domain physical optics (PO) radiation integral representation for the fields which is then reduced using uniform asymptotic procedures. The UGO/EUTD analysis is also employed to investigate the far-zone RCS problem of plane wave scattering from two and three dimensional polynomial defined surfaces, and uniform reflection, zero-curvature, and edge diffraction coefficients are derived. Numerical results for the scattering and diffraction from cubic and fourth order polynomial strips are also shown and the UGO/EUTD solution is validated by comparison to an independent moment method (MM) solution. The UGO/EUTD solution is also compared with the classic GO/UTD solution. The failure of the classic techniques near caustics and composite shadow boundaries is clearly demonstrated and it is shown that the UGO/EUTD results remain valid and uniformly reduce to the classic results away from the transition regions. Mathematical details on the asymptotic properties and efficient numerical evaluation of the canonical functions involved in the UGO/EUTD expressions are also provided.

  15. Tensile Film Clamps And Mounting Block For Viscoelastometers

    NASA Technical Reports Server (NTRS)

    Stoakley, Diane M.; St. Clair, Anne K.; Little, Bruce D.

    1989-01-01

    A set of clamps and a mounting block were developed for use in determining the tensile moduli and damping properties of films in a manually operated or automated commercial viscoelastometer. These clamps and the block provide uniformity of sample gripping and alignment in the instrument. Dependence on the operator and variability of the data are greatly reduced.

  16. Boll sampling protocols and their impact on measurements of cotton fiber quality

    USDA-ARS?s Scientific Manuscript database

    Within plant fiber variability has long contributed to product inconsistency in the cotton industry. Fiber quality uniformity is a primary plant breeding objective related to cotton commodity economic value. The physiological impact of source and sink relationships renders stress on the upper bran...

  17. Reliability analysis of structures under periodic proof tests in service

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.

  18. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  19. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
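
    A classifier of the kind described, trained on labeled variable sources and then applied to an unlabeled catalog, can be sketched as follows; the features and labels below are synthetic placeholders, not the 3XMM pipeline's actual inputs.

        # Sketch: Random Forest autoclassification of variable sources (synthetic
        # placeholder features and labels; not the 3XMM pipeline's actual inputs).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 2000
        flux = rng.lognormal(0.0, 1.0, n)         # hypothetical per-source features
        hardness = rng.uniform(-1.0, 1.0, n)
        amplitude = rng.lognormal(0.5, 0.8, n)
        timescale = rng.lognormal(2.0, 1.0, n)
        X = np.column_stack([flux, hardness, amplitude, timescale])
        # Synthetic labels derived from the features so the classifier has structure to learn.
        y = (hardness > 0).astype(int) + 2 * (amplitude > np.median(amplitude)).astype(int)

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        print("cross-validated accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
        # Predicted class probabilities can be used to flag low-confidence sources as
        # outliers for manual inspection, as done in the paper.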

  20. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
