Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
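The recursive construction can be illustrated with a minimal sketch for n = 2. The target law here is a hypothetical example chosen so that the conditional inverse CDFs are available in closed form (X1 ~ Exp(1), X2 | X1 ~ Uniform(0, X1)); the construction itself is the successive application of conditional quantile functions to independent uniforms:

```python
import math, random

def simulate_pair(u1, u2):
    """Map two independent Uniform(0,1) draws to (x1, x2) with a target
    joint law, via successive conditional inverse CDFs.
    Hypothetical target: X1 ~ Exp(1) and X2 | X1 ~ Uniform(0, X1)."""
    x1 = -math.log(1.0 - u1)   # f1: inverse CDF of Exp(1)
    x2 = u2 * x1               # f2: inverse CDF of Uniform(0, x1)
    return x1, x2

random.seed(0)
pairs = [simulate_pair(random.random(), random.random()) for _ in range(100_000)]
```

For higher n the same pattern continues: x_k is the conditional quantile function of X_k given the already-generated x_1, ..., x_{k-1}, applied to U_k.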
2015-06-01
…of uniform versus nonuniform pattern reconstruction, of the transform function used, and of the minimum number of randomly distributed measurements needed to…the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples, even though the pattern error manifests…Fig. 3: the nonuniform compressive-sensing reconstruction of the radiation…
A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2010-08-01
A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d=1, 2, 4. Uniform distributions of the endpoint inside a ball of radius l were found for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given that the total walk length equals 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers depending solely on d. These endpoint distributions have a simple geometrical interpretation. For a two-step planar walk with q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection onto the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4.
Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are considered in particular.
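A Monte Carlo sketch of the planar Pearson-Dirichlet walk described above: uniform step orientations, and Dirichlet step lengths obtained by normalizing independent Gamma(q) draws. For n = 3 and q = 1 in 2D the endpoint should be uniform in the unit disc, which gives E[R²] = 1/2 as a check; the sample sizes are illustrative:

```python
import math, random

def dirichlet_walk_endpoint(n=3, q=1.0, rng=random):
    """Endpoint of one planar Pearson-Dirichlet walk of n steps and total
    length 1: step lengths are Dirichlet(q, ..., q), obtained by
    normalizing Gamma(q) draws; orientations are uniform on [0, 2*pi)."""
    gammas = [rng.gammavariate(q, 1.0) for _ in range(n)]
    total = sum(gammas)
    x = y = 0.0
    for g in gammas:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += (g / total) * math.cos(theta)
        y += (g / total) * math.sin(theta)
    return x, y

random.seed(1)
r2 = [x * x + y * y for x, y in (dirichlet_walk_endpoint() for _ in range(100_000))]
mean_r2 = sum(r2) / len(r2)   # uniform endpoint on the unit disc gives E[R^2] = 1/2
```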
ERIC Educational Resources Information Center
Bhattacharyya, Pratip; Chakrabarti, Bikas K.
2008-01-01
We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.
2018-05-01
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
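A small simulation of the isolated-node count in the uniform model. Torus distance is used here (an assumption made for simplicity, to suppress boundary effects; the abstract's model is a square), so each node is isolated with probability exactly (1 − πr²)^(n−1):

```python
import math, random

def isolated_count(n, r, rng):
    """Count isolated nodes of a random geometric graph with n uniform
    points and linking radius r, using torus distance so that every
    node sees the same neighbourhood area."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    r2 = r * r
    def near(p, q):
        dx = abs(p[0] - q[0]); dx = min(dx, 1.0 - dx)
        dy = abs(p[1] - q[1]); dy = min(dy, 1.0 - dy)
        return dx * dx + dy * dy <= r2
    return sum(1 for i, p in enumerate(pts)
               if not any(near(p, q) for j, q in enumerate(pts) if j != i))

rng = random.Random(42)
n, r, trials = 300, 0.05, 100
mean_iso = sum(isolated_count(n, r, rng) for _ in range(trials)) / trials
# on the torus each node is isolated with probability (1 - pi r^2)^(n-1)
expected = n * (1.0 - math.pi * r * r) ** (n - 1)
```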
Kinetic market models with single commodity having price fluctuations
NASA Astrophysics Data System (ADS)
Chatterjee, A.; Chakrabarti, B. K.
2006-12-01
We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as that of the money distribution for a market with agents having random saving propensity.
Emergence of an optimal search strategy from a simple random walk
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-01-01
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over ℝ^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
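A Monte Carlo sketch of the p = 3 random flight with exponentially distributed step lengths and a fixed number of steps (one of the cases treated in closed form above; the saddlepoint machinery itself is not reproduced here). Since directions are isotropic and independent of the lengths, E[D²] = n·E[ℓ²] = 2n for Exp(1) lengths, which gives a simple check:

```python
import math, random

def flight_distance(n_steps, rng):
    """Distance to the origin after a random flight in R^3 with
    independent uniform directions and Exp(1) step lengths."""
    x = y = z = 0.0
    for _ in range(n_steps):
        cos_t = rng.uniform(-1.0, 1.0)            # uniform direction on the sphere
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        ell = rng.expovariate(1.0)                # exponential step length
        x += ell * sin_t * math.cos(phi)
        y += ell * sin_t * math.sin(phi)
        z += ell * cos_t
    return math.sqrt(x * x + y * y + z * z)

rng = random.Random(7)
mean_sq = sum(flight_distance(4, rng) ** 2 for _ in range(100_000)) / 100_000
```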
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates are: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
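The BASIC source is not reproduced in the record; the following Python sketch shows the kinds of uniform-to-variate transforms such a program plausibly uses (inverse CDF for exponential and triangular, Box-Muller for normal, Bernoulli counting for binomial, Knuth's product method for Poisson; the Pascal variate is omitted):

```python
import math, random

def exponential(u, rate=1.0):
    """Inverse CDF of Exp(rate): -ln(1 - u) / rate."""
    return -math.log(1.0 - u) / rate

def triangular(u, a=0.0, c=0.5, b=1.0):
    """Inverse CDF of the triangular distribution on [a, b] with mode c."""
    if u < (c - a) / (b - a):
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1.0 - u) * (b - a) * (b - c))

def normal_pair(u1, u2):
    """Box-Muller transform: two standard normals from two uniforms."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

def binomial(n, p, rng):
    """Direct method: count of n Bernoulli(p) successes."""
    return sum(1 for _ in range(n) if rng.random() < p)

def poisson(lam, rng):
    """Knuth's product method: multiply uniforms until the product drops
    below exp(-lam); the number of full factors is the variate."""
    limit, prod, k = math.exp(-lam), 1.0, 0
    while True:
        prod *= rng.random()
        if prod < limit:
            return k
        k += 1

rng = random.Random(3)
exp_mean = sum(exponential(rng.random()) for _ in range(50_000)) / 50_000
poi_mean = sum(poisson(4.0, rng) for _ in range(50_000)) / 50_000
bin_mean = sum(binomial(10, 0.3, rng) for _ in range(20_000)) / 20_000
tri_mean = sum(triangular(rng.random()) for _ in range(50_000)) / 50_000
```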
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
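The normal-to-uniform step can be sketched as the probability integral transform Φ(Z), one way to read the abstract's "inverse transform sampling"; the Gaussian input below is a synthetic stand-in for the detrended, standardized neutron-count residuals:

```python
import math, random

def normal_to_uniform(zs):
    """Probability integral transform: if Z ~ N(0,1), then Phi(Z) is
    Uniform(0,1); Phi is the standard normal CDF, here via math.erf."""
    return [0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) for z in zs]

rng = random.Random(5)
# stand-in for the detrended, standardized neutron-count residuals
zs = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
us = normal_to_uniform(zs)
```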
Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model
NASA Astrophysics Data System (ADS)
Margarint, Vlad
2018-06-01
We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ ℤ^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We update the previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
NASA Technical Reports Server (NTRS)
Kaljurand, M.; Valentin, J. R.; Shao, M.
1996-01-01
Two alternative input sequences are commonly employed in correlation chromatography (CC): sequences derived according to the feedback shift register algorithm (i.e., pseudo-random binary sequences (PRBS)) and uniform random binary sequences (URBS). These two sequences are compared. By applying the "cleaning" data processing technique to the correlograms that result from these sequences, we show that when the PRBS is used the S/N of the correlogram is much higher than that obtained with URBS.
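A minimal sketch of a PRBS generator as a Fibonacci linear feedback shift register, here with the (illustrative) primitive polynomial x⁴ + x³ + 1; a URBS, by contrast, is just a stream of independent fair coin flips:

```python
def lfsr_prbs(taps, nbits, seed=1):
    """Pseudo-random binary sequence from a Fibonacci linear feedback
    shift register. taps are polynomial exponents (e.g. (4, 3) for
    x^4 + x^3 + 1); one full period of 2**nbits - 1 bits is returned."""
    state = seed
    out = []
    for _ in range((1 << nbits) - 1):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# a primitive feedback polynomial gives the maximal period 2**4 - 1 = 15
seq = lfsr_prbs(taps=(4, 3), nbits=4)
```

A maximal-length sequence is balanced to within one bit (eight ones, seven zeros per period) and every nonzero 4-bit window occurs exactly once per period, which is what makes its autocorrelation nearly ideal for CC.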
Averaging in SU(2) open quantum random walk
NASA Astrophysics Data System (ADS)
Ampadu, Clement
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
NASA Astrophysics Data System (ADS)
Zechner, A.; Stock, M.; Kellner, D.; Ziegler, I.; Keuschnigg, P.; Huber, P.; Mayer, U.; Sedlmayer, F.; Deutschmann, H.; Steininger, P.
2016-11-01
Image guidance during highly conformal radiotherapy requires accurate geometric calibration of the moving components of the imager. Due to limited manufacturing accuracy and gravity-induced flex, an x-ray imager’s deviation from the nominal geometrical definition has to be corrected for. For this purpose a ball bearing phantom applicable for nine degrees of freedom (9-DOF) calibration of a novel cone-beam computed tomography (CBCT) scanner was designed and validated. In order to ensure accurate automated marker detection, as many uniformly distributed markers as possible should be used with a minimum projected inter-marker distance of 10 mm. Three different marker distributions on the phantom cylinder surface were simulated. First, a fixed number of markers are selected and their coordinates are randomly generated. Second, the quasi-random method is represented by setting a constraint on the marker distances in the projections. The third approach generates the ball coordinates helically based on the Golden ratio, ϕ. Projection images of the phantom incorporating the CBCT scanner’s geometry were simulated and analysed with respect to uniform distribution and intra-marker distance. Based on the evaluations a phantom prototype was manufactured and validated by a series of flexmap calibration measurements and analyses. The simulation with randomly distributed markers as well as the quasi-random approach showed an insufficient uniformity of the distribution over the detector area. The best compromise between uniform distribution and a high packing fraction of balls is provided by the Golden section approach. A prototype was manufactured accordingly. The phantom was validated for 9-DOF geometric calibrations of the CBCT scanner with independently moveable source and detector arms. A novel flexmap calibration phantom intended for 9-DOF was developed. The ball bearing distribution based on the Golden section was found to be highly advantageous. 
The phantom showed satisfying results for calibrations of the CBCT scanner and provides the basis for further flexmap correction and reconstruction developments.
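The golden-section placement can be sketched as a helical pattern on a cylinder in which the azimuth advances by the golden angle 2π(1 − 1/ϕ) per marker; the radius, height, and marker count below are illustrative assumptions, not the phantom's actual dimensions:

```python
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0                  # golden ratio
GOLDEN_ANGLE = 2.0 * math.pi * (1.0 - 1.0 / PHI)    # about 137.5 degrees

def helical_markers(n, radius, height):
    """Place n ball-bearing markers helically on a cylinder surface,
    advancing the azimuth by the golden angle at each uniform z step."""
    pts = []
    for i in range(n):
        theta = (i * GOLDEN_ANGLE) % (2.0 * math.pi)
        z = height * (i + 0.5) / n
        pts.append((radius * math.cos(theta), radius * math.sin(theta), z))
    return pts

markers = helical_markers(60, radius=40.0, height=200.0)
```

Because the golden angle is the "most irrational" rotation, consecutive markers never cluster azimuthally, which is the property behind the uniform projected distribution reported above.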
Explicit equilibria in a kinetic model of gambling
NASA Astrophysics Data System (ADS)
Bassetti, F.; Toscani, G.
2010-06-01
We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is up for gambling and is randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum that is shared between the agents. Among others, the exponential distribution appears as the steady state for a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
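A minimal agent-based sketch of the pure gambling process: two agents pool their wealth and split the pool by a Uniform(0,1) fraction. With this rule the empirical steady state should approach the exponential distribution noted above, with mean and variance both equal to the average wealth:

```python
import random

def gamble(wealth, steps, rng):
    """Pure gambling: repeatedly pick two agents, pool their wealth and
    split the pool by a Uniform(0,1) fraction; total wealth is conserved."""
    n = len(wealth)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        pool = wealth[i] + wealth[j]
        eps = rng.random()
        wealth[i], wealth[j] = eps * pool, (1.0 - eps) * pool
    return wealth

rng = random.Random(11)
w = gamble([1.0] * 2000, 200_000, rng)
```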
Some limit theorems for ratios of order statistics from uniform random variables.
Xu, Shou-Fang; Miao, Yu
2017-01-01
In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
NASA Astrophysics Data System (ADS)
Iwakoshi, Takehisa; Hirota, Osamu
2014-10-01
This study tests an interpretation in quantum key distribution (QKD) according to which the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both key uniformity in the context of universal composability and an operational meaning of the failure probability of the key extraction. However, this proposal has not been verified concretely for many years, while H. P. Yuen and O. Hirota have thrown doubt on this interpretation since 2009. To ascertain this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory after a quantum measurement is made, and then compared it with the failure probability to determine whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why the trace distance is not suitable to guarantee the security of QKD from the viewpoint of quantum binary decision theory.
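For a classical (post-measurement) check of the kind described, the statistical distance is the total variation distance between the empirical output distribution and the uniform one; the sketch below applies it to a software RNG as a stand-in for the physical generator:

```python
import random
from collections import Counter

def tv_distance(samples, n_outcomes):
    """Total variation distance between the empirical distribution of
    samples over {0, ..., n_outcomes - 1} and the uniform distribution."""
    counts = Counter(samples)
    n = len(samples)
    return 0.5 * sum(abs(counts.get(k, 0) / n - 1.0 / n_outcomes)
                     for k in range(n_outcomes))

rng = random.Random(9)
bytes_ = [rng.randrange(256) for _ in range(100_000)]
d = tv_distance(bytes_, 256)
```

Note that even an ideal uniform source gives a small nonzero empirical distance at finite sample size (of order sqrt(n_outcomes / n)), which is one reason the comparison against a failure-probability threshold is delicate.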
Response of moderately thick laminated cross-ply composite shells subjected to random excitation
NASA Technical Reports Server (NTRS)
Elishakoff, Isaak; Cederbaum, Gabriel; Librescu, Liviu
1989-01-01
This study deals with the dynamic response of transverse shear deformable laminated shells subjected to random excitation. The analysis encompasses the following problems: (1) the dynamic response of circular cylindrical shells of finite length excited by an axisymmetric uniform ring loading, stationary in time, and (2) the response of spherical and cylindrical panels subjected to stationary random loadings with uniform spatial distribution. The associated equations governing the structural theory of shells are derived upon discarding the classical Love-Kirchhoff (L-K) assumptions. In this sense, the theory is formulated in the framework of the first-order transverse shear deformation theory (FSDT).
The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.
de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J
2017-05-16
Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces. In particular, we show how the surface is affected by a non-uniform distribution of phases.
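Fourier synthesis of a random Gaussian profile can be sketched in one dimension: a power-law amplitude spectrum fixes the Hurst exponent, while the phases are drawn uniformly (a non-uniform phase law would be substituted in the phase draw). The direct O(n²) summation is used only to avoid dependencies:

```python
import math, random

def gaussian_profile(n, hurst, rng):
    """1-D random Gaussian profile by Fourier synthesis: mode k gets an
    amplitude ~ k**(-(2*hurst + 1)/2) times a random Gaussian modulus,
    and an independent phase uniform on [0, 2*pi)."""
    modes = []
    for k in range(1, n // 2):
        amp = k ** (-(2.0 * hurst + 1.0) / 2.0) * abs(rng.gauss(0.0, 1.0))
        phase = rng.uniform(0.0, 2.0 * math.pi)
        modes.append((k, amp, phase))
    # direct inverse transform, O(n^2), to stay dependency-free
    return [sum(a * math.cos(2.0 * math.pi * k * x / n + p)
                for k, a, p in modes) for x in range(n)]

rng = random.Random(13)
h = gaussian_profile(256, 0.5, rng)
```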
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
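The record does not include the algorithm itself; a standard construction with the same interface is sketched below (Box-Muller for two independent standard normals, then a 2×2 Cholesky-style mix to impose the correlation coefficient):

```python
import math, random

def bivariate_normal(mu1, mu2, s1, s2, rho, rng):
    """One pair from a bivariate normal: Box-Muller turns two uniforms
    into independent standard normals z1, z2, then a Cholesky-style
    2x2 mix imposes the correlation coefficient rho."""
    u1, u2 = 1.0 - rng.random(), rng.random()   # u1 in (0, 1] for the log
    r = math.sqrt(-2.0 * math.log(u1))
    z1 = r * math.cos(2.0 * math.pi * u2)
    z2 = r * math.sin(2.0 * math.pi * u2)
    x = mu1 + s1 * z1
    y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y

rng = random.Random(21)
pairs = [bivariate_normal(1.0, -2.0, 2.0, 0.5, 0.8, rng) for _ in range(100_000)]
```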
Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks
2006-09-01
time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until…expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical…random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by
Antipersistent dynamics in kinetic models of wealth exchange
NASA Astrophysics Data System (ADS)
Goswami, Sanchari; Chatterjee, Arnab; Sen, Parongama
2011-11-01
We investigate the detailed dynamics of gains and losses made by agents in some kinetic models of wealth exchange. An earlier work suggested that a walk in an abstract gain-loss space can be conceived for the agents. For models in which agents do not save, or save with uniform saving propensity, the walk has diffusive behavior. For the case in which the saving propensity λ is distributed randomly (0≤λ<1), the resultant walk showed a ballistic nature (except at a particular value of λ*≈0.47). Here we consider several other features of the walk with random λ. While some macroscopic properties of this walk are comparable to a biased random walk, at microscopic level, there are gross differences. The difference turns out to be due to an antipersistent tendency toward making a gain (loss) immediately after making a loss (gain). This correlation is in fact present in kinetic models without saving or with uniform saving as well, such that the corresponding walks are not identical to ordinary random walks. In the distributed saving case, antipersistence occurs with a simultaneous overall bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of a RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
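The broadcast process is easy to simulate; the sketch below builds an RGG, finds the component of the initially informed node by depth-first search, and runs push rounds until that component is informed (parameters are illustrative, with r well above the connectivity threshold):

```python
import random

def build_rgg(pts, r):
    """Adjacency lists of a geometric graph on the given points."""
    n, r2 = len(pts), r * r
    return [[j for j in range(n) if j != i
             and (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2 <= r2]
            for i in range(n)]

def push_broadcast(nbrs, start, rng):
    """Each round, every informed node picks one neighbour uniformly at
    random and informs it; stop once the whole component of `start`
    (found by depth-first search) is informed."""
    comp, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in nbrs[v]:
            if w not in comp:
                comp.add(w)
                stack.append(w)
    informed, rounds = {start}, 0
    while informed != comp:
        informed |= {rng.choice(nbrs[i]) for i in informed if nbrs[i]}
        rounds += 1
    return rounds, len(informed)

rng = random.Random(4)
pts = [(rng.random(), rng.random()) for _ in range(400)]
rounds, reached = push_broadcast(build_rgg(pts, 0.15), 0, rng)
```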
Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1991-01-01
The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three-body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.
Measuring Symmetry, Asymmetry and Randomness in Neural Network Connectivity
Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni
2014-01-01
Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity. PMID:25006663
Quasirandom geometric networks from low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2017-08-01
We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that make the process slower. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
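A minimal construction of a two-dimensional quasirandom geometric network from the Halton sequence (bases 2 and 3), with edges between vertices closer than a connection radius; the radius and network size below are illustrative:

```python
def halton(i, base):
    """i-th element of the van der Corput sequence in the given base
    (the Halton sequence pairs such sequences in coprime bases)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def quasirandom_graph(n, radius):
    """Quasirandom geometric network: 2-D Halton points (bases 2 and 3),
    edges between pairs closer than the connection radius."""
    pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
    r2 = radius * radius
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2 <= r2]
    return pts, edges

pts, edges = quasirandom_graph(200, 0.1)
```

Unlike a pseudo-random point set, the construction is deterministic, so the resulting network is reproducible without a seed.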
The current impact flux on Mars and its seasonal variation
NASA Astrophysics Data System (ADS)
JeongAhn, Youngmin; Malhotra, Renu
2015-12-01
We calculate the present-day impact flux on Mars and its variation over the martian year, using the current data on the orbital distribution of known Mars-crossing minor planets. We adapt the Öpik-Wetherill formulation for calculating collision probabilities, paying careful attention to the non-uniform distribution of the perihelion longitude and the argument of perihelion owing to secular planetary perturbations. We find that, at the current epoch, the Mars crossers have an axial distribution of the argument of perihelion, and the mean direction of their eccentricity vectors is nearly aligned with Mars' eccentricity vector. These previously neglected angular non-uniformities have the effect of depressing the mean annual impact flux by a factor of about 2 compared to the estimate based on a uniform random distribution of the angular elements of Mars-crossers; the amplitude of the seasonal variation of the impact flux is likewise depressed by a factor of about 4-5. We estimate that the flux of large impactors (of absolute magnitude H < 16) within ±30° of Mars' aphelion is about three times larger than when the planet is near perihelion. Extrapolation of our results to a model population of meter-size Mars-crossers shows that if these small impactors have a uniform distribution of their angular elements, then their aphelion-to-perihelion impact flux ratio would be 11-15, but if they track the orbital distribution of the large impactors, including their non-uniform angular elements, then this ratio would be about 3. Comparison of our results with the current dataset of fresh impact craters on Mars (detected with Mars-orbiting spacecraft) appears to rule out the uniform distribution of angular elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, R.; Skupsky, S.
1990-08-01
The uniformity of focused laser beams that have been modified with randomly phased distributed phase plates (C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Kato and Mima, Appl. Phys. B 29, 186 (1982); Kato et al., Phys. Rev. Lett. 53, 1057 (1984); LLE Rev. 33, 1 (1987)) can be improved further by constructing patterns of phase elements which minimize phase correlations over small separations. Long-wavelength nonuniformities in the intensity distribution, which are relatively difficult to overcome in the target by thermal smoothing and in the laser by, e.g., spectral dispersion (Skupsky et al., J. Appl. Phys. 66, 3456 (1989); LLE Rev. 36, 158 (1989); 37, 29 (1989); 37, 40 (1989)), result largely from short-range phase correlations between phase plate elements. To reduce the long-wavelength structure, we have constructed phase patterns with smaller short-range correlations than would occur randomly. Calculations show that long-wavelength nonuniformities in single-beam intensity patterns can be reduced with these masks when the intrinsic phase error of the beam falls below certain limits. We show the effect of this improvement on uniformity for spherical irradiation by a multibeam system.
Adapting radiotherapy to hypoxic tumours
NASA Astrophysics Data System (ADS)
Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag
2006-10-01
In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. 
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
Directed Random Markets: Connectivity Determines Money
NASA Astrophysics Data System (ADS)
Martínez-Martínez, Ismael; López-Ruiz, Ricardo
2013-12-01
The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and we find that the stationary probability distributions are robust and are not affected by the topology of the underlying network. We introduce a new family of interactions: random but directed ones. In this case, the topology is found to be determining, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear.
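A minimal sketch of the undirected, closed-economy exchange rule described above (no savings and no network topology; agent count, step count and initial wealth are illustrative). Repeated random pairwise exchanges conserve the total money while relaxing the wealth distribution towards the Boltzmann-Gibbs exponential:

```python
import random

def exchange_model(n_agents=500, n_steps=200_000, seed=1):
    """Closed economy: at each step two random agents pool their money
    and split the pool at a uniformly random fraction."""
    random.seed(seed)
    money = [1.0] * n_agents          # everyone starts with one unit
    for _ in range(n_steps):
        i = random.randrange(n_agents)
        j = random.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        share = random.random()        # random, undirected split
        money[i], money[j] = share * pot, (1.0 - share) * pot
    return money

money = exchange_model()
```

Restricting the partner choice to neighbours on a graph, or making the exchange directed, gives the network variants studied in the paper.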
A distributed scheduling algorithm for heterogeneous real-time systems
NASA Technical Reports Server (NTRS)
Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi
1991-01-01
Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.
Spatial pattern of Baccharis platypoda shrub as determined by sex and life stages
NASA Astrophysics Data System (ADS)
Fonseca, Darliana da Costa; de Oliveira, Marcio Leles Romarco; Pereira, Israel Marinho; Gonzaga, Anne Priscila Dias; de Moura, Cristiane Coelho; Machado, Evandro Luiz Mendonça
2017-11-01
Spatial patterns of dioecious species can be determined by their nutritional requirements and intraspecific competition, apart from being a response to environmental heterogeneity. The aim of the study was to evaluate the spatial pattern of populations of a dioecious shrub with respect to the sex and reproductive stage of individuals. Sampling was carried out in three areas located in the meridional portion of Serra do Espinhaço, wherein individuals of the studied species were mapped. The spatial pattern was determined through O-ring analysis and Ripley's K-function, and the distribution of individuals' frequencies was verified through the χ² test. Populations in two areas showed an aggregate spatial pattern tending towards random or uniform according to the observed scale. Male and female adults presented an aggregate pattern at smaller scales, while random and uniform patterns were verified above 20 m for individuals of both sexes in areas A2 and A3. Young individuals presented an aggregate pattern in all areas and spatial independence in relation to adult individuals, especially female plants. The interactions between individuals of both sexes presented spatial independence with respect to spatial distribution. Baccharis platypoda showed characteristics in accordance with the spatial distribution of savannic and dioecious species, whereas the population was aggregated tending towards random at greater spatial scales. Young individuals showed an aggregated pattern at different scales compared to adults, without positive association between them. Female and male adult individuals presented similar characteristics, confirming that adult individuals at greater scales are randomly distributed despite their distinct preferences for environments with moisture variation.
Discrete disorder models for many-body localization
NASA Astrophysics Data System (ADS)
Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub
2018-04-01
Using the exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain, comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long-time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Only for the binary distribution are significant deviations observed.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, the switching being dictated by a non-stationary Markov chain on the network graph.
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
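When both discrete distributions are known, the dissimilarity probability itself (the population parameter that the paper estimates from samples) has a closed form: the chance that a draw from p is absent from n independent draws from q is the sum over outcomes of p(x)·(1 − q(x))^n. A small sketch with illustrative toy distributions:

```python
def dissimilarity(p, q, n):
    """Probability that one draw from p is absent from n i.i.d. draws from q.
    p and q are dicts mapping outcomes to probabilities."""
    return sum(px * (1.0 - q.get(x, 0.0)) ** n for x, px in p.items())

p = {"a": 0.5, "b": 0.5}
q = {"a": 0.9, "c": 0.1}
d = dissimilarity(p, q, 2)   # 0.5 * 0.1**2 + 0.5 * 1.0**2 = 0.505
```

The paper's contribution is an unbiased estimator of this quantity when p and q are unknown and only samples from the two urns are available.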
Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data
NASA Astrophysics Data System (ADS)
Mobli, Mehdi
2015-07-01
The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (the Nyquist rate). Achieving high resolution requires the acquisition of long data records sampled at the Nyquist rate. This is typically prohibitive due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out with the same number of random samples selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve the random-seed-dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods.
It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
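A minimal sketch of the jittered idea on a flat envelope (the actual method draws from an NMR signal-envelope PDF, which is not reproduced here; grid size and sample count are illustrative): instead of choosing all indices at random, the schedule draws exactly one index from each equal-width stratum, so every seed covers the grid evenly:

```python
import random

def jittered_schedule(n_total, n_samples, seed):
    """Pick n_samples of n_total grid indices, one uniformly at random
    from each of n_samples equal strata (n_total assumed divisible)."""
    random.seed(seed)
    w = n_total // n_samples                      # stratum width
    return [k * w + random.randrange(w) for k in range(n_samples)]

sched = jittered_schedule(1024, 64, seed=7)       # 64 of 1024 increments
```

Because every stratum contributes exactly one point regardless of the seed, the sampled set's coverage of the grid, and hence the reconstruction, varies far less between seeds than a fully random draw from the same PDF.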
Villegas, Fernanda; Tilly, Nina; Bäckström, Gloria; Ahnesjö, Anders
2014-09-21
Analysing the pattern of energy depositions may help elucidate differences in the severity of radiation-induced DNA strand breakage for different radiation qualities. It is often claimed that energy deposition (ED) sites from photon radiation form a uniform random pattern, but there is indication of differences in RBE values among different photon sources used in brachytherapy. The aim of this work is to analyse the spatial patterns of EDs from 103Pd, 125I, 192Ir, 137Cs sources commonly used in brachytherapy and a 60Co source as a reference radiation. The results suggest that there is both a non-uniform and a uniform random component to the frequency distribution of distances to the nearest neighbour ED. The closest neighbouring EDs show high spatial correlation for all investigated radiation qualities, whilst the uniform random component dominates for neighbours with longer distances for the three higher mean photon energy sources (192Ir, 137Cs, and 60Co). The two lower energy photon emitters (103Pd and 125I) present a very small uniform random component. The ratio of frequencies of clusters with respect to 60Co differs up to 15% for the lower energy sources and less than 2% for the higher energy sources when the maximum distance between each pair of EDs is 2 nm. At distances relevant to DNA damage, cluster patterns can be differentiated between the lower and higher energy sources. This may be part of the explanation to the reported difference in RBE values with initial DSB yields as an endpoint for these brachytherapy sources.
NASA Astrophysics Data System (ADS)
Popov, S. M.; Butov, O. V.; Chamorovski, Y. K.; Isaev, V. A.; Mégret, P.; Korobko, D. A.; Zolotovskii, I. O.; Fotiadi, A. A.
2018-06-01
We report on random lasing observed with 100-m-long fiber comprising an array of weak FBGs inscribed in the fiber core and uniformly distributed over the fiber length. Extended fluctuation-free oscilloscope traces highlight power dynamics typical for lasing. An additional piece of Er-doped fiber included into the laser cavity enables a stable laser generation with a linewidth narrower than 10 kHz.
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for Kernel function implementation. By varying bias...cost function/constraint variables are generated based on inverse transform on CDF. In Fig. 5, F-1(u) for uniformly distributed random number u [0, 1...extracts random samples of x varying with CDF of F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
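The excerpt above refers to inverse-transform sampling: a uniform variate u on [0, 1) is mapped through F⁻¹(u) to a variate with CDF F. A minimal software sketch (the report describes a hardware successive-approximation circuit, not reproduced here), with the exponential distribution as an illustrative stand-in for the cost-function variables:

```python
import math
import random

def sample_exponential(lam, u):
    """Inverse transform: F(x) = 1 - exp(-lam*x)  =>  F^-1(u) = -ln(1-u)/lam."""
    return -math.log(1.0 - u) / lam

random.seed(3)
samples = [sample_exponential(2.0, random.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should approach 1/lam = 0.5
```

Any target distribution with an invertible CDF can be sampled this way from a single uniform generator, which is why the circuit only needs to evaluate F⁻¹.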
Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field
NASA Astrophysics Data System (ADS)
Borelli, M. E. S.; Carneiro, C. E. I.
1996-02-01
We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δi, with probability distribution P(Δi) = pδ(Δi − Δ) + (1 − p)δ(Δi). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.
Effect of particle size distribution on permeability in the randomly packed porous media
NASA Astrophysics Data System (ADS)
Markicevic, Bojan
2017-11-01
Whether porous-medium heterogeneity increases or decreases permeability is still an open question, with both outcomes reported in the literature. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum-to-minimum particle size ratio ranging from three to eight across distributions. In all six cases, the average particle size is kept the same. For all media generated, the stochastic homogeneity is checked from the distribution of the three coordinates of the particle centers, where uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, for which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in this domain, and after checking the pressure for axial linearity, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher than for the other media, which can be explained by the volumetric contribution of larger particles and the larger passages available for fluid flow.
Kanerva's sparse distributed memory with multiple hamming thresholds
NASA Technical Reports Server (NTRS)
Pohja, Seppo; Kaski, Kimmo
1992-01-01
If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, which SDM implementations can exploit.
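A minimal sketch of the selection step with per-location thresholds (bit width, location count and threshold values are illustrative; how the thresholds are tuned to the input statistics is the subject of the paper and is not reproduced here):

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

def select_locations(addresses, thresholds, probe):
    """Activate hard location i when hamming(probe, addresses[i]) <= thresholds[i].
    Each location carries its own threshold, unlike classical SDM's single one."""
    return [i for i, (addr, t) in enumerate(zip(addresses, thresholds))
            if hamming(probe, addr) <= t]

random.seed(5)
n_bits, n_locs = 64, 200
addresses = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_locs)]
thresholds = [random.randint(24, 32) for _ in range(n_locs)]  # per-location thresholds
probe = [random.randint(0, 1) for _ in range(n_bits)]
active = select_locations(addresses, thresholds, probe)
```

Each location's test is independent of the others, which is what makes the fully parallel evaluation mentioned in the abstract straightforward.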
Effect of Rayleigh-scattering distributed feedback on multiwavelength Raman fiber laser generation.
El-Taher, A E; Harper, P; Babin, S A; Churkin, D V; Podivilov, E V; Ania-Castanon, J D; Turitsyn, S K
2011-01-15
We experimentally demonstrate a Raman fiber laser based on multiple point-action fiber Bragg grating reflectors and distributed feedback via Rayleigh scattering in an ~22-km-long optical fiber. Twenty-two lasing lines with spacing of ~100 GHz (close to the International Telecommunication Union grid) in the C band are generated at the watt level. In contrast to a normal cavity with competition between laser lines, the random distributed feedback cavity exhibits highly stable multiwavelength generation with a power-equalized uniform distribution, which is almost independent of power.
Robustness of optimal random searches in fragmented environments
NASA Astrophysics Data System (ADS)
Wosniack, M. E.; Santos, M. C.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.
2015-05-01
The random search problem is a challenging and interdisciplinary topic of research in statistical physics. Realistic searches usually take place in nonuniform heterogeneous distributions of targets, e.g., patchy environments and fragmented habitats in ecological systems. Here we present a comprehensive numerical study of search efficiency in arbitrarily fragmented landscapes with unlimited visits to targets that can only be found within patches. We assume a random walker selecting uniformly distributed turning angles and step lengths from an inverse power-law tailed distribution with exponent μ. Our main finding is that for a large class of fragmented environments the optimal strategy corresponds approximately to the same value μopt ≈ 2. Moreover, this exponent is indistinguishable from the well-known exact optimal value μopt = 2 for the low-density limit of homogeneously distributed revisitable targets. Surprisingly, the best search strategies do not depend (or depend only weakly) on the specific details of the fragmentation. Finally, we discuss the mechanisms behind this observed robustness and comment on the relevance of our results both to random search theory in general and to the foraging problem in the biological context.
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ˜80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is
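A Monte Carlo sketch of this kind of estimate, using the sample range of n uniform draws as the measured interval (the authors' measure of the interval may differ): for this choice the classical result is that n uniform samples capture (n − 1)/(n + 1) of the true interval on average, giving a correction factor of (n + 1)/(n − 1).

```python
import random

def mean_captured_fraction(n, trials=20_000, seed=11):
    """Average fraction of the unit interval spanned by the range
    (max - min) of n uniform samples; theory gives (n - 1)/(n + 1)."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        total += max(xs) - min(xs)
    return total / trials

f5 = mean_captured_fraction(5)     # theory: 4/6
f10 = mean_captured_fraction(10)   # theory: 9/11
```

The same loop, run with whatever interval measure and underlying age distribution are appropriate, yields the Monte Carlo correction factor the abstract alludes to.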
Statistical Modeling of Robotic Random Walks on Different Terrain
NASA Astrophysics Data System (ADS)
Naylor, Austin; Kinnaman, Laura
Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
NASA Astrophysics Data System (ADS)
Atsumi, Yu; Nakao, Hiroya
2012-05-01
A system of phase oscillators with repulsive global coupling and periodic external forcing undergoing asynchronous rotation is considered. The synchronization rate of the system can exhibit persistent fluctuations depending on parameters and initial phase distributions, and the amplitude of the fluctuations scales with the system size for uniformly random initial phase distributions. Using the Watanabe-Strogatz transformation that reduces the original system to low-dimensional macroscopic equations, we show that the fluctuations are collective dynamics of the system corresponding to low-dimensional trajectories of the reduced equations. It is argued that the amplitude of the fluctuations is determined by the inhomogeneity of the initial phase distribution, resulting in system-size scaling for the random case.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
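The report's Fortran routines are not reproduced here; as an illustration of the stated pattern of deriving other distributions from the provided uniform and normal generators, a chi-square variate can be built as a sum of squared standard normals (degrees of freedom and sample count are illustrative):

```python
import random

def chi_square(df, rng):
    """Chi-square variate with df degrees of freedom, built as the sum
    of df squared standard normal variates from the normal generator."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

rng = random.Random(42)
samples = [chi_square(4, rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)   # E[chi-square with 4 df] = 4
```

The same composition idea covers several of the listed distributions, e.g. a gamma variate with integer shape as a sum of exponentials obtained from the uniform generator.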
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes
NASA Astrophysics Data System (ADS)
Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac
2012-11-01
In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Instead of the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Englander, Arnold C.
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
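The long-tailed perturbations described above are easy to sketch with inverse-CDF draws. The following is a minimal illustration (not the authors' code; the function names and default parameters are our own choices) of Cauchy- and Pareto-distributed hop perturbations of the kind MBH could use:

```python
import math
import random

def cauchy_step(scale=1.0):
    # Inverse-CDF draw from a Cauchy distribution; the heavy tails let a
    # basin-hopping search make occasional large jumps out of a local basin.
    return scale * math.tan(math.pi * (random.random() - 0.5))

def pareto_step(alpha=1.5, xm=1.0):
    # Inverse-CDF draw of a Pareto magnitude (shape alpha, scale xm), with a
    # random sign so the perturbation is symmetric about zero.
    magnitude = xm / (1.0 - random.random()) ** (1.0 / alpha)
    return magnitude if random.random() < 0.5 else -magnitude

def perturb(x, step=cauchy_step):
    # One MBH-style "hop": perturb each decision variable independently.
    return [xi + step() for xi in x]
```

Most perturbations stay small, but the occasional very large draw is what lets the search escape distant basins, the super-diffusive behavior discussed above.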
Underestimating extreme events in power-law behavior due to machine-dependent cutoffs
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2014-11-01
Power-law distributions are typical macroscopic features occurring in almost all complex systems observable in nature. As a result, researchers in quantitative analyses must often generate random synthetic variates obeying power-law distributions. The task is usually performed through standard methods that map uniform random variates into the desired probability space. Whereas all these algorithms are theoretically solid, in this paper we show that they are subject to severe machine-dependent limitations. As a result, two dramatic consequences arise: (i) the sampling in the tail of the distribution is not random but deterministic; (ii) the moments of the sample distribution, which are theoretically expected to diverge as functions of the sample sizes, converge instead to finite values. We provide quantitative indications for the range of distribution parameters that can be safely handled by standard libraries used in computational analyses. Whereas our findings indicate possible reinterpretations of numerical results obtained through flawed sampling methodologies, they also pave the way for the search for a concrete solution to this central issue shared by all quantitative sciences dealing with complexity.
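The machine-dependent cutoff is straightforward to exhibit for the standard inverse-transform sampler. Below is a minimal sketch (our own, assuming a 53-bit uniform generator such as Python's random.random) for a density p(x) proportional to x^(-gamma):

```python
import random

def power_law_variate(gamma, xmin=1.0):
    # Inverse-transform sampling for p(x) ~ x**(-gamma), x >= xmin, gamma > 1:
    # map a uniform variate u through the inverse CDF.
    u = random.random()
    return xmin * (1.0 - u) ** (-1.0 / (gamma - 1.0))

def tail_cutoff(gamma, xmin=1.0):
    # random.random() returns multiples of 2**-53, so 1 - u >= 2**-53 and the
    # sampler can never exceed this machine-dependent value; the theoretical
    # tail beyond it is simply never visited.
    return xmin * (2.0 ** 53) ** (1.0 / (gamma - 1.0))
```

Because 1 - u is bounded below by the generator's resolution, the extreme tail is populated by only a handful of representable values rather than a genuinely random sample, which is the effect the abstract describes.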
Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.
Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T
2010-03-10
Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases) each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the non-uniform turn-angle distribution of move step-lengths within a flight and the presence of two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths as indicated by simulations based on the BCRW model.
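A bimodal correlated random walk of the kind described can be sketched in a few lines. This toy version (our own construction with illustrative parameter values, not the fitted cell model) alternates between a directional mode with narrow turns and a re-orientation mode with wide turns, with exponentially distributed step lengths in both:

```python
import math
import random

def bcrw(n_steps, p_switch=0.1, turn_dir=0.3, turn_reo=2.0, mean_step=1.0):
    # Minimal bimodal correlated random walk: the walker alternates between a
    # "directional" mode (narrow Gaussian turn angles) and a "re-orientation"
    # mode (wide turn angles); step lengths are exponential in both modes.
    x = y = 0.0
    heading = random.uniform(-math.pi, math.pi)
    mode = "dir"
    path = [(x, y)]
    for _ in range(n_steps):
        if random.random() < p_switch:           # switch mode at random
            mode = "reo" if mode == "dir" else "dir"
        spread = turn_dir if mode == "dir" else turn_reo
        heading += random.gauss(0.0, spread)     # correlated turning
        step = random.expovariate(1.0 / mean_step)
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path
```

The long persistent runs of the directional mode are what push the mean-squared displacement of such a walk above simple diffusion at intermediate times.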
Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers
NASA Astrophysics Data System (ADS)
Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan
2018-04-01
The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi random (NU-QR) reordering of the phase encoding (k_y-k_z) lines was incorporated into 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of the k-space. Respiratory self-gating in combination with SPIRiT-reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan times (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile of a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times. Therefore, within the same scan time 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion artifact reduced 4D data sets with isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm³ in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed.
NU-QR sampling enables stable 4D-MRI with high temporal and spatial resolution within short scan times for the visualization of organ or tumor motion during free breathing. Further studies, e.g. applying the method to radiotherapy planning, are needed to investigate the clinical applicability and diagnostic value of the approach.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
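Under a conjugate gamma prior the Bayes machinery reduces to closed form. The sketch below (our own illustration, not Canavos' derivation; the prior parameters are placeholders) compares the posterior-mean estimator of the Poisson rate with the maximum-likelihood estimator by empirical mean-squared error:

```python
import math
import random

def poisson_draw(lam):
    # Knuth's product method (adequate for small lam).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def bayes_poisson_lambda(counts, a=1.0, b=1.0):
    # Posterior mean of the Poisson rate under a conjugate Gamma(a, b) prior
    # (shape a, rate b): the posterior is Gamma(a + sum(counts), b + n).
    return (a + sum(counts)) / (b + len(counts))

def mle_poisson_lambda(counts):
    # Maximum-likelihood estimator: the sample mean.
    return sum(counts) / len(counts)

def mse(estimator, true_lam=2.0, n=5, trials=2000):
    # Empirical mean-squared error of a rate estimator, mirroring the
    # Monte Carlo comparison described in the abstract.
    err = 0.0
    for _ in range(trials):
        sample = [poisson_draw(true_lam) for _ in range(n)]
        err += (estimator(sample) - true_lam) ** 2
    return err / trials
```

For small samples the Bayes estimator shrinks toward the prior mean a/b, which is what drives the appreciably smaller mean-squared errors reported above.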
Zipf's law in city size from a resource utilization model.
Ghosh, Asim; Chatterjee, Arnab; Chakrabarti, Anindya S; Chakrabarti, Bikas K
2014-10-01
We study a resource utilization scenario characterized by intrinsic fitness. To describe the growth and organization of different cities, we consider a model for resource utilization where many restaurants compete, as in a game, to attract customers using an iterative learning process. Results for the case of restaurants with uniform fitness are reported. When fitness is uniformly distributed, it gives rise to a Zipf law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of fitness. A variant of the model is also introduced where the fitness can be treated as an ability to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, while the distribution of the number of customers still follows the Zipf law, implying the robustness of the model. Our model serves as a paradigm for the emergence of Zipf law in city size distribution.
Distinguishability of generic quantum states
NASA Astrophysics Data System (ADS)
Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol
2016-06-01
Properties of random mixed states of dimension N distributed uniformly with respect to the Hilbert-Schmidt measure are investigated. We show that for large N, due to the concentration of measure, the trace distance between two random states tends to a fixed number D̃ = 1/4 + 1/π, which yields the Helstrom bound on their distinguishability. To arrive at this result, we apply free random calculus and derive the symmetrized Marchenko-Pastur distribution, which is shown to describe numerical data for the model of coupled quantum kicked tops. The asymptotic value for the root fidelity between two random states, √F = 3/4, can serve as a universal reference value for further theoretical and experimental studies. Analogous results for quantum relative entropy and the Chernoff quantity provide other bounds on the distinguishability of both states in a multiple measurement setup due to the quantum Sanov theorem. We study also mean entropy of coherence of random pure and mixed states and entanglement of a generic mixed state of a bipartite system.
An invariance property of generalized Pearson random walks in bounded geometries
NASA Astrophysics Data System (ADS)
Mazzolo, Alain
2009-03-01
Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, limited to Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence, the average length of the trajectories through the domain is independent of the random walk characteristics and depends only on the ratio of the domain's volume to its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and we give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form for the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
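The invariance property can be checked numerically for the simplest geometry. Assuming a sphere of radius R under isotropic uniform incidence (entry points uniform on the surface, entry directions cosine-weighted about the inward normal), the mean traversed chord should equal the Cauchy value 4V/S = 4R/3:

```python
import math
import random

def mean_chord_sphere(radius=1.0, trials=100_000):
    # Monte Carlo check of the Cauchy formula <l> = 4V/S for a sphere under
    # isotropic uniform incidence. For a sphere, a chord entering at angle
    # theta to the inward normal has length 2*R*cos(theta), and the
    # cosine-weighted incidence law gives cos(theta) = sqrt(u), u uniform.
    total = 0.0
    for _ in range(trials):
        total += 2.0 * radius * math.sqrt(random.random())
    return total / trials

# Cauchy's prediction: 4V/S = 4 * (4/3 * pi * R^3) / (4 * pi * R^2) = 4R/3.
```

The same 4V/S value is recovered for any convex body under this incidence law, which is the geometry-only character of the invariance discussed above.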
Sampling large random knots in a confined space
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
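Generating a 2D uniform random polygon and counting the crossings of its diagram takes only a naive O(n^2) segment-intersection scan. The sketch below is our own and checks the qualitative quadratic growth of the average crossing number, not the paper's precise constants:

```python
import random

def uniform_random_polygon(n):
    # n vertices drawn independently and uniformly from the unit square,
    # joined in order (and back to the start) to form a closed diagram.
    return [(random.random(), random.random()) for _ in range(n)]

def segments_cross(p, q, r, s):
    # Proper intersection test via orientation signs (degenerate collinear
    # configurations have probability zero for random vertices).
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(p, q, r), orient(p, q, s)
    d3, d4 = orient(r, s, p), orient(r, s, q)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def crossing_number(poly):
    # Count crossings between non-adjacent edges of the closed diagram;
    # for uniform random polygons this grows on the order of n^2.
    n = len(poly)
    edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # adjacent around the closure
            if segments_cross(*edges[i], *edges[j]):
                count += 1
    return count
```

Averaging crossing_number over many polygons for increasing n reproduces the O(n^2) trend cited above.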
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor becomes known to scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
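The lognormal limit is easy to probe by direct simulation. In the sketch below (our own; the seven (mean, half-width) pairs are illustrative placeholders, not astrophysical estimates), each factor is uniform around its mean, so log N is a sum of seven independent terms and hence approximately Gaussian by the CLT:

```python
import math
import random

# Seven illustrative (mean, half-width) pairs for the Drake factors --
# placeholder values for demonstration only:
FACTORS = [(3.5e11, 1.7e9), (0.5, 0.2), (2.0, 0.5), (0.3, 0.1),
           (0.2, 0.05), (0.1, 0.05), (1.0e4, 5.0e3)]

def drake_sample(factors, trials=50_000):
    # Draw each factor uniformly in (m - w, m + w) and multiply; collect
    # log N, whose histogram should be close to a Gaussian, making N itself
    # approximately lognormal.
    logs = []
    for _ in range(trials):
        n = 1.0
        for m, w in factors:
            n *= random.uniform(m - w, m + w)
        logs.append(math.log(n))
    return logs
```

By Jensen's inequality the lognormal median exp(E[log N]) falls below the ordinary Drake value given by the product of the means, while the lognormal mean remains of the same order, consistent with the numerical finding quoted above.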
Fast self contained exponential random deviate algorithm
NASA Astrophysics Data System (ADS)
Fernández, Julio F.
1997-03-01
An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
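The register mechanism can be sketched as follows. This is our own reconstruction in the spirit of the description above, not the published algorithm: N registers evolve by sum-conserving pairwise exchanges, and because the stationary configuration is uniform on the simplex of fixed total, each register's marginal approaches an exponential distribution (of mean total/N) for large N:

```python
import random

class RegisterExponential:
    # N registers with conserved total; each update pools two registers and
    # re-splits them at a uniform point, driving the configuration toward the
    # uniform distribution on the simplex (exponential marginals for large N).
    def __init__(self, n=1024, warmup=20_000):
        self.regs = [1.0] * n
        self.n = n
        for _ in range(warmup):
            self._exchange()

    def _exchange(self):
        i = random.randrange(self.n)
        j = random.randrange(self.n)
        if i == j:
            return
        s = self.regs[i] + self.regs[j]
        u = random.random()
        self.regs[i], self.regs[j] = u * s, (1.0 - u) * s

    def draw(self):
        # One exchange per draw, then report a randomly chosen register.
        self._exchange()
        return self.regs[random.randrange(self.n)]
```

Note that the uniform deviates here only drive the internal exchanges; after warm-up, the register contents themselves are the (approximately) exponential output stream.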
Extracting DNA words based on the sequence features: non-uniform distribution and integrity.
Li, Zhi; Cao, Hongyan; Cui, Yuehua; Zhang, Yanbo
2016-01-25
DNA sequence can be viewed as an unknown language with words as its functional units. Given that most sequence alignment algorithms such as the motif discovery algorithms depend on the quality of background information about sequences, it is necessary to develop an ab initio algorithm for extracting the "words" based only on the DNA sequences. We considered that non-uniform distribution and integrity were two important features of a word, based on which we developed an ab initio algorithm to extract "DNA words" that have potential functional meaning. A Kolmogorov-Smirnov test was used to test word positions for consistency with a uniform distribution along the DNA sequences, and integrity was judged by sequence and position alignment. Two random base sequences were adopted as negative controls, and an English book was used as a positive control to verify our algorithm. We applied our algorithm to the genomes of Saccharomyces cerevisiae and 10 strains of Escherichia coli to show the utility of the methods. The results provide strong evidence that the algorithm is a promising tool for ab initio building of a DNA dictionary. Our method provides a fast way for large scale screening of important DNA elements and offers potential insights into the understanding of a genome.
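The non-uniform-distribution criterion reduces to a one-sample Kolmogorov-Smirnov test of a word's occurrence positions against the uniform law. A minimal sketch (our own; the 1.36/sqrt(n) threshold is the usual asymptotic 5% critical value, not necessarily the paper's exact procedure):

```python
import math

def ks_uniform_statistic(positions, length):
    # One-sample Kolmogorov-Smirnov statistic against the uniform distribution
    # on [0, length]: the maximum deviation between the empirical CDF of the
    # occurrence positions and the uniform CDF.
    xs = sorted(p / length for p in positions)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d

def rejects_uniformity(positions, length, coeff=1.36):
    # Asymptotic 5% critical value ~ 1.36 / sqrt(n). A word whose positions
    # reject uniformity is a candidate functional "word" under the
    # non-uniform-distribution criterion described above.
    n = len(positions)
    return ks_uniform_statistic(positions, length) > coeff / math.sqrt(n)
```

A word clustered in one region of the sequence produces a large deviation and is flagged, while occurrences spread evenly along the genome are consistent with background.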
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines
2017-01-31
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
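The core random walk is simple to sketch. The toy implementation below (our own; it omits CHRR's crucial rounding preprocessing step) runs coordinate hit-and-run on a small polytope {x : Ax <= b}:

```python
import random

def chord_limits(x, k, A, b):
    # Feasible interval for moving x along coordinate k inside {x : A x <= b}.
    lo, hi = float("-inf"), float("inf")
    ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
    for i in range(len(A)):
        slack = b[i] - ax[i]
        coef = A[i][k]
        if coef > 0:
            hi = min(hi, x[k] + slack / coef)
        elif coef < 0:
            lo = max(lo, x[k] + slack / coef)
    return lo, hi

def coordinate_hit_and_run(x0, A, b, steps=1000):
    # Coordinate hit-and-run: pick a random coordinate direction, intersect it
    # with the polytope, and jump to a uniform point on the resulting chord.
    # The chain converges to the uniform distribution over the polytope.
    x = list(x0)
    for _ in range(steps):
        k = random.randrange(len(x))
        lo, hi = chord_limits(x, k, A, b)
        x[k] = random.uniform(lo, hi)
    return x
```

On a strongly anisotropic (elongated) set the chords in most directions are short and mixing is slow, which is exactly why CHRR rounds the flux set before walking.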
NASA Astrophysics Data System (ADS)
Tomita, Toshihiro; Miyaji, Kousuke
2015-04-01
The dependence of spatial and statistical distribution of random telegraph noise (RTN) in a 30 nm NAND flash memory on channel doping concentration NA and cell program state Vth is comprehensively investigated using three-dimensional Monte Carlo device simulation considering random dopant fluctuation (RDF). It is found that single trap RTN amplitude ΔVth is larger at the center of the channel region in the NAND flash memory, which is closer to the jellium (uniform) doping results since NA is relatively low to suppress junction leakage current. In addition, ΔVth peak at the center of the channel decreases in the higher Vth state due to the current concentration at the shallow trench isolation (STI) edges induced by the high vertical electrical field through the fringing capacitance between the channel and control gate. In such cases, ΔVth distribution slope λ cannot be determined by only considering RDF and single trap.
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a 2-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability. The model includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types have been tested: (i) averaged values for wheel track and undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; (iii) randomly distributed soil cells with uniformly distributed variability within the strata. All three types of small-scale variability have been tested for (a) isotropic gas diffusivity and (b) reduced horizontal gas diffusivity (constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. The simple simulation approach clearly showed the relevance of anisotropy and spatial variability in the case of identical central-tendency measures of gas diffusivity. However, it did not yet consider spatial dependency of the variability, which could aggravate the effects even further.
To consider anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.
Magnetic noise as the cause of the spontaneous magnetization reversal of RE–TM–B permanent magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dmitriev, A. I., E-mail: aid@icp.ac.ru; Talantsev, A. D., E-mail: artgtx32@mail.ru; Kunitsyna, E. I.
2016-08-15
The relation between the macroscopic spontaneous magnetization reversal (magnetic viscosity) of (NdDySm)(FeCo)B alloys and the spectral characteristics of magnetic noise, which is caused by the random microscopic processes of thermally activated domain wall motion in a potential landscape with uniformly distributed potential barrier heights, is found.
Linking of uniform random polygons in confined spaces
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.
2007-03-01
In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of n vertices is at least 1 - O(1/√n). Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of m and n vertices respectively, is bounded below by 1 - O(1/√(mn)). In particular, the linking probability between two uniform random polygons, both of n vertices, is bounded below by 1 - O(1/n).
Signs of universality in the structure of culture
NASA Astrophysics Data System (ADS)
Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego
2017-11-01
Understanding the dynamics of opinions, preferences and of culture as a whole requires more use of empirical data than has been done so far. It is clear that an important role in driving this dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits are fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated, from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity is in between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that this dynamics might be characterized by criticality and involve mechanisms beyond social influence.
Dendritic growth model of multilevel marketing
NASA Astrophysics Data System (ADS)
Pang, James Christopher S.; Monterola, Christopher P.
2017-02-01
Biologically inspired dendritic network growth is utilized to model the evolving connections of a multilevel marketing (MLM) enterprise. Starting from agents at random spatial locations, a network is formed by minimizing a distance cost function controlled by a parameter, termed the balancing factor bf, that weighs the wiring and the path length costs of connection. The paradigm is compared to an actual MLM membership data and is shown to be successful in statistically capturing the membership distribution, better than the previously reported agent based preferential attachment or analytic branching process models. Moreover, it recovers the known empirical statistics of previously studied MLM, specifically: (i) a membership distribution characterized by the existence of peak levels indicating limited growth, and (ii) an income distribution obeying the 80 - 20 Pareto principle. Extensive types of income distributions from uniform to Pareto to a "winner-take-all" kind are also modeled by varying bf. Finally, the robustness of our dendritic growth paradigm to random agent removals is explored and its implications to MLM income distributions are discussed.
Analysis of Uniform Random Numbers Generated by RANDU and URN Using Ten Different Seeds.
The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds are... The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds. (Author)
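RANDU is the classic cautionary example for such testing: it is a multiplicative congruential generator x_{k+1} = 65539 x_k mod 2^31 whose successive triples satisfy an exact linear recurrence, confining them to at most 15 parallel planes in the unit cube. A short demonstration (our own, not part of the report):

```python
def randu(seed, n):
    # The infamous IBM RANDU generator: x_{k+1} = 65539 * x_k mod 2^31
    # (seed should be odd so the sequence never hits zero).
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % (2 ** 31)
        xs.append(x)
    return xs

def randu_uniform(seed, n):
    # Scale to (0, 1) as the generator would present its output.
    return [x / 2 ** 31 for x in randu(seed, n)]

# Since 65539 = 2^16 + 3 and (2^16 + 3)^2 = 6*(2^16 + 3) - 9 (mod 2^31),
# every triple obeys x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2^31), which is what
# confines consecutive triples to a handful of planes.
```

One-dimensional frequency tests of the kind run on RANDU and URN can easily pass while this three-dimensional lattice defect goes undetected, which is why spectral and serial tests matter.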
Random isotropic one-dimensional XY-model
NASA Astrophysics Data System (ADS)
Gonçalves, L. L.; Vieira, A. P.
1998-01-01
The 1D isotropic s = ½ XY model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).
NASA Astrophysics Data System (ADS)
Bouleau, Nicolas; Chorro, Christophe
2017-08-01
In this paper we consider some elementary and fair zero-sum games of chance in order to study the impact of random effects on the wealth distribution of N interacting players. Even if an exhaustive analytical study of such games between many players may be intractable, numerical experiments highlight interesting asymptotic properties. In particular, we emphasize that randomness plays a key role in concentrating wealth in the extreme, in the hands of a single player. From a mathematical perspective, we adopt diffusion limits for small and high-frequency transactions, limits that are otherwise extensively used in population genetics. Finally, the impact of small tax rates on the preceding dynamics is discussed for several regulation mechanisms. We show that taxation of income is not sufficient to overcome this extreme concentration process, in contrast to the uniform taxation of capital, which stabilizes the economy and prevents agents from being ruined.
Formation and evolution of magnetised filaments in wind-swept turbulent clumps
NASA Astrophysics Data System (ADS)
Banda-Barragan, Wladimir Eduardo; Federrath, Christoph; Crocker, Roland M.; Bicknell, Geoffrey Vincent; Parkin, Elliot Ross
2015-08-01
Using high-resolution three-dimensional simulations, we examine the formation and evolution of filamentary structures arising from magnetohydrodynamic interactions between supersonic winds and turbulent clumps in the interstellar medium. Previous numerical studies assumed homogeneous density profiles, null velocity fields, and uniformly distributed magnetic fields as the initial conditions for interstellar clumps. Here, we have, for the first time, incorporated fractal clumps with log-normal density distributions, random velocity fields, and turbulent magnetic fields (superimposed on top of a uniform background field). Disruptive processes, instigated by dynamical instabilities and akin to those observed in simulations with uniform media, lead to stripping of clump material and the subsequent formation of filamentary tails. The evolution of filaments in uniform and turbulent models is, however, radically different, as evidenced by comparisons of global quantities in both scenarios. We show, for example, that turbulent clumps produce tails with higher velocity dispersions, increased gas mixing, greater kinetic energy, and lower plasma beta than their uniform counterparts. We attribute the observed differences to: 1) the turbulence-driven enhanced growth of dynamical instabilities (e.g. Kelvin-Helmholtz and Rayleigh-Taylor instabilities) at fluid interfaces, and 2) the localised amplification of magnetic fields caused by the stretching of field lines trapped in the numerous surface deformations of fractal clumps. We briefly discuss the implications of this work for the physics of the optical filaments observed in the starburst galaxy M82.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
PMID:29385042
Bujkiewicz, Sylwia; Riley, Richard D
2016-01-01
Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρ_B; for the latter, researchers often use a Uniform(−1,1) distribution, assuming it is vague. In this paper, an extensive simulation study and a real illustrative example are used to examine the impact of various (realistically) vague prior distributions for ρ_B and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios is considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), the amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρ_B should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρ_B, which may then adversely affect inferences such as joint predictive probabilities. These recommendations are especially important with a small number of studies and missing data.
PMID:26988929
Scaling, clustering and avalanches for steel beads in an external magnetic field
NASA Astrophysics Data System (ADS)
Marquinez, Alyse; Thvedt, Ingrid; Lehman, S. Y.; Jacobs, D. T.
2011-03-01
We investigated avalanches using uniform 3-mm steel spheres ("beads") dropped onto a conical bead pile within a uniform magnetic field. The bead pile is built by pouring beads onto a circular base on which the bottom layer of beads has been glued randomly. Beads are then individually dropped from a fixed height, after which the pile is massed. This process is repeated for thousands of bead drops. By measuring the number of avalanches of a given size that occurred during the experiment, the resulting avalanche size distribution was compared to the power-law description predicted by self-organized criticality. As the magnetic field intensity increased, the beads clustered to give a larger angle of repose, and we measured the change in the avalanche size distribution. The moments of the distribution give a sensitive test of mean-field theory as the universality class for these bead piles. We acknowledge support from Research Corporation and NSF-REU grant DMR 0649112.
Singular unlocking transition in the Winfree model of coupled oscillators.
Quinn, D Dane; Rand, Richard H; Strogatz, Steven H
2007-03-01
The Winfree model consists of a population of globally coupled phase oscillators with randomly distributed natural frequencies. As the coupling strength and the spread of natural frequencies are varied, the various stable states of the model can undergo bifurcations, nearly all of which have been characterized previously. The one exception is the unlocking transition, in which the frequency-locked state disappears abruptly as the spread of natural frequencies exceeds a critical width. Viewed as a function of the coupling strength, this critical width defines a bifurcation curve in parameter space. For the special case where the frequency distribution is uniform, earlier work had uncovered a puzzling singularity in this bifurcation curve. Here we seek to understand what causes the singularity. Using the Poincaré-Lindstedt method of perturbation theory, we analyze the locked state and its associated unlocking transition, first for an arbitrary distribution of natural frequencies, and then for discrete systems of N oscillators. We confirm that the bifurcation curve becomes singular for a continuum uniform distribution, yet find that it remains well behaved for any finite N, suggesting that the continuum limit is responsible for the singularity.
NASA Technical Reports Server (NTRS)
Englander, Jacob; Englander, Arnold
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by Englander) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness, where efficiency is finding better solutions in less time, and robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally diffusive, and super-diffusive random walks originally developed in the field of statistical physics.
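As a rough illustration of why long-tailed hop distributions help a monotonic search, the sketch below runs a stripped-down basin hop (no inner NLP solve, which real MBH implementations use) on the Rastrigin function, a hypothetical stand-in for a trajectory cost, comparing uniform and Cauchy perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    # Stand-in multimodal objective (illustrative; not a real trajectory cost).
    return 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

def basin_hop(perturb, n_hops=2000, dim=4):
    """Minimal monotonic basin hopping: hop from the incumbent and keep
    the candidate only if it improves the objective."""
    x = rng.uniform(-5.12, 5.12, dim)
    best = rastrigin(x)
    for _ in range(n_hops):
        cand = x + perturb(dim)
        val = rastrigin(cand)
        if val < best:  # monotonic acceptance: improvements only
            x, best = cand, val
    return best

uniform_hop = lambda d: rng.uniform(-1.0, 1.0, d)
cauchy_hop = lambda d: 0.3 * rng.standard_cauchy(d)  # long-tailed steps

best_uniform = basin_hop(uniform_hop)
best_cauchy = basin_hop(cauchy_hop)
print(best_uniform, best_cauchy)
```

The Cauchy draw occasionally produces very large hops, the super-diffusive escape behavior the abstract credits for the more thorough search.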
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Howlett, Cullan
2018-06-01
In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject, or numeric interpolation (sometimes via a lookup table) for projecting random Uniform samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
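The numeric inverse-transform approach that the analytic quantile function supersedes can be sketched as follows; it relies only on the standard NFW enclosed-mass profile M(<r) ∝ ln(1+r) − r/(1+r), and the abstract's closed-form quantile itself is not reproduced here:

```python
import numpy as np

def nfw_cdf(r, c):
    """Fraction of NFW mass inside scaled radius r (units of the scale
    radius), truncated at the concentration c."""
    m = lambda x: np.log(1.0 + x) - x / (1.0 + x)  # standard NFW enclosed mass
    return m(r) / m(c)

def sample_nfw_radii(n, c=10.0, grid=4096, rng=None):
    """Inverse-transform sampling via a lookup table: project uniform
    samples through an interpolated inverse of the CDF."""
    rng = rng or np.random.default_rng(0)
    r_grid = np.linspace(1e-6, c, grid)
    u = rng.uniform(size=n)
    return np.interp(u, nfw_cdf(r_grid, c), r_grid)  # CDF is monotonic

radii = sample_nfw_radii(100000)
print(radii.min(), radii.max())
```

An analytic quantile function replaces the `r_grid`/`np.interp` machinery with a single closed-form evaluation per sample, which is the speed and cleanliness gain the note advertises.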
Probability distributions for Markov chain based quantum walks
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesàro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
Chang, Jenghwa
2017-06-01
To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single-isocenter-for-multiple-targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean and standard deviations σ_x, σ_y, σ_z. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a 3D independent normal distribution with a zero mean and a uniform standard deviation σ_δ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation σ_R equal to the product of σ_δ·(π/180) and d_I↔T, the distance between the isocenter and the CTV. The two random vectors (S and R) were summed, normalized, and transformed to spherical coordinates to derive the chi distribution with three degrees of freedom for the radial coordinate of S + R. The PTV margin was determined using the critical value of this distribution at a 0.05 significance level, so that 95% of the time the treatment target is covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σ_R and d_I↔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σ_x = σ_y = σ_z = 0.715 mm), a σ_R = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%; equivalently, an additional 0.2-mm PTV margin is needed to prevent this loss of coverage.
If we choose 0.2 mm as the threshold, any σ_R > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σ_δ that can be ignored is 0.45° (0.0079 rad) for d_I↔T = 50 mm or 0.23° (0.004 rad) for d_I↔T = 100 mm. The rotational error cannot be ignored for high-accuracy/precision treatments such as SRS/SBRT, particularly when the distance between the isocenter and the target is large. © 2017 American Association of Physicists in Medicine.
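The margin arithmetic above can be checked directly: with independent isotropic per-axis errors, the radial error follows a chi distribution with three degrees of freedom, whose 95th percentile is about 2.796. A minimal sketch (the critical value is quoted, not derived, and the function names are illustrative):

```python
import math

CHI3_95 = 2.7955  # 95th percentile of the chi distribution, 3 degrees of freedom

def ptv_margin(sigma_s, sigma_r=0.0):
    """Margin (mm) giving 95% CTV coverage for independent isotropic normal
    setup (sigma_s) and rotational (sigma_r) errors, per axis."""
    return CHI3_95 * math.hypot(sigma_s, sigma_r)  # per-axis sigmas add in quadrature

def sigma_rot(sigma_delta_deg, d_iso_mm):
    """sigma_R = sigma_delta * (pi/180) * d, the relation in the abstract."""
    return math.radians(sigma_delta_deg) * d_iso_mm

base = ptv_margin(0.715)                  # setup error alone: ~2.0 mm
extra = ptv_margin(0.715, 0.328) - base   # cost of sigma_R = 0.328 mm: ~0.2 mm
print(round(base, 2), round(extra, 2))
```

This reproduces the abstract's 2-mm baseline margin and the 0.2-mm additional margin for σ_R = 0.328 mm.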
The random energy model in a magnetic field and joint source channel coding
NASA Astrophysics Data System (ADS)
Merhav, Neri
2008-09-01
We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
Exact Markov chains versus diffusion theory for haploid random mating.
Tyvand, Peder A; Thorvaldsen, Steinar
2010-05-01
Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
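A minimal Wright-Fisher sketch (haploid, no selection or mutation, fixed population size; parameter values are illustrative) shows the absorbing boundaries responsible for the loss of genetic information:

```python
import random

def wright_fisher(n_copies, p0, max_gen, rng):
    """One Wright-Fisher trajectory: each generation draws all n gene
    copies binomially from the previous generation's allele frequency."""
    count = round(p0 * n_copies)
    for _ in range(max_gen):
        p = count / n_copies
        count = sum(rng.random() < p for _ in range(n_copies))
        if count in (0, n_copies):  # loss or fixation is absorbing
            break
    return count / n_copies

rng = random.Random(1)
finals = [wright_fisher(100, 0.5, 500, rng) for _ in range(200)]
absorbed = sum(f in (0.0, 1.0) for f in finals)
print(absorbed, "of", len(finals), "runs absorbed by generation 500")
```

Near the 0 and 1 boundaries the binomial variance p(1−p)/n vanishes, which is exactly the region where the abstract notes only the Moran model converges uniformly to the diffusion limit.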
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest-path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial-time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometrically distributed discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
Concurrent infection with sibling Trichinella species in a natural host.
Pozio, E; Bandi, C; La Rosa, G; Järvis, T; Miller, I; Kapel, C M
1995-10-01
Random amplified polymorphic DNA (RAPD) analysis of individual Trichinella muscle larvae, collected from several sylvatic and domestic animals in Estonia, revealed concurrent infection of a raccoon dog with Trichinella nativa and Trichinella britovi. This finding provides strong support for their taxonomic ranking as sibling species. These 2 species appear uniformly distributed among sylvatic animals throughout Estonia, while Trichinella spiralis appears restricted to the domestic habitat.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tsung-Jui; Wu, Yuh-Renn, E-mail: yrwu@ntu.edu.tw; Shivaraman, Ravi
2014-09-21
In this paper, we describe the influence of intrinsic indium fluctuations in InGaN quantum wells on carrier transport, efficiency droop, and the emission spectrum in GaN-based light-emitting diodes (LEDs). Both real and randomly generated indium fluctuations were used in 3D simulations and compared to quantum wells with a uniform indium distribution. We found that, without further hypotheses, simulations of electrical and optical properties in LEDs, such as carrier transport, radiative and Auger recombination, and efficiency droop, are greatly improved by considering natural nanoscale indium fluctuations.
Standard random number generation for MBASIC
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a_{m+532} = a_{m+37} + a_m (mod 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
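The recurrence and the 28-bit packing can be sketched directly; the all-ones seed below is illustrative only (a real implementation would choose a proper nonzero seed and discard warm-up output):

```python
def tausworthe_bits(seed_bits, n):
    """Bit stream from the linear recurrence a_{m+532} = a_{m+37} + a_m (mod 2).
    seed_bits: 532 initial bits, not all zero."""
    a = list(seed_bits)
    for m in range(n):
        a.append((a[m + 37] + a[m]) % 2)
    return a[532:532 + n]  # drop the seed, keep the generated stream

def words28(bits):
    """Pack nonoverlapping adjacent 28-bit words, most significant bit first."""
    return [int("".join(map(str, bits[i:i + 28])), 2)
            for i in range(0, len(bits) - 27, 28)]

seed = [1] * 532  # illustrative seed; early output is not yet well mixed
u = [w / 2**28 for w in words28(tausworthe_bits(seed, 2800))]
print(u[:3])
```

Each 28-bit word, divided by 2^28, yields a pseudorandom number in [0, 1), the form a BASIC `RND` function would return.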
Does the central limit theorem always apply to phase noise? Some implications for radar problems
NASA Astrophysics Data System (ADS)
Gray, John E.; Addison, Stephen R.
2017-05-01
The phase noise problem, or Rayleigh problem, occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design, or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed, and the central limit theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT, so the superposition can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the summation of identically distributed random variables has a characteristic function (CF) that is the product of the individual CFs. The product of the CFs for phase noise can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof, as discussed in Papoulis, to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design, as well as the implications for some physics problems.
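The small-sample breakdown can be seen numerically: the amplitude of a sum of n unit phasors with uniform random phase is hard-bounded by n for small n, and only approaches the Rayleigh law the CLT predicts as n grows. A minimal sketch (trial counts are arbitrary):

```python
import cmath
import math
import random

rng = random.Random(7)

def resultant_amplitudes(n, trials):
    """|sum of n unit phasors|, each with phase uniform on [0, 2*pi)."""
    return [abs(sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
                    for _ in range(n)))
            for _ in range(trials)]

few = resultant_amplitudes(2, 3000)     # far from the CLT regime: support is [0, 2]
many = resultant_amplitudes(100, 3000)  # CLT regime: amplitude -> Rayleigh

# Rayleigh with sigma^2 = n/2 predicts mean amplitude sqrt(n * pi) / 2.
print(max(few), sum(many) / len(many))
```

For n = 2 the amplitude distribution has compact support and cannot be Rayleigh, which is the kind of limited-data failure mode discussed in the abstract.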
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for the emergence of broad distributions is the recurrent network within which the stimulus is processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance-dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
Rigorous Results for the Distribution of Money on Connected Graphs
NASA Astrophysics Data System (ADS)
Lanchier, Nicolas; Reed, Stephanie
2018-05-01
This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
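The uniform reshuffling model is simple to simulate, and the exponential limit is visible in the fraction of agents holding less than the mean (about 1 − 1/e ≈ 0.63 for an exponential law). A minimal sketch with illustrative parameters:

```python
import random

def uniform_reshuffle(n_agents=2000, steps=200000, rng=None):
    """Uniform reshuffling: a uniformly chosen pair pools its money and
    splits the pool uniformly at random; total money is conserved."""
    rng = rng or random.Random(0)
    money = [1.0] * n_agents  # average money per agent = 1
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        pool = money[i] + money[j]
        u = rng.random()
        money[i], money[j] = u * pool, (1 - u) * pool
    return money

m = uniform_reshuffle()
frac_below = sum(x < 1.0 for x in m) / len(m)
print(frac_below)  # near 1 - 1/e once the exponential limit is reached
```

The local variant proved in the paper only changes how the pair (i, j) is chosen: neighbors on a connected graph instead of a uniform draw from the whole system.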
The Ciliate Paramecium Shows Higher Motility in Non-Uniform Chemical Landscapes
Giuffre, Carl; Hinow, Peter; Vogel, Ryan; Ahmed, Tanvir; Stocker, Roman; Consi, Thomas R.; Strickler, J. Rudi
2011-01-01
We study the motility behavior of the unicellular protozoan Paramecium tetraurelia in a microfluidic device that can be prepared with a landscape of attracting or repelling chemicals. We investigate the spatial distribution of the positions of the individuals at different time points with methods from spatial statistics and Poisson random point fields. This makes quantitative the informal notion of “uniform distribution” (or lack thereof). Our device is characterized by the absence of large systematic biases due to gravitation and fluid flow. It has the potential to be applied to the study of other aquatic chemosensitive organisms as well. This may result in better diagnostic devices for environmental pollutants. PMID:21494596
NASA Astrophysics Data System (ADS)
Hilarov, V. L.
2017-09-01
The response of a material with a random uniform distribution of pores to a sound impulse was studied. The behavior of the numerical characteristics of the recurrence plots (RP) of the normal displacement vector component depending on the degree of damage was investigated. It was shown that the recurrence quantification analysis (RQA) parameters could be very informative for sonic fault detection.
Determining irrigation distribution uniformity and efficiency for nurseries
R. Thomas Fernandez
2010-01-01
A simple method for testing the distribution uniformity of overhead irrigation systems is described. The procedure is described step-by-step along with an example. Other uses of distribution uniformity testing are presented, as well as common situations that affect distribution uniformity and how to alleviate them.
An In-Depth Analysis of the Chung-Lu Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winlaw, M.; DeSterck, H.; Sanders, G.
2015-10-28
In the classic Erdős-Rényi random graph model [5], each edge is chosen with uniform probability and the degree distribution is binomial, limiting the number of graphs that can be modeled using the Erdős-Rényi framework [10]. The Chung-Lu model [1, 2, 3] is an extension of the Erdős-Rényi model that allows for more general degree distributions. The probability of each edge is no longer uniform and is a function of a user-supplied degree sequence, which by design is the expected degree sequence of the model. This property makes it an easy model to work with theoretically, and since the Chung-Lu model is a special case of a random graph model with a given degree sequence, many of its properties are well known and have been studied extensively [2, 3, 13, 8, 9]. It is also an attractive null model for many real-world networks, particularly those with power-law degree distributions, and it is sometimes used as a benchmark for comparison with other graph generators despite some of its limitations [12, 11]. We know, for example, that the average clustering coefficient is too low relative to most real-world networks. As well, measures of affinity are also too low relative to most real-world networks of interest. However, despite these limitations, or perhaps because of them, the Chung-Lu model provides a basis for comparing new graph models.
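A minimal Chung-Lu sketch, using a hypothetical two-level expected degree sequence, shows the defining property that the user-supplied weights are reproduced as expected degrees:

```python
import random

def chung_lu(weights, rng):
    """Chung-Lu graph: include edge (i, j), i < j, with probability
    min(1, w_i * w_j / sum(w)), so that E[deg(i)] is approximately w_i."""
    s = float(sum(weights))
    n = len(weights)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / s):
                edges.append((i, j))
    return edges

rng = random.Random(42)
w = [10] * 50 + [2] * 200  # illustrative two-level expected degree sequence
deg = [0] * len(w)
for i, j in chung_lu(w, rng):
    deg[i] += 1
    deg[j] += 1
heavy = sum(deg[:50]) / 50   # should be close to 10
light = sum(deg[50:]) / 200  # should be close to 2
print(heavy, light)
```

Setting all weights equal recovers the Erdős-Rényi special case with uniform edge probability.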
Geometric evolution of complex networks with degree correlations
NASA Astrophysics Data System (ADS)
Murphy, Charles; Allard, Antoine; Laurence, Edward; St-Onge, Guillaume; Dubé, Louis J.
2018-03-01
We present a general class of geometric network growth mechanisms by homogeneous attachment in which the links created at a given time t are distributed homogeneously between a new node and the existing nodes selected uniformly. This is achieved by creating links between nodes uniformly distributed in a homogeneous metric space according to a Fermi-Dirac connection probability with inverse temperature β and general time-dependent chemical potential μ(t). The chemical potential limits the spatial extent of newly created links. Using a hidden variable framework, we obtain an analytical expression for the degree sequence and show that μ(t) can be fixed to yield any given degree distribution, including a scale-free degree distribution. Additionally, we find that, depending on the order in which nodes appear in the network (its history), the degree-degree correlations can be tuned to be assortative or disassortative. The effect of the geometry on the structure is investigated through the average clustering coefficient ⟨c⟩. In the thermodynamic limit, we identify a phase transition between a random regime where ⟨c⟩ → 0 when β < β_c and a geometric regime where ⟨c⟩ > 0 when β > β_c.
Characteristics of grouping colors for figure segregation on a multicolored background.
Nagai, Takehiro; Uchikawa, Keiji
2008-11-01
A figure is segregated from its background when the colored elements belonging to the figure are grouped together. We investigated the range of color distribution conditions in which a figure could be segregated from its background using the color distribution differences. The stimulus was a multicolored texture composed of randomly shaped pieces. It was divided into two regions: a test region and a background region. The pieces in these two regions had different color distributions in the OSA Uniform Color Space. In our experiments, the subject segregated the figure of the test region using two different procedures. Since the Euclidean distance in the OSA Uniform Color Space corresponds to perceived color difference, if segregation thresholds are determined by only color difference, the thresholds should be independent of position and direction in the color space. In the results, however, the thresholds did depend on position and direction in the OSA Uniform Color Space. This suggests that color difference is not the only factor in figure segregation by color. Moreover, the threshold dependence on position and direction is influenced by the distances in the cone-opponent space whose axes are normalized by discrimination thresholds, suggesting that figure segregation threshold is determined by similar factors in the cone-opponent space for color discrimination. The analysis of the results by categorical color naming suggests that categorical color perception may affect figure segregation only slightly.
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
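The minimal growth model above is easy to simulate: at each step add a vertex, then with probability delta join two uniformly chosen distinct vertices. The sketch below, with a small union-find helper to measure the largest component, is our own illustration of the process, not the authors' code.

```python
import random
from collections import Counter

def grow_graph(t_max, delta, seed=None):
    """Callaway et al. growth: each time step adds a vertex; then, with
    probability delta, two distinct vertices chosen uniformly at random
    are joined by an undirected edge."""
    rng = random.Random(seed)
    n, edges = 0, []
    for _ in range(t_max):
        n += 1
        if n >= 2 and rng.random() < delta:
            u, v = rng.sample(range(n), 2)
            edges.append((u, v))
    return n, edges

def largest_component(n, edges):
    """Size of the largest connected component via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return max(Counter(find(i) for i in range(n)).values())
```

Sweeping delta through the critical value 1/8 and tracking the largest component fraction is the numerical counterpart of the infinite-order transition discussed in the abstract.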
Results on angular distributions of thermal dileptons in nuclear collisions
NASA Astrophysics Data System (ADS)
Usai, Gianluca; NA60 Collaboration
2009-11-01
The NA60 experiment at the CERN SPS has studied dimuon production in 158 AGeV In-In collisions. The strong pair excess above the known sources found in the mass region 0.2
First Results on Angular Distributions of Thermal Dileptons in Nuclear Collisions
NASA Astrophysics Data System (ADS)
Arnaldi, R.; Banicz, K.; Castor, J.; Chaurand, B.; Cicalò, C.; Colla, A.; Cortese, P.; Damjanovic, S.; David, A.; de Falco, A.; Devaux, A.; Ducroux, L.; En'Yo, H.; Fargeix, J.; Ferretti, A.; Floris, M.; Förster, A.; Force, P.; Guettet, N.; Guichard, A.; Gulkanian, H.; Heuser, J. M.; Keil, M.; Kluberg, L.; Lourenço, C.; Lozano, J.; Manso, F.; Martins, P.; Masoni, A.; Neves, A.; Ohnishi, H.; Oppedisano, C.; Parracho, P.; Pillot, P.; Poghosyan, T.; Puddu, G.; Radermacher, E.; Ramalhete, P.; Rosinsky, P.; Scomparin, E.; Seixas, J.; Serci, S.; Shahoyan, R.; Sonderegger, P.; Specht, H. J.; Tieulent, R.; Usai, G.; Veenhof, R.; Wöhri, H. K.
2009-06-01
The NA60 experiment at the CERN Super Proton Synchrotron has studied dimuon production in 158AGeV In-In collisions. The strong excess of pairs above the known sources found in the complete mass region 0.2
The energy density distribution of an ideal gas and Bernoulli’s equations
NASA Astrophysics Data System (ADS)
Santos, Leonardo S. F.
2018-05-01
This work discusses the energy density distribution in an ideal gas and the consequences of Bernoulli’s equation and the corresponding relation for compressible fluids. The aim of this work is to study how Bernoulli’s equation determines the energy flow in a fluid, although Bernoulli’s equation does not describe the energy density itself. The model from molecular dynamics considerations that describes an ideal gas at rest with uniform density is modified to explore the gas in motion with non-uniform density and gravitational effects. The difference between the component of the speed of a particle that is parallel to the gas speed and the gas speed itself is called the ‘parallel random speed’. The pressure from the ‘parallel random speed’ is denoted the parallel pressure. The modified model predicts that the energy density is the sum of the kinetic and gravitational potential energy densities plus two terms involving the static and parallel pressures. Applying Bernoulli’s equation and the corresponding relation for compressible fluids to the energy density expression results in two new formulations. For an incompressible and a compressible gas, the energy density expressions are written as functions of the stagnation, static and parallel pressures, without any dependence on the kinetic or gravitational potential energy densities. These expressions for the energy density are the main contributions of this work. When the parallel pressure is uniform, the energy density distribution for the incompressible approximation and for the compressible gas does not converge to zero in the limit of null static pressure. This result is rather unusual because the temperature tends to zero for null pressure. When the gas is considered incompressible and the parallel pressure is equal to the static pressure, the energy density maintains this unusual behaviour at small pressures. If the parallel pressure is equal to the static pressure, the energy density converges to zero in the limit of null pressure only if the gas is compressible. Only the last situation describes an intuitive behaviour for an ideal gas.
A new model of the lunar ejecta cloud
NASA Astrophysics Data System (ADS)
Christou, A. A.
2014-04-01
Every airless body in the solar system is surrounded by a cloud of ejecta produced by the impact of interplanetary meteoroids on its surface [1]. Such "dust exospheres" have been observed around the Galilean satellites of Jupiter [2, 3]. The prospect of long-term robotic and human operations on the Moon by the US and other countries has rekindled interest in the subject [4]. This interest has culminated in the recent investigation of the Moon's dust exosphere by the LADEE spacecraft [5]. Here a model is presented of a ballistic, collisionless, steady-state population of ejecta launched vertically at randomly distributed times and velocities. Assuming a uniform distribution of launch times, I derive closed-form solutions for the probability density functions (pdfs) of the height distribution of particles and the distribution of their speeds in a rest frame, both at the surface and at altitude. The treatment is then extended to particle motion with respect to a moving platform such as an orbiting spacecraft. These expressions are compared with numerical simulations under lunar surface gravity where the underlying ejection speed distribution is (a) uniform or (b) a power law. I discuss the predictions of the model, its limitations, and how it can be validated against near-surface and orbital measurements.
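The steady-state assumption has a convenient consequence that is easy to check numerically: with launch times uniform in time, a snapshot catches each particle at a uniform random instant of its ballistic flight, so for a single launch speed v the time-averaged height is (2/3) of the maximum height v²/(2g). The Monte Carlo sketch below (our own illustration, not the paper's model, which also treats speed distributions and moving platforms) samples that snapshot.

```python
import random

def sample_heights(v, g, n, seed=None):
    """Snapshot heights of ballistic ejecta launched vertically at speed v:
    under a uniform distribution of launch times, each particle is observed
    at a uniform random instant of its flight of duration 2v/g."""
    rng = random.Random(seed)
    heights = []
    for _ in range(n):
        t = rng.uniform(0.0, 2.0 * v / g)
        heights.append(v * t - 0.5 * g * t * t)
    return heights
```

With lunar surface gravity g ≈ 1.62 m/s², the sample mean converges to (2/3) v²/(2g), a simple sanity check for the closed-form pdfs the abstract describes.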
Trophallaxis-inspired model for distributed transport between randomly interacting agents
NASA Astrophysics Data System (ADS)
Gräwer, Johannes; Ronellenfitsch, Henrik; Mazza, Marco G.; Katifori, Eleni
2017-08-01
Trophallaxis, the regurgitation and mouth to mouth transfer of liquid food between members of eusocial insect societies, is an important process that allows the fast and efficient dissemination of food in the colony. Trophallactic systems are typically treated as a network of agent interactions. This approach, though valuable, does not easily lend itself to analytic predictions. In this work we consider a simple trophallactic system of randomly interacting agents with finite carrying capacity, and calculate analytically and via a series of simulations the global food intake rate for the whole colony as well as observables describing how uniformly the food is distributed within the nest. Our model and predictions provide a useful benchmark to assess to what level the observed food uptake rates and efficiency in food distribution is due to stochastic effects or specific trophallactic strategies by the ant colony. Our work also serves as a stepping stone to describing the collective properties of more complex trophallactic systems, such as those including division of labor between foragers and workers.
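A stripped-down version of such a randomly interacting system can be simulated in a few lines: at each step two randomly chosen agents meet and split their combined food load evenly, subject to the finite carrying capacity. This is a minimal sketch under our own assumptions (pairwise even splitting, no foraging term), not the authors' model.

```python
import random

def share(loads, n_steps, capacity, seed=None):
    """Toy trophallaxis: each step, two randomly chosen agents meet and
    split their combined food evenly; a share is truncated at the carrying
    capacity, with the excess staying with the other agent."""
    rng = random.Random(seed)
    loads = list(loads)
    for _ in range(n_steps):
        i, j = rng.sample(range(len(loads)), 2)
        total = loads[i] + loads[j]
        half = min(total / 2.0, capacity)
        loads[i], loads[j] = half, total - half
    return loads
```

Total food is conserved, and repeated random pairwise averaging drives the loads toward a uniform distribution across the nest, the observable the abstract quantifies.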
Geologic map of the Agnesi quadrangle (V-45), Venus
Hansen, Vicki L.; Tharalson, Erik R.
2014-01-01
Two general classes of hypotheses have emerged to address the near random spatial distribution of ~970 apparently pristine impact craters across the surface of Venus: (1) catastrophic/episodic resurfacing and (2) equilibrium/evolutionary resurfacing. Catastrophic/episodic hypotheses propose that a global-scale, temporally punctuated event or events dominated Venus’ evolution and that the generally uniform impact crater distribution (Schaber and others, 1992; Phillips and others, 1992; Herrick and others, 1997) reflects craters that accumulated during relative global quiescence since that event (for example, Strom and others, 1994; Herrick, 1994; Turcotte and others, 1999). Equilibrium/evolutionary hypotheses suggest instead that the near random crater distribution results from relatively continuous, but spatially localized, resurfacing in which volcanic and (or) tectonic processes occur across the planet through time, although the style of operative processes may have varied temporally and spatially (for example, Phillips and others, 1992; Guest and Stofan, 1999; Hansen and Young, 2007). Geologic relations within the map area allow us to test the catastrophic/episodic versus equilibrium/evolutionary resurfacing hypotheses.
Phase transition in nonuniform Josephson arrays: Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Pomirchy, L. M.
1994-01-01
A disordered 2D system with Josephson interactions is considered. The disordered XY-model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants Jij are equal to J with probability p or zero otherwise (the bond percolation problem); (2) coupling constants Jij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function and helicity modulus is analyzed. The phase diagram of the diluted system in the Tc-p plane is obtained.
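The core of such a simulation is a standard Metropolis sweep over the XY spins with bond-dependent couplings, which covers both disorder types above (diluted J ∈ {0, J} or uniformly distributed J). The sketch below is a minimal single-spin-update implementation of our own, not the authors' code.

```python
import math
import random

def metropolis_xy(L, T, couplings, n_sweeps, seed=None):
    """Metropolis sampling of a 2D XY model on an L x L periodic lattice.
    couplings maps each bond frozenset({(x, y), (x', y')}) to J >= 0, so
    diluted (J = 0) and uniformly distributed couplings are both covered."""
    rng = random.Random(seed)
    theta = [[0.0] * L for _ in range(L)]   # start from the aligned state
    def site_energy(x, y, a):
        e = 0.0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L
            J = couplings[frozenset(((x, y), (nx, ny)))]
            e -= J * math.cos(a - theta[nx][ny])
        return e
    for _ in range(n_sweeps * L * L):
        x, y = rng.randrange(L), rng.randrange(L)
        new = theta[x][y] + rng.uniform(-1.0, 1.0)
        dE = site_energy(x, y, new) - site_energy(x, y, theta[x][y])
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            theta[x][y] = new
    return theta
```

From the sampled angles one can accumulate the observables listed in the abstract (energy, specific heat, phase correlations, helicity modulus) as Monte Carlo averages.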
Effects of fixture rotation on coating uniformity for high-performance optical filter fabrication
NASA Astrophysics Data System (ADS)
Rubin, Binyamin; George, Jason; Singhal, Riju
2018-04-01
Coating uniformity is critical in fabricating high-performance optical filters by various vacuum deposition methods. Simple and planetary rotation systems with shadow masks are used to achieve the required uniformity [J. B. Oliver and D. Talbot, Appl. Optics 45, 13, 3097 (2006); O. Lyngnes, K. Kraus, A. Ode and T. Erguder, in `Method for Designing Coating Thickness Uniformity Shadow Masks for Deposition Systems with a Planetary Fixture', 2014 Technical Conference Proceedings, Optical Coatings, August 13, 2014, DOI: 10.14332/svc14.proc.1817.]. In this work, we discuss the effect of rotation pattern and speed on thickness uniformity in an ion beam sputter deposition system. Numerical modeling is used to determine the statistical distribution of random thickness errors in coating layers. The relationship between thickness tolerance and production yield is simulated theoretically and demonstrated experimentally. Production yields for different optical filters produced in an ion beam deposition system with planetary rotation are presented. Single-wavelength and broadband optical monitoring systems were used for endpoint monitoring during filter deposition. Limitations of the thickness tolerances that can be achieved in systems with planetary rotation are shown. Paths for improving production yield in an ion beam deposition system are described.
Development of a methodology to evaluate material accountability in pyroprocess
NASA Astrophysics Data System (ADS)
Woo, Seungmin
This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and of burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), a Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are constructed to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Combined cross sections and time also generate axial non-uniformity, as the exponential term in the analytical solution contains the neutron flux, cross section and time. The axial concentration distribution for a nuclide with a small cross section is steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term of the analytical solution. Similarly, the non-uniformity becomes flatter with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes, and by decoupling the axial distributions from predetermined representative radial distributions matched by axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies, the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles, are obtained. These data are then modified to represent the head-end steps of the pyroprocess, namely chopping, voloxidation and granulation.
The expectation and standard deviation of the Pu-to-244Cm ratio obtained by single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Uncertainty propagation through the key pyroprocess is then conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipt minus the shipment of a process. A random variable, LOPu, is defined for evaluating the non-detection probability at each Key Measurement Point (KMP) as the original Pu mass minus the Pu mass after a missing scenario. The number of assemblies for which LOPu reaches 8 kg is considered in this calculation. The probability of detecting the 8 kg LOPu is evaluated with respect to the size of granules and powder using event tree analysis and hypothesis testing. There are cases in which the probability of detection for the 8 kg LOPu falls below 95%. To enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, the probability of detection increases significantly when the granule sample sizes used to evaluate the Pu-to-244Cm ratio before the key pyroprocess are increased. Based on these observations, although Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm ratio method is applied, this can be overcome by decreasing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.)
Dynamical properties of the S =1/2 random Heisenberg chain
NASA Astrophysics Data System (ADS)
Shu, Yu-Rong; Dupont, Maxime; Yao, Dao-Xin; Capponi, Sylvain; Sandvik, Anders W.
2018-03-01
We study dynamical properties at finite temperature (T ) of Heisenberg spin chains with random antiferromagnetic exchange couplings, which realize the random singlet phase in the low-energy limit, using three complementary numerical methods: exact diagonalization, matrix-product-state algorithms, and stochastic analytic continuation of quantum Monte Carlo results in imaginary time. Specifically, we investigate the dynamic spin structure factor S (q ,ω ) and its ω →0 limit, which are closely related to inelastic neutron scattering and nuclear magnetic resonance (NMR) experiments (through the spin-lattice relaxation rate 1 /T1 ). Our study reveals a continuous narrow band of low-energy excitations in S (q ,ω ) , extending throughout the q space, instead of being restricted to q ≈0 and q ≈π as found in the uniform system. Close to q =π , the scaling properties of these excitations are well captured by the random-singlet theory, but disagreements also exist with some aspects of the predicted q dependence further away from q =π . Furthermore we also find spin diffusion effects close to q =0 that are not contained within the random-singlet theory but give non-negligible contributions to the mean 1 /T1 . To compare with NMR experiments, we consider the distribution of the local relaxation rates 1 /T1 . We show that the local 1 /T1 values are broadly distributed, approximately according to a stretched exponential. The mean 1 /T1 first decreases with T , but below a crossover temperature it starts to increase and likely diverges in the limit of a small nuclear resonance frequency ω0. Although a similar divergent behavior has been predicted and experimentally observed for the static uniform susceptibility, this divergent behavior of the mean 1 /T1 has never been experimentally observed. Indeed, we show that the divergence of the mean 1 /T1 is due to rare events in the disordered chains and is concealed in experiments, where the typical 1 /T1 value is accessed.
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.
Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh
2017-06-01
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. https://github.com/opencobra/cobratoolbox . ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
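The coordinate hit-and-run step at the heart of CHRR is simple to state for a polytope {x : Ax ≤ b}: pick a coordinate axis, compute the feasible chord through the current point along that axis, and jump to a uniform point on the chord. The sketch below shows only that random walk, without the rounding preprocessing that the paper argues is crucial for anisotropic flux sets; it is our own illustration, not the COBRA Toolbox implementation.

```python
import random

def coordinate_hit_and_run(A, b, x0, n_steps, seed=None):
    """Coordinate hit-and-run over {x : A x <= b} (no rounding step):
    repeatedly pick a coordinate, find the feasible segment through the
    current point along that axis, and sample uniformly on it."""
    rng = random.Random(seed)
    x = list(x0)          # must be a feasible starting point
    n = len(x0)
    samples = []
    for _ in range(n_steps):
        k = rng.randrange(n)
        lo, hi = -float('inf'), float('inf')
        for Ai, bi in zip(A, b):
            a = Ai[k]
            if a == 0:
                continue
            # constraint Ai.x' <= bi with only coordinate k changing
            slack = bi - sum(Ai[j] * x[j] for j in range(n))
            bound = x[k] + slack / a
            if a > 0:
                hi = min(hi, bound)
            else:
                lo = max(lo, bound)
        x[k] = rng.uniform(lo, hi)
        samples.append(list(x))
    return samples
```

On a well-rounded set each coordinate move is cheap, which is why the rounding step plus this walk converges faster than artificial centering hit-and-run on genome-scale networks.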
A Search Model for Imperfectly Detected Targets
NASA Technical Reports Server (NTRS)
Ahumada, Albert
2012-01-01
Under the assumptions that 1) the search region can be divided into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme-case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and the kurtosis of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
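The sum of the two random variables described above is easy to simulate and to check against the closed-form mean, N(1 − P)/P + (N + 1)/2 (N times the expected number of failed full passes, plus the expected position on the successful pass). The sketch below is our own paraphrase of the model, with hypothetical function names.

```python
import random

def searches_until_detection(N, P, rng):
    """Systematic search of N sub-regions with per-look detection
    probability P: full passes fail independently with probability 1 - P;
    the target sits at a uniform random position 1..N within the
    successful pass."""
    full_passes = 0
    while rng.random() >= P:       # count failed full passes
        full_passes += 1
    return N * full_passes + rng.randint(1, N)

def mean_searches(N, P):
    """Closed-form mean: N*(1-P)/P failed-pass looks plus (N+1)/2."""
    return N * (1 - P) / P + (N + 1) / 2
```

With P = 1 the geometric part vanishes and the mean reduces to (N + 1)/2, the perfect-search limit mentioned in the abstract.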
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on a direct product of graphs is obtained by multiplying the probabilities on the corresponding subgraphs; this method is useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probabilities of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete graph Kn, charter and n-cube). We also show that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
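For reference, the serial dart-throwing baseline that the paper parallelizes is only a few lines in the Euclidean planar case: accept a uniform candidate only if it is at least r from every accepted sample. This is the conventional approach, shown here as a minimal sketch (the paper's contribution, priority-based parallel conflict resolution on surfaces with the intrinsic metric, is not reproduced).

```python
import math
import random

def dart_throwing(r, n_candidates, seed=None):
    """Naive Poisson disk sampling in the unit square: a uniform candidate
    is accepted only if it keeps distance >= r to all accepted samples."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_candidates):
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts
```

The quadratic rejection test is what makes naive dart throwing slow and inherently serial, motivating spatial partitioning and, in this paper, randomized per-candidate priorities instead.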
RELATIVE ORIENTATION OF PAIRS OF SPIRAL GALAXIES IN THE SLOAN DIGITAL SKY SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buxton, Jesse; Ryden, Barbara S., E-mail: buxton.45@osu.edu, E-mail: ryden@astronomy.ohio-state.edu
2012-09-10
From our study of binary spiral galaxies in the Sloan Digital Sky Survey Data Release 6, we find that the relative orientation of disks in binary spiral galaxies is consistent with their being drawn from a random distribution of orientations. For 747 isolated pairs of luminous disk galaxies, the distribution of φ, the angle between the major axes of the galaxy images, is consistent with a uniform distribution on the interval [0°, 90°]. With the assumption that the disk galaxies are oblate spheroids, we can compute cos β, where β is the angle between the rotation axes of the disks. In the case that one galaxy in the binary is face-on or edge-on, the tilt ambiguity is resolved, and cos β can be computed unambiguously. For 94 isolated pairs with at least one face-on member, and for 171 isolated pairs with at least one edge-on member, the distribution of cos β is statistically consistent with the distribution of cos i for isolated disk galaxies. This result is consistent with random orientations of the disks within pairs.
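The null hypothesis being tested has a simple Monte Carlo counterpart: for two independently, isotropically oriented spin axes, |cos β| is uniform on [0, 1]. The sketch below (our own illustration of the random-orientation null, not the authors' analysis pipeline) generates that reference distribution.

```python
import math
import random

def random_axis(rng):
    """Uniform random direction on the unit sphere (z uniform in [-1, 1],
    azimuth uniform in [0, 2*pi) gives an isotropic direction)."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def cos_beta_samples(n, seed=None):
    """|cos beta| between the spin axes of two independently, isotropically
    oriented disks; for random orientations this is uniform on [0, 1]."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a, b = random_axis(rng), random_axis(rng)
        out.append(abs(sum(ai * bi for ai, bi in zip(a, b))))
    return out
```

Comparing an observed cos β histogram against this flat reference is the essence of the consistency test reported in the abstract.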
NASA Astrophysics Data System (ADS)
Russell, Matthew J.; Jensen, Oliver E.; Galla, Tobias
2016-10-01
Motivated by uncertainty quantification in natural transport systems, we investigate an individual-based transport process involving particles undergoing a random walk along a line of point sinks whose strengths are themselves independent random variables. We assume particles are removed from the system via first-order kinetics. We analyze the system using a hierarchy of approaches when the sinks are sparsely distributed, including a stochastic homogenization approximation that yields explicit predictions for the extrinsic disorder in the stationary state due to sink strength fluctuations. The extrinsic noise induces long-range spatial correlations in the particle concentration, unlike fluctuations due to the intrinsic noise alone. Additionally, the mean concentration profile, averaged over both intrinsic and extrinsic noise, is elevated compared with the corresponding profile from a uniform sink distribution, showing that the classical homogenization approximation can be a biased estimator of the true mean.
Schlemm, Eckhard
2015-09-01
The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3. Copyright © 2015 Elsevier Inc. All rights reserved.
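The model itself is very short to simulate: species sit on a ring with uniform random fitnesses, and at each step the least-fit species and its two neighbours receive fresh uniform fitnesses. The sketch below is a standard textbook implementation of our own, useful for eyeballing the steady-state fitness distribution the abstract analyzes; it is not the paper's analytic machinery.

```python
import random

def bak_sneppen(n_species, n_steps, seed=None):
    """Bak-Sneppen dynamics on a ring: each step replaces the least-fit
    species and its two neighbours with fresh uniform(0, 1) fitnesses.
    Returns the fitness values after n_steps updates."""
    rng = random.Random(seed)
    f = [rng.random() for _ in range(n_species)]
    for _ in range(n_steps):
        i = min(range(n_species), key=f.__getitem__)
        for j in (i - 1, i, (i + 1) % n_species):  # i - 1 wraps via negative index
            f[j] = rng.random()
    return f
```

In the steady state most fitness values accumulate above the critical threshold fc ≳ 2/3 mentioned in the abstract, with only the freshly replaced species below it.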
Optimal hash arrangement of tentacles in jellyfish
NASA Astrophysics Data System (ADS)
Okabe, Takuya; Yoshimura, Jin
2016-06-01
At first glance, the trailing tentacles of a jellyfish appear to be randomly arranged. However, close examination of medusae has revealed that the arrangement and developmental order of the tentacles obey a mathematical rule. Here, we show that medusa jellyfish adopt the best strategy to achieve the most uniform distribution of a variable number of tentacles. The observed order of tentacles is a real-world example of an optimal hashing algorithm known as Fibonacci hashing in computer science.
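The rule behind Fibonacci hashing is to place the k-th item at k times the golden angle (about 137.5°) around the circle, which keeps the gaps nearly uniform for any number of items. The sketch below is a generic illustration of that placement, not the authors' morphological analysis.

```python
import math

def golden_angle_positions(n):
    """Angular positions of n items placed at successive multiples of the
    golden angle (2*pi*(1 - 1/phi) ~ 137.5 degrees), the placement rule
    behind Fibonacci hashing."""
    phi = (1.0 + math.sqrt(5.0)) / 2.0
    golden_angle = 2.0 * math.pi * (1.0 - 1.0 / phi)
    return [(k * golden_angle) % (2.0 * math.pi) for k in range(n)]
```

By the three-distance theorem the gaps take at most three distinct sizes for every n, so adding one more tentacle never creates a large hole, which is the "most uniform for a variable number of items" property the abstract highlights.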
Dynamic Snap-Through of Thin-Walled Structures by a Reduced Order Method
NASA Technical Reports Server (NTRS)
Przekop, Adam; Rizzi, Stephen A.
2006-01-01
The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures exposed to combined high intensity random pressure fluctuations and thermal loadings. The study is conducted on a flat aluminum beam, which permits a comparison of results obtained by a reduced-order analysis with those obtained from a numerically intensive simulation in physical degrees-of-freedom. A uniformly distributed thermal loading is first applied to investigate the dynamic instability associated with thermal buckling. A uniformly distributed random loading is added to investigate the combined thermal-acoustic response. In the latter case, three types of response characteristics are considered, namely: (i) small amplitude vibration around one of the two stable buckling equilibrium positions, (ii) intermittent snap-through response between the two equilibrium positions, and (iii) persistent snap-through response between the two equilibrium positions. For the reduced order analysis, four categories of modal basis functions are identified including those having symmetric transverse (ST), anti-symmetric transverse (AT), symmetric in-plane (SI), and anti-symmetric in-plane (AI) displacements. The effect of basis selection on the quality of results is investigated for the dynamic thermal buckling and combined thermal-acoustic response. It is found that despite symmetric geometry, loading, and boundary conditions, the AT and SI modes must be included in the basis as they participate in the snap-through behavior.
Dynamic Snap-Through of Thermally Buckled Structures by a Reduced Order Method
NASA Technical Reports Server (NTRS)
Przekop, Adam; Rizzi, Stephen A.
2007-01-01
The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures exposed to combined high intensity random pressure fluctuations and thermal loadings. The study is conducted on a flat aluminum beam, which permits a comparison of results obtained by a reduced-order analysis with those obtained from a numerically intensive simulation in physical degrees-of-freedom. A uniformly distributed thermal loading is first applied to investigate the dynamic instability associated with thermal buckling. A uniformly distributed random loading is added to investigate the combined thermal-acoustic response. In the latter case, three types of response characteristics are considered, namely: (i) small amplitude vibration around one of the two stable buckling equilibrium positions, (ii) intermittent snap-through response between the two equilibrium positions, and (iii) persistent snap-through response between the two equilibrium positions. For the reduced-order analysis, four categories of modal basis functions are identified including those having symmetric transverse, anti-symmetric transverse, symmetric in-plane, and anti-symmetric in-plane displacements. The effect of basis selection on the quality of results is investigated for the dynamic thermal buckling and combined thermal-acoustic response. It is found that despite symmetric geometry, loading, and boundary conditions, the anti-symmetric transverse and symmetric in-plane modes must be included in the basis as they participate in the snap-through behavior.
Envelope and phase distribution of a resonance transmission through a complex environment
NASA Astrophysics Data System (ADS)
Savin, Dmitry V.
2018-06-01
A transmission amplitude is considered for quantum or wave transport mediated by a single resonance coupled to the background of many chaotic states. Such a model provides a useful approach to quantify fluctuations in an established signal induced by a complex environment. Applying random matrix theory to the problem, we derive an exact result for the joint distribution of the transmission intensity (envelope) and the transmission phase at arbitrary coupling to the background with finite absorption. The intensity and phase are distributed within a certain region, revealing essential correlations even at strong absorption. In the latter limit, we obtain a simple asymptotic expression that provides a uniformly good approximation of the exact distribution within its whole support, thus going beyond the Rician distribution often used for such purposes. Exact results are also derived for the marginal distribution of the phase, including its limiting forms at weak and strong absorption.
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the correction methods perform uniformly well. We discuss the consequences of inappropriate distribution assumptions and the reasons for the differing behavior of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1992-01-01
In the present growth-rate calculations for a protoplanet that is embedded in a disk of planetesimals with triaxial Gaussian velocity dispersion and uniform surface density, the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are equal to the rms values of those disk variables having locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
First Results on Angular Distributions of Thermal Dileptons in Nuclear Collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnaldi, R.; Colla, A.; Cortese, P.
The NA60 experiment at the CERN Super Proton Synchrotron has studied dimuon production in 158A GeV In-In collisions. The strong excess of pairs above the known sources found in the complete mass region 0.2
Probabilistic pathway construction.
Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha
2011-07-01
Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
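The three selection schemes compared in the abstract (high connectivity, low connectivity, uniformly random) amount to different weight functions over candidate reactions. The toy candidates and connectivity values below are invented for illustration, not taken from the paper's metabolic networks.

```python
import random

def pick_reaction(candidates, connectivity, scheme, rng):
    """Probabilistically select one candidate reaction under a given scheme.
    connectivity[r] = a (hypothetical) metabolite-connectivity score for r."""
    if scheme == "uniform":
        weights = [1.0] * len(candidates)
    elif scheme == "high":   # favor highly connected metabolites
        weights = [connectivity[r] for r in candidates]
    elif scheme == "low":    # favor poorly connected metabolites
        weights = [1.0 / connectivity[r] for r in candidates]
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(7)
conn = {"r1": 50, "r2": 5, "r3": 1}
picks = [pick_reaction(["r1", "r2", "r3"], conn, "low", rng) for _ in range(1000)]
```

Repeating such draws and keeping only pathways that pass a viability check (flux balance in the paper) yields the sampled distribution of yields the authors compare against exhaustive search.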
NASA Astrophysics Data System (ADS)
Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin
2018-04-01
The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).
Apparatus for synthesis of a solar spectrum
Sopori, Bhushan L.
1993-01-01
A xenon arc lamp and a tungsten filament lamp provide light beams that together contain all the wavelengths required to accurately simulate a solar spectrum. Suitable filter apparatus selectively direct visible and ultraviolet light from the xenon arc lamp into two legs of a trifurcated randomized fiber optic cable. Infrared light selectively filtered from the tungsten filament lamp is directed into the third leg of the fiber optic cable. The individual optic fibers from the three legs are brought together in a random fashion into a single output leg. The output beam emanating from the output leg of the trifurcated randomized fiber optic cable is extremely uniform and contains wavelengths from each of the individual filtered light beams. This uniform output beam passes through suitable collimation apparatus before striking the surface of the solar cell being tested. Adjustable aperture apparatus located between the lamps and the input legs of the trifurcated fiber optic cable can be selectively adjusted to limit the amount of light entering each leg, thereby providing a means of "fine tuning" or precisely adjusting the spectral content of the output beam. Finally, an adjustable aperture apparatus may also be placed in the output beam to adjust the intensity of the output beam without changing the spectral content and distribution of the output beam.
Wear behavioral study of as cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy at constant load
NASA Astrophysics Data System (ADS)
Harlapur, M. D.; Sondur, D. G.; Akkimardi, V. G.; Mallapur, D. G.
2018-04-01
In the current study, the wear behavior of as-cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy has been investigated. Microstructure, SEM and EDS results confirm the presence of different intermetallics and their effects on the wear properties of the Al25Mg2Si2Cu4Ni alloy in the as-cast as well as the aged condition. The main alloying elements, Si, Cu, Mg and Ni, partly dissolve in the primary α-Al matrix and are to some extent present in the form of intermetallic phases. The SEM structure of the as-cast alloy shows blocks of Mg2Si randomly distributed in the aluminium matrix. Precipitates of Al2Cu in the form of Chinese script are also observed, and the `Q' phase (Al-Si-Cu-Mg) is distributed uniformly in the aluminium matrix. A few coarsened platelets of Ni are seen. In the case of the 7 hr homogenized samples, the blocks of Mg2Si become rounded at the corners, and the platelets of Ni are fragmented and distributed uniformly in the aluminium matrix. The results show improved volumetric wear resistance and a reduced coefficient of friction after the homogenizing heat treatment.
Collision Models for Particle Orbit Code on SSX
NASA Astrophysics Data System (ADS)
Fisher, M. W.; Dandurand, D.; Gray, T.; Brown, M. R.; Lukin, V. S.
2011-10-01
Coulomb collision models are being developed and incorporated into the Hamiltonian particle pushing code (PPC) for applications to the Swarthmore Spheromak eXperiment (SSX). A Monte Carlo model based on that of Takizuka and Abe [JCP 25, 205 (1977)] performs binary collisions between test particles and thermal plasma field particles randomly drawn from a stationary Maxwellian distribution. A field-based electrostatic fluctuation model scatters particles from a spatially uniform random distribution of positive and negative spherical potentials generated throughout the plasma volume. The number, radii, and amplitude of these potentials are chosen to mimic the correct particle diffusion statistics without the use of random particle draws or collision frequencies. An electromagnetic fluctuating field model will be presented, if available. These numerical collision models will be benchmarked against known analytical solutions, including beam diffusion rates and Spitzer resistivity, as well as each other. The resulting collisional particle orbit models will be used to simulate particle collection with electrostatic probes in the SSX wind tunnel, as well as particle confinement in typical SSX fields. This work has been supported by US DOE, NSF and ONR.
Single-mode SOA-based 1kHz-linewidth dual-wavelength random fiber laser.
Xu, Yanping; Zhang, Liang; Chen, Liang; Bao, Xiaoyi
2017-07-10
Narrow-linewidth multi-wavelength fiber lasers are of significant interest for fiber-optic sensors, spectroscopy, optical communications, and microwave generation. A novel narrow-linewidth dual-wavelength random fiber laser with single-mode operation, based on semiconductor optical amplifier (SOA) gain, is achieved in this work for the first time, to the best of our knowledge. A simplified theoretical model is established to characterize this kind of random fiber laser. The inhomogeneous gain in the SOA mitigates mode competition significantly and alleviates the laser instability that is frequently encountered in multi-wavelength fiber lasers with Erbium-doped fiber gain. The enhanced random distributed feedback from a 5 km non-uniform fiber provides coherent feedback, acting as a mode-selection element to ensure single-mode operation with a narrow linewidth of ~1 kHz. The laser noise is also comprehensively investigated, showing that the proposed random fiber laser exhibits suppressed intensity and frequency noise.
A fast ergodic algorithm for generating ensembles of equilateral random polygons
NASA Astrophysics Data System (ADS)
Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.
2009-03-01
Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons with its theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method and we show that the time needed to generate an equilateral random polygon of length n is linear in terms of n. These two properties make this algorithm a big improvement over the existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
Planar spatial correlations, anisotropy, and specific surface area of stationary random porous media
NASA Astrophysics Data System (ADS)
Berryman, James G.
1998-02-01
An earlier result of the author showed that an anisotropic spatial correlation function of a random porous medium could be used to compute the specific surface area when it is stationary as well as anisotropic by first performing a three-dimensional radial average and then taking the first derivative with respect to lag at the origin. This result generalized the earlier result for isotropic porous media of Debye et al. [J. Appl. Phys. 28, 679 (1957)]. The present article provides more detailed information about the use of spatial correlation functions for anisotropic porous media and in particular shows that, for stationary anisotropic media, the specific surface area can be related to the derivative of the two-dimensional radial average of the correlation function measured from cross sections taken through the anisotropic medium. The main concept is first illustrated using a simple pedagogical example for an anisotropic distribution of spherical voids. Then, a general derivation of formulas relating the derivative of the planar correlation functions to surface integrals is presented. When the surface normal is uniformly distributed (as is the case for any distribution of spherical voids), our formulas can be used to relate a specific surface area to easily measurable quantities from any single cross section. When the surface normal is not distributed uniformly (as would be the case for an oriented distribution of ellipsoidal voids), our results show how to obtain valid estimates of specific surface area by averaging measurements on three orthogonal cross sections. One important general observation for porous media is that the surface area from nearly flat cracks may be underestimated from measurements on orthogonal cross sections if any of the cross sections happen to lie in the plane of the cracks. This result is illustrated by taking the very small aspect ratio (penny-shaped crack) limit of an oblate spheroid, but holds for other types of flat surfaces as well.
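For context, the isotropic result of Debye et al. that this article generalizes can be stated compactly: if $S_2(r)$ is the two-point correlation function of the pore phase of an isotropic two-phase medium, the specific surface area $s$ (interface area per unit volume) is fixed by the slope at the origin:

```latex
\left.\frac{\mathrm{d}S_2(r)}{\mathrm{d}r}\right|_{r=0} = -\frac{s}{4},
\qquad\text{i.e.}\qquad
s = -4\,S_2'(0).
```

The article's contribution is the analogous statement for two-dimensional radial averages of correlation functions measured on cross sections through stationary anisotropic media.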
Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator
NASA Technical Reports Server (NTRS)
Bents, D. J.
1982-01-01
A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution and presents the alternate consolidation designs that result. They are compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is presented for comparing the requirements of the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.
Secure uniform random-number extraction via incoherent strategies
NASA Astrophysics Data System (ADS)
Hayashi, Masahito; Zhu, Huangjun
2018-01-01
To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by the third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of the measurement on the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.
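The relative entropy of coherence that appears here as the maximum secure extraction rate has a standard closed form (Baumgratz et al.): writing $\Delta(\rho)$ for the state obtained by deleting the off-diagonal elements of $\rho$ in the computational basis and $S$ for the von Neumann entropy,

```latex
C_{r}(\rho) \;=\; \min_{\delta \in \mathcal{I}} S(\rho \,\|\, \delta)
\;=\; S\bigl(\Delta(\rho)\bigr) - S(\rho),
```

where $\mathcal{I}$ is the set of incoherent (diagonal) states.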
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time-domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
Pseudorandom number generation using chaotic true orbits of the Bernoulli map
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro
We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
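The paper's generator computes exact orbits of the Bernoulli map over quadratic algebraic integers; the sketch below illustrates only the underlying map x → 2x mod 1, using Python's exact `Fraction` arithmetic so the orbit does not degrade the way floating point does. Rational seeds (used here for demonstration) give eventually periodic expansions; the paper's point is precisely to use quadratic irrationals, whose true orbits are aperiodic yet exactly computable.

```python
from fractions import Fraction

def bernoulli_bits(seed: Fraction, n: int):
    """Emit n bits from the Bernoulli (doubling) map x -> 2x mod 1.
    Each output bit is the integer part of 2x, i.e. the next binary digit
    of the seed's expansion."""
    x = seed
    bits = []
    for _ in range(n):
        x = 2 * x
        bit = int(x)   # 0 or 1 for x in [0, 2)
        x -= bit       # reduce mod 1, exactly
        bits.append(bit)
    return bits

# 1/3 = 0.010101..._2, so the orbit is exactly periodic with period 2.
bits = bernoulli_bits(Fraction(1, 3), 8)
```

With an ordinary float seed the same iteration collapses to 0 after about 53 steps, which is why exact arithmetic (and, in the paper, algebraic integers) matters.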
CDC6600 subroutine for normal random variables. [RVNORM (RMU, SIG)]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amos, D.E.
1977-04-01
A value y for a uniform variable on (0,1) is generated, and a table of 96 percentage points of the (0,1) normal distribution is interpolated for a value of the normal variable x(0,1) on 0.02 ≤ y ≤ 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = xσ + μ gives the X(μ,σ) variable.
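The subroutine's scheme is inverse-transform sampling: draw y uniform on (0,1), invert the standard normal CDF, then rescale. The sketch below reproduces the idea generically, inverting the CDF by bisection on `math.erf` rather than by the subroutine's table interpolation and Chebyshev tail approximation.

```python
import math
import random

def std_normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inv_normal(y: float, lo: float = -10.0, hi: float = 10.0,
               tol: float = 1e-10) -> float:
    """Invert the standard normal CDF by bisection (a generic stand-in for
    the table interpolation / rational tail approximation)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if std_normal_cdf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rvnorm(mu: float, sigma: float, rng: random.Random) -> float:
    y = rng.random()                   # uniform on (0, 1)
    return inv_normal(y) * sigma + mu  # X = x*sigma + mu
```

Bisection is far slower than the original's table-plus-approximation design, but it makes the X = xσ + μ construction transparent.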
The Newcomb-Benford law in its relation to some common distributions.
Formann, Anton K
2010-05-07
An often reported, but nevertheless persistently striking, observation, formalized as the Newcomb-Benford law (NBL), is that the frequencies with which the leading digits of numbers occur in a large variety of data are far from uniform. Most spectacular seems to be the fact that in many data the leading digit 1 occurs in nearly one third of all cases. Explanations for this uneven distribution of the leading digits have included scale- and base-invariance. Little attention, however, has been paid to the interrelation between the distribution of the significant digits and the distribution of the observed variable. It is shown here by simulation that long right-tailed distributions of a random variable are compatible with the NBL, and that for distributions of the ratio of two random variables the fit generally improves. Distributions not putting most mass on small values of the random variable (e.g. symmetric distributions) fail to fit. Hence, the validity of the NBL requires the predominance of small values and, when thinking of real-world data, a majority of small entities. Analyses of data on stock prices, the areas and numbers of inhabitants of countries, and the starting page numbers of papers from a bibliography sustain this conclusion. In all, these findings may help to understand the mechanisms behind the NBL and the conditions needed for its validity. That this law is not only of scientific interest per se, but also has substantial implications, can be seen from those fields where it has been suggested to be put into practice. These fields range from the detection of irregularities in data (e.g. economic fraud) to optimizing the architecture of computers regarding number representation, storage, and round-off errors.
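The simulation finding quoted above (long right-tailed distributions are compatible with the NBL) is easy to reproduce; the sketch below compares the leading-digit frequencies of lognormal samples, a distribution chosen here for illustration, against the Benford prediction log10(1 + 1/d).

```python
import math
import random

def leading_digit(x: float) -> int:
    """First significant digit of a positive number, via scientific notation."""
    return int(f"{abs(x):e}"[0])

def digit_freqs(samples):
    counts = [0] * 10
    for x in samples:
        counts[leading_digit(x)] += 1
    return [c / len(samples) for c in counts[1:]]  # frequencies of digits 1..9

rng = random.Random(0)
# A lognormal with a wide spread is long right-tailed and follows the NBL closely.
data = [rng.lognormvariate(0.0, 3.0) for _ in range(100_000)]
freqs = digit_freqs(data)
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
```

Repeating this with a symmetric distribution (e.g. a normal centered well away from zero) shows the failure to fit that the abstract describes.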
Spatial Distribution of Iron Within the Normal Human Liver Using Dual-Source Dual-Energy CT Imaging.
Abadia, Andres F; Grant, Katharine L; Carey, Kathleen E; Bolch, Wesley E; Morin, Richard L
2017-11-01
Explore the potential of dual-source dual-energy (DSDE) computed tomography (CT) to retrospectively analyze the uniformity of iron distribution and establish iron concentration ranges and distribution patterns found in healthy livers. Ten mixtures consisting of an iron nitrate solution and deionized water were prepared in test tubes and scanned using a DSDE 128-slice CT system. Iron images were derived from a 3-material decomposition algorithm (optimized for the quantification of iron). A conversion factor (mg Fe/mL per Hounsfield unit) was calculated from this phantom study as the quotient of known tube concentrations and their corresponding CT values. Retrospective analysis was performed of patients who had undergone DSDE imaging for renal stones. Thirty-seven patients with normal liver function were randomly selected (mean age, 52.5 years). The examinations were processed for iron concentration. Multiple regions of interest were analyzed, and iron concentration (mg Fe/mL) and distribution was reported. The mean conversion factor obtained from the phantom study was 0.15 mg Fe/mL per Hounsfield unit. Whole-liver mean iron concentrations yielded a range of 0.0 to 2.91 mg Fe/mL, with 94.6% (35/37) of the patients exhibiting mean concentrations below 1.0 mg Fe/mL. The most important finding was that iron concentration was not uniform and patients exhibited regionally high concentrations (36/37). These regions of higher concentration were observed to be dominant in the middle-to-upper part of the liver (75%), medially (72.2%), and anteriorly (83.3%). Dual-source dual-energy CT can be used to assess the uniformity of iron distribution in healthy subjects. Applying similar techniques to unhealthy livers, future research may focus on the impact of hepatic iron content and distribution for noninvasive assessment in diseased subjects.
Understanding spatial connectivity of individuals with non-uniform population density.
Wang, Pu; González, Marta C
2009-08-28
We construct a two-dimensional geometric graph connecting individuals placed in space within a given contact distance. The individuals are distributed using a measured country's density of population. We observe that while large clusters (group of individuals connected) emerge within some regions, they are trapped in detached urban areas owing to the low population density of the regions bordering them. To understand the emergence of a giant cluster that connects the entire population, we compare the empirical geometric graph with the one generated by placing the same number of individuals randomly in space. We find that, for small contact distances, the empirical distribution of population dominates the growth of connected components, but no critical percolation transition is observed in contrast to the graph generated by a random distribution of population. Our results show that contact distances from real-world situations as for WIFI and Bluetooth connections drop in a zone where a fully connected cluster is not observed, hinting that human mobility must play a crucial role in contact-based diseases and wireless viruses' large-scale spreading.
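A minimal version of this construction, using the uniformly random placement that the paper takes as its comparison baseline, can be sketched with a union-find over all pairs within the contact distance; the point count and distances below are arbitrary.

```python
import random
from collections import Counter

def largest_cluster(points, contact_d):
    """Size of the largest connected component of the geometric graph that
    links any two points within Euclidean distance contact_d."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    d2 = contact_d * contact_d
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= d2:
                parent[find(i)] = find(j)  # union the two clusters
    return max(Counter(find(i) for i in range(n)).values())

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(500)]
small = largest_cluster(pts, 0.02)  # below the percolation regime: fragmented
big = largest_cluster(pts, 0.2)     # well above it: a giant cluster emerges
```

Replacing the uniform placement with an empirical population density reproduces the paper's contrast: clusters then grow inside dense urban areas but stay trapped by sparsely populated borders.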
Michiels, Bart; Heyvaert, Mieke; Onghena, Patrick
2018-04-01
The conditional power (CP) of the randomization test (RT) was investigated in a simulation study in which three different single-case effect size (ES) measures were used as the test statistics: the mean difference (MD), the percentage of nonoverlapping data (PND), and the nonoverlap of all pairs (NAP). Furthermore, we studied the effect of the experimental design on the RT's CP for three different single-case designs with rapid treatment alternation: the completely randomized design (CRD), the randomized block design (RBD), and the restricted randomized alternation design (RRAD). As a third goal, we evaluated the CP of the RT for three types of simulated data: data generated from a standard normal distribution, data generated from a uniform distribution, and data generated from a first-order autoregressive Gaussian process. The results showed that the MD and NAP perform very similarly in terms of CP, whereas the PND performs substantially worse. Furthermore, the RRAD yielded marginally higher power in the RT, followed by the CRD and then the RBD. Finally, the power of the RT was almost unaffected by the type of the simulated data. On the basis of the results of the simulation study, we recommend at least 20 measurement occasions for single-case designs with a randomized treatment order that are to be evaluated with an RT using a 5% significance level. Furthermore, we do not recommend use of the PND, because of its low power in the RT.
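The core of the randomization test with the mean-difference statistic can be sketched by exhaustively enumerating the assignments of a completely randomized design (CRD); the scores below are invented, strongly separated data.

```python
from itertools import combinations

def randomization_test_md(scores, treatment_idx):
    """Exact RT p-value for the mean-difference statistic under a CRD:
    every way of labeling k of the n occasions as treatment is equally
    likely under the null hypothesis of no treatment effect."""
    n, k = len(scores), len(treatment_idx)

    def md(idx):
        t = [scores[i] for i in idx]
        c = [scores[i] for i in range(n) if i not in idx]
        return sum(t) / len(t) - sum(c) / len(c)

    observed = md(set(treatment_idx))
    perms = [md(set(idx)) for idx in combinations(range(n), k)]
    return sum(1 for v in perms if v >= observed) / len(perms)

# 8 measurement occasions, 4 under treatment; C(8,4) = 70 assignments.
scores = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
p = randomization_test_md(scores, treatment_idx=[4, 5, 6, 7])  # = 1/70
```

With only 8 occasions the smallest attainable p-value is 1/70, which is why the abstract recommends at least 20 occasions when testing at the 5% level: the randomization distribution needs enough assignments for small p-values to exist at all.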
Statistical characteristics of dynamics for population migration driven by the economic interests
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Zhao, Ning; Hao, Rui
2016-06-01
Population migration typically occurs under some constraints, which can deeply affect the structure of a society and some other related aspects. Therefore, it is critical to investigate the characteristics of population migration. Data from the China Statistical Yearbook indicate that the regional gross domestic product per capita relates to the population size via a linear or power-law relation. In addition, the distribution of population migration sizes or relative migration strength introduced here is dominated by a shifted power-law relation. To reveal the mechanism that creates the aforementioned distributions, a dynamic model is proposed based on the population migration rule that migration is facilitated by higher financial gains and abated by fewer employment opportunities at the destination, considering the migration cost as a function of the migration distance. The calculated results indicate that the distribution of the relative migration strength is governed by a shifted power-law relation, and that the distribution of migration distances is dominated by a truncated power-law relation. These results suggest the use of a power-law to fit a distribution may be not always suitable. Additionally, from the modeling framework, one can infer that it is the randomness and determinacy that jointly create the scaling characteristics of the distributions. The calculation also demonstrates that the network formed by active nodes, representing the immigration and emigration regions, usually evolves from an ordered state with a non-uniform structure to a disordered state with a uniform structure, which is evidenced by the increasing structural entropy.
The one-dimensional asymmetric persistent random walk
NASA Astrophysics Data System (ADS)
Rossetto, Vincent
2018-04-01
Persistent random walks are intermediate transport processes between uniform rectilinear motion and Brownian motion. They are formed by successive steps of random finite lengths and directions travelled at a fixed speed. The isotropic and symmetric 1D persistent random walk is governed by the telegrapher's equation, also called the hyperbolic heat conduction equation. These equations were designed to resolve the paradox of the infinite propagation speed in the heat and diffusion equations. The finiteness of both the speed and the correlation length leads to several classes of random walks: persistent random walks in one dimension can display anomalies that cannot arise for Brownian motion, such as anisotropy and asymmetries. In this work we focus on the case where the mean free path is anisotropic, the only anomaly leading to physics that differs from the telegrapher's case. We derive exact expressions for its Green's function, its scattering statistics, and the distribution of first-passage times at the origin. The phenomenology of the latter shows a transition for quantities such as the escape probability and the residence time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olson, Gordon Lee
2016-12-06
Here, gray and multigroup radiation is transported through 3D media consisting of spheres randomly placed in a uniform background. Comparisons are made between using constant-radius spheres and three different distributions of sphere radii. Because of the computational cost of 3D calculations, only the lowest angle order, n=1, is tested. If the mean chord length is held constant, using different radii distributions makes little difference. This is true for both gray and multigroup solutions. 3D transport solutions are compared to 2D and 1D solutions with the same mean chord lengths. 2D disk and 3D sphere media give solutions that are nearly identical, while 1D slab solutions are fundamentally different.
Electric-field-induced plasmon in AA-stacked bilayer graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuang, Y.C., E-mail: yingchih.chuang@gmail.com; Wu, J.Y., E-mail: yarst5@gmail.com; Lin, M.F., E-mail: mflin@mail.ncku.edu.tw
2013-12-15
The collective excitations in AA-stacked bilayer graphene under a perpendicular electric field are investigated analytically within the tight-binding model and the random-phase approximation. Such a field destroys the uniform probability distribution over the four sublattices. This drives a symmetry breaking between the intralayer and interlayer polarization intensities from the intrapair band excitations. A field-induced acoustic plasmon thus emerges in addition to the strongly field-tunable intrinsic acoustic and optical plasmons. At long wavelengths, the three modes show different dispersions and field dependence. The physical mechanism of the electrically inducible and tunable mode can be expected to also be present in other AA-stacked few-layer graphenes. -- Highlights: •The analytical derivations are performed with the tight-binding model. •An electric field drives the non-uniformity of the charge distribution. •A symmetry breaking between the intralayer and interlayer polarizations is illustrated. •An extra plasmon emerges besides two intrinsic modes in AA-stacked bilayer graphene. •The mechanism of a field-induced mode is present in AA-stacked few-layer graphenes.
Social influence in small-world networks
NASA Astrophysics Data System (ADS)
Sun, Kai; Mao, Xiao-Ming; Ouyang, Qi
2002-12-01
We report on our numerical studies of the Axelrod model for social influence in small-world networks. Our simulation results show that the topology of the network has a crucial effect on the evolution of cultures. As the randomness of the network increases, the system undergoes a transition from a highly fragmented phase to a uniform phase. We also find that the power-law distribution at the transition point, reported by Castellano et al., is not a critical phenomenon; it exists not only at the onset of the transition but also for almost any control parameters. All these power-law distributions are stable against perturbations. A mean-field theory is developed to explain these phenomena.
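Axelrod's social-influence dynamics can be sketched in a few lines. This toy version runs on a ring rather than a small-world network, and the parameters (number of agents, features F, traits q) are illustrative only:

```python
import random

def axelrod_step(culture, neighbors, rng):
    """One interaction of Axelrod's model.  culture maps each node to a
    list of F trait values; an agent copies one differing trait from a
    neighbor with probability equal to their cultural overlap."""
    i = rng.choice(list(culture))
    j = rng.choice(neighbors[i])
    a, b = culture[i], culture[j]
    diff = [k for k in range(len(a)) if a[k] != b[k]]
    overlap = 1 - len(diff) / len(a)
    if diff and rng.random() < overlap:
        k = rng.choice(diff)
        a[k] = b[k]  # node i adopts one of node j's differing traits

# toy run: ring of 10 agents, F=3 features, q=2 traits per feature
rng = random.Random(0)
nodes = range(10)
neighbors = {i: [(i - 1) % 10, (i + 1) % 10] for i in nodes}
culture = {i: [rng.randrange(2) for _ in range(3)] for i in nodes}
for _ in range(2000):
    axelrod_step(culture, neighbors, rng)
print(culture)
```

Rewiring a fraction of the ring edges at random would turn this into the small-world setting studied in the abstract.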
NASA Astrophysics Data System (ADS)
Ren, Wei; Geng, Huiyuan; Zhang, Zihao; Zhang, Lixia
2017-06-01
It is generally believed that filling atoms are randomly and uniformly distributed in caged crystals, such as skutterudite compounds. Here, we report the first-principles and experimental discovery of a multiscale filling-fraction fluctuation in the R Fe4Sb12 system. La0.8Ti0.1Ga0.1Fe4Sb12 spontaneously separates into La-rich and La-poor skutterudite phases, leading to multiscale strain field fluctuations. As a result, glasslike ultralow lattice thermal conductivity approaching the theoretical minimum is achieved, mainly due to strain field scattering of high-energy phonons. These findings reveal that an uneven distribution of filling atoms is effective in further reducing the lattice thermal conductivity of caged crystals.
Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto
2005-08-01
Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding mu steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^{(mu,d)}(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memory-less deterministic tourist walk in Euclidean space, this distribution is S_∞^{(1,d)}(t,p) = [Γ(1 + I_d^{-1})(t + I_d^{-1}) / Γ(t + p + I_d^{-1})] δ_{p,2}, where t = 0, 1, 2, ..., ∞, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model which, even for mu = 0, presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) ∝ p^{-1}]: S_N^{(0,rm)}(t,p) = Γ(N) / {Γ[N+1-(t+p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
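The mu = 1 tourist walk described above can be simulated directly. Under the reading that memory mu forbids the last mu visited points (so mu = 1 forbids only the current one), every trajectory ends in a 2-cycle between mutually nearest neighbors, matching the δ_{p,2} factor. A minimal sketch with illustrative names, for uniform points in the unit square (d = 2):

```python
import numpy as np

def tourist_walk(points, start, mu=1):
    """Deterministic tourist walk with memory mu: jump to the nearest
    point not visited in the preceding mu steps.  Returns (t, p): the
    transient length and the attractor period."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)   # a point is never its own neighbor
    path = [start]
    seen = {}                      # walk state -> step of first visit
    step = 0
    while True:
        state = tuple(path[-max(mu, 1):])  # the state determines the future
        if state in seen:
            return seen[state], step - seen[state]
        seen[state] = step
        forbidden = set(path[-mu:]) if mu > 0 else set()
        order = np.argsort(d2[path[-1]])
        nxt = next(int(k) for k in order if int(k) not in forbidden)
        path.append(nxt)
        step += 1

rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))   # N = 200 points, d = 2
t, p = tourist_walk(pts, start=0, mu=1)
print(t, p)
```

For mu = 1 the attractor period is always p = 2: each hop is to the nearest other point, distances cannot increase, so the walk is trapped by the first mutually-nearest pair it reaches.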
Random walks of colloidal probes in viscoelastic materials
NASA Astrophysics Data System (ADS)
Khan, Manas; Mason, Thomas G.
2014-04-01
To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
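The log-uniform time-step idea can be sketched for a plain Wiener walk (the paper's MVM dynamics are not reproduced here; names and parameters are illustrative). It works because Wiener increments over disjoint intervals are independent Gaussians with variance equal to the interval length, so the trajectory can be generated directly at arbitrarily spaced times:

```python
import numpy as np

def log_time_brownian(t_min, t_max, n_times, d=3, rng=None):
    """Brownian trajectory sampled at times uniform in log(t), so that a
    few hundred points span many decades of lag time."""
    rng = np.random.default_rng(rng)
    times = np.sort(np.exp(rng.uniform(np.log(t_min), np.log(t_max), n_times)))
    dts = np.diff(np.concatenate(([0.0], times)))          # interval lengths
    steps = rng.standard_normal((n_times, d)) * np.sqrt(dts)[:, None]
    return times, np.cumsum(steps, axis=0)                 # positions at `times`

times, traj = log_time_brownian(1e-6, 1e6, 500, rng=0)
print(times[0], times[-1], traj.shape)
```

Here 500 samples cover twelve decades in time; a fixed-step simulation covering the same range would need on the order of 10^12 steps.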
Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej
2015-01-01
Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
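The least-significant-bits concatenation method can be sketched as follows. The "sensor readings" below are simulated stand-ins (random low bits on a constant offset), not the paper's hardware, and the function names are illustrative:

```python
import math
import random
from collections import Counter

def harvest_lsb_bytes(readings, bits=2):
    """Concatenate the `bits` least-significant bits of each raw reading
    into a byte stream; the LSBs are where measurement noise lives."""
    stream, acc, n = [], 0, 0
    for r in readings:
        acc = (acc << bits) | (r & ((1 << bits) - 1))
        n += bits
        if n >= 8:
            n -= 8
            stream.append((acc >> n) & 0xFF)  # emit one full byte
    return bytes(stream)

def shannon_entropy_per_byte(data):
    """Empirical Shannon entropy in bits per byte (max 8)."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

random.seed(0)
# hypothetical raw ADC values: constant offset plus 4 noisy low bits
readings = [500 + random.randrange(16) for _ in range(4000)]
pool = harvest_lsb_bytes(readings, bits=2)
print(len(pool), round(shannon_entropy_per_byte(pool), 2))
```

With genuinely noisy LSBs the empirical entropy approaches 8 bits per byte; the ~7.9 figure reported in the abstract is consistent with the small-sample bias of this estimator.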
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
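One plausible form of the localized random sampling mask can be sketched as follows. The Gaussian decay of the inclusion probability with distance is an assumption of this sketch; the abstract only states that the probability depends on the distance from the initially selected pixel:

```python
import numpy as np

def localized_sample_mask(shape, n_centers, radius, rng=None):
    """Binary measurement mask for localized random sampling: pick a
    random center pixel, then include nearby pixels with a probability
    that decays (here, as a Gaussian) with distance from the center."""
    rng = np.random.default_rng(rng)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_centers):
        cy, cx = rng.integers(h), rng.integers(w)
        dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
        prob = np.exp(-dist2 / (2 * radius ** 2))  # assumed decay profile
        mask |= rng.random(shape) < prob
    return mask

mask = localized_sample_mask((64, 64), n_centers=20, radius=3.0, rng=0)
print(mask.sum(), mask.shape)
```

Each row of a CS measurement matrix would then sum the image over one such localized set, in contrast to a uniformly-random mask with the same total number of samples.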
Monte Carlo Sampling in Fractal Landscapes
NASA Astrophysics Data System (ADS)
Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.
2013-05-01
We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
Robust PRNG based on homogeneously distributed chaotic dynamics
NASA Astrophysics Data System (ADS)
Garasym, Oleg; Lozi, René; Taralova, Ina
2016-02-01
This paper is devoted to the design of new chaotic Pseudo Random Number Generators (CPRNG). Exploring several topologies of networks of 1-D coupled chaotic maps, we focus first on two-dimensional networks. Two topologically coupled maps are studied: TTL rc non-alternate and TTL SC alternate. The primary idea of the novel maps is based on an original coupling of the tent and logistic maps to achieve excellent random properties and a homogeneous (uniform) density in the phase plane, thus guaranteeing maximum security when used for chaos-based cryptography. To this aim, two new nonlinear CPRNGs, MTTL 2 sc and NTTL 2, are proposed. The maps successfully passed numerous statistical, graphical and numerical tests, due to the proposed ring coupling and injection mechanisms.
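The coupling idea can be illustrated generically. This is not the paper's exact TTL maps or its injection mechanism, only a sketch of ring-coupling the tent and logistic maps so that each new state is a convex mix of its own map and the other's:

```python
def tent(x):
    """Tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def logistic(x):
    """Logistic map on [0, 1] at full nonlinearity (r = 4)."""
    return 4 * x * (1 - x)

def coupled_tl_sequence(x0, y0, eps, n):
    """Ring-coupled tent/logistic iteration (illustrative, not the
    paper's TTL construction).  Convex mixing keeps states in [0, 1]."""
    x, y, out = x0, y0, []
    for _ in range(n):
        x, y = ((1 - eps) * tent(x) + eps * logistic(y),
                (1 - eps) * logistic(y) + eps * tent(x))
        out.append(x)
    return out

seq = coupled_tl_sequence(0.123, 0.567, eps=0.05, n=10000)
print(min(seq), max(seq))
```

A real CPRNG built this way would still need the statistical batteries mentioned in the abstract (uniformity of the invariant density, autocorrelation, NIST-style tests) before any cryptographic use.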
Scalar mixtures in porous media
NASA Astrophysics Data System (ADS)
Kree, Mihkel; Villermaux, Emmanuel
2017-10-01
Using a technique allowing for in situ measurements of concentration fields, we address the evolution of scalar mixtures flowing within a porous medium made of a three-dimensional random stack of solid spheres. Two distinct fluorescent dyes are injected from separate sources. Their evolution as they disperse and mix through the medium is directly observed and quantified, which is made possible by matching the refractive indices of the spheres and the flowing interstitial liquid. We decipher the nature of the interaction rule between the scalar sources, explaining the phenomenon that alters the concentration distribution of the overall mixture as it decays toward uniformity. Any residual correlation of the initially merged sources is progressively hidden, leading to an effectively fully random interaction rule between the two distinct subfields.
Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks
Ranganathan, Radha; Kannan, Kathiravan
2015-01-01
IEEE 802.11 is the de facto standard for medium access over wireless ad hoc networks. The collision avoidance mechanism (i.e., random binary exponential backoff, BEB) of IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair, especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval in [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes. But this random backoff interval can change the optimal order and frequency of channel access among competing nodes, which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules medium access in a fair and effective manner. This algorithm enhances IEEE 802.11 DCF with an additional level of contention resolution that prioritizes the contending nodes according to their queue length and waiting time. Each node computes its unique backoff interval using fuzzy logic based on the input parameters collected from contending nodes through overhearing. We evaluate our algorithm against the IEEE 802.11 and GDCF (gentle distributed coordination function) protocols using the ns-2.35 simulator and show that our algorithm achieves good performance. PMID:25879066
Impact of uniform electrode current distribution on ETF
NASA Technical Reports Server (NTRS)
Bents, D. J.
1982-01-01
The design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution are examined, and the alternate consolidation designs that occur are presented and compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is given for comparing the requirements of the different designs and the savings that result from uniform current distribution. Performance and cost impacts on the combined-cycle plant are discussed.
Crack surface roughness in three-dimensional random fuse networks
NASA Astrophysics Data System (ADS)
Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan
2006-08-01
Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ∼ L^0.5 and is consistent with the scaling of the localization length ξ ∼ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
Huang, Liangliang; Zhu, Lei; Shi, Xiaowei; Xia, Bing; Liu, Zhongyang; Zhu, Shu; Yang, Yafeng; Ma, Teng; Cheng, Pengzhen; Luo, Kai; Huang, Jinghui; Luo, Zhuojing
2018-03-01
Scaffolds with inner fillers that convey directional guidance cues represent promising candidates for nerve repair. However, incorrect positioning or non-uniform distribution of intraluminal fillers might result in regeneration failure. In addition, proper porosity (to enhance nutrient and oxygen exchange but prevent fibroblast infiltration) and mechanical properties (to ensure fixation and to protect regenerating axons from compression) of the outer sheath are also highly important for constructing advanced nerve scaffolds. In this study, we constructed a compound scaffold using a stage-wise strategy, including directionally freezing orientated collagen-chitosan (O-CCH) filler, electrospinning poly(ε-caprolactone) (PCL) sheaths and assembling O-CCH/PCL scaffolds. Based on scanning electron microscopy (SEM) and mechanical tests, a blend of collagen/chitosan (1:1) was selected for filler fabrication, and a wall thickness of 400 μm was selected for PCL sheath production. SEM and three-dimensional (3D) reconstruction further revealed that the O-CCH filler exhibited a uniform, longitudinally oriented microstructure (over 85% of pores were 20-50 μm in diameter). The electrospun PCL porous sheath with pore sizes of 6.5 ± 3.3 μm prevented fibroblast invasion. The PCL sheath exhibited comparable mechanical properties to commercially available nerve conduits, and the O-CCH filler showed a physiologically relevant substrate stiffness of 2.0 ± 0.4 kPa. The differential degradation time of the filler and sheath allows the O-CCH/PCL scaffold to protect regenerating axons from compression stress while providing enough space for regenerating nerves. In vitro and in vivo studies indicated that the O-CCH/PCL scaffolds could promote axonal regeneration and Schwann cell migration. More importantly, functional results indicated that the CCH/PCL compound scaffold induced comparable functional recovery to that of the autograft group at the end of the study. 
Our findings demonstrate that the O-CCH/PCL scaffold, with its uniform longitudinal guidance filler and porous sheath, exhibits favorable properties for clinical use and promotes nerve regeneration and functional recovery. The O-CCH/PCL scaffold provides a promising new path for developing an optimal therapeutic alternative for peripheral nerve reconstruction. Scaffolds with inner fillers displaying directional guidance cues represent a promising candidate for nerve repair. However, further clinical translation must address the problem of non-uniform distribution of inner fillers, the porosity and mechanical properties of the outer sheath, and a morphological design that facilitates operation. In this study, a stage-wise fabrication strategy was used, which made it possible to develop an O-CCH/PCL compound scaffold with a uniform, longitudinally oriented inner filler and a porous outer sheath. The uniform distribution of the pores in the O-CCH/PCL scaffold resolves the problem of non-uniform distribution of inner fillers, which impedes the clinical translation of scaffolds with longitudinal microstructured fillers, especially aligned-fiber-based scaffolds. In vitro and in vivo studies indicated that the O-CCH/PCL scaffolds could provide topographical cues for axonal regeneration and SC migration, which were not found for random scaffolds (whose random microstructure resembles that of sponge-based scaffolds). The electrospun porous PCL sheath of the O-CCH/PCL scaffold not only prevented fibroblast infiltration but also satisfied the mechanical requirements for clinical use, paving the way for clinical translation. The differential degradation times of the O-CCH filler and the PCL sheath enable the O-CCH/PCL scaffold to provide long-term protection of regenerating axons from compression stress while leaving enough space for the regenerating nerve.
These findings highlight the possibility of developing an optimal therapeutic alternative for nerve defects using the O-CCH/PCL scaffold. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Symmetries and synchronization in multilayer random networks
NASA Astrophysics Data System (ADS)
Saa, Alberto
2018-04-01
In the light of the recently proposed scenario of asymmetry-induced synchronization (AISync), in which dynamical uniformity and consensus in a distributed system would demand certain asymmetries in the underlying network, we investigate here the influence of some regularities in the interlayer connection patterns on the synchronization properties of multilayer random networks. More specifically, by considering a Stuart-Landau model of complex oscillators with random frequencies, we report for multilayer networks a dynamical behavior that could be also classified as a manifestation of AISync. We show, namely, that the presence of certain symmetries in the interlayer connection pattern tends to diminish the synchronization capability of the whole network or, in other words, asymmetries in the interlayer connections would enhance synchronization in such structured networks. Our results might help the understanding not only of the AISync mechanism itself but also its possible role in the determination of the interlayer connection pattern of multilayer and other structured networks with optimal synchronization properties.
Generalized Entanglement Entropies of Quantum Designs.
Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun
2018-03-30
The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.
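The Rényi-2 entanglement entropy that the Letter averages over designs can be checked numerically for a single (approximately Haar-)random pure state. This is a generic illustration, not the paper's calculation; dimensions and the seed are arbitrary:

```python
import numpy as np

def renyi2_entanglement(psi, dim_a, dim_b):
    """Rényi-2 entanglement entropy -log Tr(rho_A^2) of a bipartite pure
    state, from the singular values of its coefficient matrix."""
    s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    p = s ** 2                      # Schmidt probabilities, sum to 1
    return -np.log(np.sum(p ** 2))

rng = np.random.default_rng(0)
d = 8  # two subsystems of three qubits each
psi = rng.standard_normal(d * d) + 1j * rng.standard_normal(d * d)
psi /= np.linalg.norm(psi)          # normalized ~Haar-random pure state
s2 = renyi2_entanglement(psi, d, d)
print(s2, np.log(d))                # near-maximal, bounded by log(8)
```

A typical Haar-random state lands close to, but below, the maximum log d, in line with the "almost maximal" average the Letter proves for designs.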
Time-dependent Hartree-Fock approach to nuclear "pasta" at finite temperature
NASA Astrophysics Data System (ADS)
Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.
2013-05-01
We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell density distribution as a measure to distinguish pasta matter from uniform matter.
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
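The "power of two choices" scheme mentioned above can be sketched in a few lines (parameters are illustrative): placing each item on the less loaded of two uniformly sampled nodes flattens the load distribution dramatically compared with a single random choice.

```python
import random

def assign(n_items, n_nodes, choices, rng):
    """Place items on nodes; with choices=2 each item goes to the less
    loaded of two uniformly sampled nodes (power of two choices)."""
    load = [0] * n_nodes
    for _ in range(n_items):
        candidates = [rng.randrange(n_nodes) for _ in range(choices)]
        target = min(candidates, key=lambda i: load[i])
        load[target] += 1
    return load

one = assign(100000, 1000, choices=1, rng=random.Random(0))
two = assign(100000, 1000, choices=2, rng=random.Random(0))
print(max(one), max(two))  # two choices shrinks the maximum load
```

The classic result is that the maximum load drops from roughly m/n + Θ(√(m log n / n)) for one choice to m/n + O(log log n) for two, which is why the technique is attractive for DHT load balancing.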
A Velocity Distribution Model for Steady State Heat Transfer
NASA Technical Reports Server (NTRS)
Hall, Eric B.
1996-01-01
Consider a box that is filled with an ideal gas and that is aligned along Cartesian coordinates (x, y, z), having unit length in the 'y' direction and unspecified length in the 'x' and 'z' directions. Heat is applied uniformly over the 'hot' end of the box (y = 1) and is removed uniformly over the 'cold' end (y = 0) at a constant rate such that the ends of the box are maintained at temperatures T(sub 0) at y = 0 and T(sub 1) at y = 1. Let U, V, and W denote the respective velocity components of a molecule inside the box selected at some random time and at some location (x, y, z). If T(sub 0) = T(sub 1), then U, V, and W are mutually independent and Gaussian, each with mean zero and variance RT(sub 0), where R is the gas constant. When T(sub 0) does not equal T(sub 1), the velocity components are not independent and are not Gaussian. Our objective is to characterize the joint distribution of the velocity components U, V, and W as a function of y, and, in particular, to characterize the distribution of V given y. It is hoped that this research will lead to an increased physical understanding of the nature of turbulence.
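The equilibrium case T(sub 0) = T(sub 1) is straightforward to sample numerically: each component is an independent zero-mean Gaussian with variance R·T(sub 0). The values of R and T(sub 0) below are illustrative (roughly air at room temperature), not taken from the abstract:

```python
import numpy as np

# Equilibrium case T0 == T1: each velocity component is Gaussian with
# mean zero and variance R*T0 (R, T0 values here are illustrative).
R, T0 = 287.0, 300.0   # specific gas constant J/(kg K) and temperature K
rng = np.random.default_rng(0)
u, v, w = rng.normal(0.0, np.sqrt(R * T0), size=(3, 200000))
print(round(v.std(), 1), round(np.sqrt(R * T0), 1))  # sample vs theory
```

The empirical standard deviation of each component converges to √(R·T0); the non-equilibrium case studied in the abstract is exactly where this simple independent-Gaussian recipe breaks down.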
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safronov, V.; Feigin, L.A.; Budovskaya, L.D.
1994-12-31
Langmuir-Blodgett films of amphiphilic fluorinated copolymers were fabricated and studied by X-ray diffraction. Although these films show poor interlayer periodicity, they possess a uniform thickness even in the case of very thin films of one bilayer (22 Å). This feature was used to obtain complex LB structures (superlattices) with alternation of copolymer and fatty acid bilayers. X-ray diffraction data proved the regular periodic organization of these structures and allowed calculation of the electron density distribution across the superlattices.
On Tree-Based Phylogenetic Networks.
Zhang, Louxin
2016-07-01
A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. We present a simple necessary and sufficient condition for tree-based networks and prove that, for any number of taxa, there exists a universal tree-based network that contains as its base every phylogenetic tree on the same set of taxa. This answers two problems recently posed by Francis and Steel. A byproduct is a computer program for generating random binary phylogenetic networks under the uniform distribution model.
2014-06-30
…f(b_1, …, b_m) ≤ f_m(b′) + Σ_{i=1}^{m} 1[b_i ≠ b′_i] · 1[b_i ≠ b_j for j < i]. 4.8 (Travelling salesman problem). Let X_1, …, X_n be i.i.d. points that are uniformly distributed in the unit square [0, 1]^2. We think of X_i as the location of city i. The goal of the travelling salesman problem is to find… salesman problem, … • Probability in Banach spaces: probabilistic limit theorems for Banach-valued random variables, empirical processes, local…
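The travelling salesman fragment above can be made concrete with a small simulation: i.i.d. uniform cities in [0, 1]^2 and a greedy nearest-neighbour tour as a cheap upper bound on the optimal tour length (point count and seed are arbitrary choices for illustration):

```python
import math
import random

def nn_tour_length(points):
    """Greedy nearest-neighbour tour through all points, returning to
    the start; an easy upper bound on the optimal TSP tour length."""
    unvisited = set(range(1, len(points)))
    cur, total = 0, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[cur], points[j]))
        total += math.dist(points[cur], points[nxt])
        unvisited.remove(nxt)
        cur = nxt
    return total + math.dist(points[cur], points[0])  # close the tour

rng = random.Random(0)
cities = [(rng.random(), rng.random()) for _ in range(200)]
tour_len = nn_tour_length(cities)
```

For n uniform points the optimal tour grows like a constant times sqrt(n) (the Beardwood-Halton-Hammersley regime the lecture-note excerpt is heading toward), and the greedy tour tracks that scaling within a modest factor.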
Rapid learning of visual ensembles.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-02-01
We recently demonstrated that observers are capable of encoding not only summary statistics, such as mean and variance of stimulus ensembles, but also the shape of the ensembles. Here, for the first time, we show the learning dynamics of this process, investigate the possible priors for the distribution shape, and demonstrate that observers are able to learn more complex distributions, such as bimodal ones. We used speeding and slowing of response times between trials (intertrial priming) in visual search for an oddly oriented line to assess internal models of distractor distributions. Experiment 1 demonstrates that two repetitions are sufficient for enabling learning of the shape of uniform distractor distributions. In Experiment 2, we compared Gaussian and uniform distractor distributions, finding that following only two repetitions Gaussian distributions are represented differently than uniform ones. Experiment 3 further showed that when distractor distributions are bimodal (with a 30° distance between two uniform intervals), observers initially treat them as uniform, and only with further repetitions do they begin to treat the distributions as bimodal. In sum, observers do not have strong initial priors for distribution shapes and quickly learn simple ones but have the ability to adjust their representations to more complex feature distributions as information accumulates with further repetitions of the same distractor distribution.
Uniform irradiation of irregularly shaped cavities for photodynamic therapy.
Rem, A I; van Gemert, M J; van der Meulen, F W; Gijsbers, G H; Beek, J F
1997-03-01
It is difficult to achieve a uniform light distribution in irregularly shaped cavities. We have conducted a study on the use of hollow 'integrating' moulds for more uniform light delivery of photodynamic therapy in irregularly shaped cavities such as the oral cavity. Simple geometries such as a cubical box, a sphere, a cylinder and a 'bottle-neck' geometry have been investigated experimentally and the results have been compared with computed light distributions obtained using the 'radiosity method'. A high reflection coefficient of the mould and the best uniform direct irradiance possible on the inside of the mould were found to be important determinants for achieving a uniform light distribution.
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is a Laplacian distribution whose mean, lambda, is computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors, which are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplication with a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
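The step of producing Laplacian-distributed values from a uniform random number generator can be done by inverse-CDF sampling; a sketch in which lambda is treated as the Laplacian scale parameter and the subsequent DCT-weighting stage of MVQ is omitted:

```python
import math
import random

def laplace_from_uniform(rng, scale):
    """Inverse-CDF sampling: map U ~ Uniform(0,1) onto a zero-mean
    Laplacian variate with the given scale parameter."""
    u = rng.random() - 0.5                     # u in [-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

rng = random.Random(1)
lam = 2.0
draws = [laplace_from_uniform(rng, lam) for _ in range(200000)]
# for a zero-mean Laplacian with scale lam, E|X| equals lam
mean_abs = sum(abs(x) for x in draws) / len(draws)
```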
Banerjee, Abhirup; Maji, Pradipta
2015-12-01
The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of the brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.
Stochastic analysis of pitch angle scattering of charged particles by transverse magnetic waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemons, Don S.; Liu Kaijun; Winske, Dan
2009-11-15
This paper describes a theory of the velocity space scattering of charged particles in a static magnetic field composed of a uniform background field and a sum of transverse, circularly polarized, magnetic waves. When that sum has many terms the autocorrelation time required for particle orbits to become effectively randomized is small compared with the time required for the particle velocity distribution to change significantly. In this regime the deterministic equations of motion can be transformed into stochastic differential equations of motion. The resulting stochastic velocity space scattering is described, in part, by a pitch angle diffusion rate that is a function of initial pitch angle and properties of the wave spectrum. Numerical solutions of the deterministic equations of motion agree with the theory at all pitch angles, for wave energy densities up to and above the energy density of the uniform field, and for different wave spectral shapes.
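The passage from deterministic to stochastic equations of motion can be illustrated with a toy pitch-angle diffusion integrated by the Euler-Maruyama scheme; the constant diffusion rate below is a stand-in for the paper's pitch-angle-dependent rate, and all parameter values are illustrative:

```python
import math
import random

def diffuse_pitch_angles(n_particles, d_rate, dt, n_steps, seed=1):
    """Euler-Maruyama integration of d(alpha) = sqrt(2*D) dW for each
    particle, with reflecting boundaries at alpha = 0 and alpha = pi.
    An initially mono-pitch-angle beam spreads diffusively."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * d_rate * dt)
    alphas = [math.pi / 2.0] * n_particles   # beam starts at 90 degrees
    for _ in range(n_steps):
        for i in range(n_particles):
            a = alphas[i] + rng.gauss(0.0, sigma)
            a = abs(a)                        # reflect at alpha = 0
            if a > math.pi:                   # reflect at alpha = pi
                a = 2.0 * math.pi - a
            alphas[i] = a
    return alphas

alphas = diffuse_pitch_angles(5000, d_rate=0.01, dt=0.01, n_steps=100)
# at time t = 1 the pitch-angle variance should be close to 2*D*t = 0.02
```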
Pattern Selection and Super-Patterns in Opinion Dynamics
NASA Astrophysics Data System (ADS)
Ben-Naim, Eli; Scheel, Arnd
We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by a distance larger than the interaction range. These clusters form a quasi-periodic pattern in which the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation L between clusters. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
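A minimal simulation of the bounded-confidence interaction rule described above; agent count, threshold, convergence parameter, and seed are illustrative, and a threshold above 1/2 is chosen so the population collapses into a single consensus cluster:

```python
import random

def bounded_confidence(n_agents, threshold, mu=0.5, n_interactions=200000, seed=3):
    """Pairwise bounded-confidence dynamics: two random agents move
    toward their mutual mean only if their opinions differ by less
    than the confidence threshold."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n_agents)]  # uniform initial opinions
    for _ in range(n_interactions):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        d = x[j] - x[i]
        if abs(d) < threshold:
            x[i] += mu * d
            x[j] -= mu * d
    return x

def count_clusters(opinions, gap):
    """Count groups of opinions separated by more than `gap`."""
    xs = sorted(opinions)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > gap)

final = bounded_confidence(200, threshold=0.6)
```

Shrinking the threshold well below 1/2 instead produces several isolated clusters separated by more than the interaction range, which is the patterned steady state the abstract analyzes.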
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power law is a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, spin glasses, etc. In this study, we find that the exponential distributions, or hybrid ones (e.g., power laws with exponential cutoff) popularly used in network science, may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods such as simulated annealing, τ-EO and SOA, as shown by experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
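The three families of evolution distributions compared in the study can be put side by side as rank-selection probabilities over fitness ranks k = 1..n; the parameter values below are arbitrary illustrations, not the paper's tuned settings:

```python
import math

def powerlaw_probs(n, tau):
    """tau-EO rank selection: P(k) proportional to k**(-tau)."""
    w = [k ** (-tau) for k in range(1, n + 1)]
    s = sum(w)
    return [v / s for v in w]

def exponential_probs(n, beta):
    """Exponential alternative: P(k) proportional to exp(-beta*k)."""
    w = [math.exp(-beta * k) for k in range(1, n + 1)]
    s = sum(w)
    return [v / s for v in w]

def hybrid_probs(n, tau, beta):
    """Power law with exponential cutoff: P(k) ~ k**(-tau) * exp(-beta*k)."""
    w = [k ** (-tau) * math.exp(-beta * k) for k in range(1, n + 1)]
    s = sum(w)
    return [v / s for v in w]
```

The exponential and hybrid tails decay much faster than the pure power law, so they concentrate updates on the worst-ranked components more aggressively, which is the behavioral difference the experiments probe.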
Set statistics in conductive bridge random access memory device with Cu/HfO{sub 2}/Pt structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Meiyun; Long, Shibing, E-mail: longshibing@ime.ac.cn; Wang, Guoming
2014-11-10
The switching parameter variation of resistive switching memory is one of the most important challenges in its application. In this letter, we have studied the set statistics of conductive bridge random access memory with a Cu/HfO{sub 2}/Pt structure. The experimental distributions of the set parameters in several off resistance ranges are shown to nicely fit a Weibull model. The Weibull slopes of the set voltage and current increase and decrease logarithmically with off resistance, respectively. This experimental behavior is perfectly captured by a Monte Carlo simulator based on the cell-based set voltage statistics model and the Quantum Point Contact electron transport model. Our work provides indications for the improvement of the switching uniformity.
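Extracting a Weibull slope from a sample, as done for the set parameters above, can be sketched with least squares on the Weibull plot; the synthetic data below (shape 3, scale 1.2) merely stands in for measured set voltages:

```python
import math
import random

def weibull_slope(samples):
    """Least-squares estimate of the Weibull shape ('Weibull slope'):
    ln(-ln(1 - F)) plotted against ln(v) is a line with slope beta."""
    xs = sorted(samples)
    n = len(xs)
    pts = [(math.log(v), math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))))
           for i, v in enumerate(xs, start=1)]   # median-rank positions
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    return sxy / sxx

rng = random.Random(7)
data = [rng.weibullvariate(1.2, 3.0) for _ in range(2000)]  # scale, shape
beta_hat = weibull_slope(data)
```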
NASA Astrophysics Data System (ADS)
Bokhari, Abdullah
Demarcations between traditional distribution power systems and distributed generation (DG) architectures are increasingly evolving as higher DG penetration is introduced into the system. Accommodating less restrictive interconnection policies in existing electric power systems (EPSs) while maintaining the reliability and performance of power delivery has been the major challenge for DG growth. This dissertation studies power quality, energy savings and losses in a low voltage distribution network under various DG penetration cases. A simulation platform suite that includes electric power system, distributed generation and ZIP load models is implemented to determine the impact of DGs on power system steady-state performance and on the voltage profile of the customers/loads in the network under voltage reduction events. The investigation is designed to test the DG impact on the power system starting with one type of DG, then moving on to multiple DG types distributed in a random case and in a realistic/balanced case. The functionality of the proposed DG interconnection is designed to meet the basic requirements imposed by the various interconnection standards, most notably IEEE 1547, public service commission regulations, and local utility regulations. It is found that implementing DGs on the low voltage secondary network would improve customers' voltage profiles and system losses, and would provide significant energy savings and economic benefits for utilities. In a network populated with DGs, the utility would see a uniform voltage profile at the customers' end, as the voltage profile becomes more concentrated around the targeted voltage level. The study further reinforced the concept that DGs in a distribution network would improve voltage regulation, as a given percentage reduction on the utility side would ensure a uniform percentage reduction seen by all customers and reduce the number of voltage violations.
NASA Astrophysics Data System (ADS)
Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing
2017-11-01
In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.
Population pharmacokinetics of valnemulin in swine.
Zhao, D H; Zhang, Z; Zhang, C Y; Liu, Z C; Deng, H; Yu, J J; Guo, J P; Liu, Y H
2014-02-01
This study was carried out in 121 pigs to develop a population pharmacokinetic (PPK) model by oral (p.o.) administration of valnemulin at a single dose of 10 mg/kg. Serum biochemistry parameters of each pig were determined prior to drug administration. Three to five blood samples were collected at random time points, but uniformly distributed across the absorption, distribution, and elimination phases of drug disposition. Plasma concentrations of valnemulin were determined by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The concentration-time data were fitted to PPK models using nonlinear mixed effect modeling (NONMEM) with the G77 FORTRAN compiler. NONMEM runs were executed using Wings for NONMEM. Fixed effects of weight, age and sex, as well as biochemistry parameters, which may influence the PK of valnemulin, were investigated. The drug concentration-time data were adequately described by a one-compartment model with first-order absorption. A random effect model of valnemulin revealed a pattern of log-normal distribution, and it satisfactorily characterized the observed interindividual variability. The distribution of random residual errors, however, suggested an additive model for the initial phase (<12 h) followed by a combined model that consists of both proportional and additive features (≥12 h), so that the intra-individual variability could be sufficiently characterized. Covariate analysis indicated that body weight had a conspicuous effect on valnemulin clearance (CL/F). The estimated population PK values of Ka, V/F and CL/F were 0.292/h, 63.0 L and 41.3 L/h, respectively. © 2013 John Wiley & Sons Ltd.
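A one-compartment model with first-order absorption, evaluated with the reported population values (Ka = 0.292/h, V/F = 63.0 L, CL/F = 41.3 L/h); the 50 kg body weight, and hence the dose at 10 mg/kg, is a hypothetical illustration:

```python
import math

def concentration(t, dose, ka, v_f, cl_f):
    """One-compartment, first-order absorption:
    C(t) = D*ka / (V/F * (ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
    with elimination rate ke = (CL/F) / (V/F)."""
    ke = cl_f / v_f
    return dose * ka / (v_f * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

KA, V_F, CL_F = 0.292, 63.0, 41.3   # reported population values
DOSE = 50.0 * 10.0                  # mg, hypothetical 50 kg animal
```

Note that here ke = CL/F divided by V/F exceeds Ka, so the terminal decline is absorption-limited ('flip-flop' kinetics); the formula remains valid because both the numerator and the (ka - ke) factor change sign together.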
Geomasking sensitive health data and privacy protection: an evaluation using an E911 database.
Allshouse, William B; Fitch, Molly K; Hampton, Kristen H; Gesink, Dionne C; Doherty, Irene A; Leone, Peter A; Serre, Marc L; Miller, William C
2010-10-01
Geomasking is used to provide privacy protection for individual address information while maintaining spatial resolution for mapping purposes. Donut geomasking and other random perturbation geomasking algorithms rely on the assumption of a homogeneously distributed population to calculate displacement distances, leading to possible under-protection of individuals when this condition is not met. Using household data from 2007, we evaluated the performance of donut geomasking in Orange County, North Carolina. We calculated the estimated k-anonymity for every household based on the assumption of uniform household distribution. We then determined the actual k-anonymity by revealing household locations contained in the county E911 database. Census block groups in mixed-use areas with high population distribution heterogeneity were the most likely to have privacy protection below selected criteria. For heterogeneous populations, we suggest tripling the minimum displacement area in the donut to protect privacy with a less than 1% error rate.
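The donut displacement itself is simple to state: draw an offset uniformly by area from an annulus so that the displacement distance always lies in [r_min, r_max]. A sketch treating coordinates as planar, with illustrative radii:

```python
import math
import random

def donut_mask(x, y, r_min, r_max, rng):
    """Displace (x, y) by a random offset drawn uniformly by area from
    the annulus r_min <= r <= r_max, guaranteeing the minimum
    displacement that plain circular perturbation lacks."""
    # inverse-CDF for a uniform-by-area radius on an annulus
    r = math.sqrt(rng.random() * (r_max ** 2 - r_min ** 2) + r_min ** 2)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return x + r * math.cos(theta), y + r * math.sin(theta)

rng = random.Random(11)
masked = [donut_mask(0.0, 0.0, 50.0, 250.0, rng) for _ in range(1000)]
```

The under-protection discussed in the abstract arises not from this step but from choosing r_min and r_max under a uniform-population assumption; tripling the minimum displacement area, as the authors suggest, amounts to enlarging r_min for heterogeneous block groups.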
NASA Astrophysics Data System (ADS)
Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.
2013-09-01
The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are found to be as low as 22 km in the south-west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
NASA Astrophysics Data System (ADS)
Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.
2013-09-01
Spatial and temporal distribution of trunk-injected imidacloprid in apple tree canopies.
Aćimović, Srđan G; VanWoerkom, Anthony H; Reeb, Pablo D; Vandervoort, Christine; Garavaglia, Thomas; Cregg, Bert M; Wise, John C
2014-11-01
Pesticide use in orchards creates drift-driven pesticide losses which contaminate the environment. Trunk injection of pesticides as a target-precise delivery system could greatly reduce pesticide losses. However, pesticide efficiency after trunk injection is associated with the underinvestigated spatial and temporal distribution of the pesticide within the tree crown. This study quantified the spatial and temporal distribution of trunk-injected imidacloprid within apple crowns after trunk injection using one, two, four or eight injection ports per tree. The spatial uniformity of imidacloprid distribution in apple crowns significantly increased with more injection ports. Four ports allowed uniform spatial distribution of imidacloprid in the crown. Uniform and non-uniform spatial distributions were established early and lasted throughout the experiment. The temporal distribution of imidacloprid was significantly non-uniform. Upper and lower crown positions did not significantly differ in compound concentration. Crown concentration patterns indicated that imidacloprid transport in the trunk occurred through radial diffusion and vertical uptake with a spiral pattern. By showing where and when a trunk-injected compound is distributed in the apple tree canopy, this study addresses a key knowledge gap in terms of explaining the efficiency of the compound in the crown. These findings allow the improvement of target-precise pesticide delivery for more sustainable tree-based agriculture. © 2014 Society of Chemical Industry.
A New Family of Solvable Pearson-Dirichlet Random Walks
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2011-07-01
An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q>0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 − 1 (d_0 = 3) or q = d − 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). This result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, both with the same n and the same q = d, to obtain a closed-form expression for the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities of simple form, each equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend on both d and n, and of Bessel numbers independent of d.
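The walks described above are easy to sample directly: normalizing Gamma(q) step lengths to unit total makes them jointly Dirichlet(q, …, q). A planar sketch with arbitrary step count, q, and seed:

```python
import math
import random

def pearson_dirichlet_endpoint(n_steps, q, rng):
    """Endpoint of one planar Pearson-Dirichlet walk: Gamma(q) lengths
    normalized to total length 1, directions independent and uniform."""
    lengths = [rng.gammavariate(q, 1.0) for _ in range(n_steps)]
    total = sum(lengths)
    x = y = 0.0
    for ell in lengths:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += (ell / total) * math.cos(theta)
        y += (ell / total) * math.sin(theta)
    return x, y

rng = random.Random(2)
ends = [pearson_dirichlet_endpoint(3, 2.0, rng) for _ in range(2000)]
# the total walk length is 1, so every endpoint lies in the unit disc
```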
NASA Astrophysics Data System (ADS)
Wang, Zhe; Wang, Wen-Qin; Shao, Huaizong
2016-12-01
Unlike the phased array, which uses the same carrier frequency on each transmit element, the frequency diverse array (FDA) uses a small frequency offset across the array elements to produce a range-angle-dependent transmit beampattern. FDA radar provides new application capabilities and potentials due to its range-dependent transmit array beampattern, but an FDA using linearly increasing frequency offsets produces a transmit beampattern in which range and angle are coupled. In order to decouple the range-azimuth beampattern for FDA radar, this paper proposes a uniform linear array (ULA) FDA using Costas-sequence modulated frequency offsets to produce a random-like energy distribution in the transmit beampattern and a thumbtack transmit-receive beampattern. In doing so, the range and angle of targets can be unambiguously estimated through matched filtering and subspace decomposition algorithms in the receiver signal processor. Moreover, the random-like energy-distributed beampattern can also be utilized for low probability of intercept (LPI) radar applications. Numerical results show that the proposed scheme outperforms the standard FDA in focusing the transmit energy, especially in the range dimension.
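Costas sequences for such frequency-offset modulation can be generated with the classical Welch construction (a prime p with primitive root g); the helper verifies the defining distinct-difference property. The final mapping to element offsets f_0 + c·Δf, and the carrier and offset-step values, are assumptions for illustration, not the paper's parameters:

```python
def welch_costas(p, g):
    """Welch construction: for a prime p with primitive root g, the
    sequence g^1, g^2, ..., g^(p-1) (mod p) is a Costas permutation."""
    return [pow(g, i, p) for i in range(1, p)]

def is_costas(seq):
    """Costas property: for each separation d, all difference vectors
    (d, seq[i+d] - seq[i]) are distinct."""
    n = len(seq)
    return all(len({seq[i + d] - seq[i] for i in range(n - d)}) == n - d
               for d in range(1, n))

costas = welch_costas(11, 2)        # order-10 Costas sequence (2 is a
                                    # primitive root modulo 11)
f0, delta_f = 10e9, 1e6             # hypothetical carrier and offset step
freq_offsets = [f0 + c * delta_f for c in costas]
```

The thumbtack-like ambiguity behavior comes precisely from the distinct-difference property: no frequency-step pattern repeats at any time-frequency shift.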
Generalized Nonlinear Yule Models
NASA Astrophysics Data System (ADS)
Lansky, Petr; Polito, Federico; Sacerdote, Laura
2016-11-01
With the aim of considering models related to random graph growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results, we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random, recognizing the contribution to the asymptotics and the finite-time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model to account for nonlinear growth.
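A minimal linear, non-fractional version of the growth process can be simulated directly: each new link either starts a new page or attaches preferentially, i.e. with probability proportional to current in-links. The innovation probability and event count are arbitrary; this is the heavy-tailed baseline that the fractional nonlinear model generalizes:

```python
import random
from collections import Counter

def grow_links(n_links, p_new, seed=5):
    """Simon/Yule-type growth: with probability p_new a link goes to a
    brand-new page; otherwise it attaches to a page chosen with
    probability proportional to its in-links (sampling uniformly from
    the multiset of past endpoints implements preferential attachment)."""
    rng = random.Random(seed)
    endpoints = [0]          # page 0 starts with one in-link
    n_pages = 1
    for _ in range(n_links):
        if rng.random() < p_new:
            endpoints.append(n_pages)
            n_pages += 1
        else:
            endpoints.append(rng.choice(endpoints))
    return Counter(endpoints)

in_links = grow_links(20000, 0.2)
```

Early pages accumulate far more in-links than a typical page, producing the fat-tailed in-link distribution characteristic of Yule-type models.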
Modeling of chromosome intermingling by partially overlapping uniform random polygons.
Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J
2011-03-01
During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n that partially overlaps a fixed polygon forms an unsplittable link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.
Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis
Hensen, B.; Kalb, N.; Blok, M. S.; Dréau, A. E.; Reiserer, A.; Vermeulen, R. F. L.; Schouten, R. N.; Markham, M.; Twitchen, D. J.; Goodenough, K.; Elkouss, D.; Wehner, S.; Taminiau, T. H.; Hanson, R.
2016-01-01
The recently reported violation of a Bell inequality using entangled electronic spins in diamonds (Hensen et al., Nature 526, 682–686) provided the first loophole-free evidence against local-realist theories of nature. Here we report on data from a second Bell experiment using the same experimental setup with minor modifications. We find a violation of the CHSH-Bell inequality of 2.35 ± 0.18, in agreement with the first run, yielding an overall value of S = 2.38 ± 0.14. We calculate the resulting P-values of the second experiment and of the combined Bell tests. We provide an additional analysis of the distribution of settings choices recorded during the two tests, finding that the observed distributions are consistent with uniform settings for both tests. Finally, we analytically study the effect of particular models of random number generator (RNG) imperfection on our hypothesis test. We find that the winning probability per trial in the CHSH game can be bounded knowing only the mean of the RNG bias. This implies that our experimental result is robust for any model underlying the estimated average RNG bias, for random bits produced up to 690 ns too early by the random number generator. PMID:27509823
NASA Astrophysics Data System (ADS)
Asano, Takanori; Takaishi, Riichiro; Oda, Minoru; Sakuma, Kiwamu; Saitoh, Masumi; Tanaka, Hiroki
2018-04-01
We visualize the grain structures for individual nanosized thin film transistors (TFTs), which are electrically characterized, with an improved data processing technique for the dark-field image reconstruction of nanobeam electron diffraction maps. Our individual crystal analysis gives the one-to-one correspondence of TFTs with different grain boundary structures, such as random and coherent boundaries, to the characteristic degradations of ON-current and threshold voltage. Furthermore, the local crystalline uniformity inside a single grain is detected as the difference in diffraction intensity distribution.
NASA Astrophysics Data System (ADS)
Ni, Yong; Song, Zhaoqiang; Jiang, Hongyuan; Yu, Shu-Hong; He, Linghui
2015-08-01
How nacreous nanocomposites with optimal combinations of stiffness, strength and toughness depend on constituent properties and microstructure parameters is studied using a nonlinear shear-lag model. We show that the interfacial elasto-plasticity and the overlapping length between bricks, which depends on the brick size and the brick staggering mode, significantly affect the nonuniformity of the shear stress, the stress-transfer efficiency and thus the failure path. There are two characteristic lengths at which the strength and toughness are optimized, respectively. Simultaneous optimization of the strength and toughness is achieved by matching these lengths as closely as possible in a nacreous nanocomposite with a regularly staggered brick-and-mortar (BM) structure, where simultaneous uniform failure of the brick and interface occurs. In the randomly staggered BM structure, where the overlapping length is distributed, the nacreous nanocomposite turns the simultaneous uniform failure into progressive interface or brick failure with a moderate decrease of the strength and toughness. Specifically, there is a parametric range in which the strength and toughness are insensitive to the brick staggering randomness. The obtained results propose a parametric selection guideline based on length matching for the rational design of nacreous nanocomposites. Such a guideline explains why nacre is strong and tough while most artificial nacreous nanocomposites are not.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Moorthy, Jayashree
1995-01-01
A time-domain study of the random response of a laminated plate subjected to combined acoustic and thermal loads is carried out. The features of this problem also include given uniform static inplane forces. The formulation takes into consideration a possible initial imperfection in the flatness of the plate. High decibel sound pressure levels along with high thermal gradients across thickness drive the plate response into nonlinear regimes. This calls for the analysis to use von Karman large deflection strain-displacement relationships. A finite element model that combines the von Karman strains with the first-order shear deformation plate theory is developed. The development of the analytical model can accommodate an anisotropic composite laminate built up of uniformly thick layers of orthotropic, linearly elastic laminae. The global system of finite element equations is then reduced to a modal system of equations. Numerical simulation using a single-step algorithm in the time-domain is then carried out to solve for the modal coordinates. Nonlinear algebraic equations within each time-step are solved by the Newton-Raphson method. The random Gaussian filtered white noise load is generated using Monte Carlo simulation. The acoustic pressure distribution over the plate is capable of accounting for a grazing incidence wavefront. Numerical results are presented to study a variety of cases.
A Distributed Data-Gathering Protocol Using AUV in Underwater Sensor Networks.
Khan, Jawaad Ullah; Cho, Ho-Shin
2015-08-06
In this paper, we propose a distributed data-gathering scheme using an autonomous underwater vehicle (AUV) working as a mobile sink to gather data from a randomly distributed underwater sensor network where sensor nodes are clustered around several cluster headers. Unlike conventional data-gathering schemes where the AUV visits either every node or every cluster header, the proposed scheme allows the AUV to visit some selected nodes named path-nodes in a way that reduces the overall transmission power of the sensor nodes. Monte Carlo simulations are performed to investigate the performance of the proposed scheme compared with several preexisting techniques employing the AUV in terms of total amount of energy consumption, standard deviation of each node's energy consumption, latency to gather data at a sink, and controlling overhead. Simulation results show that the proposed scheme not only reduces the total energy consumption but also distributes the energy consumption more uniformly over the network, thereby increasing the lifetime of the network.
A Distributed Data-Gathering Protocol Using AUV in Underwater Sensor Networks
Khan, Jawaad Ullah; Cho, Ho-Shin
2015-01-01
In this paper, we propose a distributed data-gathering scheme using an autonomous underwater vehicle (AUV) working as a mobile sink to gather data from a randomly distributed underwater sensor network where sensor nodes are clustered around several cluster headers. Unlike conventional data-gathering schemes where the AUV visits either every node or every cluster header, the proposed scheme allows the AUV to visit some selected nodes named path-nodes in a way that reduces the overall transmission power of the sensor nodes. Monte Carlo simulations are performed to investigate the performance of the proposed scheme compared with several preexisting techniques employing the AUV in terms of total amount of energy consumption, standard deviation of each node’s energy consumption, latency to gather data at a sink, and controlling overhead. Simulation results show that the proposed scheme not only reduces the total energy consumption but also distributes the energy consumption more uniformly over the network, thereby increasing the lifetime of the network. PMID:26287189
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate return-stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
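The two-generator Monte Carlo loop described above can be sketched as follows. This is a hedged toy version, not the authors' tool: the lognormal parameters (median 31 kA, sigma of ln I = 0.74), the electrogeometric relation r = 10·I^0.65, and the single "tower" object are all assumptions for illustration.

```python
import math
import random

def simulate_strikes(objects, n_flashes=10000, area=(0.0, 1000.0), seed=1):
    """Toy Monte Carlo of leader attachment (illustrative sketch only).

    objects: list of (name, x, y, height) points of interest, in metres.
    Leader origins are uniform over the square `area`; return-stroke peak
    currents I (kA) are lognormal; the striking distance uses the common
    electrogeometric relation r = 10 * I**0.65 metres.
    """
    rng = random.Random(seed)
    hits = {name: 0 for name, _, _, _ in objects}
    hits["ground"] = 0
    lo, hi = area
    for _ in range(n_flashes):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        current = rng.lognormvariate(math.log(31.0), 0.74)  # kA (assumed)
        r_s = 10.0 * current ** 0.65
        # The vertically descending leader attaches to whatever it can
        # reach at the greatest tip height: flat ground at height r_s, or
        # an object whose horizontal offset from the path is within r_s.
        best, best_h = "ground", r_s
        for name, ox, oy, oz in objects:
            horiz2 = (x - ox) ** 2 + (y - oy) ** 2
            if horiz2 <= r_s ** 2:
                h = oz + math.sqrt(r_s ** 2 - horiz2)
                if h > best_h:
                    best, best_h = name, h
        hits[best] += 1
    return hits

hits = simulate_strikes([("tower", 500.0, 500.0, 80.0)])
```

Dividing the per-object counts by the simulated time span then gives the attachment rate, and binning the currents of the attached flashes gives the per-object peak-current distribution mentioned in the abstract.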
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
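The core scoring idea can be sketched from first principles: if a trial CDF F equals the true CDF, then the transformed sample u_k = F(x_(k)) behaves like uniform order statistics, and U_(k) ~ Beta(k, n+1-k). A quasi-log-likelihood against those Beta laws therefore rewards good trial CDFs. This is a hedged sketch; the published method additionally normalizes this into a sample-size-invariant score, which is not reproduced here.

```python
import math

def order_statistic_score(u_sorted):
    """Average log-likelihood of sorted values 0 < u_1 <= ... <= u_n under
    the hypothesis that they are order statistics of n i.i.d. U(0,1)
    draws, for which U_(k) ~ Beta(k, n + 1 - k).  A trial CDF applied to
    a sample should score high when it is close to the true CDF."""
    n = len(u_sorted)
    score = 0.0
    for k, u in enumerate(u_sorted, start=1):
        a, b = float(k), float(n + 1 - k)
        log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
        u = min(max(u, 1e-12), 1.0 - 1e-12)  # guard the logs at the edges
        score += (a - 1.0) * math.log(u) + (b - 1.0) * math.log1p(-u) - log_beta
    return score / n

# Identity trial CDF on near-ideal uniform data vs. a badly wrong trial CDF
good = [(k + 0.5) / 100.0 for k in range(100)]
bad = [u ** 3 for u in good]
good_score = order_statistic_score(good)
bad_score = order_statistic_score(bad)
```

In the full method this score is what drives the iterative improvement of the trial cumulative distribution functions.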
High throughput nonparametric probability density estimation
Farmer, Jenny
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
Emergence of small-world structure in networks of spiking neurons through STDP plasticity.
Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas
2011-01-01
In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
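The graph-metric part of this analysis (threshold the weight matrix, then compute clustering coefficient and characteristic path length) can be sketched in pure Python. This is a hedged simplification: random static weights stand in for the evolving STDP weights, and the binary graph is treated as undirected, whereas the paper uses a directed connection matrix.

```python
import itertools
import random

def binary_graph(weights, theta):
    """Threshold a weight matrix into an undirected adjacency structure."""
    n = len(weights)
    adj = {i: set() for i in range(n)}
    for i, j in itertools.combinations(range(n), 2):
        if weights[i][j] >= theta or weights[j][i] >= theta:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient (nodes with degree < 2 count 0)."""
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a, b in itertools.combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def path_length(adj):
    """Characteristic path length by BFS, averaged over reachable pairs."""
    dists = []
    for src in adj:
        seen, frontier, d = {src}, {src}, 0
        while frontier:
            d += 1
            frontier = {v for u in frontier for v in adj[u]} - seen
            seen |= frontier
            dists.extend([d] * len(frontier))
    return sum(dists) / len(dists)

rng = random.Random(0)
n = 30
w = [[rng.random() for _ in range(n)] for _ in range(n)]  # uniform weights
adj = binary_graph(w, theta=0.8)
C, L = clustering(adj), path_length(adj)
```

The small-world index is then the ratio (C/C_rand)/(L/L_rand) against a degree-matched random graph, tracked at every simulation step as the STDP rule reshapes the weights.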
Hu, Wei; Zou, Lilan; Chen, Xinman; Qin, Ni; Li, Shuwei; Bao, Dinghua
2014-04-09
We report on highly uniform resistive switching properties of amorphous InGaZnO (a-IGZO) thin films. The thin films were fabricated by a low temperature photochemical solution deposition method, a simple process combining chemical solution deposition and ultraviolet (UV) irradiation treatment. The a-IGZO based resistive switching devices exhibit long retention, good endurance, uniform switching voltages, and stable distribution of low and high resistance states. Electrical conduction mechanisms were also discussed on the basis of the current-voltage characteristics and their temperature dependence. The excellent resistive switching properties can be attributed to the reduction of organic- and hydrogen-based elements and the formation of enhanced metal-oxide bonding and metal-hydroxide bonding networks by hydrogen bonding due to UV irradiation, based on Fourier-transform infrared spectroscopy, X-ray photoelectron spectroscopy, and field-emission scanning electron microscopy analyses of the thin films. This study suggests that a-IGZO thin films have potential applications in resistive random access memory, and that the low temperature photochemical solution deposition method could enable system-on-panel applications if the a-IGZO resistive switching cells were integrated with a-IGZO thin film transistors.
Effects of beam irregularity on uniform scanning
NASA Astrophysics Data System (ADS)
Kim, Chang Hyeuk; Jang, Sea duk; Yang, Tae-Keun
2016-09-01
An active scanning beam delivery method has many advantages in particle beam applications. For the beam to be successfully delivered to the target volume using the active scanning technique, the dose uniformity must be considered and should be within 2.5% in the case of therapy applications. During beam irradiation, many beam parameters affect the 2-dimensional uniformity at the target layer. A basic assumption in the beam irradiation planning stage is that the shape of the beam is symmetric and follows a Gaussian distribution. In this study, a pure Gaussian-shaped beam distribution was distorted by adding a parasitic Gaussian distribution. An appropriate uniform scanning condition was deduced by using a quantitative analysis based on the gamma value of the distorted beam and the 2-dimensional uniformities.
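How scan spacing controls layer uniformity can be illustrated with a one-dimensional superposition of Gaussian spots. This is a toy sketch, not the paper's analysis (which is 2-D and gamma-index based): the 5 mm beam sigma, the scan pitches, and the flatness metric are all assumptions.

```python
import math

def scanned_dose(xs, centers, sigma):
    """Dose profile from superposed identical Gaussian beam spots (1-D)."""
    return [sum(math.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centers)
            for x in xs]

def flatness(dose):
    """Relative ripple (max - min)/(max + min) over the sampled region."""
    return (max(dose) - min(dose)) / (max(dose) + min(dose))

sigma = 5.0                                    # beam sigma in mm (assumed)
xs = [0.5 * i for i in range(-40, 41)]         # central -20..20 mm region
fine = [-50.0 + 5.0 * k for k in range(21)]    # scan pitch = 1.0 * sigma
coarse = [-50.0 + 12.5 * k for k in range(9)]  # scan pitch = 2.5 * sigma
ripple_fine = flatness(scanned_dose(xs, fine, sigma))
ripple_coarse = flatness(scanned_dose(xs, coarse, sigma))
```

With a pitch of one sigma the superposed profile is flat to far better than the 2.5% therapy tolerance, while a 2.5-sigma pitch produces a ripple of several percent; a distorted (parasitic) beam shape shifts where this boundary lies, which is what the study quantifies.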
Evaluation of dripper clogging using magnetic water in drip irrigation
NASA Astrophysics Data System (ADS)
Khoshravesh, Mojtaba; Mirzaei, Sayyed Mohammad Javad; Shirazi, Pooya; Valashedi, Reza Norooz
2018-06-01
This study was performed to investigate the uniformity of water distribution and the discharge variations in drip irrigation using magnetic water. Magnetic water was obtained by passing water through a strong permanent magnet connected to the feed pipeline. Two main treatments (magnetic and non-magnetic water) and three salt-concentration sub-treatments (well water, and well water with 150 and 300 mg L-1 of calcium carbonate added) were applied, with three replications. The effect of magnetic water on the average dripper discharge was significant (P ≤ 0.05). At the final irrigation, the average dripper discharge and the distribution uniformity were higher for the magnetic water than for the non-magnetic water. The magnetic water also showed a significant effect (P ≤ 0.01) on the distribution uniformity of the drippers. At the first irrigation, the water distribution uniformity was almost the same for the magnetic and the non-magnetic water. The use of magnetic water for drip irrigation is therefore recommended to achieve higher uniformity.
The concept of entropy in landscape evolution
Leopold, Luna Bergere; Langbein, Walter Basil
1962-01-01
The concept of entropy is expressed in terms of probability of various states. Entropy treats of the distribution of energy. The principle is introduced that the most probable condition exists when energy in a river system is as uniformly distributed as may be permitted by physical constraints. From these general considerations equations for the longitudinal profiles of rivers are derived that are mathematically comparable to those observed in the field. The most probable river profiles approach the condition in which the downstream rate of production of entropy per unit mass is constant. Hydraulic equations are insufficient to determine the velocity, depths, and slopes of rivers that are themselves authors of their own hydraulic geometries. A solution becomes possible by introducing the concept that the distribution of energy tends toward the most probable. This solution leads to a theoretical definition of the hydraulic geometry of river channels that agrees closely with field observations. The most probable state for certain physical systems can also be illustrated by random-walk models. Average longitudinal profiles and drainage networks were so derived and these have the properties implied by the theory. The drainage networks derived from random walks have some of the principal properties demonstrated by the Horton analysis; specifically, the logarithms of stream length and stream numbers are proportional to stream order.
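The random-walk derivation of longitudinal profiles mentioned above can be illustrated with a minimal simulation. This is only in the spirit of Leopold and Langbein, not their exact scheme: here the walker's probability of dropping a unit of elevation is taken proportional to its remaining elevation (an assumed rule), which makes the mean profile decay roughly exponentially, i.e. concave like observed river profiles.

```python
import random

def random_walk_profile(h0=100, p_scale=0.005, length=300, trials=2000, seed=7):
    """Mean longitudinal profile averaged over many random walks.

    At each unit of downstream distance the walker drops one unit of
    elevation with probability p_scale * h (kept below 1), so the mean
    elevation decays approximately as h0 * exp(-p_scale * x)."""
    rng = random.Random(seed)
    mean = [0.0] * length
    for _ in range(trials):
        h = h0
        for x in range(length):
            mean[x] += h
            if h > 0 and rng.random() < p_scale * h:
                h -= 1
    return [m / trials for m in mean]

profile = random_walk_profile()
```

The averaged profile is steep near the "headwaters" and flattens downstream, reproducing the concave-upward form that the entropy argument derives analytically.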
Gravitational Wakes Sizes from Multiple Cassini Radio Occultations of Saturn's Rings
NASA Astrophysics Data System (ADS)
Marouf, E. A.; Wong, K. K.; French, R. G.; Rappaport, N. J.; McGhee, C. A.; Anabtawi, A.
2016-12-01
Voyager and Cassini radio occultation extinction and forward scattering observations of Saturn's C-Ring and Cassini Division imply power law particle size distributions extending from few millimeters to several meters with power law index in the 2.8 to 3.2 range, depending on the specific ring feature. We extend size determination to the elongated and canted particle clusters (gravitational wakes) known to permeate Saturn's A- and B-Rings. We use multiple Cassini radio occultation observations over a range of ring opening angle B and wake viewing angle α to constrain the mean wake width W and thickness/height H, and average ring area coverage fraction. The rings are modeled as randomly blocked diffraction screen in the plane normal to the incidence direction. Collective particle shadows define the blocked area. The screen's transmittance is binary: blocked or unblocked. Wakes are modeled as thin layer of elliptical cylinders populated by random but uniformly distributed spherical particles. The cylinders can be immersed in a "classical" layer of spatially uniformly distributed particles. Numerical simulations of model diffraction patterns reveal two distinct components: cylindrical and spherical. The first dominates at small scattering angles and originates from specific locations within the footprint of the spacecraft antenna on the rings. The second dominates at large scattering angles and originates from the full footprint. We interpret Cassini extinction and scattering observations in the light of the simulation results. We compute and remove contribution of the spherical component to observed scattered signal spectra assuming known particle size distribution. A large residual spectral component is interpreted as contribution of cylindrical (wake) diffraction. Its angular width determines a cylindrical shadow width that depends on the wake parameters (W,H) and the viewing geometry (α,B). 
Its strength constrains the mean fractional area covered (optical depth) and hence the mean wake spacing. Self-consistent (W,H) values are estimated by a least-squares fit to results from multiple occultations. Example results for observed scattering by several inner A-Ring features suggest particle clusters (wakes) that are a few tens of meters wide and several meters thick.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
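For reference, the condition at the heart of this abstract can be stated as follows (sketched from the standard formulation of h-integrability; the row-length notation k_n is an assumption). An array {X_ni} is h-integrable with respect to an array of constants {a_ni} when, for an increasing function h with h(n) → ∞,

```latex
\sup_{n\ge 1}\sum_{i=1}^{k_n} a_{ni}\,\mathbb{E}\lvert X_{ni}\rvert<\infty
\qquad\text{and}\qquad
\lim_{n\to\infty}\sum_{i=1}^{k_n} a_{ni}\,
\mathbb{E}\bigl[\lvert X_{ni}\rvert\,\mathbf{1}\{\lvert X_{ni}\rvert>h(n)\}\bigr]=0 .
```

Cesàro uniform integrability corresponds to the particular weights a_{ni} = 1/n, which is why the new notion is weaker.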
Expert Assessment of Stigmergy: A Report for the Department of National Defence
2005-10-01
pheromone table may be reduced by implementing a clustering scheme. Termite can take advantage of the wireless broadcast medium, since it is possible for...comparing it with any other routing scheme. The Termite scheme [RW] differs from the source routing [ITT] by applying pheromone trails or random walks...rather than uniform or probabilistic ones. Random walk ants differ from uniform ants since they follow pheromone trails, if any. Termite [RW] also
NASA Astrophysics Data System (ADS)
Emoto, K.; Saito, T.; Shiomi, K.
2017-12-01
Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed based on the statistical method by considering the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε2a3/(1 + a2m2)2, where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. 
Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
Turbulent transport with intermittency: Expectation of a scalar concentration.
Rast, Mark Peter; Pinton, Jean-François; Mininni, Pablo D
2016-04-01
Scalar transport by turbulent flows is best described in terms of Lagrangian parcel motions. Here we measure the Eulerian distance traveled along Lagrangian trajectories in a simple point vortex flow to determine the probabilistic impulse response function for scalar transport in the absence of molecular diffusion. As expected, the mean squared Eulerian displacement scales ballistically at very short times and diffusively for very long times, with the displacement distribution at any given time approximating that of a random walk. However, significant deviations in the displacement distributions from Rayleigh are found. The probability of long distance transport is reduced over inertial range time scales due to spatial and temporal intermittency. This can be modeled as a series of trapping events with durations uniformly distributed below the Eulerian integral time scale. The probability of long distance transport is, on the other hand, enhanced beyond that of the random walk for both times shorter than the Lagrangian integral time and times longer than the Eulerian integral time. The very short-time enhancement reflects the underlying Lagrangian velocity distribution, while that at very long times results from the spatial and temporal variation of the flow at the largest scales. The probabilistic impulse response function, and with it the expectation value of the scalar concentration at any point in space and time, can be modeled using only the evolution of the lowest spatial wave number modes (the mean and the lowest harmonic) and an eddy based constrained random walk that captures the essential velocity phase relations associated with advection by vortex motions. Preliminary examination of Lagrangian tracers in three-dimensional homogeneous isotropic turbulence suggests that transport in that setting can be similarly modeled.
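The trapping picture invoked above (ballistic flights interrupted by trapping events of uniformly distributed duration) can be sketched as a toy continuous-time random walk. All parameters here are assumptions for illustration: unit flight duration, unit-variance Gaussian velocities, and trap durations uniform on (0, 1); each particle starts at the beginning of a flight so the short-time ballistic regime is visible.

```python
import random

def msd_with_trapping(n_particles=4000, t_samples=(0.1, 0.2, 50.0, 100.0),
                      seed=3):
    """Mean squared displacement of a toy flight/trap random walk."""
    rng = random.Random(seed)
    t_end = max(t_samples)
    msd = {t: 0.0 for t in t_samples}
    for _ in range(n_particles):
        segs, t, x = [], 0.0, 0.0
        flying = True
        while t < t_end:
            if flying:
                v = rng.gauss(0.0, 1.0)
                segs.append((t, t + 1.0, x, v))  # (start, end, x_start, v)
                x += v
                t += 1.0
            else:
                t += rng.uniform(0.0, 1.0)       # trapped: position frozen
            flying = not flying
        for ts in t_samples:
            pos = 0.0
            for t0, t1, x0, v in segs:           # position at time ts
                if ts < t0:
                    break
                pos = x0 + v * min(ts - t0, t1 - t0)
            msd[ts] += pos * pos
    return {t: s / n_particles for t, s in msd.items()}

msd = msd_with_trapping()
```

At times well inside a flight the mean squared displacement quadruples when the time doubles (ballistic, MSD ∝ t²), while at long times the trapping-diluted flights produce near-diffusive growth (MSD ∝ t), mirroring the two regimes described in the abstract.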
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust.
Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
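The matrix-selection method described above can be sketched in a few lines: at each time step one whole observed projection matrix is drawn at random (i.i.d. draws, a common simplification), and the stochastic growth rate is the long-run average one-step log growth. The two 2-stage "good year"/"bad year" matrices below are hypothetical numbers, not the study's data.

```python
import math
import random

def stochastic_growth_rate(matrices, n0, t_max=5000, seed=11):
    """Matrix-selection estimate of log(lambda_s): project a stage vector
    with a randomly chosen observed matrix each step and average the
    one-step log growth of total population size."""
    rng = random.Random(seed)
    n = list(n0)
    log_sum = 0.0
    for _ in range(t_max):
        a = rng.choice(matrices)
        n = [sum(a[i][j] * n[j] for j in range(len(n)))
             for i in range(len(n))]
        total = sum(n)
        log_sum += math.log(total)
        n = [x / total for x in n]  # renormalize to avoid overflow
    return log_sum / t_max

# Two hypothetical 2-stage projection matrices: a good and a bad year
good_year = [[0.2, 1.6], [0.5, 0.8]]
bad_year = [[0.1, 0.4], [0.2, 0.6]]
r = stochastic_growth_rate([good_year, bad_year], n0=[0.5, 0.5])
```

The element-selection alternatives in the study replace `rng.choice(matrices)` with per-element draws from a fitted beta, gamma, or other distribution, which is exactly where the distributional assumptions the authors test enter the model.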
Nonlinear dynamic evolution and control in CCFN with mixed attachment mechanisms
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Jianping; Han, Dun
2017-01-01
In recent years, wireless communication has played an important role in our lives. Cooperative communication, in which mobile stations with single antennas share their antennas with each other to form a virtual MIMO antenna system, is becoming a development trend that offers a diversity gain for future wireless communication. In this paper, a fitness model of an evolving network based on complex networks with mixed attachment mechanisms is devised in order to study an actual network, the CCFN (cooperative communication fitness network). Firstly, the evolution of the CCFN is given by four cases with different probabilities, and the rate equations of the node degrees are presented to analyze the evolution of the CCFN. Secondly, the degree distribution is analyzed by solving the rate equation and by numerical simulation for four example fitness distributions: power-law, uniform, exponential and Rayleigh. Finally, the robustness of the CCFN under random attack and intentional attack is studied by numerical simulation with the four fitness distributions, analyzing the effects on the degree distribution, the average path length and the average degree. The results of this paper offer insights for building CCFN systems in order to program communication resources.
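The fitness-driven part of such growth models can be sketched with a Bianconi-Barabási-style rule (an assumed stand-in; the paper's CCFN mixes several attachment mechanisms with different probabilities, which this sketch does not reproduce): each new node attaches its m links to existing node i with probability proportional to degree_i × fitness_i.

```python
import random

def grow_fitness_network(n_nodes, m=2, seed=5,
                         fitness=lambda rng: rng.random()):
    """Grow a network by fitness-weighted preferential attachment.

    `fitness` draws one node fitness from the chosen law, e.g. uniform
    (default), exponential, or Rayleigh as in the paper's examples.
    Returns the degree and fitness lists."""
    rng = random.Random(seed)
    deg = [m] * (m + 1)                       # small complete core
    eta = [fitness(rng) for _ in range(m + 1)]
    for _ in range(m + 1, n_nodes):
        weights = [d * e for d, e in zip(deg, eta)]
        targets = set()
        while len(targets) < m:               # m distinct targets
            targets.add(rng.choices(range(len(deg)), weights=weights)[0])
        for t in targets:
            deg[t] += 1
        deg.append(m)
        eta.append(fitness(rng))
    return deg, eta

deg, eta = grow_fitness_network(2000)
```

Swapping the `fitness` callable for `lambda rng: rng.expovariate(1.0)` or a Rayleigh draw changes the resulting degree distribution, which is the comparison the rate-equation analysis makes analytically.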
NASA Astrophysics Data System (ADS)
Jasenak, Brian
2017-02-01
Ultraviolet light-emitting diode (UV LED) adoption is accelerating; UV LEDs are being used in new applications such as UV curing, germicidal irradiation, nondestructive testing, and forensic analysis. In many of these applications, it is critically important to produce a uniform light distribution and consistent surface irradiance. Flat panes of fused quartz, silica, or glass are commonly used to cover and protect UV LED arrays. However, they do not offer the advantages of an optical lens design. An investigation was conducted to determine the effect of a secondary glass optic on the uniformity of the light distribution and irradiance. Glass optics capable of transmitting UV-A, UV-B, and UV-C wavelengths can improve light distribution, uniformity, and intensity. In this work, two simulation studies were created to illustrate distinct irradiance patterns desirable for potential real-world applications. The first study investigates the use of a multi-UV-LED array and optic to create a uniform irradiance pattern on a flat two-dimensional (2D) target surface. The uniformity was improved by designing both the LED array and the molded optic to produce a homogeneous pattern. The second study investigated the use of an LED light source and a molded optic to improve the light uniformity on the inside of a canister. The case study illustrates the need for careful selection of the LED based on its light distribution and the subsequent design of the optics. The optic utilizes total internal reflection to create an optimized light distribution. The combination of the LED and the molded optic showed significant improvement in uniformity on the inner surface of the canister. The simulations illustrate how the application of optics can significantly improve UV light distribution, which can be critical in applications such as UV curing and sterilization.
Numerical simulation of a helical shape electric arc in the external axial magnetic field
NASA Astrophysics Data System (ADS)
Urusov, R. M.; Urusova, I. R.
2016-10-01
Within the framework of a non-stationary three-dimensional mathematical model, in the approximation of partial local thermodynamic equilibrium, the characteristics of a DC electric arc burning in a cylindrical channel in a uniform external axial magnetic field were calculated numerically. A method for the numerical simulation of an arc of helical shape in a uniform external axial magnetic field is proposed. The method consists in supplementing the computational algorithm with a "scheme" analogue of fluctuations in the electron temperature. This "scheme" analogue of fluctuations amplifies a weak numerical asymmetry of the electron temperature distribution, which arises randomly in the course of the computation; the asymmetry can then be "picked up" by the external magnetic field and grows up to a certain value sufficient for the formation of the helical structure of the arc column. In the absence of fluctuations in the computational algorithm, the arc column in the external axial magnetic field maintains cylindrical axial symmetry, and the helical form of the arc is not observed.
Pattern selection and super-patterns in the bounded confidence model
Ben-Naim, E.; Scheel, A.
2015-10-26
We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. Furthermore, the spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
Pattern selection and super-patterns in the bounded confidence model
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Scheel, A.
2015-10-01
We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
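The interaction rule described above can be sketched as a minimal Deffuant-style simulation. This is an illustrative toy, not the paper's analysis: the agent count, threshold, and step budget are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_confidence(n_agents=1000, threshold=0.05, steps=200_000):
    """Bounded confidence dynamics: two randomly chosen agents move to
    their mean opinion only when they differ by less than `threshold`.
    All parameter values here are illustrative."""
    opinions = rng.uniform(0.0, 1.0, n_agents)  # uniform initial opinions
    pairs = rng.integers(0, n_agents, size=(steps, 2))
    for i, j in pairs:
        if i != j and abs(opinions[i] - opinions[j]) < threshold:
            opinions[i] = opinions[j] = 0.5 * (opinions[i] + opinions[j])
    return opinions

final = np.sort(bounded_confidence())
# Surviving clusters end up separated by more than the interaction threshold:
gaps = np.diff(final)
clusters = np.split(final, np.where(gaps > 0.05)[0] + 1)
print(len(clusters), "clusters")
```

With a threshold of 0.05 the steady state typically shows on the order of ten isolated clusters, qualitatively matching the quasi-periodic pattern the abstract describes.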
NASA Astrophysics Data System (ADS)
Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.
2018-01-01
A preparation technology for ultra-thin carbon-fiber paper is reported. The distribution homogeneity of the carbon fibers strongly influences the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fibers across two or more binary images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the more uniform the carbon-fiber distribution. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
NASA Astrophysics Data System (ADS)
Gori-Giorgi, Paola; Ziesche, Paul
2002-12-01
The momentum distribution of the unpolarized uniform electron gas in its Fermi-liquid regime, n(k,rs), with the momenta k measured in units of the Fermi wave number kF and with the density parameter rs, is constructed with the help of the convex Kulik function G(x). It is assumed that n(0,rs),n(1±,rs), the on-top pair density g(0,rs), and the kinetic energy t(rs) are known (respectively, from accurate calculations for rs=1,…,5, from the solution of the Overhauser model, and from quantum Monte Carlo calculations via the virial theorem). Information from the high- and the low-density limit, corresponding to the random-phase approximation and to the Wigner crystal limit, is used. The result is an accurate parametrization of n(k,rs), which fulfills most of the known exact constraints. It is in agreement with the effective-potential calculations of Takada and Yasuhara [Phys. Rev. B 44, 7879 (1991)], is compatible with quantum Monte Carlo data, and is valid in the density range rs≲12. The corresponding cumulant expansions of the pair density and of the static structure factor are discussed, and some exact limits are derived.
Gravitational Effects on Closed-Cellular-Foam Microstructure
NASA Technical Reports Server (NTRS)
Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas
1996-01-01
Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff from magnetic resonance images; the study aims to define an alternative display method that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are textured and displayed with the information from the magnetic resonance images using trilinear interpolation. For the generation of points to texture each patch, we propose a new method that guarantees a uniform distribution of points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower than that of deterministic and other standard statistical techniques.
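The uniform point-generation step can be illustrated with the standard square-root sampling trick for a planar triangle. This is a generic statistical method in the spirit of the patch-texturing step; the paper's own scheme on ellipsoidal patches, and its cost comparison, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def uniform_points_in_triangle(a, b, c, n):
    """Uniformly sample n points inside the triangle (a, b, c) via
    barycentric coordinates; the square root corrects the area density
    so the points are uniform, not clustered near vertex a."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    r1 = np.sqrt(rng.uniform(size=(n, 1)))
    r2 = rng.uniform(size=(n, 1))
    return (1 - r1) * a + r1 * (1 - r2) * b + r1 * r2 * c

pts = uniform_points_in_triangle([0, 0], [1, 0], [0, 1], 1000)
```

Every generated point lies inside the triangle, and the expected count in any subregion is proportional to its area.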
NASA Astrophysics Data System (ADS)
Flynn, Ryan
2007-12-01
The distribution of biological characteristics such as clonogen density, proliferation, and hypoxia throughout tumors is generally non-uniform; it follows that the optimal dose prescriptions should also be non-uniform and tumor-specific. Advances in intensity modulated x-ray therapy (IMXT) technology have made the delivery of custom-made non-uniform dose distributions possible in practice. Intensity modulated proton therapy (IMPT) has the potential to deliver non-uniform dose distributions as well, while significantly reducing normal tissue and organ-at-risk dose relative to IMXT. In this work, a specialized treatment planning system was developed for the purpose of optimizing and comparing biologically based IMXT and IMPT plans. The IMXT systems of step-and-shoot (IMXT-SAS) and helical tomotherapy (IMXT-HT) and the IMPT systems of intensity modulated spot scanning (IMPT-SS) and distal gradient tracking (IMPT-DGT) were simulated. A thorough phantom study was conducted in which several subvolumes, contained within a base tumor region, were boosted or avoided with IMXT and IMPT. Different boosting situations were simulated by varying the size, proximity, and the doses prescribed to the subvolumes, and the size of the phantom. IMXT and IMPT were also compared for a whole brain radiation therapy (WBRT) case, in which a brain metastasis was simultaneously boosted and the hippocampus was avoided. Finally, IMXT and IMPT dose distributions were compared for the case of a non-uniform dose prescription in a head-and-neck cancer patient, based on PET imaging with the Cu(II)-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-ATSM) hypoxia marker. The non-uniform dose distributions within the tumor region were comparable for IMXT and IMPT.
IMPT, however, was capable of delivering the same non-uniform dose distributions within a tumor using a 180° arc as for a full 360° rotation, which resulted in the reduction of normal tissue integral dose by a factor of up to three relative to IMXT, and the complete sparing of organs at risk distal to the tumor region.
NASA Astrophysics Data System (ADS)
Chuang, Kai-Chi; Chung, Hao-Tung; Chu, Chi-Yan; Luo, Jun-Dao; Li, Wei-Shuo; Li, Yi-Shao; Cheng, Huang-Chung
2018-06-01
An AlOx layer was deposited on HfOx, and the bilayered dielectric films were found to confine the formation locations of conductive filaments (CFs) during the forming process and thereby improve device-to-device uniformity. In addition, a Ti interposing layer was adopted to facilitate the formation of oxygen vacancies. As a result, the resistive random access memory (RRAM) device with TiN/Ti/AlOx(1 nm)/HfOx(6 nm)/TiN stack layers demonstrated excellent device-to-device uniformity, although its resistive switching voltages, a forming voltage (V_Forming) of 2.08 V, a set voltage (V_Set) of 1.96 V, and a reset voltage (V_Reset) of -1.02 V, were slightly larger than those of the device with TiN/Ti/HfOx(6 nm)/TiN stack layers. However, the device with a thicker 2-nm AlOx layer showed worse uniformity than the 1-nm one. This was attributed to the increased oxygen atomic percentage in the bilayered dielectric films of the 2-nm device: with the higher oxygen content, fewer oxygen vacancies are available to form CFs. The growth of CFs therefore becomes more random and the device-to-device uniformity degrades.
NASA Astrophysics Data System (ADS)
Hsu, Jiann-wien; Huang, Ding-wei
2009-12-01
We study the survival of extreme opinions in various processes of consensus formation. All opinions are treated equally and subjected to the same rules of change. We investigate three typical models of reaching a consensus: (A) personal influence, (B) influence from the surroundings, and (C) influence on the surroundings. Starting with uniformly distributed random opinions, our results show that extreme opinions can survive in models (A) and (B), but not in model (C). We conclude that neither personal influence nor passive adaptation to the environment is sufficient to eradicate all extreme opinions. Only active persuasion that changes the surroundings eliminates extreme opinions completely.
Diaconis, Persi; Holmes, Susan; Janson, Svante
2015-01-01
We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits.
Theoretical model for plasmonic photothermal response of gold nanostructures solutions
NASA Astrophysics Data System (ADS)
Phan, Anh D.; Nga, Do T.; Viet, Nguyen A.
2018-03-01
Photothermal effects of gold core-shell nanoparticles and nanorods dispersed in water are theoretically investigated using the transient bioheat equation and the extended Mie theory. Properly calculating the absorption cross section is a crucial step in determining the elevation of the solution temperature. The nanostructures are assumed to be randomly and uniformly distributed in the solution. Compared with previous experiments on various systems, our theoretical temperature increase during laser illumination shows reasonable qualitative and quantitative agreement. This approach can be a highly reliable tool to predict photothermal effects in experimentally unexplored structures. We also validate our approach and discuss its limitations.
Three-phase boundary length in solid-oxide fuel cells: A mathematical model
NASA Astrophysics Data System (ADS)
Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf
A mathematical model to calculate the volume-specific three-phase boundary length in the porous composite electrodes of solid-oxide fuel cells is presented. The model is based exclusively on geometrical considerations, accounting for porosity, particle diameter, particle size distribution, and solid-phase distribution. Results are presented for uniform as well as non-uniform particle size distributions.
Spatial Burnout in Water Reactors with Nonuniform Startup Distributions of Uranium and Boron
NASA Technical Reports Server (NTRS)
Fox, Thomas A.; Bogart, Donald
1955-01-01
Spatial burnout calculations have been made for two types of water-moderated cylindrical reactors using boron as a burnable poison to increase reactor life. The specific reactors studied were a version of the Submarine Advanced Reactor (SAR) and a supercritical water reactor (SCW). Burnout characteristics such as reactivity excursion, neutron-flux and heat-generation distributions, and uranium and boron distributions have been determined for core lives corresponding to a burnup of approximately 7 kilograms of fully enriched uranium. All reactivity calculations have been based on the actual nonuniform distribution of absorbers existing during intervals of core life. Spatial burnout of uranium and boron and spatial build-up of fission products and equilibrium xenon have been considered. Calculations were performed on the NACA nuclear reactor simulator using two-group diffusion theory. The following reactor burnout characteristics have been demonstrated: 1. A significantly lower excursion in reactivity during core life may be obtained by a nonuniform rather than uniform startup distribution of uranium. Results for the SCW with uranium distributed to provide constant radial heat generation and a core life corresponding to a uranium burnup of 7 kilograms indicated a maximum excursion in reactivity of 2.5 percent. This compares to a maximum excursion of 4.2 percent obtained for the same core life when uranium was uniformly distributed at startup. Boron was incorporated uniformly in these cores at startup. 2. It is possible to approach constant radial heat generation during the life of a cylindrical core by means of startup nonuniform radial and axial distributions of uranium and boron. Results for the SCW with a nonuniform radial distribution of uranium to provide constant radial heat generation at startup and with boron for longevity indicate relatively small departures from the initially constant radial heat generation distribution during core life.
Results for the SAR with a sinusoidal rather than uniform axial distribution of boron indicate significant improvements in the axial heat-generation distribution during the greater part of core life. 3. Uranium investments for cylindrical reactors with nonuniform radial uranium distributions which provide constant radial heat generation per unit core volume are somewhat higher than for reactors with uniform uranium concentration at startup. On the other hand, uranium investments for reactors with axial boron distributions which approach constant axial heat generation are somewhat smaller than for reactors with uniform boron distributions at startup.
Application of Statistically Derived CPAS Parachute Parameters
NASA Technical Reports Server (NTRS)
Romero, Leah M.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are then just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of the steady-state drag area (CDS) and of parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were fitted to the histograms, yielding parameters such as scale (sigma) and location (mu). Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log-normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes.
This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
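The family-fitting step can be sketched with a numpy-only maximum-likelihood comparison standing in for MATLAB's built-in distribution fits. The sample data below are synthetic placeholders, not CPAS flight data, and the candidate families are limited to the normal and log-normal cases named above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for reconstructed inflation-parameter samples.
samples = rng.lognormal(mean=1.0, sigma=0.25, size=60)

def norm_loglik(x):
    # Log-likelihood of the normal fit at its maximum-likelihood parameters.
    mu, sigma = x.mean(), x.std()
    return -0.5 * len(x) * np.log(2 * np.pi * sigma**2) - ((x - mu)**2).sum() / (2 * sigma**2)

def lognorm_loglik(x):
    # A log-normal MLE is a normal fit in log space, plus the Jacobian term.
    return norm_loglik(np.log(x)) - np.log(x).sum()

best = "lognormal" if lognorm_loglik(samples) > norm_loglik(samples) else "normal"
print(best)
```

In practice the likelihood comparison would be weighed alongside engineering judgment, as the abstract notes, rather than applied mechanically.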
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, most conventional denoising methods presuppose that the noisy data are sampled on a uniform grid, making them unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed on every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) is achieved through the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, yielding the effective curvelet coefficients for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data demonstrate the effectiveness of the proposed approach for noise attenuation in non-uniformly sampled data, compared with the conventional FDCT method and the wavelet transform.
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y.
Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs
NASA Astrophysics Data System (ADS)
Lanchier, Nicolas
2017-04-01
Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
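The simplest model in the abstract can be sketched in a few lines. One detail is hedged: the rule below lets a transfer happen only when the giver has money, a common convention that the abstract does not spell out; agent count, average money, and step budget are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def money_exchange(n_agents=10_000, avg_money=5, steps=500_000):
    """Each step, one randomly chosen agent gives one dollar to another
    randomly chosen agent, provided the giver has at least one dollar."""
    money = np.full(n_agents, avg_money)
    givers = rng.integers(0, n_agents, steps)
    takers = rng.integers(0, n_agents, steps)
    for g, t in zip(givers, takers):
        if money[g] > 0:
            money[g] -= 1
            money[t] += 1
    return money

money = money_exchange()
print(money.mean())  # 5.0: every transfer conserves the total amount of money
```

After enough steps the empirical histogram of `money` approaches the exponential (Boltzmann-Gibbs) distribution that the paper proves to be the limit.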
Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps
NASA Astrophysics Data System (ADS)
Brooks, E. M.; Stein, S. A.; Spencer, B. D.
2015-12-01
The devastating 2011 magnitude-9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well they actually perform is unknown. We explore this hotly contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded at only a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform, and why, would be valuable in making more effective policy.
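The two performance metrics being compared can be made concrete with a toy example. All numbers below are synthetic, not the Japanese shaking record; a "uniform map" is modeled as one predicted value everywhere.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic maximum-shaking observations at 500 sites.
observed = rng.gamma(2.0, 1.0, 500)
# A uniform map tuned so that ~10% of sites should exceed the prediction.
predicted = np.full(500, np.quantile(observed, 0.9))

# Metric implicit in the hazard maps: fraction of sites where the
# predicted ground motion is exceeded (target here: 0.1).
frac_exceeded = (observed > predicted).mean()

# Alternative metric: mean squared misfit between observed and predicted.
sq_misfit = ((observed - predicted) ** 2).mean()
print(frac_exceeded, sq_misfit)
```

A map can score well on one metric and poorly on the other, which is the crux of the comparison the abstract describes.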
A novel image encryption algorithm based on chaos maps with Markov properties
NASA Astrophysics Data System (ADS)
Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang
2015-02-01
In order to construct a high-complexity, secure, and low-cost image encryption algorithm, a class of chaotic maps with Markov properties was studied and such an algorithm was proposed. This kind of chaos has higher complexity than the Logistic map and the Tent map while keeping uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space, and it performs better than the original one. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which dynamically changes the permutation matrix and the key stream. Experiments show that the key stream passes the SP800-22 test. The novel image encryption algorithm resists CPA, CCA, and differential attacks. The algorithm is sensitive to the initial key and changes the distribution of the pixel values of the image. The correlation of adjacent pixels is also eliminated. Compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, closer to a true random number. It is also efficient to implement, which shows its value for common use.
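The key-stream idea can be illustrated with a minimal stream cipher. Note the hedge: the paper's cipher uses a higher-complexity chaos with Markov properties plus a coupled map lattice; the Logistic map below is only the simpler baseline the paper compares against, with the initial value x0 playing the role of the key.

```python
import numpy as np

def logistic_keystream(x0, n, r=3.99):
    """Derive a byte keystream from Logistic-map iterates x -> r*x*(1-x).
    This is an illustrative baseline, not the paper's generator."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF  # quantize each iterate to one byte
    return out

plaintext = np.frombuffer(b"attack at dawn", dtype=np.uint8)
ks = logistic_keystream(0.613, plaintext.size)
ciphertext = plaintext ^ ks
recovered = ciphertext ^ ks  # XOR with the same keystream inverts encryption
print(recovered.tobytes())   # b'attack at dawn'
```

Sensitivity to the initial key follows from the chaotic dynamics: a tiny change in `x0` produces an entirely different keystream after a few iterates.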
Experimentally generated randomness certified by the impossibility of superluminal signals.
Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K
2018-04-01
From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.
Improvement of illumination uniformity for LED flat panel light by using micro-secondary lens array.
Lee, Hsiao-Wen; Lin, Bor-Shyh
2012-11-05
LED flat panel lights are an innovative lighting product of recent years. However, current flat panel light products still have some drawbacks, such as narrow lighting areas and hot spots. In this study, a micro-secondary lens array technique was proposed and applied to the design of the light guide surface to improve illumination uniformity. With the micro-secondary lens array, the candela distribution of the LED flat panel light can be adjusted toward a batwing-like distribution to improve illumination uniformity. The experimental results show an enhancement of about 61% in floor illumination uniformity and about 20.5% in wall illumination uniformity.
Objective sea level pressure analysis for sparse data areas
NASA Technical Reports Server (NTRS)
Druyan, L. M.
1972-01-01
A computer procedure was used to analyze the pressure distribution over the North Pacific Ocean for eleven synoptic times in February, 1967. Independent knowledge of the central pressures of lows is shown to reduce the analysis errors for very sparse data coverage. The application of planned remote sensing of sea-level wind speeds is shown to make a significant contribution to the quality of the analysis especially in the high gradient mid-latitudes and for sparse coverage of conventional observations (such as over Southern Hemisphere oceans). Uniform distribution of the available observations of sea-level pressure and wind velocity yields results far superior to those derived from a random distribution. A generalization of the results indicates that the average lower limit for analysis errors is between 2 and 2.5 mb based on the perfect specification of the magnitude of the sea-level pressure gradient from a known verification analysis. A less than perfect specification will derive from wind-pressure relationships applied to satellite observed wind speeds.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origins of downward-propagating leaders and a lognormal distribution to generate the corresponding return-stroke peak currents. Downward leaders propagate vertically downward, and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that a leader attaches to the closest object within its striking distance. The statistical analysis is run for N years with an assumed ground flash density, and the output of the program is the probability of direct attachment to the objects of interest, with the corresponding peak current distribution.
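The sampling scheme can be sketched as follows. Everything numeric here is an assumption for illustration: the two-object geometry, the lognormal current parameters, and the striking-distance relation r = 10·I^0.65 (the IEEE Std 998 form), which may differ from the paper's polarity-dependent choice; the first-match attachment loop is a crude stand-in for the closest-object rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2D geometry: positions (m) and heights (m) of two objects.
objects = [{"name": "tower", "x": 0.0, "h": 30.0},
           {"name": "pad", "x": 50.0, "h": 5.0}]

def striking_distance(peak_kA):
    # Electrogeometric relation in the IEEE Std 998 form (assumed here).
    return 10.0 * peak_kA ** 0.65

n_leaders = 100_000
origins = rng.uniform(-200.0, 200.0, n_leaders)          # uniform leader origins
currents = rng.lognormal(np.log(31.1), 0.48, n_leaders)  # lognormal peak currents (kA)

hits = {o["name"]: 0 for o in objects}
for x, i_pk in zip(origins, currents):
    r = striking_distance(i_pk)
    for o in objects:
        # Horizontal capture radius of an object of height h for striking
        # distance r (attachment to the object tip before the ground plane).
        rc = np.sqrt(2.0 * r * o["h"] - o["h"] ** 2) if r >= o["h"] else r
        if abs(x - o["x"]) <= rc:
            hits[o["name"]] += 1
            break
print(hits)
```

Dividing the hit counts by the simulated flash total (set by N years times the assumed ground flash density) gives the direct-attachment probabilities, and binning `currents` over the hits gives the corresponding peak current distributions.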
A Bayesian Approach to the Paleomagnetic Conglomerate Test
NASA Astrophysics Data System (ADS)
Heslop, David; Roberts, Andrew P.
2018-02-01
The conglomerate test has served the paleomagnetic community for over 60 years as a means to detect remagnetizations. The test states that if a suite of clasts within a bed have uniformly random paleomagnetic directions, then the conglomerate cannot have experienced a pervasive event that remagnetized the clasts in the same direction. The current form of the conglomerate test is based on null hypothesis testing, which results in a binary "pass" (uniformly random directions) or "fail" (nonrandom directions) outcome. We have recast the conglomerate test in a Bayesian framework with the aim of providing more information concerning the level of support a given data set provides for a hypothesis of uniformly random paleomagnetic directions. Using this approach, we place the conglomerate test in a fully probabilistic framework that allows for inconclusive results when insufficient information is available to draw firm conclusions concerning the randomness or nonrandomness of directions. With our method, sample sets larger than those typically employed in paleomagnetism may be required to achieve strong support for a hypothesis of random directions. Given the potentially detrimental effect of unrecognized remagnetizations on paleomagnetic reconstructions, it is important to provide a means to draw statistically robust data-driven inferences. Our Bayesian analysis provides a means to do this for the conglomerate test.
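The notion of "uniformly random directions" can be made concrete with the classical resultant-length statistic. This is the non-Bayesian ingredient underlying such tests; the paper's Bayesian treatment, which can also return an inconclusive verdict, is not reproduced here, and the 0.5 cutoff below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def resultant_length(directions):
    """Normalized resultant length R of unit direction vectors:
    near 0 for uniformly random directions, exactly 1 when every
    clast carries the same remanence direction."""
    return np.linalg.norm(directions.sum(axis=0)) / len(directions)

def random_unit_vectors(n):
    # Uniform directions on the sphere via normalized Gaussian vectors.
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

uniform_clasts = random_unit_vectors(100)          # unremagnetized conglomerate
remagnetized = np.tile([0.0, 0.0, 1.0], (100, 1))  # pervasive remagnetization
print(resultant_length(uniform_clasts), resultant_length(remagnetized))
```

For uniformly random directions R scales like 1/sqrt(n), which is why the small sample sets traditional in paleomagnetism provide only weak support for randomness, the point the Bayesian reformulation makes precise.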
The nonuniformity of antibody distribution in the kidney and its influence on dosimetry.
Flynn, Aiden A; Pedley, R Barbara; Green, Alan J; Dearling, Jason L; El-Emir, Ethaar; Boxer, Geoffrey M; Boden, Robert; Begent, Richard H J
2003-02-01
The therapeutic efficacy of radiolabeled antibody fragments can be limited by nephrotoxicity, particularly when the kidney is the major route of extraction from the circulation. Conventional dose estimates in the kidney assume uniform dose deposition, but we have shown increased antibody localization in the cortex after glomerular filtration. The purpose of this study was to measure the radioactivity in cortex relative to medulla for a range of antibodies and to assess the validity of the assumption of uniformity of dose deposition in the whole kidney and in the cortex for these antibodies with a range of radionuclides. Storage phosphor plate technology (radioluminography) was used to acquire images of the distributions of a range of antibodies of various sizes, labeled with 125I, in kidney sections. This allowed the calculation of the antibody concentration in the cortex relative to the medulla. Beta-particle point dose kernels were then used to generate the dose-rate distributions from 14C, 131I, 186Re, 32P and 90Y. The correlation between the actual dose-rate distribution and the corresponding distribution calculated assuming uniform antibody distribution throughout the kidney was used to test the validity of estimating dose by assuming uniformity in the kidney and in the cortex. There was a strong inverse relationship between the ratio of the radioactivity in the cortex relative to that in the medulla and the antibody size. The nonuniformity of dose deposition was greatest with the smallest antibody fragments, and deposition became more uniform as the range of the emissions from the radionuclide increased. Furthermore, there was a strong correlation between the actual dose-rate distribution and the distribution when assuming a uniform source in the kidney for intact antibodies along with medium- to long-range radionuclides, but there was no correlation for small antibody fragments with any radionuclide or for short-range radionuclides with any antibody. 
However, when the cortex was separated from the whole kidney, the correlation between the actual dose-rate distribution and the assumed dose-rate distribution, if the source was uniform, increased significantly. During radioimmunotherapy, the extent of nonuniformity of dose deposition in the kidney depends on the properties of the antibody and radionuclide. For dosimetry estimates, the cortex should be taken as a separate source region when the radiopharmaceutical is small enough to be filtered by the glomerulus.
NASA Astrophysics Data System (ADS)
Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal
2013-10-01
Thermal imagers and the infrared array sensors used in them undergo a calibration procedure, including evaluation of their voltage sensitivity to incident radiation, during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not held to such elevated standards, it is still important that the image faithfully represents temperature variations across the scene. Detectors in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiation distribution in its image plane. To carry out an accurate non-uniformity correction, it is therefore essential to correctly predict the irradiation distribution produced by a uniform source. This article presents a non-uniformity correction method that takes the optical system's radiometry into account. Predictions of the irradiation distribution have been compared with measured irradiance values. The presented radiometric model allows fast and accurate non-uniformity correction.
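Predicting the image-plane irradiation from a uniform source is the core of such a correction. The sketch below uses the simple cos⁴ falloff law as a hypothetical stand-in for the paper's full radiometric model of the optics; the pixel-grid parameters are illustrative.

```python
import math

def cos4_profile(width, height, focal_px):
    """Predicted relative irradiance in the image plane for a uniform Lambertian
    source, under the cos^4 falloff approximation (an assumed simple model)."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    profile = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            r2 = (x - cx) ** 2 + (y - cy) ** 2            # squared off-axis distance, px
            cos_t = focal_px / math.sqrt(focal_px ** 2 + r2)
            profile[y][x] = cos_t ** 4                    # relative irradiance, 1 on axis
    return profile

def flat_field(raw, profile):
    """Divide out the predicted falloff so a uniform scene maps to a uniform image."""
    return [[raw[y][x] / profile[y][x] for x in range(len(raw[0]))]
            for y in range(len(raw))]

profile = cos4_profile(64, 48, 80.0)  # small illustrative sensor grid
```

Dividing a raw frame of a uniform source by the predicted profile isolates the detector-level non-uniformity from the optics-induced one.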
Yuan, Cheng-song; Chen, Wan; Chen, Chen; Yang, Guang-hua; Hu, Chao; Tang, Kang-lai
2015-01-01
We investigated the effects on subtalar joint stress distribution after cannulated screw insertion at different positions and directions. After establishing a 3-dimensional geometric model of a normal subtalar joint, we analyzed the most ideal cannulated screw insertion position and approach for subtalar joint stress distribution and compared the differences in loading stress, antirotary strength, and anti-inversion/eversion strength among lateral-medial antiparallel screw insertion, traditional screw insertion, and ideal cannulated screw insertion. The screw insertion approach allowing the most uniform subtalar joint loading stress distribution was lateral screw insertion near the border of the talar neck plus medial screw insertion close to the ankle joint. For stress distribution uniformity, antirotary strength, and anti-inversion/eversion strength, lateral-medial antiparallel screw insertion was superior to traditional double-screw insertion. Compared with ideal cannulated screw insertion, slightly poorer stress distribution uniformity and better antirotary strength and anti-inversion/eversion strength were observed for lateral-medial antiparallel screw insertion. Traditional single-screw insertion was better than double-screw insertion for stress distribution uniformity but worse for antirotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion was slightly worse for stress distribution uniformity than was ideal cannulated screw insertion but superior to traditional screw insertion. It was better than both ideal cannulated screw insertion and traditional screw insertion for antirotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion is an approach with simple localization, convenient operation, and good safety. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Circular, confined distribution for charged particle beams
Garnett, Robert W.; Dobelbower, M. Christian
1995-01-01
A charged particle beam line is formed with magnetic optics that transform a charged particle beam from a generally rectangular configuration into a circular beam cross-section having a uniform particle distribution at a predetermined location. First magnetic optics form a charged particle beam to a generally uniform particle distribution over a square planar area at a known first location. Second magnetic optics receive the charged particle beam with the generally square configuration and affect the charged particle beam to output the charged particle beam with a phase-space distribution effective to fold corner portions of the beam toward the core region of the beam. The beam forms a circular configuration having a generally uniform spatial particle distribution over a target area at a predetermined second location.
Circular, confined distribution for charged particle beams
Garnett, R.W.; Dobelbower, M.C.
1995-11-21
A charged particle beam line is formed with magnetic optics that transform a charged particle beam from a generally rectangular configuration into a circular beam cross-section having a uniform particle distribution at a predetermined location. First magnetic optics form a charged particle beam to a generally uniform particle distribution over a square planar area at a known first location. Second magnetic optics receive the charged particle beam with the generally square configuration and affect the charged particle beam to output the charged particle beam with a phase-space distribution effective to fold corner portions of the beam toward the core region of the beam. The beam forms a circular configuration having a generally uniform spatial particle distribution over a target area at a predetermined second location. 26 figs.
On the vertical distribution of water vapor in the Martian tropics
NASA Technical Reports Server (NTRS)
Haberle, Robert M.
1988-01-01
Although measurements of the column abundance of atmospheric water vapor on Mars have been made, measurements of its vertical distribution have not. How water is distributed in the vertical is fundamental to atmosphere-surface exchange processes, and especially to transport within the atmosphere. Several lines of evidence suggest that in the lowest several scale heights of the atmosphere, water vapor is nearly uniformly distributed. However, most of these arguments are suggestive rather than conclusive since they only demonstrate that the altitude to saturation is very high if the observed amount of water vapor is distributed uniformly. A simple argument is presented, independent of the saturation constraint, which suggests that in tropical regions, water vapor on Mars should be very nearly uniformly mixed on an annual and zonally averaged basis.
NASA Astrophysics Data System (ADS)
Wright, K. A.; Hiatt, M. R.; Passalacqua, P.
2017-12-01
The humanitarian and ecological importance of coastal deltas has led many to research the factors influencing their ecogeomorphic evolution, in hopes of predicting the response of these regions to the growing number of natural and anthropogenic threats they face. One area of this effort, in which many unresolved questions remain, concerns the hydrological connectivity between the distributary channels and interdistributary islands, which field observations and numerical modeling have shown to be significant. Island vegetation is known to affect the degree of connectivity, but the effect of the spatial distribution of vegetation on connectivity remains an important question. This research aims to determine to what extent vegetation percent cover, patch size, and plant density affect connectivity in an idealized deltaic system. A 2D hydrodynamic model was used to numerically solve the shallow water equations in an idealized channel-island complex, modeled after Wax Lake Delta in Louisiana. For each model run, vegetation patches were distributed randomly throughout the islands according to a specified percent cover and patch size. Vegetation was modeled as a modified bed roughness, which was varied to represent a range of sparse-to-dense vegetation. To determine the effect of heterogeneity, the results of each patchy scenario were compared to results from a uniform run with the same spatially-averaged roughness. It was found that, while all patchy model runs demonstrated more channel-island connectivity than comparable uniform runs, this was particularly true when vegetation patches were dense and covered <50% of the island domain. Below this threshold, high-velocity pathways form in-between patches, greatly enhancing connectivity and transport capabilities. Above this threshold, however, little discrepancy is seen between patchy and uniform model runs. 
This threshold sits within the range of percent cover values observed in natural systems, and calculations show that these pathways affect shear stresses and residence time distributions in the deltaic islands, which can have implications for the fate and transport of sediment/nutrients. These results indicate that the spatial distribution of vegetation can have a notable impact on our ability to model connectivity in deltaic systems.
NASA Technical Reports Server (NTRS)
Jahshan, S. N.; Singleterry, R. C.
2001-01-01
The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue,
Stochastic interactions of two Brownian hard spheres in the presence of depletants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karzar-Jeddi, Mehdi; Fan, Tai-Hsi, E-mail: thfan@engr.uconn.edu; Tuinier, Remco
2014-06-07
A quantitative analysis is presented for the stochastic interactions of a pair of Brownian hard spheres in non-adsorbing polymer solutions. The hard spheres are hypothetically trapped by optical tweezers and allowed to move randomly near the trapped positions. The investigation focuses on the long-time correlated Brownian motion. The mobility tensor altered by the polymer depletion effect is computed by the boundary integral method, and the corresponding random displacement is determined by the fluctuation-dissipation theorem. From our computations it follows that the presence of depletion layers around the hard spheres has a significant effect on the hydrodynamic interactions and particle dynamics as compared to the pure solvent and uniform polymer solution cases. The probability distribution functions of random walks of the two interacting hard spheres that are trapped clearly shift due to the polymer depletion effect. The results show that the reduction of the viscosity in the depletion layers around the spheres and the entropic force due to the overlapping of depletion zones have a significant influence on the correlated Brownian interactions.
Logical optimization for database uniformization
NASA Technical Reports Server (NTRS)
Grant, J.
1984-01-01
Data base uniformization refers to the building of a common user interface facility to support uniform access to any or all of a collection of distributed heterogeneous data bases. Such a system should enable a user, situated anywhere along a set of distributed data bases, to access all of the information in the data bases without having to learn the various data manipulation languages. Furthermore, such a system should leave intact the component data bases, and in particular, their already existing software. A survey of various aspects of the data bases uniformization problem and a proposed solution are presented.
Scafetta, Nicola
2011-12-01
Probability distributions of human displacements have been fit with exponentially truncated Lévy flights or fat tailed Pareto inverse power law probability distributions. Thus, people usually stay within a given location (for example, the city of residence), but with a non-vanishing frequency they visit nearby or far locations too. Herein, we show that an important empirical distribution of human displacements (range: from 1 to 1000 km) can be well fit by three consecutive Pareto distributions with simple integer exponents equal to 1, 2, and >3. These three exponents correspond to three displacement range zones of about 1 km ≲Δr≲10 km, 10 km ≲Δr≲300 km, and 300 km ≲Δr≲1000 km, respectively. These three zones can be geographically and physically well determined as displacements within a city, visits to nearby cities that may occur within just one-day trips, and visits to far locations that may require multi-day trips. The incremental integer values of the three exponents can be easily explained with a three-scale mobility cost/benefit model for human displacements based on simple geometrical constraints. Essentially, people would divide the space into three major regions (close, medium, and far distances) and would assume that the travel benefits are randomly/uniformly distributed mostly only within specific urban-like areas. The three displacement distribution zones appear to be characterized by an integer (1, 2, or >3) inverse power exponent because of the specific number (1, 2, or >3) of cost mechanisms (each of which is proportional to the displacement length). The distributions in the first two zones would be associated with Pareto distributions with exponent β = 1 and β = 2 because of simple geometrical statistical considerations due to the a priori assumption that most benefits are searched in the urban area of the city of residence or in the urban area of specific nearby cities. 
We also show, by using independent records of human mobility, that the proposed model predicts the statistical properties of human mobility below 1 km ranges, where people just walk. In the latter case, the threshold between zone 1 and zone 2 may be around 100-200 m and, perhaps, may have been evolutionarily determined by the natural human high-resolution visual range, which characterizes an area of interest where the benefits are assumed to be randomly and uniformly distributed. This rich and suggestive interpretation of human mobility may characterize other complex random walk phenomena that may also be described by an N-piece Pareto fit with increasing integer exponents. This study also suggests that distribution functions used to fit experimental probability distributions must be carefully chosen so as not to obscure the physics underlying a phenomenon.
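The zone-wise Pareto picture above can be sampled by inverse transform. The zone boundaries and exponents below follow the abstract (treating ">3" as 3 for concreteness); the mixing weights between zones are illustrative assumptions.

```python
import random

def truncated_pareto(a, b, beta, rng):
    """Inverse-transform sample from a Pareto density p(r) ~ r**-(beta+1)
    truncated to [a, b]: invert F(r) = (a**-beta - r**-beta)/(a**-beta - b**-beta)."""
    u = rng.random()
    return (a ** -beta - u * (a ** -beta - b ** -beta)) ** (-1.0 / beta)

# three displacement zones (km) with exponents 1, 2, 3 as in the three-zone model
ZONES = [(1.0, 10.0, 1.0), (10.0, 300.0, 2.0), (300.0, 1000.0, 3.0)]

def sample_displacement(zone_weights, rng):
    """Pick a zone with the given (assumed) weights, then sample within it."""
    a, b, beta = rng.choices(ZONES, weights=zone_weights)[0]
    return truncated_pareto(a, b, beta, rng)
```

With exponent β = 1 on the first zone, most sampled displacements cluster near the lower cutoff, reproducing the heavy concentration of short within-city trips.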
The NUONCE engine for LEO networks
NASA Technical Reports Server (NTRS)
Lo, Martin W.; Estabrook, Polly
1995-01-01
Typical LEO networks use constellations which provide a uniform coverage. However, the demand for telecom service is dynamic and unevenly distributed around the world. We examine a more efficient and cost effective design by matching the satellite coverage with the cyclical demand for service around the world. Our approach is to use a non-uniform satellite distribution for the network. We have named this constellation design NUONCE for Non Uniform Optimal Network Communications Engine.
Cooling water distribution system
Orr, Richard
1994-01-01
A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using an interconnected series of radial guide elements, a plurality of circumferential collector elements and collector boxes to collect and feed the cooling water into distribution channels extending along the curved surface of the steel containment vessel. The cooling water is uniformly distributed over the curved surface by a plurality of weirs in the distribution channels.
NASA Astrophysics Data System (ADS)
Memon, Imran; Shen, Yannan; Khan, Abdullah; Woidt, Carsten; Hillmer, Hartmut
2016-04-01
Miniaturized optical spectrometers can be implemented by an array of Fabry-Pérot (FP) filters. FP filters are composed of two highly reflecting parallel mirrors and a resonance cavity. Each filter transmits a small spectral band (filter line) depending on its individual cavity height. The optical nanospectrometer, a miniaturized FP-based spectrometer, implements 3D NanoImprint technology for the fabrication of multiple FP filter cavities in a single process step. However, it is challenging to avoid the dependency of residual layer (RL) thickness on the shape of the printed patterns in NanoImprint. In a nanospectrometer, the filter cavities vary in height between neighboring FP filters; the volume of each cavity therefore varies, causing the RL to vary slightly or noticeably between different filters. This is one of the few disadvantages of NanoImprint using soft templates such as substrate conformal imprint lithography, which is used in this paper. The advantages of large area soft templates can be fully exploited only if the problem of laterally inhomogeneous RLs can be avoided or reduced considerably. In the case of the nanospectrometer, non-uniform RLs lead to random variations in the designed cavity heights resulting in the shift of desired filter lines. To achieve highly uniform RLs, we report a volume-equalized template design with the lateral distribution of 64 different cavity heights into several units, each comprising four cavity heights. The average volume of each unit is kept constant to obtain uniform filling of imprint material per unit area. The imprint results, based on the volume-equalized template, demonstrate highly uniform RLs of 110 nm thickness.
Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary
2018-04-29
Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
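The reported ordering of ICC across subject distributions can be reproduced with a small simulation. The one-way ICC(1) estimator, the rater noise level, and the Beta-distributed "concave" subjects below are illustrative choices, not the paper's exact simulation design.

```python
import random
import statistics

def icc1(scores):
    """One-way random-effects ICC(1) from per-subject tuples of ratings:
    (MSB - MSW) / (MSB + (k - 1) * MSW) via the usual ANOVA mean squares."""
    n, k = len(scores), len(scores[0])
    grand = statistics.mean(x for row in scores for x in row)
    means = [statistics.mean(row) for row in scores]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(scores, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def simulate(dist, n=500, k=2, noise=0.15, seed=3):
    """Simulate k raters scoring n subjects drawn from a given true-score distribution."""
    rng = random.Random(seed)
    rows = [None] * n
    for i in range(n):
        true = dist(rng)
        rows[i] = tuple(true + rng.gauss(0, noise) for _ in range(k))
    return icc1(rows)

icc_uniform = simulate(lambda r: r.random())               # uniform subject distribution
icc_concave = simulate(lambda r: r.betavariate(0.5, 0.5))  # mass at the extremes
```

Because Beta(0.5, 0.5) subjects have larger between-subject variance than uniform subjects while rater error is unchanged, the concave design yields the higher ICC, matching the ordering described above.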
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2013-12-01
Climate change may alter the spatial distribution, composition, structure, and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous with sparse trees. The dynamics of ecotones are mainly determined by the growth and competition of individual plants in the communities. Therefore it is necessary to calculate solar radiation absorbed by individual plants for understanding and predicting their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities. The model is developed based on geometrical optical relationships assuming crowns of woody plants are rectangular boxes with uniform leaf area density. The model calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community. The solar radiation received on the ground is also calculated. We tested the model by comparing with the analytical solutions of random distributions of plants. The tests show that the model results are very close to the averages of the random distributions. This model is efficient in computation, and is suitable for ecological models to simulate long-term transient responses of plant communities to climate change.
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Using Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
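A software analogue of turning uniform 12-bit values into nonuniform outputs is the table-based inverse transform sketched below. It loosely mimics the comparator-and-memory idea (thresholds stored in memory, compared against a uniform value) but is an illustration, not the circuit's exact pipelined algorithm.

```python
import random
from bisect import bisect_right

def build_thresholds(pmf, bits=12):
    """Memory table: cumulative probabilities scaled to integer thresholds
    spanning the full range of a uniform `bits`-bit value."""
    total = 1 << bits
    thresholds, acc = [], 0.0
    for p in pmf:
        acc += p
        thresholds.append(min(total, round(acc * total)))
    thresholds[-1] = total  # guard against floating-point rounding shortfall
    return thresholds

def sample(thresholds, rng, bits=12):
    """Compare a uniform 12-bit value against the thresholds; the index of the
    first threshold above it is the nonuniform output symbol."""
    u = rng.getrandbits(bits)
    return bisect_right(thresholds, u)
```

Quantizing the CDF to 12 bits while emitting 8-bit symbols keeps the per-symbol probability error below one part in 2^12, which is one plausible reason for the wider internal generators.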
Enhanced hyperuniformity from random reorganization.
Hexner, Daniel; Chaikin, Paul M; Levine, Dov
2017-04-25
Diffusion relaxes density fluctuations toward a uniform random state whose variance in regions of volume ℓ^d scales as ℓ^(-d). Systems whose fluctuations decay faster, as ℓ^(-λ) with λ > d, are called hyperuniform. The larger λ, the more uniform, with systems like crystals achieving the maximum value λ = d + 1. Although finite temperature equilibrium dynamics will not yield hyperuniform states, driven, nonequilibrium dynamics may. Such is the case, for example, in a simple model where overlapping particles are each given a small random displacement. Above a critical particle density ρ_c, the system evolves forever, never finding a configuration where no particles overlap. Below ρ_c, however, it eventually finds such a state, and stops evolving. This "absorbing state" is hyperuniform up to a length scale ξ, which diverges at ρ_c. An important question is whether hyperuniformity survives noise and thermal fluctuations. We find that hyperuniformity of the absorbing state is not only robust against noise, diffusion, or activity, but that such perturbations reduce fluctuations toward their limiting behavior, ℓ^(-(d+1)), a uniformity similar to random close packing and early universe fluctuations, but with arbitrary controllable density.
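The overlap-and-kick model described above can be sketched in one dimension. Particle number, diameter, and kick size below are illustrative choices well below the critical density, so an absorbing state is found quickly.

```python
import random

def random_org_step(pos, diameter, kick, rng):
    """One sweep: every particle overlapping another receives a small random
    displacement. Returns the number of active (overlapping) particles."""
    active = set()
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if abs(pos[i] - pos[j]) < diameter:
                active.update((i, j))
    for i in active:
        pos[i] += rng.uniform(-kick, kick)
    return len(active)

def relax(n=10, diameter=0.02, kick=0.05, seed=0, max_steps=20_000):
    """Evolve until an absorbing (overlap-free) state is found, or give up."""
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n)]  # initial positions on a line
    for step in range(max_steps):
        if random_org_step(pos, diameter, kick, rng) == 0:
            return step, pos  # absorbing state: dynamics stop
    return max_steps, pos
```

Once no pair overlaps, no particle moves again, which is exactly the "stops evolving" behavior of the absorbing state; above the critical density the loop would exhaust `max_steps` instead.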
NASA Astrophysics Data System (ADS)
Maccone, C.
This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH, the (average) distance between any two nearby habitable planets in the Galaxy is shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. 
The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means the average ET distance is about 20 times greater than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
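The core of the SEH, that a product of independent positive random factors is approximately lognormal because its logarithm is a CLT sum, can be checked numerically. The uniform factors and 50% spread below are illustrative, not Dole's actual inputs or the paper's 10% standard deviation.

```python
import math
import random
import statistics

def seh_products(means, spread=0.5, n=50_000, seed=1):
    """Sketch of the SEH: draw products of independent positive random factors,
    each uniform on [m*(1-spread), m*(1+spread)] around its mean m."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        prod = 1.0
        for m in means:
            prod *= rng.uniform(m * (1 - spread), m * (1 + spread))
        out.append(prod)
    return out

# log(product) is a sum of independent log-factors, hence approximately
# Gaussian by the CLT, so the product itself is approximately lognormal
samples = seh_products([1.0] * 10)
log_mean = statistics.mean(math.log(s) for s in samples)
```

Note that the mean of the log-product is 10 times E[ln U(0.5, 1.5)] = 1.5 ln 1.5 + 0.5 ln 2 − 1 ≈ −0.0452, so the product's median sits below the product of the means, a characteristic lognormal asymmetry.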
Spatial effect of conical angle on optical-thermal distribution for circumferential photocoagulation
Truong, Van Gia; Park, Suhyun; Tran, Van Nam; Kang, Hyun Wook
2017-01-01
A uniformly diffusing applicator can be advantageous for laser treatment of tubular tissue. The current study investigated various conical angles for diffuser tips as a critical factor for achieving radially uniform light emission. A customized goniometer was employed to characterize the spatial uniformity of the light propagation. An ex vivo model was developed to quantitatively compare the temperature development and irreversible tissue coagulation. The 10-mm diffuser tip with angle at 25° achieved a uniform longitudinal intensity profile (i.e., 0.90 ± 0.07) as well as a consistent thermal denaturation on the tissue. The proposed conical angle can be instrumental in determining the uniformity of light distribution for the photothermal treatment of tubular tissue. PMID:29296495
Statistical time-dependent model for the interstellar gas
NASA Technical Reports Server (NTRS)
Gerola, H.; Kafatos, M.; Mccray, R.
1974-01-01
We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
Leveraging ecological theory to guide natural product discovery.
Smanski, Michael J; Schlatter, Daniel C; Kinkel, Linda L
2016-03-01
Technological improvements have accelerated natural product (NP) discovery and engineering to the point that systematic genome mining for new molecules is on the horizon. NP biosynthetic potential is not equally distributed across organisms, environments, or microbial life histories, but instead is enriched in a number of prolific clades. Also, NPs are not equally abundant in nature; some are quite common and others markedly rare. Armed with this knowledge, random 'fishing expeditions' for new NPs are increasingly hard to justify. Understanding the ecological and evolutionary pressures that drive the non-uniform distribution of NP biosynthesis provides a rational framework for the targeted isolation of strains enriched in new NP potential. Additionally, ecological theory leads to testable hypotheses regarding the roles of NPs in shaping ecosystems. Here we review several recent strain prioritization practices and discuss the ecological and evolutionary underpinnings for each. Finally, we offer perspectives on leveraging microbial ecology and evolutionary biology for future NP discovery.
A tuneable approach to uniform light distribution for artificial daylight photodynamic therapy.
O'Mahoney, Paul; Haigh, Neil; Wood, Kenny; Brown, C Tom A; Ibbotson, Sally; Eadie, Ewan
2018-06-16
Implementation of daylight photodynamic therapy (dPDT) is somewhat limited by variable weather conditions. Light sources have been employed to provide artificial dPDT indoors, with low irradiances and longer treatment times. Uniform light distribution across the target area is key to ensuring effective treatment, particularly for large areas. A novel light source is developed with a tuneable direction of light emission in order to meet this challenge. The wavelength composition of the novel light source is controlled such that the protoporphyrin-IX (PpIX) weighted spectra of both the light source and daylight match. The uniformity of the light source is characterised on a flat surface, a model head and a model leg. For context, a typical conventional PDT light source is also characterised. Additionally, the wavelength uniformity across the treatment site is characterised. The PpIX-weighted spectrum of the novel light source matches the PpIX-weighted daylight spectrum, with irradiance values within the bounds for effective dPDT. By tuning the direction of light emission, improvements are seen in uniformity across large anatomical surfaces. Wavelength uniformity is discussed. We have developed a light source that addresses the challenges in uniform, multiwavelength light distribution for large-area artificial dPDT across curved anatomical surfaces. Copyright © 2018. Published by Elsevier B.V.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-10-01
Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.
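The distractor-sampling manipulation described above can be sketched in a few lines. The numbers here (35 items, a 180-degree mean hue, standard deviations matched between the two distribution shapes) are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def distractor_hues(kind, mean=180.0, spread=30.0, n=35):
    """One display's distractor hues (deg). The Gaussian case is matched to
    the uniform block's standard deviation, spread/sqrt(3) (an assumption)."""
    if kind == "uniform":
        return rng.uniform(mean - spread, mean + spread, n)
    if kind == "gaussian":
        return rng.normal(mean, spread / np.sqrt(3.0), n)
    raise ValueError(kind)

# On a test trial, the target's distance in feature space from the previous
# distractor mean probes the learned representation of the ensemble.
for kind in ("uniform", "gaussian"):
    hues = distractor_hues(kind)
    print(kind, round(float(hues.mean()), 1), round(float(hues.std()), 1))
```

Matching the standard deviations lets any response-time difference be attributed to the shape of the distribution rather than its width.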
Trapping of Neutrinos in Extremely Compact Stars and the Influence of Brane Tension on This Process
NASA Astrophysics Data System (ADS)
Stuchlík, Zdeněk; Hladík, Jan; Urbanec, Martin
We present estimates of the efficiency of neutrino trapping in braneworld extremely compact stars, using the simplest model with a uniform distribution of energy density, assuming massless neutrinos and a uniform distribution of neutrino emissivity. Computations have been done for two different uniform-density stellar solutions in the Randall-Sundrum II type braneworld, namely one with a Reissner-Nordström-type geometry and a second derived by Germani and Maartens.
Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei
2018-02-21
Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential to construct beyond-von-Neumann computers. In particular, unipolar RRAM is more promising because its single-polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of random activation of conducting filaments and consequently cannot meet the strict uniformity requirement for logic-in-memory application. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a peculiar metal cathode will react spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself can act as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments can be strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing their random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.
High density, uniformly distributed W/UO2 for use in Nuclear Thermal Propulsion
NASA Astrophysics Data System (ADS)
Tucker, Dennis S.; Barnes, Marvin W.; Hone, Lance; Cook, Steven
2017-04-01
An inexpensive, quick method has been developed to obtain uniform distributions of UO2 particles in a tungsten matrix utilizing 0.5 wt percent low density polyethylene. Powders were sintered in a Spark Plasma Sintering (SPS) furnace at 1600 °C, 1700 °C, 1750 °C, 1800 °C and 1850 °C using a modified sintering profile. This resulted in a uniform distribution of UO2 particles in a tungsten matrix with high densities, reaching 99.46% of theoretical for the sample sintered at 1850 °C. The powder process is described and the results of this study are given below.
NASA Astrophysics Data System (ADS)
Lin, Hai-Nan; Li, Xin; Chang, Zhe
2017-04-01
Linear polarization has been observed in both the prompt phase and afterglow of some bright gamma-ray bursts (GRBs). Polarization in the prompt phase spans a wide range, and may be as high as ≳ 50%. In the afterglow phase, however, it is usually below 10%. According to the standard fireball model, GRBs are produced by synchrotron radiation and Compton scattering process in a highly relativistic jet ejected from the central engine. It is widely accepted that prompt emissions occur in the internal shock when shells with different velocities collide with each other, and the magnetic field advected by the jet from the central engine can be ordered on a large scale. On the other hand, afterglows are often assumed to occur in the external shock when the jet collides with interstellar medium, and the magnetic field produced by the shock through, for example, Weibel instability, is possibly random. In this paper, we calculate the polarization properties of the synchrotron self-Compton process from a highly relativistic jet, in which the magnetic field is randomly distributed in the shock plane. We also consider the generalized situation where a uniform magnetic component perpendicular to the shock plane is superposed on the random magnetic component. We show that it is difficult for the polarization to be larger than 10% if the seed electrons are isotropic in the jet frame. This may account for the observed upper limit of polarization in the afterglow phase of GRBs. In addition, if the random and uniform magnetic components decay with time at different speeds, then the polarization angle may change 90° during the temporal evolution. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11375203, 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
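The qualitative claim that many randomly oriented field patches suppress the net polarization can be illustrated with a toy Stokes-vector average. The patch picture, the 0.7 intrinsic polarization, and the intensity weighting below are simplifying assumptions for illustration, not the paper's full synchrotron self-Compton calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
PI_SYNC = 0.7   # intrinsic polarization of one coherent field patch (assumed)

def net_polarization(n_patches, xi=0.0):
    """Net linear polarization from n_patches with independent random field
    orientations in the shock plane, plus an ordered component of relative
    intensity weight xi at a fixed polarization angle (toy model)."""
    theta = rng.uniform(0.0, np.pi, n_patches)   # patch polarization angles
    q = PI_SYNC * np.cos(2.0 * theta)            # Stokes Q of each patch
    u = PI_SYNC * np.sin(2.0 * theta)            # Stokes U of each patch
    Q = (q.mean() + xi * PI_SYNC) / (1.0 + xi)   # intensity-weighted average
    U = u.mean() / (1.0 + xi)
    return float(np.hypot(Q, U))

for n in (10, 100, 1000):
    print(n, round(net_polarization(n), 3))      # falls off roughly as 1/sqrt(n)
```

With a purely random field the net polarization averages toward zero as patches are added; a dominant ordered component (large xi) restores it toward the single-patch value.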
Borgia, G C; Brown, R J; Fantazzini, P
2000-12-01
The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T(1) data, and all had fixed data spacings, uniform in log-time. However, for T(2) data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T(2) data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. 
Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise. Copyright 2000 Academic Press.
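The flavor of such a multiexponential inversion can be sketched with a fixed-penalty (Tikhonov-style) nonnegative fit. UPEN's defining feature, the adaptive negative-feedback penalty, is deliberately omitted here, and all parameters (component times, noise level, penalty weight) are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Synthetic T2 decay with two components near 10 ms and 100 ms
t = np.linspace(0.5, 400, 200)             # ms, linear spacing as for T2 data
T2 = np.logspace(0, 3, 60)                 # candidate relaxation times, ms
K = np.exp(-t[:, None] / T2[None, :])      # exponential kernel matrix
true = np.zeros(60); true[20] = 1.0; true[40] = 0.5
y = K @ true + 0.01 * rng.standard_normal(t.size)

# Second-difference smoothing operator with a FIXED weight; UPEN would
# instead adapt the penalty locally so sharp lines are not over-smoothed.
L = np.diff(np.eye(60), 2, axis=0)
lam = 0.5
A = np.vstack([K, lam * L])
b = np.concatenate([y, np.zeros(L.shape[0])])
g, _ = nnls(A, b)                          # nonnegative T2 spectrum
print("peak T2s (ms):", T2[g > 0.1 * g.max()])
```

Note that, as the abstract observes, the data spacing enters only through the kernel rows, so linearly spaced T2 data pose no special difficulty.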
Tanner, Bertrand C.W.; McNabb, Mark; Palmer, Bradley M.; Toth, Michael J.; Miller, Mark S.
2014-01-01
Diminished skeletal muscle performance with aging, disuse, and disease may be partially attributed to the loss of myofilament proteins. Several laboratories have found a disproportionate loss of myosin protein content relative to other myofilament proteins, but due to methodological limitations, the structural manifestation of this protein loss is unknown. To investigate how variations in myosin content affect ensemble cross-bridge behavior and force production, we simulated muscle contraction in the half-sarcomere as myosin was removed either i) uniformly, from the Z-line end of thick filaments, or ii) randomly, along the length of thick filaments. Uniform myosin removal decreased force production, showing a slightly steeper force-to-myosin content relationship than the 1:1 relationship that would be expected from the loss of cross-bridges. Random myosin removal also decreased force production, but this decrease was less than observed with uniform myosin loss, largely due to increased myosin attachment time (ton) and fractional cross-bridge binding with random myosin loss. These findings support our prior observations that prolonged ton may augment force production in single fibers with randomly reduced myosin content from chronic heart failure patients. These simulations also illustrate that the pattern of myosin loss along thick filaments influences ensemble cross-bridge behavior and maintenance of force throughout the sarcomere. PMID:24486373
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Dept of Radiation Oncology, New York Weill Cornell Medical Ctr, New York, NY
Purpose: To develop a generalized statistical model that incorporates the treatment uncertainty from the rotational error of the single iso-center technique, and to calculate the additional PTV (planning target volume) margin required to compensate for this error. Methods: The random vectors for setup and additional rotational errors in the three-dimensional (3D) patient coordinate system were assumed to follow the 3D independent normal distribution with zero mean, and standard deviations σx, σy, σz for setup error and a uniform σR for rotational error. Both random vectors were summed, normalized and transformed to spherical coordinates to derive the chi distribution with 3 degrees of freedom for the radial distance ρ. The PTV margin was determined using the critical value of this distribution at the 0.05 significance level, so that 95% of the time the treatment target would be covered by ρ. The additional PTV margin required to compensate for the rotational error was calculated as a function of σx, σy, σz and σR. Results: The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2 mm PTV margin (or σx = σy = σz = 0.7 mm), a σR = 0.32 mm will decrease the PTV coverage from 95% to 90% of the time, or an additional 0.2 mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR > 0.3 mm will lead to an additional PTV margin that cannot be ignored, and the maximal σR that can be ignored is 0.0064 rad (or 0.37°) for an iso-to-target distance of 5 cm, or 0.0032 rad (or 0.18°) for an iso-to-target distance of 10 cm. Conclusions: The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the iso-center and the target is large.
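The margin recipe described above is easy to reproduce. A minimal sketch assuming, as stated, a chi distribution with 3 degrees of freedom for the normalized radial distance (via scipy's `chi`), using the paper's example numbers:

```python
import numpy as np
from scipy.stats import chi

def ptv_margin(sigma_setup, sigma_rot=0.0, coverage=0.95):
    """Radial PTV margin covering the target `coverage` of the time, assuming
    independent zero-mean normal errors on each axis: a per-axis setup SD
    sigma_setup plus an isotropic rotational SD sigma_rot, so the radial
    distance rho follows a chi distribution with 3 degrees of freedom."""
    sigma = np.hypot(sigma_setup, sigma_rot)   # combined per-axis SD (mm)
    return chi.ppf(coverage, df=3) * sigma     # 95% critical radius (mm)

print(round(ptv_margin(0.7), 2))        # ~1.96 mm, the quoted ~2 mm margin
print(round(ptv_margin(0.7, 0.32), 2))  # ~2.15 mm, i.e. ~0.2 mm extra
```

The 95th percentile of a 3-degree-of-freedom chi distribution is about 2.80, which is the multiplier converting the combined per-axis standard deviation into the radial margin.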
Altering surface charge nonuniformity on individual colloidal particles.
Feick, Jason D; Chukwumah, Nkiru; Noel, Alexandra E; Velegol, Darrell
2004-04-13
Charge nonuniformity (σζ) was altered on individual polystyrene latex particles and measured using the novel experimental technique of rotational electrophoresis. It has recently been shown that unaltered sulfated latices often have significant charge nonuniformity (σζ = 100 mV) on individual particles. Here it is shown that anionic polyelectrolytes and surfactants reduce the native charge nonuniformity on negatively charged particles by 80% (σζ = 20 mV), even while leaving the average surface charge density almost unchanged. Reduction of charge nonuniformity occurs as large domains of nonuniformity are minimized, giving a more random distribution of charge on individual particle surfaces. Targeted reduction of charge nonuniformity opens new opportunities for the dispersion of nanoparticles and the oriented assembly of particles.
A statistical model for radar images of agricultural scenes
NASA Technical Reports Server (NTRS)
Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.
1982-01-01
The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
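A minimal sketch of the model's third assumption (a uniformly distributed true reflectivity per target class, observed through multiplicative speckle) shows how the scene histogram becomes a mixture of simple densities. The field counts, reflectivity range, and single-look exponential speckle are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scene: equal-area target classes, each with a "true" reflectivity drawn
# from a uniform distribution, seen through unit-mean exponential speckle.
n_fields, pixels_per_field = 50, 400
reflectivity = rng.uniform(0.2, 1.0, n_fields)                 # one per class
speckle = rng.exponential(1.0, (n_fields, pixels_per_field))   # multiplicative
image = (reflectivity[:, None] * speckle).ravel()              # intensity image

# The scene histogram mixes exponential densities whose means are uniformly
# distributed, which is the structure the model exploits.
hist, edges = np.histogram(image, bins=60, density=True)
print(round(float(image.mean()), 2))   # close to the mean reflectivity, 0.6
```

Averaging the per-class densities over the uniform reflectivity distribution is what lets the model predict the histogram of an entire multi-crop scene.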
NASA Astrophysics Data System (ADS)
Guo, Zhi; Gao, Xing; Shi, Heng; Wang, Weiming
2013-04-01
In this study, the crustal and uppermost mantle shear wave velocities beneath the Japanese islands have been determined by inversion from seismic ambient noise tomography using data recorded at 75 Full Range Seismograph Network of Japan broad-band seismic stations, which are uniformly distributed across the Japanese islands. By cross-correlating 2 yr of vertical component seismic ambient noise recordings, we are able to extract Rayleigh wave empirical Green's functions, which are subsequently used to measure phase velocity dispersion in the period band of 6-50 s. The dispersion data are then inverted to yield 2-D tomographic phase velocity maps and 3-D shear wave velocity models. Our results show that the velocity variations at short periods (˜10 s), or in the uppermost crust, correlate well with the major known surface geological and tectonic features. In particular, the distribution of low-velocity anomalies shows good spatial correlation with active faults, volcanoes and terrains of sediment exposure, whereas the high-velocity anomalies are mainly associated with the mountain ranges. We also observe that large upper crustal earthquakes (5.0 ≤ M ≤ 8.0, depth ≤ 25 km) mainly occurred in low-velocity anomalies or along the boundary between low- and high-velocity anomalies, suggesting that large upper crustal earthquakes do not strike randomly or uniformly; rather they are inclined to nucleate within or adjacent to low-velocity areas.
NASA Technical Reports Server (NTRS)
Yang, Weidong; Marshak, Alexander; Kostinski, Alexander B.; Varnai, Tamas
2013-01-01
Motivated by the physical picture of shape-dependent air resistance and, consequently, shape-induced differential sedimentation of dust particles, we searched for and found evidence of dust particle asphericity affecting the evolution and distribution of the dust-scattered light depolarization ratio (δ). Specifically, we examined a large data set of Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) observations of Saharan dust from June to August 2007. Observing along a typical transatlantic dust track, we find that (1) median δ is uniformly distributed between 2 and 5 km altitudes as the elevated dust leaves the west coast of Africa, thereby indicating uniformly random mixing of particle shapes with height; (2) vertical homogeneity of median δ breaks down during the westward transport: between 2 and 5 km, δ increases with altitude, and this increase becomes more pronounced with westward progress; (3) δ tends to increase at higher altitudes (greater than 4 km) and decrease at lower altitudes (less than 4 km) during the westward transport. All these features are captured qualitatively by a minimal model (two shapes only), suggesting that shape-induced differential settling and consequent sorting indeed contribute significantly to the observed temporal evolution and vertical stratification of dust properties. By implicating particle shape as a likely cause of gravitational sorting, these results will affect estimates of radiative transfer through Saharan dust layers.
Passive containment cooling water distribution device
Conway, Lawrence E.; Fanto, Susan V.
1994-01-01
A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using a series of radial guide elements and cascading weir boxes to collect and then distribute the cooling water into a series of distribution areas through a plurality of cascading weirs. The cooling water is then uniformly distributed over the curved surface by a plurality of weir notches in the face plate of the weir box.
NASA Astrophysics Data System (ADS)
Sato, Haruo; Hayakawa, Toshihiko
2014-10-01
Short-period seismograms of earthquakes are complex, especially beneath volcanoes, where the S wave mean free path is short and low velocity bodies composed of melt or fluid are expected, in addition to random velocity inhomogeneities, as scattering sources. Resonant scattering inherent in a low velocity body shows trapping and release of waves with a delay time. Focusing on this delay-time phenomenon, we must seriously consider multiple resonant scattering processes. Since wave phases are complex in such a scattering medium, the radiative transfer theory has often been used to synthesize the variation of the mean square (MS) amplitude of waves; however, resonant scattering has not been well incorporated into the conventional radiative transfer theory. Here, as a simple mathematical model, we study the sequence of isotropic resonant scattering of a scalar wavelet by low velocity spheres at low frequencies, where the inside velocity is supposed to be low enough. We first derive the total scattering cross-section per time for each order of scattering as the convolution kernel representing the decaying scattering response. Then, for a random and uniform distribution of such identical resonant isotropic scatterers, we build the propagator of the MS amplitude by using causality, a geometrical spreading factor and the scattering loss. Using those propagators and convolution kernels, we formulate the radiative transfer equation for a spherically impulsive radiation from a point source. The synthesized MS amplitude time trace shows a dip just after the direct arrival, a delayed swelling, and then a decaying tail at large lapse times. The delayed swelling is a prominent effect of resonant scattering. The space distribution of the synthesized MS amplitude shows a swelling near the source region, and it becomes bell-shaped, like a diffusion solution, at large lapse times.
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites by row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes using four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate by row and column was infinity, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
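Gray correlation (grey relational) analysis, used above for the comprehensive evaluation, can be sketched with the standard relational-coefficient formula. The scoring data in the usage example are hypothetical, not the study's measurements:

```python
import numpy as np

def grey_relational_grades(data, rho=0.5):
    """Grey relational analysis sketch. Rows of `data` are [reference;
    alternatives], columns are evaluation indexes; returns each alternative's
    equal-weight relational grade (closeness to the reference series)."""
    X = np.asarray(data, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    Xn = (X - lo) / np.where(hi > lo, hi - lo, 1.0)   # min-max normalization
    delta = np.abs(Xn[1:] - Xn[0])                    # deviation from reference
    gmax = delta.max()
    if gmax == 0.0:
        return np.ones(delta.shape[0])                # identical to reference
    xi = (delta.min() + rho * gmax) / (delta + rho * gmax)  # coefficients
    return xi.mean(axis=1)                            # equal-weight grades

# Hypothetical scores: [convenience, site uniformity, accuracy, precision]
data = [[1.00, 1.00, 1.000, 1.000],   # standard method as reference
        [1.00, 0.98, 0.995, 0.991],   # uniform sampling
        [0.60, 0.70, 0.980, 0.975]]   # an inferior scheme
print(grey_relational_grades(data))   # higher grade = closer to the standard
```

The distinguishing coefficient rho = 0.5 is the conventional default; weighted grades would replace the final mean with a weighted average.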
Impact of isotopic disorders on thermal transport properties of nanotubes and nanowires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Tao; Kang, Wei; Wang, Jianxiang, E-mail: jxwang@pku.edu.cn
2015-01-21
We present a one-dimensional lattice model to describe thermal transport in isotopically doped nanotubes and nanowires. The thermal conductivities thus predicted, as a function of isotopic concentration, agree well with recent experiments and other simulations. Our results show that for any given concentration of isotopic atoms in a lattice without sharp atomic interfaces, the maximum thermal conductivity is attained when the isotopic atoms are placed regularly with equal spacing, whereas the minimum is achieved when they are randomly inserted with a uniform distribution. Non-uniformity of the disorder can further tune the thermal conductivity between these two values. Moreover, the dependence of the thermal conductivity on the nanoscale feature size becomes weak at low temperature when disorder exists. In addition, when self-consistent thermal reservoirs are included to describe diffusive nanomaterials, the thermal conductivities predicted by our model are in line with the results of macroscopic theories with an interfacial effect. Our results suggest that disorder provides an additional degree of freedom for tuning the thermal properties of nanomaterials in many technological applications, including nanoelectronics, solid-state lighting, energy conservation, and conversion.
NASA Astrophysics Data System (ADS)
Wattanasakulpong, Nuttawit; Chaikittiratana, Arisara; Pornpeerakeat, Sacharuck
2018-06-01
In this paper, vibration analysis of functionally graded porous beams is carried out using the third-order shear deformation theory. The beams have uniform and non-uniform porosity distributions across their thickness and both ends are supported by rotational and translational springs. The material properties of the beams such as elastic moduli and mass density can be related to the porosity and mass coefficient utilizing the typical mechanical features of open-cell metal foams. The Chebyshev collocation method is applied to solve the governing equations derived from Hamilton's principle, which is used in order to obtain the accurate natural frequencies for the vibration problem of beams with various general and elastic boundary conditions. Based on the numerical experiments, it is revealed that the natural frequencies of the beams with asymmetric and non-uniform porosity distributions are higher than those of other beams with uniform and symmetric porosity distributions.
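The Chebyshev collocation method used above can be illustrated on a much simpler eigenproblem than the third-order shear-deformation beam: a fixed-fixed string u'' = -λu on [-1, 1], whose exact eigenvalues (nπ/2)² provide a check on the collocation matrix. This is only a method sketch, not the paper's formulation:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x (Trefethen's recipe)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)       # Chebyshev extreme points
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                    # negative-sum trick
    return D, x

# Fixed-fixed string: collocate u'' = -lam*u, drop boundary rows/columns
N = 24
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                 # interior points only (u = 0 at ends)
lam = np.sort(-np.linalg.eigvals(D2).real)
print(lam[:3])                           # exact: 2.4674, 9.8696, 22.2066
```

For the porous beam, the same differentiation matrices are applied to the coupled governing equations, with the spring-supported boundary conditions imposed through the collocation rows at the ends.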
NASA Astrophysics Data System (ADS)
Jing, Haiquan; He, Xuhui; Zou, Yunfeng; Wang, Hanfeng
2018-03-01
Stay cables are important load-bearing structural elements of cable-stayed bridges. Suppressing the large vibrations of the stay cables under the external excitations is of worldwide concern for the bridge engineers and researchers. Over the past decade, the use of crosstie has become one of the most practical and effective methods. Extensive research has led to a better understanding of the mechanics of cable networks, and the effects of different parameters, such as length ratio, mass-tension ratio, and segment ratio on the effectiveness of the crosstie have been investigated. In this study, uniformly distributed elastic crossties serve to replace the traditional single, or several cross-ties, aiming to delay "mode localization." A numerical method is developed by replacing the uniformly distributed, discrete elastic cross-tie model with an equivalent, continuously distributed, elastic cross-tie model in order to calculate the modal frequencies and mode shapes of the cable-crosstie system. The effectiveness of the proposed method is verified by comparing the elicited results with those obtained using the previous method. The uniformly distributed elastic cross-ties are shown to significantly delay "mode localization."
Ground States of Random Spanning Trees on a D-Wave 2X
NASA Astrophysics Data System (ADS)
Hall, J. S.; Hobl, L.; Novotny, M. A.; Michielsen, Kristel
The performances of two D-Wave 2 machines (476 and 496 qubits) and of a 1097-qubit D-Wave 2X were investigated. Each chip has a Chimera interaction graph G. Problem input consists of values for the fields hj and for the two-qubit interactions Ji,j of an Ising spin-glass problem formulated on G. Output is returned in terms of a spin configuration {sj}, with sj = ±1. We generated random spanning trees (RSTs) uniformly distributed over all spanning trees of G. On the 476-qubit D-Wave 2, RSTs were generated on the full chip with Ji,j = -1 and hj = 0 and solved one thousand times. The distribution of solution energies and the average magnetization of each qubit were determined. On both the 476- and 1097-qubit machines, four identical spanning trees were generated on each quadrant of the chip. The statistical independence of these regions was investigated. In another study, on the D-Wave 2X, one hundred RSTs with random Ji,j ∈ {-1, 1} and hj = 0 were generated on the full chip. Each RST problem was solved one hundred times and the number of times the ground state energy was found was recorded. This procedure was repeated for square subgraphs, with dimensions ranging from 7×7 to 11×11. Supported in part by NSF Grants DGE-0947419 and DMR-1206233. D-Wave time provided by D-Wave Systems and by the USRA Quantum Artificial Intelligence Laboratory Research Opportunity.
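The abstract does not state how the RSTs were sampled; Wilson's loop-erased random-walk algorithm is one standard way to draw a spanning tree uniformly at random from all spanning trees of a graph. A sketch on a small grid graph standing in for the Chimera graph:

```python
import random

def wilson_ust(nodes, neighbors, seed=0):
    """Spanning tree drawn uniformly from all spanning trees of the graph,
    via Wilson's algorithm. `neighbors(v)` returns the nodes adjacent to v."""
    rng = random.Random(seed)
    nodes = list(nodes)
    in_tree = {nodes[0]}                 # nodes[0] serves as the root
    parent = {}
    for start in nodes[1:]:
        nxt = {}
        u = start
        while u not in in_tree:          # random walk until the tree is hit;
            nxt[u] = rng.choice(neighbors(u))   # overwriting erases loops
            u = nxt[u]
        u = start
        while u not in in_tree:          # retrace the loop-erased path
            in_tree.add(u)
            parent[u] = nxt[u]
            u = nxt[u]
    return parent                        # tree edges as child -> parent

# Example: 8x8 grid graph as a stand-in for a Chimera-like interaction graph
N = 8
def grid_neighbors(v):
    x, y = v
    return [(a, b) for a, b in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= a < N and 0 <= b < N]

tree = wilson_ust([(x, y) for x in range(N) for y in range(N)], grid_neighbors)
print(len(tree))   # 63 edges for 64 nodes
```

Setting Ji,j = -1 on the sampled tree edges (and zero elsewhere) then yields the ferromagnetic spanning-tree instances described above.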
Skupsky, S.; Craxton, R.S.; Soures, J.
1990-10-02
In order to control the intensity of a laser beam so that its intensity varies uniformly and provides uniform illumination of a target, such as a laser fusion target, a broad bandwidth laser pulse is spectrally dispersed spatially so that the frequency components thereof are spread apart. A disperser (grating) provides an output beam which varies spatially in wavelength in at least one direction transverse to the direction of propagation of the beam. Temporal spread (time delay) across the beam is corrected by using a phase delay device (a time delay compensation echelon). The dispersed beam may be amplified with laser amplifiers and frequency converted (doubled, tripled or quadrupled in frequency) with nonlinear optical elements (birefringent crystals). The spectral variation across the beam is compensated by varying the angle of incidence on one of the crystals with respect to the crystal optical axis utilizing a lens which diverges the beam. Another lens after the frequency converter may be used to recollimate the beam. The frequency converted beam is recombined so that portions of different frequency interfere and, unlike interference between waves of the same wavelength, there results an intensity pattern with rapid temporal oscillations which average out rapidly in time thereby producing uniform illumination on target. A distributed phase plate (also known as a random phase mask), through which the spectrally dispersed beam is passed and then focused on a target, is used to provide the interference pattern which becomes nearly modulation free and uniform in intensity in the direction of the spectral variation. 16 figs.
Ultra-broadband and planar sound diffuser with high uniformity of reflected intensity
NASA Astrophysics Data System (ADS)
Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Yang, Jing; Yang, Jun; Cheng, Jian-Chun
2017-09-01
Schroeder diffusers, as a classical design of acoustic diffusers proposed over 40 years ago, play key roles in many practical scenarios ranging from architectural acoustics to noise control to particle manipulation. Despite the great success of conventional acoustic diffusers, it is still worth pursuing ideal acoustic diffusers that are essentially expected to produce perfect sound diffuse reflection within the unlimited bandwidth. Here, we propose a different mechanism for designing acoustic diffusers to overcome the basic limits in intensity uniformity and working bandwidth in the previous designs and demonstrate a practical implementation by acoustic metamaterials with dispersionless phase-steering capability. In stark contrast to the existing production of diffuse fields relying on random scattering of sound energy by using a specific mathematical number sequence of periodically distributed unit cells, we directly mold the reflected wavefront into the desired shape by precisely manipulating the local phases of individual subwavelength metastructures. We also benchmark our design via numerical simulation with a commercially available Schroeder diffuser, and the results verify that our proposed diffuser scatters incident acoustic energy into all directions more uniformly within an ultra-broad band regardless of the incident angle. Furthermore, our design enables further improvement of the working bandwidth just by simply downscaling each individual element. With ultra-broadband functionality and high uniformity of reflected intensity, our metamaterial-based production of the diffusive field opens a route to the design and application of acoustic diffusers and may have a significant impact on various fields such as architectural acoustics and medical ultrasound imaging/treatment.
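For reference, the "specific mathematical number sequence" behind a classical Schroeder quadratic-residue diffuser is simple to generate. The prime N, design frequency, and speed of sound below are illustrative assumptions, not values from the paper:

```python
def qrd_well_depths(N=17, design_freq=500.0, c=343.0):
    """Well depths (m) of a 1-D quadratic-residue Schroeder diffuser:
    s_n = n^2 mod N, depth_n = s_n * wavelength / (2N), with N prime."""
    wavelength = c / design_freq
    return [(n * n % N) * wavelength / (2 * N) for n in range(N)]

depths = qrd_well_depths()
print([round(d, 3) for d in depths])   # symmetric sequence of 17 depths
```

The phase variation comes entirely from these fixed well depths, so the diffuser works well only over roughly an octave band around the design frequency; removing that frequency dependence is precisely what the metamaterial approach above targets.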
Systematic and random variations in digital Thematic Mapper data
NASA Technical Reports Server (NTRS)
Duggin, M. J. (Principal Investigator); Sakhavat, H.
1985-01-01
Radiance recorded by any remote sensing instrument will contain noise consisting of both systematic and random variations. Systematic variations may be due to sun-target-sensor geometry, atmospheric conditions, and the interaction of the spectral characteristics of the sensor with those of upwelling radiance. Random variations in the data may be caused by variations in the nature and heterogeneity of the ground cover, by variations in atmospheric transmission, and by the interaction of these variations with the sensing device. It is important to be aware of the extent of random and systematic errors in recorded radiance data across ostensibly uniform ground areas in order to assess their impact on quantitative image analysis procedures for both the single-date and the multidate cases. It is the intention here to examine the systematic and random variations in digital radiance data recorded in each band by the Thematic Mapper over crop areas which are ostensibly uniform and free from visible cloud.
Robustness of power systems under a democratic-fiber-bundle-like model
NASA Astrophysics Data System (ADS)
Yaǧan, Osman
2015-06-01
We consider a power system with N transmission lines whose initial loads (i.e., power flows) L1, ..., LN are independent and identically distributed with P_L(x) = P[L ≤ x]. The capacity C_i defines the maximum flow allowed on line i and is assumed to be given by C_i = (1 + α)L_i, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p* = 1 − E[L] / max_x(P[L > x](αx + E[L | L > x])), where E[·] is the expectation operator; (ii) we derive conditions on the distribution P_L(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions (uniform, Pareto, and Weibull), showing that with the minimum load L_min and mean load E[L] fixed, the Pareto distribution is the worst (in terms of robustness) among the three, whereas the Weibull distribution is the best when its shape parameter is selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p* is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
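As a numerical illustration of the critical attack size formula p* = 1 − E[L] / max_x(P[L > x](αx + E[L | L > x])), the sketch below evaluates it for uniformly distributed loads. The interval [L_min, L_max] and tolerance α are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): loads uniform on [L_min, L_max]
L_min, L_max, alpha = 10.0, 30.0, 0.5
mean_L = 0.5 * (L_min + L_max)                # E[L]

x = np.linspace(L_min, L_max, 100_001)[:-1]   # drop the endpoint where P[L > x] = 0
tail = (L_max - x) / (L_max - L_min)          # P[L > x] for the uniform law
cond_mean = 0.5 * (x + L_max)                 # E[L | L > x] for the uniform law

# Critical attack size of the first-order breakdown
p_star = 1.0 - mean_L / np.max(tail * (alpha * x + cond_mean))
print(round(p_star, 4))  # 0.2 for these parameters
```

For this uniform law the maximand (L_max − x)(x + L_max(1/2) + αx/…) is decreasing over the support, so the maximum sits at x = L_min and p* works out to 0.2 here.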
Do simple screening statistical tools help to detect reporting bias?
Pirracchio, Romain; Resche-Rigon, Matthieu; Chevret, Sylvie; Journois, Didier
2013-09-02
As a result of reporting bias or fraud, false or misunderstood findings may represent the majority of published research claims. This article provides simple methods that might help appraise the quality of the reporting of randomized controlled trials (RCTs). The evaluation roadmap proposed herein relies on four steps: evaluation of the distribution of the reported variables; evaluation of the distribution of the reported p values; data simulation using parametric bootstrap; and explicit computation of the p values. Such an approach is illustrated using published data from a retracted RCT comparing hydroxyethyl starch versus albumin-based priming for cardiopulmonary bypass. Despite obviously nonnormal distributions, several variables are presented as if they were normally distributed. The set of 16 p values testing for differences in baseline characteristics across randomized groups did not follow a Uniform distribution on [0,1] (p = 0.045). The p values obtained by explicit computation differed from the results reported by the authors for the following two variables: urine output at 5 hours (calculated p value < 10^-6, reported p ≥ 0.05) and packed red blood cells (PRBC) transfused during surgery (calculated p value = 0.08, reported p < 0.05). Finally, the parametric bootstrap yielded p values > 0.05 in only 5 of the 10,000 simulated datasets for urine output 5 hours after surgery, and showed that the p value for PRBC transfused during surgery had less than a 50% chance of falling below 0.05 (3,920/10,000 with p value < 0.05). Such simple evaluation methods might offer some warning signals. However, it should be emphasized that these methods do not allow one to conclude that error or fraud is present; rather, they should be used to justify requesting access to the raw data.
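The second step of the roadmap, checking whether a set of baseline p values is consistent with Uniform(0, 1), can be sketched with a one-sample Kolmogorov-Smirnov statistic. The 16 simulated p values below are illustrative, not data from the retracted trial, and the 5% critical value is the rough large-sample approximation.

```python
import random

random.seed(1)

# 16 hypothetical baseline p values; under honest randomization they
# should be indistinguishable from Uniform(0, 1) draws.
p_values = sorted(random.random() for _ in range(16))

# One-sample Kolmogorov-Smirnov statistic against the Uniform(0, 1) CDF
n = len(p_values)
d = max(max((i + 1) / n - p, p - i / n) for i, p in enumerate(p_values))

# Rough large-sample 5% critical value: 1.358 / sqrt(n)
critical = 1.358 / n ** 0.5
print(d < critical)
```

A value of d above the critical threshold would be the kind of warning signal the roadmap describes; for genuinely uniform draws it stays below the threshold about 95% of the time.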
Pathwise upper semi-continuity of random pullback attractors along the time axis
NASA Astrophysics Data System (ADS)
Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke
2018-07-01
The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically of the form {A_t(⋅)}_{t∈ℝ} with each A_t(⋅) a random set. This paper is concerned with the nature of this time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω) for each fixed ω is equivalent to the uniform compactness of the local union ⋃_{s∈I} A_s(ω), where I ⊂ ℝ is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that no additional conditions are required to prove the above locally uniform compactness and upper semi-continuity; in this sense the two properties appear to be general properties satisfied by a large number of real models.
Stacked waveguide reactors with gradient embedded scatterers for high-capacity water cleaning
Ahsan, Syed Saad; Gumus, Abdurrahman; Erickson, David
2015-11-04
We present a compact water-cleaning reactor with stacked layers of waveguides containing gradient patterns of optical scatterers that enable uniform light distribution and augmented water-cleaning rates. Previous photocatalytic reactors using immersion, external, or distributive lamps suffer from poor light distribution that impedes scalability. Here, we use an external UV source to direct photons into stacked waveguide reactors, where we scatter the photons uniformly over the length of the waveguide onto thin films of TiO2 catalysts. We also show a 4.5-fold improvement in activity over uniform scatterer designs, demonstrate degradation of 67% of the organic dye, and characterize the degradation rate constant.
Ahsan, Syed Saad; Pereyra, Brandon; Jung, Erica E; Erickson, David
2014-10-20
Most existing photobioreactors do a poor job of distributing light uniformly due to shading effects. One method by which this could be improved is through the use of internal wave-guiding structures incorporating engineered light scattering schemes. By varying the density of these scatterers, one can control the spatial distribution of light inside the reactor enabling better uniformity of illumination. Here, we compare a number of light scattering schemes and evaluate their ability to enhance biomass accumulation. We demonstrate a design for a gradient distribution of surface scatterers with uniform lateral scattering intensity that is superior for algal biomass accumulation, resulting in a 40% increase in the growth rate.
NASA Astrophysics Data System (ADS)
Delfani, M. R.; Latifi Shahandashti, M.
2017-09-01
In this paper, within the complete form of Mindlin's second strain gradient theory, the elastic field of an isolated spherical inclusion embedded in an infinitely extended homogeneous isotropic medium due to a non-uniform distribution of eigenfields is determined. These eigenfields, in addition to eigenstrain, comprise eigen double and eigen triple strains. After the derivation of a closed-form expression for Green's function associated with the problem, two different cases of non-uniform distribution of the eigenfields are considered as follows: (i) radial distribution, i.e. the distributions of the eigenfields are functions of only the radial distance of points from the centre of inclusion, and (ii) polynomial distribution, i.e. the distributions of the eigenfields are polynomial functions in the Cartesian coordinates of points. While the obtained solution for the elastic field of the latter case takes the form of an infinite series, the solution to the former case is represented in a closed form. Moreover, Eshelby's tensors associated with the two mentioned cases are obtained.
NASA Astrophysics Data System (ADS)
Yang, Ce; Wang, Yingjun; Lao, Dazhong; Tong, Ding; Wei, Longyu; Liu, Yixiong
2016-08-01
The inlet recirculation characteristics of a double-suction centrifugal compressor with unsymmetrical inlet structures were studied numerically, focusing on three issues: the amounts and differences of the inlet recirculation under different working conditions, the circumferential non-uniform distributions of the inlet recirculation, and the recirculation velocity distributions in the upstream slot of the rear impeller. The results show that there are differences between the recirculation of the front impeller and that of the rear impeller across the whole working range. At design speed, the recirculation flow rate of the rear impeller is larger than that of the front impeller over the large-flow range, but over the small-flow range it is smaller. Under different working conditions, the recirculation velocity distributions of the front and rear impellers are non-uniform along the circumferential direction, and their degrees of non-uniformity are quite different, varying as the working conditions change. The circumferential non-uniformity of the recirculation velocity of the front impeller and its distribution are determined by the static pressure distribution of the front impeller, whereas that of the rear impeller is determined by the coupled effects of the inlet flow distortion of the rear impeller, the circumferentially unsymmetrical distribution of the upstream slot, and the asymmetric structure of the volute. At the design-flow and small-flow conditions, the recirculation velocities at different circumferential positions along the mean line of the upstream-slot cross-section of the rear impeller are quite different, and the forms of the recirculation velocity distribution on the two sides of the mean line differ. The recirculation velocity distributions in the cross-section of the upstream slot depend on the static pressure distributions in the intake duct.
Local Burn-Up Effects in the NBSR Fuel Element
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown N. R.; Hanson A.; Diamond, D.
2013-01-31
This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when local burn-up effects are incorporated and has lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element, and the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel-element average. A uniform burn-up distribution throughout a half-element also causes a bias in fuel-element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.
Apparatus and process to enhance the uniform formation of hollow glass microspheres
Schumacher, Ray F
2013-10-01
A process and apparatus are provided for enhancing the formation of a uniform population of hollow glass microspheres. A burner head is used which directs incoming glass particles away from the cooler perimeter of the flame cone of the gas burner and distributes the glass particles in a uniform manner throughout the more evenly heated portions of the flame zone. As a result, as the glass particles soften and are expanded by a released nucleating gas to form hollow glass microspheres, the resulting microspheres have a more uniform size and property distribution, having experienced a more homogeneous heat-treatment process.
A steady-state model of the lunar ejecta cloud
NASA Astrophysics Data System (ADS)
Christou, Apostolos
2014-05-01
Every airless body in the solar system is surrounded by a cloud of ejecta produced by the impact of interplanetary meteoroids on its surface [1]. Such "dust exospheres" have been observed around the Galilean satellites of Jupiter [2,3]. The prospect of long-term robotic and human operations on the Moon by the US and other countries has rekindled interest in the subject [4]. This interest has culminated in the currently ongoing investigation of the Moon's dust exosphere by the LADEE spacecraft [5]. Here a model is presented of a ballistic, collisionless, steady-state population of ejecta launched vertically at randomly distributed times and velocities and moving under constant gravity. Assuming a uniform distribution of launch times, I derive closed-form solutions for the probability density functions (pdfs) of the height distribution of particles and the distribution of their speeds in a rest frame, both at the surface and at altitude. The treatment is then extended to particle motion with respect to a moving platform such as an orbiting spacecraft. These expressions are compared with numerical simulations under lunar surface gravity where the underlying ejection speed distribution is (a) uniform and (b) a power law. I discuss the predictions of the model, its limitations, and how it can be validated against near-surface and orbital measurements. [1] Gault, D., Shoemaker, E.M., Moore, H.J., 1963, NASA TN-D 1767. [2] Kruger, H., Krivov, A.V., Hamilton, D.P., Grun, E., 1999, Nature, 399, 558. [3] Kruger, H., Krivov, A.V., Sremcevic, M., Grun, E., 2003, Icarus, 164, 170. [4] Grun, E., Horanyi, M., Sternovsky, Z., 2011, Planetary and Space Science, 59, 1672. [5] Elphic, R.C., Hine, B., Delory, G.T., Salute, J.S., Noble, S., Colaprete, A., Horanyi, M., Mahaffy, P., and the LADEE Science Team, 2014, LPSC XLV, LPI Contr. 1777, 2677.
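A minimal Monte Carlo sketch of the steady-state assumption: with launch times uniformly distributed, the age of a vertically launched particle at the observation instant is uniform over its ballistic flight time, which fixes the height distribution. The single launch speed is an illustrative assumption (the paper works with uniform and power-law speed distributions).

```python
import random

g, v = 1.62, 50.0        # lunar surface gravity (m/s^2), one hypothetical launch speed (m/s)
T = 2 * v / g            # ballistic time of flight of a single vertical hop

random.seed(0)
# Uniform launch times imply a uniform particle "age" on [0, T) at observation
heights = [v * t - 0.5 * g * t * t
           for t in (random.uniform(0, T) for _ in range(100_000))]

h_max = v * v / (2 * g)  # maximum reachable height
print(round(max(heights) / h_max, 2))  # approaches 1.0: the full column is sampled
```

Histogramming these heights would recover the steady-state density, which piles up near the apex where the vertical velocity vanishes.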
NASA Astrophysics Data System (ADS)
Allen, C. S.; Korkan, K. D.
1991-01-01
A methodology for predicting the performance and acoustics of counterrotating propeller configurations was modified to take into account the effects of a non-uniform free-stream velocity distribution entering the disk plane. The method utilizes the analytical techniques of Lock and Theodorsen, as described by Davidson, to determine the influence of the non-uniform free-stream velocity distribution in the prediction of the steady aerodynamic loads. The unsteady load contribution is determined according to the procedure of Leseture, with rigid helical tip vortices simulating the previous rotations of each propeller. The steady and unsteady loads are combined to obtain the total blade loading required for acoustic prediction, which employs the Ffowcs Williams-Hawkings equation as simplified by Succi under the assumption of compact sources. The numerical method is used to redesign the previous commuter-class counterrotating propeller configuration of Denner. The specifications, performance, and acoustics of the new design are compared with the results of Denner, thereby determining the influence of the non-uniform free-stream velocity distribution on these metrics.
A novel polyimide based micro heater with high temperature uniformity
Yu, Shifeng; Wang, Shuyu; Lu, Ming; ...
2017-02-06
MEMS-based micro heaters are a key component in micro bio-calorimetry, nondispersive infrared gas sensors, semiconductor gas sensors, and microfluidic actuators. A micro heater with a uniform temperature distribution in the heating area and a short response time is desirable in ultrasensitive temperature-dependent measurements. In this study, we propose a novel micro heater design that reaches a uniform temperature over a large heating area by optimizing the heating power density distribution in the heating area. A polyimide membrane is utilized as the substrate to reduce the thermal mass and heat loss, which allows for fast thermal response as well as a simplified fabrication process. A gold and titanium heating element is fabricated on the flexible polyimide substrate using standard MEMS techniques. The temperature distribution in the heating area for a given power input is measured by an IR camera and is consistent with FEA simulation results. This design achieves fast response and uniform temperature distribution, making it well suited for programmable heating such as impulse and step driving.
Partial entrainment of gravel bars during floods
Konrad, Christopher P.; Booth, Derek B.; Burges, Stephen J.; Montgomery, David R.
2002-01-01
Spatial patterns of bed material entrainment by floods were documented at seven gravel bars using arrays of metal washers (bed tags) placed in the streambed. The observed patterns were used to test a general stochastic model in which bed material entrainment is a spatially independent, random process where the probability of entrainment is uniform over a gravel bar and a function of the peak dimensionless shear stress τ₀* of the flood. The fraction of tags missing from a gravel bar during a flood, or partial entrainment, had an approximately normal distribution with respect to τ₀*, with a mean value (50% of the tags entrained) of 0.085 and standard deviation of 0.022 (root-mean-square error of 0.09). Variation in partial entrainment for a given τ₀* demonstrated the effects of flow conditioning on bed strength, with lower values of partial entrainment after intermediate-magnitude floods (0.065 < τ₀* < 0.08) than after higher-magnitude floods. Although the probability of bed material entrainment was approximately uniform over a gravel bar during individual floods and independent from flood to flood, regions of preferential stability and instability emerged at some bars over the course of a wet season. Deviations from spatially uniform and independent bed material entrainment were most pronounced for reaches with varied flow and in consecutive floods of small to intermediate magnitude.
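The fitted stochastic model above (partial entrainment approximately normal in the peak dimensionless shear stress, mean 0.085 and standard deviation 0.022) can be written directly as a normal CDF:

```python
import math

# Fraction of bed tags entrained as a function of peak dimensionless shear
# stress, per the fitted normal model (mean 0.085, standard deviation 0.022)
def partial_entrainment(tau_star, mu=0.085, sigma=0.022):
    return 0.5 * (1.0 + math.erf((tau_star - mu) / (sigma * math.sqrt(2.0))))

print(round(partial_entrainment(0.085), 2))  # 0.5: half the tags move at the mean
```

By construction the curve passes through 0.5 at τ₀* = 0.085 and rises to near-complete entrainment a few standard deviations above it.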
Distribution and regularity of injection from a multicylinder fuel-injection pump
NASA Technical Reports Server (NTRS)
Rothrock, A M; Marsh, E T
1936-01-01
This report presents the results of performance tests conducted on a six-cylinder commercial fuel-injection pump that was adjusted to give uniform fuel distribution among the cylinders at a throttle setting of 0.00038 pound per injection and a pump speed of 750 revolutions per minute. The throttle setting and pump speed were then varied through the operating range to determine the uniformity of distribution and regularity of injection.
Flow coating apparatus and method of coating
Hanumanthu, Ramasubrahmaniam; Neyman, Patrick; MacDonald, Niles; Brophy, Brenor; Kopczynski, Kevin; Nair, Wood
2014-03-11
Disclosed is a flow coating apparatus comprising a slot that dispenses a coating material in an approximately uniform manner along a distribution blade, which increases uniformity by means of surface tension and transfers the uniform flow of coating material onto an inclined substrate such as glass, solar panels, windows, or part of an electronic display. Also disclosed is a method of flow coating a substrate using the apparatus: the substrate is positioned correctly relative to the distribution blade; a pre-wetting step is completed in which both the blade and the substrate are completely wetted with a pre-wet solution prior to dispensing of the coating material onto the distribution blade from the slot and hence onto the substrate. Thereafter the substrate is removed from the distribution blade and allowed to dry, thereby forming a coating.
Continuous-variable quantum key distribution in uniform fast-fading channels
NASA Astrophysics Data System (ADS)
Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano
2018-03-01
We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.
NASA Astrophysics Data System (ADS)
Rana, Dipankar; Gangopadhyay, Gautam
2003-01-01
We have analyzed the energy transfer process in a dendrimer supermolecule using a classical random walk model and an Eyring model of membrane permeation. Here the energy transfer is considered as a multiple-barrier-crossing process driven by thermal hopping on the backbone of a Cayley tree. It is shown that the mean residence time and the mean first passage time, which involve explicit local escape rates, depend upon the temperature, the size of the molecule, the core branching, and the nature of the potential energy landscape along the Cayley tree architecture. Branching tends to create a uniform distribution of mean residence time over the generations, and the distribution depends upon the interplay of funneling and local transition rates. The calculation of the steady-state flux from the Eyring model also gives a useful idea of the rate when the dendrimeric system is considered as an open system in which the core absorbs the transported energy, like a photosynthetic reaction center, while a continuous supply of external energy is maintained at the peripheral nodes. The effects of the above parameters on the steady-state flux are shown to bear a qualitative resemblance to the results of the mean first passage time approach.
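As a toy version of the barrier-crossing picture (not the paper's model, which includes funneling and temperature-dependent local rates), the sketch below estimates the mean first passage time of an unbiased hopper moving between the generations of a tree, from the periphery to an absorbing core. The number of generations G is an illustrative choice.

```python
import random

random.seed(4)

G = 5  # generations between the periphery and the core (illustrative)

def passage_time():
    """Steps taken by an unbiased hopper started at the rim (generation G)
    to first reach the absorbing core (generation 0)."""
    g, t = G, 0
    while g > 0:
        g += -1 if g == G else random.choice((-1, 1))  # reflect at the rim
        t += 1
    return t

trials = 20_000
mfpt = sum(passage_time() for _ in range(trials)) / trials
print(round(mfpt))  # close to G**2 = 25 for this reflecting/absorbing walk
```

For this reflecting/absorbing chain the exact mean first passage time from the rim is G², so the estimate should sit near 25; adding funneling (a bias toward the core) would shorten it toward the G-step direct path.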
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have addressed the problem of generating Poisson disks on surfaces, owing to the complicated nature of surfaces. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space; this feature allows us to generate Poisson disk patterns on arbitrary surfaces in ℝⁿ. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.
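For contrast with the parallel, surface-intrinsic method described above, the classic serial dart-throwing baseline in the planar unit square can be sketched as follows; the minimum radius and candidate count are illustrative.

```python
import random, math

random.seed(2)

# Serial dart-throwing sketch of Poisson disk sampling in the unit square
# (the paper's parallel, surface-intrinsic algorithm is far more involved).
r = 0.1          # minimum allowed distance between accepted samples
samples = []
for _ in range(5000):                      # candidate darts
    x, y = random.random(), random.random()
    if all((x - u) ** 2 + (y - v) ** 2 >= r * r for u, v in samples):
        samples.append((x, y))

# Every accepted pair respects the Poisson disk constraint
min_gap = min(math.dist(p, q) for p in samples for q in samples if p is not q)
print(min_gap >= r)
```

The inherently sequential accept/reject loop is exactly what the priority-based conflict resolution in the paper parallelizes away.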
Scientific impact: the story of your big hit
NASA Astrophysics Data System (ADS)
Sinatra, Roberta; Wang, Dashun; Deville, Pierre; Song, Chaoming; Barabasi, Albert-Laszlo
2014-03-01
A gradual increase in performance through learning and practice characterizes most trades, from sport to music to engineering, and common sense suggests this to be true in science as well. This prompts us to ask: what are the precise patterns that lead to scientific excellence? Does performance indeed improve throughout a scientific career? Are there quantifiable signs of an impending scientific hit? Using citation-based measures as a proxy of impact, we show that (i) major discoveries are neither preceded by works of increasing impact nor followed by works of higher impact, (ii) the time ranking of the highest-impact work in a scientist's career is uniformly random, the higher probability of a major discovery in the middle of a scientific career being due only to changes in productivity, and (iii) there is a strong correlation between the highest-impact work and the average impact of a scientist's work. These findings suggest that the impact of a paper is drawn randomly from an impact distribution that is unique to each scientist. We present a model that allows one to reconstruct the individual impact distribution, making it possible to create synthetic careers that exhibit the same properties as the real data and to define a ranking based on the overall impact of a scientist. RS acknowledges support from the James McDonnell Foundation.
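The random-impact picture behind finding (ii) can be sketched numerically: if each paper's impact is an i.i.d. draw from the scientist's own impact distribution, the career position of the biggest hit is uniform. The lognormal impact law and the fixed career length are illustrative assumptions, not the paper's fitted model.

```python
import random

random.seed(3)

careers, papers = 20_000, 30  # illustrative synthetic careers

# Impact of each paper is an i.i.d. lognormal draw; record where in the
# career the highest-impact paper ("the big hit") lands.
hit_rank = [max(range(papers), key=lambda i: random.lognormvariate(0.0, 1.0))
            for _ in range(careers)]

# Under the random-impact rule the hit lands in the first third ~1/3 of the time
early = sum(1 for r in hit_rank if r < papers // 3) / careers
print(abs(early - 1 / 3) < 0.02)  # True: timing is uniformly random
```

The same fraction would be observed for the middle or final third; any apparent mid-career clustering in real data must then come from productivity varying over the career, as the abstract notes.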
Experimental and numerical modeling research of rubber material during microwave heating process
NASA Astrophysics Data System (ADS)
Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling
2018-05-01
This paper investigates the heating behavior of block rubber by experimental and numerical methods. The COMSOL Multiphysics 5.0 software was utilized for the numerical simulation work, and the effects of microwave frequency, power, and sample size on the temperature distribution were examined. The effect of frequency on the temperature distribution is pronounced: the maximum and minimum temperatures of the block rubber first increase and then decrease with increasing frequency. The microwave heating efficiency is highest at a frequency of 2450 MHz, although more uniform temperature distributions are obtained at other frequencies. The influence of microwave power on the temperature distribution is also remarkable: the smaller the power, the more uniform the temperature distribution in the block rubber, while the effect of power on heating efficiency is minor. The effect of sample size on the temperature distribution is evident as well: the smaller the sample, the more uniform the temperature distribution, but also the lower the microwave heating efficiency. These results can serve as references for research on heating rubber material by microwave technology.
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^\star \in \mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^\star + \mathbf{z}^\star + \mathbf{w}$, where $\mathbf{z}^\star \in \mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w} \in \mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix, and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e., $\mu(\mathbf{U}) \sim 1/\sqrt{m}$), we prove that with high probability one can recover an $s$-sparse signal exactly and stably by $\ell_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix), for which we prove the uniform recovery guarantee provided that the corruption is sparse in a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
Regional ventilation-perfusion distribution is more uniform in the prone position
NASA Technical Reports Server (NTRS)
Mure, M.; Domino, K. B.; Lindahl, S. G.; Hlastala, M. P.; Altemeier, W. A.; Glenny, R. W.
2000-01-01
The arterial blood PO(2) is increased in the prone position in animals and humans because of an improvement in ventilation (VA) and perfusion (Q) matching. However, the mechanism of improved VA/Q is unknown. This experiment measured regional VA/Q heterogeneity and the correlation between VA and Q in supine and prone positions in pigs. Eight ketamine-diazepam-anesthetized, mechanically ventilated pigs were studied in supine and prone positions in random order. Regional VA and Q were measured using fluorescent-labeled aerosols and radioactive-labeled microspheres, respectively. The lungs were dried at total lung capacity and cubed into 603-967 small (approximately 1.7 cm³) pieces. In the prone position the homogeneity of the ventilation distribution increased (P = 0.030) and the correlation between VA and Q increased (correlation coefficient = 0.72 +/- 0.08 and 0.82 +/- 0.06 in supine and prone positions, respectively, P = 0.03). The homogeneity of the VA/Q distribution increased in the prone position (P = 0.028). We conclude that the improvement in VA/Q matching in the prone position is secondary to increased homogeneity of the VA distribution and increased correlation of regional VA and Q.
Distribution of Rb atoms on the antirelaxation RbH coating
NASA Astrophysics Data System (ADS)
Zhang, Yi; Wang, Zhiguo; Xia, Tao
2017-04-01
We observe an extension of the relaxation time of 131Xe with an RbH coating, and compare the deposition of Rb atoms on the inner surface of the vapor cell with and without the RbH coating to investigate the mechanism by which the coating prolongs relaxation. From 5 × 5 μm² microscopy images, we find that on the bare glass surface the Rb atoms form large, randomly separated islands, whereas on the RbH coating they deposit as many regular longitudinal stripes of small islands. We attribute these different distributions to the different molecular interactions of the RbH coating and the bare glass with Rb atoms, and we build a simple physical model to explain this phenomenon. On the one hand, the small islands, in other words the relatively uniform distribution on the RbH coating, may result from the stronger interaction of Rb with RbH than with the bare glass. On the other hand, the regular longitudinal stripes may stem from grain boundaries related to the macroscopic shape of the vapor cell, and this longitudinal distribution can generate a cylindrical electric gradient, as used in some earlier theoretical references.
Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach
NASA Astrophysics Data System (ADS)
Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.
2016-01-01
We present simulations of neutron-rich matter at sub-nuclear densities, such as supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV, employing a full Skyrme interaction on a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space with a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi-distributed plane waves, the calculations provide a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Furthermore, we observe gyroid-like structures, discussed e.g. in [2], which form spontaneously for certain values of the simulation box length. The ρ-T map of pasta shapes is broadly consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance of the cell-density distribution as a measure to distinguish pasta matter from uniform matter.
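The initial-state recipe above (uniformly random α-particle positions, Maxwell-Boltzmann momenta) can be sketched as follows; the box size, temperature, α mass, and natural-unit convention are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def init_alphas(n_alpha, box_len, temperature_mev, mass_mev=3727.4, seed=1):
    """Sketch of the initial state described above: alpha particles placed
    uniformly at random in a periodic box, with momenta drawn from a
    Maxwell-Boltzmann distribution at temperature T (natural units:
    each Cartesian momentum component ~ Normal(0, sqrt(m*T)))."""
    rng = np.random.default_rng(seed)
    positions = rng.uniform(0.0, box_len, size=(n_alpha, 3))   # fm
    sigma = np.sqrt(mass_mev * temperature_mev)                # MeV/c
    momenta = rng.normal(0.0, sigma, size=(n_alpha, 3))
    return positions, momenta

pos, mom = init_alphas(2000, box_len=48.0, temperature_mev=5.0)
```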
Post-processing of metal matrix composites by friction stir processing
NASA Astrophysics Data System (ADS)
Sharma, Vipin; Singla, Yogesh; Gupta, Yashpal; Raghuwanshi, Jitendra
2018-05-01
In metal matrix composites, a non-uniform distribution of reinforcement particles adversely affects the mechanical properties. It is therefore of great interest to explore post-processing techniques that can eliminate heterogeneity in the particle distribution. Friction stir processing is a relatively new technique used for post-processing of metal matrix composites to improve the homogeneity of the particle distribution. In friction stir processing, the synergistic effect of stirring, extrusion and forging results in grain refinement, reduction of reinforcement particle size, uniformity of the particle distribution, reduced microstructural heterogeneity and elimination of defects.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2014-07-01
Climate change may alter the spatial distribution, composition, structure and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous, with sparse trees. The dynamics of ecotones are mainly determined by the growth and competition of individual plants in the communities. It is therefore necessary to calculate the solar radiation absorbed by individual plants in order to understand and predict their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate the solar radiation absorbed by individual plants in sparse, heterogeneous woody plant communities. The model is based on geometrical-optical relationships, assuming that the crowns of woody plants are rectangular boxes with uniform leaf area density. The model calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community. The solar radiation received on the ground is also calculated. We tested the model by comparing its results with those obtained for random distributions of plants; the tests show that the model results are very close to the averages over the random distributions. The model is computationally efficient and can be included in vegetation models to simulate long-term transient responses of plant communities to climate change. The code and a user's manual are provided as a Supplement to the paper.
Nonlinear Reduced Order Random Response Analysis of Structures with Shallow Curvature
NASA Technical Reports Server (NTRS)
Przekop, Adam; Rizzi, Stephen A.
2006-01-01
The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures with shallow curvature under random loadings. For reduced order analysis, the modal basis selection must be capable of reflecting the coupling in both the linear and nonlinear stiffness. For the symmetric shallow arch under consideration, four categories of modal basis functions are defined. Those having symmetric transverse displacements (ST modes) can be designated as transverse dominated (ST-T) modes and in-plane dominated (ST-I) modes. Those having anti-symmetric transverse displacements (AT modes) can similarly be designated as transverse dominated (AT-T) modes and in-plane dominated (AT-I) modes. The response of an aluminum arch under a uniformly distributed transverse random loading is investigated. Results from nonlinear modal simulations made using various modal bases are compared with those obtained from a numerical simulation in physical degrees-of-freedom. While inclusion of ST-T modes is important for all response regimes, it is found that the ST-I modes become increasingly important in the nonlinear response regime, and that AT-T and AT-I modes are critical in the autoparametric regime.
Nonlinear Reduced Order Random Response Analysis of Structures With Shallow Curvature
NASA Technical Reports Server (NTRS)
Przekop, Adam; Rizzi, Stephen A.
2005-01-01
The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures with shallow curvature under random loadings. For reduced order analysis, the modal basis selection must be capable of reflecting the coupling in both the linear and nonlinear stiffness. For the symmetric shallow arch under consideration, four categories of modal basis functions are defined. Those having symmetric transverse displacements (ST modes) can be designated as transverse dominated (ST-T) modes and in-plane dominated (ST-I) modes. Those having anti-symmetric transverse displacements (AT modes) can similarly be designated as transverse dominated (AT-T) modes and in-plane dominated (AT-I) modes. The response of an aluminum arch under a uniformly distributed transverse random loading is investigated. Results from nonlinear modal simulations made using various modal bases are compared with those obtained from a numerical simulation in physical degrees-of-freedom. While inclusion of ST-T modes is important for all response regimes, it is found that the ST-I modes become increasingly important in the nonlinear response regime, and that AT-T and AT-I modes are critical in the autoparametric regime.
V/V(max) test applied to SMM gamma-ray bursts
NASA Technical Reports Server (NTRS)
Matz, S. M.; Higdon, J. C.; Share, G. H.; Messina, D. C.; Iadicicco, A.
1992-01-01
We have applied the V/V(max) test to candidate gamma-ray bursts detected by the Gamma-Ray Spectrometer (GRS) aboard the SMM satellite to examine quantitatively the uniformity of the burst source population. For a sample of 132 candidate bursts identified in the GRS data by an automated search using a single uniform trigger criterion we find average V/V(max) = 0.40 +/- 0.025. This value is significantly different from 0.5, the average for a uniform distribution in space of the parent population of burst sources; however, the shape of the observed distribution of V/V(max) is unusual and our result conflicts with previous measurements. For these reasons we can currently draw no firm conclusion about the distribution of burst sources.
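For a parent population distributed uniformly in Euclidean space, V/V(max) is itself uniformly distributed on (0,1) with mean 0.5 — the baseline against which the measured 0.40 +/- 0.025 is compared. A minimal Monte Carlo check of that baseline (an illustration of the statistic, not the GRS analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
# Place burst sources uniformly in a sphere of radius r_max (the maximum
# distance at which a burst would still trigger the detector): uniform in
# volume means r^3 is uniform on (0, 1) in units of r_max^3.
r = rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)
v_over_vmax = r ** 3          # V/Vmax = (r / r_max)^3
mean = v_over_vmax.mean()     # ~0.5 for a spatially uniform population
```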
Design and development of novel bandages for compression therapy.
Rajendran, Subbiyan; Anand, Subhash
2003-03-01
During the past few years there have been increasing concerns about the performance of bandages, especially their pressure distribution properties, for the treatment of venous leg ulcers. This is because compression therapy is a complex system requiring two-layer or multi-layer bandages, and the performance properties of each layer differ from those of the others. The widely accepted sustained graduated compression depends mainly on the uniform pressure distribution of the different bandage layers, in which textile fibres and bandage structures play a major role. This article examines how fibres, fibre blends and structures influence the absorption and pressure distribution properties of bandages. It is hoped that the research findings will help medical professionals, especially nurses, to gain insight into the development of bandages. A total of 12 padding bandages were produced using various fibres and fibre blends. A new technique that facilitates good resilience and cushioning properties, higher and more uniform pressure distribution, and enhanced water absorption and retention was adopted during production. The properties of the developed padding bandages, which include uniform pressure distribution around the leg, were found to be superior to those of existing commercial bandages, and the bandages possess a number of additional properties required to meet the criteria stipulated for an ideal padding bandage. The results indicated that none of the most widely used commercial padding bandages provides the required uniform pressure distribution around the limb.
Optimizing the LSST Dither Pattern for Survey Uniformity
NASA Astrophysics Data System (ADS)
Awan, Humna; Gawiser, Eric J.; Kurczynski, Peter; Carroll, Christopher M.; LSST Dark Energy Science Collaboration
2015-01-01
The Large Synoptic Survey Telescope (LSST) will gather detailed data of the southern sky, enabling unprecedented study of Baryonic Acoustic Oscillations, which are an important probe of dark energy. These studies require a survey with highly uniform depth, and we aim to find an observation strategy that optimizes this uniformity. We have shown that in the absence of dithering (large telescope-pointing offsets), the LSST survey will vary significantly in depth. Hence, we implemented various dithering strategies, including random and repulsive random pointing offsets and spiral patterns with the spiral reaching completion in either a few months or the entire ten-year run. We employed three different implementations of dithering strategies: a single offset assigned to all fields observed on each night, offsets assigned to each field independently whenever the field is observed, and offsets assigned to each field only when the field is observed on a new night. Our analysis reveals that large dithers are crucial to guarantee survey uniformity and that assigning dithers to each field independently whenever the field is observed significantly increases this uniformity. These results suggest paths towards an optimal observation strategy that will enable LSST to achieve its science goals. We gratefully acknowledge support from the National Science Foundation REU program at Rutgers, PHY-1263280, and the Department of Energy, DE-SC0011636.
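A toy one-dimensional analogue illustrates why random dithers improve depth uniformity: with fixed pointings the overlap pattern of adjacent fields is imprinted identically on every visit, while per-night random offsets smear it out. All numbers below (field spacing, footprint size, dither amplitude) are illustrative assumptions, not LSST parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_fields, n_visits = 1000, 10, 100
radius = 0.07                      # footprint half-width; fields overlap
pix = np.arange(n_pix) / n_pix     # pixel centers on a unit ring
centers = np.arange(n_fields) / n_fields

def add_depth(depth, offset):
    # One visit to every field, all pointings shifted by a common dither offset.
    for c in centers:
        d = np.abs((pix - (c + offset) + 0.5) % 1.0 - 0.5)  # ring distance
        depth[d < radius] += 1

depth_fixed = np.zeros(n_pix)
depth_dither = np.zeros(n_pix)
for _ in range(n_visits):
    add_depth(depth_fixed, 0.0)                            # undithered survey
    add_depth(depth_dither, rng.uniform(-0.05, 0.05))      # random nightly dither

# Coefficient of variation of the final depth map: lower means more uniform.
cv_fixed = depth_fixed.std() / depth_fixed.mean()
cv_dither = depth_dither.std() / depth_dither.mean()
```

In the undithered case the overlap stripes accumulate coherently, so the relative depth variation never averages down; random dithers break that coherence and the variation shrinks with the number of visits.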
Statistical distributions of avalanche size and waiting times in an inter-sandpile cascade model
NASA Astrophysics Data System (ADS)
Batac, Rene; Longjas, Anthony; Monterola, Christopher
2012-02-01
Sandpile-based models have successfully shed light on key features of nonlinear relaxational processes in nature, particularly the occurrence of fat-tailed magnitude distributions and exponential return times, from simple local stress redistributions. In this work, we extend the existing sandpile paradigm into an inter-sandpile cascade, wherein the avalanches emanating from a uniformly driven sandpile (first layer) are used to trigger the next (second layer), and so on, in a successive fashion. Statistical characterization reveals that avalanche size distributions evolve from a power law p(S)≈S^-1.3 for the first layer to gamma distributions p(S)≈S^α exp(-S/S0) for layers far away from the uniformly driven sandpile. The resulting avalanche size statistics are found to be associated with the corresponding waiting time distribution, as explained in an accompanying analytic formulation. Interestingly, both the numerical and analytic models show good agreement with actual inventories of non-uniformly driven events in nature.
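A uniformly driven sandpile of the kind used as the first layer can be sketched with the standard Abelian (BTW) toppling rule; the grid size, number of drives, and open boundaries below are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

def drive_sandpile(size=20, n_grains=2000, seed=4):
    """Minimal uniformly driven 2D Abelian sandpile: drop grains at random
    sites, topple any site holding >= 4 grains (one grain to each neighbour;
    grains falling off the edge are lost), and record the number of
    topplings triggered by each added grain as the avalanche size."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)
    sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(0, size, 2)
        grid[i, j] += 1
        topplings = 0
        while (grid >= 4).any():
            for a, b in np.argwhere(grid >= 4):
                grid[a, b] -= 4
                topplings += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < size and 0 <= nb < size:
                        grid[na, nb] += 1
        sizes.append(topplings)
    return grid, np.array(sizes)

grid, sizes = drive_sandpile()
```

The recorded `sizes` are the avalanche magnitudes whose distribution, for a single layer, approaches the fat-tailed form discussed above; in the cascade construction these avalanches would serve as the drive for the next layer.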
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. 
The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal.
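The locus-averaged Shannon entropy (LASE) ingredient of the objective function can be sketched as follows for biallelic SNPs; this sketch omits the uniformity adjustment and the haplotype-averaged (HASE) variant described above:

```python
import numpy as np

def lase(allele_freqs):
    """Locus-averaged Shannon entropy: the mean over biallelic SNP loci of
    each locus's entropy, computed from its allele frequency p (in bits)."""
    p = np.asarray(allele_freqs, dtype=float)
    q = 1.0 - p
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p * np.log2(p) + q * np.log2(q))
    return float(np.nan_to_num(h).mean())   # a fixed locus contributes 0 bits

# A locus with allele frequency 0.5 is maximally informative (1 bit);
# a fixed locus (p = 1) carries no information.
```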
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R.; Taylor, Jeremy F.; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. 
The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal. PMID:27583971
Simulation of air velocity in a vertical perforated air distributor
NASA Astrophysics Data System (ADS)
Ngu, T. N. W.; Chu, C. M.; Janaun, J. A.
2016-06-01
Perforated pipes are used to divide a fluid flow into several smaller streams. Uniform flow distribution is a major requirement in engineering applications because it has a significant influence on the performance of fluidic devices. For industrial applications it is crucial to provide a uniform velocity distribution through the orifices. In this research, the flow distribution patterns of a closed-end, multiple-outlet pipe standing vertically and delivering air in the horizontal direction were simulated. Computational Fluid Dynamics (CFD), a research tool for enhancing and understanding design, was used as the simulator, and the drawing software SolidWorks was used for the geometry setup. The main purpose of this work is to establish the influence of orifice size, intervals between outlets, and tube length on the uniformity of exit flows through a multi-outlet perforated tube. However, because the compactness of the paddy increases gradually from the top to the bottom of the dryer due to gravity, a uniform flow pattern was targeted for the top orifices and a larger flow for the bottom orifices.
School Uniform Policies in Public Schools
ERIC Educational Resources Information Center
Brunsma, David L.
2006-01-01
The movement for school uniforms in public schools continues to grow despite the author's research indicating little if any impact on student behavior, achievement, and self-esteem. The author examines the distribution of uniform policies by region and demographics, the impact of these policies on perceptions of school climate and safety, and…
Aging transition in systems of oscillators with global distributed-delay coupling.
Rahman, B; Blyuss, K B; Kyrychko, Y N
2017-09-01
We consider a globally coupled network of active (oscillatory) and inactive (nonoscillatory) oscillators with distributed-delay coupling. Conditions for aging transition, associated with suppression of oscillations, are derived for uniform and gamma delay distributions in terms of coupling parameters and the proportion of inactive oscillators. The results suggest that for the uniform distribution increasing the width of distribution for the same mean delay allows aging transition to happen for a smaller coupling strength and a smaller proportion of inactive elements. For gamma distribution with sufficiently large mean time delay, it may be possible to achieve aging transition for an arbitrary proportion of inactive oscillators, as long as the coupling strength lies in a certain range.
Spectral analysis of pair-correlation bandwidth: application to cell biology images.
Binder, Benjamin J; Simpson, Matthew J
2015-02-01
Images from cell biology experiments often indicate the presence of cell clustering, which can provide insight into the mechanisms driving the collective cell behaviour. Pair-correlation functions provide quantitative information about the presence, or absence, of clustering in a spatial distribution of cells. This is because the pair-correlation function describes the ratio of the abundance of pairs of cells, separated by a particular distance, relative to a randomly distributed reference population. Pair-correlation functions are often presented as a kernel density estimate where the frequency of pairs of objects are grouped using a particular bandwidth (or bin width), Δ>0. The choice of bandwidth has a dramatic impact: choosing Δ too large produces a pair-correlation function that contains insufficient information, whereas choosing Δ too small produces a pair-correlation signal dominated by fluctuations. Presently, there is little guidance available regarding how to make an objective choice of Δ. We present a new technique to choose Δ by analysing the power spectrum of the discrete Fourier transform of the pair-correlation function. Using synthetic simulation data, we confirm that our approach allows us to objectively choose Δ such that the appropriately binned pair-correlation function captures known features in uniform and clustered synthetic images. We also apply our technique to images from two different cell biology assays. The first assay corresponds to an approximately uniform distribution of cells, while the second assay involves a time series of images of a cell population which forms aggregates over time. The appropriately binned pair-correlation function allows us to make quantitative inferences about the average aggregate size, as well as quantifying how the average aggregate size changes with time.
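A minimal binned pair-correlation function illustrates the role of the bandwidth Δ; for simplicity this sketch uses periodic (torus) distances to avoid edge corrections, which real cell-image analyses must handle. For a uniformly random pattern, every bin should sit near 1:

```python
import numpy as np

def pair_correlation(points, box, delta, r_max):
    """Pair-correlation function on a periodic square domain: the ratio of
    observed pair counts in each distance bin of width delta to the count
    expected for the same number of points placed completely at random."""
    n = len(points)
    d = np.abs(points[:, None, :] - points[None, :, :])
    d = np.minimum(d, box - d)                 # periodic (torus) distances
    dist = np.sqrt((d ** 2).sum(-1))
    dist = dist[np.triu_indices(n, k=1)]       # count each pair once
    edges = np.arange(0.0, r_max + delta, delta)
    counts, _ = np.histogram(dist, bins=edges)
    # Expected pairs per annulus under complete spatial randomness.
    annulus = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    expected = 0.5 * n * (n - 1) * annulus / box ** 2
    return counts / expected

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 1.0, (400, 2))          # uniformly random "cells"
g = pair_correlation(pts, box=1.0, delta=0.05, r_max=0.25)
```

Shrinking `delta` reduces the pair count per bin and hence inflates the fluctuations around 1 — precisely the trade-off the spectral method above is designed to resolve objectively.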
Colombia: A Country Under Constant Threat of Disasters
2014-05-22
disasters strike every nation in the world, and although these events do not occur with uniformity of distribution, developing nations suffer the greatest...
Hsu, Ya-Chu; Hung, Yu-Chen; Wang, Chiu-Yen
2017-09-15
High-uniformity Au-catalyzed indium selenide (In 2 Se 3) nanowires are grown with a rapid thermal annealing (RTA) treatment via the vapor-liquid-solid (VLS) mechanism. The diameters of the Au-catalyzed In 2 Se 3 nanowires can be controlled through the thickness of the Au film, and the uniformity of the nanowires is improved by a fast pre-annealing rate of 100 °C/s. Compared with the slower heating rate of 0.1 °C/s, the average diameters and distributions (standard deviation, SD) of In 2 Se 3 nanowires with and without the RTA process are 97.14 ± 22.95 nm (23.63%) and 119.06 ± 48.75 nm (40.95%), respectively. In situ annealing TEM is used to study the effect of heating rate on the formation of Au nanoparticles from the as-deposited Au film. The results show that the average diameters and distributions of Au nanoparticles with and without the RTA process are 19.84 ± 5.96 nm (30.00%) and 22.06 ± 9.00 nm (40.80%), respectively. This demonstrates that the diameter, distribution, and uniformity of Au-catalyzed In 2 Se 3 nanowires are improved by the RTA pre-treatment. Such a systematic study could help control the size distribution of other nanomaterials through tuning of the annealing rate and of the precursor and growth-substrate temperatures. Graphical Abstract: The rapid thermal annealing (RTA) process narrows the size distribution of Au nanoparticles, which can then be used to grow high-uniformity Au-catalyzed In 2 Se 3 nanowires via the vapor-liquid-solid (VLS) mechanism. Under the general growth conditions, the heating rate is slow (0.1 °C/s) and the growth temperature is relatively high (> 650 °C). An RTA pre-treated growth substrate forms smaller and more uniform Au nanoparticles that react with the In 2 Se 3 vapor to produce high-uniformity In 2 Se 3 nanowires.
In situ annealing TEM is used to investigate the effect of heating rate on Au nanoparticle formation from the as-deposited Au film. The byproduct of self-catalyzed In 2 Se 3 nanoplates can be suppressed by lowering the precursor and growth temperatures.
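The size statistics quoted above (mean ± SD, with the SD also expressed as a percentage of the mean) can be reproduced from raw diameter measurements as follows; the three-value input is a made-up example, not measured data:

```python
import numpy as np

def size_stats(diameters_nm):
    """Mean, sample standard deviation, and relative spread (SD as a
    percentage of the mean) of a measured diameter distribution — the
    three numbers quoted above for the nanowires and Au nanoparticles."""
    d = np.asarray(diameters_nm, dtype=float)
    mean, sd = d.mean(), d.std(ddof=1)   # ddof=1: sample SD
    return mean, sd, 100.0 * sd / mean

mean, sd, rel = size_stats([90.0, 100.0, 110.0])  # illustrative values
```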
Structure and dynamics of an upland old- growth forest at Redwood National Park, California
van Mantgem, Philip J.; Stuart, John D.
2011-01-01
Many current redwood forest management targets are based on old-growth conditions, so it is critical that we understand the variability and range of conditions that constitute these forests. Here we present information on the structure and dynamics from six one-hectare forest monitoring plots in an upland old-growth forest at Redwood National Park, California. We surveyed all stems ≥20 cm DBH in 1995 and 2010, allowing us to estimate any systematic changes in these stands. Stem size distributions for all species and for redwood (Sequoia sempervirens (D. Don) Endl.) alone did not appreciably change over the 15 year observation interval. Recruitment and mortality rates were roughly balanced, as were basal area dynamics (gains from recruitment and growth versus losses from mortality). Similar patterns were found for Sequoia alone. The spatial structure of stems at the plots suggested a random distribution of trees, though the pattern for Sequoia alone was found to be significantly clumped at small scales (< 5 m) at three of the six plots. These results suggest that these forests, including populations of Sequoia, have been generally stable over the past 15 years at this site, though it is possible that fire exclusion may be affecting recruitment of smaller Sequoia (< 20 cm DBH). The non-uniform spatial arrangement of stems also suggests that restoration prescriptions for second-growth redwood forests that encourage uniform spatial arrangements do not appear to mimic current upland old-growth conditions.
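A common way to quantify whether a stem map is random, clumped, or uniformly spaced is a nearest-neighbour index; the Clark-Evans ratio below is shown only as an illustration of this class of statistic (it is not necessarily the method used in the paper), with periodic distances standing in for proper edge correction:

```python
import numpy as np

def clark_evans(points, box):
    """Clark-Evans nearest-neighbour index: observed mean NN distance over
    the expectation 0.5/sqrt(density) for a random (Poisson) pattern.
    R ~ 1: random; R < 1: clumped; R > 1: uniformly spaced. Periodic
    distances are used here to sidestep edge correction."""
    n = len(points)
    d = np.abs(points[:, None, :] - points[None, :, :])
    d = np.minimum(d, box - d)
    dist = np.sqrt((d ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    observed = dist.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / box ** 2)
    return observed / expected

rng = np.random.default_rng(6)
# Random pattern: R should be near 1.
r_random = clark_evans(rng.uniform(0.0, 1.0, (1500, 2)), box=1.0)
# Clumped pattern (tight Gaussian clusters around parent points): R << 1.
parents = rng.uniform(0.0, 1.0, (30, 2))
cluster_pts = (parents[:, None, :]
               + rng.normal(0.0, 0.01, (30, 50, 2))).reshape(-1, 2) % 1.0
r_clustered = clark_evans(cluster_pts, box=1.0)
```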
The insertional history of an active family of L1 retrotransposons in humans.
Boissinot, Stéphane; Entezam, Ali; Young, Lynn; Munson, Peter J; Furano, Anthony V
2004-07-01
As humans contain a currently active L1 (LINE-1) non-LTR retrotransposon family (Ta-1), the human genome database likely provides only a partial picture of Ta-1-generated diversity. We used a non-biased method to clone Ta-1 retrotransposon-containing loci from representatives of four ethnic populations. We obtained 277 distinct Ta-1 loci and identified an additional 67 loci in the human genome database. This collection represents approximately 90% of the Ta-1 population in the individuals examined and is thus more representative of the insertional history of Ta-1 than the human genome database, which lacked approximately 40% of our cloned Ta-1 elements. As both polymorphic and fixed Ta-1 elements are as abundant in the GC-poor genomic regions as in ancestral L1 elements, the enrichment of L1 elements in GC-poor areas is likely due to insertional bias rather than selection. Although the chromosomal distribution of Ta-1 inserts is generally a function of chromosomal length and gene density, chromosome 4 significantly deviates from this pattern and has been much more hospitable to Ta-1 insertions than any other chromosome. Also, the intra-chromosomal distribution of Ta-1 elements is not uniform. Ta-1 elements tend to cluster, and the maximal gaps between Ta-1 inserts are larger than would be expected from a model of uniform random insertion. Copyright 2004 Cold Spring Harbor Laboratory Press ISSN
DOE R&D Accomplishments Database
Wigner, E. P.; Wilkins, J. E. Jr.
1944-09-14
In this paper we set up an integral equation governing the energy distribution of neutrons that are being slowed down uniformly throughout the entire space by a uniformly distributed moderator whose atoms are in motion with a Maxwellian distribution of velocities. The effects of chemical binding and crystal reflection are ignored. When the moderator is hydrogen, the integral equation is reduced to a differential equation and solved by numerical methods. In this manner we obtain a refinement of the dv/v² law. (auth)
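For hydrogen, each elastic collision leaves the neutron with an energy uniformly distributed on (0, E), so successive collisions form a Poisson process of unit rate in lethargy u = ln(E0/E); the flux then follows 1/E, equivalently the dv/v² law in velocity. A minimal Monte Carlo check of this leading-order behaviour (ignoring absorption, chemical binding, and thermal motion — the very effects whose refinement is the subject of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
e_source, e_cut = 2.0e6, 1.0     # eV; source energy and cut-off (assumed values)
collision_energies = []
for _ in range(20_000):
    e = e_source
    while e > e_cut:
        # Elastic scattering on free hydrogen at rest: outgoing energy
        # is uniformly distributed on (0, E).
        e = e * rng.uniform()
        if e > e_cut:
            collision_energies.append(e)

# Asymptotically the collision density per unit lethargy u = ln(E0/E) is
# constant, i.e. flux ~ 1/E (the dv/v^2 law in velocity): equal-lethargy
# bins well below the source should hold equal counts.
u = np.log(e_source / np.array(collision_energies))
hist, _ = np.histogram(u, bins=np.linspace(3.0, 12.0, 10))
flatness = hist.max() / hist.min()   # ~1 for a flat collision density
```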
Characterization of Dispersive Ultrasonic Rayleigh Surface Waves in Asphalt Concrete
NASA Astrophysics Data System (ADS)
In, Chi-Won; Kim, Jin-Yeon; Jacobs, Laurence J.; Kurtis, Kimberly E.
2008-02-01
This research focuses on the application of ultrasonic Rayleigh surface waves to nondestructively characterize the mechanical properties and structural defects (non-uniformly distributed aggregate) in asphalt concrete. An efficient wedge technique is developed in this study to generate Rayleigh surface waves in this highly viscoelastic (attenuating) and heterogeneous medium. Experiments are performed on an asphalt-concrete beam produced with uniformly distributed aggregate. Ultrasonic techniques using both contact and non-contact sensors are examined and their results are compared. The experimental results show that the wedge technique, together with an air-coupled sensor, is effective in characterizing Rayleigh waves in asphalt concrete. Hence, measurement of these material properties in material with non-uniformly distributed aggregate should next be investigated using these techniques.
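For context, the Rayleigh wave speed being measured relates to the elastic constants approximately through Viktorov's formula; the sketch below uses assumed, illustrative property values for asphalt concrete, not the paper's measurements:

```python
import math

def rayleigh_speed(e_pa, nu, rho):
    """Approximate Rayleigh surface-wave speed from elastic constants,
    using Viktorov's formula c_R ~ c_s * (0.87 + 1.12*nu) / (1 + nu),
    where c_s is the shear (transverse) wave speed."""
    g = e_pa / (2.0 * (1.0 + nu))          # shear modulus
    c_s = math.sqrt(g / rho)
    return c_s * (0.87 + 1.12 * nu) / (1.0 + nu)

# Illustrative numbers only (assumed, not from the paper): a stiff
# asphalt concrete with E = 10 GPa, nu = 0.3, rho = 2400 kg/m^3.
c_r = rayleigh_speed(10e9, 0.3, 2400.0)
```

Inverting this relation is what lets a measured Rayleigh wave speed constrain the near-surface elastic properties.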
Prideaux, Andrew R.; Song, Hong; Hobbs, Robert F.; He, Bin; Frey, Eric C.; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George
2010-01-01
Phantom-based and patient-specific imaging-based dosimetry methodologies have traditionally yielded mean organ-absorbed doses or spatial dose distributions over tumors and normal organs. In this work, radiobiologic modeling is introduced to convert the spatial distribution of absorbed dose into biologically effective dose and equivalent uniform dose parameters. The methodology is illustrated using data from a thyroid cancer patient treated with radioiodine. Methods: Three registered SPECT/CT scans were used to generate 3-dimensional images of radionuclide kinetics (clearance rate) and cumulated activity. The cumulated activity image and corresponding CT scan were provided as input into an EGSnrc-based Monte Carlo calculation: The cumulated activity image was used to define the distribution of decays, and an attenuation image derived from CT was used to define the corresponding spatial tissue density and composition distribution. The rate images were used to convert the spatial absorbed dose distribution to a biologically effective dose distribution, which was then used to estimate a single equivalent uniform dose for segmented volumes of interest. Equivalent uniform dose was also calculated from the absorbed dose distribution directly. Results: We validate the method using simple models; compare the dose-volume histogram with a previously analyzed clinical case; and give the mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for an illustrative case of a pediatric thyroid cancer patient with diffuse lung metastases. The mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for the tumor were 57.7, 58.5, and 25.0 Gy, respectively. Corresponding values for normal lung tissue were 9.5, 9.8, and 8.3 Gy, respectively. Conclusion: The analysis demonstrates the impact of radiobiologic modeling on response prediction.
The 57% reduction in the equivalent dose value for the tumor reflects a high level of dose nonuniformity in the tumor and a corresponding reduced likelihood of achieving a tumor response. Such analyses are expected to be useful in treatment planning for radionuclide therapy. PMID:17504874
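A minimal sketch of the equivalent-uniform-dose idea follows, assuming a simple linear cell-kill model with an assumed radiosensitivity alpha; the paper's full method derives biologically effective dose from 3D rate images, which this toy omits.

```python
import math

def equivalent_uniform_dose(doses, alpha=0.35):
    """Equivalent uniform dose (EUD): the uniform dose giving the same mean
    cell surviving fraction as the nonuniform voxel doses, under a simple
    linear cell-kill model.  alpha (1/Gy) is an assumed radiosensitivity."""
    mean_sf = sum(math.exp(-alpha * d) for d in doses) / len(doses)
    return -math.log(mean_sf) / alpha

# A nonuniform distribution is penalized relative to its mean dose:
uniform_doses = [50.0, 50.0, 50.0, 50.0]
nonuniform_doses = [10.0, 30.0, 70.0, 90.0]   # same 50 Gy mean dose
print(equivalent_uniform_dose(uniform_doses))     # exactly 50 Gy
print(equivalent_uniform_dose(nonuniform_doses))  # far below 50 Gy
```

The cold-spot voxels dominate the surviving fraction, which is why a nonuniform tumor dose yields an EUD well below the mean absorbed dose, mirroring the 57.7 Gy vs 25.0 Gy result above.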
NASA Astrophysics Data System (ADS)
Nagatani, Takashi; Tainaka, Kei-ichi
2018-01-01
In most cases, physicists have studied the migration of biospecies using random walks. In the present article, we instead apply a cellular automaton of traffic-model type. For simplicity, we deal with an ecosystem containing a prey and a predator, and use a one-dimensional lattice with two layers. Prey stay on the first layer, while predators move uni-directionally on the second layer. The spatial and temporal evolution is numerically explored. It is shown that migration has an important effect on the populations of both prey and predator. Without migration, a phase transition between a prey phase and a coexisting phase occurs. In contrast, this phase transition disappears with migration, because predators can survive by migrating. We find another phase transition in the spatial distribution: in one phase, prey and predator form a stripe pattern of condensation and rarefaction, while in the other, they are uniformly distributed. The self-organized stripes may resemble migration patterns in real ecosystems.
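A toy version of such a two-layer lattice can be sketched as follows; the specific update rules and rates (hop-if-empty predator movement, local prey reproduction, predator starvation) are illustrative assumptions, not the authors' model.

```python
import random

def step(prey, pred, rng, birth=0.3, death=0.1):
    """One update of an illustrative two-layer ring lattice: predators
    (layer 2) hop uni-directionally like cars in a traffic model and feed
    on the prey (layer 1) beneath them.  Rules and rates are assumptions."""
    n = len(prey)
    # predator movement: hop right only if the target site is empty
    for i in rng.sample(range(n), n):
        j = (i + 1) % n
        if pred[i] and not pred[j]:
            pred[i], pred[j] = 0, 1
    for i in rng.sample(range(n), n):
        if pred[i] and prey[i]:
            prey[i] = 0                      # predation
        elif pred[i] and rng.random() < death:
            pred[i] = 0                      # predator death
        elif not prey[i] and rng.random() < birth:
            if prey[(i - 1) % n] or prey[(i + 1) % n]:
                prey[i] = 1                  # local prey reproduction
    return prey, pred

rng = random.Random(0)
prey = [1 if rng.random() < 0.5 else 0 for _ in range(200)]
pred = [1 if rng.random() < 0.2 else 0 for _ in range(200)]
for _ in range(500):
    prey, pred = step(prey, pred, rng)
print(sum(prey), sum(pred))
```

Tracking the two population counts over time (and the site-occupancy profile) is enough to look for the prey-only versus coexisting phases described above.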
Monitoring the Wall Mechanics During Stent Deployment in a Vessel
Steinert, Brian D.; Zhao, Shijia; Gu, Linxia
2012-01-01
Clinical trials have reported different restenosis rates for various stent designs1. It is speculated that stent-induced strain concentrations on the arterial wall lead to tissue injury, which initiates restenosis2-7. This hypothesis needs further investigation, including better quantification of the non-uniform strain distribution on the artery following stent implantation. A non-contact surface strain measurement method for the stented artery is presented in this work. The ARAMIS stereo optical surface strain measurement system uses two high-speed optical cameras to capture the motion of each reference point and resolve three-dimensional strains over the deforming surface8,9. As a mesh stent is deployed into a latex vessel with a random contrasting pattern sprayed or drawn on its outer surface, the surface strain is recorded at every instant of the deformation. The calculated strain distributions can then be used to understand the local lesion response, validate the computational models, and formulate hypotheses for further in vivo study. PMID:22588353
Optics of Water Microdroplets with Soot Inclusions: Exact Versus Approximate Results
NASA Technical Reports Server (NTRS)
Liu, Li; Mishchenko, Michael I.
2016-01-01
We use the recently generalized version of the multi-sphere superposition T-matrix method (STMM) to compute the scattering and absorption properties of microscopic water droplets contaminated by black carbon. The soot material is assumed to be randomly distributed throughout the droplet interior in the form of numerous small spherical inclusions. Our numerically-exact STMM results are compared with approximate ones obtained using the Maxwell-Garnett effective-medium approximation (MGA) and the Monte Carlo ray-tracing approximation (MCRTA). We show that the popular MGA can be used to calculate the droplet optical cross sections, single-scattering albedo, and asymmetry parameter provided that the soot inclusions are quasi-uniformly distributed throughout the droplet interior, but can fail in computations of the elements of the scattering matrix depending on the volume fraction of soot inclusions. The integral radiative characteristics computed with the MCRTA can deviate more significantly from their exact STMM counterparts, while accurate MCRTA computations of the phase function require droplet size parameters substantially exceeding 60.
Exploring the effect of the spatial scale of fishery management.
Takashina, Nao; Baskett, Marissa L
2016-02-07
For any spatially explicit management, determining the appropriate spatial scale of management decisions is critical to success at achieving a given management goal. Specifically, managers must decide how much to subdivide a given managed region, from a single uniform approach across the whole region to a unique approach in each of one hundred patches, with everything in between. Spatially explicit approaches, such as the implementation of marine spatial planning and marine reserves, are increasingly used in fishery management. Using a spatially explicit bioeconomic model, we quantify how the management scale affects optimal fishery profit, biomass, fishery effort, and the fraction of habitat in marine reserves. We find that, if habitats are randomly distributed, the fishery profit increases almost linearly with the number of segments. However, if habitats are positively autocorrelated, then the fishery profit increases with diminishing returns. Therefore, the true optimum management scale, given a cost of subdivision, depends on the habitat distribution pattern. Copyright © 2015 Elsevier Ltd. All rights reserved.
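The effect of subdivision can be illustrated with a toy bioeconomic model; the quadratic-cost profit function and the habitat fields below are assumptions of this sketch, not the paper's model.

```python
import random

def profit(quality, n_segments, price=1.0, cost=0.5):
    """Total profit when the managed region (a list of patch habitat
    qualities) is split into n_segments equal segments, each assigned one
    uniform effort level chosen optimally for the segment's mean quality.
    The quadratic-cost profit function is an assumption of this sketch."""
    per_seg = len(quality) // n_segments
    total = 0.0
    for s in range(n_segments):
        block = quality[s * per_seg:(s + 1) * per_seg]
        q_bar = sum(block) / len(block)
        # optimal uniform effort E = price*q_bar/(2*cost) gives this profit:
        total += len(block) * (price * q_bar) ** 2 / (4.0 * cost)
    return total

rng = random.Random(2)
random_q = [rng.random() for _ in range(100)]          # random habitat
auto_q = []                                            # autocorrelated habitat
for i in range(100):
    window = random_q[max(0, i - 5):i + 5]             # smoothed random field
    auto_q.append(sum(window) / len(window))
for m in (1, 4, 20, 100):
    print(m, round(profit(random_q, m), 3), round(profit(auto_q, m), 3))
```

By Jensen's inequality, finer (nested) subdivision never decreases profit in this toy model; with a positively autocorrelated habitat, coarse segments already capture most of the variation, so the gains from further subdivision diminish, as the abstract describes.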
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.
2016-03-28
Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher the rate. This results in the correspondingly higher apparent equilibrium constant. Quite the opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large-volume systems, and the lower the average number of reacting molecules in a droplet, the lower they are. The random distribution of reactant molecules corresponds to ideal dispersing (equal droplet sizes) of a reaction mixture. Our simulations have shown that when the equilibrated large-volume system is dispersed, the resulting droplet system is already at equilibrium, and no changes in the proportions of droplets differing in reactant composition are observed upon prolongation of the reaction time.
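The droplet statistics described above can be reproduced qualitatively with a per-droplet stochastic simulation (Gillespie's direct method); the rate constants, molecule numbers, and the binomial stand-in for a Poisson dispenser are illustrative assumptions of this sketch.

```python
import random

def gillespie_droplet(n_a, n_b, n_c, k1, k2, t_end, rng):
    """Stochastic simulation (Gillespie direct method) of the reversible
    association A + B <-> C in a single droplet; returns n_c at t_end."""
    t = 0.0
    while True:
        a1 = k1 * n_a * n_b          # association propensity
        a2 = k2 * n_c                # dissociation propensity
        a0 = a1 + a2
        if a0 == 0.0:
            return n_c
        t += rng.expovariate(a0)
        if t > t_end:
            return n_c
        if rng.random() * a0 < a1:
            n_a, n_b, n_c = n_a - 1, n_b - 1, n_c + 1
        else:
            n_a, n_b, n_c = n_a + 1, n_b + 1, n_c - 1

rng = random.Random(3)

def near_poisson(mean, rng, n=100):
    """Binomial(n, mean/n) stand-in for a Poisson(mean) draw (assumption)."""
    return sum(1 for _ in range(n) if rng.random() < mean / n)

# uniform dispensing: every droplet receives exactly 5 A and 5 B molecules
uniform = [gillespie_droplet(5, 5, 0, 0.1, 0.1, 200.0, rng)
           for _ in range(2000)]
# random dispensing: independent near-Poisson numbers with the same mean
randomly = [gillespie_droplet(near_poisson(5, rng), near_poisson(5, rng),
                              0, 0.1, 0.1, 200.0, rng)
            for _ in range(2000)]
print(sum(uniform) / len(uniform), sum(randomly) / len(randomly))
```

The mean amount of C per droplet comes out higher for uniform dispensing than for random dispensing, matching the sign of the effect reported above: asymmetric droplets (excess A or no B, say) form less product than symmetric ones.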
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venezian, G.; Bretschneider, C.L.
1980-08-01
This volume details a new methodology for statistically analyzing the forces experienced by a structure at sea. Conventionally a wave climate is defined using a spectral function. Here the wave climate is described using a joint distribution of wave heights and periods (wave lengths), characterizing actual sea conditions through measured or estimated parameters such as the significant wave height and maximum spectral density. Random wave heights and periods satisfying the joint distribution are then generated. Wave kinematics are obtained using linear or non-linear theory. In the case of currents, the linear wave-current interaction theory of Venezian (1979) is used. The peak force experienced by the structure for each individual wave is identified. Finally, the probability of exceedance of any given peak force on the structure may be obtained. A three-parameter Longuet-Higgins type joint distribution of wave heights and periods is discussed in detail. This joint distribution was used to model sea conditions at four potential OTEC locations. A uniform cylindrical pipe of 3 m diameter, extending to a depth of 550 m, was used as a sample structure. Wave-current interactions were included and forces computed using Morison's equation. The drag and virtual mass coefficients were interpolated from published data. A Fortran program, CUFOR, was written to execute the above procedure. Tabulated and graphic results of peak forces experienced by the structure at each location are presented. A listing of CUFOR is included. Considerable flexibility of structural definition has been incorporated. The program can easily be modified for an alternative joint distribution or for inclusion of effects such as wave non-linearity, transverse forces and diffraction.
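The per-wave peak-force step can be sketched as follows, using Morison's equation with linear deep-water (Airy) kinematics; the coefficient values and wave parameters are illustrative assumptions, not CUFOR's interpolated data.

```python
import math

def morison_force(u, dudt, diameter, rho=1025.0, c_d=1.0, c_m=2.0):
    """Morison's equation: in-line force per unit length on a vertical
    cylinder, from horizontal water velocity u and acceleration dudt.
    The drag and inertia coefficients here are illustrative assumptions."""
    drag = 0.5 * rho * c_d * diameter * u * abs(u)
    inertia = rho * c_m * math.pi * diameter ** 2 / 4.0 * dudt
    return drag + inertia

def peak_force(height, period, diameter, z=-5.0):
    """Peak Morison force per unit length at elevation z (m below the mean
    surface) for a linear deep-water (Airy) wave of the given height and
    period, found by scanning one full wave phase."""
    omega = 2.0 * math.pi / period
    k = omega ** 2 / 9.81                          # deep-water dispersion
    u_amp = 0.5 * height * omega * math.exp(k * z)  # velocity amplitude
    return max(morison_force(u_amp * math.cos(ph),
                             u_amp * omega * math.sin(ph), diameter)
               for ph in (2.0 * math.pi * i / 1000 for i in range(1000)))

# Illustrative wave on the 3 m pipe mentioned in the abstract:
print(peak_force(height=6.0, period=10.0, diameter=3.0))
```

Repeating this for each (height, period) pair drawn from the joint distribution yields the sample of peak forces from which the exceedance probability is estimated.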
Enhancement of viability of muscle precursor cells on 3D scaffold in a perfusion bioreactor.
Cimetta, E; Flaibani, M; Mella, M; Serena, E; Boldrin, L; De Coppi, P; Elvassore, N
2007-05-01
The aim of this study was to develop a methodology for the in vitro expansion of skeletal-muscle precursor cells (SMPC) in a three-dimensional (3D) environment in order to fabricate a cellularized artificial graft characterized by high density of viable cells and uniform cell distribution over the entire 3D domain. Cell seeding and culture within 3D porous scaffolds by conventional static techniques can lead to a uniform cell distribution only on the scaffold surface, whereas dynamic culture systems have the potential of allowing a uniform growth of SMPCs within the entire scaffold structure. In this work, we designed and developed a perfusion bioreactor able to ensure long-term culture conditions and uniform flow of medium through 3D collagen sponges. A mathematical model to assist the design of the experimental setup and of the operative conditions was developed. The effects of dynamic vs static culture in terms of cell viability and spatial distribution within 3D collagen scaffolds were evaluated at 1, 4 and 7 days and for different flow rates of 1, 2, 3.5 and 4.5 ml/min using C2C12 muscle cell line and SMPCs derived from satellite cells. C2C12 cells, after 7 days of culture in our bioreactor, perfused applying a 3.5 ml/min flow rate, showed a higher viability resulting in a three-fold increase when compared with the same parameter evaluated for cultures kept under static conditions. In addition, dynamic culture resulted in a more uniform 3D cell distribution. The 3.5 ml/min flow rate in the bioreactor was also applied to satellite cell-derived SMPCs cultured on 3D collagen scaffolds. The dynamic culture conditions improved cell viability leading to higher cell density and uniform distribution throughout the entire 3D collagen sponge for both C2C12 and satellite cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinouski, M.; Kehr, S.; Finney, L.
2012-04-17
Recent advances in quantitative methods and sensitive imaging techniques of trace elements provide opportunities to uncover and explain their biological roles. In particular, the distribution of selenium in tissues and cells under both physiological and pathological conditions remains unknown. In this work, we applied high-resolution synchrotron X-ray fluorescence microscopy (XFM) to map selenium distribution in mouse liver and kidney. Liver showed a uniform selenium distribution that was dependent on selenocysteine tRNA{sup [Ser]Sec} and dietary selenium. In contrast, kidney selenium had both uniformly distributed and highly localized components, the latter visualized as thin circular structures surrounding proximal tubules. Other parts of the kidney, such as glomeruli and distal tubules, only manifested the uniformly distributed selenium pattern that co-localized with sulfur. We found that proximal tubule selenium localized to the basement membrane. It was preserved in Selenoprotein P knockout mice, but was completely eliminated in glutathione peroxidase 3 (GPx3) knockout mice, indicating that this selenium represented GPx3. We further imaged kidneys of another model organism, the naked mole rat, which showed a diminished uniformly distributed selenium pool, but preserved the circular proximal tubule signal. We applied XFM to image selenium in mammalian tissues and identified a highly localized pool of this trace element at the basement membrane of kidneys that was associated with GPx3. XFM allowed us to define and explain the tissue topography of selenium in mammalian kidneys at submicron resolution.
Diffusion of active chiral particles
NASA Astrophysics Data System (ADS)
Sevilla, Francisco J.
2016-12-01
The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v ̂ at time t , and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and the time dependence of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the positions distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called effect "anomalous, yet Brownian, diffusion," for which particles follow a non-Gaussian distribution for the positions yet the mean-squared displacement is a linear function of time.
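A Langevin-dynamics sketch of one chiral active particle follows, under the simplifying assumption that rotational noise can be applied as a Gaussian kick to the heading followed by renormalization (a common small-time-step approximation); all parameter values are illustrative.

```python
import math, random

def simulate_chiral(n_steps=20000, dt=1e-3, v0=1.0, omega=5.0, d_rot=1.0,
                    seed=4):
    """One chiral active Brownian particle in 3D: the heading unit vector
    rotates deterministically about a fixed axis (chirality) and is kicked
    by Gaussian rotational noise, then renormalized (a small-time-step
    approximation).  The position advances at speed v0 along the heading."""
    rng = random.Random(seed)
    pos = [0.0, 0.0, 0.0]
    head = [1.0, 0.0, 0.0]
    axis = (0.0, 0.0, 1.0)                     # chiral rotation axis
    sigma = math.sqrt(2.0 * d_rot * dt)
    for _ in range(n_steps):
        # deterministic chiral term: d(head) = omega * (axis x head) * dt
        cross = (axis[1] * head[2] - axis[2] * head[1],
                 axis[2] * head[0] - axis[0] * head[2],
                 axis[0] * head[1] - axis[1] * head[0])
        head = [head[i] + omega * cross[i] * dt + sigma * rng.gauss(0.0, 1.0)
                for i in range(3)]
        norm = math.sqrt(sum(c * c for c in head))
        head = [c / norm for c in head]        # keep the heading a unit vector
        pos = [pos[i] + v0 * head[i] * dt for i in range(3)]
    return pos

end = simulate_chiral()
# net displacement stays far below the ballistic bound v0 * n_steps * dt
print(math.sqrt(sum(c * c for c in end)))
```

Averaging the squared displacement over many such trajectories (and, per the abstract, drawing the rotation axis uniformly at random for each particle) gives the mean-squared displacement and kurtosis discussed above.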
The effect of uniform color on judging athletes' aggressiveness, fairness, and chance of winning.
Krenn, Bjoern
2015-04-01
In the current study we investigated the impact of uniform color in boxing, taekwondo and wrestling. On 18 photos showing two athletes competing, the hue of each uniform was modified to blue, green or red. For each photo, six color conditions were generated (blue-red, blue-green, green-red and vice versa). In three experiments these 108 photos were randomly presented. Participants (N = 210) had to select the athlete who seemed to be more aggressive, fairer or more likely to win the fight. Results revealed that athletes wearing red in boxing and wrestling were judged more aggressive and more likely to win than athletes wearing blue or green uniforms. In addition, athletes wearing green were judged fairer in boxing and wrestling than athletes wearing red. In taekwondo we did not find any significant impact of uniform color. Results suggest that uniform color in combat sports carries specific meanings that affect others' judgments.
A Comprehensive Theory of Algorithms for Wireless Networks and Mobile Systems
2016-06-08
Erez Kantor, Zvi Lotker, Merav Parter, and David Peleg. Nonuniform SINR+Voronoi diagrams are effectively uniform. In Yoram Moses, editor, Distributed Computing: 29th International Symposium, Lecture Notes in Computer Science, page 559. Springer, 2014.
Electrophoretic sample insertion. [device for uniformly distributing samples in flow path
NASA Technical Reports Server (NTRS)
Mccreight, L. R. (Inventor)
1974-01-01
Two conductive screens located in the flow path of an electrophoresis sample separation apparatus are charged electrically. The sample is introduced between the screens, and the charge is sufficient to disperse and hold the samples across the screens. When the charge is terminated, the samples are uniformly distributed in the flow path. Additionally, a first separation by charged properties has been accomplished.
Mirbozorgi, S Abdollah; Bahrami, Hadi; Sawan, Mohamad; Gosselin, Benoit
2016-04-01
This paper presents a novel experimental chamber with uniform wireless power distribution in 3D for enabling long-term biomedical experiments with small freely moving animal subjects. The implemented power transmission chamber prototype is based on arrays of parallel resonators and multicoil inductive links, to form a novel and highly efficient wireless power transmission system. The power transmitter unit includes several identical resonators enclosed in a scalable array of overlapping square coils which are connected in parallel to provide uniform power distribution along x and y. Moreover, the proposed chamber uses two arrays of primary resonators, facing each other, and connected in parallel to achieve uniform power distribution along the z axis. Each surface includes 9 overlapped coils connected in parallel and implemented on two layers of FR4 printed circuit board. The chamber features a natural power localization mechanism, which simplifies its implementation and eases its operation by avoiding the need for active detection and control mechanisms. A single power surface based on the proposed approach can provide a power transfer efficiency (PTE) of 69% and a power delivered to the load (PDL) of 120 mW, for a separation distance of 4 cm, whereas the complete chamber prototype provides a uniform PTE of 59% and a PDL of 100 mW in 3D, everywhere inside the chamber with a size of 27×27×16 cm³.
NASA Astrophysics Data System (ADS)
Chen, Xiaowei; Wang, Wenping; Wan, Min
2013-12-01
Calculating the magnetic force is essential in studying electromagnetic flat-sheet forming: it is the basis for analyzing sheet deformation and optimizing process parameters. The magnetic force distribution on the sheet can be obtained by numerical simulation of the electromagnetic field, which offers significant advantages over other computing methods, such as higher calculation accuracy and ease of use. In this paper, to study the magnetic force distribution on small flat sheets in electromagnetic forming with a flat round spiral coil, a flat rectangular spiral coil, and a uniform pressure coil, 3D finite element models are established in ANSYS/EMAG. The magnetic force distribution on the sheet is analyzed when the in-plane dimensions of the sheet are equal to or less than those of the coil, under a fixed discharge impulse. The results show that when the sheet is smaller than the coil, the variation of the induced current channel width on the sheet causes an induced-current crowding effect that strongly influences the magnetic force distribution, and the degree of non-uniformity of the force distribution increases nearly linearly with that variation; a small uniform pressure coil produces an approximately uniform magnetic force distribution on the sheet, but such a coil is prone to early failure; a desirable magnetic force distribution can be achieved with a unilaterally placed flat rectangular spiral coil, which is the preferred option because its working life exceeds that of a small uniform pressure coil.
Cylindrically distributing optical fiber tip for uniform laser illumination of hollow organs
NASA Astrophysics Data System (ADS)
Buonaccorsi, Giovanni A.; Burke, T.; MacRobert, Alexander J.; Hill, P. D.; Essenpreis, Matthias; Mills, Timothy N.
1993-05-01
To predict the outcome of laser therapy it is important to possess, among other things, an accurate knowledge of the intensity and distribution of the laser light incident on the tissue. For irradiation of the internal surfaces of hollow organs, modified fiber tips can be used to shape the light distribution to best suit the treatment geometry. There exist bulb-tipped optical fibers emitting a uniform isotropic distribution of light suitable for the treatment of organs which approximate a spherical geometry--the bladder, for example. For the treatment of organs approximating a cylindrical geometry--e.g. the oesophagus--an optical fiber tip which emits a uniform cylindrical distribution of light is required. We report on the design, development and testing of such a device, the CLD fiber tip. The device was made from a solid polymethylmethacrylate (PMMA) rod, 27 mm in length and 4 mm in diameter. One end was shaped and 'silvered' to form a mirror which reflected the light emitted from the delivery fiber positioned at the other end of the rod. The shape of the mirror was such that the light fell with uniform intensity on the circumferential surface of the rod. This surface was coated with BaSO4 reflectance paint to couple the light out of the rod and onto the surface of the tissue.
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distribution of 69 potentially threatening NEOs from these lists that produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking into account the impact probabilities introduced significant variation into the results and the impact probability distribution, consequently, deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and, thus, represents a man-made component that is largely disconnected from natural processes. 
It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
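The uniform-impact assumption tested above has a simple quantitative consequence that is easy to check numerically: under area-uniform impacts, the fraction of impacts in a latitude band equals that band's share of surface area. A sketch:

```python
import math, random

def uniform_sphere_points(n, seed=5):
    """Impact points uniform over a sphere's surface: longitude uniform in
    (-pi, pi], sin(latitude) uniform in (-1, 1), so equal areas are equally
    likely."""
    rng = random.Random(seed)
    return [(math.asin(rng.uniform(-1.0, 1.0)),
             rng.uniform(-math.pi, math.pi)) for _ in range(n)]

# Under the uniform-impact assumption, the fraction of impacts between
# 30 deg S and 30 deg N must equal that band's share of surface area,
# which is sin(30 deg) = 0.5.
points = uniform_sphere_points(100000)
frac = sum(1 for lat, _ in points
           if abs(lat) < math.radians(30.0)) / len(points)
print(frac)   # close to 0.5
```

Comparing such area-weighted band fractions against the probability-weighted impact points is one way to quantify the deviation from uniformity that the probability weighting introduces.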
Effect of sputtering atmosphere on the characteristics of ZrOx resistive switching memory
NASA Astrophysics Data System (ADS)
He, Pin; Ye, Cong; Wu, Jiaji; Wei, Wei; Wei, Xiaodi; Wang, Hao; Zhang, Rulin; Zhang, Li; Xia, Qing; Wang, Hanbin
2017-05-01
A ZrOx switching layer with different oxygen content for TiN/ZrOx/Pt resistive switching (RS) memory was prepared by magnetron sputtering in different atmospheres: an N2/Ar mixture, an O2/Ar mixture, and pure Ar. The morphology, structure and RS characteristics were systematically investigated, and it was found that the RS performance is highly dependent on the sputtering atmosphere. For the memory device sputtered in the N2/Ar mixture, with 8.06% nitrogen content in the ZrOx switching layer, the highest uniformity was achieved, with the narrowest distributions of the set voltage (Vset) and of the high-resistance-state (HRS)/low-resistance-state (LRS) values. By analyzing the current conduction mechanisms together with possible RS mechanisms for the three devices, we deduce that for the device with a ZrOx layer sputtered in the N2/Ar mixture, the oxygen ions (O2-), which govern the disruption/formation of the conductive filament, gather around the tip of the filament due to the nitrogen doping; this reduces the randomness of O2- migration during operation, so that the uniformity of the N-doped ZrOx device is improved.
A micromechanical approach for homogenization of elastic metamaterials with dynamic microstructure.
Muhlestein, Michael B; Haberman, Michael R
2016-08-01
An approximate homogenization technique is presented for generally anisotropic elastic metamaterials consisting of an elastic host material containing randomly distributed heterogeneities displaying frequency-dependent material properties. The dynamic response may arise from relaxation processes such as viscoelasticity or from dynamic microstructure. A Green's function approach is used to model elastic inhomogeneities embedded within a uniform elastic matrix as force sources that are excited by a time-varying, spatially uniform displacement field. Assuming dynamic subwavelength inhomogeneities only interact through their volume-averaged fields implies the macroscopic stress and momentum density fields are functions of both the microscopic strain and velocity fields, and may be related to the macroscopic strain and velocity fields through localization tensors. The macroscopic and microscopic fields are combined to yield a homogenization scheme that predicts the local effective stiffness, density and coupling tensors for an effective Willis-type constitutive equation. It is shown that when internal degrees of freedom of the inhomogeneities are present, Willis-type coupling becomes necessary on the macroscale. To demonstrate the utility of the homogenization technique, the effective properties of an isotropic elastic matrix material containing isotropic and anisotropic spherical inhomogeneities, isotropic spheroidal inhomogeneities and isotropic dynamic spherical inhomogeneities are presented and discussed.
Apparent negative mass in QCM sensors due to punctual rigid loading
NASA Astrophysics Data System (ADS)
Castro, P.; Resa, P.; Elvira, L.
2012-12-01
Quartz Crystal Microbalances (QCM) are highly sensitive piezoelectric sensors able to detect very small loads attached to them. These devices are widely employed in many applications, including process control and industrial and environmental monitoring. Mass loading is usually related to frequency shift by the well-known Sauerbrey's equation, valid for thin rigid homogeneous films. However, a significant deviation from this equation can occur when the mass is not uniformly distributed over the surface. Whereas the effects of a thin film on a QCM have been thoroughly studied, there are relatively few results on punctual loads, even though particles are usually deposited randomly and non-uniformly on the resonator surface. In this work, we have studied the effect of punctual rigid loading on the resonant frequency shift of a QCM sensor, both experimentally and using the finite element method (FEM). The FEM numerical analysis was done using COMSOL software, modeling in 3D a linear elastic piezoelectric solid and introducing the properties of an AT-cut quartz crystal. It is shown that a punctual rigid mass deposition on the surface of a QCM sensor can lead to positive shifts of the resonance frequency, contrary to Sauerbrey's equation.
SETI and SEH (Statistical Equation for Habitables)
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-01-01
The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable Planets for Man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows.
The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million±200 million, and the average distance between any couple of nearby habitable planets should be about 88 light years±40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively.
This means an ET average distance about 20 times higher than the average distance between any couple of adjacent habitable planets.
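[The lognormal result above follows from the CLT applied to the logarithm of the product of factors. A minimal numerical check, with hypothetical factor means and each factor uniform within ±10% of its mean as in the worked example:]

```python
import numpy as np

rng = np.random.default_rng(0)

# Product of ten independent positive random variables, each uniform
# within +/-10% of a (hypothetical) mean value.
n_factors, n_trials = 10, 100_000
means = rng.uniform(0.5, 1.5, size=n_factors)  # hypothetical factor means
samples = np.prod(
    [rng.uniform(0.9 * m, 1.1 * m, size=n_trials) for m in means], axis=0
)

# By the CLT, log(product) = sum of logs is approximately Gaussian,
# so the product itself is approximately lognormal.
log_s = np.log(samples)
skewness = np.mean(((log_s - log_s.mean()) / log_s.std()) ** 3)
print(f"skewness of log(product): {skewness:.3f}")  # near 0 for a Gaussian
```

[With more factors or wider spreads, the Gaussian approximation of log(NHab) only improves, which is the mechanism behind the Data Enrichment Principle.]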
Lu, Jennifer Q; Yi, Sung Soo
2006-04-25
A monolayer of gold-containing surface micelles has been produced by spin-coating solution micelles formed by the self-assembly of the gold-modified polystyrene-b-poly(2-vinylpyridine) block copolymer in toluene. After oxygen plasma removed the block copolymer template, highly ordered and uniformly sized nanoparticles were generated. Unlike other published methods that require reduction treatments to form gold nanoparticles in the zero-valent state, these as-synthesized nanoparticles are in the form of metallic gold. These gold nanoparticles have been demonstrated to be an excellent catalyst system for growing small-diameter silicon nanowires. The uniformly sized gold nanoparticles have promoted the controllable synthesis of silicon nanowires with a narrow diameter distribution. Because of the ability to form a monolayer of surface micelles with a high degree of order, evenly distributed gold nanoparticles have been produced on a surface. As a result, uniformly distributed, high-density silicon nanowires have been generated. The process described herein is fully compatible with existing semiconductor processing techniques and can be readily integrated into device fabrication.
Filippov, Alexander E; Gorb, Stanislav N
2015-02-06
One of the important problems appearing in experimental realizations of artificial adhesives inspired by gecko foot hair is so-called clusterization. If an artificially produced structure is flexible enough to allow efficient contact with natural rough surfaces, after a few attachment-detachment cycles the fibres of the structure tend to adhere to one another and form clusters. Normally, such clusters are much larger than the original fibres and, because they are less flexible, form much worse adhesive contacts, especially with rough surfaces. The main problem here is that the forces responsible for the clusterization are the same intermolecular forces which attract the fibres to the fractal surface of the substrate. However, arrays of real gecko setae are much less susceptible to this problem. One possible reason is that the ends of the setae have a more sophisticated, non-uniformly distributed three-dimensional structure than that of existing artificial systems. In this paper, we simulated numerically the three-dimensional spatial geometry of non-uniformly distributed branches of nanofibres of the setal tip, studied its attachment-detachment dynamics and discussed its advantages over a uniformly distributed geometry.
Pattern optimization of compound optical film for uniformity improvement in liquid-crystal displays
NASA Astrophysics Data System (ADS)
Huang, Bing-Le; Lin, Jin-tang; Ye, Yun; Xu, Sheng; Chen, En-guo; Guo, Tai-Liang
2017-12-01
The density dynamic adjustment algorithm (DDAA) is designed to efficiently promote the uniformity of the integrated backlight module (IBLM) by adjusting the distribution of microstructures on the compound optical film (COF); the COF is constructed in SolidWorks and simulated in TracePro. To demonstrate the universality of the proposed algorithm, the initial distribution is allocated by a Bezier curve instead of an empirical value. Simulation results show that the uniformity of the IBLM reaches over 90% after only four rounds. Moreover, the vertical and horizontal full widths at half maximum of the angular intensity are collimated to 24 deg and 14 deg, respectively. Compared with the current industry requirement, the IBLM has an 85% higher luminance uniformity of the emerging light, which demonstrates the feasibility and universality of the proposed algorithm.
Terawatt x-ray free-electron-laser optimization by transverse electron distribution shaping
Emma, C.; Wu, J.; Fang, K.; ...
2014-11-03
We study the dependence of the peak power of a 1.5 Å Terawatt (TW), tapered x-ray free-electron laser (FEL) on the transverse electron density distribution. Multidimensional optimization schemes for TW hard x-ray free-electron lasers are applied to the cases of transversely uniform and parabolic electron beam distributions and compared to a Gaussian distribution. The optimizations are performed for a 200 m undulator and a resonant wavelength of λ r = 1.5 Å using the fully three-dimensional FEL particle code GENESIS. The study shows that the flatter transverse electron distributions enhance optical guiding in the tapered section of the undulator and increase the maximum radiation power from a maximum of 1.56 TW for a transversely Gaussian beam to 2.26 TW for the parabolic case and 2.63 TW for the uniform case. Spectral data also shows a 30%–70% reduction in energy deposited in the sidebands for the uniform and parabolic beams compared with a Gaussian. An analysis of the transverse coherence of the radiation shows the coherence area to be much larger than the beam spotsize for all three distributions, making coherent diffraction imaging experiments possible.
The Unevenly Distributed Nearest Brown Dwarfs
NASA Astrophysics Data System (ADS)
Bihain, Gabriel; Scholz, Ralf-Dieter
2016-08-01
To address the questions of how many brown dwarfs there are in the Milky Way, how these objects relate to star formation, and whether the brown dwarf formation rate was different in the past, the star-to-brown dwarf number ratio can be considered. While main sequence stars are well known components of the solar neighborhood, lower mass, substellar objects increasingly add to the census of the nearest objects. The sky projection of the known objects at <6.5 pc shows that stars present a uniform distribution and brown dwarfs a non-uniform distribution, with about four times more brown dwarfs behind than ahead of the Sun relative to the direction of rotation of the Galaxy. Assuming that substellar objects are distributed uniformly, their observed configuration has a probability of 0.1%. The helio- and geocentricity of the configuration suggests that it probably results from an observational bias, which, if compensated for by future discoveries, would bring the star-to-brown dwarf ratio into agreement with the average ratio found in star forming regions.
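[The quoted 0.1% probability is the chance of so lopsided a split under a uniform distribution. A hedged sketch of the underlying binomial tail calculation, with invented counts (the abstract does not give the actual numbers, only the ~4:1 ratio):]

```python
from math import comb

def tail_prob(n_total: int, n_behind: int) -> float:
    """P(at least n_behind of n_total objects fall in one hemisphere),
    assuming each object independently lands in either hemisphere with p = 1/2."""
    return sum(comb(n_total, k) for k in range(n_behind, n_total + 1)) / 2 ** n_total

# Hypothetical census: 20 brown dwarfs behind vs 5 ahead of the Sun (4:1).
p = tail_prob(25, 20)
print(f"one-sided tail probability: {p:.4f}")  # ≈ 0.002, i.e. ~0.2%
```

[Any counts with a similar ratio give a sub-percent probability, matching the abstract's conclusion that a uniform distribution is strongly disfavored.]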
Integrated Joule switches for the control of current dynamics in parallel superconducting strips
NASA Astrophysics Data System (ADS)
Casaburi, A.; Heath, R. M.; Cristiano, R.; Ejrnaes, M.; Zen, N.; Ohkubo, M.; Hadfield, R. H.
2018-06-01
Understanding and harnessing the physics of the dynamic current distribution in parallel superconducting strips holds the key to creating next generation sensors for single molecule and single photon detection. Non-uniformity in the current distribution in parallel superconducting strips leads to low detection efficiency and unstable operation, preventing the scale up to large area sensors. Recent studies indicate that non-uniform current distributions occurring in parallel strips can be understood and modeled in the framework of the generalized London model. Here we build on this important physical insight, investigating an innovative design with integrated superconducting-to-resistive Joule switches to break the superconducting loops between the strips and thus control the current dynamics. Employing precision low temperature nano-optical techniques, we map the uniformity of the current distribution before- and after the resistive strip switching event, confirming the effectiveness of our design. These results provide important insights for the development of next generation large area superconducting strip-based sensors.
Electronic and structural properties of Bi2Se3:Cu
NASA Astrophysics Data System (ADS)
Sobczak, Kamil; Strak, Pawel; Kempisty, Pawel; Wolos, Agnieszka; Hruban, Andrzej; Materna, Andrzej; Borysiuk, Jolanta
2018-04-01
Electronic and structural properties of Bi2Se3 and its extension to copper doped Bi2Se3:Cu were studied using combined ab initio simulations and transmission electron microscopy based techniques, including electron energy loss spectroscopy, energy filtered transmission electron microscopy, and energy dispersive x-ray spectroscopy. The stability of the mixed phases was investigated for substitutional and intercalation changes of the basic Bi2Se3 structure. Four systems were compared: Bi2Se3, structures obtained by Cu intercalation of the van der Waals gap, by substitution of Bi by Cu in quintuple layers, and Cu2Se. The structures were identified and their electronic properties were obtained. Transmission electron microscopy measurements of Bi2Se3 and the Bi2Se3:Cu system identified the first structure as uniform and the second as composite, consisting of a nonuniform lower-Cu-content matrix and randomly distributed high-Cu-concentration precipitates. Critical comparison of the ab initio and experimental data identified the matrix as having a Bi2Se3 dominant part with randomly distributed Cu-intercalated regions having 1Cu-Bi2Se3 structure. The precipitates were determined to have 3Cu-Bi2Se3 structure.
Gagne, Nolan L; Cutright, Daniel R; Rivard, Mark J
2012-09-01
To improve tumor dose conformity and homogeneity for COMS plaque brachytherapy by investigating the dosimetric effects of varying component source ring radionuclides and source strengths. The MCNP5 Monte Carlo (MC) radiation transport code was used to simulate plaque heterogeneity-corrected dose distributions for individually-activated source rings of 14, 16 and 18 mm diameter COMS plaques, populated with (103)Pd, (125)I and (131)Cs sources. Ellipsoidal tumors were contoured for each plaque size and MATLAB programming was developed to generate tumor dose distributions for all possible ring weighting and radionuclide permutations for a given plaque size and source strength resolution, assuming a 75 Gy apical prescription dose. These dose distributions were analyzed for conformity and homogeneity and compared to reference dose distributions from uniformly-loaded (125)I plaques. The most conformal and homogeneous dose distributions were reproduced within a reference eye environment to assess organ-at-risk (OAR) doses in the Pinnacle(3) treatment planning system (TPS). The gamma-index analysis method was used to quantitatively compare MC and TPS-generated dose distributions. Concentrating > 97% of the total source strength in a single or pair of central (103)Pd seeds produced the most conformal dose distributions, with tumor basal doses a factor of 2-3 higher and OAR doses a factor of 2-3 lower than those of corresponding uniformly-loaded (125)I plaques. Concentrating 82-86% of the total source strength in peripherally-loaded (131)Cs seeds produced the most homogeneous dose distributions, with tumor basal doses 17-25% lower and OAR doses typically 20% higher than those of corresponding uniformly-loaded (125)I plaques. Gamma-index analysis found > 99% agreement between MC and TPS dose distributions. 
A method was developed to select intra-plaque ring radionuclide compositions and source strengths to deliver more conformal and homogeneous tumor dose distributions than uniformly-loaded (125)I plaques. This method may support coordinated investigations of an appropriate clinical target for eye plaque brachytherapy.
Impact of deformed extreme-ultraviolet pellicle in terms of CD uniformity
NASA Astrophysics Data System (ADS)
Kim, In-Seon; Yeung, Michael; Barouch, Eytan; Oh, Hye-Keun
2015-07-01
The usage of the extreme ultraviolet (EUV) pellicle is regarded as the solution for defect control, since it can protect the mask from airborne debris. However, some obstacles, such as structural weakness and thermal damage, disrupt real application of the pellicle. For these reasons, flawless fabrication of the pellicle is impossible. In this paper, we discuss the influence of a deformed pellicle in terms of non-uniform intensity distribution and critical dimension (CD) uniformity. It was found that the non-uniform intensity distribution is proportional to the local tilt angle of the pellicle, and that CD variation is linearly proportional to the transmission difference. When we consider the 16 nm line and space pattern with dipole illumination (σc=0.8, σr=0.1, NA=0.33), a transmission difference (max-min) of 0.7% causes 0.1 nm CD uniformity. The influence of gravity-induced deflection on the aerial image is small enough to ignore: CD uniformity is less than 0.1 nm even for the current gap of 2 mm between mask and pellicle. However, heat-induced wrinkling of the EUV pellicle might cause serious image distortion, because a wrinkle of the EUV pellicle causes a transmission loss variation as well as CD non-uniformity. In conclusion, the local angle of a wrinkle, not its period or amplitude, is the main factor in CD uniformity, and a local angle of less than ~270 mrad is needed to achieve 0.1 nm CD uniformity with the 16 nm L/S pattern.
Hydrostatic bearings for a turbine fluid flow metering device
Fincke, J.R.
1980-05-02
A rotor assembly fluid metering device has been improved by development of a hydrostatic bearing fluid system which provides bearing fluid at a common pressure to rotor assembly bearing surfaces. The bearing fluid distribution system produces a uniform film of fluid between bearing surfaces and allows rapid replacement of the bearing fluid, thereby minimizing bearing wear and corrosion.
Development of extended release dosage forms using non-uniform drug distribution techniques.
Huang, Kuo-Kuang; Wang, Da-Peng; Meng, Chung-Ling
2002-05-01
Development of an extended release oral dosage form for nifedipine using the non-uniform drug distribution matrix method was conducted. The process conducted in a fluid bed processing unit was optimized by controlling the concentration gradient of nifedipine in the coating solution and the spray rate applied to the non-pareil beads. The concentration of nifedipine in the coating was controlled by instantaneous dilutions of coating solution with polymer dispersion transported from another reservoir into the coating solution at a controlled rate. The USP dissolution method equipped with paddles at 100 rpm in 0.1 N hydrochloric acid solution maintained at 37 degrees C was used for the evaluation of release rate characteristics. Results indicated that (1) an increase in the ethyl cellulose content in the coated beads decreased the nifedipine release rate, (2) incorporation of water-soluble sucrose into the formulation increased the release rate of nifedipine, and (3) adjustment of the spray coating solution and the transport rate of polymer dispersion could achieve a dosage form with a zero-order release rate. Since zero-order release rate and constant plasma concentration were achieved in this study using the non-uniform drug distribution technique, further studies to determine in vivo/in vitro correlation with various non-uniform drug distribution dosage forms will be conducted.
WE-DE-201-12: Thermal and Dosimetric Properties of a Ferrite-Based Thermo-Brachytherapy Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warrell, G; Shvydka, D; Parsai, E I
Purpose: The novel thermo-brachytherapy (TB) seed provides a simple means of adding hyperthermia to LDR prostate permanent implant brachytherapy. The high blood perfusion rate (BPR) within the prostate motivates the use of the ferrite and conductive outer layer design for the seed cores. We describe the results of computational analyses of the thermal properties of this ferrite-based TB seed in modelled patient-specific anatomy, as well as studies of the interseed and scatter (ISA) effect. Methods: The anatomies (including the thermophysical properties of the main tissue types) and seed distributions of 6 prostate patients who had been treated with LDR brachytherapy seeds were modelled in the finite element analysis software COMSOL, using ferrite-based TB and additional hyperthermia-only (HT-only) seeds. The resulting temperature distributions were compared to those computed for patient-specific seed distributions, but in uniform anatomy with a constant blood perfusion rate. The ISA effect was quantified in the Monte Carlo software package MCNP5. Results: Compared with temperature distributions calculated in modelled uniform tissue, temperature distributions in the patient-specific anatomy were higher and more heterogeneous. Moreover, the maximum temperature to the rectal wall was typically ∼1 °C greater for patient-specific anatomy than for uniform anatomy. The ISA effect of the TB and HT-only seeds caused a reduction in D90 similar to that found for previously-investigated NiCu-based seeds, but of a slightly smaller magnitude. Conclusion: The differences between temperature distributions computed for uniform and patient-specific anatomy for ferrite-based seeds are significant enough that heterogeneous anatomy should be considered. Both types of modelling indicate that ferrite-based seeds provide sufficiently high and uniform hyperthermia to the prostate, without excessively heating surrounding tissues.
The ISA effect of these seeds is slightly less than that for the previously-presented NiCu-based seeds.
Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T
2013-01-01
Background The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. Objective This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. Methods We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Findings Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax.
Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T
2014-03-01
The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax.
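[The paired comparisons of price distributions described above can be sketched with synthetic data; distribution shapes and parameters below are hypothetical, and simple descriptive statistics stand in for the paper's formal two-sample tests:]

```python
import numpy as np

def skewness(x):
    """Standardized third moment of a sample."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

rng = np.random.default_rng(1)

# Hypothetical survey-derived cigarette prices (arbitrary currency units):
# a uniform specific tax yields a narrow, symmetric spread, while a
# tiered / ad valorem structure yields greater variability with mass
# shifted towards lower prices (right-skewed, modelled as lognormal).
prices_specific = rng.normal(5.0, 0.4, size=2000)
prices_tiered = rng.lognormal(np.log(5.0), 0.3, size=2000)

for name, p in [("specific", prices_specific), ("tiered", prices_tiered)]:
    print(f"{name:9s} mean={p.mean():.2f} var={p.var():.2f} skew={skewness(p):.2f}")
```

[The wider variance and positive skew of the tiered sample illustrate why such structures leave more room for brand switching towards cheaper cigarettes.]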
Aneurysm permeability following coil embolization: packing density and coil distribution
Chueh, Ju-Yu; Vedantham, Srinivasan; Wakhloo, Ajay K; Carniato, Sarena L; Puri, Ajit S; Bzura, Conrad; Coffin, Spencer; Bogdanov, Alexei A; Gounis, Matthew J
2015-01-01
Background Rates of durable aneurysm occlusion following coil embolization vary widely, and a better understanding of coil mass mechanics is desired. The goal of this study is to evaluate the impact of packing density and coil uniformity on aneurysm permeability. Methods Aneurysm models were coiled using either Guglielmi detachable coils or Target coils. The permeability was assessed by taking the ratio of microspheres passing through the coil mass to those in the working fluid. Aneurysms containing coil masses were sectioned for image analysis to determine surface area fraction and coil uniformity. Results All aneurysms were coiled to a packing density of at least 27%. Packing density, surface area fraction of the dome and neck, and uniformity of the dome were significantly correlated (p<0.05). Hence, multivariate principal components-based partial least squares regression models were used to predict permeability. Similar loading vectors were obtained for packing and uniformity measures. Coil mass permeability was modeled better with the inclusion of packing and uniformity measures of the dome (r2=0.73) than with packing density alone (r2=0.45). The analysis indicates the importance of including a uniformity measure for coil distribution in the dome along with packing measures. Conclusions A densely packed aneurysm with a high degree of coil mass uniformity will reduce permeability.
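[The gain in r² from adding a uniformity predictor can be illustrated with a simplified sketch: ordinary least squares stands in for the paper's principal components-based partial least squares, and the data below are synthetic (a hypothetical ground truth where permeability depends on both packing and uniformity):]

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical coil-mass data: permeability falls with both packing
# density and dome uniformity, plus small measurement noise.
n = 40
packing = rng.uniform(0.27, 0.40, size=n)
uniformity = rng.uniform(0.5, 1.0, size=n)
permeability = 0.8 - 1.2 * packing - 0.4 * uniformity + rng.normal(0, 0.02, n)

def r_squared(X, y):
    """Coefficient of determination of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_packing = r_squared(packing[:, None], permeability)
r2_both = r_squared(np.column_stack([packing, uniformity]), permeability)
print(f"packing only:         r^2 = {r2_packing:.2f}")
print(f"packing + uniformity: r^2 = {r2_both:.2f}")
```

[As in the study, the model including a uniformity measure explains substantially more of the permeability variance than packing density alone.]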
A new approach for the description of discharge extremes in small catchments
NASA Astrophysics Data System (ADS)
Pavia Santolamazza, Daniela; Lebrenz, Henning; Bárdossy, András
2017-04-01
Small catchment basins in Northwestern Switzerland, characterized by small concentration times, are frequently targeted by floods. The peak and the volume of these floods are commonly estimated by a frequency analysis of occurrence and described by a random variable, assuming a uniformly distributed probability and stationary input drivers (e.g. precipitation, temperature). For these small catchments, we attempt to describe and identify the underlying mechanisms and dynamics at the occurrence of extremes by means of available high-temporal-resolution (10 min) observations, and to explore the possibilities of regionalizing hydrological parameters for short intervals. Therefore, we investigate new concepts for the flood description, such as entropy as a measure of disorder and dispersion of precipitation. First findings and conclusions of this ongoing research are presented.
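[One way to realize an entropy measure of precipitation dispersion is the Shannon entropy of the fractional depths within an event; a minimal sketch with hypothetical 10-minute depths (the actual measure used in this ongoing research may differ):]

```python
import numpy as np

def precip_entropy(depths):
    """Shannon entropy of the fractional precipitation depths of an event.

    Rain spread evenly over n intervals gives the maximum log(n);
    a single intense burst gives an entropy near zero, so low entropy
    flags the concentrated, flood-prone events.
    """
    p = np.asarray(depths, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

# Hypothetical 10-minute depths (mm) over one hour:
uniform_event = [2, 2, 2, 2, 2, 2]  # evenly spread rainfall
burst_event = [0, 0, 11, 1, 0, 0]   # one intense burst
print(f"uniform: {precip_entropy(uniform_event):.3f}")  # log(6) ≈ 1.792
print(f"burst:   {precip_entropy(burst_event):.3f}")
```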
Recent Progress in Nanoelectrical Characterizations of CdTe and Cu(In,Ga)Se2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Chun-Sheng; To, Bobby; Glynn, Stephen
2016-11-21
We report two recent nanoelectrical characterizations of CdTe and Cu(In, Ga)Se2 (CIGS) thin-film solar cells by developing atomic force microscopy-based nanoelectrical probes. Charges trapped at defects at the CdS/CdTe interface were probed by Kelvin probe force microscopy (KPFM) potential mapping and by ion-milling the CdTe superstrate device in a bevel glancing angle of ~0.5 degrees. The results show randomly distributed donor-like defects at the interface. The effect of K post-deposition treatment on the near-surface region of the CIGS film was studied by KPFM potential and scanning spreading resistance microscopy (SSRM) resistivity mapping, which shows passivation of grain-boundary potential and improvement of resistivity uniformity by the K treatment.
NASA Astrophysics Data System (ADS)
Theodorsen, Audun; Garcia, Odd Erik; Kube, Ralph; Labombard, Brian; Terry, Jim
2017-10-01
In the far scrape-off layer (SOL), radial motion of filamentary structures leads to excess transport of particles and heat. Amplitudes and arrival times of these filaments have previously been studied by conditional averaging in single-point measurements from Langmuir probes and Gas Puff Imaging (GPI). Conditional averaging can be problematic: the cutoff for large amplitudes is mostly chosen by convention; the conditional windows used may influence the arrival time distribution; and the amplitudes cannot be separated from a background. Previous work has shown that SOL fluctuations are well described by a stochastic model consisting of a superposition of pulses with fixed shape and randomly distributed amplitudes and arrival times. The model can be formulated as a pulse shape convolved with a train of delta pulses. By choosing a pulse shape consistent with the power spectrum of the fluctuation time series, Richardson-Lucy deconvolution can be used to recover the underlying amplitudes and arrival times of the delta pulses. We apply this technique to both L and H-mode GPI data from the Alcator C-Mod tokamak. The pulse arrival times are shown to be uncorrelated and uniformly distributed, consistent with a Poisson process, and the amplitude distribution has an exponential tail.
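[The deconvolution step can be sketched in plain NumPy; the pulse shape, decay time and arrival statistics below are illustrative, not the C-Mod values:]

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    """1D Richardson-Lucy deconvolution for non-negative data.

    Iteratively sharpens `observed` towards the underlying train of
    delta pulses, assuming observed = deltas (*) psf.
    """
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# One-sided exponential pulse shape (decay time 8 samples), zero-padded so
# the pulse onset sits at the kernel centre for mode="same" alignment.
pulse = np.exp(-np.arange(32) / 8.0)
psf = np.concatenate([np.zeros(31), pulse])

rng = np.random.default_rng(2)
n = 512
deltas = np.zeros(n)
arrivals = rng.integers(16, n - 48, size=20)       # uniform arrival times
deltas[arrivals] = rng.exponential(1.0, size=20)   # exponential amplitudes
observed = np.convolve(deltas, psf / psf.sum(), mode="same")

recovered = richardson_lucy(observed, psf)
```

[On this noiseless synthetic series, the recovered signal concentrates near the true arrival times; real GPI data additionally require a pulse shape estimated from the power spectrum, as described in the abstract.]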
Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin
2014-01-01
In the design phase of sensor arrays for array signal processing, the estimation performance and system cost are largely determined by the array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions encountered in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or only on a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to establish a random linear mapping between the signals observed by these two arrays. The signal ensembles, including the common and innovation sources for the different SLAs, are abstracted as a joint spatial sparsity model, and the joint DOA estimation problem is solved by minimizing a concatenated atomic norm via semidefinite programming. Joint processing of the signals observed by all the SLAs exploits the redundancy caused by the common sources and reduces the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150
Randomly biased investments and the evolution of public goods on interdependent networks
NASA Astrophysics Data System (ADS)
Chen, Wei; Wu, Te; Li, Zhiwu; Wang, Long
2017-08-01
Deciding how to allocate resources between interdependent systems is significant for optimizing efficiency. We study the effects of heterogeneous contribution, induced by such interdependency, on the evolution of cooperation by implementing public goods games on two-layer networks. The corresponding players on different layers try to share a fixed amount of resources as the initial investment. The symmetry breaking of investments between players located on different layers can either keep investments out of deadlock or extract them from it. Results show that a moderate investment heterogeneity is most favorable for the evolution of cooperation, and that random allocation of investment bias suppresses cooperators over a wide range of the investment bias and the enhancement effect. Further studies of time evolution with different initial strategy configurations show that non-interdependent cooperators along the interface of interdependent cooperators are also an indispensable factor in facilitating cooperative behavior. Our main results remain qualitatively unchanged even when the investment bias is diversified according to a uniform distribution. Our study may shed light on the understanding of the origin of cooperative behavior on interdependent networks.
The Supermarket Model with Bounded Queue Lengths in Equilibrium
NASA Astrophysics Data System (ADS)
Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.
2018-04-01
In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^{−α} and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^{−α+(k−1)β}, and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
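The join-the-shortest-of-d dynamics can be illustrated with a toy event-driven simulation; the fixed n, λ, and d below are arbitrary small values for demonstration, not the paper's scaling regime:

```python
import random

def simulate_supermarket(n=200, lam=0.9, d=2, n_events=200_000, seed=42):
    """Event-driven simulation: Poisson arrivals at total rate lam*n,
    unit-rate exponential services, each arrival joins the least-loaded
    of d servers chosen uniformly at random."""
    rng = random.Random(seed)
    queues = [0] * n
    busy = 0                      # number of non-empty queues
    for _ in range(n_events):
        # Next event is an arrival w.p. lam*n / (lam*n + busy), else a departure.
        if rng.random() < lam * n / (lam * n + busy):
            i = min(rng.sample(range(n), d), key=queues.__getitem__)
            if queues[i] == 0:
                busy += 1
            queues[i] += 1
        else:
            while True:           # pick a uniformly random busy server
                j = rng.randrange(n)
                if queues[j] > 0:
                    break
            queues[j] -= 1
            if queues[j] == 0:
                busy -= 1
    return queues

queues = simulate_supermarket()
```

Even at load λ = 0.9, the power-of-two-choices rule keeps the maximum queue length very small, which is the qualitative effect the paper quantifies in its n-dependent regime.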
Phase information contained in meter-scale SAR images
NASA Astrophysics Data System (ADS)
Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda
2007-10-01
The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high-resolution urban remote sensing scenes, where a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" for analyzing existing phase patterns that extend over neighboring pixels.
Random Walk Particle Tracking For Multiphase Heat Transfer
NASA Astrophysics Data System (ADS)
Lattanzi, Aaron; Yin, Xiaolong; Hrenya, Christine
2017-11-01
As computing capabilities have advanced, direct numerical simulation (DNS) has become a highly effective tool for quantitatively predicting the heat transfer within multiphase flows. Here we utilize a hybrid DNS framework that couples the lattice Boltzmann method (LBM) to the random walk particle tracking (RWPT) algorithm. The main difficulty of such a hybrid is that discontinuous fields pose a significant challenge to the RWPT framework, and special attention must be given to the handling of interfaces. We derive a method for addressing discontinuities in the diffusivity field arising at the interface between two phases. Analytical means are utilized to develop an interfacial tracer balance and modify the RWPT algorithm. By expanding the modulus of the stochastic (diffusive) step and only allowing a subset of the tracers within the high-diffusivity medium to undergo a diffusive step, the correct equilibrium state (a globally homogeneous tracer distribution) can be restored. The new RWPT algorithm is implemented within the SUSP3D code and verified against a variety of systems: effective diffusivity of a static gas-solids mixture, a hot sphere in unbounded diffusion, a cooling sphere in unbounded diffusion, and uniform flow past a hot sphere.
The origin of bursts and heavy tails in human dynamics.
Barabási, Albert-László
2005-05-12
The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behaviour into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. In contrast, there is increasing evidence that the timing of many human activities, ranging from communication to entertainment and work patterns, follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. Here I show that the bursty nature of human behaviour is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed, whereas a few experience very long waiting times. In contrast, random or priority-blind execution is well approximated by uniform inter-event statistics. These findings have important implications, ranging from resource management to service allocation, in both communications and retail.
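The decision-based queuing process can be sketched in a few lines; the list length, selection probability, and step count below are arbitrary illustrative choices rather than the paper's fitted values, but they reproduce the qualitative signature (most tasks executed immediately, a few waiting very long):

```python
import random

def priority_queue_waits(n_steps=50_000, list_len=2, p=0.9999, seed=7):
    """Barabasi-style model: with probability p execute the highest-priority
    task, otherwise a random one; each executed task is replaced by a fresh
    task with a uniform random priority. Returns per-task waiting times."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_len)]  # (priority, arrival step)
    waits = []
    for step in range(1, n_steps + 1):
        if rng.random() < p:
            # Deterministic, priority-driven execution
            i = max(range(list_len), key=lambda k: tasks[k][0])
        else:
            # Occasional priority-blind execution
            i = rng.randrange(list_len)
        waits.append(step - tasks[i][1])
        tasks[i] = (rng.random(), step)
    return waits

waits = priority_queue_waits()
```

Setting p near 0 instead makes the execution priority-blind and the waiting times concentrate, which is the contrast the abstract draws.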
NASA Astrophysics Data System (ADS)
Korobov, A.
2011-08-01
Discrete uniform Poisson-Voronoi tessellations of two-dimensional triangular tilings resulting from the Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth of triangular islands have been studied. This shape of tiles and islands, rarely considered in the field of random tessellations, is prompted by the birth-growth process of Ir(210) faceting. The growth mode determines a triangular metric different from the Euclidean metric. Kinetic characteristics of tessellations appear to be metric sensitive, in contrast to area distributions. The latter have been studied for the variant of nuclei growth to the first impingement in addition to the conventional case of complete growth. Kiang conjecture works in both cases. The averaged number of neighbors is six for all studied densities of random tessellations, but neighbors appear to be mainly different in triangular and Euclidean metrics. Also, the applicability of the obtained results for simulating birth-growth processes when the 2D nucleation and impingements are combined with the 3D growth in the particular case of similar shape and the same orientation of growing nuclei is briefly discussed.
A novel look at the pulsar force-free magnetosphere
NASA Astrophysics Data System (ADS)
Petrova, S. A.; Flanchik, A. B.
2018-03-01
The stationary axisymmetric force-free magnetosphere of a pulsar is considered. We present an exact dipolar solution of the pulsar equation, construct the magnetospheric model on its basis and examine its observational support. The new model has toroidal rather than common cylindrical geometry, in line with that of the plasma outflow observed directly as the pulsar wind nebula at much larger spatial scale. In its new configuration, the axisymmetric magnetosphere consumes the neutron star rotational energy much more efficiently, implying re-estimation of the stellar magnetic field, B_{new}0=3.3×10^{-4}B/P, where P is the pulsar period. Then the 7-order scatter of the magnetic field derived from the rotational characteristics of the pulsars observed appears consistent with the cot χ law, where χ is a random quantity uniformly distributed in the interval [0, π/2]. Our result is suggestive of a unique actual magnetic field strength of the neutron stars along with a random angle between the magnetic and rotational axes and gives insight into the neutron star unification on a geometrical basis.
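The scatter implied by a cot χ law can be checked with a quick numerical sketch (the sample size is arbitrary): for χ uniform on [0, π/2], cot χ spans many orders of magnitude, the kind of spread the abstract attributes to a random inclination angle rather than to intrinsic field variation.

```python
import numpy as np

# chi uniform on [0, pi/2]; cot(chi) then spans many decades.
rng = np.random.default_rng(0)
chi = rng.uniform(0.0, np.pi / 2, 1_000_000)
cot_chi = 1.0 / np.tan(chi)

log_vals = np.log10(cot_chi)
# Spread between the 0.1% and 99.9% quantiles, in orders of magnitude
spread = np.quantile(log_vals, 0.999) - np.quantile(log_vals, 0.001)
```

The median of cot χ is 1 (at χ = π/4), so the distribution is symmetric in log space around a fixed scale while its tails generate the large apparent scatter.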
Elastic properties of woven bone: effect of mineral content and collagen fibrils orientation.
García-Rodríguez, J; Martínez-Reina, J
2017-02-01
Woven bone is a type of tissue that forms mainly during fracture healing or fetal bone development. Its microstructure can be modeled as a composite with a matrix of mineral (hydroxyapatite) and inclusions of collagen fibrils with a more or less random orientation. In the present study, its elastic properties were estimated as a function of composition (degree of mineralization) and fibril orientation. A self-consistent homogenization scheme considering randomness of inclusions' orientation was used for this purpose. Lacuno-canalicular porosity in the form of periodically distributed void inclusions was also considered. Assuming collagen fibrils to be uniformly oriented in all directions led to an isotropic tissue with a Young's modulus [Formula: see text] GPa, which is of the same order of magnitude as that of woven bone in fracture calluses. By contrast, assuming fibrils to have a preferential orientation resulted in a Young's modulus in the preferential direction of 9-16 GPa depending on the mineral content of the tissue. These results are consistent with experimental evidence for woven bone in foetuses, where collagen fibrils are aligned to a certain extent.
CFD simulation of the gas flow in a pulse tube cryocooler with two pulse tubes
NASA Astrophysics Data System (ADS)
Yin, C. L.
2015-12-01
In this paper, in order to guide the subsequent optimization work, a two-dimensional computational fluid dynamics (CFD) model is developed to simulate the temperature and velocity distributions of the oscillating fluid in the DPTC under individual phase-shifting. It is found that the axial temperature distribution of the regenerator is generally uniform and that the temperatures near the center of the same cross section of the two pulse tubes are clearly higher than the corresponding near-wall temperatures. A wall temperature difference of about 0-7 K exists between the two pulse tubes. The velocity distribution near the center of the regenerator is uniform, and an obvious injection stream enters at the center of the pulse tubes from the hot end. The causes of these temperature and velocity distributions are explained.
Malinouski, Mikalai; Kehr, Sebastian; Finney, Lydia; Vogt, Stefan; Carlson, Bradley A.; Seravalli, Javier; Jin, Richard; Handy, Diane E.; Park, Thomas J.; Loscalzo, Joseph; Hatfield, Dolph L.
2012-01-01
Aim: Recent advances in quantitative methods and sensitive imaging techniques of trace elements provide opportunities to uncover and explain their biological roles. In particular, the distribution of selenium in tissues and cells under both physiological and pathological conditions remains unknown. In this work, we applied high-resolution synchrotron X-ray fluorescence microscopy (XFM) to map selenium distribution in mouse liver and kidney. Results: Liver showed a uniform selenium distribution that was dependent on selenocysteine tRNA[Ser]Sec and dietary selenium. In contrast, kidney selenium had both uniformly distributed and highly localized components, the latter visualized as thin circular structures surrounding proximal tubules. Other parts of the kidney, such as glomeruli and distal tubules, only manifested the uniformly distributed selenium pattern that co-localized with sulfur. We found that proximal tubule selenium localized to the basement membrane. It was preserved in Selenoprotein P knockout mice, but was completely eliminated in glutathione peroxidase 3 (GPx3) knockout mice, indicating that this selenium represented GPx3. We further imaged kidneys of another model organism, the naked mole rat, which showed a diminished uniformly distributed selenium pool, but preserved the circular proximal tubule signal. Innovation: We applied XFM to image selenium in mammalian tissues and identified a highly localized pool of this trace element at the basement membrane of kidneys that was associated with GPx3. Conclusion: XFM allowed us to define and explain the tissue topography of selenium in mammalian kidneys at submicron resolution. Antioxid. Redox Signal. 16, 185–192. PMID:21854231
High-voltage electrode optimization towards uniform surface treatment by a pulsed volume discharge
NASA Astrophysics Data System (ADS)
Ponomarev, A. V.; Pedos, M. S.; Scherbinin, S. V.; Mamontov, Y. I.; Ponomarev, S. V.
2015-11-01
In this study, the shape and material of the high-voltage electrode of an atmospheric-pressure plasma generation system were optimised. The research was performed with the goal of achieving maximum uniformity of plasma treatment of the surface of the low-voltage electrode, which has a diameter of 100 mm. In order to generate low-temperature plasma with a volume of roughly 1 cubic decimetre, a pulsed volume discharge initiated by a corona discharge was used. The uniformity of the plasma in the region of the low-voltage electrode was assessed using a system for measuring the distribution of discharge current density. The system's low-voltage electrode (the collector) was a disc 100 mm in diameter, the conducting surface of which was divided into 64 radially located segments of equal surface area. The current at each segment was registered by a high-speed measuring system controlled by an ARM™-based 32-bit microcontroller. To facilitate the interpretation of the results obtained, a computer program was developed to visualise them; it provides a 3D image of the current density distribution on the surface of the low-voltage electrode. Based on the results obtained, an optimum shape for the high-voltage electrode was determined. The uniformity of the distribution of discharge current density was studied in relation to the distance between the electrodes, and it was shown that the level of non-uniformity of the current density distribution depends on the size of the gap between the electrodes. Experiments indicated that it is advantageous to use graphite felt VGN-6 (Russian abbreviation) as the material of the high-voltage electrode's emitting surface.
NASA Astrophysics Data System (ADS)
Parker-Stetter, Sandra; Urmy, Samuel; Horne, John; Eisner, Lisa; Farley, Edward
2016-12-01
Hypotheses on the factors affecting forage fish species distributions are often proposed but rarely evaluated using a comprehensive suite of indices. Using 24 predictor indices, we compared competing hypotheses and calculated average models for the distributions of capelin, age-0 Pacific cod, and age-0 pollock in the eastern Bering Sea from 2006 to 2010. Distribution was described using a two stage modeling approach: probability of occurrence ("presence") and density when fish were present. Both local (varying by location and year) and annual (uniform in space but varying by year) indices were evaluated, the latter accounting for the possibility that distributions were random but that overall presence or densities changed with annual conditions. One regional index, distance to the location of preflexion larvae earlier in the year, was evaluated for age-0 pollock. Capelin distributions were best predicted by local indices such as bottom depth, temperature, and salinity. Annual climate (May sea surface temperature (SST), sea ice extent anomaly) and wind (June wind speed cubed) indices were often important for age-0 Pacific cod in addition to local indices (temperature and depth). Surface, midwater, and water column age-0 pollock distributions were best described by a combination of local (depth, temperature, salinity, zooplankton) and annual (May SST, sea ice anomaly, June wind speed cubed) indices. Our results corroborated some of those in previous distribution studies, but suggested that presence and density may also be influenced by other factors. Even though there were common environmental factors that influenced all species' distributions, it is not possible to generalize conditions for forage fish as a group.
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference-to-focal-group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution reduced the power of the MIMIC model for detecting uniform-DIF by 0.33% and 0.47%, respectively. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small. PMID:28713828
Criterion-free measurement of motion transparency perception at different speeds
Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.
2018-01-01
Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contribute to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an “odd-one-out,” three-alternative forced-choice procedure. Two intervals contained the standard—a random-dot-kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison—speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154
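The standard/comparison stimulus construction described above (uniform speed distribution versus the same range with a notch removed) can be sketched with rejection sampling; the speed range and notch bounds below are arbitrary illustrative values, not the study's actual parameters:

```python
import random

def sample_speeds(n, lo, hi, notch=None, seed=0):
    """Sample n dot speeds uniformly on [lo, hi]; if notch = (a, b) is given,
    resample any value falling inside that sub-interval, yielding a
    'comparison' stimulus with a notch of speeds removed."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        s = rng.uniform(lo, hi)
        if notch and notch[0] < s < notch[1]:
            continue  # reject speeds inside the notch
        out.append(s)
    return out

# Standard: full uniform range; comparison: same range minus a central notch.
standard = sample_speeds(500, 1.0, 9.0)
comparison = sample_speeds(500, 1.0, 9.0, notch=(4.0, 6.0))
```

Widening the notch pushes the comparison toward two discrete speed bands, which is the manipulation that lets a forced-choice observer pick the odd interval without a subjective transparency criterion.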
Enhancement of a 2D front-tracking algorithm with a non-uniform distribution of Lagrangian markers
NASA Astrophysics Data System (ADS)
Febres, Mijail; Legendre, Dominique
2018-04-01
The 2D front tracking method is enhanced to control the development of spurious velocities for non-uniform distributions of markers. The hybrid formulation of Shin et al. (2005) [7] is considered. A new tangent calculation is proposed for the computation of the tension force at markers. A new reconstruction method is also proposed to manage non-uniform distributions of markers. We show that for both the static and the translating spherical drop test cases the spurious currents are reduced to machine precision. We also show that the ratio of the Lagrangian grid size Δs to the Eulerian grid size Δx has to satisfy Δs/Δx > 0.2 to ensure such a low level of spurious velocities. The method is found to provide very good agreement with benchmark test cases from the literature.
Results on Vertex Degree and K-Connectivity in Uniform S-Intersection Graphs
2014-01-01
distribution. A uniform s-intersection graph models the topology of a secure wireless sensor network employing the widely used s-composite key predistribution scheme. Our theoretical findings are also confirmed by numerical results.
3D reconstruction from non-uniform point clouds via local hierarchical clustering
NASA Astrophysics Data System (ADS)
Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo
2017-07-01
Raw scanned 3D point clouds are usually irregularly distributed due to the essential shortcomings of laser sensors, which therefore poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity and the latter transforms the non-uniform point set into uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
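A greatly simplified stand-in for the decomposition-plus-clustering idea can be sketched as follows: partition space into cells and replace each cell's points by their centroid. This is not the paper's LHC algorithm (which uses an adaptive octree and hierarchical clustering); the fixed cell size and synthetic cloud are invented for illustration:

```python
import numpy as np

def voxel_uniformize(points, cell):
    """Collapse all points in each occupied cubic cell to the cell centroid,
    turning a non-uniform cloud into a more uniformly distributed one."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, points.shape[1]))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)   # accumulate coordinates per cell
    np.add.at(counts, inverse, 1)      # count points per cell
    return sums / counts[:, None]

rng = np.random.default_rng(0)
# Non-uniform cloud: a dense cluster plus a sparse background
cloud = np.vstack([rng.normal(0, 0.05, (2000, 3)),
                   rng.uniform(-1, 1, (200, 3))])
uniform_cloud = voxel_uniformize(cloud, cell=0.1)
```

Dense regions are collapsed heavily while sparse regions are nearly untouched, so the output has at most one representative point per occupied cell.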
Visualization of self-heating of an all climate battery by infrared thermography
NASA Astrophysics Data System (ADS)
Zhang, Guangsheng; Tian, Hua; Ge, Shanhai; Marple, Dan; Sun, Fengchun; Wang, Chao-Yang
2018-02-01
Self-heating Li-ion battery (SHLB), a.k.a. all climate battery, has provided a novel and practical solution to the low temperature power loss challenge. During its rapid self-heating, it is critical to keep the heating process and temperature distributions uniform for superior battery performance, durability and safety. Through infrared thermography of an experimental SHLB cell activated from various low ambient temperatures, we find that temperature distribution is uniform over the active electrode area, suggesting uniform heating. We also find that a hot spot exists at the activation terminal during self-heating, which provides diagnostics for improvement of next generation SHLB cells without the hot spot.
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method.
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
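One plausible reading of "uniformly sampling the genetic space" is greedy maximin (farthest-point) selection in a marker-derived coordinate space. The sketch below uses synthetic coordinates and is an assumption-laden stand-in, not the paper's actual protocol (which also evaluates stratified and CD-based selection):

```python
import numpy as np

def farthest_point_sample(X, k, seed=0):
    """Greedy maximin selection: pick k rows of X that spread as uniformly
    as possible over the space spanned by the candidates (e.g. genotypes
    in a PCA space of markers)."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[idx[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))           # farthest point from current set
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(idx)

def cover_radius(X, idx):
    # Largest distance from any candidate to its nearest selected point.
    d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
    return float(d.min(axis=1).max())

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))             # stand-in for genotype coordinates
train_idx = farthest_point_sample(X, 50)
fps_cover = cover_radius(X, train_idx)
rand_cover = cover_radius(X, rng.choice(500, size=50, replace=False))
```

The cover radius (maximum distance from any candidate to its nearest training genotype) is a simple proxy for how well the training set represents the target space; maximin selection typically covers the space better than random sampling when population structure is present.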
Design and testing of a uniformly solar energy TIR-R concentration lenses for HCPV systems.
Shen, S C; Chang, S J; Yeh, C Y; Teng, P C
2013-11-04
In this paper, a uniform total internal reflection-refraction (TIR-R) concentration (U-TIR-R-C) lens module was designed using the energy configuration method to eliminate hot spots on the surface of the solar cell and increase conversion efficiency. The design of most current solar concentrators emphasizes high-power concentration of solar energy but neglects the conversion inefficiency resulting from hot spots generated by uneven distributions of the solar energy concentrated on solar cells. The energy configuration method proposed in this study employs the concept of ray tracing to distribute solar energy uniformly over solar cells through a U-TIR-R-C lens module. The U-TIR-R-C lens module adopted in this study had a 76-mm diameter, a 41-mm thickness, a concentration ratio of 1134 Suns, 82.6% optical efficiency, and 94.7% uniformity. The experiments demonstrated that the U-TIR-R-C lens module reduced the core temperature of the solar cell from 108 °C to 69 °C and the overall temperature difference from 45 °C to 10 °C, and effectively increased the relative conversion efficiency by approximately 3.8%. Therefore, the U-TIR-R-C lens module can effectively concentrate a large area of sunlight onto a small solar cell, and the concentrated solar energy can be evenly distributed over the solar cell to achieve uniform irradiance and effectively eliminate hot spots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, R.
1997-09-01
In inertial confinement fusion (ICF) experiments, irradiation uniformity is improved by passing laser beams through distributed phase plates (DPPs), which produce focused intensity profiles with well-controlled, reproducible envelopes modulated by fine random speckle. [C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Y. Kato and K. Mima, Appl. Phys. B 29, 186 (1982); Y. Kato et al., Phys. Rev. Lett. 53, 1057 (1984); Laboratory for Laser Energetics LLE Review 33, NTIS Document No. DOE/DP/40200-65, 1987 (unpublished), p. 1; Laboratory for Laser Energetics LLE Review 63, NTIS Document No. DOE/SF/19460-91, 1995 (unpublished), p. 1.] A uniformly ablating plasma atmosphere acts to reduce the contribution of the speckle to the time-averaged irradiation nonuniformity by causing the intensity distribution to move relative to the absorption layer of the plasma. This occurs most directly as the absorption layer in the plasma moves with the ablation-driven flow, but it is shown that the effect of the accumulating ablated plasma on the phase of the laser light also makes a quantitatively significant contribution. Analytical results are obtained using the paraxial approximation applied to the beam propagation, and a simple statistical model is assumed for the properties of DPPs. The reduction in the time-averaged spatial spectrum of the speckle due to these effects is shown to be quantitatively significant within time intervals characteristic of atmospheric hydrodynamics under typical ICF irradiation intensities. © 1997 American Institute of Physics.
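The smoothing effect of speckle motion relative to the absorption layer can be caricatured numerically: fully developed speckle has unit intensity contrast, and averaging decorrelated shifted realisations reduces it roughly as 1/√N. The grid size and cyclic-shift scheme below are crude stand-ins for the paper's hydrodynamic statistics:

```python
import numpy as np

def speckle_intensity(n=256, seed=0):
    """Idealised DPP speckle: far-field intensity of a unit-amplitude field
    with independent uniform random phases (fully developed speckle)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    field = np.fft.fft2(np.exp(1j * phase))
    intensity = np.abs(field) ** 2
    return intensity / intensity.mean()

single = speckle_intensity()
# Crude stand-in for the pattern sweeping across the absorption layer:
# average the same speckle pattern at 16 decorrelated shifts.
frames = [np.roll(single, shift, axis=0) for shift in range(16)]
averaged = np.mean(frames, axis=0)

contrast_single = single.std() / single.mean()
contrast_avg = averaged.std() / averaged.mean()
```

With 16 decorrelated frames the contrast drops to roughly a quarter of its single-frame value, illustrating why relative motion between the speckle and the absorbing plasma reduces the time-averaged nonuniformity.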
Reiter, Michael J; Nemesure, Allison; Madu, Ezemonye; Reagan, Lisa; Plank, April
2018-06-01
To describe the frequency, distribution, and reporting patterns of incidental findings receiving the Lung-RADS S modifier on low-dose chest computed tomography (CT) among lung cancer screening participants. This retrospective investigation included 581 individuals who received baseline low-dose chest CT for lung cancer screening between October 2013 and June 2017 at a single center. Incidental findings resulting in assignment of the Lung-RADS S modifier were recorded, as were incidental abnormalities detailed only within the body of the radiology report. A subset of 60 randomly selected CTs was reviewed by a second (blinded) radiologist to evaluate inter-rater variability of Lung-RADS reporting. A total of 261 (45%) participants received the Lung-RADS S modifier on baseline CT, with 369 incidental findings indicated as potentially clinically significant. Coronary artery calcification was most commonly reported, accounting for 182 of the 369 (49%) findings. An additional 141 incidentalomas of the same types as these 369 findings were described in reports but were not labelled with the S modifier. Therefore, as many as 69% (402 of 581) of participants could have received the S modifier if reporting were uniform. Inter-radiologist concordance of S-modifier reporting in a subset of 60 participants was poor (42% agreement, kappa = 0.2). Incidental findings are commonly identified on chest CT for lung cancer screening, yet reporting of the S modifier within Lung-RADS is inconsistent. Specific guidelines are necessary to better define potentially clinically significant abnormalities and to improve reporting uniformity. Copyright © 2018 Elsevier B.V. All rights reserved.
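The inter-rater concordance reported above (42% raw agreement, kappa = 0.2) is the chance-corrected Cohen's kappa. A minimal sketch of the statistic, with hypothetical rater labels (the study's per-case calls are not given in the abstract):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of cases where the raters coincide.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the raters' independent label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical S-modifier calls (1 = assigned, 0 = not) by two radiologists.
calls_a = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
calls_b = [1, 0, 0, 1, 1, 0, 0, 0, 0, 1]
kappa = cohens_kappa(calls_a, calls_b)
```

A kappa near 0.2 indicates agreement only slightly better than chance, even when raw agreement looks moderate.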
Sol-Gel Glass Holographic Light-Shaping Diffusers
NASA Technical Reports Server (NTRS)
Yu, Kevin; Lee, Kang; Savant, Gajendra; Yin, Khin Swe (Lillian)
2005-01-01
Holographic glass light-shaping diffusers (GLSDs) are optical components for use in special-purpose illumination systems (see figure). When properly positioned with respect to lamps and areas to be illuminated, holographic GLSDs efficiently channel light from the lamps onto specified areas with specified distributions of illumination; for example, uniform or nearly uniform irradiance can be concentrated, with intensity confined to a peak a few degrees wide about normal incidence, over a circular or elliptical area. Holographic light diffusers were developed during the 1990s. The development of the present holographic GLSDs extends the prior development to incorporate sol-gel optical glass. To fabricate a holographic GLSD, one records a hologram on a sol-gel silica film formulated specially for this purpose. The hologram is a quasi-random, micro-sculpted pattern of smoothly varying changes in the index of refraction of the glass. The structures in this pattern act as an array of numerous miniature lenses that refract light passing through the GLSD, such that the transmitted light beam exhibits a precisely tailored energy distribution. In comparison with other light diffusers, holographic GLSDs function with remarkably high efficiency: they typically transmit 90 percent or more of the incident lamp light onto the designated areas. In addition, they can withstand temperatures in excess of 1,000 °C. These characteristics make holographic GLSDs attractive for use in diverse lighting applications that involve high temperatures and/or requirements for high transmission efficiency for ultraviolet, visible, and near-infrared light. Examples include projectors, automobile headlights, aircraft landing lights, high-power laser illuminators, and industrial and scientific illuminators.
Electron transport through triangular potential barriers with doping-induced disorder
NASA Astrophysics Data System (ADS)
Elpelt, R.; Wolst, O.; Willenberg, H.; Malzer, S.; Döhler, G. H.
2004-05-01
Electron transport through single-, double-, and triple-barrier structures created by the insertion of suitably δ-doped layers in GaAs is investigated. The results are compared with experiments on barriers of similar shape, but obtained by linear grading of the Al fraction x in AlxGa1-xAs structures. In the case of the doping-induced space-charge potential it is found that the effective barrier height for transport is much lower than expected from a simple model, in which uniform distribution of the doping charge within the doped layers is assumed. This reduction is quantitatively explained by taking into account the random distribution of the acceptor atoms within the δp-doped layers, which results in large spatial fluctuations of the barrier potential. The transport turns out to be dominated by small regions around the energetically lowest saddle points of the random space-charge potential. Additionally, independently of the dimensionality of the transport [three-dimensional (3D) to 3D in the single barrier, from 3D through 2D to 3D in the double barrier, and from 3D through 2D through 2D to 3D in the triple-barrier structure], fingerprints of 2D subband resonances are neither experimentally observed nor theoretically expected in the doping-induced structures. This is attributed to the disorder-induced random spatial fluctuations of the subband energies in the n layers, which are uncorrelated for neighboring layers. Our interpretations of the temperature-dependent current-voltage characteristics are corroborated by comparison with the experimental and theoretical results obtained from the corresponding fluctuation-free AlxGa1-xAs structures. Quantitative agreement between theory and experiment is observed in both cases.
NASA Technical Reports Server (NTRS)
Siegel, R.; Sparrow, E. M.
1960-01-01
The purpose of this note is to examine in a more precise way how the Nusselt numbers for turbulent heat transfer in both the fully developed and thermal entrance regions of a circular tube are affected by two different wall boundary conditions. The comparisons are made for: (a) uniform wall temperature (UWT); and (b) uniform wall heat flux (UHF). Several papers have been concerned with the turbulent thermal entrance region problem. Although these analyses have all utilized an eigenvalue formulation for the thermal entrance region, there were differences in the choices of eddy diffusivity expressions, velocity distributions, and methods for carrying out the numerical solutions. These differences were also found in the fully developed analyses. Hence, when making a comparison of the analytical results for uniform wall temperature and uniform wall heat flux, it was not known whether differences in the Nusselt numbers could be wholly attributed to the difference in wall boundary conditions, since all the analytical results were not obtained in a consistent way. To have results which could be directly compared, computations were carried out for the uniform wall temperature case, using the same eddy diffusivity, velocity distribution, and digital computer program employed for uniform wall heat flux. In addition, the previous work was extended to a lower Reynolds number range so that comparisons could be made over a wide range of both Reynolds and Prandtl numbers.
Variable area fuel cell process channels
Kothmann, Richard E.
1981-01-01
A fuel cell arrangement having a non-uniform distribution of fuel and oxidant flow paths, on opposite sides of an electrolyte matrix, sized and positioned to provide approximately uniform fuel and oxidant utilization rates and cell conditions across the entire cell.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Uniform Test Method is used to test more than one unit of a basic model to determine the efficiency of... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2014-01-01 2014-01-01 false Uniform Test Method for Measuring the Energy Consumption...
Confined energy distribution for charged particle beams
Jason, Andrew J.; Blind, Barbara
1990-01-01
A charged particle beam is formed to a relatively larger area beam which is well-contained and has a beam area which relatively uniformly deposits energy over a beam target. Linear optics receive an accelerator beam and output a first beam with a first waist defined by a relatively small size in a first dimension normal to a second dimension. Nonlinear optics, such as an octupole magnet, are located about the first waist and output a second beam having a phase-space distribution which folds the beam edges along the second dimension toward the beam core to develop a well-contained beam and a relatively uniform particle intensity across the beam core. The beam may then be expanded along the second dimension to form the uniform ribbon beam at a selected distance from the nonlinear optics. Alternately, the beam may be passed through a second set of nonlinear optics to fold the beam edges in the first dimension. The beam may then be uniformly expanded along the first and second dimensions to form a well-contained, two-dimensional beam for illuminating a two-dimensional target with a relatively uniform energy deposition.
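The edge-folding action of the nonlinear optics can be sketched in one transverse dimension: an octupole applies a kick cubic in the offset x, and a downstream drift maps the kicked tails back toward the beam core. The kick strength, drift length, and beam parameters below are illustrative normalized values, not taken from the patent:

```python
import random

random.seed(0)

# Octupole kick (cubic in x) followed by a drift: for k < 0 the large-|x|
# tails receive an inward kick and fold back toward the beam core.
K_OCT = -1.0 / 27.0   # illustrative kick strength (folds the ~3-sigma edge)
DRIFT = 1.0           # illustrative drift length, normalized units

def fold_edges(xs, xps, k=K_OCT, drift=DRIFT):
    return [x + drift * (xp + k * x ** 3) for x, xp in zip(xs, xps)]

# Gaussian beam near a waist: unit position spread, small divergence.
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]
xps = [random.gauss(0.0, 0.01) for _ in range(20000)]
folded = fold_edges(xs, xps)

def extent(v, q=0.999):
    """q-quantile of |x|: a simple containment measure."""
    s = sorted(abs(u) for u in v)
    return s[int(q * (len(s) - 1))]

containment_before = extent(xs)
containment_after = extent(folded)
```

With these values the map x + k*x^3 peaks at x = 3, so the Gaussian tails beyond ~3 sigma are folded inside a well-contained core of half-width about 2, illustrating why the folded beam deposits energy more uniformly on a target.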
NASA Astrophysics Data System (ADS)
Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki
2017-08-01
This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to +0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
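The combination step can be sketched as a solid-angle-weighted average: the sphere's spatial responsivity is averaged once over an isotropic reference source and once weighted by the lamp's relative angular intensity, and the correction factor is their ratio. The responsivity map, lamp pattern, and the exact ratio convention below are illustrative assumptions, not the paper's calibration data:

```python
import math

def spatial_correction_factor(intensity, responsivity, thetas):
    """Ratio of the sphere responsivity averaged over an isotropic
    reference source to its average weighted by the lamp's relative
    angular intensity I(theta); solid-angle weight sin(theta)."""
    w = [math.sin(t) for t in thetas]
    iso = sum(r * wi for r, wi in zip(responsivity, w)) / sum(w)
    lamp = (sum(r * i * wi for r, i, wi in zip(responsivity, intensity, w))
            / sum(i * wi for i, wi in zip(intensity, w)))
    return iso / lamp

# Hypothetical inputs on a polar-angle grid: a sphere whose responsivity
# varies by 2% with direction, and a forward-peaked (Lambertian-like) lamp.
n = 181
thetas = [math.pi * j / (n - 1) for j in range(n)]
resp = [1.0 + 0.02 * math.cos(t) for t in thetas]
inten = [max(math.cos(t), 0.0) for t in thetas]
scf = spatial_correction_factor(inten, resp, thetas)
```

A forward-peaked lamp oversamples the more responsive part of this hypothetical sphere, so the correction factor comes out slightly below unity, consistent in magnitude with the sub-percent corrections reported above.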
Duchêne, Sebastián; Duchêne, David; Holmes, Edward C; Ho, Simon Y W
2015-07-01
Rates and timescales of viral evolution can be estimated using phylogenetic analyses of time-structured molecular sequences. This involves the use of molecular-clock methods, calibrated by the sampling times of the viral sequences. However, the spread of these sampling times is not always sufficient to allow the substitution rate to be estimated accurately. We conducted Bayesian phylogenetic analyses of simulated virus data to evaluate the performance of the date-randomization test, which is sometimes used to investigate whether time-structured data sets have temporal signal. An estimate of the substitution rate passes this test if its mean does not fall within the 95% credible intervals of rate estimates obtained using replicate data sets in which the sampling times have been randomized. We find that the test sometimes fails to detect rate estimates from data with no temporal signal. This error can be minimized by using a more conservative criterion, whereby the 95% credible interval of the estimate with correct sampling times should not overlap with those obtained with randomized sampling times. We also investigated the behavior of the test when the sampling times are not uniformly distributed throughout the tree, which sometimes occurs in empirical data sets. The test performs poorly in these circumstances, such that a modification to the randomization scheme is needed. Finally, we illustrate the behavior of the test in analyses of nucleotide sequences of cereal yellow dwarf virus. Our results validate the use of the date-randomization test and allow us to propose guidelines for interpretation of its results. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
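The two decision rules compared in the abstract reduce to simple interval checks on the rate estimates. A sketch of both criteria, with hypothetical credible intervals (the posterior mean is approximated here by the CI midpoint):

```python
def drt_passes(true_ci, randomized_cis, conservative=True):
    """Date-randomization test for temporal signal.
    true_ci: (lo, hi) 95% credible interval with correct sampling dates.
    randomized_cis: list of (lo, hi) intervals from date-randomized replicates.
    Standard criterion: the true-rate mean falls outside every randomized CI.
    Conservative criterion: the true CI overlaps no randomized CI."""
    lo, hi = true_ci
    mid = 0.5 * (lo + hi)   # stand-in for the posterior mean
    for rlo, rhi in randomized_cis:
        if conservative:
            if not (hi < rlo or rhi < lo):   # intervals overlap -> fail
                return False
        elif rlo <= mid <= rhi:              # mean inside a randomized CI -> fail
            return False
    return True

# Hypothetical rate estimates (substitutions/site/year).
true_ci = (1.0e-3, 1.4e-3)
randomized = [(2.0e-4, 9.0e-4), (1.0e-4, 8.0e-4), (3.0e-4, 1.1e-3)]
passes_conservative = drt_passes(true_ci, randomized, conservative=True)
passes_standard = drt_passes(true_ci, randomized, conservative=False)
```

In this example the third randomized interval overlaps the true CI without containing its midpoint, so the data set passes the standard criterion but fails the conservative one, illustrating how the stricter rule guards against spurious temporal signal.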
Combined Loads Test Fixture for Thermal-Structural Testing Aerospace Vehicle Panel Concepts
NASA Technical Reports Server (NTRS)
Fields, Roger A.; Richards, W. Lance; DeAngelis, Michael V.
2004-01-01
A structural test requirement of the National Aero-Space Plane (NASP) program has resulted in the design, fabrication, and implementation of a combined loads test fixture. Principal requirements for the fixture are testing a 4- by 4-ft hat-stiffened panel with combined axial (either tension or compression) and shear load at temperatures ranging from room temperature to 915 °F, keeping the test panel stresses caused by the mechanical loads uniform, and minimizing thermal stresses caused by non-uniform panel temperatures. The panel represents the side fuselage skin of an experimental aerospace vehicle, and was produced for the NASP program. A comprehensive mechanical loads test program using the new test fixture has been conducted on this panel from room temperature to 500 °F. Measured data have been compared with finite-element analysis predictions, verifying that uniform load distributions were achieved by the fixture. The overall correlation of test data with analysis is excellent. The panel stress distributions and temperature distributions are very uniform and fulfill program requirements. This report provides details of an analytical and experimental validation of the combined loads test fixture. Because of its simple design, this unique test fixture can accommodate panels from a variety of aerospace vehicle designs.
2009-09-01
non-uniform, stationary rotation / non-stationary rotation, mass...Cayley spectral transformation as a means of rotating the basin of convergence of the Arnoldi algorithm. Instead of doing the inversion of the large...pair of counter-rotating streamwise vortices embedded in uniform shear flow. Consistently with earlier work by the same group, the main present finding
Weighted Distances in Scale-Free Configuration Models
NASA Astrophysics Data System (ADS)
Adriaans, Erwin; Komjáthy, Júlia
2018-01-01
In this paper we study first-passage percolation in the configuration model with empirical degree distribution that follows a power-law with exponent τ \in (2,3) . We assign independent and identically distributed (i.i.d.) weights to the edges of the graph. We investigate the weighted distance (the length of the shortest weighted path) between two uniformly chosen vertices, called typical distances. When the underlying age-dependent branching process approximating the local neighborhoods of vertices is found to produce infinitely many individuals in finite time—called explosive branching process—Baroni, Hofstad and the second author showed in Baroni et al. (J Appl Probab 54(1):146-164, 2017) that typical distances converge in distribution to a bounded random variable. The order of magnitude of typical distances remained open for the τ \in (2,3) case when the underlying branching process is not explosive. We close this gap by determining the first order of magnitude of typical distances in this regime for arbitrary, not necessarily continuous edge-weight distributions that produce a non-explosive age-dependent branching process with infinite mean power-law offspring distributions. This sequence tends to infinity with the number of vertices, and, by choosing an appropriate weight distribution, can be tuned to be any growing function that is O(log log n) , where n is the number of vertices in the graph. We show that the result remains valid for the erased configuration model as well, where we delete loops and any second and further edges between two vertices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu; Donnelly, Eric D.; Strauss, Jonathan B.
Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either a varied or a constant cancer cell distribution using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which was non-uniform and resembled a half-Gaussian distribution, was used in radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and the prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, types of the cancer cell distributions, and the sensitivity of normal tissue.
With a half-Gaussian distribution of cancer cells, which peaked at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs were in the range from (5.0)^4 to (13.4)^4 for the radiosensitive normal tissue, depending on the cylinder size, treatment length, prescription depth, and dose as well. However, for a uniform cancer cell distribution, the EUDs were between 6.3 Gy × 4 and 7.1 Gy × 4, and the TRs were found to be between (1.4)^4 and (1.7)^4. For the uniformly interspersed cancer and radio-resistant normal cells, the TRs were less than 1. The two VCBT prescription regimens were found to be equivalent in terms of EUDs and TRs. Conclusions: HDR VCBT strongly favors a cylindrical target volume with the cancer cell distribution following its dosimetric trend. Assuming a half-Gaussian distribution of cancer cells, HDR VCBT provides a considerable radiobiological advantage over external beam radiotherapy (EBRT) in terms of sparing more normal tissue while maintaining the same level of cancer cell killing. But for the uniform cancer cell distribution and radio-resistant normal tissue, the radiobiological outcome of HDR VCBT does not show an advantage over EBRT. This study strongly suggests that radiation therapy design should consider the cancer cell distribution inside the target volume in addition to the shape of the target.
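The EUD calculation can be sketched with the LQ model: compute the density-weighted mean surviving fraction over the target, then invert the LQ expression for the uniform per-fraction dose giving that same survival. The radiosensitivity parameters, depth-dose curve, and cell-density profile below are illustrative assumptions, not the study's values:

```python
import math

def eud_per_fraction(doses, density, n_frac=4, alpha=0.3, beta=0.03):
    """Equivalent uniform dose per fraction under the LQ model: the uniform
    dose giving the same density-weighted cell survival as the actual
    per-fraction dose distribution (alpha in 1/Gy, beta in 1/Gy^2)."""
    surv = (sum(rho * math.exp(-n_frac * (alpha * d + beta * d * d))
                for d, rho in zip(doses, density)) / sum(density))
    e = -math.log(surv) / n_frac
    # Solve beta*d^2 + alpha*d - e = 0 for the non-negative root.
    return (-alpha + math.sqrt(alpha * alpha + 4.0 * beta * e)) / (2.0 * beta)

# Illustrative inputs: a dose falloff with depth from the cylinder surface
# and a half-Gaussian cancer-cell density peaking at the mucosa (z = 0).
depths = [0.1 * i for i in range(11)]                    # cm
doses = [8.8 / (1.0 + 1.5 * z) for z in depths]          # Gy per fraction
rho = [math.exp(-0.5 * (z / 0.3) ** 2) for z in depths]  # relative cell density
d_eq = eud_per_fraction(doses, rho)
```

Because the surviving fraction is dominated by the low-dose tail of the distribution wherever cells remain, the EUD lands below the prescription dose, mirroring the abstract's finding that EUDs depend strongly on the assumed cell distribution.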
Accelerated 1 H MRSI using randomly undersampled spiral-based k-space trajectories.
Chatnuntawech, Itthi; Gagoski, Borjan; Bilgic, Berkin; Cauley, Stephen F; Setsompop, Kawin; Adalsteinsson, Elfar
2014-07-30
To develop and evaluate the performance of an acquisition and reconstruction method for accelerated MR spectroscopic imaging (MRSI) through undersampling of spiral trajectories. A randomly undersampled spiral acquisition and sensitivity encoding (SENSE) with total variation (TV) regularization, random SENSE+TV, is developed and evaluated on a single-slice numerical phantom, in vivo single-slice MRSI, and in vivo three-dimensional (3D)-MRSI at 3 Tesla. Random SENSE+TV was compared with five alternative methods for accelerated MRSI. For the in vivo single-slice MRSI, random SENSE+TV yields up to 2.7 and 2 times reduction in root-mean-square error (RMSE) of reconstructed N-acetyl aspartate (NAA), creatine, and choline maps, compared with the denoised fully sampled and uniformly undersampled SENSE+TV methods with the same acquisition time, respectively. For the in vivo 3D-MRSI, random SENSE+TV yields up to 1.6 times reduction in RMSE, compared with uniform SENSE+TV. Furthermore, by using random SENSE+TV, we have demonstrated on the in vivo single-slice and 3D-MRSI that acceleration factors of 4.5 and 4 are achievable with the same quality as the fully sampled data, as measured by RMSE of the reconstructed NAA map, respectively. With the same scan time, random SENSE+TV yields lower RMSEs of metabolite maps than other methods evaluated. Random SENSE+TV achieves up to 4.5-fold acceleration with comparable data quality as the fully sampled acquisition. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2012-12-01
Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
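The role of the CHD can be sketched in one dimension: draw a hypocenter with probability proportional to local moment release (the uniform CHD), then measure how far it lies from the slip centroid. The normalized hypocenter-centroid distance used below is a simple stand-in for the moment-based directivity parameter D, not the paper's exact definition, and the slip model is hypothetical:

```python
import random

random.seed(1)

def sample_hypocenter(slip, uniform_chd=True):
    """Draw a hypocenter cell index. Under a uniform CHD the probability is
    proportional to local moment release (slip); the alternative weights
    slip^2, a purely illustrative centroid-favoring bias."""
    weights = slip if uniform_chd else [s * s for s in slip]
    return random.choices(range(len(slip)), weights=weights)[0]

def directivity_proxy(slip, hypo):
    """1-D proxy for directivity: normalized distance from hypocenter to
    slip centroid (0 ~ bilateral rupture, 1 ~ fully unilateral)."""
    m0 = sum(slip)
    centroid = sum(i * s for i, s in enumerate(slip)) / m0
    half_len = (len(slip) - 1) / 2.0
    return abs(hypo - centroid) / half_len

# Uniform slip on a 1-D fault of 101 cells: under a uniform CHD the expected
# proxy is 0.5, midway between bilateral and unilateral rupture.
slip = [1.0] * 101
d_vals = [directivity_proxy(slip, sample_hypocenter(slip)) for _ in range(5000)]
mean_d = sum(d_vals) / len(d_vals)
```

A centroid-biased CHD concentrates hypocenters nearer the centroid and pulls this mean below 0.5, which is the direction of the discrepancy the compiled D-values indicate.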
Lim, Jing; Chong, Mark Seow Khoon; Chan, Jerry Kok Yen; Teoh, Swee-Hin
2014-06-25
Synthetic polymers used in tissue engineering require functionalization with bioactive molecules to elicit specific physiological reactions. These additives must be homogeneously dispersed in order to achieve enhanced composite mechanical performance and uniform cellular response. This work demonstrates the use of a solvent-free powder processing technique to form osteoinductive scaffolds from cryomilled polycaprolactone (PCL) and tricalcium phosphate (TCP). Cryomilling is performed to achieve micrometer-sized distribution of PCL and reduce melt viscosity, thus improving TCP distribution and structural integrity. A breakthrough is achieved in the successful fabrication of 70 weight percent TCP into a continuous film structure. Following compaction and melting, PCL/TCP composite scaffolds are found to display uniform distribution of TCP throughout the PCL matrix regardless of composition. Homogeneous spatial distribution is also achieved in fabricated 3D scaffolds. When seeded onto powder-processed PCL/TCP films, mesenchymal stem cells are found to undergo robust and uniform osteogenic differentiation, indicating the potential application of this approach to biofunctionalize scaffolds for tissue engineering applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Davis, Anthony B.; Xu, Feng; Diner, David J.
2018-01-01
We demonstrate the computational advantage gained by introducing non-exponential transmission laws into radiative transfer theory for two specific situations. One is the problem of spatial integration over a large domain where the scattering particles cluster randomly in a medium uniformly filled with an absorbing gas, and only a probabilistic description of the variability is available. The increasingly important application here is passive atmospheric profiling using oxygen absorption in the visible/near-IR spectrum. The other scenario is spectral integration over a region where the absorption cross-section of a spatially uniform gas varies rapidly and widely and, moreover, there are scattering particles embedded in the gas that are distributed uniformly, or not. This comes up in many applications, O2 A-band profiling being just one instance. We bring a common framework to solve these problems both efficiently and accurately that is grounded in the recently developed theory of Generalized Radiative Transfer (GRT). In GRT, the classic exponential law of transmission is replaced by one with a slower power-law decay that accounts for the unresolved spectral or spatial variability. Analytical results are derived in the single-scattering limit that applies to optically thin aerosol layers. In spectral integration, a modest gain in accuracy is obtained. As for spatial integration of near-monochromatic radiance, we find that, although both continuum and in-band radiances are affected by moderate levels of sub-pixel variability, only extreme variability will affect in-band/continuum ratios.
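The replacement of the exponential transmission law can be made concrete: averaging Beer-Lambert transmission over a unit-mean gamma-distributed extinction multiplier (one standard way to represent unresolved variability) yields exactly a power law that decays more slowly than the exponential and recovers it as the variability vanishes. A minimal sketch with illustrative shape parameters:

```python
import math

def t_exp(tau):
    """Classical exponential (Beer-Lambert) transmission."""
    return math.exp(-tau)

def t_power(tau, a):
    """Power-law transmission law of Generalized Radiative Transfer: the
    exact average of exp(-x*tau) over a unit-mean gamma-distributed
    extinction multiplier x with shape parameter a (unresolved sub-pixel
    variability). Recovers exp(-tau) in the limit a -> infinity."""
    return (1.0 + tau / a) ** (-a)

# At fixed optical depth, stronger variability (smaller a) means slower decay.
tau = 2.0
vals = [t_power(tau, a) for a in (1.0, 4.0, 16.0, 1e6)]
```

At tau = 2 the fully variable case (a = 1) transmits 1/3 versus exp(-2) ≈ 0.135 for the homogeneous medium, which is the enhancement that makes the power-law family useful for both the spatial and spectral integration problems above.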
Processing of laser formed SiC powder
NASA Technical Reports Server (NTRS)
Haggerty, J. S.; Bowen, H. K.
1985-01-01
Superior SiC characteristics can be achieved through the use of ideal constituent powders and careful post-synthesis processing steps. High purity SiC powders of approx. 1000 Å uniform diameter, nonagglomerated and spherical, were produced. This required major revision of the particle formation and growth model from one based on classical nucleation and growth to one based on collision and coalescence of Si particles followed by their carburization. Dispersions based on pure organic solvents as well as steric stabilization were investigated. Although stable dispersions were formed by both, subsequent part fabrication emphasized the pure solvents since fewer problems with drying and residuals of the high purity particles were anticipated. Test parts were made by the colloidal pressing technique; both liquid filtration and consolidation (rearrangement) stages were modeled. Green densities corresponding to a random close packed structure (approx. 63%) were achieved; this highly perfect structure has a high, uniform coordination number (greater than 11) approaching the quality of an ordered structure without introducing domain boundary effects. After drying, parts were densified at temperatures ranging from 1800 to 2100 C. Optimum densification temperatures will probably be in the 1900 to 2000 C range based on these preliminary results, which showed that 2050 C samples had experienced substantial grain growth. Although overfired, the 2050 C samples exhibited excellent mechanical properties. Biaxial tensile strengths up to 714 MPa and Vickers hardness values of 2430 kg/sq mm were both more typical of hot pressed than sintered SiC. Both result from the absence of large defects and the confinement of residual porosity (less than 2.5%) to small diameter, uniformly distributed pores.
ERIC Educational Resources Information Center
Juhasz, Stephen; And Others
Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected; the method of randomization is described. The samples were selected from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed; its purpose was to find uniformity and…
Chou, Cheng-Ying; Huang, Chih-Kang; Lu, Kuo-Wei; Horng, Tzyy-Leng; Lin, Win-Li
2013-01-01
The transport and accumulation of anticancer nanodrugs in tumor tissues are affected by many factors including particle properties, vascular density and leakiness, and interstitial diffusivity. It is important to understand the effects of these factors on the detailed drug distribution in the entire tumor for an effective treatment. In this study, we developed a small-scale mathematical model to systematically study the spatiotemporal responses and accumulative exposures of macromolecular carriers in localized tumor tissues. We chose various dextrans as model carriers and studied the effects of vascular density, permeability, diffusivity, and half-life of dextrans on their spatiotemporal concentration responses and accumulative exposure distribution to tumor cells. The relevant biological parameters were obtained from experimental results previously reported by the Dreher group. The area under concentration-time response curve (AUC) quantified the extent of tissue exposure to a drug and therefore was considered more reliable in assessing the extent of the overall drug exposure than individual concentrations. The results showed that 1) a small macromolecule can penetrate deep into the tumor interstitium and produce a uniform but low spatial distribution of AUC; 2) large macromolecules produce high AUC in the perivascular region, but low AUC in the distal region away from vessels; 3) medium-sized macromolecules produce a relatively uniform and high AUC in the tumor interstitium between two vessels; 4) enhancement of permeability can elevate the level of AUC, but have little effect on its uniformity while enhancement of diffusivity is able to raise the level of AUC and improve its uniformity; 5) a longer half-life can produce a deeper penetration and a higher level of AUC distribution. 
The numerical results indicate that a long half-life carrier in plasma and a high interstitial diffusivity are the key factors to produce a high and relatively uniform spatial AUC distribution in the interstitium. PMID:23565142
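The accumulative exposure AUC(x) in the study above is the time integral of local concentration. A minimal 1-D sketch: diffusion into the interstitium from a vessel wall whose concentration tracks a decaying plasma level, integrated in time at each depth. All parameter values are illustrative, not fitted to the dextran data:

```python
import math

def auc_profile(diff=1e-3, lam=0.5, length=1.0, nx=51, dt=1e-3, t_end=20.0):
    """AUC(x) = integral of C(x,t) dt for a minimal 1-D model: interstitial
    diffusion (coefficient diff) from a vessel wall at x = 0 whose
    concentration follows a decaying plasma level exp(-lam*t)
    (lam = ln 2 / half-life), with a zero-flux boundary at x = length."""
    dx = length / (nx - 1)
    r = diff * dt / dx ** 2          # explicit-Euler stability needs r <= 0.5
    c = [0.0] * nx
    auc = [0.0] * nx
    for step in range(int(t_end / dt)):
        c[0] = math.exp(-lam * step * dt)    # vessel wall tracks plasma
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + r * (c[i - 1] - 2.0 * c[i] + c[i + 1])
        new[-1] = new[-2]                    # zero-flux outer boundary
        c = new
        for i in range(nx):
            auc[i] += c[i] * dt              # rectangle-rule time integration
    return auc

auc = auc_profile()    # exposure falls off with distance from the vessel
```

Raising diff or lowering lam (a longer half-life) flattens the AUC profile, consistent with the paper's conclusion that long plasma half-life and high interstitial diffusivity are the key factors for a high, uniform exposure.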
Schmitz, Max; Dähler, Fabian; Elvinger, François; Pedretti, Andrea; Steinfeld, Aldo
2017-04-10
We introduce a design methodology for nonimaging, single-reflection mirrors with polygonal inlet apertures that generate a uniform irradiance distribution on a polygonal outlet aperture, enabling a multitude of applications within the domain of concentrated photovoltaics. Notably, we present single-mirror concentrators of square and hexagonal perimeter that achieve very high irradiance uniformity on a square receiver at concentrations ranging from 100 to 1000 suns. These optical designs can be assembled in compound concentrators with maximized active area fraction by leveraging tessellation. More advanced multi-mirror concentrators, where each mirror individually illuminates the whole area of the receiver, allow for improved performance while permitting greater flexibility for the concentrator shape and robustness against partial shading of the inlet aperture.
Use of Radon for Evaluation of Atmospheric Transport Models: Sensitivity to Emissions
NASA Technical Reports Server (NTRS)
Gupta, Mohan L.; Douglass, Anne R.; Kawa, S. Randolph; Pawson, Steven
2004-01-01
This paper presents comparative analyses of atmospheric radon (Rn) distributions simulated using different emission scenarios and the observations. Results indicate that the model generally reproduces observed distributions of Rn but there are some biases in the model related to differences in large-scale and convective transport. Simulations presented here use an off-line three-dimensional chemical transport model driven by assimilated winds and two scenarios of Rn fluxes (atoms cm^-2 s^-1) from ice-free land surfaces: (A) globally uniform flux of 1.0, and (B) uniform flux of 1.0 between 60 deg. S and 30 deg. N followed by a sharp linear decrease to 0.2 at 70 deg. N. We considered an additional scenario (C) where Rn emissions for case A were uniformly reduced by 28%. Results show that case A overpredicts observed Rn distributions in both hemispheres. Simulated northern hemispheric (NH) Rn distributions from cases B and C compare better with the observations, but are not discernible from each other. In the southern hemisphere, surface Rn distributions from case C compare better with the observations. We performed a synoptic scale source-receptor analysis for surface Rn to locate regions with ratios B/A and B/C less than 0.5. Considering an uncertainty in regional Rn emissions of a factor of two, our analysis indicates that additional measurements of surface Rn, particularly during April-October and north of 50 deg. N over the Pacific as well as Atlantic regions, would make it possible to determine if the proposed latitude gradient in Rn emissions is superior to a uniform flux scenario.
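Scenario B is a simple piecewise-linear function of latitude. A sketch of that flux profile; the values assumed poleward of 70 deg N and south of 60 deg S are not stated in the abstract and are labeled as assumptions:

```python
def rn_flux_scenario_b(lat_deg):
    """Scenario B radon flux (atoms cm^-2 s^-1) from ice-free land:
    1.0 from 60 deg S to 30 deg N, then a linear decrease to 0.2 at
    70 deg N. Holding 0.2 poleward of 70 deg N and 0.0 south of
    60 deg S are assumptions the abstract does not spell out."""
    if lat_deg < -60.0:
        return 0.0
    if lat_deg <= 30.0:
        return 1.0
    if lat_deg <= 70.0:
        return 1.0 + (0.2 - 1.0) * (lat_deg - 30.0) / 40.0
    return 0.2
```

For example, the flux at 50 deg N falls midway along the ramp, to 0.6, which is what drives the B/A ratios below 0.5 found north of 50 deg N.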
Evaluation of a multi-point method for determining acoustic impedance
NASA Technical Reports Server (NTRS)
Jones, Michael G.; Parrott, Tony L.
1988-01-01
An investigation was conducted to explore potential improvements provided by a Multi-Point Method (MPM) over the Standing Wave Method (SWM) and Two-Microphone Method (TMM) for determining acoustic impedance. A wave propagation model was developed to model the standing wave pattern in an impedance tube. The acoustic impedance of a test specimen was calculated from a best fit of this standing wave pattern to pressure measurements obtained along the impedance tube centerline. Three measurement spacing distributions were examined: uniform, random, and selective. Calculated standing wave patterns match the point pressure measurement distributions with good agreement for a reflection factor magnitude range of 0.004 to 0.999. Comparisons of results using 2, 3, 6, and 18 measurement points showed that the most consistent results are obtained when using at least 6 evenly spaced pressure measurements per half-wavelength. Also, data were acquired with broadband noise added to the discrete frequency noise and impedances were calculated using the MPM and TMM algorithms. The results indicate that the MPM will be superior to the TMM in the presence of significant broadband noise levels associated with mean flow.
Hierarchical Velocity Structure in the Core of Abell 2597
NASA Technical Reports Server (NTRS)
Still, Martin; Mushotzky, Richard
2004-01-01
We present XMM-Newton RGS and EPIC data of the putative cooling flow cluster Abell 2597. Velocities of the low-ionization emission lines in the spectrum are blueshifted with respect to the high-ionization lines by 1320 (sup +660) (sub -210) kilometers per second, which is consistent with the difference in the two peaks of the galaxy velocity distribution and may be the signature of bulk turbulence, infall, rotation or damped oscillation in the cluster. A hierarchical velocity structure such as this could be the direct result of galaxy mergers in the cluster core, or the injection of power into the cluster gas from a central engine. The uniform X-ray morphology of the cluster, the absence of fine scale temperature structure and the random distribution of the galaxy positions, independent of velocity, suggest that our line of sight is close to the direction of motion. These results have strong implications for cooling flow models of the cluster Abell 2597. They give impetus to those models which account for the observed temperature structure of some clusters using mergers instead of cooling flows.
Automatic Classification of Medical Text: The Influence of Publication Form
Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.
1988-01-01
Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.
Stratification in the lunar regolith - A preliminary view
NASA Technical Reports Server (NTRS)
Duke, M. B.; Nagle, J. S.
1975-01-01
Although our knowledge of lunar regolith stratification is incomplete, several categories of thick and thin strata have been identified. Relatively thick units average 2 to 3 cm in thickness, and appear surficially to be massive. On more detailed examination, these units can be uniformly fine-grained, can show internal trends, or can show internal variations which apparently are random. Other thick units contain soil clasts apparently reworked from underlying units. Thin laminae average approximately 1 mm in thickness; lenticular distribution and composition of some thin laminae indicates that they are fillets shed from adjacent rock fragments. Other dark fine-grained well-sorted thin laminae appear to be surficial zones reworked by micrometeorites. Interpretations of stratigraphic succession can be strengthened by the occurrence of characteristic coarse rock fragments and the orientation of large spatter agglutinates, which are commonly found in their original depositional orientation.
NASA Astrophysics Data System (ADS)
Yu, Mei; Wang, Chong; Yang, Cancan; Yu, Zhe
2017-11-01
With great deformability under stretching, compression, bending and twisting while preserving electrical properties, metal films on elastomeric substrates have many applications as bioelectrical interfaces. However, at present, most polymer-supported thin metal films reported rupture at small elongations (<10%). In this work, highly stretchable thin gold films were fabricated on PDMS substrates by a novel micro-processing technology. The as-deposited films can be stretched by a maximum 120% strain while maintaining their electrical conductivity. Electrical characteristics of the gold films under single-cycle and multi-cycle stretch deformations are investigated in this work. SEM images show that the gold films have a nanocrack structure. The stretchability of the gold films can be explained by the nanocracks, which are uniformly distributed with random orientations in the films.
Scale relativity and quantization of planet obliquities.
NASA Astrophysics Data System (ADS)
Nottale, L.
1998-07-01
The author applies the theory of scale relativity to the equations of rotational motion of solid bodies. He predicts in the new framework that the obliquities and inclinations of planets and satellites in the solar system must be quantized. Namely, one expects their distribution to be no longer uniform between 0 and π, but instead to display well-defined peaks of probability density at angles θk = kπ/n. The author shows in the present paper that the observational data agree very well with the prediction for n = 7, including the retrograde bodies and those which are heeled over the ecliptic plane. In particular, the value 23°27' of the obliquity of the Earth, which partly determines its climate, is not a random one, but lies in one of the main probability peaks at θ = π/7.
Modeling spatial effects of PM{sub 2.5} on term low birth weight in Los Angeles County
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coker, Eric, E-mail: cokerer@onid.orst.edu; Ghosh, Jokay; Jerrett, Michael
Air pollution epidemiological studies suggest that elevated exposure to fine particulate matter (PM{sub 2.5}) is associated with higher prevalence of term low birth weight (TLBW). Previous studies have generally assumed the exposure–response of PM{sub 2.5} on TLBW to be the same throughout a large geographical area. Health effects related to PM{sub 2.5} exposures, however, may not be uniformly distributed spatially, creating a need for studies that explicitly investigate the spatial distribution of the exposure–response relationship between individual-level exposure to PM{sub 2.5} and TLBW. Here, we examine the overall and spatially varying exposure–response relationship between PM{sub 2.5} and TLBW throughout urban Los Angeles (LA) County, California. We estimated PM{sub 2.5} from a combination of land use regression (LUR), aerosol optical depth from remote sensing, and atmospheric modeling techniques. Exposures were assigned to LA County individual pregnancies identified from electronic birth certificates between 1995 and 2006 (N=1,359,284) provided by the California Department of Public Health. We used a single pollutant multivariate logistic regression model, with multilevel spatially structured and unstructured random effects set in a Bayesian framework to estimate global and spatially varying pollutant effects on TLBW at the census tract level. Overall, increased PM{sub 2.5} level was associated with higher prevalence of TLBW county-wide. The spatial random effects model, however, demonstrated that the exposure–response for PM{sub 2.5} and TLBW was not uniform across urban LA County. Rather, the magnitude and certainty of the exposure–response estimates for PM{sub 2.5} on log odds of TLBW were greatest in the urban core of Central and Southern LA County census tracts. These results suggest that the effects may be spatially patterned, and that simply estimating global pollutant effects obscures disparities suggested by spatial patterns of effects.
Studies that incorporate spatial multilevel modeling with random coefficients allow us to identify areas where air pollutant effects on adverse birth outcomes may be most severe and policies to further reduce air pollution might be most effective. - Highlights: • We model the spatial dependency of PM{sub 2.5} effects on term low birth weight (TLBW). • PM{sub 2.5} effects on TLBW are shown to vary spatially across urban LA County. • Modeling spatial dependency of PM{sub 2.5} health effects may identify effect 'hotspots'. • Birth outcomes studies should consider the spatial dependency of PM{sub 2.5} effects.
Advanced Technology for Ultra-Low Power System-on-Chip (SoC)
2017-06-01
design at IDS=1mA/μm compared with that in experimental 14nm-node FinFET. The redistributed electric field along the channel length direction can... design can result in more uniform electron density and electron velocity distributions compared to a homojunction device. This uniform electron...
World cup soccer players tend to be born with sun and moon in adjacent zodiacal signs
Verhulst, J
2000-01-01
The ecliptic elongation of the moon with respect to the sun does not show uniform distribution on the birth dates of the 704 soccer players selected for the 1998 World Cup. However, a uniform distribution is expected on astronomical grounds. The World Cup players show a very pronounced tendency (p = 0.00001) to be born on days when the sun and moon are in adjacent zodiacal signs. Key Words: soccer; World Cup; astrology; moon PMID:11131239
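Whether angular data such as the sun-moon elongation at birth depart from circular uniformity can be checked with a standard Rayleigh test. The sketch below is illustrative only; the abstract does not state which statistic the paper used, so this is an assumption of one common choice:

```python
import math

def rayleigh_test(angles_rad):
    """Rayleigh test for uniformity of circular data.

    Returns (r_bar, p): r_bar is the mean resultant length (0 for
    perfectly uniform angles, 1 for identical angles) and p is an
    approximate p-value for the null hypothesis that the angles are
    uniformly distributed on the circle."""
    n = len(angles_rad)
    c = sum(math.cos(a) for a in angles_rad) / n
    s = sum(math.sin(a) for a in angles_rad) / n
    r_bar = math.hypot(c, s)
    z = n * r_bar ** 2
    # standard large-n approximation; clamped because the correction
    # term can leave [0, 1] for very large z
    p = math.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))
    return r_bar, min(1.0, max(0.0, p))
```

Tightly clustered angles give a small p (non-uniformity, as reported for the World Cup birth dates); evenly spread angles give p near 1.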
1991-09-01
Approved for public release; distribution is unlimited. Vector spherical harmonic expansions are... electric and magnetic field vectors from E · r and B · r alone. General expressions are given relating the scattered field expansion coefficients to the source... (NCSC TR 426-90)
Electron kinematics in a plasma focus
NASA Technical Reports Server (NTRS)
Hohl, F.; Gary, S. P.
1977-01-01
The results of numerical integrations of the three-dimensional relativistic equations of motion of electrons subject to given electric and magnetic fields are presented. Fields due to two different models are studied: (1) a circular distribution of current filaments, and (2) a uniform current distribution; both the collapse and the current reduction phases are studied in each model. Decreasing current in the uniform current model yields 100 keV electrons accelerated toward the anode and, as for earlier ion computations, provides general agreement with experimental results.
High level continuity for coordinate generation with precise controls
NASA Technical Reports Server (NTRS)
Eiseman, P. R.
1982-01-01
Coordinate generation techniques with precise local controls have been derived and analyzed for continuity requirements up to both the first and second derivatives, and have been projected to higher level continuity requirements from the established pattern. The desired local control precision was obtained when a family of coordinate surfaces could be uniformly distributed without a consequent creation of flat spots on the coordinate curves transverse to the family. Relative to the uniform distribution, the family could be redistributed from an a priori distribution function or from a solution adaptive approach, both without distortion from the underlying transformation which may be independently chosen to fit a nontrivial geometry and topology.
ERIC Educational Resources Information Center
Ratliff, Michael I.; Mc Shane, Janet M.
2008-01-01
This article studies various holiday distributions, the most interesting one being Easter. Gauss' Easter algorithm and Microsoft Excel are used to determine that the Easter distribution can be closely approximated by the convolution of two well-known uniform distributions. (Contains 8 figures.)
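Gauss' Easter computation mentioned above can be sketched with the anonymous Gregorian (Meeus/Jones/Butcher) tabular variant of Gauss' method:

```python
def gregorian_easter(year):
    """Date of Easter Sunday as (month, day) in the Gregorian
    calendar, via the anonymous Gregorian algorithm, a tabular
    variant of Gauss' Easter computation."""
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-like lunar correction
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # weekday correction
    m = (a + 11 * h + 22 * l) // 451
    month, day0 = divmod(h + l - 7 * m + 114, 31)
    return month, day0 + 1
```

Tabulating this function over a long run of years yields the empirical date distribution that the article approximates by the convolution of two well-known uniform distributions.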
Theoretical study of liquid droplet dispersion in a venturi scrubber.
Fathikalajahi, J; Talaie, M R; Taheri, M
1995-03-01
The droplet concentration distribution in an atomizing scrubber was calculated based on droplet eddy diffusion by a three-dimensional dispersion model. This model is also capable of predicting the liquid flowing on the wall. The theoretical distribution of droplet concentration agrees well with experimental data given by Viswanathan et al. for droplet concentration distribution in a venturi-type scrubber. The results obtained by the model show a non-uniform distribution of drops over the cross section of the scrubber, as noted by the experimental data. While the maximum of droplet concentration distribution may depend on many operating parameters of the scrubber, the results of this study show that the highest uniformity of drop distribution will be reached when penetration length is approximately equal to one-fourth of the depth of the scrubber. The results of this study can be applied to evaluate the removal efficiency of a venturi scrubber.
MATHEMATICAL ROUTINES FOR ENGINEERS AND SCIENTISTS
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The purpose of this package is to provide the scientific and engineering community with a library of programs useful for performing routine mathematical manipulations. This collection of programs will enable scientists to concentrate on their work without having to write their own routines for solving common problems, thus saving considerable amounts of time. This package contains sixteen subroutines. Each is separately documented with descriptions of the invoking subroutine call, its required parameters, and a sample test program. The functions available include: maxima, minima, and sort of vectors; factorials; random number generator (uniform or Gaussian distribution); complementary error function; fast Fourier transform; Simpson's Rule integration; matrix determinant and inversion; Bessel function (J Bessel function for any order, and modified Bessel function for zero order); roots of a polynomial; roots of a non-linear equation; and the solution of first order ordinary differential equations using Hamming's predictor-corrector method. There is also a subroutine for using a dot matrix printer to plot a given set of y values for a uniformly increasing x value. This package is written in FORTRAN 77 (Super Soft Small System FORTRAN compiler) for batch execution and has been implemented on the IBM PC computer series under MS-DOS with a central memory requirement of approximately 28K of 8 bit bytes for all subroutines. This program was developed in 1986.
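The abstract does not specify how the package's Gaussian generator derives normal variates from uniform ones. A common construction, shown here as an illustrative Python sketch rather than the package's FORTRAN 77, is the Box-Muller transform:

```python
import math
import random

def gaussian_pair(u1, u2):
    """Box-Muller transform: map two independent Uniform(0,1)
    variates (u1 in (0,1]) to two independent standard normals."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def gaussian(mu=0.0, sigma=1.0, rng=random.random):
    """One N(mu, sigma^2) variate from two uniform draws.
    1 - rng() maps [0,1) to (0,1], avoiding log(0)."""
    z, _ = gaussian_pair(1.0 - rng(), rng())
    return mu + sigma * z
```

The same idea underlies many library routines that offer "uniform or Gaussian" output from a single uniform source.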
Jaques, Peter A; Hsiao, Ta-Chih; Gao, Pengfei
2011-08-01
A recirculation aerosol wind tunnel was designed to maintain a uniform airflow and stable aerosol size distribution for evaluating aerosol sampler performance and determining particle penetration through protective clothing materials. The oval-shaped wind tunnel was designed to be small enough to fit onto a lab bench, have optimized dimensions for uniformity in wind speed and particle size distributions, sufficient mixing for even distribution of particles, and minimum particle losses. Performance evaluation demonstrates a relatively high level of spatial uniformity, with a coefficient of variation of 1.5-6.2% for wind velocities between 0.4 and 2.8 m s(-1) and, in this range, 0.8-8.5% for particles between 50 and 450 nm. Aerosol concentration stabilized within the first 5-20 min, with a count median diameter of approximately 135 nm and a geometric standard deviation of 2.20. Negligible agglomerate growth and particle loss are suggested. The recirculation design appears to result in unique features as needed for our research.
A method for improving the light intensity distribution in dental light-curing units.
Arikawa, Hiroyuki; Takahashi, Hideo; Minesaki, Yoshito; Muraguchi, Kouichi; Matsuyama, Takashi; Kanie, Takahito; Ban, Seiji
2011-01-01
A method for improving the uniformity of the radiation light from dental light-curing units (LCUs), and its effect on the polymerization of light-activated composite resin, are investigated. Quartz-tungsten halogen, plasma-arc, and light-emitting diode LCUs were used, and additional optical elements such as a mixing tube and diffusing screen were employed to reduce the inhomogeneity of the radiation light. The distribution of the light intensity from the light guide tip was measured across the guide tip, as well as the distribution of the surface hardness of the light-activated resin cured with the LCUs. Although the additional optical elements caused 13.2-25.9% attenuation of the light intensity, the uniformity of the light intensity of the LCUs was significantly improved in the modified LCUs, and the uniformity of the surface hardness of the resin was also improved. Our results indicate that the addition of optical elements to the LCU may be a simple and effective method for reducing inhomogeneity in radiation light from the LCUs.
Synthesis of uniformly distributed single- and double-sided zinc oxide (ZnO) nanocombs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altintas Yildirim, Ozlem; Liu, Yuzi; Petford-Long, Amanda K.
2015-08-21
Uniformly distributed single- and double-sided zinc oxide (ZnO) nanocomb structures have been prepared by a vapor-liquid-solid technique from a mixture of ZnO nanoparticles and graphene nanoplatelets. The ZnO seed nanoparticles were synthesized via a simple precipitation method. The structure of the ZnO nanocombs could easily be controlled by tuning the carrier-gas flow rate during growth. Higher flow rate resulted in the formation of uniformly-distributed single-sided comb structures with nanonail-shaped teeth, as a result of the self-catalysis effect of the catalytically active Zn-terminated polar (0001) surface. Lower gas flow rate was favorable for production of double-sided comb structures with the two sets of teeth at an angle of approximately 110 degrees to each other along the comb ribbon, which was attributed to the formation of a bicrystal nanocomb ribbon. Lastly, the formation of such a double-sided structure with nanonail-shaped teeth has not previously been reported.
Niedzielski, Joshua S; Yang, Jinzhong; Mohan, Radhe; Titt, Uwe; Mirkovic, Dragan; Stingo, Francesco; Liao, Zhongxing; Gomez, Daniel R; Martel, Mary K; Briere, Tina M; Court, Laurence E
2017-11-15
To determine whether there exists any significant difference in normal tissue toxicity between intensity modulated radiation therapy (IMRT) and proton therapy for the treatment of non-small cell lung cancer. A total of 134 study patients (n=49 treated with proton therapy, n=85 with IMRT) treated in a randomized trial had a previously validated esophageal toxicity imaging biomarker, esophageal expansion, quantified during radiation therapy, as well as esophagitis grade (Common Terminology Criteria for Adverse Events version 3.0), on a weekly basis during treatment. Differences between the 2 modalities were statistically analyzed using the imaging biomarker metric value (Kruskal-Wallis analysis of variance), as well as the incidence and severity of esophagitis grade (χ2 and Fisher exact tests, respectively). The dose-response of the imaging biomarker was also compared between modalities using esophageal equivalent uniform dose, as well as delivered dose to an isotropic esophageal subvolume. No statistically significant difference in the distribution of esophagitis grade, the incidence of grade ≥3 esophagitis (15 and 11 patients treated with IMRT and proton therapy, respectively), or the esophageal expansion imaging biomarker between cohorts (P>.05) was found. The imaging biomarker metric values had similar distributions between treatment arms, despite a slightly higher dose volume in the proton arm (P>.05). Imaging biomarker dose-response was similar between modalities for dose quantified as esophageal equivalent uniform dose and delivered esophageal subvolume dose. Regardless of treatment modality, there was high variability in imaging biomarker response, as well as esophagitis grade, for similar esophageal doses between patients. There was no significant difference in esophageal toxicity from either proton- or photon-based radiation therapy as quantified by esophagitis grade or the esophageal expansion imaging biomarker. Copyright © 2017 Elsevier Inc.
All rights reserved.
Tomographical imaging using uniformly redundant arrays
NASA Technical Reports Server (NTRS)
Cannon, T. M.; Fenimore, E. E.
1979-01-01
An investigation is conducted of the behavior of two types of uniformly redundant array (URA) when used for close-up imaging. One URA pattern is a quadratic residue array whose characteristics for imaging planar sources have been simulated by Fenimore and Cannon (1978), while the second is based on m sequences that have been simulated by Gunson and Polychronopulos (1976) and by MacWilliams and Sloan (1976). Close-up imaging is necessary in order to obtain depth information for tomographical purposes. The properties of the two URA patterns are compared with a random array of equal open area. The goal considered in the investigation is to determine if a URA pattern exists which has the desirable defocus properties of the random array while maintaining artifact-free image properties for in-focus objects.
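The artifact-free imaging property of URAs rests on the flat off-peak autocorrelation of difference-set patterns such as quadratic residues. A one-dimensional sketch of the principle (the paper's arrays are two-dimensional; this shows the same property for a prime-length quadratic residue sequence):

```python
def qr_aperture(p):
    """Binary coded-aperture sequence of prime length p (p % 4 == 3):
    a[i] = 1 if i is a nonzero quadratic residue mod p, else 0.
    The quadratic residues form a difference set, so the cyclic
    autocorrelation is constant at every nonzero lag -- the flat
    sidelobes behind the artifact-free property of URAs."""
    residues = {(x * x) % p for x in range(1, p)}
    return [1 if i in residues else 0 for i in range(p)]

def cyclic_autocorr(a, lag):
    """Cyclic (periodic) autocorrelation of a 0/1 sequence at a lag."""
    n = len(a)
    return sum(a[i] * a[(i + lag) % n] for i in range(n))
```

For p = 11 the aperture has (p-1)/2 = 5 open cells, a peak of 5 at zero lag, and a constant value of (p-3)/4 = 2 at every other lag.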
Redundancy and Reduction: Speakers Manage Syntactic Information Density
ERIC Educational Resources Information Center
Jaeger, T. Florian
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel…
Geographic Distribution of Trauma Centers and Injury Related Mortality in the United States
Brown, Joshua B.; Rosengart, Matthew R.; Billiar, Timothy R.; Peitzman, Andrew B.; Sperry, Jason L.
2015-01-01
Background Regionalized trauma care improves outcomes; however, access to care is not uniform across the US. The objective was to evaluate whether geographic distribution of trauma centers correlates with injury mortality across state trauma systems. Methods Level I/II trauma centers in the contiguous US were mapped. State-level age-adjusted injury fatality rates per 100,000 people were obtained and evaluated for spatial autocorrelation. Nearest neighbor ratios (NNR) were generated for each state. An NNR<1 indicates clustering, while an NNR>1 indicates dispersion. NNRs were tested for difference from a random geographic distribution. Fatality rates and NNR were examined for correlation. Fatality rates were compared between states with trauma center clustering versus dispersion. Trauma center distribution and population density were evaluated. Spatial-lag regression determined the association between fatality rate and NNR, controlling for state-level demographics, population density, injury severity, trauma system resources, and socioeconomic factors. Results Fatality rates were spatially autocorrelated (Moran's I=0.35, p<0.01). Nine states had a clustered pattern (median NNR 0.55, IQR 0.48–0.60), 22 had a dispersed pattern (median NNR 2.00, IQR 1.68–3.99), and 10 had a random pattern (median NNR 0.90, IQR 0.85–1.00) of trauma center distribution. Fatality rate and NNR were correlated (ρ=0.34, p=0.03). Clustered states had a lower median injury fatality rate compared to dispersed states (56.9 [IQR 46.5–58.9] versus 64.9 [IQR 52.5–77.1], p=0.04). Dispersed compared to clustered states had more counties without a trauma center that had higher population density than counties with a trauma center (5.7% versus 1.2%, p<0.01). Spatial-lag regression demonstrated fatality rates increased 0.02 per 100,000 persons for each unit increase in NNR (p<0.01).
Conclusions Geographic distribution of trauma centers correlates with injury mortality, with more clustered state trauma centers associated with lower fatality rates. This may be a result of access relative to population density. These results may have implications for trauma system planning and require further study to investigate the underlying mechanisms. PMID:26517780
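The nearest neighbor ratio in this kind of analysis is commonly the Clark-Evans statistic: the observed mean nearest-neighbor distance divided by its expectation 0.5/sqrt(n/A) under complete spatial randomness. A brute-force sketch, assuming no edge correction (GIS tooling often applies one):

```python
import math

def nearest_neighbor_ratio(points, area):
    """Clark-Evans nearest neighbor ratio for 2D points in a region
    of the given area. NNR < 1 suggests clustering, NNR > 1
    dispersion, NNR near 1 complete spatial randomness.
    O(n^2) brute force; no edge correction (an assumption that
    matters for small samples)."""
    n = len(points)
    observed = 0.0
    for i, (xi, yi) in enumerate(points):
        observed += min(math.hypot(xi - xj, yi - yj)
                        for j, (xj, yj) in enumerate(points) if j != i)
    observed /= n                          # mean observed NN distance
    expected = 0.5 / math.sqrt(n / area)   # mean NN distance under CSR
    return observed / expected
```

A regular square grid gives an NNR of 2 (maximal dispersion over this expectation), while tight clusters drive the ratio well below 1, matching the clustered/dispersed interpretation in the study.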
Zhao, Zhongqiu; Wang, Lianhua; Bai, Zhongke; Pan, Ziguan; Wang, Yun
2015-07-01
Afforestation with native tree species is often recommended for ecological restoration in mining areas, but understanding of the ecological processes of restored vegetation is quite limited. To provide insight into these processes, in this study we investigate the development of the population structure and spatial distribution patterns of restored Robinia pseudoacacia (ROPS) and Pinus tabuliformis (PITA) mixed forests over 17 years on mine spoils of the Pingshuo opencast mine, Shanxi Province, China. After a 17-year succession, apart from the two planted species, Ulmus pumila (ULPU) settled in the plot as an invasive species, in large numbers at small diameter at breast height (DBH) sizes. In total, there were 10,062 living individual plants, many more than at plantation (5105), and ROPS had become the dominant species, with a basal area at breast height of 9.40 m(2) hm(-2) and a mean DBH of 6.72 cm, much higher than both PITA and ULPU. The DBH size classes of all species combined showed an inverted J-shaped distribution, which may result from the large number of small regenerated ULPU trees. The DBH size classes of both ROPS and PITA showed peak-type structures, with individuals mainly gathered in the moderate DBH size classes, indicating a relatively healthy DBH size class structure. Meanwhile, invasive ULPU showed a clear L-shaped distribution, concentrated in the small DBH size classes, indicating a relatively low survival rate for adult trees. Surviving ROPS and PITA in the plantation showed uniform and aggregated distributions at small scales, becoming random as scale increased. ULPU showed strong aggregation at small scales, likewise becoming random as scale increased. Both the population structure and the spatial distributions indicate that ROPS dominates and will continue to dominate the community in future succession, which should be continuously monitored.
Improved high power/high frequency inductor
NASA Technical Reports Server (NTRS)
Mclyman, W. T. (Inventor)
1990-01-01
A toroidal core is mounted on an alignment disc having uniformly distributed circumferential notches or holes therein. Wire is then wound about the toroidal core in a uniform pattern defined by the notches or holes. Prior to winding, the wire may be placed within shrink tubing. The shrink tubing is then wound about the alignment disc and core and then heat-shrunk to positively retain the wire in the uniform position on the toroidal core.
Improved Zirconia Oxygen-Separation Cell
NASA Technical Reports Server (NTRS)
Walsh, John V.; Zwissler, James G.
1988-01-01
Cell structure distributes feed gas more evenly for more efficient oxygen production. A multilayer cell structure containing passages, channels, tubes, and pores helps distribute pressure evenly over the zirconia electrolytic membrane. The resulting more uniform pressure distribution is expected to improve the efficiency of oxygen production.
On a neutral particle with permanent magnetic dipole moment in a magnetic medium
NASA Astrophysics Data System (ADS)
Bakke, K.; Salvador, C.
2018-03-01
We investigate quantum effects that stem from the interaction of the permanent magnetic dipole moment of a neutral particle with an electric field in a magnetic medium. We consider a long non-conductor cylinder that possesses a uniform distribution of electric charges and a non-uniform magnetization. We discuss the possibility of achieving this non-uniform magnetization from the experimental point of view. Moreover, due to this non-uniform magnetization, the permanent magnetic dipole moment of the neutral particle also interacts with a non-uniform magnetic field. This interaction gives rise to a linear scalar potential. Then, we show that bound state solutions to the Schrödinger-Pauli equation can be achieved.
Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling
NASA Astrophysics Data System (ADS)
Bierhorst, Peter; Shalm, Lynden K.; Mink, Alan; Jordan, Stephen; Liu, Yi-Kai; Rommal, Andrea; Glancy, Scott; Christensen, Bradley; Nam, Sae Woo; Knill, Emanuel
Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment to obtain data containing randomness that cannot be predicted by any theory that does not also allow the sending of signals faster than the speed of light. To certify and quantify the randomness, we develop a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtain 256 new random bits, uniform to within 10^-3.
Luminescence imaging of water during uniform-field irradiation by spot scanning proton beams
NASA Astrophysics Data System (ADS)
Komori, Masataka; Sekihara, Eri; Yabe, Takuya; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi
2018-06-01
Luminescence was previously observed during pencil-beam proton irradiation of a water phantom, and the range could be estimated from the luminescence images. However, it was not yet clear whether luminescence imaging can be applied to uniform fields formed by spot-scanning proton-beam irradiation. For this purpose, imaging was conducted for uniform fields with a spread-out Bragg peak (SOBP) formed by spot-scanning proton beams. We designed six types of uniform fields with different ranges, SOBP widths, and irradiation field sizes. Each designed field was delivered to a water phantom, and a cooled charge-coupled device camera was used to measure the luminescence image during irradiation. We estimated the ranges, field widths, and luminescence intensities from the luminescence images and compared them with the dose distribution calculated by a treatment planning system. For all types of uniform fields, we obtained clear luminescence images showing the SOBPs. The ranges and field widths evaluated from the luminescence were consistent with those of the calculated dose distribution to within ‑4 mm and ‑11 mm, respectively. Luminescence intensities were almost proportional to the SOBP widths perpendicular to the beam direction. Luminescence imaging can thus be applied to uniform fields formed by spot-scanning proton-beam irradiation, and the ranges and widths of uniform fields with an SOBP can be estimated from the images. Luminescence imaging is promising for range and field-width estimation in proton therapy.
Aneurysm permeability following coil embolization: packing density and coil distribution.
Chueh, Ju-Yu; Vedantham, Srinivasan; Wakhloo, Ajay K; Carniato, Sarena L; Puri, Ajit S; Bzura, Conrad; Coffin, Spencer; Bogdanov, Alexei A; Gounis, Matthew J
2015-09-01
Rates of durable aneurysm occlusion following coil embolization vary widely, and a better understanding of coil mass mechanics is desired. The goal of this study was to evaluate the impact of packing density and coil uniformity on aneurysm permeability. Aneurysm models were coiled using either Guglielmi detachable coils or Target coils. Permeability was assessed as the ratio of microspheres passing through the coil mass to those in the working fluid. Aneurysms containing coil masses were sectioned for image analysis to determine the surface area fraction and coil uniformity. All aneurysms were coiled to a packing density of at least 27%. Packing density, surface area fraction of the dome and neck, and uniformity of the dome were significantly correlated (p<0.05). Hence, multivariate principal-components-based partial least squares regression models were used to predict permeability. Similar loading vectors were obtained for the packing and uniformity measures. Coil mass permeability was modeled better with the inclusion of packing and uniformity measures of the dome (r² = 0.73) than with packing density alone (r² = 0.45). The analysis indicates the importance of including a uniformity measure for coil distribution in the dome along with packing measures. A densely packed aneurysm with a high degree of coil mass uniformity will reduce permeability. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
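The modeling claim, that adding a dome-uniformity predictor improves the fit over packing density alone, can be sketched on synthetic data. The study used partial least squares regression; plain ordinary least squares is substituted here for brevity, and every coefficient, range, and noise level below is invented for illustration.

```python
import random

def ols_r2(X, y):
    """R^2 of an ordinary least squares fit y ~ 1 + X, via normal equations
    solved with Gaussian elimination (no external libraries)."""
    n, k = len(y), len(X[0])
    A = [[1.0] + list(row) for row in X]         # prepend intercept column
    p = k + 1
    # normal equations: (A^T A) beta = A^T y
    M = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(A[r][i] * y[r] for r in range(n)) for i in range(p)]
    for col in range(p):                         # elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = M[r][col] / M[col][col]
            for c in range(col, p):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(M[i][j] * beta[j] for j in range(i + 1, p))) / M[i][i]
    yhat = [sum(be * a for be, a in zip(beta, row)) for row in A]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

rng = random.Random(0)
packing = [rng.uniform(0.27, 0.40) for _ in range(40)]       # packing density
uniformity = [rng.uniform(0.0, 1.0) for _ in range(40)]      # hypothetical scale
perm = [1.0 - 1.5 * p - 0.4 * u + rng.gauss(0.0, 0.05)       # synthetic permeability
        for p, u in zip(packing, uniformity)]

r2_pack = ols_r2([[p] for p in packing], perm)
r2_both = ols_r2([[p, u] for p, u in zip(packing, uniformity)], perm)
```

On synthetic data where uniformity genuinely affects permeability, the two-predictor fit shows the same qualitative gain the study reports.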
Non-axisymmetric flow characteristics in centrifugal compressor
NASA Astrophysics Data System (ADS)
Wang, Leilei; Lao, Dazhong; Liu, Yixiong; Yang, Ce
2015-06-01
The flow field distribution in a centrifugal compressor is significantly affected by the non-axisymmetric geometry of the volute. Experimental and numerical simulation methods were adopted in this work to study the compressor flow field distribution under different flow conditions. The results show that the pressure distribution in the volute is circumferentially non-uniform and that the pressure fluctuation in the high-static-pressure zone propagates upstream, which results in non-axisymmetric flow inside the compressor. The non-uniformity of the pressure distribution at the large flow condition is higher than at the small flow condition, and its effect on the upstream flow field is also stronger. Additionally, the non-uniform circumferential pressure distribution in the volute produces non-axisymmetric flow at the impeller outlet; under different flow conditions, the circumferential variation of the absolute flow angle at the impeller outlet also differs. Meanwhile, the non-axisymmetric flow characteristics inside the impeller are also reflected in the distribution of the mass flow: the high-static-pressure region of the volute corresponds to a decrease of mass flow in the upstream blade channel, while the low-static-pressure zone corresponds to an increase. At the small flow condition, the mass flow difference between blade channels is larger than at the large flow condition.
NASA Astrophysics Data System (ADS)
Zhang, Zh.
2018-02-01
An analytical method is presented that enables the non-uniform velocity and pressure distributions at the impeller inlet of a pump to be accurately computed. The analysis is based on potential flow theory and the geometrical similarity of the streamline distribution along the leading edge of the impeller blades; the method is thus called the streamline similarity method (SSM). The obtained geometrical form of the flow distribution is then simply described by the geometrical variable G(s) and the first structural constant G_I. As clearly demonstrated and also validated by experiments, both the flow velocity and the pressure distributions at the impeller inlet are usually highly non-uniform. This knowledge is indispensable for impeller blade designs that fulfill the shockless inlet flow condition. By introducing the second structural constant G_II, the paper also presents a simple and accurate computation of the shock loss that occurs at the impeller inlet. The introduction of the two structural constants contributes immensely to the enhancement of computational accuracy. All computations presented in this paper can also be applied to the non-uniform exit flow of a Francis turbine impeller for accurately computing the related mean values.
Mean and Fluctuating Force Distribution in a Random Array of Spheres
NASA Astrophysics Data System (ADS)
Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan
2015-11-01
This study presents a numerical study of the force distribution within a cluster of mono-disperse spherical particles. A direct forcing immersed boundary method is used to calculate the forces on individual particles for a volume fraction range of [0.1, 0.4] and a Reynolds number range of [10, 625]. The overall drag is compared to several drag laws found in the literature. As for the fluctuation of the hydrodynamic streamwise force among individual particles, it is shown to have a normal distribution with a standard deviation that varies with the volume fraction only. The standard deviation remains approximately 25% of the mean streamwise force on a single sphere. The force distribution shows a good correlation between the location of two to three nearest upstream and downstream neighbors and the magnitude of the forces. A detailed analysis of the pressure and shear forces contributions calculated on a ghost sphere in the vicinity of a single particle in a uniform flow reveals a mapping of those contributions. The combination of the mapping and number of nearest neighbors leads to a first order correction of the force distribution within a cluster which can be used in Lagrangian-Eulerian techniques. We also explore the possibility of a binary force model that systematically accounts for the effect of the nearest neighbors. This work was supported by the National Science Foundation (NSF OISE-0968313) under Partnership for International Research and Education (PIRE) in Multiphase Flows at the University of Florida.
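The reported fluctuation statistics are easy to reproduce in a sketch: per-particle streamwise forces drawn from a normal distribution whose standard deviation is 25% of the single-sphere mean force, as stated above. The force scale is normalized and purely illustrative.

```python
import math
import random

random.seed(1)
F_single = 1.0                     # mean streamwise force on an isolated sphere (normalized)
sigma = 0.25 * F_single            # fluctuation level reported in the abstract

# sample per-particle forces and recover the sample mean and standard deviation
forces = [random.gauss(F_single, sigma) for _ in range(100_000)]
mean = sum(forces) / len(forces)
var = sum((f - mean) ** 2 for f in forces) / (len(forces) - 1)
std = math.sqrt(var)               # ~0.25 of the single-sphere force
```

Such a sampled distribution is the kind of stochastic closure a Lagrangian-Eulerian solver could draw per-particle forces from, before applying the neighbor-based first-order correction the study describes.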
NASA Astrophysics Data System (ADS)
Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira; Vanderveld, R. Ali
2012-01-01
We study the fluctuations in luminosity distances due to gravitational lensing by large scale (≳35 Mpc) structures, specifically voids and sheets. We use a simplified “Swiss cheese” model consisting of a ΛCDM Friedmann-Robertson-Walker background in which a number of randomly distributed nonoverlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald, which includes the effect of lensing shear. The standard deviation of this distribution is ˜0.027 magnitudes and the mean is ˜0.003 magnitudes for voids of radius 35 Mpc, sources at redshift zs=1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thickness of ˜1 Mpc, the standard deviation is reduced to ˜0.013 magnitudes. This standard deviation due to voids is a factor of ˜3 smaller than that due to galaxy-scale structures. We summarize our results in terms of a fitting formula that is accurate to ˜20%, and also build a simplified analytic model that reproduces our results to within ˜30%. Our model also allows us to explore the domain of validity of weak-lensing theory for voids. We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens coupling are of order ˜4%, and corrections due to shear are ˜3%. Finally, we estimate the bias due to source-lens clustering in our model to be negligible.
Perneger, Thomas V; Combescure, Christophe
2017-07-01
Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P < 0.001), 9.1% were moderately significant (P ≥ 0.001 to < 0.01), 11.7% were weakly significant (P ≥ 0.01 to < 0.05), and 53.2% were nonsignificant (P ≥ 0.05). We noted three irregularities: (1) high proportion of P-values <0.001, especially in observational studies, (2) excess of P-values equal to 1, and (3) about twice as many P-values less than 0.05 compared with those more than 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.
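The study's comparison of observed P-values with a null/alternative mixture can be sketched by sampling from such a mixture and tabulating the same significance bands. The 50/50 mixture fraction echoes the study's estimate, but the particular decreasing alternative density below is an arbitrary illustrative choice, not the study's fitted model.

```python
import random

random.seed(2)

def sample_p(null_frac=0.5, n=10_000):
    """Mixture model: uniform P-values under the null; under the alternative,
    a density strictly decreasing from 0 to 1 (here U**10, an arbitrary choice)."""
    ps = []
    for _ in range(n):
        if random.random() < null_frac:
            ps.append(random.random())
        else:
            ps.append(random.random() ** 10)
    return ps

# tabulate the same bands used in the study
bands = {"<0.001": 0, "0.001-0.01": 0, "0.01-0.05": 0, ">=0.05": 0}
for p in sample_p():
    if p < 0.001:
        bands["<0.001"] += 1
    elif p < 0.01:
        bands["0.001-0.01"] += 1
    elif p < 0.05:
        bands["0.01-0.05"] += 1
    else:
        bands[">=0.05"] += 1
```

Comparing such simulated band frequencies against the published ones is one way to back out plausible null/alternative proportions, with deviations (spikes at P = 1, a step at 0.05) flagging selective reporting.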
Percolation of fracture networks and stereology
NASA Astrophysics Data System (ADS)
Thovert, Jean-Francois; Mourzenko, Valeri; Adler, Pierre
2017-04-01
The overall properties of fractured porous media depend on the percolative character of the fracture network in a crucial way. The most important examples are permeability and transport. In a recent systematic study, a very wide range of regular, irregular and random fracture shapes is considered, in monodisperse or polydisperse networks containing fractures with different shapes and/or sizes. A simple and new model involving a dimensionless density and a new shape factor is proposed for the percolation threshold, which accounts very efficiently for the influence of the fracture shape. It applies with very good accuracy to monodisperse or moderately polydisperse networks, and provides a good first estimation in other situations. A polydispersity index is shown to control the need for a correction, and the corrective term is modelled for the investigated size distributions. Moreover, and this is crucial for practical applications, the relevant quantities which are present in the expression of the percolation threshold can all be determined from trace maps. An exact and complete set of relations can be derived when the fractures are assumed to be Identical, Isotropically Oriented and Uniformly Distributed (I2OUD). Therefore, the dimensionless density of such networks can be derived directly from the trace maps and its percolating character can be a priori predicted. These relations involve the first five moments of the trace lengths. It is clear that the higher order moments are sensitive to truncation due to the boundaries of the sampling domain. However, it can be shown that the truncation effect can be fully taken into account and corrected, for any fracture shape, size and orientation distributions, if the fractures are spatially uniformly distributed. Systematic applications of these results are made to real fracture networks that we previously analyzed by other means and to numerically simulated networks. 
It is important to know if the stereological results and their applications can be extended to networks which are not I2OUD. In other words, for a given trace map, an equivalent I2OUD network is defined whose percolating character and permeability are readily deduced. The conditions under which these predicted properties are not too far from the real properties are under investigation.
Ion distribution in dry polyelectrolyte multilayers: a neutron reflectometry study
Ghoussoub, Yara E.; Zerball, Maximilian; Fares, Hadi M.; ...
2018-02-09
Counterions were found to be uniformly distributed in polycation-terminated films of poly(diallyldimethylammonium) and poly(styrenesulfonate) prepared on silicon wafers using layer-by-layer adsorption.
Tackling optimization challenges in industrial load control and full-duplex radios
NASA Astrophysics Data System (ADS)
Gholian, Armen
In price-based demand response programs in the smart grid, utilities set prices in accordance with grid operating conditions, and consumers respond to price signals by conducting optimal load control to minimize their energy expenditure while satisfying their energy needs. The industrial sector consumes a large portion of the world's electricity, so addressing optimal load control of energy-intensive industrial complexes, such as steel mills and oil refineries, is of practical importance. Formulating a general industrial complex and addressing issues in optimal industrial load control in the smart grid is the focus of the second part of this dissertation. Several industrial load details are considered in the proposed formulation, including ones that do not appear in residential or commercial load control problems. Operation under different smart pricing scenarios, namely day-ahead pricing, time-of-use pricing, peak pricing, inclining block rates, and critical peak pricing, is considered, as is the use of behind-the-meter renewable generation and energy storage. The formulated optimization problem is originally nonlinear and nonconvex, and thus hard to solve; however, it is reformulated into a tractable mixed-integer linear program. The performance of the design is assessed through various simulations for an oil refinery and a steel mini-mill. In the third part of this dissertation, a novel all-analog RF interference canceler is proposed. Radio self-interference cancellation (SIC) is the fundamental enabler for full-duplex radios. Since SIC methods based on baseband digital signal processing and/or beamforming are by themselves inadequate, an all-analog method is useful for drastically reducing the self-interference as the first stage of SIC. It is shown that a uniform architecture with uniformly distributed RF attenuators has a performance highly dependent on the carrier frequency.
It is also shown that a new architecture with the attenuators distributed in a clustered fashion has important advantages over the uniform architecture. These advantages are demonstrated numerically for random multipath interference channels, varying numbers of control bits in the step attenuators, attenuation-dependent phases, and single- and multi-level structures.
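The load-control part of the dissertation is a full mixed-integer program, but for a single flexible load under time-of-use pricing with only an energy requirement and a power cap, the optimum reduces to filling the cheapest hours first. A minimal sketch of that special case, with all prices and load parameters hypothetical:

```python
def schedule(energy, p_max, prices):
    """Fill the cheapest hours first; this greedy rule is optimal for one
    flexible load with only an energy requirement and a per-hour power cap."""
    plan = [0.0] * len(prices)
    remaining = energy
    for h in sorted(range(len(prices)), key=lambda h: prices[h]):
        take = min(p_max, remaining)
        plan[h] = take
        remaining -= take
        if remaining <= 0:
            break
    return plan

# hypothetical time-of-use tariff in $/kWh over 24 hours
tou = [0.08] * 7 + [0.15] * 4 + [0.25] * 3 + [0.15] * 6 + [0.08] * 4
plan = schedule(30.0, 5.0, tou)                 # 30 kWh at up to 5 kW
cost = sum(p * q for p, q in zip(tou, plan))    # all energy lands in $0.08 hours
```

Real industrial loads add process-precedence, minimum up/down-time, and storage constraints, which is what forces the problem into a mixed-integer linear program.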
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
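The continuization step can be sketched directly: a discrete score distribution is smoothed into a density by placing a kernel at each score point. Below, Gaussian and Epanechnikov kernels are applied to a toy binomial score distribution; note that operational kernel equating also rescales the kernel so the continuized distribution preserves the first two moments of the discrete one, which this sketch omits.

```python
import math

scores = list(range(11))                                   # discrete scores 0..10
probs = [math.comb(10, s) * 0.3**s * 0.7**(10 - s) for s in scores]

def gaussian_pdf(x, h=0.6):
    """Continuized density: a Gaussian kernel of bandwidth h at each score."""
    return sum(p * math.exp(-0.5 * ((x - s) / h) ** 2) / (h * math.sqrt(2 * math.pi))
               for s, p in zip(scores, probs))

def epanechnikov_pdf(x, h=1.2):
    """Same continuization with the compact-support Epanechnikov kernel,
    one of the boundary-bias-reducing alternatives discussed in Study II."""
    total = 0.0
    for s, p in zip(scores, probs):
        u = (x - s) / h
        if abs(u) < 1.0:
            total += p * 0.75 * (1.0 - u * u) / h
    return total

# both continuizations should still integrate to ~1 (Riemann sum on [-5, 16])
xs = [-5.0 + 0.01 * i for i in range(2101)]
area_g = sum(gaussian_pdf(x) for x in xs) * 0.01
area_e = sum(epanechnikov_pdf(x) for x in xs) * 0.01
```

The bandwidth h plays the variance/bias trade-off role described above: a small h keeps the spikes of the discrete distribution, a large h oversmooths, and at the score boundaries the Gaussian kernel leaks mass outside the score range, which is the boundary bias the compact-support kernels address.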
Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge
2015-01-01
The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC. To reproduce the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array that randomly outputs 256 channels of simulated signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter in ring oscillators. The experimental results show that the random numbers have a uniform histogram and that the whole system has high reliability.
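A standard way to quantify "uniform in histogram" for a 256-channel source is a chi-square goodness-of-fit test against the uniform distribution. The sketch below uses a software PRNG in place of the FPGA's ring-oscillator bits, purely for illustration of the check itself.

```python
import random

random.seed(3)
N = 256_000
counts = [0] * 256
for _ in range(N):                         # stand-in for the 256-channel TRNG output
    counts[random.randrange(256)] += 1

expected = N / 256
chi2 = sum((c - expected) ** 2 / expected for c in counts)
# With 255 degrees of freedom, chi2 near 255 is consistent with uniformity;
# values above roughly 310 would reject uniformity at about the 1% level.
```

The same statistic applied to the hardware generator's channel histogram gives a numerical criterion behind the qualitative "uniform in histogram" claim.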
Note: The design of thin gap chamber simulation signal source based on field programmable gate array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Kun; Wang, Xu; Li, Feng
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higa, Kenneth; Zhao, Hui; Parkinson, Dilworth Y.
The internal structure of a porous electrode strongly influences battery performance. Understanding the dynamics of electrode slurry drying could aid in engineering electrodes with desired properties. For instance, one might monitor the dynamic, spatially varying thickness near the edge of a slurry coating, as it should lead to non-uniform thickness of the dried film. This work examines the dynamic behavior of drying slurry drops consisting of SiOx and carbon black particles in a solution of carboxymethylcellulose and deionized water, as an experimental model of drying behavior near the edge of a slurry coating. An X-ray radiography-based procedure is developed to calculate the evolving spatial distribution of active material particles from images of the drying slurry drops. To the authors' knowledge, this study is the first to use radiography to investigate battery slurry drying, as well as the first to determine particle distributions from radiography images of drying suspensions. The dynamic results are consistent with tomography reconstructions of the static, fully-dried films. It is found that active material particles can rapidly become non-uniformly distributed within the drops. Heating can promote distribution uniformity, but seemingly must be applied very soon after slurry deposition. Higher slurry viscosity is found to strongly restrain particle redistribution.
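Radiographic estimation of material thickness typically rests on Beer-Lambert attenuation; a minimal sketch of that inversion is below. The attenuation coefficient and intensities are hypothetical, and the paper's actual calibration procedure is assumed to be more involved.

```python
import math

def mass_thickness(I, I0, mu):
    """Invert Beer-Lambert attenuation, I = I0 * exp(-mu * t), for thickness t."""
    return -math.log(I / I0) / mu

mu = 0.5                                   # attenuation coefficient, 1/mm (hypothetical)
I0 = 1000.0                                # open-beam intensity
I = I0 * math.exp(-mu * 2.0)               # synthetic measurement through 2 mm of material
t = mass_thickness(I, I0, mu)              # recovers 2.0 mm
```

Applying such an inversion pixel by pixel to a time series of radiographs is one route to the "evolving spatial distribution" of particles the procedure reports.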
NASA Astrophysics Data System (ADS)
Li, Chunfang; Li, Dongxiang; Wan, Gangqiang; Xu, Jie; Hou, Wanguo
2011-07-01
The citrate reduction method for the synthesis of gold nanoparticles (GNPs) has known advantages but usually yields products with a low nanoparticle concentration, which limits their application. Herein, we report a facile method to synthesize GNPs from concentrated chloroauric acid (2.5 mM) by adding sodium hydroxide and controlling the temperature. It was found that adding an appropriate amount of sodium hydroxide produces uniform, concentrated GNPs with a narrow size distribution; otherwise, broadly distributed nanoparticles or unstable colloids were obtained. A low reaction temperature helps control the nanoparticle formation rate, and uniform GNPs can be obtained in the presence of optimized NaOH concentrations. The pH values of the obtained uniform GNPs were found to be very near neutral, and the influence of pH on the particle size distribution may reflect different formation mechanisms of GNPs under high- or low-pH conditions. Moreover, this modified synthesis method saves more than 90% of the energy in the heating step. Such an environmentally friendly synthesis method for gold nanoparticles may have great potential for large-scale commercial and industrial manufacturing.
IR-camera methods for automotive brake system studies
NASA Astrophysics Data System (ADS)
Dinwiddie, Ralph B.; Lee, Kwangjin
1998-03-01
Automotive brake systems are energy conversion devices that convert kinetic energy into heat. Several phenomena, mostly related to noise and vibration problems, can occur during brake operation and are often associated with a non-uniform temperature distribution on the brake disk. These problems are of significant cost to the industry and are a quality concern for automotive companies and brake system vendors. One such problem is thermo-elastic instability in brake systems, during which several localized hot spots form around the circumferential direction of the brake disk. The temperature distribution and the time dependence of these hot spots, critical factors in analyzing this problem and in developing a fundamental understanding of this phenomenon, were recorded. Other modes of non-uniform temperature distribution, including hot banding and extreme localized heating, were also observed. All of these modes were observed on automotive brake systems using a high-speed IR camera operating in snapshot mode. The camera was synchronized with the rotation of the brake disk so that the time evolution of hot regions could be studied. This paper discusses the experimental approach in detail.
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well-mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
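The "potentially large confidence intervals" can be made concrete with an exact Poisson interval for a single McMaster count, computed here from first principles by bisection on the Poisson CDF. The multiplication factor of 50 eggs per gram per counted egg is a typical McMaster value, used here as an assumption.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by direct summation of the pmf."""
    if k < 0:
        return 0.0
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def bisect(g, lo, hi):
    """Root of an increasing function g on [lo, hi] with g(lo) < 0 < g(hi)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def poisson_ci(k, alpha=0.05):
    """Exact two-sided confidence interval for the Poisson mean from one count k."""
    lower = 0.0 if k == 0 else bisect(
        lambda lam: (1.0 - poisson_cdf(k - 1, lam)) - alpha / 2.0, 0.0, float(k))
    upper = bisect(
        lambda lam: alpha / 2.0 - poisson_cdf(k, lam),
        float(k), float(k) + 10.0 * math.sqrt(k + 1.0) + 10.0)
    return lower, upper

lo, hi = poisson_ci(10)                      # 10 eggs counted -> roughly (4.8, 18.4)
epg_interval = (50.0 * lo, 50.0 * hi)        # eggs per gram at a factor of 50
```

Ten eggs on a slide thus corresponds to a 95% interval spanning nearly a factor of four in eggs per gram, which is exactly the inherent sampling variability no modification of the technique can remove.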
Distributed Multihop Clustering Approach for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Israr, Nauman; Awan, Irfan
Prolonging the lifetime of Wireless Sensor Networks (WSNs) has been the focus of much current research. One issue that needs to be addressed alongside prolonging the network lifetime is ensuring uniform energy consumption across the network, especially in the case of random deployment. Cluster-based routing algorithms are believed to be the best choice for WSNs because they work on the principle of divide and conquer and also improve the network lifetime considerably compared to flat routing schemes. In this paper we propose a new routing strategy based on two-layer clustering, which exploits the redundancy in the network to minimise duplicate data transmission and makes both intercluster and intracluster communication multihop. The proposed algorithm makes use of nodes whose coverage area is already covered by neighbouring nodes; these nodes are marked as temporary cluster heads and are later selected at random for multihop intercluster communication. Performance studies indicate that the proposed algorithm effectively solves the problem of load balancing across the network and is more energy efficient than an enhanced version of the widely used LEACH algorithm.
NASA Astrophysics Data System (ADS)
Wang, Liping; Meyer, Clemens; Guibert, Edouard; Homsy, Alexandra; Whitlow, Harry J.
2017-08-01
Porous membranes are widely used as filters in a broad range of micro- and nanofluidic applications, e.g. organelle sorters, permeable cell growth substrates, and plasma filtration. Conventional silicon fabrication approaches are not suitable for microporous membranes due to the low mechanical stability of thin-film substrates, and other techniques like ion track etching are limited to producing randomly distributed, randomly oriented pores with non-uniform pore sizes. In this project, we developed a procedure for fabricating high-transmission microporous membranes by proton beam writing (PBW) combined with spin-casting and soft lithography. In this approach, focused 2 MeV protons were used to lithographically write patterns consisting of hexagonal arrays of high-density pillars a few μm in size in a SU-8 layer coated on a silicon wafer. After development, the pillars were conformally coated with a thin film of poly-para-xylylene (Parylene-C) release agent and spin-coated with polydimethylsiloxane (PDMS). To facilitate demolding, a special technique based on a laser-cut sealing-tape ring was developed. This method enabled the successful delamination of a 20-μm-thick PDMS membrane with high-density micropores from the mold without rupture or damage.
Random Walk Analysis of the Effect of Mechanical Degradation on All-Solid-State Battery Power
Bucci, Giovanna; Swamy, Tushar; Chiang, Yet-Ming; ...
2017-09-06
Mechanical and electrochemical phenomena are coupled in defining battery reliability, particularly for solid-state batteries. Micro-cracks act as barriers to Li-ion diffusion in the electrolyte, increasing the average electrode tortuosity. In our previous work, we showed that solid electrolytes are likely to suffer from mechanical degradation if their fracture energy is lower than 4 J m^-2 [G. Bucci, T. Swamy, Y.-M. Chiang, and W. C. Carter, J. Mater. Chem. A (2017)]. Here we study the effect of electrolyte micro-cracking on the effective conductivity of composite electrodes. Via random walk analyses, we predict the average diffusivity of lithium in a solid-state electrode to decrease linearly with the extent of mechanical degradation. Furthermore, the statistical distribution of first-passage times indicates that the microstructure becomes more and more heterogeneous as damage progresses. In addition to power and capacity loss, a non-uniform increase of the electrode tortuosity can lead to heterogeneous lithiation and further stress localization. The understanding of these phenomena at the mesoscale is essential to the implementation of safe, high-energy solid-state batteries.
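The random-walk picture can be sketched on a toy 2D lattice: sites blocked at random stand in for micro-cracks, and the mean first-passage (escape) time of a walker grows as damage increases. Lattice size, damage fraction, and step cap below are arbitrary illustrative choices, not the paper's model parameters.

```python
import random

def mean_escape_time(size, damage, trials, seed):
    """Mean steps for a walker started at the center of a size x size lattice
    to reach the boundary; randomly blocked sites model micro-cracks."""
    rng = random.Random(seed)
    half = size // 2
    blocked = {(x, y) for x in range(size) for y in range(size)
               if rng.random() < damage and (x, y) != (half, half)}
    total = 0
    for _ in range(trials):
        x = y = half
        steps = 0
        while 0 < x < size - 1 and 0 < y < size - 1 and steps < 50_000:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            if (x + dx, y + dy) not in blocked:     # a crack reflects the walker
                x, y = x + dx, y + dy
            steps += 1
        total += steps
    return total / trials

t_clean = mean_escape_time(21, 0.00, 500, seed=7)
t_cracked = mean_escape_time(21, 0.15, 500, seed=7)
```

The growth of the mean escape time with the blocked fraction is the toy analogue of the diffusivity loss, and the spread of individual escape times mirrors the widening first-passage-time distribution reported as damage progresses.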
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm (1), multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of GPU hardware resources. We make comparisons between the GPU and serial CPU Monte Carlo implementations to assess the speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
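The core waste-recycling idea, using rejected trial states in the estimator instead of discarding them, can be shown in its simplest single-trial form on a 1D harmonic well, where the Boltzmann average of x² at β = 1 is exactly 1. This CPU sketch only illustrates the estimator; the paper's CUDA multi-trial implementation is far more elaborate.

```python
import math
import random

random.seed(11)
beta = 1.0

def energy(x):
    return 0.5 * x * x                     # harmonic well; <x^2> = 1/beta exactly

x = 0.0
delta = 1.5                                # maximum displacement per trial move
n_steps = 200_000
wr_sum = 0.0
for _ in range(n_steps):
    y = x + random.uniform(-delta, delta)
    p_acc = min(1.0, math.exp(-beta * (energy(y) - energy(x))))
    # waste recycling: the estimator averages over BOTH the trial and the
    # current state, weighted by the acceptance probability, instead of
    # recording only the state the chain actually visits
    wr_sum += p_acc * y * y + (1.0 - p_acc) * x * x
    if random.random() < p_acc:
        x = y
mean_x2 = wr_sum / n_steps                 # approaches 1.0
```

Because every generated trial contributes, the estimator's variance is lower than that of the plain visited-state average at the same number of moves, which is the property the multi-trial GPU version amplifies.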
Jiao, Haisong; Pu, Tao; Zheng, Jilin; Xiang, Peng; Fang, Tao
2017-05-15
The physical-layer security of a quantum-noise randomized cipher (QNRC) system is, for the first time, quantitatively evaluated with secrecy capacity as the performance metric. Treating quantum noise as a channel advantage for the legitimate parties over eavesdroppers, specific wire-tap models for both the key and data channels are built with channel outputs yielded by quantum heterodyne measurement; general expressions of the secrecy capacities for both channels are derived, where the matching codes are proved to be uniformly distributed. The maximal achievable secrecy rate of the system is proposed, under which secrecy of both the key and data is guaranteed. The influence of various system parameters on the secrecy capacities is assessed in detail. The results indicate that QNRC combined with proper channel codes is a promising framework for secure, high-speed, long-distance communication, with rates that can be orders of magnitude higher than the perfect secrecy rates of other encryption systems. Even if the eavesdropper intercepts more signal power than the legitimate receiver, secure communication (up to Gb/s) can still be achievable. Moreover, the secrecy of the running key is found to be the main constraint on the system's maximal secrecy rate.
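Secrecy capacity as a metric is easiest to see for the classical degraded Gaussian wiretap channel, where it is the difference of the two channel capacities. This textbook formula is only an analogy to the QNRC analysis, whose channel models are built from quantum heterodyne measurement; the SNR values below are hypothetical.

```python
import math

def gaussian_secrecy_capacity(snr_main, snr_eve):
    """Secrecy capacity (bits per channel use) of the degraded Gaussian
    wiretap channel: Cs = max(0, C_main - C_eve)."""
    c_main = 0.5 * math.log2(1.0 + snr_main)
    c_eve = 0.5 * math.log2(1.0 + snr_eve)
    return max(0.0, c_main - c_eve)

# quantum noise degrades the eavesdropper's effective SNR (values hypothetical)
cs = gaussian_secrecy_capacity(100.0, 10.0)
```

The QNRC argument has the same shape: quantum noise at heterodyne detection leaves the eavesdropper with an effectively noisier channel, so a positive rate gap, and hence a positive secrecy rate, can survive even when the eavesdropper collects more signal power.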
Random Walk Analysis of the Effect of Mechanical Degradation on All-Solid-State Battery Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucci, Giovanna; Swamy, Tushar; Chiang, Yet-Ming
Mechanical and electrochemical phenomena are coupled in determining battery reliability, particularly for solid-state batteries. Micro-cracks act as barriers to Li-ion diffusion in the electrolyte, increasing the average electrode's tortuosity. In our previous work, we showed that solid electrolytes are likely to suffer from mechanical degradation if their fracture energy is lower than 4 J m⁻² [G. Bucci, T. Swamy, Y.-M. Chiang, and W. C. Carter, J. Mater. Chem. A (2017)]. Here we study the effect of electrolyte micro-cracking on the effective conductivity of composite electrodes. Via random walk analyses, we predict the average diffusivity of lithium in a solid-state electrode to decrease linearly with the extent of mechanical degradation. Furthermore, the statistical distribution of first passage times indicates that the microstructure becomes more and more heterogeneous as damage progresses. In addition to power and capacity loss, a non-uniform increase of the electrode tortuosity can lead to heterogeneous lithiation and further stress localization. Finally, the understanding of these phenomena at the mesoscale is essential to the implementation of safe high-energy solid-state batteries.
NASA Astrophysics Data System (ADS)
Henstridge, Martin C.; Batchelor-McAuley, Christopher; Gusmão, Rui; Compton, Richard G.
2011-11-01
Two simple models of electrode surface inhomogeneity based on Marcus-Hush theory are considered; a distribution in formal potentials and a distribution in electron tunnelling distances. Cyclic voltammetry simulated using these models is compared with that simulated using Marcus-Hush theory for a flat, uniform and homogeneous electrode surface, with the two models of surface inhomogeneity yielding broadened peaks with decreased peak-currents. An edge-plane pyrolytic graphite electrode is covalently modified with ferrocene via 'click' chemistry and the resulting voltammetry compared with each of the three previously considered models. The distribution of formal potentials is seen to fit the experimental data most closely.
Sapsirisavat, Vorapot; Vongsutilers, Vorasit; Thammajaruk, Narukjaporn; Pussadee, Kanitta; Riyaten, Prakit; Kerr, Stephen; Avihingsanon, Anchalee; Phanuphak, Praphan; Ruxrungtham, Kiat
2016-01-01
Ensuring that medicines meet quality standards is essential for safety and efficacy. There have been occasional reports of substandard generic medicines, especially in resource-limited settings where policies to control quality may be less rigorous. As HIV treatment in Thailand depends mostly on affordable generic antiretrovirals (ARV), we performed quality assurance testing of several generic ARV available from different sources in Thailand and a source from Vietnam. We sampled Tenofovir 300mg, Efavirenz 600mg and Lopinavir/ritonavir 200/50mg from 10 primary hospitals randomly selected from those participating in the National AIDS Program, 2 non-government organization ARV clinics, and 3 private drug stores. Quality of ARV was analyzed by blinded investigators at the Faculty of Pharmaceutical Science, Chulalongkorn University. The analysis included an identification test for drug molecules, a chemical composition assay to quantitate the active ingredients, a uniformity of mass test and a dissolution test to assess in-vitro drug release. Comparisons were made against the standards described in the WHO international pharmacopeia. A total of 42 batches of ARV from 15 sources were sampled from January-March 2015. Among those generics, 23, 17, 1, and 1 were Thai-made, Indian-made, Vietnamese-made and Chinese-made, respectively. All sampled products, regardless of manufacturers or sources, met the International Pharmacopeia standards for composition assay, mass uniformity and dissolution. Although local regulations restrict ARV supply to hospitals and clinics, samples of ARV could be bought from private drug stores even without formal prescription. Sampled generic ARVs distributed within Thailand and 1 Vietnamese pharmacy showed consistent quality.
However some products were illegally supplied without prescription, highlighting the importance of dispensing ARV for treatment or prevention in facilities where continuity along the HIV treatment and care cascade is available.
A randomization approach to handling data scaling in nuclear medicine.
Bai, Chuanyong; Conwell, Richard; Kindem, Joel
2010-06-01
In medical imaging, data scaling is sometimes desired to handle the system complexity, such as uniformity calibration. Since the data are usually saved in short integer, conventional data scaling will first scale the data in floating point format and then truncate or round the floating point data to short integer data. For example, when using truncation, scaling of 9 by 1.1 results in 9 and scaling of 10 by 1.1 results in 11. When the count level is low, such scaling may change the local data distribution and affect the intended application of the data. In this work, the authors use an example gated cardiac SPECT study to illustrate the effect of conventional scaling by factors of 1.1 and 1.2. The authors then scaled the data with the same scaling factors using a randomization approach, in which a random number evenly distributed between 0 and 1 is generated to determine how the floating point data will be saved as short integer data. If the random number is between 0 and 0.9, then 9.9 will be saved as 10, otherwise 9. In other words, the floating point value 9.9 will be saved in short integer value as 10 with 90% probability or 9 with 10% probability. For statistical analysis of the performance, the authors applied the conventional approach with rounding and the randomization approach to 50 consecutive gated studies from a clinical site. For the example study, the image reconstructed from the original data showed an apparent perfusion defect at the apex of the myocardium. The defect size was noticeably changed by scaling with 1.1 and 1.2 using the conventional approaches with truncation and rounding. Using the randomization approach, in contrast, the images from the scaled data appeared identical to the original image. Line profile analysis of the scaled data showed that the randomization approach introduced the least change to the data as compared to the conventional approaches. 
For the 50 gated data sets, significantly more studies showed quantitative differences between the original images and the images from the data scaled by 1.2 using the rounding approach than the randomization approach [46/50 (92%) versus 3/50 (6%), p < 0.05]. Likewise, significantly more studies showed visually noticeable differences between the original images and the images from the data scaled by 1.2 using the rounding approach than randomization [29/50 (58%) versus 1/50 (2%), p < 0.05]. In conclusion, the proposed randomization approach minimizes the scaling-introduced local data change as compared to the conventional approaches. It is preferred for nuclear medicine data scaling.
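The randomization approach described above is essentially stochastic rounding: a scaled value is stored as the next-higher integer with probability equal to its fractional part, so the stored data are unbiased in expectation. A minimal sketch, with illustrative counts rather than real SPECT data:

```python
import math
import random

def randomized_scale(counts, factor, seed=0):
    """Scale integer counts by a float factor and store as integers via
    stochastic rounding: a value v becomes floor(v)+1 with probability
    equal to its fractional part, else floor(v).  The expected value of
    the stored datum equals the scaled value, unlike truncation/rounding."""
    rng = random.Random(seed)
    out = []
    for c in counts:
        v = c * factor
        lo = math.floor(v)
        frac = v - lo
        out.append(lo + 1 if rng.random() < frac else lo)
    return out

# e.g. 9 * 1.1 = 9.9 is stored as 10 with 90% probability, as 9 with 10%
counts = [9, 10, 9, 10] * 2500
scaled = randomized_scale(counts, 1.1)
mean = sum(scaled) / len(scaled)   # expected mean: 9.5 * 1.1 = 10.45
```

This matches the paper's example behavior (9.9 saved as 10 with 90% probability) while avoiding the systematic local distortion that truncation or rounding introduces at low count levels.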
Baldi, Pierre
2010-01-01
As repositories of chemical molecules continue to expand and become more open, it becomes increasingly important to develop tools to search them efficiently and assess the statistical significance of chemical similarity scores. Here we develop a general framework for understanding, modeling, predicting, and approximating the distribution of chemical similarity scores and its extreme values in large databases. The framework can be applied to different chemical representations and similarity measures but is demonstrated here using the most common binary fingerprints with the Tanimoto similarity measure. After introducing several probabilistic models of fingerprints, including the Conditional Gaussian Uniform model, we show that the distribution of Tanimoto scores can be approximated by the distribution of the ratio of two correlated Normal random variables associated with the corresponding unions and intersections. This remains true also when the distribution of similarity scores is conditioned on the size of the query molecules in order to derive more fine-grained results and improve chemical retrieval. The corresponding extreme value distributions for the maximum scores are approximated by Weibull distributions. From these various distributions and their analytical forms, Z-scores, E-values, and p-values are derived to assess the significance of similarity scores. In addition, the framework also allows one to predict the value of standard chemical retrieval metrics, such as Sensitivity and Specificity at fixed thresholds, or ROC (Receiver Operating Characteristic) curves at multiple thresholds, and to detect outliers in the form of atypical molecules. Numerous and diverse experiments carried out in part with large sets of molecules from the ChemDB show remarkable agreement between theory and empirical results. PMID:20540577
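For concreteness, the Tanimoto measure on binary fingerprints is the ratio of intersection to union bit counts; its distribution is what the framework above approximates via correlated Normals. A minimal sketch using Python integers as bit vectors (the 8-bit fingerprints are hypothetical examples, not ChemDB data):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two binary fingerprints encoded as Python
    ints (bit vectors): |A AND B| / |A OR B|."""
    inter = bin(fp_a & fp_b).count("1")
    union = bin(fp_a | fp_b).count("1")
    return inter / union if union else 0.0

# hypothetical 8-bit fingerprints
a = 0b10110010
b = 0b10010110
# a & b = 0b10010010 (3 bits), a | b = 0b10110110 (5 bits)
score = tanimoto(a, b)   # 3/5 = 0.6
```

Real fingerprints are typically 1024-2048 bits, but the computation is identical.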
Uniform-related infection control practices of dental students
Aljohani, Yazan; Almutadares, Mohammed; Alfaifi, Khalid; El Madhoun, Mona; Albahiti, Maysoon H; Al-Hazmi, Nadia
2017-01-01
Background Uniform-related infection control practices are sometimes overlooked and underemphasized. In Saudi Arabia, personal protective equipment must meet global standards for infection control, but the country’s Islamic legislature also needs to be taken into account. Aim To assess uniform-related infection control practices of a group of dental students in a dental school in Saudi Arabia and compare the results with existing literature related to cross-contamination through uniforms in the dental field. Method A questionnaire was formulated and distributed to dental students at King Abdulaziz University Faculty of Dentistry in Jeddah, Saudi Arabia, which queried the students about their uniform-related infection control practices and their methods and frequency of laundering and sanitizing their uniforms, footwear, and name tags. Results There is a significant difference between genders with regard to daily uniform habits. The frequency of uniform washing was below the standard and almost 30% of students were not aware of how their uniforms are washed. Added to this, there is no consensus on a unified uniform for male and female students. Conclusion Information on preventing cross-contamination through wearing uniforms must be supplied, reinforced, and emphasized while taking into consideration the cultural needs of the Saudi society. PMID:28490894
Development of on-site PAFC stacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hotta, K.; Matsumoto, Y.; Horiuchi, H.
1996-12-31
PAFC (Phosphoric Acid Fuel Cell) has been researched for commercial use, and demonstration plants have been installed at various sites. However, PAFC does not yet have sufficient stability, so more research and development is required. In particular, the cell stack needs a proper state of the three-phase (liquid, gas, and solid) interface, and maintaining this condition for a long time is technologically very difficult. In a small cell with an electrode area of 100 cm², the gas flow and temperature distributions are uniform. But in a large cell with an electrode area of 4000 cm², the temperature distribution is non-uniform. Such non-uniformity would shorten the cell life, because it creates hot spots and local gas starvation. We therefore inserted thermocouples in a short stack to measure three-dimensional temperature distributions and observed the effects of current density and gas utilization on temperature.
Tensile testing grips ensure uniform loading of bimetal tubing specimens
NASA Technical Reports Server (NTRS)
Driscol, S. D.; Hunt, V.
1968-01-01
Tensile testing grip uniformly distributes stresses to the internal and external tube of bimetal tubing specimens. The grip is comprised of a slotted external tube grip, a slotted internal tube grip, a machine bolt and nut, an internal grip expansion cone, and an external grip compression nut.
The mixing of rain with near-surface water
Dennis F. Houk
1976-01-01
Rain experiments were run with various temperature differences between the warm rain and the cool receiving water. The rain intensities were uniform and the raindrop sizes were usually uniform (2.2 mm, 3.6 mm, and 5.5 mm diameter drops). Two drop size distributions were also used.
Anode current density distribution in a cusped field thruster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Huan, E-mail: wuhuan58@qq.com; Liu, Hui, E-mail: hlying@gmail.com; Meng, Yingchao
2015-12-15
The cusped field thruster is a new electric propulsion device that is expected to have a non-uniform radial current density at the anode. To further study the anode current density distribution, a multi-annulus anode is designed to directly measure the anode current density for the first time. The anode current density decreases sharply at larger radii; the magnitude of collected current density at the center is far higher compared with the outer annuli. The anode current density non-uniformity does not demonstrate a significant change with varying working conditions.
Emoto, Akira; Fukuda, Takashi
2013-02-20
For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
Impact of temporal probability in 4D dose calculation for lung tumors.
Rouabhi, Ouided; Ma, Mingyu; Bayouth, John; Xia, Junyi
2015-11-08
The purpose of this study was to evaluate the dosimetric uncertainty in 4D dose calculation using three temporal probability distributions: uniform distribution, sinusoidal distribution, and patient-specific distribution derived from the patient respiratory trace. Temporal probability, defined as the fraction of time a patient spends in each respiratory amplitude, was evaluated in nine lung cancer patients. Four-dimensional computed tomography (4D CT), along with deformable image registration, was used to compute 4D dose incorporating the patient's respiratory motion. First, the dose of each of 10 phase CTs was computed using the same planning parameters as those used in 3D treatment planning based on the breath-hold CT. Next, deformable image registration was used to deform the dose of each phase CT to the breath-hold CT using the deformation map between the phase CT and the breath-hold CT. Finally, the 4D dose was computed by summing the deformed phase doses using their corresponding temporal probabilities. In this study, 4D dose calculated from the patient-specific temporal probability distribution was used as the ground truth. The dosimetric evaluation metrics included: 1) 3D gamma analysis, 2) mean tumor dose (MTD), 3) mean lung dose (MLD), and 4) lung V20. For seven out of nine patients, both uniform and sinusoidal temporal probability dose distributions were found to have an average gamma passing rate > 95% for both the lung and PTV regions. Compared with 4D dose calculated using the patient respiratory trace, doses using uniform and sinusoidal distribution showed a percentage difference on average of -0.1% ± 0.6% and -0.2% ± 0.4% in MTD, -0.2% ± 1.9% and -0.2% ± 1.3% in MLD, 0.09% ± 2.8% and -0.07% ± 1.8% in lung V20, -0.1% ± 2.0% and 0.08% ± 1.34% in lung V10, 0.47% ± 1.8% and 0.19% ± 1.3% in lung V5, respectively.
We concluded that four-dimensional dose computed using either a uniform or sinusoidal temporal probability distribution can approximate four-dimensional dose computed using the patient-specific respiratory trace.
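The final accumulation step described above is a voxelwise probability-weighted sum of the deformed phase doses. A minimal sketch under stated assumptions: the phase doses are taken as already deformed onto the breath-hold geometry, and the tiny 4-voxel arrays are illustrative, not patient data.

```python
def dose_4d(phase_doses, temporal_probs):
    """Voxelwise 4D dose: sum of deformed phase doses weighted by the
    temporal probability (fraction of the breathing cycle spent in each
    respiratory phase).  Probabilities must sum to 1."""
    assert abs(sum(temporal_probs) - 1.0) < 1e-9
    n_vox = len(phase_doses[0])
    total = [0.0] * n_vox
    for dose, p in zip(phase_doses, temporal_probs):
        for i in range(n_vox):
            total[i] += p * dose[i]
    return total

# 10 phases, 4 voxels each; uniform temporal probability of 0.1 per phase
phases = [[1.0 + 0.1 * k] * 4 for k in range(10)]
uniform = [0.1] * 10
d = dose_4d(phases, uniform)   # each voxel: mean of 1.0..1.9 = 1.45
```

Swapping in a sinusoidal or patient-specific probability vector changes only `temporal_probs`, which is why the three distributions in the study can be compared on identical phase doses.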
Chen, Zheng; Liu, Liu; Mu, Lin
2017-05-03
In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: a uniform numerical stability with respect to the Knudsen number ϵ and a uniform-in-ϵ error estimate are given. For the temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and sAP property of the method are conducted.
A simplified model of biosonar echoes from foliage and the properties of natural foliages.
Ming, Chen; Zhu, Hongxiao; Müller, Rolf
2017-01-01
Foliage echoes could play an important role in the sensory ecology of echolocating bats, but many aspects of their sensory information content remain to be explored. A realistic numerical model for these echoes could support the development of hypotheses for the relationship between foliage properties and echo parameters. In prior work by the authors, a simple foliage model based on circular disks distributed uniformly in space has been developed. In the current work, three key simplifications used in this model have been examined: (i) representing leaves as circular disks, (ii) neglecting shading effects between leaves, and (iii) the uniform spatial distribution of the leaves. The target strengths of individual leaves and shading between them have been examined in physical experiments, whereas the impact of the spatial leaf distribution has been studied by modifying the numerical model to include leaf distributions according to a biomimetic model for natural branching patterns (L-systems). Leaf samples from a single species (leatherleaf arrowwood) were found to match the relationship between size and target strength of the disk model fairly well, albeit with a large variability part of which could be due to unaccounted geometrical features of the leaves. Shading between leaf-sized disks did occur for distances below 50 cm and could hence impact the echoes. Echoes generated with L-system models in two distinct tree species (ginkgo and pine) showed consistently more temporal inhomogeneity in the envelope amplitudes than a reference with uniform distribution. However, these differences were small compared to effects found in response to changes in the relative orientation of simulated sonar beam and foliage. These findings support the utility of the uniform leaf distribution model and suggest that bats could use temporal inhomogeneities in the echoes to make inferences regarding the relative positioning of their sonar and a foliage.
Personal computer (PC) based image processing applied to fluid mechanics research
NASA Technical Reports Server (NTRS)
Cho, Y.-C.; Mclachlan, B. G.
1987-01-01
A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
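The scattered-to-grid interpolation step can be sketched as a normalized Gaussian-weighted average of the randomly located velocity samples at each grid point. This is a simplified stand-in for the adaptive-window convolution in the paper: here the window width `sigma` is fixed rather than adapted to the local sample density, and the linear velocity field is an illustrative example.

```python
import math

def gaussian_grid_interp(points, grid_x, grid_y, sigma=0.8):
    """Interpolate scattered samples (x, y, value) onto a regular grid by
    normalized Gaussian-weighted averaging (fixed, non-adaptive window)."""
    out = []
    for gy in grid_y:
        row = []
        for gx in grid_x:
            wsum = vsum = 0.0
            for x, y, v in points:
                w = math.exp(-((x - gx) ** 2 + (y - gy) ** 2) / (2 * sigma ** 2))
                wsum += w
                vsum += w * v
            row.append(vsum / wsum if wsum > 0 else 0.0)
        out.append(row)
    return out

# velocity samples at scattered points of a linear field v = x
pts = [(0.2, 0.5, 0.2), (1.0, 0.3, 1.0), (1.8, 0.7, 1.8)]
grid = gaussian_grid_interp(pts, grid_x=[1.0], grid_y=[0.5])
# grid[0][0] should be close to 1.0, the field value at (1.0, 0.5)
```

An adaptive variant would shrink `sigma` where streaks are dense and widen it where they are sparse, trading smoothing against resolution.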
NASA Astrophysics Data System (ADS)
Okazaki, Tomohisa; Seino, Satoshi; Matsuura, Yoshiyuki; Otake, Hiroaki; Kugai, Junichiro; Ohkubo, Yuji; Nitani, Hiroaki; Nakagawa, Takashi; Yamamoto, Takao A.
2017-04-01
The process of nanoparticle formation by radiation chemical synthesis in a heterogeneous system has been investigated. Carbon-supported Pt-based bimetallic nanoparticles were synthesized using a high-energy electron beam. Rh, Cu, Ru, and Sn were used as counterpart metals. The nanoparticles were characterized by inductively coupled plasma atomic emission spectrometry, transmission electron microscopy, X-ray diffraction, and X-ray absorption spectroscopy. PtRh formed a uniform random alloy nanoparticle, while Cu partially formed an alloy with Pt and the remaining Cu existed as CuO. PtRu formed an alloy structure with a composition distribution of a Pt-rich core and Ru-rich shell. No alloying was observed in PtSn, which had a Pt-SnO2 structure. The alloy and oxide formation mechanisms are discussed considering the redox potentials, the standard enthalpy of oxide formation, and the solid solubilities of Pt and the counterpart metals.
NASA Astrophysics Data System (ADS)
Gao, Rui; Ge, Wen-jun; Miao, Shu; Zhang, Tao; Wang, Xian-ping; Fang, Qian-feng
2016-03-01
The grain morphology, nano-oxide particles, and mechanical properties of oxide dispersion strengthened (ODS)-316L austenitic steel synthesized by the electron beam selective melting (EBSM) technique with different post-working processes were explored in this study. The ODS-316L austenitic steel with superfine nano-sized oxide particles of 30-40 nm exhibits good tensile strength (412 MPa) and large total elongation (about 51%) due to the pinning effect of the uniformly distributed oxide particles on dislocations. After hot rolling, the specimen exhibits a higher tensile strength of 482 MPa, but the elongation decreases to 31.8% owing to the introduction of high-density dislocations. The subsequent heat treatment eliminates the grain defects induced by hot rolling and increases the fraction of randomly oriented grains, which further improves the strength and ductility of the EBSM ODS-316L steel.
Thermally assisted nanosecond laser generation of ferric nanoparticles
NASA Astrophysics Data System (ADS)
Kurselis, K.; Kozheshkurt, V.; Kiyan, R.; Chichkov, B.; Sajti, L.
2018-03-01
A technique to increase nanosecond-laser-based production of ferric nanoparticles by elevating the temperature of the iron target and controlling its surface exposure to oxygen is reported. High-power near-infrared laser ablation of the iron target heated up to 600 °C enhances the particle generation efficiency by more than tenfold, to over 6 μg/J. Temporal and thermal dependencies of the particle generation process indicate a correlation of this enhancement with the oxidative processes that take place on the iron surface during the per-spot interpulse delay. Nanoparticles produced using the heat-assisted ablation technique are examined using scanning and transmission electron microscopy, confirming the presence of 1-100 nm nanoparticles with an exponential size distribution that contain multiple randomly oriented magnetite nanocrystallites. The described process enables the application of high-power lasers and facilitates precise, uniform, and controllable direct deposition of ferric nanoparticle coatings at industry-relevant rates.
Theoretical and observational analysis of spacecraft fields
NASA Technical Reports Server (NTRS)
Neubauer, F. M.; Schatten, K. H.
1972-01-01
In order to investigate the nondipolar contributions of spacecraft magnetic fields a simple magnetic field model is proposed. This model consists of randomly oriented dipoles in a given volume. Two sets of formulas are presented which give the rms-multipole field components, for isotropic orientations of the dipoles at given positions and for isotropic orientations of the dipoles distributed uniformly throughout a cube or sphere. The statistical results for an 8 cu m cube together with individual examples computed numerically show the following features: Beyond about 2 to 3 m distance from the center of the cube, the field is dominated by an equivalent dipole. The magnitude of the magnetic moment of the dipolar part is approximated by an expression for equal magnetic moments or generally by the Pythagorean sum of the dipole moments. The radial component is generally greater than either of the transverse components for the dipole portion as well as for the nondipolar field contributions.
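The Pythagorean-sum claim above (the equivalent dipole moment is approximately the root of the sum of squared moments) follows because cross terms between independent, isotropically oriented moments vanish in expectation. A small Monte Carlo check, with illustrative moment magnitudes rather than any spacecraft data:

```python
import math
import random

def rms_total_moment(moments, n_trials=4000, seed=7):
    """Monte Carlo estimate of the rms magnitude of the vector sum of
    dipole moments with isotropically random orientations.  Expected to
    equal the Pythagorean sum sqrt(sum m_i^2), since the cross terms
    between independent isotropic unit vectors average to zero."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_trials):
        sx = sy = sz = 0.0
        for m in moments:
            # isotropic random unit vector via normalized Gaussian triple
            x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
            r = math.sqrt(x * x + y * y + z * z)
            sx += m * x / r
            sy += m * y / r
            sz += m * z / r
        total_sq += sx * sx + sy * sy + sz * sz
    return math.sqrt(total_sq / n_trials)

moments = [1.0, 2.0, 2.0]
pythagorean = math.sqrt(sum(m * m for m in moments))   # = 3.0
rms = rms_total_moment(moments)                        # should be close to 3.0
```

Note this reproduces only the statistical dipole-magnitude result, not the multipole field formulas of the paper.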
Model of a thin film optical fiber fluorosensor
NASA Technical Reports Server (NTRS)
Egalon, Claudio O.; Rogowski, Robert S.
1991-01-01
The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques by Marcuse (1988) in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions are developed based on an infinite cladding approximation which yield the power efficiency for a fiber coated with fluorescent sources in the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant for variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.
NASA Technical Reports Server (NTRS)
Wang, C.-W.; Stark, W.
2005-01-01
This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
Isotropic stochastic rotation dynamics
NASA Astrophysics Data System (ADS)
Mühlbauer, Sebastian; Strobl, Severin; Pöschel, Thorsten
2017-12-01
Stochastic rotation dynamics (SRD) is a widely used method for the mesoscopic modeling of complex fluids, such as colloidal suspensions or multiphase flows. In this method, however, the underlying Cartesian grid defining the coarse-grained interaction volumes induces anisotropy. We propose an isotropic, lattice-free variant of stochastic rotation dynamics, termed iSRD. Instead of Cartesian grid cells, we employ randomly distributed spherical interaction volumes. This eliminates the requirement of a grid shift, which is essential in standard SRD to maintain Galilean invariance. We derive analytical expressions for the viscosity and the diffusion coefficient in relation to the model parameters, which show excellent agreement with the results obtained in iSRD simulations. The proposed algorithm is particularly suitable to model systems bound by walls of complex shape, where the domain cannot be meshed uniformly. The presented approach is not limited to SRD but is applicable to any other mesoscopic method, where particles interact within certain coarse-grained volumes.
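The collision step shared by standard SRD and the iSRD variant above rotates particle velocities relative to the mean velocity of their interaction volume, which conserves momentum exactly. A minimal 2D sketch of one such step for a single interaction volume (the paper's iSRD uses 3D spherical volumes; the rotation angle and velocities here are illustrative):

```python
import math
import random

def srd_rotate_2d(velocities, alpha=math.pi / 2, seed=3):
    """One stochastic-rotation step: rotate each velocity relative to the
    mean velocity of the interaction volume by +/-alpha (sign chosen at
    random).  The mean velocity, i.e. momentum, is conserved exactly."""
    rng = random.Random(seed)
    n = len(velocities)
    ux = sum(v[0] for v in velocities) / n
    uy = sum(v[1] for v in velocities) / n
    s = 1.0 if rng.random() < 0.5 else -1.0
    c, sn = math.cos(s * alpha), math.sin(s * alpha)
    out = []
    for vx, vy in velocities:
        dx, dy = vx - ux, vy - uy
        out.append((ux + c * dx - sn * dy, uy + sn * dx + c * dy))
    return out

vels = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
new = srd_rotate_2d(vels)
# the mean velocity before and after the rotation is identical
```

In iSRD the particles entering this step are those inside a randomly placed spherical volume rather than a Cartesian grid cell, which is what removes the grid-induced anisotropy and the need for a grid shift.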
Benford’s Law: Textbook Exercises and Multiple-Choice Testbanks
Slepkov, Aaron D.; Ironside, Kevin B.; DiBattista, David
2015-01-01
Benford’s Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford’s Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford’s Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford’s Law, the testbank is nonetheless secure against such a Benford’s attack for banal reasons. PMID:25689468
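The logarithmic trend described above has a closed form: the expected frequency of leading digit d is log10(1 + 1/d). A one-line computation reproduces the 30.1% and 4.6% figures quoted in the abstract:

```python
import math

def benford_expected():
    """Expected leading-digit frequencies under Benford's Law:
    P(d) = log10(1 + 1/d) for d = 1..9."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

freqs = benford_expected()
# freqs[1] ~ 0.301 (30.1%), freqs[9] ~ 0.046 (4.6%); the nine values sum to 1
```

Comparing the empirical leading-digit histogram of a testbank's answers against these frequencies (e.g. with a chi-squared statistic) is the basic form of the "Benford's attack" the paper investigates.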