Sample records for uniform probability distribution

  1. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from a specified probability distribution at a rate of 10 MHz. Using Boolean logic, the circuit implements a pseudorandom-number-generating algorithm. The circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying a specified nonuniform probability distribution are generated by processing the uniformly distributed outputs of the eight 12-bit generators through a "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
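
    The comparator-and-memory pipeline can be pictured in software. The sketch below is a hypothetical illustration of the idea, not the published circuit: each output bit is fixed most-significant-first by comparing a uniform sample against the conditional probability of a 1 given the bits already fixed.

```python
import random

def conditional_bit_sampler(pmf, rng=random.random):
    """Draw one 8-bit value from `pmf` (length-256 list of probabilities)
    by emitting bits MSB-first. Each bit is set by comparing a uniform
    sample to its conditional probability given the prefix -- a software
    analogue of the abstract's comparator pipeline."""
    lo, hi = 0, 256                     # value range matching the current prefix
    for _ in range(8):
        mid = (lo + hi) // 2
        total = sum(pmf[lo:hi])
        # P(next bit = 1 | prefix); 0.5 is an arbitrary fallback for a
        # zero-probability prefix, which is never entered in practice.
        p_one = sum(pmf[mid:hi]) / total if total else 0.5
        if rng() < p_one:               # comparator: uniform sample vs. threshold
            lo = mid                    # bit = 1: keep the upper half
        else:
            hi = mid                    # bit = 0: keep the lower half
    return lo
```

    Because the range halves on every bit, after eight comparisons exactly one value remains, and it occurs with its target probability.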

  2. The global impact distribution of Near-Earth objects

    NASA Astrophysics Data System (ADS)

    Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.

    2016-02-01

    Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distribution of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), was calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases, objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking the impact probabilities into account introduced significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and thus represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.

  3. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
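
    In one dimension the construction reduces to the classical inverse-transform method: x = F⁻¹(U) has distribution F when U is uniform on (0,1). A minimal Python sketch, assuming the quantile function F⁻¹ is available in closed form:

```python
import math
import random

def inverse_transform(quantile, n, rng=random.random):
    """Simulate n draws of a random variable with quantile function
    `quantile` (the inverse CDF F^{-1}) from uniform(0,1) inputs --
    the one-dimensional case of the recursive construction."""
    return [quantile(rng()) for _ in range(n)]

# Example: exponential with rate 2, so F(x) = 1 - exp(-2x) and
# F^{-1}(u) = -ln(1 - u) / 2; the sample mean should approach 1/2.
samples = inverse_transform(lambda u: -math.log(1.0 - u) / 2.0, 100_000)
```

    In n dimensions the same idea is applied recursively, each f_k inverting the conditional distribution of x_k given x_1, ..., x_{k-1}.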

  4. Impact of temporal probability in 4D dose calculation for lung tumors.

    PubMed

    Rouabhi, Ouided; Ma, Mingyu; Bayouth, John; Xia, Junyi

    2015-11-08

    The purpose of this study was to evaluate the dosimetric uncertainty in 4D dose calculation using three temporal probability distributions: uniform distribution, sinusoidal distribution, and patient-specific distribution derived from the patient respiratory trace. Temporal probability, defined as the fraction of time a patient spends in each respiratory amplitude, was evaluated in nine lung cancer patients. Four-dimensional computed tomography (4D CT), along with deformable image registration, was used to compute 4D dose incorporating the patient's respiratory motion. First, the dose of each of 10 phase CTs was computed using the same planning parameters as those used in 3D treatment planning based on the breath-hold CT. Next, deformable image registration was used to deform the dose of each phase CT to the breath-hold CT using the deformation map between the phase CT and the breath-hold CT. Finally, the 4D dose was computed by summing the deformed phase doses using their corresponding temporal probabilities. In this study, 4D dose calculated from the patient-specific temporal probability distribution was used as the ground truth. The dosimetric evaluation metrics included: 1) 3D gamma analysis, 2) mean tumor dose (MTD), 3) mean lung dose (MLD), and 4) lung V20. For seven out of nine patients, both uniform and sinusoidal temporal probability dose distributions were found to have an average gamma passing rate > 95% for both the lung and PTV regions. Compared with 4D dose calculated using the patient respiratory trace, doses using uniform and sinusoidal distribution showed a percentage difference on average of -0.1% ± 0.6% and -0.2% ± 0.4% in MTD, -0.2% ± 1.9% and -0.2% ± 1.3% in MLD, 0.09% ± 2.8% and -0.07% ± 1.8% in lung V20, -0.1% ± 2.0% and 0.08% ± 1.34% in lung V10, 0.47% ± 1.8% and 0.19% ± 1.3% in lung V5, respectively. We concluded that 4D dose computed using either a uniform or sinusoidal temporal probability distribution can approximate 4D dose computed using the patient-specific respiratory trace.
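
    The final summation step is a probability-weighted average of the deformed phase doses. A toy sketch with hypothetical numbers (the phase doses are invented for illustration, and the "sinusoidal" weighting is a simple cosine-shaped stand-in, not the study's exact model):

```python
import numpy as np

# Hypothetical dose (Gy) to one voxel in each of the 10 respiratory-phase
# CTs, already deformed onto the breath-hold CT -- illustrative only.
phase_dose = np.array([2.0, 2.1, 2.3, 2.4, 2.5, 2.5, 2.4, 2.2, 2.1, 2.0])

# Temporal probabilities: fraction of the breathing cycle in each phase.
uniform_p = np.full(10, 0.1)
t = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
sinus_p = 1.0 + np.cos(t)        # cosine-shaped dwell-time weighting
sinus_p /= sinus_p.sum()         # normalize so the weights sum to 1

# 4D dose at the voxel = temporal-probability-weighted sum of phase doses.
dose_uniform = float(phase_dose @ uniform_p)
dose_sinus = float(phase_dose @ sinus_p)
```

    With uniform weights the 4D dose is just the mean of the phase doses; a patient-specific trace replaces the weights with measured dwell fractions.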

  5. Misinterpretation of statistical distance in security of quantum key distribution shown by simulation

    NASA Astrophysics Data System (ADS)

    Iwakoshi, Takehisa; Hirota, Osamu

    2014-10-01

    This study tests an interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both the key uniformity required in the context of universal composability and the operational meaning of the failure probability of the key extraction. However, this proposal had not been concretely verified for many years, while H. P. Yuen and O. Hirota have cast doubt on the interpretation since 2009. To test this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory once a quantum measurement is performed, and compared it with the failure probability to determine whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why trace distance is not suitable to guarantee security in QKD from the viewpoint of quantum binary decision theory.
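
    The classical quantity being evaluated here is the statistical (total variation) distance between the empirical distribution of the generated numbers and the ideal uniform distribution, the classical counterpart of the trace distance after measurement. A small illustrative sketch, not the authors' code:

```python
from collections import Counter

def statistical_distance(samples, n_bits):
    """Total variation distance between the empirical distribution of
    `samples` (integers in [0, 2**n_bits)) and the ideal uniform
    distribution over that range."""
    n = 2 ** n_bits
    counts = Counter(samples)
    emp = [counts.get(k, 0) / len(samples) for k in range(n)]
    # TV distance: half the L1 distance between the two distributions.
    return 0.5 * sum(abs(p - 1.0 / n) for p in emp)
```

    A perfectly balanced sample gives distance 0; a generator stuck on one value gives distance close to 1.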

  6. Continuous-variable quantum key distribution in uniform fast-fading channels

    NASA Astrophysics Data System (ADS)

    Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano

    2018-03-01

    We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.

  7. Goodness of fit of probability distributions for sightings as species approach extinction.

    PubMed

    Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael

    2009-04-01

    Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
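
    A PPCC test of the kind used here correlates the ordered sample with the quantiles of the candidate distribution; values near 1 indicate good fit. A minimal sketch for the uniform case, with plotting positions (i - 0.5)/n as an assumed convention (the paper's exact positions may differ):

```python
import math

def ppcc_uniform(data):
    """Probability plot correlation coefficient against a uniform
    distribution: Pearson correlation between the ordered sample and
    uniform plotting positions p_i = (i - 0.5)/n."""
    x = sorted(data)
    n = len(x)
    q = [(i + 0.5) / n for i in range(n)]      # uniform quantiles
    mx, mq = sum(x) / n, sum(q) / n
    cov = sum((a - mx) * (b - mq) for a, b in zip(x, q))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return cov / (sx * sq)
```

    Evenly spaced sightings correlate perfectly with uniform quantiles; clustered sightings pull the coefficient below the critical value of the hypothesis test.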

  8. The Unevenly Distributed Nearest Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Bihain, Gabriel; Scholz, Ralf-Dieter

    2016-08-01

    To address the questions of how many brown dwarfs there are in the Milky Way, how these objects relate to star formation, and whether the brown dwarf formation rate was different in the past, the star-to-brown dwarf number ratio can be considered. While main sequence stars are well known components of the solar neighborhood, lower mass, substellar objects increasingly add to the census of the nearest objects. The sky projection of the known objects at <6.5 pc shows that stars present a uniform distribution and brown dwarfs a non-uniform distribution, with about four times more brown dwarfs behind than ahead of the Sun relative to the direction of rotation of the Galaxy. Assuming that substellar objects are distributed uniformly, their observed configuration has a probability of 0.1%. The helio- and geocentricity of the configuration suggests that it probably results from an observational bias, which, if compensated for by future discoveries, would bring the star-to-brown dwarf ratio into agreement with the average ratio found in star forming regions.

  9. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.

  10. Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.

    2018-05-01

    Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability that the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as to smooth ones, with some technical differences in the evaluation of the integrals and the analytical arguments.

  11. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
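
    For Onemax the fitness of a mutated string decomposes into independent binomial counts of flipped ones and flipped zeros, so the exact fitness distribution, a polynomial in p as the paper proves, can be tabulated directly. A sketch under that standard decomposition (not the paper's Krawtchouk-polynomial derivation):

```python
from math import comb

def onemax_fitness_pmf(n, k, p):
    """Exact pmf of the Onemax fitness after uniform bit-flip mutation
    with per-bit flip probability p, starting from a length-n string
    with k ones. New fitness = k - (ones flipped) + (zeros flipped)."""
    def binom_pmf(m, j):
        return comb(m, j) * p**j * (1 - p)**(m - j)
    pmf = {}
    for down in range(k + 1):            # ones flipped to zero ~ Bin(k, p)
        for up in range(n - k + 1):      # zeros flipped to one ~ Bin(n-k, p)
            f = k - down + up
            pmf[f] = pmf.get(f, 0.0) + binom_pmf(k, down) * binom_pmf(n - k, up)
    return pmf
```

    Each entry pmf[f] is a polynomial in p of degree at most n, matching the paper's structural result for this linear problem.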

  12. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  13. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE PAGES

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.

  14. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…

  15. How to model a negligible probability under the WTO sanitary and phytosanitary agreement?

    PubMed

    Powell, Mark R

    2013-06-01

    Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
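
    The expected values of the two candidate models can be checked in a few lines, using the endpoints discussed in the abstract:

```python
# Candidate models for a "negligible" probability on [0, 1e-6]:
# uniform vs. triangular with mode 0 (min = mode = 0, max = 1e-6).
a, b = 0.0, 1e-6

uniform_mean = (a + b) / 2            # midpoint: 5.0e-7, as the Panel observed
triangular_mean = (a + a + b) / 3     # (min + mode + max) / 3 = 3.33e-7

# The "bias" between the two models is only a factor of 1.5.
bias_ratio = uniform_mean / triangular_mean
```

    The mean of a triangular distribution is the average of its minimum, mode, and maximum, which is what makes the Panel's proposed correction a modest one.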

  16. Estimation of distribution overlap of urn models.

    PubMed

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in [Formula: see text] draws from another distribution. We show our estimator of dissimilarity to be a [Formula: see text]-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of [Formula: see text]. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over [Formula: see text], we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.

  17. A Search Model for Imperfectly Detected Targets

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert

    2012-01-01

    Under the assumptions that 1) the search region can be divided up into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and the kurtosis of the distribution as a function of the two parameters. The model was fit to data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
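
    The two-component model can be simulated directly and its mean checked against the closed form E = N(1-P)/P + (N+1)/2. This is a sketch of the model, not the authors' code; the geometric component here counts failed full sweeps, starting at zero:

```python
import random

def searches_until_detect(N, P, rng=random):
    """One trial of the search model: the target sits at a uniformly
    random position in the sweep order; each failed full sweep costs N
    searches, and detection at the target's region succeeds with
    probability P."""
    u = rng.randrange(1, N + 1)          # uniform component, 1..N
    failed_sweeps = 0
    while rng.random() >= P:             # geometric number of failed sweeps
        failed_sweeps += 1
    return N * failed_sweeps + u

def mean_searches(N, P):
    """Analytic mean of the sum: N(1-P)/P for the geometric part
    plus (N+1)/2 for the uniform part."""
    return N * (1 - P) / P + (N + 1) / 2
```

    The higher moments follow the same way, since the two components are independent.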

  18. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141

  19. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We strengthen previous results on the convergence of quantum diffusion in a random band matrix model, from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
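
    A minimal construction of such a band matrix in d = 1, using a real symmetric matrix as a stand-in for the Hermitian case and entries uniform on [-1, 1] as an illustrative choice of the uniform distribution:

```python
import numpy as np

def random_band_matrix(n, W, rng=np.random.default_rng(0)):
    """n x n real symmetric band matrix: H[x, y] is an independent
    uniform(-1, 1) variable when |x - y| < W, and zero otherwise."""
    H = np.zeros((n, n))
    for x in range(n):
        # y ranges over x..x+W-1, so |x - y| < W; symmetry fills the rest.
        for y in range(x, min(n, x + W)):
            H[x, y] = H[y, x] = rng.uniform(-1.0, 1.0)
    return H
```

    Entries outside the band are exactly zero, and the matrix equals its transpose by construction.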

  20. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
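
    The triangular variate, for example, can be generated from a single uniform draw by inverting its piecewise-quadratic CDF. A Python re-sketch of the idea (RANVAR itself is a BASIC program whose internals are not shown in the record):

```python
import math
import random

def triangular_variate(a, c, b, u=None):
    """Triangular(min a, mode c, max b) by inverse transform: invert
    the piecewise-quadratic CDF at a uniform(0,1) draw u."""
    if u is None:
        u = random.random()
    fc = (c - a) / (b - a)            # CDF value at the mode
    if u < fc:                        # rising branch, left of the mode
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1 - u) * (b - a) * (b - c))
```

    The other closed-form variates (exponential, Pascal, and so on) follow the same inverse-transform pattern; normal and Poisson typically need different techniques.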

  1. Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks

    DTIC Science & Technology

    2006-09-01

    time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until...expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical...random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by

  2. Investigation of Dielectric Breakdown Characteristics for Double-break Vacuum Interrupter and Dielectric Breakdown Probability Distribution in Vacuum Interrupter

    NASA Astrophysics Data System (ADS)

    Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi

    To investigate the reliability of vacuum-insulated equipment, a study was carried out to clarify breakdown probability distributions in a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage that permits a zero breakdown probability. The location parameter obtained from the Weibull plot depends on electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length. This indicates that the scatter of breakdown voltage was increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias in the interrupters' voltage sharing is taken into account.
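
    A three-parameter Weibull of this kind has CDF F(V) = 1 - exp(-((V - V0)/η)^β) for V > V0 and zero below, so the location parameter V0 is exactly the zero-breakdown voltage. A small sketch with illustrative (not measured) parameter values:

```python
import math

def weibull_breakdown_prob(V, V0, eta, beta):
    """Three-parameter Weibull CDF: breakdown probability at voltage V,
    with location V0 (zero-breakdown voltage), scale eta, and shape
    beta (the abstract reports beta of 10-14 for the vacuum gap)."""
    if V <= V0:
        return 0.0
    return 1.0 - math.exp(-((V - V0) / eta) ** beta)
```

    A large shape parameter makes the CDF rise very steeply above V0, which is why the breakdown voltage scatter is small before no-load switching and larger (beta of 6∼8.5) after it.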

  3. On the inequivalence of the CH and CHSH inequalities due to finite statistics

    NASA Astrophysics Data System (ADS)

    Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.

    2017-06-01

    Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.

  4. A Non-Parametric Probability Density Estimator and Some Applications.

    DTIC Science & Technology

    1984-05-01

    distributions, which are assumed to be representative of platykurtic, mesokurtic, and leptokurtic distributions in general. The dissertation is... platykurtic distributions. Consider, for example, the uniform distribution shown in Figure 4 (Sensitivity to Support Estimation). The...results of the density function comparisons indicate that the new estimator is clearly superior for platykurtic distributions, equal to the best

  5. Facility optimization to improve activation rate distributions during IVNAA.

    PubMed

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-05-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining a uniform activation probability, with a CV value of about 10%, and to change the role of the collimator so as to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, this was not an appropriate choice because it increased the secondary photon flux received by the detectors. Our final calculations indicated that using two paraffin slabs 3 cm in thickness as a pre-moderator, in the presence of 2 cm of Bi on the collimator, achieves a satisfactory distribution of the activation rate in the body.

  6. Development of a methodology to evaluate material accountability in pyroprocess

    NASA Astrophysics Data System (ADS)

    Woo, Seungmin

    This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and burnup. For improved accuracy, the simulations use short burnups step (25 days or less), Xe-equilibrium treatment (to avoid oscillations over burnup steps), axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are built to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Combined cross sections and time also generate axial non-uniformity, as the exponential term in the analytical solution consists of the neutron flux, cross section and time. The axial concentration distribution for a nuclide having the small cross section gets steeper than that for another nuclide having the great cross section because the axial flux is weighted by the cross section in the exponential term in the analytical solution. Similarly, the non-uniformity becomes flatter as increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes and decoupling of the results between the axial distributions and the predetermined representative radial distributions by matching the axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies, the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles, is obtained. These data are appropriately modified to depict processing for materials in the head-end process of pyroprocess that is chopping, voloxidation and granulation. 
The expectation and standard deviation of the Pu-to-244Cm ratio obtained by single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. The uncertainty propagation through the key pyroprocess is then conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipt minus the shipment of a process. A further random variable, LOPu, defined as the original Pu mass minus the Pu mass after a missing scenario, is used to evaluate the non-detection probability at each Key Measurement Point (KMP). The number of assemblies for which LOPu reaches 8 kg is considered in this calculation. The probability of detecting the 8 kg LOPu is evaluated with respect to granule and powder size using event tree analysis and hypothesis testing. There are cases in which this detection probability falls below 95%. To enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key pyroprocess; under this model, the probabilities of detection for all spent fuel types exceed 99%. Furthermore, the probability of detection increases significantly when larger granule samples are used to evaluate the Pu-to-244Cm ratio before the key pyroprocess. Based on these observations, although Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm ratio method is applied, this can be surmounted by decreasing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.)
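The Geary-Hinkley transformation named above can be sketched for the independent-normal case. A minimal Python illustration, with purely illustrative means and standard deviations (not the study's Pu or 244Cm data):

```python
from math import erf, sqrt

def ratio_cdf_geary_hinkley(t, mu_x, sig_x, mu_y, sig_y):
    """Approximate P(X/Y <= t) for independent normals X and Y via the
    Geary-Hinkley transformation; valid when the denominator Y is
    almost surely positive (mu_y >> sig_y)."""
    # X - t*Y is normal, so {X/Y <= t} reduces to a standard-normal CDF.
    z = (mu_y * t - mu_x) / sqrt(sig_x ** 2 + t ** 2 * sig_y ** 2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Illustrative values only: X ~ N(2.0, 0.1^2), Y ~ N(1.0, 0.02^2),
# so the ratio is centered near t = 2.0 and the CDF there is ~0.5.
p_at_mean_ratio = ratio_cdf_geary_hinkley(2.0, 2.0, 0.1, 1.0, 0.02)
```

The transform converts a ratio-of-normals probability, which has no simple closed form, into an ordinary Gaussian tail evaluation.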

  7. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, the gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
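The closing remark — using the uniform generator to produce numbers from other distributions — is the standard inverse-CDF transform. A minimal Python sketch (the report's routines are Fortran) for the Weibull distribution listed above:

```python
import random
from math import log

def weibull_from_uniform(u, shape_k, scale_lam):
    """Inverse-CDF transform: maps a Uniform(0,1) draw u to a
    Weibull(shape_k, scale_lam) variate via F^{-1}(u)."""
    return scale_lam * (-log(1.0 - u)) ** (1.0 / shape_k)

random.seed(42)
samples = [weibull_from_uniform(random.random(), 2.0, 1.0)
           for _ in range(50_000)]
# For k = 2, lambda = 1 the mean is Gamma(1.5) = sqrt(pi)/2 ~ 0.886.
mean_est = sum(samples) / len(samples)
```

The same pattern works for any distribution whose CDF can be inverted in closed form; distributions without an invertible CDF need table lookup or rejection methods instead.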

  8. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, the gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  9. A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis

    DTIC Science & Technology

    2012-01-01

probability distribution for the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre for uniformly...parameters and windfields will drive our simulations. We will use uncertainty quantification methodology - polynomial chaos quadrature in combination with data integration to complete the DDDAS loop.

  10. The current impact flux on Mars and its seasonal variation

    NASA Astrophysics Data System (ADS)

    JeongAhn, Youngmin; Malhotra, Renu

    2015-12-01

We calculate the present-day impact flux on Mars and its variation over the martian year, using the current data on the orbital distribution of known Mars-crossing minor planets. We adapt the Öpik-Wetherill formulation for calculating collision probabilities, paying careful attention to the non-uniform distribution of the perihelion longitude and the argument of perihelion owing to secular planetary perturbations. We find that, at the current epoch, the Mars crossers have an axial distribution of the argument of perihelion, and the mean direction of their eccentricity vectors is nearly aligned with Mars' eccentricity vector. These previously neglected angular non-uniformities have the effect of depressing the mean annual impact flux by a factor of about 2 compared to the estimate based on a uniform random distribution of the angular elements of Mars-crossers; the amplitude of the seasonal variation of the impact flux is likewise depressed by a factor of about 4-5. We estimate that the flux of large impactors (of absolute magnitude H < 16) within ±30° of Mars' aphelion is about three times larger than when the planet is near perihelion. Extrapolation of our results to a model population of meter-size Mars-crossers shows that if these small impactors have a uniform distribution of their angular elements, then their aphelion-to-perihelion impact flux ratio would be 11-15, but if they track the orbital distribution of the large impactors, including their non-uniform angular elements, then this ratio would be about 3. Comparison of our results with the current dataset of fresh impact craters on Mars (detected with Mars-orbiting spacecraft) appears to rule out the uniform distribution of angular elements.

  11. Study on probability distributions for evolution in modified extremal optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian

    2010-05-01

It is widely believed that the power law is a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, with applications to NP-hard problems such as graph partitioning, graph coloring, and spin glasses. In this study, we find that the exponential distributions or hybrid ones (e.g., power laws with exponential cutoff) popularly used in network science can replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO, and SOA, in experiments on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.

  12. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of the steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes.
This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
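The fitting step described above — histograms fit to built-in distributions, outputting scale (sigma) and location (mu) — can be sketched with a maximum-likelihood log-normal fit. The memo's workflow is MATLAB; this Python sketch uses synthetic samples, not CPAS flight data:

```python
import math
import random

random.seed(0)
# Hypothetical inflation-parameter samples (e.g., canopy fill distance);
# the real CPAS values live in the model memo and are not reproduced here.
data = [random.lognormvariate(1.0, 0.25) for _ in range(400)]

# Maximum-likelihood log-normal fit: the location mu and scale sigma are
# simply the mean and standard deviation of log(data).
logs = [math.log(x) for x in data]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((v - mu_hat) ** 2 for v in logs) / len(logs))
```

With enough samples the fitted (mu, sigma) recover the generating parameters, which is the sense in which a statistical fit improves on a uniform dispersion bound.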

  13. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    NASA Astrophysics Data System (ADS)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

There is insufficient research relating to offshore wind farm site selection in China, and the current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform, and the neglect of the value of decision makers' (DMs') common opinion in evaluating the criteria information. Secondly, differences in DMs' utility functions have failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  14. Probability and the changing shape of response distributions for orientation.

    PubMed

    Anderson, Britt

    2014-11-18

    Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.

  15. Probability distributions for Markov chain based quantum walks

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite-state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesàro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite-state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that this quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
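The symmetric case has a simple classical analogue that can be checked numerically: a symmetric transition matrix is doubly stochastic, so the Cesàro-averaged occupation distribution tends to uniform. A sketch with a toy 3-state chain (the matrix below is hypothetical, not from the paper):

```python
# Symmetric (hence doubly stochastic) toy transition matrix.
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

def step(p, P):
    """One step of the chain: row vector p times matrix P."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

p = [1.0, 0.0, 0.0]           # start concentrated in state 0
T = 200
cesaro = [0.0, 0.0, 0.0]      # Cesaro (time-averaged) distribution
for _ in range(T):
    p = step(p, P)
    cesaro = [c + q / T for c, q in zip(cesaro, p)]
```

After 200 steps the time average is uniform to well within 1%, matching the uniform-limit statement for the symmetric case.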

  16. Scale-free behavior of networks with the copresence of preferential and uniform attachment rules

    NASA Astrophysics Data System (ADS)

    Pachon, Angelica; Sacerdote, Laura; Yang, Shuyi

    2018-05-01

Complex networks in different areas exhibit degree distributions with a heavy upper tail. A preferential attachment mechanism in a growth process produces a graph with this feature. We herein investigate a variant of the simple preferential attachment model, whose modifications are interesting for two main reasons: to analyze more realistic models and to study the robustness of the scale-free behavior of the degree distribution. We introduce and study a model which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1 - p) that stresses the rich-get-richer effect, and a uniform choice (with probability p) among the most recent nodes, i.e., the nodes belonging to a window of size w to the left of the last-born node. The latter reflects a tendency to select one of the most recently added nodes when no information is available. The recent nodes can be either a fixed number or a proportion (αn) of the total number of existing nodes. In the first case, we prove that this model exhibits an asymptotically power-law degree distribution; the same result is then illustrated through simulations in the second case. When the window of recent nodes has a constant size, we prove that the presence of the uniform rule delays the time from which the asymptotic regime starts to hold. The mean number of nodes of degree k and the asymptotic degree distribution are also determined analytically. Finally, a sensitivity analysis on the parameters of the model is performed.
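The two attachment rules can be sketched in a short simulation. The function name and the parameter choices (p = 0.3, window w = 10) below are illustrative; the paper's results are analytical:

```python
import random

def grow_network(n, p, w, seed=1):
    """Grow an n-node network, one edge per new node: with probability
    1-p attach preferentially (proportional to degree); with
    probability p attach uniformly to one of the w most recent nodes."""
    random.seed(seed)
    deg = [1, 1]          # start from a single edge 0-1
    pool = [0, 1]         # node repeated deg-many times => degree-weighted draw
    for new in range(2, n):
        if random.random() < p:
            t = random.randrange(max(0, new - w), new)   # uniform, recent window
        else:
            t = random.choice(pool)                      # preferential
        deg.append(1)
        deg[t] += 1
        pool.extend([t, new])
    return deg

deg = grow_network(5000, p=0.3, w=10)
max_deg = max(deg)
```

Even with 30% uniform attachments, the preferential component still produces hubs whose degree far exceeds the mean of roughly 2, consistent with the heavy upper tail the paper proves.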

  17. Directed Random Markets: Connectivity Determines Money

    NASA Astrophysics Data System (ADS)

    Martínez-Martínez, Ismael; López-Ruiz, Ricardo

    2013-12-01

The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When a model with uniform savings in the exchanges is considered, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and find that these stationary probability distributions are robust: they are not affected by the topology of the underlying network. We then introduce a new family of interactions: random but directed exchanges. In this case the topology is found to be decisive, and the mean money per economic agent is related to the degree of the node representing the agent in the network. This relation is shown to be linear.

  18. Scaling in tournaments

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Redner, S.; Vazquez, F.

    2007-02-01

We study a stochastic process that mimics single-game elimination tournaments. In our model, the outcome of each match is stochastic: the weaker player wins with upset probability q <= 1/2, and the stronger player wins with probability 1 - q. The loser is eliminated. Extremal statistics of the initial distribution of player strengths governs the tournament outcome. For a uniform initial distribution of strengths, the rank of the winner, x*, decays algebraically with the number of players, N, as x* ~ N^(-β). Different decay exponents are found analytically for sequential dynamics, β_seq = 1 - 2q, and parallel dynamics, β_par = 1 + ln(1-q)/ln 2. The distribution of player strengths becomes self-similar in the long-time limit, with an algebraic tail. Our theory successfully describes the statistics of the US college basketball national championship tournament.
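The parallel-dynamics case is easy to check by Monte Carlo: even with a sizable upset probability, the winner's normalized rank stays small. N = 256 and q = 0.25 below are arbitrary illustrative choices:

```python
import random

def tournament_winner_rank(n=256, q=0.25):
    """One single-elimination tournament with upset probability q.
    Player strength is encoded by rank (0 = strongest of n);
    returns the winner's normalized rank in [0, 1)."""
    players = list(range(n))
    random.shuffle(players)              # random bracket seeding
    while len(players) > 1:
        survivors = []
        for a, b in zip(players[::2], players[1::2]):
            strong, weak = (a, b) if a < b else (b, a)
            survivors.append(weak if random.random() < q else strong)
        players = survivors
    return players[0] / n

random.seed(3)
ranks = [tournament_winner_rank() for _ in range(400)]
mean_rank = sum(ranks) / len(ranks)
```

The theory predicts x* ~ N^(-β_par) with β_par = 1 + ln(0.75)/ln 2 ≈ 0.58 for q = 0.25, i.e. a typical winner rank of a few percent for N = 256; the simulated mean rank sits in that small-but-nonzero range.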

  19. One-dimensional soil temperature assimilation experiment based on unscented particle filter and Common Land Model

    NASA Astrophysics Data System (ADS)

    Fu, Xiao Lei; Jin, Bao Ming; Jiang, Xiao Lei; Chen, Cheng

    2018-06-01

Data assimilation is an efficient way to improve simulation/prediction accuracy in many fields of geosciences, especially in meteorological and hydrological applications. This study takes the unscented particle filter (UPF) as an example and tests its performance under two probability distributions for the observation error (Gaussian and uniform) and two assimilation frequency experiments: (1) assimilating hourly in situ soil surface temperature, and (2) assimilating the original Moderate Resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature (LST) once per day. The numerical experiments show that the filter performs better as the assimilation frequency increases. In addition, the UPF is efficient for improving the accuracy of soil variable (e.g., soil temperature) simulation/prediction, though it is not sensitive to the assumed probability distribution of the observation error in soil temperature assimilation.

  20. On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

The paper considers the problem of selecting individuals from the current population in genetic algorithms for crossover, in order to find a solution with high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task, but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending order by their fitness values. That is, uniform probabilities are assigned to a group of the best individuals of the population, and zero probability to individuals with lower fitness, assuming that the probability distribution for choosing individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
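The rectangular selection distribution described above is straightforward to implement: sort by fitness, then draw uniformly from the top group. The function name, population, fitness, and cutoff k below are toy choices for illustration:

```python
import random

def rectangular_selection(population, fitness, k):
    """Select one individual with the 'rectangular' distribution:
    uniform probability over the k fittest individuals, zero
    probability for everyone else."""
    ranked = sorted(population, key=fitness, reverse=True)
    return random.choice(ranked[:k])

random.seed(5)
pop = list(range(100))        # toy individuals
fit = lambda x: x             # toy fitness: the value itself
picks = [rectangular_selection(pop, fit, k=10) for _ in range(1000)]
```

Every pick lands in the top-10 group (values 90-99 here) with equal probability, which is exactly the uniform-over-the-best / zero-for-the-rest shape the paper proves optimal under its independence assumption.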

  1. Modeling Periodic Adiabatic Shear Bands Evolution in a 304L Stainless Steel Thick-Walled Cylinder

    NASA Astrophysics Data System (ADS)

    Liu, Mingtao; Hu, Haibo; Fan, Cheng; Tang, Tiegang

    2015-06-01

The self-organization of multiple shear bands in a 304L stainless steel thick-walled cylinder (TWC) was numerically studied. The material's microstructure leads to a non-uniform distribution of local yield stress, which plays a key role in the formation of spontaneous shear localization. We introduced a probability factor satisfying a Gaussian distribution into the macroscopic constitutive relationship to describe the non-uniformity of the local yield stress. Using this probability factor, the initiation and propagation of multiple shear bands in the TWC were numerically replicated in our 2D FEM simulation. Experimental results in the literature indicate that the machined surface at the internal boundary of a 304L stainless steel cylinder provides a work-hardened layer (about 20 μm) whose microstructure differs significantly from that of the base material. The work-hardened layer leads to the phenomenon that most shear bands run in either the clockwise or the counterclockwise direction. In our simulation, periodic oriented perturbations were applied to describe the grain orientation in the work-hardened layer, and the spiral pattern of shear bands was successfully replicated.

  2. Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna

Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.

  3. Timing in a Variable Interval Procedure: Evidence for a Memory Singularity

    PubMed Central

    Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn

    2013-01-01

Rats were trained in either a 30s peak-interval procedure, or a 15–45s variable-interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak-shaped response functions centered around 30s, with the uniform group having an earlier and broader peak response function and the ramping group a later peak function, as compared to the single-duration group. The changes in these mean functions, as well as the statistics from single-trial analyses, are better captured by a model of timing in which memory is represented by a single, average delay to reinforcement than by one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783

  4. Diffusion of active chiral particles

    NASA Astrophysics Data System (ADS)

    Sevilla, Francisco J.

    2016-12-01

The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v ̂ at time t , and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and the time dependence of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the positions distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called "anomalous, yet Brownian" diffusion effect, in which particles follow a non-Gaussian distribution for the positions yet the mean-squared displacement is a linear function of time.

  5. Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Salimi, S.; Jafarizadeh, M. A.

    2009-06-01

In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on vertices in continuous-time classical and quantum random walks. In this recipe, the probability on the direct product graph is obtained by multiplying the probabilities on the corresponding sub-graphs, a method useful for determining walk probabilities on complicated graphs. Using this method, we calculate the probabilities of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also find that for the classical walk the stationary uniform distribution is reached as t → ∞, but that this is not always the case for the quantum walk.
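The multiplication recipe can be checked numerically in the simplest classical setting: when the product walk's generator is the sum of the factor Laplacians (the Cartesian-style product), the matrix exponential factorizes, so transition probabilities multiply. The example below uses two copies of K2 as factors, which is an illustrative assumption, not one of the paper's Cayley graphs:

```python
from math import exp

def k2_walk(t):
    """Continuous-time classical walk on a single edge (K2): the
    closed form of exp(-tL) applied to the start vertex."""
    return [(1 + exp(-2 * t)) / 2, (1 - exp(-2 * t)) / 2]

def expm_vec(L, p, t, terms=60):
    """Row vector p times exp(-tL) via a truncated Taylor series
    (adequate for small matrices and moderate t)."""
    n = len(L)
    out, term = p[:], p[:]
    for k in range(1, terms):
        term = [sum(term[i] * (-t) * L[i][j] for i in range(n)) / k
                for j in range(n)]
        out = [o + s for o, s in zip(out, term)]
    return out

# Product of K2 with K2: vertex (u, v) -> index 2u + v, and the
# generator is L2 (x) I + I (x) L2, the sum of the factor Laplacians.
L2 = [[1, -1], [-1, 1]]
L4 = [[L2[u1][u2] * (v1 == v2) + (u1 == u2) * L2[v1][v2]
       for u2 in range(2) for v2 in range(2)]
      for u1 in range(2) for v1 in range(2)]

t = 0.7
p4 = expm_vec(L4, [1.0, 0.0, 0.0, 0.0], t)        # walk on the product
p2 = k2_walk(t)                                    # walk on one factor
factored = [p2[u] * p2[v] for u in range(2) for v in range(2)]
```

`p4` and `factored` agree to machine precision, illustrating the probability-multiplication recipe for the classical walk started at a product vertex.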

  6. Spatial distribution of traffic in a cellular mobile data network

    NASA Astrophysics Data System (ADS)

    Linnartz, J. P. M. G.

    1987-02-01

Integral transforms of the probability density function of the received power are used to analyze the relation between the spatial distributions of offered and throughput packet traffic in a mobile radio network with Rayleigh-fading channels and ALOHA multiple access. A method to obtain the spatial distribution of throughput traffic from a prescribed spatial distribution of offered traffic is presented. Incoherent and coherent addition of interference signals is considered. The channel behavior for heavy traffic loads is studied. In both the incoherent and coherent cases, the spatial distribution of offered traffic required to ensure a prescribed, spatially uniform throughput is synthesized numerically.

  7. Bit-Wise Arithmetic Coding For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.

  8. A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximillan J.; Thatcher, Wayne R.

    2017-01-01

    Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.

  9. Probability in High Dimension

    DTIC Science & Technology

    2014-06-30

4.8 (Travelling salesman problem). Let X1, . . . , Xn be i.i.d. points that...are uniformly distributed in the unit square [0, 1]^2. We think of Xi as the location of city i. The goal of the travelling salesman problem is to find... • Probability in Banach spaces: probabilistic limit theorems for Banach-valued random variables, empirical processes, local

  10. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
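For the uniform-noise case discussed above, the maximum Pc of a 2AFC task can be checked by Monte Carlo. The article provides MATLAB code; this Python sketch and its parameters are illustrative only:

```python
import random

def mc_pc_2afc(d, n_trials=200_000, seed=11):
    """Monte Carlo maximum proportion correct for a two-interval 2AFC
    task with uniform observation noise: target ~ U(d, 1+d),
    foil ~ U(0, 1).  Choosing the interval with the larger observation
    attains the maximum Pc in this case."""
    random.seed(seed)
    correct = 0
    for _ in range(n_trials):
        target = d + random.random()
        foil = random.random()
        correct += target > foil
    return correct / n_trials

pc = mc_pc_2afc(0.5)   # closed form: 1 - (1 - d)**2 / 2 = 0.875 at d = 0.5
```

The simulated Pc matches the closed-form value 1 - (1-d)^2/2 to within sampling error, illustrating how a maximum-Pc formula can be validated for a non-Gaussian noise distribution.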

  11. Electron emission produced by photointeractions in a slab target

    NASA Technical Reports Server (NTRS)

    Thinger, B. E.; Dayton, J. A., Jr.

    1973-01-01

    The current density and energy spectrum of escaping electrons generated in a uniform plane slab target which is being irradiated by the gamma flux field of a nuclear reactor are calculated by using experimental gamma energy transfer coefficients, electron range and energy relations, and escape probability computations. The probability of escape and the average path length of escaping electrons are derived for an isotropic distribution of monoenergetic photons. The method of estimating the flux and energy distribution of electrons emerging from the surface is outlined, and a sample calculation is made for a 0.33-cm-thick tungsten target located next to the core of a nuclear reactor. The results are to be used as a guide in electron beam synthesis of reactor experiments.
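
    As a toy illustration of the escape-probability idea (not the paper's method, which uses experimental gamma energy-transfer coefficients and electron range-energy relations), consider electrons created isotropically at depth z in a slab and travelling in straight lines with range R: an electron escapes the irradiated face when it moves upward and z / cos(theta) <= R, giving an escape probability of (1 - z/R)/2. The depth and range below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_prob_mc(z, R, n=200_000):
    # Isotropic emission: cos(theta) is uniform on [-1, 1]. Count the
    # directions whose straight-line path to the surface, z / cos(theta),
    # does not exceed the residual range R.
    mu = rng.uniform(-1.0, 1.0, n)
    safe_mu = np.where(mu > 0, mu, 1.0)  # avoid division by non-positive mu
    return float(((mu > 0) & (z / safe_mu <= R)).mean())

p = escape_prob_mc(0.3, 1.0)  # analytic value: 0.5 * (1 - z/R) = 0.35
```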

  12. The concept of entropy in landscape evolution

    USGS Publications Warehouse

    Leopold, Luna Bergere; Langbein, Walter Basil

    1962-01-01

    The concept of entropy is expressed in terms of probability of various states. Entropy treats of the distribution of energy. The principle is introduced that the most probable condition exists when energy in a river system is as uniformly distributed as may be permitted by physical constraints. From these general considerations equations for the longitudinal profiles of rivers are derived that are mathematically comparable to those observed in the field. The most probable river profiles approach the condition in which the downstream rate of production of entropy per unit mass is constant. Hydraulic equations are insufficient to determine the velocity, depths, and slopes of rivers that are themselves authors of their own hydraulic geometries. A solution becomes possible by introducing the concept that the distribution of energy tends toward the most probable. This solution leads to a theoretical definition of the hydraulic geometry of river channels that agrees closely with field observations. The most probable state for certain physical systems can also be illustrated by random-walk models. Average longitudinal profiles and drainage networks were so derived and these have the properties implied by the theory. The drainage networks derived from random walks have some of the principal properties demonstrated by the Horton analysis; specifically, the logarithms of stream length and stream numbers are proportional to stream order.
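
    One toy version of the random-walk profile argument, under the hypothetical rule that at each downstream step the elevation drops one unit with probability proportional to the remaining elevation: the mean profile then decays geometrically, i.e. approximately exponentially, echoing the most probable profiles discussed above. All parameter values are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

h0, steps, walks = 50, 100, 50_000
h = np.full(walks, float(h0))
for _ in range(steps):
    # Drop one elevation unit with probability h / h0, i.e. proportional
    # to the remaining elevation; the mean profile decays geometrically.
    h -= rng.random(walks) < h / h0
mean_final = h.mean()  # exact expectation: h0 * (1 - 1/h0)**steps
```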

  13. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse 'temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).

  14. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d=1, 2, 4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given that the total walk length equals 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers depending solely on d. These endpoint distributions have a simple geometrical interpretation. For a two-step planar walk with q=1, this means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection onto the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4.
Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
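
    The geometrical interpretation quoted above for the two-step planar walk with q=1 can be checked numerically: the endpoint radius of a walk with a uniform length split (Dirichlet with q=1) and uniform step directions should follow the distribution of the planar projection of a point uniform on the 3D unit sphere, whose CDF is F(r) = 1 - sqrt(1 - r^2). A Monte Carlo sketch, with invented sample sizes and check points:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 400_000
u = rng.uniform(0.0, 1.0, n)          # length split (l1, l2) = (u, 1 - u)
phi = rng.uniform(0.0, 2 * np.pi, n)  # uniform angle between the two steps
# Endpoint distance via the law of cosines.
r_walk = np.sqrt(u**2 + (1 - u)**2 + 2 * u * (1 - u) * np.cos(phi))

# Compare the empirical CDF of the walk's endpoint radius with the
# analytic CDF of the sphere-to-disc projection, F(r) = 1 - sqrt(1 - r^2).
checks = {s: (float((r_walk <= s).mean()), 1 - np.sqrt(1 - s**2))
          for s in (0.3, 0.5, 0.8)}
```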

  15. 3D-modelling of radon-induced cellular radiobiological effects in bronchial airway bifurcations: direct versus bystander effects.

    PubMed

    Szőke, István; Farkas, Arpád; Balásházy, Imre; Hofmann, Werner; Madas, Balázs G; Szőke, Réka

    2012-06-01

    The primary objective of this paper was to investigate the distribution of radiation doses and the related biological responses in cells of a central airway bifurcation of the human lung of a hypothetical worker of the New Mexico uranium mines during approximately 12 hours of exposure to short-lived radon progenies. State-of-the-art computational modelling techniques were applied to simulate the relevant biophysical and biological processes in a central human airway bifurcation. The non-uniform deposition pattern of inhaled radon daughters caused a non-uniform distribution of energy deposition among cells, and of related cell inactivation and cell transformation probabilities. When damage propagation via bystander signalling was assessed, it produced more cell killing and cell transformation events than did direct effects. If bystander signalling was considered, variations of the average probabilities of cell killing and cell transformation were supra-linear over time. Our results are very sensitive to the radiobiological parameters, derived from in vitro experiments (e.g., range of bystander signalling), applied in this work and suggest that these parameters may not be directly applicable to realistic three-dimensional (3D) epithelium models.

  16. A numerical study of multiple adiabatic shear bands evolution in a 304LSS thick-walled cylinder

    NASA Astrophysics Data System (ADS)

    Liu, Mingtao; Hu, Haibo; Fan, Cheng; Tang, Tiegang

    2017-01-01

    The self-organization of multiple shear bands in a 304L stainless steel (304LSS) thick-walled cylinder (TWC) was numerically studied. The microstructure of the material leads to a non-uniform distribution of the local yield stress, which plays a key role in the formation of spontaneous shear localization. We introduced a probability factor satisfying a Gaussian distribution into the macroscopic constitutive relationship to describe the non-uniformity of the local yield stress. Using this probability factor, the initiation and propagation of multiple shear bands in the TWC were numerically replicated in our 2D FEM simulation. Experimental results in the literature indicate that the machined surface at the internal boundary of a 304L stainless steel cylinder produces a work-hardened layer (about 20-30 μm thick) whose microstructure differs significantly from that of the base material. The work-hardened layer explains why most shear bands propagate along a given direction, clockwise or counterclockwise. In our simulation, periodic single-direction spiral perturbations were applied to describe the grain orientation in the work-hardened layer, and the single-direction spiral pattern of shear bands was successfully replicated.
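
    A minimal sketch of the probability-factor idea: each element's local yield stress is drawn from a Gaussian around the nominal value. The nominal stress and coefficient of variation below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)

def perturbed_yield_stress(n_elements, sigma_y=310e6, cv=0.02):
    # Hypothetical parameters: per-element yield stress (Pa) drawn from a
    # Gaussian around the nominal value sigma_y with coefficient of
    # variation cv, mimicking microstructural non-uniformity.
    return sigma_y * (1.0 + cv * rng.standard_normal(n_elements))

stress = perturbed_yield_stress(10_000)
```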

  17. Synaptic convergence regulates synchronization-dependent spike transfer in feedforward neural networks.

    PubMed

    Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum

    2017-12-01

    Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer is affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the strength are given as Gaussian distributions as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. Then we examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function differed noticeably for each convergence condition. The modulation of the spike transfer function was largest in the UC model and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that were generated from convergent synapses in each model. Our results suggest that the structure of feedforward convergence is a crucial factor in correlation-dependent spike control and must therefore be taken into account to understand the mechanism of information transfer in the brain.
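
    The three wiring rules can be sketched as weight-sampling functions. All parameter values below (sigma, rmax, and the uniform connection probability of 0.5) are placeholders, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_weights(model, d, sigma=1.0, rmax=2.0, p_uniform=0.5):
    # Sample feedforward synaptic weights for presynaptic neurons at
    # distances d, under the three convergence rules described above.
    if model == "GG":    # Gaussian probability AND Gaussian strength
        p = np.exp(-d**2 / (2 * sigma**2))
        w = np.exp(-d**2 / (2 * sigma**2))
    elif model == "UC":  # uniform probability within rmax, constant strength
        p = p_uniform * (d <= rmax)
        w = np.ones_like(d)
    else:                # "UE": uniform probability, exponential decay
        p = p_uniform * (d <= rmax)
        w = np.exp(-d / sigma)
    connected = rng.random(d.size) < p
    return w * connected

d = np.linspace(0.0, 3.0, 1000)
w_uc = sample_weights("UC", d)
w_ue = sample_weights("UE", d)
```

The resulting weight distributions (binary for UC, graded for UE and GG) are what the analysis above identifies as the source of the differing synchronization-dependent modulation.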

  18. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
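
    The single-order-statistics idea can be illustrated: if a trial CDF is correct, transforming the sorted sample through it yields uniform order statistics, whose k-th value has mean k/(n+1) and Beta(k, n+1-k) variance, so scaled quantile residuals stay small; a wrong CDF inflates them. This sketch is only loosely modeled on the method described, with an invented scoring statistic (the maximum absolute scaled residual), not the paper's universal scoring function.

```python
import numpy as np
from math import erf, sqrt

def order_stat_score(sample, trial_cdf):
    # Under the true CDF, trial_cdf(sample) behaves as uniform order
    # statistics: the k-th has mean k/(n+1) and Beta(k, n+1-k) variance.
    u = np.sort([trial_cdf(x) for x in sample])
    n = u.size
    k = np.arange(1, n + 1)
    mean = k / (n + 1)
    var = k * (n + 1 - k) / ((n + 1) ** 2 * (n + 2))
    z = (u - mean) / np.sqrt(var)  # scaled quantile residuals
    return float(np.max(np.abs(z)))

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
phi = lambda t, mu: 0.5 * (1.0 + erf((t - mu) / sqrt(2.0)))  # Normal CDF
score_true = order_stat_score(x, lambda t: phi(t, 0.0))  # correct CDF
score_off = order_stat_score(x, lambda t: phi(t, 1.0))   # shifted CDF
```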

  19. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  20. The effects of small field dosimetry on the biological models used in evaluating IMRT dose distributions

    NASA Astrophysics Data System (ADS)

    Cardarelli, Gene A.

    The primary goal in radiation oncology is to deliver lethal radiation doses to tumors while minimizing dose to normal tissue. IMRT has the capability to increase the dose to the targets and decrease the dose to normal tissue, increasing local control, decreasing toxicity, and allowing for effective dose escalation. This advanced technology does, however, present complex dose distributions that are not easily verified. Furthermore, the dose inhomogeneity caused by the non-uniform dose distributions seen in IMRT treatments has motivated the development of biological models attempting to characterize the dose-volume effect in the response of organized tissues to radiation. Dosimetry of small fields can be quite challenging when measuring dose distributions for the high-energy X-ray beams used in IMRT. The proper modeling of these small field distributions is essential in reproducing accurate dose for IMRT. This evaluation was conducted to quantify the effects of small field dosimetry on IMRT plan dose distributions and on four biological model parameters. The four biological models evaluated were: (1) the generalized Equivalent Uniform Dose (gEUD), (2) the Tumor Control Probability (TCP), (3) the Normal Tissue Complication Probability (NTCP) and (4) the Probability of uncomplicated Tumor Control (P+). These models are used to estimate local control, survival, complications and uncomplicated tumor control. This investigation compares three distinct small field dose algorithms. Dose algorithms were created using film, small ion chamber, and a combination of ion chamber measurements and small field fitting parameters. Due to the nature of uncertainties in small field dosimetry and the dependence of biological models on dose-volume information, this examination quantifies the effects of small field dosimetry techniques on radiobiological models and recommends pathways to reduce the errors in using these models to evaluate IMRT dose distributions. 
This study demonstrates the importance of valid physical dose modeling prior to the use of biological modeling. The success of using biological function data, such as hypoxia, in clinical IMRT planning will greatly benefit from the results of this study.
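
    Of the four biological models evaluated, the generalized equivalent uniform dose has a simple closed form over dose-volume-histogram bins, gEUD = (sum_i v_i d_i^a)^(1/a). A minimal sketch; the volume-effect parameter a and the toy DVH below are illustrative, not taken from the study.

```python
import numpy as np

def gEUD(dose, volume, a):
    # Generalized equivalent uniform dose over DVH bins:
    # gEUD = (sum_i v_i * d_i**a) ** (1/a), with v_i fractional volumes.
    v = np.asarray(volume, float)
    v = v / v.sum()
    return float((v * np.asarray(dose, float) ** a).sum() ** (1.0 / a))

d = np.array([60.0, 70.0, 80.0])  # toy DVH: bin doses (Gy)...
v = np.array([0.2, 0.5, 0.3])     # ...and fractional volumes
mean_dose = gEUD(d, v, 1.0)       # a = 1 recovers the mean dose
serial_like = gEUD(d, v, 20.0)    # large a weights the hottest region
```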

  1. A probability distribution model of tooth pits for evaluating time-varying mesh stiffness of pitting gears

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Liu, Zongyao; Wang, Delong; Yang, Xiao; Liu, Huan; Lin, Jing

    2018-06-01

    Tooth damage often causes a reduction in gear mesh stiffness. Thus time-varying mesh stiffness (TVMS) can be treated as an indication of gear health conditions. This study is devoted to investigating the mesh stiffness variations of a pair of external spur gears with tooth pitting, and proposes a new model for describing tooth pitting based on probability distribution. In the model, considering the appearance and development process of tooth pitting, we model the pitting on the surface of spur gear teeth as a series of pits with a uniform distribution in the direction of tooth width and a normal distribution in the direction of tooth height. In addition, four pitting degrees, from no pitting to severe pitting, are modeled. Finally, the influences of tooth pitting on TVMS are analyzed in detail and the proposed model is validated by comparison with a finite element model. The comparison results show that the proposed model is effective for the TVMS evaluations of pitting gears.
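
    The pit-placement rule described, uniform across the tooth width and normal across the tooth height, can be sketched as follows. The face dimensions and the mean/spread fractions of the normal distribution are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_pit_centers(n_pits, width, height, mu_frac=0.5, sigma_frac=0.1):
    # Pit centers: uniform over the tooth width, normal over the tooth
    # height (clipped to the face). mu_frac and sigma_frac are placeholders.
    x = rng.uniform(0.0, width, n_pits)
    y = np.clip(rng.normal(mu_frac * height, sigma_frac * height, n_pits),
                0.0, height)
    return x, y

x, y = sample_pit_centers(5000, width=20.0, height=8.0)
```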

  2. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep postseismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.
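
    The modified loading function can be sketched as a uniform rate plus a decaying-exponential transient; the rate, amplitude, and relaxation time below are arbitrary illustrative values, not fitted parameters. With the transient included, the load reaches a given failure threshold sooner than under uniform loading alone.

```python
import numpy as np

def load(t, rate=1.0, amp=0.3, tau=2.0):
    # Uniform tectonic loading plus a decaying-exponential transient from
    # viscoelastic relaxation or deep postseismic slip (toy values).
    return rate * t + amp * (1.0 - np.exp(-t / tau))

def time_to_threshold(threshold, **kwargs):
    # First time on a fine grid at which the (monotonic) load reaches
    # the threshold.
    t = np.linspace(0.0, 100.0, 200001)
    return float(t[np.searchsorted(load(t, **kwargs), threshold)])

t_transient = time_to_threshold(10.0)         # with postseismic transient
t_uniform = time_to_threshold(10.0, amp=0.0)  # uniform loading only
```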

  3. Evaluation of an Ensemble Dispersion Calculation.

    NASA Astrophysics Data System (ADS)

    Draxler, Roland R.

    2003-02-01

    A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.
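
    The 27-member construction described above is just the Cartesian product of the three possible shifts in each of the three directions:

```python
from itertools import product

# 27 ensemble members: every combination of a -1/0/+1 gridpoint shift in
# each horizontal direction and a -250/0/+250 m shift in the vertical,
# each member assumed equally probable.
offsets = list(product((-1, 0, 1), (-1, 0, 1), (-250.0, 0.0, 250.0)))
member_probability = 1.0 / len(offsets)
```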

  4. A Monte Carlo study of the impact of the choice of rectum volume definition on estimates of equivalent uniform doses and the volume parameter

    NASA Astrophysics Data System (ADS)

    Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav

    2004-08-01

    Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid, organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only, in terms of equivalent uniform dose. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume definition gave the best estimates of the volume parameter, although there were distinct differences in the mean values obtained.

  5. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
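
    The linearity argument follows from Poisson hit statistics: with a mean of lambda traversals per nucleus, P(at least one hit) = 1 - e^(-lambda), which is approximately lambda for small lambda, while P(multiple hits) = 1 - e^(-lambda)(1 + lambda) is then negligible; a local deposition enhancement that raises lambda to order one or more makes multiple hits common. A sketch, with illustrative mean-hit values (the 100-fold enhancement factor is an example, not the paper's computed maximum):

```python
from math import exp

def p_hit(mean_hits):
    # Poisson hit statistics for a nucleus receiving `mean_hits`
    # alpha-particle traversals on average: P(>=1 hit) and P(>=2 hits).
    p_any = 1.0 - exp(-mean_hits)
    p_multi = 1.0 - exp(-mean_hits) * (1.0 + mean_hits)
    return p_any, p_multi

# Uniform activity at low dose: mean hits << 1, response nearly linear.
low_any, low_multi = p_hit(0.01)
# Hot spot with, say, a 100-fold local deposition enhancement:
hot_any, hot_multi = p_hit(1.0)
```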

  6. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    PubMed

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
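
    The paper's single-parameter exponential approximation is not reproduced here, but the Lyman probit model it emulates, evaluated at the equivalent uniform dose as described above, is NTCP = Phi((EUD - TD50)/(m * TD50)). A minimal sketch with illustrative organ parameters:

```python
from math import erf, sqrt

def lyman_ntcp(eud, td50, m):
    # Lyman probit model evaluated at the EUD:
    # NTCP = Phi((EUD - TD50) / (m * TD50)), Phi the standard normal CDF.
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# At EUD = TD50 the complication probability is 50% by construction.
ntcp_mid = lyman_ntcp(80.0, 80.0, 0.15)
```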

  7. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

    This paper presents a Bayesian approach for optimizing the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both the uniform and non-uniform probability of impact occurrence. PMID:28774064

  8. Maximizing the biological effect of proton dose delivered with scanned beams via inhomogeneous daily dose distributions

    PubMed Central

    Zeng, Chuan; Giantsoudi, Drosoula; Grassberger, Clemens; Goldberg, Saveli; Niemierko, Andrzej; Paganetti, Harald; Efstathiou, Jason A.; Trofimov, Alexei

    2013-01-01

    Purpose: Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. Methods: For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. 
An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. Results: IFD courses included equal number of fractions boosting either hemisphere, thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were the same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. Conclusions: In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane, thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled. PMID:23635256
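
    A back-of-the-envelope way to see why alternating boosted and reduced fractions can raise the biological effect at a fixed total physical dose is the linear-quadratic biologically effective dose. This is a simplification: the paper used an EUD-based model, and the alpha/beta value below is a hypothetical placeholder.

```python
def bed(fraction_doses, alpha_beta=1.5):
    # Linear-quadratic biologically effective dose for a fraction schedule:
    # BED = sum_i d_i * (1 + d_i / (alpha/beta)).
    return sum(d * (1.0 + d / alpha_beta) for d in fraction_doses)

uniform_course = bed([3.0] * 20)           # 3 Gy x 20, uniform every fraction
ifd_course = bed([4.0] * 10 + [2.0] * 10)  # alternating boost / reduced dose
```

Both schedules deliver 60 Gy of physical dose, yet the alternating schedule yields a higher BED because the boost fractions contribute quadratically.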

  9. Maximizing the biological effect of proton dose delivered with scanned beams via inhomogeneous daily dose distributions.

    PubMed

    Zeng, Chuan; Giantsoudi, Drosoula; Grassberger, Clemens; Goldberg, Saveli; Niemierko, Andrzej; Paganetti, Harald; Efstathiou, Jason A; Trofimov, Alexei

    2013-05-01

    Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. 
An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. IFD courses included an equal number of fractions boosting either hemisphere; thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to an estimated increase in TCP from 88% to 96%. The extent of the gain depended on the mixing factor, i.e., the relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of the dose to variations in the proton's range. Rectal and bladder EUD were the same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using a higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. In proton therapy delivered by pencil beam scanning, an improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. IFD distributions are characterized by a dose gradient located in proximity to the prostate's midplane; thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled.

  10. Linking of uniform random polygons in confined spaces

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.

    2007-03-01

In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of n vertices is at least 1 − O(1/√n). Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of m and n vertices respectively, is bounded below by 1 − O(1/√(mn)). In particular, the linking probability between two uniform random polygons, both of n vertices, is bounded below by 1 − O(1/n).

  11. Pseudochemotaxis in inhomogeneous active Brownian systems

    NASA Astrophysics Data System (ADS)

    Vuijk, Hidde D.; Sharma, Abhinav; Mondal, Debasish; Sommer, Jens-Uwe; Merlitz, Holger

    2018-04-01

We study the dynamical properties of confined, self-propelled Brownian particles in an inhomogeneous activity profile. Using Brownian dynamics simulations, we calculate the probability of reaching a fixed target and the mean first-passage time to the target for an active particle. We show that both of these quantities are strongly influenced by the inhomogeneous activity. When the activity is distributed such that the high-activity zone is located between the target and the starting location, the target-finding probability is increased and the passage time is decreased in comparison to a uniformly active system. Moreover, for a continuously distributed profile, the activity gradient results in a drift of the active particle up the gradient, bearing resemblance to chemotaxis. Integrating out the orientational degrees of freedom, we derive an approximate Fokker-Planck equation and show that the theoretical predictions are in very good agreement with the Brownian dynamics simulations.

  12. Designing Glass Panels for Economy and Reliability

    NASA Technical Reports Server (NTRS)

    Moore, D. M.

    1983-01-01

    Analytical method determines probability of failure of rectangular glass plates subjected to uniformly distributed loads such as those from wind, earthquake, snow, and deadweight. Developed as aid in design of protective glass covers for solar-cell arrays and solar collectors, method is also useful in estimating the reliability of large windows in buildings exposed to high winds and is adapted to nonlinear stress analysis of simply supported plates of any elastic material.

  13. Continuous-time quantum walks on star graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salimi, S.

    2009-06-15

In this paper, we investigate the continuous-time quantum walk on star graphs. It is shown, via a quantum central limit theorem, that the continuous-time quantum walk on the N-fold star power graph, which is invariant under the quantum component of the adjacency matrix, converges to the continuous-time quantum walk on the K{sub 2} graph (the complete graph with two vertices), and that the probability of observing the walk tends to the uniform distribution.

  14. An In-Depth Analysis of the Chung-Lu Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winlaw, M.; DeSterck, H.; Sanders, G.

    2015-10-28

In the classic Erdős–Rényi random graph model [5], each edge is chosen with uniform probability and the degree distribution is binomial, limiting the number of graphs that can be modeled using the Erdős–Rényi framework [10]. The Chung-Lu model [1, 2, 3] is an extension of the Erdős–Rényi model that allows for more general degree distributions. The probability of each edge is no longer uniform and is a function of a user-supplied degree sequence, which by design is the expected degree sequence of the model. This property makes it an easy model to work with theoretically, and since the Chung-Lu model is a special case of a random graph model with a given degree sequence, many of its properties are well known and have been studied extensively [2, 3, 13, 8, 9]. It is also an attractive null model for many real-world networks, particularly those with power-law degree distributions, and it is sometimes used as a benchmark for comparison with other graph generators despite some of its limitations [12, 11]. We know, for example, that the average clustering coefficient is too low relative to most real-world networks. As well, measures of affinity are also too low relative to most real-world networks of interest. However, despite these limitations, or perhaps because of them, the Chung-Lu model provides a basis for comparing new graph models.
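The edge probability described above can be sketched as follows; a naive O(n²) generator with the conventional min(1, ·) cap for heavy degree sequences (function names and the example sequence are ours, not from the report):

```python
import random

def chung_lu_edge_prob(w, i, j):
    # Chung-Lu edge probability given expected-degree sequence w:
    #   p_ij = min(1, w_i * w_j / sum(w))
    # With all w_i equal, this reduces to a uniform (Erdos-Renyi)
    # edge probability.
    return min(1.0, w[i] * w[j] / sum(w))

def chung_lu_graph(w, seed=0):
    # Sample each possible edge independently with probability p_ij.
    rng = random.Random(seed)
    n = len(w)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < chung_lu_edge_prob(w, i, j)}
```

By construction the expected degree of node i is approximately w_i whenever no p_ij hits the cap.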

  15. Acceptance Probability (P a) Analysis for Process Validation Lifecycle Stages.

    PubMed

    Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep

    2016-04-01

This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from the specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that the underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected AV range may be determined using the s_lo and s_hi values along with worst-case estimates of the mean. This approach permits a risk-based assessment of future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products. Quality attributes such as deliverable volume and assay per spray have stage-wise acceptance criteria that can be converted into an acceptance probability. Accepted statistical guidelines indicate processes with Cpk > 1.33 as performing well within statistical control and those with Cpk < 1.0 as "incapable" (1). A Cpk > 1.33 is associated with a centered process that will statistically produce less than 63 defective units per million. This is equivalent to an acceptance probability of >99.99%.
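The quoted correspondence between Cpk and defect rate follows from the normality assumption; a minimal sketch for a centered two-sided process (function names are ours):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def defects_per_million(cpk):
    # For a centered normal process, each spec limit sits 3*Cpk
    # standard deviations from the mean, so the two-sided defect
    # fraction is 2 * Phi(-3 * Cpk).
    return 2.0 * normal_cdf(-3.0 * cpk) * 1e6

def acceptance_probability(cpk):
    # Complement of the defect fraction.
    return 1.0 - 2.0 * normal_cdf(-3.0 * cpk)
```

At Cpk = 4/3 this gives roughly 63 defective units per million, matching the figure cited above.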

  16. Adapting radiotherapy to hypoxic tumours

    NASA Astrophysics Data System (ADS)

    Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag

    2006-10-01

    In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. 
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.

  17. Finite element probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacvarov, D.C.

    1981-01-01

A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The approach of applying the finite element method to probabilistic risk assessment is demonstrated to be very powerful, for two reasons. First, the finite element method is inherently suitable for the analysis of three-dimensional spaces where the parameters, such as the trivariate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three-dimensional probability spaces, thus yielding high accuracy in critical regions, such as the area of the low-probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem at a manageable low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electromagnetic transients program. Compared to other available methods, the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude more computationally efficient. The method is especially suited for the accurate assessment of rare, very low probability events.

  18. Geometric evolution of complex networks with degree correlations

    NASA Astrophysics Data System (ADS)

    Murphy, Charles; Allard, Antoine; Laurence, Edward; St-Onge, Guillaume; Dubé, Louis J.

    2018-03-01

We present a general class of geometric network growth mechanisms by homogeneous attachment in which the links created at a given time t are distributed homogeneously between a new node and the existing nodes selected uniformly. This is achieved by creating links between nodes uniformly distributed in a homogeneous metric space according to a Fermi-Dirac connection probability with inverse temperature β and general time-dependent chemical potential μ(t). The chemical potential limits the spatial extent of newly created links. Using a hidden variable framework, we obtain an analytical expression for the degree sequence and show that μ(t) can be fixed to yield any given degree distribution, including a scale-free degree distribution. Additionally, we find that depending on the order in which nodes appear in the network (its history), the degree-degree correlations can be tuned to be assortative or disassortative. The effect of the geometry on the structure is investigated through the average clustering coefficient ⟨c⟩. In the thermodynamic limit, we identify a phase transition between a random regime where ⟨c⟩ → 0 when β < βc and a geometric regime where ⟨c⟩ > 0 when β > βc.
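The Fermi-Dirac connection probability used in this growth mechanism can be sketched as follows (the β and μ values in the checks are illustrative, not fitted values from the paper):

```python
import math

def connection_probability(distance, mu, beta):
    # Fermi-Dirac form: p -> 1 well inside the chemical-potential
    # radius (distance << mu), p -> 0 well outside it, with the
    # sharpness of the transition set by the inverse temperature beta.
    return 1.0 / (1.0 + math.exp(beta * (distance - mu)))
```

In the β → ∞ limit this becomes a hard distance cutoff at μ, which is the geometric regime; small β blurs the cutoff toward a random graph.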

  19. Using spatial information about recurrence risk for robust optimization of dose-painting prescription functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.

Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control when compared to the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then, in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of a RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
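The construction and the push protocol can be sketched as below; n, r, and the seeds are illustrative, and the broadcast runs within the connected component of the initially informed node:

```python
import math
import random
from collections import deque

def random_geometric_graph(n, r, seed=0):
    # n nodes uniform in the unit square; edge if distance <= r.
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def component(adj, start):
    # BFS to find the connected component containing start.
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def push_broadcast(adj, start, seed=1):
    # Each round, every informed node pushes the message to one
    # neighbor chosen uniformly at random; returns the round count.
    rng = random.Random(seed)
    informed, target, rounds = {start}, component(adj, start), 0
    while informed != target:
        for u in list(informed):
            if adj[u]:
                informed.add(rng.choice(adj[u]))
        rounds += 1
    return rounds
```

Running this on RGGs above the connectivity threshold is one way to observe empirically the O(√n/r) scaling claimed in the record.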

  1. A new model of the lunar ejecta cloud

    NASA Astrophysics Data System (ADS)

    Christou, A. A.

    2014-04-01

Every airless body in the solar system is surrounded by a cloud of ejecta produced by the impact of interplanetary meteoroids on its surface [1]. Such "dust exospheres" have been observed around the Galilean satellites of Jupiter [2, 3]. The prospect of long-term robotic and human operations on the Moon by the US and other countries has rekindled interest in the subject [4]. This interest has culminated with the recent investigation of the Moon's dust exosphere by the LADEE spacecraft [5]. Here a model is presented of a ballistic, collisionless, steady-state population of ejecta launched vertically at randomly distributed times and velocities. Assuming a uniform distribution of launch times, I derive closed-form solutions for the probability density functions (pdfs) of the height distribution of particles and the distribution of their speeds in a rest frame, both at the surface and at altitude. The treatment is then extended to particle motion with respect to a moving platform such as an orbiting spacecraft. These expressions are compared with numerical simulations under lunar surface gravity where the underlying ejection speed distribution is (a) uniform, (b) a power law. I discuss the predictions of the model, its limitations, and how it can be validated against near-surface and orbital measurements.
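A Monte Carlo sketch of the steady-state setup (vertical launches at uniformly distributed times, uniform ejection speeds); the gravity constant is lunar, but the speed range is an illustrative choice, not a value from the paper:

```python
import random

G_MOON = 1.62   # m/s^2, lunar surface gravity
V_MAX = 100.0   # m/s, illustrative maximum ejection speed

def snapshot_heights(n_launches, seed=0):
    # Snapshot of a steady-state ballistic population: launch times
    # are uniform over a window at least as long as the longest
    # possible flight, and only particles still aloft are kept.
    rng = random.Random(seed)
    t_window = 2.0 * V_MAX / G_MOON      # longest possible flight time
    heights = []
    for _ in range(n_launches):
        v = rng.uniform(0.0, V_MAX)      # uniform ejection speed
        age = rng.uniform(0.0, t_window) # uniform launch times
        if age < 2.0 * v / G_MOON:       # still aloft at the snapshot
            heights.append(v * age - 0.5 * G_MOON * age * age)
    return heights
```

The resulting height sample is bounded by V_MAX²/(2g), and its histogram is the kind of empirical pdf the paper's closed-form solutions describe.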

  2. Uniform deposition of size-selected clusters using Lissajous scanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp; Hirata, Hirohito

    2016-05-15

Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt{sub n} (n = 7, 15, 20) clusters uniformly deposited on the Al{sub 2}O{sub 3}/NiAl(110) surface and demonstrated the importance of uniform deposition.

  3. On the Distribution of Free Path Lengths for the Periodic Lorentz Gas III

    NASA Astrophysics Data System (ADS)

    Caglioti, Emanuele; Golse, François

For r ∈ (0,1), let Z_r = {x ∈ R² | dist(x, Z²) > r/2} and define τ_r(x,v) = inf{t > 0 | x + tv ∈ ∂Z_r}. Let Φ_r(t) be the probability that τ_r(x,v) ≥ t for x and v uniformly distributed in Z_r and S¹, respectively. We prove in this paper the asymptotic behavior of Φ_r(t) as t → +∞. This result improves upon the bounds on Φ_r in Bourgain-Golse-Wennberg [Commun. Math. Phys. 190, 491-508 (1998)]. We also discuss the applications of this result in the context of kinetic theory.

  4. About an adaptively weighted Kaplan-Meier estimate.

    PubMed

    Plante, Jean-François

    2009-09-01

The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performance of the weighted Kaplan-Meier estimate on finite samples exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.
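The (unweighted) Kaplan-Meier product-limit estimate that the weighted version builds on can be sketched as follows (the toy data in the checks are ours):

```python
def kaplan_meier(times, events):
    # times: observed times; events: 1 = event observed, 0 = censored.
    # Returns [(t, S(t))] at each distinct event time, where S is the
    # product-limit estimate: S(t) = prod_{t_i <= t} (1 - d_i / n_i),
    # with d_i deaths among n_i subjects at risk just before t_i.
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s, curve, i = 1.0, [], 0
    while i < len(order):
        t, deaths, n_here = times[order[i]], 0, 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_here += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= n_here   # censored subjects leave the risk set too
    return curve
```

The weighted estimate of the record replaces the single-sample risk sets with adaptively weighted contributions from the m populations; this sketch shows only the base estimator.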

  5. Poincaré recurrence statistics as an indicator of chaos synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boev, Yaroslav I., E-mail: boev.yaroslav@gmail.com; Vadivasova, Tatiana E., E-mail: vadivasovate@yandex.ru; Anishchenko, Vadim S., E-mail: wadim@info.sgu.ru

The dynamics of the autonomous and non-autonomous Rössler system is studied using Poincaré recurrence time statistics. It is shown that the probability distribution density of Poincaré recurrences represents a set of equidistant peaks with a spacing equal to the oscillation period and an envelope that obeys an exponential distribution. The dimension of the spatially uniform Rössler attractor is estimated using Poincaré recurrence times. The mean Poincaré recurrence time in the non-autonomous Rössler system is locked by the external frequency, and this enables us to detect the effect of phase-frequency synchronization.

  6. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob; Englander, Arnold

    2014-01-01

Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by Englander) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness, where efficiency is finding better solutions in less time, and robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks originally developed in the field of statistical physics.
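A one-dimensional sketch of MBH with the perturbation drawn from a Cauchy distribution via inverse-CDF sampling; the objective, scale, bounds, and iteration counts are illustrative, not the flight-trajectory problem studied above:

```python
import math
import random

def f(x):
    # Illustrative two-basin objective; global minimum near x = -1.
    return (x * x - 1.0) ** 2 + 0.3 * x

def cauchy_step(rng, scale=0.5):
    # Inverse-CDF sample of a Cauchy variate: mostly small moves,
    # with occasional very large hops from the long tails.
    return scale * math.tan(math.pi * (rng.random() - 0.5))

def local_search(x, lr=0.01, steps=500, h=1e-6):
    # Crude numeric gradient descent standing in for the local solver.
    for _ in range(steps):
        x -= lr * (f(x + h) - f(x - h)) / (2.0 * h)
    return x

def mbh(x0, iters=200, seed=0):
    rng = random.Random(seed)
    best = local_search(x0)
    for _ in range(iters):
        hop = min(5.0, max(-5.0, best + cauchy_step(rng)))  # stay in bounds
        cand = local_search(hop)
        if f(cand) < f(best):   # monotonic: accept improvements only
            best = cand
    return best
```

Started in the wrong basin at x = 1, the heavy-tailed hops routinely cross the barrier near x ≈ 0, which a narrow uniform perturbation would rarely do; that difference is the point of the record.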

  7. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89% of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.

  8. Middle-high latitude N2O distributions related to the arctic vortex breakup

    NASA Astrophysics Data System (ADS)

    Zhou, L. B.; Zou, H.; Gao, Y. Q.

    2006-03-01

The relationship of N2O distributions to the Arctic vortex breakup is analyzed for the first time with a probability distribution function (PDF) analysis. The N2O concentration shows different distributions between the early and late vortex breakup years. In the early breakup years, the N2O concentration shows low values and large dispersions after the vortex breakup, which is related to the inhomogeneity of the vertical advection in the middle- and high-latitude lower stratosphere. The horizontal diffusion coefficient (Kyy) shows a larger value accordingly. In the late breakup years, the N2O concentration shows high values and more uniform distributions than in the early years after the vortex breakup, with smaller vertical advection and Kyy after the vortex breakup. It is found that the N2O distributions are largely affected by the Arctic vortex breakup time, but the dynamically defined vortex breakup time is not the only factor.

  9. Partial entrainment of gravel bars during floods

    USGS Publications Warehouse

    Konrad, Christopher P.; Booth, Derek B.; Burges, Stephen J.; Montgomery, David R.

    2002-01-01

    Spatial patterns of bed material entrainment by floods were documented at seven gravel bars using arrays of metal washers (bed tags) placed in the streambed. The observed patterns were used to test a general stochastic model that bed material entrainment is a spatially independent, random process where the probability of entrainment is uniform over a gravel bar and a function of the peak dimensionless shear stress τ0* of the flood. The fraction of tags missing from a gravel bar during a flood, or partial entrainment, had an approximately normal distribution with respect to τ0* with a mean value (50% of the tags entrained) of 0.085 and standard deviation of 0.022 (root‐mean‐square error of 0.09). Variation in partial entrainment for a given τ0* demonstrated the effects of flow conditioning on bed strength, with lower values of partial entrainment after intermediate magnitude floods (0.065 < τ0*< 0.08) than after higher magnitude floods. Although the probability of bed material entrainment was approximately uniform over a gravel bar during individual floods and independent from flood to flood, regions of preferential stability and instability emerged at some bars over the course of a wet season. Deviations from spatially uniform and independent bed material entrainment were most pronounced for reaches with varied flow and in consecutive floods with small to intermediate magnitudes.
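The fitted normal model for partial entrainment reported above can be sketched directly from its two parameters (the function name is ours):

```python
import math

MEAN_TAU = 0.085   # peak dimensionless shear stress at 50% entrainment
SD_TAU = 0.022     # fitted standard deviation

def partial_entrainment(tau_star):
    # Fraction of bed tags entrained during a flood with peak
    # dimensionless shear stress tau_star, under the fitted
    # normal-CDF model of the study.
    z = (tau_star - MEAN_TAU) / SD_TAU
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

This reproduces the reported behavior: 50% entrainment at τ0* = 0.085, with most of the transition completed within about two standard deviations of that value.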

  10. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.

    PubMed

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-08-31

To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency. To reduce the upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute the sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion shows the underlying reason for the high performance of the proposed approach.

  11. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks

    PubMed Central

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-01-01

    To collect important health information, WBAN applications typically sense data at a high frequency. However, the quality of the wireless link imposes an upper bound on the frequency at which sensed data can be uploaded. To reduce the upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within its upper bound. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under a limited upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach performs better in terms of data coverage and information quantity. The parameter analysis identifies optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach. PMID:27589758

  12. An assessment of PTV margin based on actual accumulated dose for prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Wen, Ning; Kumarasiri, Akila; Nurushev, Teamour; Burmeister, Jay; Xing, Lei; Liu, Dezhi; Glide-Hurst, Carri; Kim, Jinkoo; Zhong, Hualiang; Movsas, Benjamin; Chetty, Indrin J.

    2013-11-01

    The purpose of this work is to present the results of a margin reduction study involving dosimetric and radiobiologic assessment of cumulative dose distributions, computed using an image guided adaptive radiotherapy based framework. Eight prostate cancer patients, treated with 7-9, 6 MV, intensity modulated radiation therapy (IMRT) fields, were included in this study. The workflow consists of cone beam CT (CBCT) based localization, deformable image registration of the CBCT to simulation CT image datasets (SIM-CT), dose reconstruction and dose accumulation on the SIM-CT, and plan evaluation using radiobiological models. For each patient, three IMRT plans were generated with different margins applied to the CTV. The PTV margin for the original plan was 10 mm and 6 mm at the prostate/anterior rectal wall interface (10/6 mm) and was reduced to: (a) 5/3 mm, and (b) 3 mm uniformly. The average percent reductions in predicted tumor control probability (TCP) in the accumulated (actual) plans in comparison to the original plans over eight patients were 0.4%, 0.7% and 11.0% with 10/6 mm, 5/3 mm and 3 mm uniform margin respectively. The mean increase in predicted normal tissue complication probability (NTCP) for grades 2/3 rectal bleeding for the actual plans in comparison to the static plans with margins of 10/6, 5/3 and 3 mm uniformly was 3.5%, 2.8% and 2.4% respectively. For the actual dose distributions, predicted NTCP for late rectal bleeding was reduced by 3.6% on average when the margin was reduced from 10/6 mm to 5/3 mm, and further reduced by 1.0% on average when the margin was reduced to 3 mm. The average reduction in complication free tumor control probability (P+) in the actual plans in comparison to the original plans with margins of 10/6, 5/3 and 3 mm was 3.7%, 2.4% and 13.6% correspondingly. 
    The significant reduction of TCP and P+ in the actual plan with the 3 mm margin came from one outlier, suggesting that individualizing patient treatment plans through margin adaptation based on biological models might yield higher quality treatments.
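    The EUD-based evaluation used above can be illustrated with Niemierko's generalized EUD and a logistic TCP model; note that the TCD50 and γ50 defaults below are illustrative placeholders, not the study's fitted parameters:

```python
def gEUD(dose_bins, vol_fracs, a):
    """Generalized equivalent uniform dose: (sum_i v_i * D_i^a)^(1/a).

    dose_bins: dose levels (Gy) of the differential DVH; vol_fracs: fractional
    volumes summing to 1. Negative 'a' (tumors) requires strictly positive doses.
    """
    return sum(v * d ** a for d, v in zip(dose_bins, vol_fracs)) ** (1.0 / a)

def tcp(eud, tcd50=60.0, gamma50=2.0):
    """EUD-based logistic TCP model (illustrative parameter values)."""
    return 1.0 / (1.0 + (tcd50 / eud) ** (4.0 * gamma50))
```

For a uniform dose the gEUD reduces to that dose, and TCP passes through 50% at TCD50, as expected.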

  13. Crater Topography on Titan: Implications for Landscape Evolution

    NASA Technical Reports Server (NTRS)

    Neish, Catherine D.; Kirk, R.L.; Lorenz, R. D.; Bray, V. J.; Schenk, P.; Stiles, B. W.; Turtle, E.; Mitchell, K.; Hayes, A.

    2013-01-01

    We present a comprehensive review of available crater topography measurements for Saturn's moon Titan. In general, the depths of Titan's craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundreds of meters shallower than Ganymede's average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 +/- 0.0003 (for the largest crater studied, Menrva, D approximately 425 km) and 0.017 +/- 0.004 (for the smallest crater studied, Ksa, D approximately 39 km). When we evaluate the Anderson-Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan's craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close 'airless' analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan's surface, the only body in the outer Solar System with extensive surface-atmosphere exchange.
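    The paper tests uniformity of relative crater depths with the Anderson-Darling statistic; the same idea can be sketched with the simpler one-sample Kolmogorov-Smirnov statistic against U(0,1) (a stand-in for illustration, not the authors' test), with relative depth defined as in the abstract (0 = fresh, 1 = completely infilled):

```python
def relative_depth(depth, fresh_depth):
    """Relative infill: 0 = fresh (full depth), 1 = completely infilled."""
    return 1.0 - depth / fresh_depth

def ks_uniform(samples):
    """One-sample Kolmogorov-Smirnov statistic D against the U(0,1) CDF."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # Compare the empirical CDF step to the uniform CDF F(x) = x
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d
```

A small D for the observed relative depths is consistent with infilling that proceeds at a roughly constant rate.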

  14. A new variable interval schedule with constant hazard rate and finite time range.

    PubMed

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves a constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
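    One plausible instance of the linear rule is p(t) = t/(2T) (an assumed form for illustration; the published rule may differ in detail). Under it, reinforced trials have mean duration E[t²]/E[t] = 4T/3:

```python
import random

def vi_trial(T, rng):
    """One trial: duration t ~ U(0, 2T); reinforce with probability t / (2T)
    (assumed linear rule, longer trials are more likely to end reinforced)."""
    t = rng.uniform(0.0, 2.0 * T)
    reinforced = rng.random() < t / (2.0 * T)
    return t, reinforced

def mean_reinforced_duration(T, n=200_000, seed=7):
    """Monte Carlo estimate of the mean duration of reinforced trials."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n):
        t, r = vi_trial(T, rng)
        if r:
            total += t
            count += 1
    return total / count
```

With T = 30 s the reinforced trials average about 40 s, while all trials average 30 s, reflecting the linear weighting toward longer durations.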

  15. Generation of a Multivariate Distribution for Specified Univariate Marginals and Covariance Structure.

    DTIC Science & Technology

    1981-05-28

    [Mathematical excerpt garbled by OCR. The recoverable fragment applies the transformation-of-probability (change of variables) formula to a transformation T_f of the specified marginals and concludes that Y is uniformly distributed.]

  16. Maximizing the biological effect of proton dose delivered with scanned beams via inhomogeneous daily dose distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng Chuan; Giantsoudi, Drosoula; Grassberger, Clemens

    2013-05-15

    Purpose: Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. Methods: For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. 
An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. Results: IFD courses included an equal number of fractions boosting either hemisphere; thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were the same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using a higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. Conclusions: In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane; thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled.

  17. Between giant oscillations and uniform distribution of droplets: The role of varying lumen of channels in microfluidic networks.

    PubMed

    Cybulski, Olgierd; Jakiela, Slawomir; Garstecki, Piotr

    2015-12-01

    The simplest microfluidic network (a loop) comprises two parallel channels with a common inlet and a common outlet. Recent studies that assumed a constant cross section of the channels along their length have shown that the sequence of droplets entering the left (L) or right (R) arm of the loop can present either a uniform distribution of choices (e.g., RLRLRL...) or long sequences of repeated choices (RRR...LLL), with all the intermediate permutations being dynamically equivalent and virtually equally probable to be observed. We use experiments and computer simulations to show that even a small variation of the cross section along the channels completely shifts the dynamics either into a strong preference for highly grouped patterns (RRR...LLL) that generate system-size oscillations in flow, or, just the opposite, to patterns that distribute the droplets homogeneously between the arms of the loop. We also show the importance of noise in the process of self-organization of the spatiotemporal patterns of droplets. Our results provide guidelines for the rational design of systems that reproducibly produce either grouped or homogeneous sequences of droplets flowing in microfluidic networks.

  18. Topology for efficient information dissemination in ad-hoc networking

    NASA Technical Reports Server (NTRS)

    Jennings, E.; Okino, C. M.

    2002-01-01

    In this paper, we explore the information dissemination problem in ad-hoc wireless networks. First, we analyze the probability of successful broadcast, assuming that the nodes are uniformly distributed, the available area has a lower bound relative to the total number of nodes, and there is zero knowledge of the overall topology of the network. By showing that the probability of such events is small, we are motivated to extract good graph topologies to minimize the overall transmissions. Three algorithms are used to generate topologies of the network with guaranteed connectivity: the minimum radius graph, the relative neighborhood graph, and the minimum spanning tree. Our simulation shows that the relative neighborhood graph has certain good graph properties which make it suitable for efficient information dissemination.
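    Of the three topologies, the relative neighborhood graph has the simplest definition: (u, v) is an edge iff no third node is closer to both u and v than they are to each other (the RNG contains the minimum spanning tree, so connectivity is guaranteed). A brute-force O(n³) sketch:

```python
import math

def rng_edges(points):
    """Relative neighborhood graph of 2D points.

    (i, j) is an edge iff no third point k satisfies
    max(d(i, k), d(j, k)) < d(i, j), i.e. no point lies in the 'lune' of i and j.
    """
    def d(a, b):
        return math.dist(a, b)

    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dij = d(points[i], points[j])
            if all(max(d(points[i], points[k]), d(points[j], points[k])) >= dij
                   for k in range(n) if k not in (i, j)):
                edges.append((i, j))
    return edges
```

For three collinear nodes, the middle node blocks the long edge, leaving exactly the two short edges.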

  19. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
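    The point-biserial correlation is simply the Pearson correlation between a 0/1 variable and a continuous one; a sketch of the classic mean-difference form (using the population standard deviation, which makes the two computations agree exactly):

```python
import math

def point_biserial(x_binary, y):
    """Point-biserial correlation r_pb = (M1 - M0) / s * sqrt(p * (1 - p)).

    x_binary: 0/1 group labels; y: continuous scores.
    M1, M0 are group means, s the population s.d. of y, p the proportion of 1s.
    """
    n = len(y)
    y1 = [yi for b, yi in zip(x_binary, y) if b == 1]
    y0 = [yi for b, yi in zip(x_binary, y) if b == 0]
    p = len(y1) / n
    mean = sum(y) / n
    s = math.sqrt(sum((yi - mean) ** 2 for yi in y) / n)  # population s.d.
    m1 = sum(y1) / len(y1)
    m0 = sum(y0) / len(y0)
    return (m1 - m0) / s * math.sqrt(p * (1 - p))
```

With perfectly separated groups the coefficient reaches 1.0 exactly, which is the boundary case whose attainability under non-normal y the paper investigates.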

  20. Determining irrigation distribution uniformity and efficiency for nurseries

    Treesearch

    R. Thomas Fernandez

    2010-01-01

    A simple method for testing the distribution uniformity of overhead irrigation systems is described. The procedure is described step-by-step along with an example. Other uses of distribution uniformity testing are presented, as well as common situations that affect distribution uniformity and how to alleviate them.
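    The standard catch-can metric behind such tests is low-quarter distribution uniformity (DUlq): the mean of the lowest quarter of catch volumes divided by the overall mean. A minimal sketch (the article's exact worksheet may differ):

```python
def distribution_uniformity(catches):
    """Low-quarter distribution uniformity (DUlq), as a percentage.

    catches: catch-can volumes (or depths) from a grid under the sprinklers.
    DUlq = 100 * (mean of lowest 25% of catches) / (overall mean).
    """
    xs = sorted(catches)
    n_lq = max(1, len(xs) // 4)          # size of the low quarter
    low_quarter_mean = sum(xs[:n_lq]) / n_lq
    return 100.0 * low_quarter_mean / (sum(xs) / len(xs))
```

A perfectly uniform system scores 100; dry spots pull the low-quarter mean, and hence DUlq, down.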

  1. Generalized spherical and simplicial coordinates

    NASA Astrophysics Data System (ADS)

    Richter, Wolf-Dieter

    2007-12-01

    Elementary trigonometric quantities are defined in l2,p analogously to those in l2,2: the sine and cosine functions are generalized for each p>0 as functions sin_p and cos_p such that they satisfy the basic equation cos_p(φ)^p + sin_p(φ)^p = 1. The p-generalized radius coordinate of a point ξ ∈ R^n is defined for each p>0 as r_p(ξ) = (|ξ_1|^p + ... + |ξ_n|^p)^(1/p). On combining these quantities, ln,p-spherical coordinates are defined. It is shown that these coordinates are closely related to ln,p-simplicial coordinates. The Jacobians of these generalized coordinate transformations are derived. Applications and interpretations from analysis deal especially with the definition of a generalized surface content on ln,p-spheres, which is closely related to a modified co-area formula and an extension of Cavalieri's and Torricelli's method of indivisibles, and with differential equations. Applications from probability theory deal especially with a geometric interpretation of the uniform probability distribution on the ln,p-sphere and with the derivation of certain generalized statistical distributions.
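    For reference, the defining relations can be typeset as follows (the radius formula is the standard p-norm expression, assumed from context, since the original abstract dropped it):

```latex
\cos_p^{\,p}(\varphi) + \sin_p^{\,p}(\varphi) = 1 \quad (p > 0),
\qquad
r_p(\xi) = \Bigl(\sum_{i=1}^{n} |\xi_i|^{p}\Bigr)^{1/p}, \quad \xi \in \mathbb{R}^n .
```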

  2. Threshold-selecting strategy for best possible ground state detection with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    2009-04-01

    Genetic algorithms are a standard heuristic to find states of low energy in complex state spaces as given by physical systems such as spin glasses but also in combinatorial optimization. The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show for a large class of quality measures that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. This means uniform probabilities have to be assigned to a group of the individuals with the lowest energy in the population, but probabilities equal to zero to individuals corresponding to energy values above a fixed cutoff, which is equal to a certain rank in the vector of states in the current population sorted by energy. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization and makes only a few assumptions on the underlying principles and hence applies to a large class of algorithms.
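    The selection rule can be stated directly in code: uniform probability over the k lowest-energy individuals, zero for everyone above the cutoff rank (a minimal sketch of threshold selecting):

```python
def threshold_selecting(energies, cutoff_rank):
    """Selection probabilities for threshold selecting.

    The cutoff_rank individuals with the lowest energy each get probability
    1 / cutoff_rank (a rectangular distribution); all others get probability 0.
    """
    order = sorted(range(len(energies)), key=lambda i: energies[i])
    probs = [0.0] * len(energies)
    for i in order[:cutoff_rank]:
        probs[i] = 1.0 / cutoff_rank
    return probs
```

For example, with energies [3, 1, 2, 5] and a cutoff rank of 2, only the two lowest-energy individuals are selectable, each with probability 1/2.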

  3. Size distribution of submarine landslides along the U.S. Atlantic margin

    USGS Publications Warehouse

    Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.

    2009-01-01

    Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km² and 2410 km² and volumes between 0.002 km³ and 179 km³. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km³ may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km³), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. 
Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
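    Fitting a log-normal to scar volumes reduces to taking the mean and standard deviation of the log-volumes; the median exp(μ) corresponds to the "center" quoted above. A minimal sketch:

```python
import math

def lognormal_mle(volumes):
    """Maximum-likelihood fit of a log-normal: returns (mu, sigma), the mean
    and population s.d. of ln(volume)."""
    logs = [math.log(v) for v in volumes]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
    return mu, sigma

def lognormal_median(volumes):
    """Median of the fitted log-normal, exp(mu)."""
    return math.exp(lognormal_mle(volumes)[0])
```

Whether the log-normal or the observation-modified power law fits better is then a model-comparison question, as the abstract discusses.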

  4. Explosion probability of unexploded ordnance: expert beliefs.

    PubMed

    MacDonald, Jacqueline Anne; Small, Mitchell J; Morgan, M G

    2008-08-01

    This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15-140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one involving children and the other involving construction workers). We also asked the experts to rank by sensitivity to explosion 20 different kinds of UXO found at a case study site at Fort Ord, California. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution, suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between rankings of any pair of experts was 0.41, which, statistically, is barely significant (p = 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's rankings. 
The lack of consensus among experts suggests that empirical studies are needed to better understand the explosion risks of UXO.

  5. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
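    The setup can be checked by Monte Carlo rather than via the closed forms: draw a uniformly distributed start time, integrate the decaying intensity over the window of length T, and average the resulting Poisson distributions. A sketch, assuming the start time is uniform over an interval [0, S] (S is an assumed parameter; the paper's exact closed forms via the incomplete gamma function are not reproduced here):

```python
import math
import random

def photocount_pmf(I0, tau, T, S, n_max, trials=20_000, seed=5):
    """Monte Carlo pmf of the photocount for intensity I(t) = I0 * exp(-t / tau).

    Start time s ~ U(0, S); integrated intensity over [s, s + T] is
    w(s) = I0 * tau * exp(-s / tau) * (1 - exp(-T / tau)); counts are Poisson(w).
    Returns P(n) for n = 0 .. n_max, averaged over the start-time distribution.
    """
    rng = random.Random(seed)
    pmf = [0.0] * (n_max + 1)
    for _ in range(trials):
        s = rng.uniform(0.0, S)
        w = I0 * tau * math.exp(-s / tau) * (1.0 - math.exp(-T / tau))
        term = math.exp(-w)          # Poisson pmf at n = 0, updated iteratively
        for n in range(n_max + 1):
            pmf[n] += term
            term *= w / (n + 1)
    return [p / trials for p in pmf]
```

The estimated pmf sums to ~1 (up to the truncated Poisson tail), and probability mass is concentrated at small counts for a weak, decaying pulse.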

  6. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves monotonic basin hopping (MBH) performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) by variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
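    The contrast can be illustrated with a toy hopping loop (a deliberate simplification: the inner local-optimization step of real MBH is omitted, and only improving hops are kept). Cauchy variates are generated by the standard inverse-CDF trick tan(π(u − 1/2)), mixing many short steps with occasional long jumps:

```python
import math
import random

def mbh(f, x0, step=1.0, iters=2000, heavy_tailed=True, seed=3):
    """Toy monotonic basin hopping: perturb the incumbent, accept only improvements.

    heavy_tailed=True draws Cauchy-distributed perturbations (long tails);
    False draws uniform perturbations on [-step, step] for comparison.
    """
    rng = random.Random(seed)
    best_x, best_f = list(x0), f(x0)
    for _ in range(iters):
        if heavy_tailed:
            # Cauchy sample via inverse CDF of a standard Cauchy, scaled by step
            cand = [x + step * math.tan(math.pi * (rng.random() - 0.5)) for x in best_x]
        else:
            cand = [x + step * rng.uniform(-1.0, 1.0) for x in best_x]
        fc = f(cand)
        if fc < best_f:          # monotonic: keep only improving hops
            best_x, best_f = cand, fc
    return best_x, best_f
```

On a simple quadratic with its minimum far outside the uniform step range, the heavy-tailed walker's long jumps let it reach and then refine the basin within the same iteration budget.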

  7. Aspect-related Vegetation Differences Amplify Soil Moisture Variability in Semiarid Landscapes

    NASA Astrophysics Data System (ADS)

    Yetemen, O.; Srivastava, A.; Kumari, N.; Saco, P. M.

    2017-12-01

    Soil moisture variability (SMV) in semiarid landscapes is affected by vegetation, soil texture, climate, aspect, and topography. The heterogeneity in vegetation cover that results from the effects of microclimate, terrain attributes (slope gradient, aspect, drainage area, etc.), soil properties, and spatial variability in precipitation has been reported to act as the dominant factor modulating SMV in semiarid ecosystems. However, the role of hillslope aspect in SMV, though reported in many field studies, has not received the same degree of attention, probably due to the lack of extensive large datasets. Numerical simulations can then be used to elucidate the contribution of aspect-driven vegetation patterns to this variability. In this work, we perform a sensitivity analysis to study the variables driving SMV using the CHILD landscape evolution model equipped with a spatially distributed solar-radiation component that couples vegetation dynamics and surface hydrology. To explore how aspect-driven vegetation heterogeneity contributes to the SMV, CHILD was run using a range of parameters selected to reflect different scenarios (from uniform to heterogeneous vegetation cover). Throughout the simulations, the spatial distributions of soil moisture and vegetation cover are computed to estimate the corresponding coefficients of variation. Under uniform spatial precipitation forcing and uniform soil properties, the factors affecting the spatial distribution of solar insolation are found to play a key role in the SMV through the emergence of aspect-driven vegetation patterns. Hence, factors such as catchment gradient, aspect, and latitude define water stress and vegetation growth, and in turn affect the available soil moisture content. Interestingly, changes in soil properties (porosity, root depth, and pore-size distribution) over the domain are not as effective as the other factors. 
These findings show that the factors associated with aspect-related vegetation differences amplify the soil moisture variability of semiarid landscapes.

  8. Electric-field-induced plasmon in AA-stacked bilayer graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, Y.C., E-mail: yingchih.chuang@gmail.com; Wu, J.Y., E-mail: yarst5@gmail.com; Lin, M.F., E-mail: mflin@mail.ncku.edu.tw

    2013-12-15

    The collective excitations in AA-stacked bilayer graphene for a perpendicular electric field are investigated analytically within the tight-binding model and the random-phase approximation. Such a field destroys the uniform probability distribution of the four sublattices. This drives a symmetry breaking between the intralayer and interlayer polarization intensities from the intrapair band excitations. A field-induced acoustic plasmon thus emerges in addition to the strongly field-tunable intrinsic acoustic and optical plasmons. At long wavelengths, the three modes show different dispersions and field dependence. The definite physical mechanism of the electrically inducible and tunable mode can be expected to also be present in other AA-stacked few-layer graphenes. -- Highlights: •The analytical derivations are performed by the tight-binding model. •An electric field drives the non-uniformity of the charge distribution. •A symmetry breaking between the intralayer and interlayer polarizations is illustrated. •An extra plasmon emerges besides two intrinsic modes in AA-stacked bilayer graphene. •The mechanism of a field-induced mode is present in AA-stacked few-layer graphenes.

  9. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. 
Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.

  10. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as the existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the forms of the considered probability densities, namely Erlang, triangular, and uniform probability densities, either separated from zero or not. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
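    The linear chain trick mentioned at the end replaces an Erlang-distributed delay (shape k, rate a, mean delay k/a) by a chain of k ordinary differential equations, each stage relaxing toward the previous one; the last stage approximates the Erlang-weighted history of the input. A minimal explicit-Euler sketch (illustrative only, not the authors' solver):

```python
def erlang_chain_step(z, y, rate, dt):
    """One explicit Euler step of the linear chain.

    z: chain state (k stages); y: current input signal; rate: Erlang rate a.
    Stage 0 tracks y, stage i tracks stage i-1; z[-1] approximates the
    Erlang(k, rate)-weighted history of y (mean delay k / rate).
    """
    new = z[:]
    new[0] += dt * rate * (y - z[0])
    for i in range(1, len(z)):
        new[i] += dt * rate * (z[i - 1] - z[i])
    return new

def settle(y, k=3, rate=2.0, dt=0.01, steps=10_000):
    """Drive the chain with a constant input y; z[-1] should converge to y."""
    z = [0.0] * k
    for _ in range(steps):
        z = erlang_chain_step(z, y, rate, dt)
    return z[-1]
```

Because every stage is a stable linear relaxation, a constant input propagates through the chain and the delayed output converges to it, which is the sanity check for the construction.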

  11. Understanding the complexity of the Lévy-walk nature of human mobility with a multi-scale cost∕benefit model.

    PubMed

    Scafetta, Nicola

    2011-12-01

Probability distributions of human displacements have been fit with exponentially truncated Lévy flights or fat-tailed Pareto inverse power-law probability distributions. Thus, people usually stay within a given location (for example, the city of residence), but with a non-vanishing frequency they visit nearby or far locations too. Herein, we show that an important empirical distribution of human displacements (range: from 1 to 1000 km) can be well fit by three consecutive Pareto distributions with simple integer exponents equal to 1, 2, and (>) 3. These three exponents correspond to three displacement range zones of about 1 km ≲ Δr ≲ 10 km, 10 km ≲ Δr ≲ 300 km, and 300 km ≲ Δr ≲ 1000 km, respectively. These three zones can be geographically and physically well determined as displacements within a city, visits to nearby cities that may occur within one-day trips, and visits to far locations that may require multi-day trips. The incremental integer values of the three exponents can be easily explained with a three-scale mobility cost/benefit model for human displacements based on simple geometrical constraints. Essentially, people would divide space into three major regions (close, medium, and far distances) and would assume that travel benefits are randomly/uniformly distributed mostly within specific urban-like areas. The three displacement distribution zones appear to be characterized by an integer (1, 2, or >3) inverse power exponent because of the specific number (1, 2, or >3) of cost mechanisms (each of which is proportional to the displacement length). The distributions in the first two zones would be associated with Pareto distributions with exponents β = 1 and β = 2 because of simple geometrical statistical considerations, due to the a priori assumption that most benefits are sought in the urban area of the city of residence or in the urban area of specific nearby cities.
We also show, by using independent records of human mobility, that the proposed model predicts the statistical properties of human mobility below 1 km, where people just walk. In the latter case, the threshold between zone 1 and zone 2 may be around 100-200 m and may perhaps have been evolutionarily determined by the natural human high-resolution visual range, which characterizes an area of interest where the benefits are assumed to be randomly and uniformly distributed. This rich and suggestive interpretation of human mobility may characterize other complex random-walk phenomena that can likewise be described by an N-piece Pareto fit with increasing integer exponents. This study also suggests that distribution functions used to fit experimental probability distributions must be chosen carefully so as not to obscure the physics underlying a phenomenon.
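The three-zone construction above can be sketched numerically: a piecewise Pareto density p(r) ∝ r^(-β) with β = 1, 2, 3, matched continuously at the zone boundaries, normalized, and sampled by inverse transform. The zone edges follow the abstract; using exactly 3 for the outer exponent (rather than ">3") is a simplifying assumption.

```python
import math, random

# Piecewise Pareto density on [1, 1000] km with exponents 1, 2, 3 per zone,
# matched continuously at r = 10 and r = 300 km, then normalized.
EDGES = [1.0, 10.0, 300.0, 1000.0]
BETAS = [1.0, 2.0, 3.0]

# Continuity at each inner edge b: c_next * b^(-beta_next) = c_prev * b^(-beta_prev).
coef = [1.0]
for i in range(1, 3):
    coef.append(coef[-1] * EDGES[i] ** (BETAS[i] - BETAS[i - 1]))

def zone_mass(i):
    a, b, beta, c = EDGES[i], EDGES[i + 1], BETAS[i], coef[i]
    if beta == 1.0:
        return c * math.log(b / a)
    return c * (a ** (1 - beta) - b ** (1 - beta)) / (beta - 1)

masses = [zone_mass(i) for i in range(3)]
total = sum(masses)

def sample(rng):
    """Inverse-transform sampling: pick a zone by mass, invert its CDF."""
    u = rng.random() * total
    i = 0
    while i < 2 and u > masses[i]:
        u -= masses[i]
        i += 1
    a, beta, c = EDGES[i], BETAS[i], coef[i]
    if beta == 1.0:
        return a * math.exp(u / c)
    return (a ** (1 - beta) - u * (beta - 1) / c) ** (1 / (1 - beta))

rng = random.Random(0)
xs = [sample(rng) for _ in range(10000)]
```

Most of the probability mass falls in the within-city zone, mirroring the empirical observation that people usually stay near their place of residence.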

  12. Deviation from Power Law Behavior in Landslide Phenomenon

    NASA Astrophysics Data System (ADS)

    Li, L.; Lan, H.; Wu, Y.

    2013-12-01

Power-law distributions of magnitude are widely observed for many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslides are unique in that their size distribution is characterized by a power-law decrease with a rollover at the small-size end. Yet the emergence of this rollover, i.e., the deviation from power-law behavior for small landslides, remains a mystery. In this contribution, we group the forces applied to a landslide body into two categories: 1) forces proportional to the volume of the failure mass (gravity and friction), and 2) forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, for a given mechanical configuration, the ratio of failure volume to failure surface area must exceed a corresponding threshold to produce a failure. Assuming all landslides share a uniform shape, so that the volume-to-surface-area ratio increases regularly with landslide volume, a cutoff of the landslide volume distribution at the small-size end can be defined. However, in realistic landslide phenomena, where landslide shape and mechanical configuration are heterogeneous, a simple cutoff of the landslide volume distribution does not exist. The stochasticity of landslide shape introduces a probability distribution of the volume-to-surface-area ratio as a function of landslide volume, from which the probability that this ratio exceeds the threshold can be estimated for each value of landslide volume. An experiment based on empirical data showed that this probability can cause the power-law distribution of landslide volume to roll over at the small-size end.
We therefore propose that the constraints on the failure-volume to failure-surface-area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, account for the deviation from power-law behavior in the landslide phenomenon. The figure shows that a rollover of the landslide size distribution at the small-size end is produced when the probability of V/S (the failure-volume to failure-surface ratio of a landslide) exceeding the mechanical threshold is applied to the power-law distribution of landslide volume.

  13. A blueprint for demonstrating quantum supremacy with superconducting qubits

    NASA Astrophysics Data System (ADS)

    Neill, C.; Roushan, P.; Kechedzhi, K.; Boixo, S.; Isakov, S. V.; Smelyanskiy, V.; Megrant, A.; Chiaro, B.; Dunsworth, A.; Arya, K.; Barends, R.; Burkett, B.; Chen, Y.; Chen, Z.; Fowler, A.; Foxen, B.; Giustina, M.; Graff, R.; Jeffrey, E.; Huang, T.; Kelly, J.; Klimov, P.; Lucero, E.; Mutus, J.; Neeley, M.; Quintana, C.; Sank, D.; Vainsencher, A.; Wenner, J.; White, T. C.; Neven, H.; Martinis, J. M.

    2018-04-01

    A key step toward demonstrating a quantum system that can address difficult problems in physics and chemistry will be performing a computation beyond the capabilities of any classical computer, thus achieving so-called quantum supremacy. In this study, we used nine superconducting qubits to demonstrate a promising path toward quantum supremacy. By individually tuning the qubit parameters, we were able to generate thousands of distinct Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits increases, the system continues to explore the exponentially growing number of states. Extending these results to a system of 50 qubits has the potential to address scientific questions that are beyond the capabilities of any classical computer.

  14. Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field

    NASA Astrophysics Data System (ADS)

    Borelli, M. E. S.; Carneiro, C. E. I.

    1996-02-01

We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δi, with probability distribution P(Δi) = pδ(Δi − Δ) + (1 − p)δ(Δi). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.

  15. Crater topography on Titan: implications for landscape evolution

    USGS Publications Warehouse

    Neish, Catherine D.; Kirk, R.L.; Lorenz, R.D.; Bray, V.J.; Schenk, P.; Stiles, B.W.; Turtle, E.; Mitchell, Ken; Hayes, A.

    2013-01-01

    We present a comprehensive review of available crater topography measurements for Saturn’s moon Titan. In general, the depths of Titan’s craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundreds of meters shallower than Ganymede’s average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 ± 0.0003 (for the largest crater studied, Menrva, D ~ 425 km) and 0.017 ± 0.004 (for the smallest crater studied, Ksa, D ~ 39 km). When we evaluate the Anderson–Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan’s craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close ‘airless’ analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan’s surface, the only body in the outer Solar System with extensive surface–atmosphere exchange.

  16. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes known to scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value with a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
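The lognormal result rests on the observation that ln N is a sum of seven independent terms, to which the CLT applies. A minimal Monte Carlo sketch follows; the uniform ranges for the seven factors are arbitrary placeholders, not Maccone's values, and the exact formula E[ln U(a,b)] = (b ln b − a ln a)/(b − a) − 1 is used as a cross-check.

```python
import math, random

# N = X1 * ... * X7 with independent positive factors, so
# ln N = sum of ln Xi, and the CLT pushes N toward a lognormal law.
rng = random.Random(42)
FACTORS = [(0.5, 1.5), (0.1, 0.3), (0.2, 0.6), (1.0, 3.0),
           (0.05, 0.15), (0.4, 0.8), (10.0, 1000.0)]  # placeholder (a, b) for U(a, b)

def sample_logN():
    return sum(math.log(rng.uniform(a, b)) for a, b in FACTORS)

def exact_mean_log(a, b):
    # E[ln U(a, b)] = (b ln b - a ln a) / (b - a) - 1  (exact, by integration)
    return (b * math.log(b) - a * math.log(a)) / (b - a) - 1.0

n = 20000
logs = [sample_logN() for _ in range(n)]
mc_mean = sum(logs) / n
th_mean = sum(exact_mean_log(a, b) for a, b in FACTORS)
print(mc_mean, th_mean)
```

The Monte Carlo mean of ln N matches the analytic sum of the per-factor means, and a histogram of `logs` would look approximately Gaussian, i.e. N approximately lognormal, already with seven factors.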

  17. Process, System, Causality, and Quantum Mechanics: A Psychoanalysis of Animal Faith

    NASA Astrophysics Data System (ADS)

    Etter, Tom; Noyes, H. Pierre

We shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution J on a set of random variables containing x and y, define a link between x and y to be the condition x=y on J. Define the state D of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule and the unitary dynamical law, whose best-known form is the Schrödinger equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PD) and D'T = TD respectively, where P is a projection, D and D' are (von Neumann) density matrices, and T is a unitary transformation. We'll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all D's are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ and how they can coexist in natural harmony with each other, as they must in quantum measurement, which we'll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.

  18. Evaluation damage threshold of optical thin-film using an amplified spontaneous emission source

    NASA Astrophysics Data System (ADS)

    Zhou, Qiong; Sun, Mingying; Zhang, Zhixiang; Yao, Yudong; Peng, Yujie; Liu, Dean; Zhu, Jianqiang

    2014-10-01

An accurate evaluation method using amplified spontaneous emission (ASE) as the irradiation source has been developed for testing thin-film damage thresholds. The partial coherence of the ASE source results in a very smooth beam profile in the near field and a uniform intensity distribution of the focal spot in the far field. The ASE is generated by an Nd:glass rod amplifier in the SG-II high-power laser facility, with a pulse duration of 9 ns and a spectral width (FWHM) of 1 nm. The damage threshold of the TiO2 high-reflection film is 14.4 J/cm2 using ASE as the irradiation source, about twice the 7.4 J/cm2 measured with a laser source of the same pulse duration and central wavelength. The damage area induced by ASE is small, with small-scale desquamation and a few pits corresponding to the defect distribution of the samples. Large-area desquamation is observed in the area damaged by the laser, mainly because of the non-uniformity of the laser light. The ASE damage threshold leads to more accurate evaluations of the sample damage probability by reducing the influence of hot spots in the irradiation beam. Furthermore, the ASE source has great potential for detecting the defect distribution of optical elements.

  19. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    NASA Astrophysics Data System (ADS)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
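The PIT reliability check used above can be sketched in a few lines: if observations truly come from the forecast distributions, the values u = F(y_obs) are uniform on [0, 1]. Plain Gaussian forecasts are assumed here purely for illustration; HMM-GMR actually issues Gaussian-mixture predictive distributions.

```python
import math, random

def norm_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

rng = random.Random(1)
pits = []
for _ in range(5000):
    mu, sigma = rng.uniform(-1, 1), rng.uniform(0.5, 2.0)  # varying forecast params
    y = rng.gauss(mu, sigma)            # observation drawn from the forecast itself
    pits.append(norm_cdf(y, mu, sigma)) # PIT value

mean_pit = sum(pits) / len(pits)
print(mean_pit)
```

A well-calibrated forecast yields PIT values with mean near 0.5 and a flat histogram; systematic deviation (e.g. a U-shape) signals an uncertainty spread that is too narrow.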

  20. Open-orbit theory of photoionization microscopy on nonhydrogenic atoms

    NASA Astrophysics Data System (ADS)

    Liu, F. L.; Zhao, L. B.

    2017-04-01

    Semiclassical open-orbit theory (OOT), previously developed to study photoionization of hydrogenic atoms in a uniform electric field [L. B. Zhao and J. B. Delos, Phys. Rev. A 81, 053417 (2010), 10.1103/PhysRevA.81.053417], has been generalized to describe the propagation of outgoing electron waves to macroscopic distances from a nonhydrogenic atomic source. The generalized OOT has been applied to calculate spatial distributions of electron probability densities and current densities, produced due to photoionization for lithium in a uniform electric field. The obtained results are compared with those from the fully quantum-mechanical coupled-channel theory (CCT). The excellent agreement between the CCT and OOT confirms the reliability of the generalized OOT. Comparison is also made with theoretical calculations from the wave-packet propagation technique and the recent photoionization microscopy experiment. The existing difference between theory and experiment is discussed.

  1. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    PubMed

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of a brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  2. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
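The core inverse-probability idea behind both proposed corrections can be sketched as follows: each record in the case-control sample is re-drawn with weight 1/(its inclusion probability) so the resample mimics the source population. This illustrates the general mechanism only, not sambia's exact estimators; the prevalence and sample sizes are invented.

```python
import random

# Two-phase case-control setting: all cases enter the study, but controls
# are subsampled so the study is 50/50. Inverse-probability resampling
# reweights records by 1 / inclusion probability to recover the population.
rng = random.Random(7)
prevalence = 0.05   # assumed population positive rate
# Relative inclusion probabilities that yield a 50/50 case-control sample:
# cases ~ 1, controls ~ p / (1 - p).
incl = {1: 1.0, 0: prevalence / (1 - prevalence)}

# A 50/50 case-control sample of size 2000 (labels only, for illustration).
sample = [1] * 1000 + [0] * 1000

# Resample with replacement, weights proportional to 1 / inclusion probability.
weights = [1.0 / incl[y] for y in sample]
resampled = rng.choices(sample, weights=weights, k=20000)
rate = sum(resampled) / len(resampled)
print(rate)
```

The resampled positive rate recovers the population prevalence (5%) rather than the distorted 50% of the study sample, which is what lets a classifier trained on the resample predict sensibly on nonstratified data.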

  3. Scale relativity and quantization of planet obliquities.

    NASA Astrophysics Data System (ADS)

    Nottale, L.

    1998-07-01

    The author applies the theory of scale relativity to the equations of rotational motion of solid bodies. He predicts in the new framework that the obliquities and inclinations of planets and satellites in the solar system must be quantized. Namely, one expects their distribution to be no longer uniform between 0 and π, but instead to display well-defined peaks of probability density at angles θk = kπ/n. The author shows in the present paper that the observational data agree very well with the prediction for n = 7, including the retrograde bodies and those which are heeled over the ecliptic plane. In particular, the value 23°27' of the obliquity of the Earth, which partly determines its climate, is not a random one, but lies in one of the main probability peaks at θ = π/7.
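The predicted quantization can be checked with simple arithmetic: the peaks θk = kπ/n for n = 7 in degrees, and the peak nearest to Earth's obliquity of 23°27'.

```python
import math

# Predicted probability-density peaks theta_k = k * pi / n for n = 7,
# expressed in degrees, compared against Earth's obliquity 23 deg 27'.
n = 7
peaks_deg = [k * 180.0 / n for k in range(1, n)]   # k*pi/n rad -> k*180/n deg
earth = 23.0 + 27.0 / 60.0                         # 23 deg 27' = 23.45 deg
nearest = min(peaks_deg, key=lambda p: abs(p - earth))
print(peaks_deg, nearest)
```

The peaks fall at about 25.7°, 51.4°, 77.1°, 102.9°, 128.6°, and 154.3°; Earth's 23.45° sits closest to the k = 1 peak at π/7 ≈ 25.7°, consistent with the abstract's claim that it lies in one of the main probability peaks.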

  4. Modelling volatility recurrence intervals in the Chinese commodity futures market

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Wang, Zhengxin; Guo, Haiming

    2016-09-01

The law of extreme event occurrence attracts much research. We study the volatility recurrence intervals of Chinese commodity futures prices: the results show that the probability distributions of the scaled volatility recurrence intervals collapse onto a uniform scaling curve for different thresholds q, so the probability distribution of extreme events can be deduced from that of normal events. The tail of the scaling curve is well fitted by a Weibull form, which passes significance tests based on KS measures. Both short-term and long-term memory are present in the recurrence intervals for different thresholds q, indicating that the recurrence intervals can be predicted. In addition, similar to volatility, volatility recurrence intervals also exhibit clustering. Through Monte Carlo simulation, we artificially synthesise ARMA and GARCH-class sequences similar to the original data and identify the reason behind the clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence interval characteristics. The results indicate that the FIACD model may provide a method for analysing volatility recurrence intervals.
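Extracting and scaling recurrence intervals above a threshold q is straightforward; a minimal sketch (with a toy volatility series, not the futures data):

```python
# Recurrence intervals above a threshold q are the gaps between successive
# exceedances; scaling by their mean lets curves for different q be
# compared on one axis, as in the scaling collapse described above.

def scaled_recurrence_intervals(series, q):
    hits = [i for i, v in enumerate(series) if v > q]
    gaps = [b - a for a, b in zip(hits, hits[1:])]
    if not gaps:
        return []
    mean = sum(gaps) / len(gaps)
    return [g / mean for g in gaps]

# Toy volatility series for illustration only.
series = [0.1, 0.9, 0.2, 0.3, 1.1, 0.05, 0.4, 1.5, 0.2, 0.8, 1.2]
out = scaled_recurrence_intervals(series, 0.85)
print(out)
```

Applied to real volatility series, the distributions of these scaled intervals for several thresholds would be overlaid to test for a common scaling curve.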

  5. Statistics of single unit responses in the human medial temporal lobe: A sparse and overdispersed code

    NASA Astrophysics Data System (ADS)

    Magyar, Andrew

The recent discovery of cells that respond to purely conceptual features of the environment (particular people, landmarks, objects, etc.) in the human medial temporal lobe (MTL) has raised many questions about the nature of the neural code in humans. The goal of this dissertation is to develop a novel statistical method based upon maximum likelihood regression, which is then applied to these experiments in order to produce a quantitative description of the coding properties of the human MTL. In general, the method is applicable to any experiment in which a sequence of stimuli is presented to an organism while the binary responses of a large number of cells are recorded in parallel. The central concept underlying the approach is the total probability that a neuron responds to a random stimulus, called the neuronal sparsity. The model then estimates the distribution of response probabilities across the population of cells. Applying the method to single-unit recordings from the human medial temporal lobe, estimates of the sparsity distributions are acquired in four regions: the hippocampus, the entorhinal cortex, the amygdala, and the parahippocampal cortex. The resulting distributions are found to be sparse (a large fraction of cells with low response probability) and highly non-uniform, with a large proportion of ultra-sparse neurons that possess a very low response probability and a smaller population of cells that respond much more frequently. Ramifications of the results are discussed in relation to the sparse coding hypothesis, and comparisons are made between the statistics of the human medial temporal lobe cells and place cells observed in the rodent hippocampus.

  6. Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator

    NASA Technical Reports Server (NTRS)

    Bents, D. J.

    1982-01-01

A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution and presents the alternate consolidation designs which result. They are compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is presented for comparing the requirements of the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.

  7. Identifying uniformly mutated segments within repeats.

    PubMed

    Sahinalp, S Cenk; Eichler, Evan; Goldberg, Paul; Berenbrink, Petra; Friedetzky, Tom; Ergun, Funda

    2004-12-01

Given a long string of characters from a constant-size alphabet, we present an algorithm to determine whether its characters have been generated by a single i.i.d. random source. More specifically, consider all possible n-coin models for generating a binary string S, where each bit of S is generated via an independent toss of one of the n coins in the model. The choice of which coin to toss is decided by a random walk on the set of coins, where the probability of a coin change is much lower than the probability of using the same coin repeatedly. We present a procedure to evaluate the likelihood of an n-coin model for a given S, subject to a uniform prior distribution over the parameters of the model (which represent mutation rates and probabilities of copying events). In the absence of detailed prior knowledge of these parameters, the algorithm can be used to determine whether the a posteriori probability for n=1 is higher than for any other n>1. Our algorithm runs in time O(l^4 log l), where l is the length of S, through a dynamic programming approach that exploits the assumed convexity of the a posteriori probability for n. Our test can be used in the analysis of long alignments between pairs of genomic sequences in a number of ways. For example, functional regions in genome sequences exhibit much lower mutation rates than non-functional regions. Because our test provides a means of determining variations in the mutation rate, it may be used to distinguish functional regions from non-functional ones. Another application is in determining whether two highly similar, and thus evolutionarily related, genome segments are the result of a single copy event or of a complex series of copy events. This is a particular issue in evolutionary studies of genome regions rich in repeat segments (especially tandemly repeated segments).
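The model-selection idea can be illustrated in a drastically reduced form: under an i.i.d. coin with a uniform prior on its bias, the marginal likelihood of a binary string of length l with k ones is B(k+1, l−k+1) = 1/((l+1)·C(l,k)). Comparing one coin against two coins (one per half) is a crude stand-in for the paper's full n-coin dynamic program, not the algorithm itself.

```python
from math import comb

# Marginal likelihood of a binary string under a single i.i.d. coin with a
# uniform Beta(1,1) prior on its bias:
#   integral of p^k (1-p)^(l-k) dp = B(k+1, l-k+1) = 1 / ((l+1) * C(l, k)).
def marginal_one_coin(s):
    l, k = len(s), s.count("1")
    return 1.0 / ((l + 1) * comb(l, k))

# Crude two-coin alternative: independent coins on the two halves.
def marginal_two_coins(s):
    mid = len(s) // 2
    return marginal_one_coin(s[:mid]) * marginal_one_coin(s[mid:])

hetero = "0" * 20 + "1" * 20   # generation rate changes halfway
homo = "01" * 20               # a uniform 50/50 source throughout
print(marginal_one_coin(hetero), marginal_two_coins(hetero))
```

For the heterogeneous string the two-coin model wins by many orders of magnitude, while for the homogeneous string the single coin wins, which is the comparison the a posteriori test formalizes for general n.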

  8. Identification and characterization of extraordinary rainstorms in Italy

    NASA Astrophysics Data System (ADS)

    Libertino, Andrea; Ganora, Daniele; Claps, Pierluigi

    2017-04-01

Despite its generally mild climate, Italy, like most of the Mediterranean region, is prone to the development of "super-extreme" events with extraordinary rainfall intensities. The main triggering mechanisms of these events are nowadays quite well known, but more research is needed to transform this knowledge into directions for building updated rainstorm hazard maps at the national scale. Moreover, a precise definition of "super-extremes" is still lacking beyond the original suggestion of a second specific EV1 component made with the TCEV distribution. The above considerations led us to consider Italy a peculiar and challenging case study, where the geographic and orographic settings, associated with recurring storm-induced disasters, require an updated assessment of "super-extreme" rainfall hazard at the country scale. Until now, the lack of a unified dataset of rainfall extremes has made this task difficult to achieve. In this work we report the results of an analysis of a comprehensive and uniform set of annual rainfall maxima, collected from the different authorities in charge, representing the reference dataset of extremes for durations from 1 to 24 hours. The database includes more than 6000 measuring points nationwide, spanning the period 1916-2014. Our analysis aims at identifying a meaningful population of records deviating from an "ordinary" extreme value distribution and at assessing the stationarity in the timing of these events at the national scale. The first problems to be overcome relate to the non-uniform distribution of data in time and space. The evaluation of meaningful relative thresholds, aimed at selecting significant samples for the trend assessment, must then be addressed. A first investigation refers to the events exceeding a threshold that identifies an average of one occurrence per year over all of Italy, i.e. with a 1/1000 overall probability of exceedance.
Geographic representation of these "outliers", scaled on local averages, demonstrates some prevailing clustering along the Tyrrhenian coastal areas. Subsequent application of quantile regressions, aimed at minimizing the temporal non-uniformity of samples, shows significant increasing trends in the extremes of very short duration. Further efforts have been undertaken to explore the selection of a common national set of higher-order parameters for all of Italy, which would make it less arduous to identify the probability of occurrence of "super-extremes" in the country.

  9. Analysis of mean seismic ground motion and its uncertainty based on the UCERF3 geologic slip rate model with uncertainty for California

    USGS Publications Warehouse

    Zeng, Yuehua

    2018-01-01

The Uniform California Earthquake Rupture Forecast v.3 (UCERF3) model (Field et al., 2014) considers epistemic uncertainty in fault-slip rate via the inclusion of multiple rate models based on geologic and/or geodetic data. However, these slip rates are commonly clustered about their mean value and do not reflect the broader distribution of possible rates and associated probabilities. Here, we consider both a double-truncated 2σ Gaussian and a boxcar distribution of slip rates and use a Monte Carlo simulation to sample the entire range of the distribution for California fault-slip rates. We compute the seismic hazard following the methodology and logic-tree branch weights applied to the 2014 national seismic hazard model (NSHM) for the western U.S. region (Petersen et al., 2014, 2015). By applying a new approach developed in this study to the probabilistic seismic hazard analysis (PSHA), using precomputed rates of exceedance from each fault as a Green's function, we reduce the computation time by a factor of about 10^5 and apply it to the mean PSHA estimates with 1000 Monte Carlo samples of fault-slip rates to compare with results calculated using only the mean or preferred slip rates. The difference in the mean probabilistic peak ground motion corresponding to a 2% in 50-yr probability of exceedance is less than 1% on average over all of California for both the Gaussian and boxcar probability distributions for slip-rate uncertainty but reaches about 18% in areas near faults compared with that calculated using the mean or preferred slip rates. The average uncertainties in 1σ peak ground-motion level are 5.5% and 7.3% of the mean, with relative maximum uncertainties of 53% and 63% for the Gaussian and boxcar probability density function (PDF), respectively.
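The precomputed-exceedance idea can be sketched as follows: if the exceedance rate at a site is (to first order) linear in each fault's slip rate, one precomputes a per-fault rate g_i per unit slip once and combines samples as Σ s_i·g_i, so each Monte Carlo draw is a dot product instead of a full PSHA run. All numerical values here are invented for illustration, not taken from the study.

```python
import random

# Precomputed "Green's function" g[i]: exceedance rate at the site per unit
# slip rate (per mm/yr) on fault i. Hazard for one slip-rate sample is then
# just sum_i s_i * g_i -- the source of the large speedup described above.
rng = random.Random(3)
g = [2e-4, 5e-4, 1e-4]        # invented per-fault rates
mean_s = [5.0, 12.0, 2.0]     # invented mean slip rates, mm/yr
sigma_s = [1.0, 3.0, 0.5]     # invented slip-rate standard deviations

def truncated_gauss(mu, sd):
    """Double-truncated 2-sigma Gaussian, as in the study's first PDF choice."""
    while True:
        x = rng.gauss(mu, sd)
        if abs(x - mu) <= 2.0 * sd:
            return x

def hazard(slips):
    return sum(si * gi for si, gi in zip(slips, g))

samples = [hazard([truncated_gauss(m, s) for m, s in zip(mean_s, sigma_s)])
           for _ in range(5000)]
mc_mean = sum(samples) / len(samples)
print(mc_mean, hazard(mean_s))
```

Because the combination is linear and the truncation is symmetric, the Monte Carlo mean hazard matches the hazard at the mean slip rates, while the spread of `samples` quantifies the slip-rate contribution to ground-motion uncertainty.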

  10. Origin of the Valley Networks On Mars: A Hydrological Perspective

    NASA Technical Reports Server (NTRS)

    Gulick, Virginia C.

    2000-01-01

    The geomorphology of the Martian valley networks is examined from a hydrological perspective for their compatibility with an origin by rainfall, globally higher heat flow, and localized hydrothermal systems. Comparison of morphology and spatial distribution of valleys on geologic surfaces with terrestrial fluvial valleys suggests that most Martian valleys are probably not indicative of a rainfall origin, nor are they indicative of formation by an early global uniformly higher heat flow. In general, valleys are not uniformly distributed within geologic surface units as are terrestrial fluvial valleys. Valleys tend to form either as isolated systems or in clusters on a geologic surface unit leaving large expanses of the unit virtually untouched by erosion. With the exception of fluvial valleys on some volcanoes, most Martian valleys exhibit a sapping morphology and do not appear to have formed along with those that exhibit a runoff morphology. In contrast, terrestrial sapping valleys form from and along with runoff valleys. The isolated or clustered distribution of valleys suggests localized water sources were important in drainage development. Persistent ground-water outflow driven by localized, but vigorous hydrothermal circulation associated with magmatism, volcanism, impacts, or tectonism is, however, consistent with valley morphology and distribution. Snowfall from sublimating ice-covered lakes or seas may have provided an atmospheric water source for the formation of some valleys in regions where the surface is easily eroded and where localized geothermal/hydrothermal activity is sufficient to melt accumulated snowpacks.

  11. Magnetic intermittency of solar wind turbulence in the dissipation range

    NASA Astrophysics Data System (ADS)

    Pei, Zhongtian; He, Jiansen; Tu, Chuanyi; Marsch, Eckart; Wang, Linghua

    2016-04-01

The features, nature, and fate of intermittency in the dissipation range are an interesting topic in solar wind turbulence. We calculate the distribution of flatness for the magnetic field fluctuations as a function of angle and scale. The flatness distribution shows a "butterfly" pattern, with two wings located at angles parallel/anti-parallel to the local mean magnetic field direction and the main body located at angles perpendicular to local B0. This "butterfly" pattern illustrates that the flatness profile in the (anti-)parallel direction approaches its maximum value at larger scales and drops faster than that in the perpendicular direction. The contours of the probability distribution functions at different scales illustrate a "vase" pattern, clearer in the parallel direction, which confirms the scale variation of flatness and indicates intermittency generation and dissipation. The angular distribution of the structure function in the dissipation range shows an anisotropic pattern. The quasi-mono-fractal scaling of the structure function in the dissipation range is also illustrated and investigated with a mathematical model for inhomogeneous cascading (the extended p-model). Unlike in the inertial range, the extended p-model for the dissipation range results in an approximately uniform fragmentation measure. However, a more complete mathematical and physical model involving both non-uniform cascading and dissipation is needed. The nature of intermittency may be strong structures or large-amplitude fluctuations, which may be tested with magnetic helicity. In one case study, we find that the heating effect, in terms of entropy, for large-amplitude fluctuations seems to be more obvious than for strong structures.

  12. Equilibrium stochastic dynamics of a Brownian particle in inhomogeneous space: Derivation of an alternative model

    NASA Astrophysics Data System (ADS)

    Bhattacharyay, A.

    2018-03-01

An alternative equilibrium stochastic dynamics for a Brownian particle in inhomogeneous space is derived. Such a dynamics can model the motion of a complex molecule in its conformation space when in equilibrium with a uniform heat bath. The derivation is done by a simple generalization of the formulation due to Zwanzig for a Brownian particle in a homogeneous heat bath. We show that, if the system couples to a different number of bath degrees of freedom in different conformations, then the alternative model is derived. We discuss the results of an experiment by Faucheux and Libchaber, which has probably indicated a possible limitation of the Boltzmann distribution as the equilibrium distribution of a Brownian particle in inhomogeneous space, and propose experimental verification of the present theory using similar methods.

  13. Conditional probability distribution function of "energy transfer rate" (PDF(ɛ|PVI)) as compared with its counterpart of temperature (PDF(T|PVI)) at the same condition of fluctuation

    NASA Astrophysics Data System (ADS)

    He, Jiansen; Wang, Yin; Pei, Zhongtian; Zhang, Lei; Tu, Chuanyi

    2017-04-01

The energy transfer rate of turbulence is not uniform everywhere but is suggested to follow a certain distribution, e.g., a lognormal distribution (Kolmogorov 1962). The inhomogeneous transfer rate leads to the emergence of intermittency, which may be identified with some parameter, e.g., normalized partial variance increments (PVI) (Greco et al., 2009). Intervals with large PVI of the magnetic field fluctuations are found to have a temperature distribution whose median and mean values are higher than those for small PVI levels (Osman et al., 2012). However, there is a large proportion of overlap between the temperature distributions associated with smaller and larger PVIs. So it is recognized that PVI alone cannot fully determine the temperature, since a one-to-one mapping relationship does not exist. One may be curious about the reason responsible for the considerable overlap of the conditional temperature distributions for different levels of PVI. Usually the hotter plasma with higher temperature is speculated to be heated more, with more dissipation of turbulence energy corresponding to a higher energy cascade rate, if the temperature fluctuation of the eigen wave mode is not taken into account. To explore the statistical relationship between turbulence cascading and the plasma thermal state, we aim to study and reveal, for the first time, the conditional probability function of the "energy transfer rate" under different levels of PVI (PDF(ɛ|PVI)), and compare it with the conditional probability function of temperature. The conditional probability distribution function, PDF(ɛ|PVI), is derived as PDF(PVI|ɛ)·PDF(ɛ)/PDF(PVI) according to Bayes' theorem. PDF(PVI) can be obtained directly from the data. PDF(ɛ) is derived from a conjugate-gradient inversion of PDF(PVI) by assuming, reasonably, that PDF(δB|σ) is a Gaussian distribution, where PVI = |δB|/σ and σ ∝ (ɛℓ)^(1/3). PDF(ɛ) can also be acquired from fitting PDF(δB) with the integral function ∫PDF(δB|σ)PDF(σ)dσ.
As a result, PDF(ɛ|PVI) is found to shift to a higher median value of ɛ with increasing PVI, but with a significant overlap of the PDFs for different PVIs. Therefore, PDF(ɛ|PVI) is similar to PDF(T|PVI) in the sense of slow migration with increasing PVI. A detailed comparison between these two conditional PDFs is also performed.
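
    The Bayesian step above can be sketched with a discretized toy computation. The functional forms below (a lognormal PDF(ɛ), a Gaussian likelihood with σ ∝ ɛ^(1/3)) follow the abstract's stated assumptions, but all units and parameter values are arbitrary choices, not the authors' fitted values.

```python
import numpy as np

# Discretized sketch of PDF(eps|PVI) ∝ PDF(PVI|eps) · PDF(eps)  (Bayes' theorem).
# Assumed illustrative forms: lognormal PDF(eps) and Gaussian PDF(|dB| given sigma)
# with sigma ∝ eps^(1/3); units and parameters are arbitrary.
eps = np.linspace(0.01, 10.0, 2000)             # energy transfer rate grid
deps = eps[1] - eps[0]
prior = np.exp(-np.log(eps) ** 2 / 2.0) / eps   # lognormal with ln-sigma = 1
prior /= prior.sum() * deps

sigma = eps ** (1.0 / 3.0)                      # sigma ~ (eps * l)^(1/3), l absorbed
mean_sigma = (sigma * prior).sum() * deps       # reference fluctuation level

medians = []
for pvi in (0.5, 1.0, 2.0):                     # three PVI levels
    db = pvi * mean_sigma                       # |dB| implied by this PVI level
    likelihood = np.exp(-db ** 2 / (2.0 * sigma ** 2)) / sigma
    post = likelihood * prior                   # unnormalized PDF(eps|PVI)
    post /= post.sum() * deps
    cdf = np.cumsum(post) * deps
    medians.append(eps[np.searchsorted(cdf, 0.5)])

print(medians)   # the median of eps shifts upward with increasing PVI
```

    Even in this toy version, the posterior medians rise with PVI while the posteriors remain broad and overlapping, mirroring the qualitative behavior reported for PDF(ɛ|PVI).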

  14. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
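
    The benefit of exploiting a priori statistics can be sketched with a simple single-dwell model: search code-phase cells in order of decreasing prior probability rather than in index order. The code length and prior parameters below are assumptions, and this sketch does not reproduce the report's 41% figure.

```python
import numpy as np

# Illustrative single-dwell model: the code phase lies in one of n_cells cells with
# a Gaussian a priori PDF. Compare the expected number of cells examined for a plain
# linear sweep versus a sweep ordered by decreasing prior probability.
n_cells = 1023                               # hypothetical PN code length
k = np.arange(n_cells)
center, spread = n_cells / 2, n_cells / 10   # assumed Gaussian prior parameters
prior = np.exp(-(k - center) ** 2 / (2.0 * spread ** 2))
prior /= prior.sum()

# expected cells examined = sum over cells of P(phase in cell) * (search position)
linear = np.sum(prior * (k + 1))                   # sweep cells in index order
ordered = np.sum(np.sort(prior)[::-1] * (k + 1))   # most probable cells first
print(linear, ordered, 1.0 - ordered / linear)     # ordered sweep shortens the search
```

    With a concentrated prior, the ordered sweep examines far fewer cells on average; for a uniform prior the two strategies would coincide.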

  15. Risk-targeted maps for Romania

    NASA Astrophysics Data System (ADS)

    Vacareanu, Radu; Pavel, Florin; Craciun, Ionut; Coliba, Veronica; Arion, Cristian; Aldea, Alexandru; Neagu, Cristian

    2018-03-01

Romania has one of the highest seismic hazard levels in Europe. The seismic hazard is due to a combination of local crustal seismic sources, situated mainly in the western part of the country, and the Vrancea intermediate-depth seismic source, which can be found at the bend of the Carpathian Mountains. Recent seismic hazard studies have shown that there are consistent differences between the slopes of the seismic hazard curves for sites situated in the fore-arc and back-arc of the Carpathian Mountains. Consequently, in this study we extend this finding to the evaluation of the probability of collapse of buildings and finally to the development of uniform risk-targeted maps. The main advantage of the uniform risk approach is that the target probability of collapse will be uniform throughout the country. Finally, the results obtained are discussed in the light of a recent study with the same focus performed at the European level using the hazard data from the SHARE project. The analyses performed in this study have pointed to a dominant influence of the quantile of peak ground acceleration used for anchoring the fragility function. This parameter basically alters the shape of the risk-targeted maps, shifting the areas which have higher collapse probabilities from eastern Romania to western Romania as its exceedance probability increases. Consequently, a uniform procedure for deriving risk-targeted maps appears all the more necessary.

  16. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties, assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647

  17. Bayesian bivariate meta-analysis of correlated effects: Impact of the prior distributions on the between-study correlation, borrowing of strength, and joint inferences

    PubMed Central

    Bujkiewicz, Sylwia; Riley, Richard D

    2016-01-01

Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example are used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities.
These recommendations are especially important with a small number of studies and missing data. PMID:26988929

  18. Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Meegan, Charles A.

    1997-01-01

This presentation will describe two applications of Bayesian statistics to Gamma-Ray Bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because the Bayesian approach easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.

  19. Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, V. A.

    2008-01-01

There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate return-stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
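
    A stripped-down version of this kind of Monte Carlo can be sketched as follows. The site geometry, the single point target, the striking-distance law r = 10·I^0.65, and the lognormal current parameters are all assumptions for illustration, not the tool's actual inputs.

```python
import numpy as np

# Minimal sketch of the described Monte Carlo: uniform random leader origins over the
# site, lognormal return-stroke peak currents, and electrogeometric attachment with an
# assumed striking distance r = 10 * I^0.65. All geometry and parameters hypothetical.
rng = np.random.default_rng(1)
n_flashes = 10_000
half = 500.0                                   # square site half-size, m
x = rng.uniform(-half, half, n_flashes)        # leader origin, uniform in x
y = rng.uniform(-half, half, n_flashes)        # and in y
current = rng.lognormal(mean=np.log(31.0), sigma=0.48, size=n_flashes)  # peak current, kA

strike_dist = 10.0 * current ** 0.65           # striking distance, m

# crude attachment rule: a vertically descending leader attaches to the object at the
# origin if it passes within the striking distance; otherwise it reaches the ground
hit = np.hypot(x, y) <= strike_dist
p_hit = hit.mean()
print(p_hit, current[hit].mean(), current.mean())
```

    Note that the strikes attached to the object skew toward higher peak currents, since larger currents carry larger striking distances; this is why the tool reports a peak current distribution conditioned on attachment rather than the raw lognormal.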

  20. Synthesis and characterization of magnetic poly(divinyl benzene)/Fe3O4, C/Fe3O4/Fe, and C/Fe onionlike fullerene micrometer-sized particles with a narrow size distribution.

    PubMed

    Snovski, Ron; Grinblat, Judith; Margel, Shlomo

    2011-09-06

    Magnetic poly(divinyl benzene)/Fe(3)O(4) microspheres with a narrow size distribution were produced by entrapping the iron pentacarbonyl precursor within the pores of uniform porous poly(divinyl benzene) microspheres prepared in our laboratory, followed by the decomposition in a sealed cell of the entrapped Fe(CO)(5) particles at 300 °C under an inert atmosphere. Magnetic onionlike fullerene microspheres with a narrow size distribution were produced by annealing the obtained PDVB/Fe(3)O(4) particles at 500, 600, 800, and 1100 °C, respectively, under an inert atmosphere. The formation of carbon graphitic layers at low temperatures such as 500 °C is unique and probably obtained because of the presence of the magnetic iron nanoparticles. The annealing temperature allowed control of the composition, size, size distribution, crystallinity, porosity, and magnetic properties of the produced magnetic microspheres. © 2011 American Chemical Society

  1. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  2. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
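
    The central transformation point can be verified numerically: samples uniform in one state space are not uniform after a nonlinear reparameterization unless the Jacobian is carried along. The mapping s = r² below is purely illustrative, not the admissible-region transformation itself.

```python
import numpy as np

# Uniform samples in r-space, pushed through the nonlinear map s = r**2.
# Analytically, P(S <= s) = P(R <= sqrt(s)) = sqrt(s), so the transformed density
# p(s) = 1 / (2 sqrt(s)) is strongly non-uniform even though p(r) was flat.
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 200_000)    # uniform samples in r-space
s = r ** 2                            # the same samples in s-space

counts, edges = np.histogram(s, bins=20, range=(0.0, 1.0))
frac = counts / s.size                          # empirical bin probabilities
expected = np.sqrt(edges[1:]) - np.sqrt(edges[:-1])  # from the analytic CDF sqrt(s)
print(frac[0], frac[-1])              # probability mass piles up near s = 0
```

    The empirical bin masses match the Jacobian-weighted prediction, which is exactly why an admissible region treated as a uniform PDF in one state space cannot simply be reused as uniform in another.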

  3. Impact of uniform electrode current distribution on ETF

    NASA Technical Reports Server (NTRS)

    Bents, D. J.

    1982-01-01

The design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution are examined, and the alternate consolidation designs that result are presented and compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is given for comparing the requirements of the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.

  4. A blueprint for demonstrating quantum supremacy with superconducting qubits.

    PubMed

    Neill, C; Roushan, P; Kechedzhi, K; Boixo, S; Isakov, S V; Smelyanskiy, V; Megrant, A; Chiaro, B; Dunsworth, A; Arya, K; Barends, R; Burkett, B; Chen, Y; Chen, Z; Fowler, A; Foxen, B; Giustina, M; Graff, R; Jeffrey, E; Huang, T; Kelly, J; Klimov, P; Lucero, E; Mutus, J; Neeley, M; Quintana, C; Sank, D; Vainsencher, A; Wenner, J; White, T C; Neven, H; Martinis, J M

    2018-04-13

    A key step toward demonstrating a quantum system that can address difficult problems in physics and chemistry will be performing a computation beyond the capabilities of any classical computer, thus achieving so-called quantum supremacy. In this study, we used nine superconducting qubits to demonstrate a promising path toward quantum supremacy. By individually tuning the qubit parameters, we were able to generate thousands of distinct Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits increases, the system continues to explore the exponentially growing number of states. Extending these results to a system of 50 qubits has the potential to address scientific questions that are beyond the capabilities of any classical computer. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
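
    The "universal distribution" of output probabilities mentioned above is commonly identified with the Porter-Thomas law, which can be sketched with a Haar-random state, approximated here by a normalized complex Gaussian vector (an illustration of the statistics, not of the experiment's Hamiltonian evolutions).

```python
import numpy as np

# Approximate a Haar-random 9-qubit state by a normalized complex Gaussian vector.
# Its measurement probabilities p, rescaled as N*p, are close to exponentially
# distributed (Porter-Thomas), the signature of uniform sampling of Hilbert space.
rng = np.random.default_rng(7)
n_qubits = 9                          # as in the nine-qubit experiment
dim = 2 ** n_qubits
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)            # normalized random state
p = np.abs(psi) ** 2                  # output probabilities, summing to 1

scaled = dim * p                      # Porter-Thomas: P(scaled > t) ~ exp(-t)
print(scaled.mean(), np.mean(scaled > 1.0))   # mean is 1; the tail fraction
# above 1 is close to exp(-1) ~ 0.37
```

    Comparing measured output statistics against this exponential benchmark is one way such experiments check that the evolutions explore the full Hilbert space.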

  5. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

    PubMed

    Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

    2011-03-01

During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the linking probability of a uniform random polygon of length n that partially overlaps a fixed polygon is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter [Formula: see text] such that [Formula: see text] indicates no overlapping and [Formula: see text] indicates total overlapping. We propose that this dependence relation may be modeled as f (ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when [Formula: see text] is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.

  6. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  7. A Probabilistic Framework for Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.

Quantification and propagation of uncertainties in cyber attacker payoffs is a key aspect within multiplayer, stochastic security games. These payoffs may represent penalties or rewards associated with player actions and are subject to various sources of uncertainty, including: (1) cyber-system state, (2) attacker type, (3) choice of player actions, and (4) cyber-system state transitions over time. Past research has primarily focused on representing defender beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and mathematical intervals. For cyber-systems, probability distributions may help address statistical (aleatory) uncertainties where the defender may assume inherent variability or randomness in the factors contributing to the attacker payoffs. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as generalizations of probability boxes. This paper explores the mathematical treatment of such mixed payoff uncertainties. A conditional probabilistic reasoning approach is adopted to organize the dependencies between a cyber-system’s state, attacker type, player actions, and state transitions. This also enables the application of probabilistic theories to propagate various uncertainties in the attacker payoffs. An example implementation of this probabilistic framework and resulting attacker payoff distributions are discussed. A goal of this paper is also to highlight this uncertainty quantification problem space to the cyber security research community and encourage further advancements in this area.

  8. Underestimating extreme events in power-law behavior due to machine-dependent cutoffs

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo

    2014-11-01

    Power-law distributions are typical macroscopic features occurring in almost all complex systems observable in nature. As a result, researchers in quantitative analyses must often generate random synthetic variates obeying power-law distributions. The task is usually performed through standard methods that map uniform random variates into the desired probability space. Whereas all these algorithms are theoretically solid, in this paper we show that they are subject to severe machine-dependent limitations. As a result, two dramatic consequences arise: (i) the sampling in the tail of the distribution is not random but deterministic; (ii) the moments of the sample distribution, which are theoretically expected to diverge as functions of the sample sizes, converge instead to finite values. We provide quantitative indications for the range of distribution parameters that can be safely handled by standard libraries used in computational analyses. Whereas our findings indicate possible reinterpretations of numerical results obtained through flawed sampling methodologies, they also pave the way for the search for a concrete solution to this central issue shared by all quantitative sciences dealing with complexity.
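
    The mechanism behind the machine-dependent cutoff can be sketched with the standard inverse-transform generator. For p(x) ∝ x^(−α) with x ≥ xmin, the map x = xmin·u^(−1/(α−1)) turns uniform variates u ∈ (0,1] into power-law variates; because double-precision uniform variates live on a grid of spacing 2⁻⁵³, u cannot fall arbitrarily close to 0, so the sampled tail is truncated at a deterministic maximum value.

```python
import numpy as np

# Inverse-transform sampling of a power law p(x) ~ x^(-alpha), x >= xmin, and the
# hard tail cutoff imposed by the double-precision grid of the uniform generator.
alpha, xmin = 2.5, 1.0
rng = np.random.default_rng(3)
u = 1.0 - rng.random(1_000_000)           # shift [0, 1) to (0, 1] to avoid u = 0
x = xmin * u ** (-1.0 / (alpha - 1.0))    # power-law distributed samples

# rng.random() returns multiples of 2**-53, so the smallest possible u is 2**-53,
# and no sample can ever exceed this deterministic cutoff:
u_min = 2.0 ** -53
x_cutoff = xmin * u_min ** (-1.0 / (alpha - 1.0))
print(x.max(), x_cutoff)                  # every sample sits below the hard cutoff
```

    For α = 2.5 the cutoff is about 2^35.3 ≈ 4×10^10 in units of xmin; beyond it the tail is simply never sampled, which is the deterministic-tail effect the abstract describes.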

  9. Quantum illumination for enhanced detection of Rayleigh-fading targets

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver—which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase—fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801]—whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation—outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(Mκ̄NS/NB) when Mκ̄NS/NB ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, NS ≪ 1 its brightness, κ̄ the target return's average intensity, and NB the background light's brightness.

  10. Rapid learning of visual ensembles.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-02-01

    We recently demonstrated that observers are capable of encoding not only summary statistics, such as mean and variance of stimulus ensembles, but also the shape of the ensembles. Here, for the first time, we show the learning dynamics of this process, investigate the possible priors for the distribution shape, and demonstrate that observers are able to learn more complex distributions, such as bimodal ones. We used speeding and slowing of response times between trials (intertrial priming) in visual search for an oddly oriented line to assess internal models of distractor distributions. Experiment 1 demonstrates that two repetitions are sufficient for enabling learning of the shape of uniform distractor distributions. In Experiment 2, we compared Gaussian and uniform distractor distributions, finding that following only two repetitions Gaussian distributions are represented differently than uniform ones. Experiment 3 further showed that when distractor distributions are bimodal (with a 30° distance between two uniform intervals), observers initially treat them as uniform, and only with further repetitions do they begin to treat the distributions as bimodal. In sum, observers do not have strong initial priors for distribution shapes and quickly learn simple ones but have the ability to adjust their representations to more complex feature distributions as information accumulates with further repetitions of the same distractor distribution.

  11. Uniform irradiation of irregularly shaped cavities for photodynamic therapy.

    PubMed

    Rem, A I; van Gemert, M J; van der Meulen, F W; Gijsbers, G H; Beek, J F

    1997-03-01

    It is difficult to achieve a uniform light distribution in irregularly shaped cavities. We have conducted a study on the use of hollow 'integrating' moulds for more uniform light delivery of photodynamic therapy in irregularly shaped cavities such as the oral cavity. Simple geometries such as a cubical box, a sphere, a cylinder and a 'bottle-neck' geometry have been investigated experimentally and the results have been compared with computed light distributions obtained using the 'radiosity method'. A high reflection coefficient of the mould and the best uniform direct irradiance possible on the inside of the mould were found to be important determinants for achieving a uniform light distribution.

  12. Expected Number of Fixed Points in Boolean Networks with Arbitrary Topology.

    PubMed

    Mori, Fumito; Mochizuki, Atsushi

    2017-07-14

    Boolean network models describe genetic, neural, and social dynamics in complex networks, where the dynamics depend generally on network topology. Fixed points in a genetic regulatory network are typically considered to correspond to cell types in an organism. We prove that the expected number of fixed points in a Boolean network, with Boolean functions drawn from probability distributions that are not required to be uniform or identical, is one, independent of network topology, provided only that a feedback arc set satisfies a stochastic neutrality condition. We also demonstrate that the expected number is increased by the predominance of positive feedback in a cycle.

  13. Phase transition in nonuniform Josephson arrays: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lozovik, Yu. E.; Pomirchy, L. M.

    1994-01-01

    A disordered 2D system with Josephson interactions is considered. The disordered XY model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants J_ij are equal to J with probability p or to zero (the bond-percolation problem); (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function and helicity modulus is analyzed. The phase diagram of the diluted system in the T_c-p plane is obtained.
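
The diluted system described above (couplings J_ij equal to J with probability p, zero otherwise) lends itself to a compact Metropolis simulation. The following is a minimal sketch of such a bond-diluted XY model; the lattice size, temperature, and dilution probability are illustrative choices, not the parameters used by the authors:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16                       # linear lattice size (illustrative)
p = 0.7                      # bond-occupation probability
J, T = 1.0, 0.9              # coupling strength and temperature

theta = rng.uniform(0.0, 2 * np.pi, (L, L))        # XY phases on the lattice
# Random dilution: horizontal and vertical bonds present with probability p.
Jx = J * (rng.random((L, L)) < p)
Jy = J * (rng.random((L, L)) < p)

def local_energy(th, i, j):
    """Josephson energy of site (i, j) with its four neighbours (periodic)."""
    e = -Jx[i, j] * np.cos(th[i, j] - th[i, (j + 1) % L])
    e -= Jx[i, j - 1] * np.cos(th[i, j] - th[i, j - 1])
    e -= Jy[i, j] * np.cos(th[i, j] - th[(i + 1) % L, j])
    e -= Jy[i - 1, j] * np.cos(th[i, j] - th[i - 1, j])
    return e

def sweep(th):
    """One Metropolis sweep: L*L single-site update attempts."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        old = th[i, j]
        e0 = local_energy(th, i, j)
        th[i, j] = rng.uniform(0.0, 2 * np.pi)
        de = local_energy(th, i, j) - e0
        if rng.random() >= np.exp(min(0.0, -de / T)):
            th[i, j] = old                          # reject the move
    return th

for _ in range(100):
    theta = sweep(theta)
```

Observables such as the helicity modulus or phase correlation function would be accumulated over many such sweeps after equilibration.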

  14. Hydrodynamics of the Polyakov line in SU(N_c) Yang-Mills

    DOE PAGES

    Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail

    2015-12-08

    We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite N_c for Yang-Mills theory in even and odd space-time dimensions. The hydrostatic solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the deconfined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of N_c, and are consistent with the string model results at N_c = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(N_c) bubble using a piece-wise sound wave is suggested.

  15. Nonlinear dynamic evolution and control in CCFN with mixed attachment mechanisms

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Jianping; Han, Dun

    2017-01-01

    In recent years, wireless communication has played an important role in our lives. Cooperative communication, in which mobile stations with single antennas share their antennas with each other to form a virtual MIMO antenna system, promises a diversity gain for future wireless communication. In this paper, a fitness model of an evolving network based on complex networks with mixed attachment mechanisms is devised in order to study an actual network, the CCFN (cooperative communication fitness network). Firstly, the evolution of the CCFN is given by four cases with different probabilities, and the rate equations of the node degrees are presented to analyze the evolution of the CCFN. Secondly, the degree distribution is analyzed by solving the rate equation and by numerical simulation for four examples of fitness distributions: power-law, uniform, exponential and Rayleigh. Finally, the robustness of the CCFN under random and intentional attack is studied by numerical simulation with the four fitness distributions, analyzing the effects on the degree distribution, average path length and average degree. The results of this paper offer insights for building CCFN systems in order to allocate communication resources.

  16. Disentangling rotational velocity distribution of stars

    NASA Astrophysics Data System (ADS)

    Curé, Michel; Rial, Diego F.; Cassetti, Julia; Christen, Alejandra

    2017-11-01

    Rotational speed is an important physical parameter of stars: knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. However, rotational speed cannot be measured directly; the observable is the projected velocity v sin(i), the product of the rotational speed and the sine of the inclination angle. The problem can be described by a Fredholm integral of the first kind. A new method (Curé et al. 2014) to deconvolve this inverse problem and obtain the cumulative distribution function of stellar rotational velocities is based on the work of Chandrasekhar & Münch (1950). Another method to obtain the probability distribution function is the Tikhonov regularization method (Christen et al. 2016). The proposed methods can also be applied to the mass-ratio distribution of extrasolar planets and brown dwarfs (in binary systems, Curé et al. 2015). For stars in a cluster, where all members are gravitationally bound, the standard assumption that rotational axes are uniformly distributed over the sphere is questionable. On the basis of the proposed techniques, a simple approach to model this anisotropy of rotational axes has been developed, making it possible to ``disentangle'' simultaneously both the rotational speed distribution and the orientation of rotational axes.

  17. Osmium isotope evidence for uniform distribution of s- and r-process components in the early solar system

    NASA Astrophysics Data System (ADS)

    Yokoyama, Tetsuya; Rai, Vinai K.; Alexander, Conel M. O'D.; Lewis, Roy S.; Carlson, Richard W.; Shirey, Steven B.; Thiemens, Mark H.; Walker, Richard J.

    2007-07-01

    We have precisely measured Os isotopic ratios in bulk samples of five carbonaceous, two enstatite and two ordinary chondrites, as well as the acid-resistant residues of three carbonaceous chondrites. All bulk meteorite samples have uniform 186Os/188Os, 188Os/189Os and 190Os/189Os ratios when decomposed by an alkaline fusion total digestion technique. These ratios are also identical to estimates for Os in the bulk silicate Earth. Despite Os isotopic homogeneity at the bulk meteorite scale, acid-insoluble residues of three carbonaceous chondrites are enriched in 186Os, 188Os and 190Os, isotopes with major contributions from stellar s-process nucleosynthesis. Conversely, these isotopes are depleted in acid-soluble portions of the same meteorites. The complementary enriched and depleted fractions indicate the presence of at least two types of Os-rich components in these meteorites, one enriched in Os isotopes produced by s-process nucleosynthesis, the other enriched in isotopes produced by the r-process. Presolar silicon carbide is the most probable host for the s-process-enriched Os present in the acid-insoluble residues. Because the enriched and depleted components present in these meteorites are combined in proportions resulting in a uniform chondritic/terrestrial composition, disparate components must have been thoroughly mixed within the solar nebula at the time of the initiation of planetesimal accretion. This conclusion contrasts with evidence from the isotopic compositions of some other elements (e.g., Sm, Nd, Ru, Mo) that suggests heterogeneous distribution of matter with disparate nucleosynthetic sources within the nebula.

  18. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  19. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent and identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under the squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed by using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.

  20. Algorithms to eliminate the influence of non-uniform intensity distributions on wavefront reconstruction by quadri-wave lateral shearing interferometers

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing

    2017-11-01

    In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.

  1. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
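
The time-independent part of the model treats earthquake occurrence as a Poisson process, so a 30-year probability follows from an annual rate λ via P = 1 − exp(−λT). A minimal sketch of that conversion (the rate used below is illustrative, not a UCERF 2 value):

```python
import math

def poisson_prob(annual_rate, years=30.0):
    """Probability of at least one event in `years` for a Poisson process."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative only: a hypothetical mean recurrence of one event per 65 years
# gives a 30-yr probability of about 0.37.
p30 = poisson_prob(1.0 / 65.0)
```

The time-dependent probabilities quoted in the abstract additionally condition on the date of the last event via stress-renewal statistics, which this Poisson sketch does not capture.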

  2. Uranium concentration and distribution in six peridotite inclusions of probable mantle origin

    NASA Technical Reports Server (NTRS)

    Haines, E. L.; Zartman, R. E.

    1973-01-01

    Fission-track activation was used to investigate uranium concentration and distribution in peridotite inclusions in alkali basalt from six localities. Whole-rock uranium concentrations range from 24 to 82 ng/g. Most of the uranium is uniformly distributed in the major silicate phases - olivine, orthopyroxene, and clinopyroxene. Chromian spinels may be classified into two groups on the basis of their uranium content - those which have less than 10 ng/g and those which have 100 to 150 ng/g U. In one sample accessory hydrous phases, phlogopite and hornblende, contain 130 and 300 ng/g U, respectively. The contact between the inclusion and the host basalt is usually quite sharp. Glassy or microcrystalline veinlets found in some samples contain more than 1 microgram/g. Very little uranium is associated with microcrystals of apatite. These results agree with some earlier investigators, who have concluded that suboceanic peridotites contain too little uranium to account for normal oceanic heat flow by conduction alone.

  3. Laboratory Characterization and Modeling of a Near-Infrared Enhanced Photomultiplier Tube

    NASA Technical Reports Server (NTRS)

    Biswas, A.; Farr, W. H.

    2003-01-01

    The photon-starved channel for optical communications from deep space requires the development of detector technology that can achieve photon-counting sensitivities with high bandwidth. In this article, a near-infrared enhanced photomultiplier tube (PMT) with a quantum efficiency of 0.08 at a 1.06-μm wavelength is characterized in the laboratory. A Polya distribution model is used to compute the probability distribution function of the emitted secondary photoelectrons from the PMT. The model is compared with measured pulse-height distributions with reasonable agreement. The model accounts for realistic device parameters, such as the individual dynode stage gains and a shape parameter that is representative of the spatial uniformity of response across the photocathode and dynodes. Bit-error-rate (BER) measurements also are presented for 4- and 8-ary pulse-position modulation (PPM) schemes with data rates of 20 to 30 Mb/s. A BER of 10^-2 is obtained for a mean of 8 detected photons.

  4. High-energy Electron Scattering and the Charge Distributions of Selected Nuclei

    DOE R&D Accomplishments Database

    Hahn, B.; Ravenhall, D. G.; Hofstadter, R.

    1955-10-01

    Experimental results are presented of electron scattering by Ca, V, Co, In, Sb, Hf, Ta, W, Au, Bi, Th, and U, at 183 Mev and (for some of the elements) at 153 Mev. For those nuclei for which asphericity and inelastic scattering are absent or unimportant, i.e., Ca, V, Co, In, Sb, Au, and Bi, a partial wave analysis of the Dirac equation has been performed in which the nuclei are represented by static, spherically symmetric charge distributions. Smoothed uniform charge distributions have been assumed; these are characterized by a constant charge density in the central region of the nucleus, with a smoothed-out surface. Essentially two parameters can be determined, related to the radius and to the surface thickness. An examination of the Au experiments shows that the functional forms of the surface are not important, and that the charge density in the central regions is probably fairly flat, although it cannot be determined very accurately.

  5. Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, V. A.

    2008-01-01

    There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origins of downward-propagating leaders and a lognormal distribution to generate the corresponding return-stroke peak currents. Downward leaders propagate vertically downward, and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for N years with an assumed ground flash density, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
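
The two generators and the electrogeometric attachment rule described above can be sketched as follows. This is a simplified illustration, not the authors' tool: the striking-distance form r = 10·I^0.65 and the lognormal parameters (median ≈ 31 kA) are common values from the lightning literature, the object layout is hypothetical, and only horizontal distance from the vertical leader path is considered:

```python
import numpy as np

rng = np.random.default_rng(42)

# Objects of interest: (x, y) positions in metres -- illustrative geometry only.
objects = {"tower": (0.0, 0.0), "pad": (80.0, 30.0)}

def striking_distance(peak_kA):
    """Electrogeometric striking distance r = 10 * I^0.65 (m, I in kA)."""
    return 10.0 * peak_kA ** 0.65

def simulate(n_flashes, half_width=500.0, mu=np.log(31.1), sigma=0.48):
    """Tally direct attachments per object for vertically descending leaders."""
    hits = {name: 0 for name in objects}
    currents = {name: [] for name in objects}
    for _ in range(n_flashes):
        x, y = rng.uniform(-half_width, half_width, 2)  # uniform leader origin
        peak = rng.lognormal(mu, sigma)                 # lognormal peak current
        r = striking_distance(peak)
        # Attach to the closest object if it lies within the striking
        # distance; otherwise the flash terminates on the ground.
        name, d = min(((n, np.hypot(x - ox, y - oy))
                       for n, (ox, oy) in objects.items()),
                      key=lambda t: t[1])
        if d <= r:
            hits[name] += 1
            currents[name].append(peak)
    return {n: hits[n] / n_flashes for n in hits}, currents

probs, currents = simulate(20_000)
```

Per-object attachment probabilities scale to events per year once multiplied by the assumed ground flash density and simulation area.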

  6. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
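
The basic exchange rule analyzed above (one dollar from one uniformly chosen agent to another per time step) is easy to simulate; the population size and step count below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def exchange_model(n_agents=2_000, avg_money=10, steps=200_000):
    """Each step: a uniformly chosen giver passes $1 to a uniformly chosen taker."""
    money = np.full(n_agents, avg_money)
    givers = rng.integers(0, n_agents, steps)
    takers = rng.integers(0, n_agents, steps)
    for i, j in zip(givers, takers):
        if money[i] > 0:              # an agent with no money cannot give
            money[i] -= 1
            money[j] += 1
    return money

money = exchange_model()
# Total money is conserved; the histogram of `money` approaches the
# exponential (Boltzmann-Gibbs) form P(m) ~ exp(-m / T) with T = avg_money.
```

The graph-based variant proved in the paper replaces the uniform choice of the taker with a uniformly chosen neighbour of the giver.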

  7. The dose response relation for rat spinal cord paralysis analyzed in terms of the effective size of the functional subunit

    NASA Astrophysics Data System (ADS)

    Adamus-Górka, Magdalena; Mavroidis, Panayiotis; Brahme, Anders; Lind, Bengt K.

    2008-11-01

    Radiobiological models for estimating normal tissue complication probability (NTCP) are increasingly used in order to quantify or optimize the clinical outcome of radiation therapy. A good NTCP model should fulfill at least the following two requirements: (a) it should predict the sigmoid shape of the corresponding dose-response curve and (b) it should accurately describe the probability of a specified response for arbitrary non-uniform dose delivery for a given endpoint as accurately as possible, i.e. predict the volume dependence. In recent studies of the volume effect of a rat spinal cord after irradiation with narrow and broad proton beams the authors claim that none of the existing NTCP models is able to describe their results. Published experimental data have been used here to try to quantify the change in the effective dose (D50) causing 50% response for different field sizes. The present study was initiated to describe the induction of white matter necrosis in a rat spinal cord after irradiation with narrow proton beams in terms of the mean dose to the effective volume of the functional subunit (FSU). The physically delivered dose distribution was convolved with a function describing the effective size or, more accurately, the sensitivity distribution of the FSU to obtain the effective mean dose deposited in it. This procedure allows the determination of the mean D50 value of the FSUs of a certain size which is of interest for example if the cell nucleus of the oligodendrocyte is the sensitive target. Using the least-squares method to compare the effective doses for different sizes of the functional subunits with the experimental data the best fit was obtained with a length of about 9 mm. For the non-uniform dose distributions an effective FSU length of 8 mm gave the optimal fit with the probit dose-response model. 
The method could also be used to interpret the so-called bath-and-shower experiments, where the heterogeneous dose delivery was used in the convolution process. The assumption of an effective FSU size is consistent with most of the effects seen when different portions of the rat spinal cord are irradiated to different doses. The effective FSU length from these experiments is about 8.5 ± 0.5 mm. This length could be interpreted as an effective size of the functional subunits in a rat spinal cord, where multiple myelin sheaths are connected by a single oligodendrocyte and repair is limited by the range of oligodendrocyte progenitor cell diffusion. From the experimental data it was even possible to suggest an effective FSU sensitivity distribution that is more plausible than a uniform one.
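
The convolution step described above (the physical dose profile smoothed by the effective FSU sensitivity) can be sketched as a moving average along the cord. The grid, field width, and dose below are illustrative, and a uniform FSU sensitivity is assumed in place of the fitted sensitivity distribution:

```python
import numpy as np

def effective_dose(dose, dx_mm, fsu_len_mm=8.0):
    """Mean dose seen by an FSU of length fsu_len_mm (uniform sensitivity)."""
    n = max(1, int(round(fsu_len_mm / dx_mm)))
    kernel = np.ones(n) / n
    return np.convolve(dose, kernel, mode="same")

dx = 0.1                                    # grid spacing along the cord, mm
z = np.arange(0.0, 40.0, dx)                # position, mm
dose = np.where(np.abs(z - 20.0) < 0.95, 80.0, 0.0)   # ~2 mm narrow field
eff = effective_dose(dose, dx)
# Averaging over 8 mm lowers the peak roughly by the ratio of field width to
# FSU length (about 80 * 2/8 = 20), consistent with narrow fields requiring
# much higher physical doses to reach the same effective FSU dose.
```

The mean effective dose at 50% response then plays the role of D50 for the chosen FSU size.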

  8. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. 
The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
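
The quantities involved can be sketched as follows. The paper's contribution is estimating the distribution of threshold probabilities from the data itself; this sketch simply takes the weights as given, and uniform weights recover the plain area under the net-benefit curve:

```python
import numpy as np

def net_benefit(y_true, risk, pt):
    """Net benefit of treating everyone with predicted risk >= threshold pt."""
    n = len(y_true)
    treat = risk >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * pt / (1.0 - pt)

def weighted_area(y_true, risk, thresholds, weights):
    """Weighted area under the net-benefit curve (trapezoidal rule)."""
    nb = np.array([net_benefit(y_true, risk, t) for t in thresholds])
    w = np.asarray(weights, dtype=float)
    w = w / (np.sum((w[1:] + w[:-1]) * np.diff(thresholds)) / 2.0)  # normalise
    y = nb * w
    return float(np.sum((y[1:] + y[:-1]) * np.diff(thresholds)) / 2.0)

# Toy, well-calibrated data; names and sizes are illustrative.
rng = np.random.default_rng(0)
risk = rng.uniform(0, 1, 500)
y = (rng.uniform(0, 1, 500) < risk).astype(int)
ts = np.linspace(0.05, 0.5, 46)
area = weighted_area(y, risk, ts, np.ones_like(ts))
```

Comparing two models then reduces to comparing their weighted areas over the clinically relevant threshold range.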

  9. Spatial and temporal distribution of trunk-injected imidacloprid in apple tree canopies.

    PubMed

    Aćimović, Srđan G; VanWoerkom, Anthony H; Reeb, Pablo D; Vandervoort, Christine; Garavaglia, Thomas; Cregg, Bert M; Wise, John C

    2014-11-01

    Pesticide use in orchards creates drift-driven pesticide losses which contaminate the environment. Trunk injection of pesticides as a target-precise delivery system could greatly reduce pesticide losses. However, pesticide efficiency after trunk injection is associated with the underinvestigated spatial and temporal distribution of the pesticide within the tree crown. This study quantified the spatial and temporal distribution of trunk-injected imidacloprid within apple crowns after trunk injection using one, two, four or eight injection ports per tree. The spatial uniformity of imidacloprid distribution in apple crowns significantly increased with more injection ports. Four ports allowed uniform spatial distribution of imidacloprid in the crown. Uniform and non-uniform spatial distributions were established early and lasted throughout the experiment. The temporal distribution of imidacloprid was significantly non-uniform. Upper and lower crown positions did not significantly differ in compound concentration. Crown concentration patterns indicated that imidacloprid transport in the trunk occurred through radial diffusion and vertical uptake with a spiral pattern. By showing where and when a trunk-injected compound is distributed in the apple tree canopy, this study addresses a key knowledge gap in terms of explaining the efficiency of the compound in the crown. These findings allow the improvement of target-precise pesticide delivery for more sustainable tree-based agriculture. © 2014 Society of Chemical Industry.

  10. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  11. Distributed Adaptive Neural Network Output Tracking of Leader-Following High-Order Stochastic Nonlinear Multiagent Systems With Unknown Dead-Zone Input.

    PubMed

    Hua, Changchun; Zhang, Liuliu; Guan, Xinping

    2017-01-01

    This paper studies the problem of distributed output tracking consensus control for a class of high-order stochastic nonlinear multiagent systems with unknown nonlinear dead-zone under a directed graph topology. Adaptive neural networks are used to approximate the unknown nonlinear functions, and a new inequality is used to deal with the completely unknown dead-zone input. Then, we design the controllers based on the backstepping method and the dynamic surface control technique. It is strictly proved, based on Lyapunov stability theory, that the resulting closed-loop system is stable in probability in the sense of semiglobally uniform ultimate boundedness and that the tracking errors between the leader and the followers approach a small residual set. Finally, two simulation examples are presented to show the effectiveness and the advantages of the proposed techniques.

  12. Neutron transmutation doped Ge bolometers

    NASA Technical Reports Server (NTRS)

    Haller, E. E.; Kreysa, E.; Palaio, N. P.; Richards, P. L.; Rodder, M.

    1983-01-01

    Some conclusions reached are as follows. Neutron transmutation doping (NTD) of high-quality Ge single crystals provides precise control of doping concentration and uniformity. The resistivity can be tailored to any given bolometer operating temperature down to 0.1 K and probably lower. The excellent uniformity is advantageous for detector array development.

  13. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.

    PubMed

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them; the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to the conventional methods, our approach solely depends on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.
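
For intuition about the Wasserstein distance between probability measures, the one-dimensional case reduces to comparing sorted samples (quantile functions). This is only a toy illustration, not the geodesic power Voronoi construction on surfaces used in the paper:

```python
import numpy as np

def wasserstein_1d(x, y):
    """W1 distance between two equal-size empirical measures on the line."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])
d = wasserstein_1d(a, b)   # each unit of mass moves by 1, so d = 1.0
```

On a surface, the analogous computation requires solving an optimal transport problem between the area-distortion measures on the canonical domain, which is what the paper's power-diagram method does.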

  14. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis

    PubMed Central

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincaré uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them, and the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared with conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, and thus intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method. PMID:26221691
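    In one dimension the Wasserstein distance used above has a compact closed form: the optimal transport cost between two measures on the real line is the area between their cumulative distribution functions. The following toy sketch (a 1D analogue for intuition only, not the paper's Riemannian method based on geodesic power Voronoi diagrams) computes the 1-Wasserstein distance between two discrete measures on a shared support:

```python
import numpy as np

def wasserstein_1d(support, p, q):
    """1-Wasserstein distance between discrete probability measures p and q
    on the same sorted 1D support: the integral of |CDF_p - CDF_q|."""
    support = np.asarray(support, dtype=float)
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))[:-1]
    return float(np.sum(cdf_gap * np.diff(support)))

# Moving a unit point mass from 0 to 1 costs exactly 1.
d = wasserstein_1d([0.0, 1.0], [1.0, 0.0], [0.0, 1.0])
```

    Identical measures give distance zero, and the distance grows with how far mass must be moved, which is why it serves as a shape-dissimilarity metric.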

  15. Accuracy analysis of automodel solutions for Lévy flight-based transport: from resonance radiative transfer to a simple general model

    NASA Astrophysics Data System (ADS)

    Kukushkin, A. B.; Sdvizhenskii, P. A.

    2017-12-01

    The results of accuracy analysis of automodel solutions for Lévy flight-based transport on a uniform background are presented. These approximate solutions have been obtained for Green’s function of the following equations: the non-stationary Biberman-Holstein equation for three-dimensional (3D) radiative transfer in plasma and gases, for various (Doppler, Lorentz, Voigt and Holtsmark) spectral line shapes, and the 1D transport equation with a simple long-tailed step-length probability distribution function with various power-law exponents. The results suggest the possibility of substantial extension of the developed method of automodel solution to other fields far beyond physics.
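    As a concrete illustration of the long-tailed step-length model in this abstract, a power-law (Pareto-type) step-length PDF can be sampled by inverse-transform sampling. The exponent alpha and the unit lower cutoff below are illustrative assumptions, not parameters from the paper:

```python
import random

def levy_steps(alpha, n, seed=0):
    """Sample n step lengths from the long-tailed PDF
    p(l) = alpha * l**(-(1 + alpha)) for l >= 1, via the inverse transform
    l = u**(-1/alpha) with u uniform on (0, 1]."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

steps = levy_steps(alpha=1.5, n=10000)
```

    For alpha ≤ 2 the step-length variance diverges, which is what produces superdiffusive Lévy-flight transport rather than ordinary diffusion.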

  16. Intensity variation study of the radiation field in a mammographic system using thermoluminescent dosimeters TLD-900 (CaSO4:Dy)

    NASA Astrophysics Data System (ADS)

    Corrêa, E. L.; Silva, J. O.; Vivolo, V.; Potiens, M. P. A.; Daros, K. A. C.; Medeiros, R. B.

    2014-02-01

    This study presents the intensity variation of the radiation field in a mammographic system, measured using the thermoluminescent dosimeter TLD-900 (CaSO4:Dy). These TLDs were calibrated and characterized in an industrial X-ray system used for instrument calibration, in the energy range used in mammography. They were distributed in a matrix of 19 rows and five columns, covering an area of 18 cm×8 cm in the center of the radiation field on the clinical equipment. The results showed an intensity variation that is probably explained by the non-uniformity of the field due to the heel effect.

  17. A statistical model for radar images of agricultural scenes

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.

  18. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    NASA Astrophysics Data System (ADS)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal steady-state enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states. This corresponds to the maximal Shannon information entropy. By means of stability analysis, it is also demonstrated that maximal density of entropy production in this enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
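    The link between the most uniform enzyme-state distribution and maximal Shannon information entropy follows from the fact that, over a fixed number of states, entropy is maximised by the uniform distribution. A minimal numerical check (with made-up four-state probabilities, not values for Glucose Isomerase):

```python
import math

def shannon_entropy(p):
    """Shannon information entropy H = -sum_i p_i * ln(p_i), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25] * 4           # most uniform four-state distribution
skewed = [0.7, 0.1, 0.1, 0.1]  # one dominant enzyme state
```

    The uniform distribution attains the maximum H = ln(4); any skewed distribution over the same four states has strictly lower entropy.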

  19. The first-digit frequencies in data of turbulent flows

    NASA Astrophysics Data System (ADS)

    Biau, Damien

    2015-12-01

    Considering the first significant digits (noted d) in data sets of dissipation for turbulent flows, the probability of finding a given digit (d = 1, 2, …, 9) would be 1/9 for a uniform distribution. Instead the probability closely follows Newcomb-Benford's law, namely P(d) = log10(1 + 1/d). The discrepancies between Newcomb-Benford's law and first-digit frequencies in turbulent data are analysed through Shannon's entropy. The data sets are obtained with direct numerical simulations for two types of fluid flow: an isotropic case initialized with a Taylor-Green vortex and a channel flow. Results are in agreement with Newcomb-Benford's law in nearly homogeneous cases, and the discrepancies are related to intermittent events. Thus the scale invariance of the first significant digits, which underlies Newcomb-Benford's law, seems to be related to an equilibrium turbulent state, namely one with a significant inertial range. A MATLAB/Octave program provided in the appendix allows part of the presented results to be easily replicated.
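    The first-digit comparison described above is easy to reproduce in any language; the author's MATLAB/Octave program is not reproduced here, but a Python sketch of the same check, using powers of 2 (a classic Benford-conforming sequence) as stand-in data, is:

```python
import math
from collections import Counter

def benford_probability(d):
    """Newcomb-Benford probability of leading digit d: log10(1 + 1/d)."""
    return math.log10(1.0 + 1.0 / d)

def first_digit_freqs(values):
    """Empirical relative frequency of each first significant digit."""
    digits = [int(f"{abs(v):e}"[0]) for v in values]  # leading digit via sci notation
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

freqs = first_digit_freqs([2.0 ** k for k in range(1, 1001)])
```

    The nine Benford probabilities sum to 1 (log10 of a telescoping product), and for the powers of 2 the empirical frequency of a leading 1 lands close to log10(2) ≈ 0.301.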

  20. Effects of beam irregularity on uniform scanning

    NASA Astrophysics Data System (ADS)

    Kim, Chang Hyeuk; Jang, Sea duk; Yang, Tae-Keun

    2016-09-01

    An active scanning beam delivery method has many advantages in particle beam applications. For the beam to be successfully delivered to the target volume using the active scanning technique, the dose uniformity must be considered and should be within 2.5% for therapy applications. During beam irradiation, many beam parameters affect the 2-dimensional uniformity at the target layer. A basic assumption in the beam irradiation planning stage is that the shape of the beam is symmetric and follows a Gaussian distribution. In this study, a pure Gaussian-shaped beam distribution was distorted by adding a parasitic Gaussian distribution. An appropriate uniform scanning condition was deduced by using a quantitative analysis based on the gamma value of the distorted beam and 2-dimensional uniformities.
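    The distortion studied above can be sketched in one dimension: a row of uniformly spaced Gaussian spots produces a nearly flat summed profile, and adding a hypothetical narrow parasitic component degrades the flatness. All numbers below (spot spacing, widths, parasitic fraction) are illustrative, and the simple flatness metric stands in for the paper's gamma-value analysis:

```python
import math

def scanned_profile(xs, centers, sigma, parasitic_frac=0.0, parasitic_sigma=1.0):
    """Summed dose from equally weighted Gaussian spots at the given centers,
    optionally distorted by a parasitic Gaussian component on each spot."""
    dose = []
    for x in xs:
        d = 0.0
        for c in centers:
            d += math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))
            d += parasitic_frac * math.exp(-((x - c) ** 2) / (2.0 * parasitic_sigma ** 2))
        dose.append(d)
    return dose

def flatness(dose):
    """(max - min) / (max + min) over the sampled (central) region."""
    return (max(dose) - min(dose)) / (max(dose) + min(dose))

centers = [c - 10.0 for c in range(21)]    # spots at spacing 1.0
xs = [i / 50.0 - 2.0 for i in range(201)]  # sample only the central region
flat_pure = flatness(scanned_profile(xs, centers, sigma=1.0))
flat_distorted = flatness(
    scanned_profile(xs, centers, sigma=1.0, parasitic_frac=0.3, parasitic_sigma=0.4))
```

    With spot spacing equal to sigma the pure profile is flat to numerical precision, while the narrow parasitic component introduces a ripple on the order of a percent.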

  1. Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.

    2007-04-01

    Cosmetic late effects of radiotherapy such as tissue fibrosis are increasingly regarded as being of importance. It is generally considered that the complication probability of a radiotherapy plan depends on the dose uniformity and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed decreasing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared with conventional planning methods, as a result of increased dose to areas that receive sub-prescription doses under conventional techniques.
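    For reference, the Lyman model used in this work maps a DVH-reduced dose summary (such as an equivalent uniform dose) to a complication probability through a probit function. The TD50 and m values below are placeholders, not the fibrosis parameters fitted in the paper:

```python
import math

def lyman_ntcp(eud, td50, m):
    """Lyman NTCP = Phi(t), with t = (EUD - TD50) / (m * TD50)
    and Phi the standard normal CDF (via the error function)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

    By construction the model returns 50% complication probability at EUD = TD50, and the slope parameter m controls how steeply the probability rises around that dose.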

  2. Towards an explanation of orbits in the extreme trans-Neptunian region: The effect of Milgromian dynamics

    NASA Astrophysics Data System (ADS)

    Paučo, R.

    2017-06-01

    Context. Milgromian dynamics (MD or MOND) uniquely predicts motion in a galaxy from the distribution of its stars and gas in a remarkable agreement with observations so far. In the solar system, MD predicts the existence of some possibly non-negligible dynamical effects, which can be used to constrain the freedom in MD theories. Known extreme trans-Neptunian objects (ETNOs) have their argument of perihelion, longitude of ascending node, and inclination distributed in a highly non-uniform fashion; ETNOs are bodies with perihelion distances greater than the orbit of Neptune and with semimajor axes greater than 150 au and less than 1500 au. It is as if these bodies have been systematically perturbed by some external force. Aims: We investigated a hypothesis that the puzzling orbital characteristics of ETNOs are a consequence of MD. Methods: We set up a dynamical model of the solar system incorporating the external field effect (EFE), which is anticipated to be the dominant effect of MD in the ETNOs region. We used constraints available on the strength of EFE coming from radio tracking of the Cassini spacecraft. We performed several numerical experiments, concentrating on the long-term orbital evolution of primordial (randomised) ETNOs in MD. Results: The EFE could produce distinct non-uniform distributions of the orbital elements of ETNOs that are related to the orientation of an orbit in space. If we demand that EFE is solely responsible for the detachment of Sedna and 2012 VP113, then these distributions are at odds with the currently observed statistics on ETNOs unless the EFE quadrupole strength parameter Q2 has values that are unlikely (with probability <1%) in light of the Cassini data.

  3. Vertical motion of a charged colloidal particle near an AC polarized electrode with a nonuniform potential distribution: theory and experimental evidence.

    PubMed

    Fagan, Jeffrey A; Sides, Paul J; Prieve, Dennis C

    2004-06-08

    Electroosmotic flow in the vicinity of a colloidal particle suspended over an electrode accounts for observed changes in the average height of the particle when the electrode passes alternating current at 100 Hz. The main findings are (1) electroosmotic flow provides sufficient force to move the particle and (2) a phase shift between the purely electrical force on the particle and the particle's motion provides evidence of an E2 force acting on the particle. The electroosmotic force in this case arises from the boundary condition applied when faradaic reactions occur on the electrode. The presence of a potential-dependent electrode reaction moves the likely distribution of electrical current at the electrode surface toward uniform current density around the particle. In the presence of a particle the uniform current density is associated with a nonuniform potential; thus, the electric field around the particle has a nonzero radial component along the electrode surface, which interacts with unbalanced charge in the diffuse double layer on the electrode to create a flow pattern and impose an electroosmotic-flow-based force on the particle. Numerical solutions are presented for these additional height-dependent forces on the particle as a function of the current distribution on the electrode and for the time-dependent probability density of a charged colloidal particle near a planar electrode with a nonuniform electrical potential boundary condition. The electrical potential distribution on the electrode, combined with a phase difference between the electric field in solution and the electrode potential, can account for the experimentally observed motion of particles in ac electric fields in the frequency range from approximately 10 to 200 Hz.

  4. Risk-targeted versus current seismic design maps for the conterminous United States

    USGS Publications Warehouse

    Luco, Nicolas; Ellingwood, Bruce R.; Hamburger, Ronald O.; Hooper, John D.; Kimball, Jeffrey K.; Kircher, Charles A.

    2007-01-01

    The probabilistic portions of the seismic design maps in the NEHRP Provisions (FEMA, 2003/2000/1997), and in the International Building Code (ICC, 2006/2003/2000) and ASCE Standard 7-05 (ASCE, 2005a), provide ground motion values from the USGS that have a 2% probability of being exceeded in 50 years. Under the assumption that the capacity against collapse of structures designed for these "uniform-hazard" ground motions is equal to, without uncertainty, the corresponding mapped value at the location of the structure, the probability of its collapse in 50 years is also uniform. This is not the case, however, when it is recognized that there is, in fact, uncertainty in the structural capacity. In that case, site-to-site variability in the shape of ground motion hazard curves results in a lack of uniformity. This paper explains the basis for proposed adjustments to the uniform-hazard portions of the seismic design maps currently in the NEHRP Provisions that result in uniform estimated collapse probability. For seismic design of nuclear facilities, analogous but specialized adjustments have recently been defined in ASCE Standard 43-05 (ASCE, 2005b). In support of the 2009 update of the NEHRP Provisions currently being conducted by the Building Seismic Safety Council (BSSC), herein we provide examples of the adjusted ground motions for a selected target collapse probability (or target risk). Relative to the probabilistic MCE ground motions currently in the NEHRP Provisions, the risk-targeted ground motions for design are smaller (by as much as about 30%) in the New Madrid Seismic Zone, near Charleston, South Carolina, and in the coastal region of Oregon, with relatively little (<15%) change almost everywhere else in the conterminous U.S.
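    The adjustment rests on the standard risk integral: collapse probability is the hazard curve convolved with a lognormal fragility representing uncertain capacity, so sites whose hazard curves have different shapes carry different collapse risks even at the same uniform-hazard ground motion. A numerical sketch with a made-up power-law hazard curve and placeholder fragility parameters (not the USGS curves or the paper's values):

```python
import math

def annual_collapse_rate(hazard, capacity_median, beta, a_grid):
    """Risk integral: sum over ground-motion bins of
    P(collapse | a) * |d(lambda)(a)|, with a lognormal capacity fragility
    (median capacity_median, log-standard-deviation beta)."""
    rate = 0.0
    for a0, a1 in zip(a_grid[:-1], a_grid[1:]):
        a_mid = 0.5 * (a0 + a1)
        p_col = 0.5 * (1.0 + math.erf(
            math.log(a_mid / capacity_median) / (beta * math.sqrt(2.0))))
        rate += p_col * (hazard(a0) - hazard(a1))  # hazard is decreasing in a
    return rate

hazard = lambda a: 1e-2 * a ** -2.0  # hypothetical annual exceedance rate, a in g
a_grid = [0.05 * i for i in range(2, 401)]
rate = annual_collapse_rate(hazard, capacity_median=1.0, beta=0.05, a_grid=a_grid)
rate_uncertain = annual_collapse_rate(hazard, capacity_median=1.0, beta=0.6, a_grid=a_grid)
```

    With nearly deterministic capacity the collapse rate is simply the hazard at the capacity; increasing the capacity uncertainty raises the rate because the convex hazard curve contributes more from the low-acceleration side, which is the effect the risk-targeted maps correct for.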

  5. Life prediction and mechanical reliability of NT551 silicon nitride

    NASA Astrophysics Data System (ADS)

    Andrews, Mark Jay

    The inert strength and fatigue performance of a diesel engine exhaust valve made from silicon nitride (Si3N4) ceramic were assessed. The Si3N4 characterized in this study was manufactured by Saint Gobain/Norton Industrial Ceramics and was designated as NT551. The evaluation was made utilizing a probabilistic life prediction algorithm that combined censored test specimen strength data with a Weibull distribution function and the stress field of the ceramic valve obtained from finite element analysis. The major assumptions of the life prediction algorithm are that the bulk ceramic material is isotropic and homogeneous and that the strength-limiting flaws are uniformly distributed. The results from mechanical testing indicated that NT551 was not a homogeneous ceramic and that its strength was a function of temperature, loading rate, and machining orientation. Fractographic analysis identified four different failure modes; two were inhomogeneities located throughout the bulk of NT551 and were due to processing operations. The fractographic analysis concluded that the strength degradation of NT551 observed under the temperature and loading rate test parameters was due to a change of state that occurred in its secondary phase. Pristine and engine-tested valves made from NT551 were loaded to failure and the inert strengths were obtained. Fractographic analysis of the valves identified the same four failure mechanisms as found with the test specimens. The fatigue performance and the inert strength of the Si3N4 valves were assessed from censored and uncensored test specimen strength data, respectively. The inert strength failure probability predictions were compared to the inert strength of the Si3N4 valves. The inert strength failure probability predictions were more conservative than the strength of the valves.
The lack of correlation between predicted and actual valve strength was due to the nonuniform distribution of inhomogeneities present in NT551. For the same reasons, the predicted and actual fatigue performance did not correlate well. The results of this study should not be considered a limitation of the life prediction algorithm; rather, they emphasize the requirement that ceramics be homogeneous and strength-limiting flaws uniformly distributed as a prerequisite for accurate life prediction and reliability analyses.

  6. On Voxel based Iso-Tumor Control Probability and Iso-Complication Maps for Selective Boosting and Selective Avoidance Intensity Modulated Radiotherapy.

    PubMed

    Kim, Yusung; Tomé, Wolfgang A

    2008-01-01

    Voxel based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map compared to what could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image guided treatment planning grows, voxel based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans.
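    The EUD figures quoted above come from the generalised equivalent uniform dose, which collapses a dose-volume histogram into a single dose via a power mean; for tumours the volume-effect parameter a is large and negative, so cold spots dominate the result. A sketch with hypothetical dose bins (not the study's DVHs):

```python
def geud(dvh, a):
    """Generalised EUD from a differential DVH given as (dose, fractional
    volume) pairs with volumes summing to 1: (sum_i v_i * d_i**a) ** (1/a)."""
    return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

uniform_plan = [(84.0, 1.0)]               # whole PTV at 84 Gy
boosted_plan = [(82.0, 0.8), (90.0, 0.2)]  # hypothetical selective boost
```

    A uniform plan's gEUD equals its prescription dose for any a, while for a negative a the boosted plan's gEUD sits near its coldest region, which is why a nonuniform isodose map can still correspond to a uniform TCP map.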

  7. Neuromorphic learning of continuous-valued mappings from noise-corrupted data. Application to real-time adaptive control

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Merrill, Walter C.

    1990-01-01

    The ability of feed-forward neural network architectures to learn continuous valued mappings in the presence of noise was demonstrated in relation to parameter identification and real-time adaptive control applications. An error function was introduced to help optimize parameter values such as number of training iterations, observation time, sampling rate, and scaling of the control signal. The learning performance depended essentially on the degree of embodiment of the control law in the training data set and on the degree of uniformity of the probability distribution function of the data presented to the net during the training sequence. When a control law was corrupted by noise, the fluctuations of the training data biased the probability distribution function of the training data sequence. Only if the noise contamination is minimized and the degree of embodiment of the control law is maximized can a neural net develop a good representation of the mapping and be used as a neurocontroller. A multilayer net was trained with back-error-propagation to control a cart-pole system for linear and nonlinear control laws in the presence of data processing noise and measurement noise. The neurocontroller exhibited noise-filtering properties and was found to operate more smoothly than the teacher in the presence of measurement noise.

  8. Evaluation of dripper clogging using magnetic water in drip irrigation

    NASA Astrophysics Data System (ADS)

    Khoshravesh, Mojtaba; Mirzaei, Sayyed Mohammad Javad; Shirazi, Pooya; Valashedi, Reza Norooz

    2018-06-01

    This study was performed to investigate the uniformity of water distribution and discharge variations in drip irrigation using magnetic water. Magnetic water was obtained by passing water through a strong permanent magnet connected to the feed pipeline. Two main factors (magnetic and non-magnetic water) and three salt-concentration sub-factors (well water, and well water with 150 or 300 mg L-1 of calcium carbonate added), with three replications, were applied. The effect of magnetic water on average dripper discharge was significant (P ≤ 0.05). At the final irrigation, the average dripper discharge and distribution uniformity were higher for the magnetic water than for the non-magnetic water. The magnetic water showed a significant effect (P ≤ 0.01) on the distribution uniformity of the drippers. At the first irrigation, the water distribution uniformity was almost the same for the magnetic and non-magnetic water. The use of magnetic water for drip irrigation is recommended to achieve higher uniformity.
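    A common way to quantify the distribution uniformity reported here is the low-quarter DU: the mean discharge of the lowest 25% of drippers relative to the overall mean. The sketch below uses this standard irrigation metric with invented discharges, since the paper's exact statistic and data are not given:

```python
def low_quarter_du(discharges):
    """Low-quarter distribution uniformity (%): mean of the lowest 25%
    of dripper discharges divided by the overall mean discharge."""
    q = sorted(discharges)
    low = q[: max(1, len(q) // 4)]
    return 100.0 * (sum(low) / len(low)) / (sum(q) / len(q))
```

    Perfectly equal discharges give 100%, and clogging that starves a subset of drippers pulls the low-quarter mean, and hence the DU, down.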

  9. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. 
The model parameters control the average time between events and the variation of the actual times around this average, so models can be strongly or weakly time-dependent.
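    The CDF transformation mentioned above rests on the probability integral transform: if inter-event times really come from the assumed memoryless (exponential) model, pushing them through that model's CDF yields values uniform on (0, 1), which is easy to test. A sketch with synthetic data, where the recurrence time tau is an arbitrary stand-in rather than an estimate for any real fault record:

```python
import math
import random

def cdf_transform(times, tau):
    """Apply the exponential CDF F(t) = 1 - exp(-t / tau); under a
    memoryless recurrence model the result is uniform on (0, 1)."""
    return [1.0 - math.exp(-t / tau) for t in times]

rng = random.Random(42)
tau = 100.0  # hypothetical mean recurrence interval, years
inter_event = [rng.expovariate(1.0 / tau) for _ in range(20000)]
u = cdf_transform(inter_event, tau)
```

    On real catalogs, departures of the transformed values from uniformity (e.g. via a Kolmogorov-Smirnov-type check) indicate that the memoryless model does not hold.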

  10. UNIFORMLY MOST POWERFUL BAYESIAN TESTS

    PubMed Central

    Johnson, Valen E.

    2014-01-01

    Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829

  11. Optimization of a Deep Convective Cloud Technique in Evaluating the Long-Term Radiometric Stability of MODIS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Mu, Qiaozhen; Wu, Aisheng; Xiong, Xiaoxiong; Doelling, David R.; Angal, Amit; Chang, Tiejun; Bhatt, Rajendra

    2017-01-01

    MODIS reflective solar bands are calibrated on-orbit using a solar diffuser and near-monthly lunar observations. To monitor the performance and effectiveness of the on-orbit calibrations, pseudo-invariant targets such as deep convective clouds (DCCs), Libya-4, and Dome-C are used to track the long-term stability of the MODIS Level 1B product. However, the current MODIS operational DCC technique (DCCT) simply uses the criteria set for the 0.65-μm band. We optimize several critical DCCT parameters including the 11-μm IR-band brightness temperature (BT11) threshold for DCC identification, DCC core size and uniformity to help locate DCCs at convection centers, data collection time interval, and probability distribution function (PDF) bin increment for each channel. The mode reflectances corresponding to the PDF peaks are utilized as the DCC reflectances. Results show that the BT11 threshold and time interval are most critical for the Short Wave Infrared (SWIR) bands. The Bidirectional Reflectance Distribution Function model is most effective in reducing the DCC anisotropy for the visible channels. The uniformity filters and PDF bin size have minimal impacts on the visible channels and a larger impact on the SWIR bands. The newly optimized DCCT will be used for future evaluation of MODIS on-orbit calibration by the MODIS Characterization Support Team.

  12. Effect of Patient Set-up and Respiration motion on Defining Biological Targets for Image-Guided Targeted Radiotherapy

    NASA Astrophysics Data System (ADS)

    McCall, Keisha C.

    Identification and monitoring of sub-tumor targets will be a critical step for optimal design and evaluation of cancer therapies in general and biologically targeted radiotherapy (dose-painting) in particular. Quantitative PET imaging may be an important tool for these applications. Currently radiotherapy planning accounts for tumor motion by applying geometric margins. These margins create a motion envelope to encompass the most probable positions of the tumor, while also maintaining the appropriate tumor control and normal tissue complication probabilities. This motion envelope is effective for uniform dose prescriptions where the therapeutic dose is conformed to the external margins of the tumor. However, much research is needed to establish the equivalent margins for non-uniform fields, where multiple biological targets are present and each target is prescribed its own dose level. Additionally, the size of the biological targets and their close proximity make it impractical to apply planning margins on the sub-tumor level. Also, the extent of high dose regions must be limited to avoid excessive dose to the surrounding tissue. As such, this research project is an investigation of the uncertainty within quantitative PET images of moving and displaced dose-painting targets, and an investigation of the residual errors that remain after motion management. This included characterization of the changes in PET voxel-values as objects are moved relative to the discrete sampling interval of PET imaging systems (SPECIFIC AIM 1). Additionally, the repeatability of PET distributions and of delineated dose-painting targets was measured (SPECIFIC AIM 2). The effect of imaging uncertainty on the dose distributions designed using these images (SPECIFIC AIM 3) has also been investigated. This project also included analysis of methods to minimize motion during PET imaging and reduce the dosimetric impact of motion/position-induced imaging uncertainty (SPECIFIC AIM 4).

  13. Dosimetric advantages of generalised equivalent uniform dose-based optimisation on dose–volume objectives in intensity-modulated radiotherapy planning for bilateral breast cancer

    PubMed Central

    Lee, T-F; Ting, H-M; Chao, P-J; Wang, H-Y; Shieh, C-S; Horng, M-F; Wu, J-M; Yeh, S-A; Cho, M-Y; Huang, E-Y; Huang, Y-J; Chen, H-C; Fang, F-M

    2012-01-01

    Objective We compared and evaluated the differences between two models for treating bilateral breast cancer (BBC): (i) dose–volume-based intensity-modulated radiation treatment (DV plan), and (ii) dose–volume-based intensity-modulated radiotherapy with generalised equivalent uniform dose-based optimisation (DV-gEUD plan). Methods The quality and performance of the DV plan and DV-gEUD plan using the Pinnacle3® system (Philips, Fitchburg, WI) were evaluated and compared in 10 patients with stage T2–T4 BBC. The plans were delivered on a Varian 21EX linear accelerator (Varian Medical Systems, Milpitas, CA) equipped with a Millennium 120 leaf multileaf collimator (Varian Medical Systems). The parameters analysed included the conformity index, homogeneity index, tumour control probability of the planning target volume (PTV), the volumes V20 Gy and V30 Gy of the organs at risk (OAR, including the heart and lungs), mean dose and the normal tissue complication probability. Results Both plans met the requirements for the coverage of PTV with similar conformity and homogeneity indices. However, the DV-gEUD plan had the advantage of dose sparing for OAR: the mean doses of the heart and lungs, lung V20 Gy, and heart V30 Gy in the DV-gEUD plan were lower than those in the DV plan (p<0.05). Conclusions A better result can be obtained by starting with a DV-generated plan and then improving it by adding gEUD-based improvements to reduce the number of iterations and to improve the optimum dose distribution. Advances in knowledge The DV-gEUD plan provided dosimetric results superior to those of the DV plan for treating BBC in terms of PTV coverage and OAR sparing, without sacrificing the homogeneity of the dose distribution in the PTV. PMID:23091290

  14. Fundamental quantitative security in quantum key generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuen, Horace P.

    2010-12-15

    We analyze the fundamental security significance of the quantitative criteria on the final generated key K in quantum key generation, including the quantum criterion d, the attacker's mutual information on K, and the statistical distance between her distribution on K and the uniform distribution. For operational significance a criterion has to produce a guarantee on the attacker's probability of correctly estimating some portions of K from her measurement, in particular her maximum probability of identifying the whole of K. We distinguish between the raw security of K, when the attacker just gets at K before it is used in a cryptographic context, and its composition security, when the attacker may gain further information during its actual use to help get at K. We compare both of these securities of K to those obtainable from conventional key expansion with a symmetric-key cipher. It is pointed out that a common belief in the superior security of a quantum-generated K is based on an interpretation of d which cannot be true, and the security significance of d is uncertain. Generally, the quantum key distribution key K has no composition security guarantee, and its raw security guarantee from concrete protocols is worse than that of conventional ciphers. Furthermore, for both raw and composition security there is an exponential catch-up problem that would make it difficult to quantitatively improve the security of K in a realistic protocol. Some possible ways to deal with the situation are suggested.
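    The statistical-distance criterion mentioned above has a simple closed form: the total-variation distance between the attacker's distribution P over the N possible keys and the uniform distribution, δ(P, U) = ½ Σ_k |P(k) − 1/N|. A minimal sketch; the example distributions are illustrative:

```python
# Statistical (total-variation) distance between a distribution P over the
# N possible keys and the uniform distribution U(k) = 1/N:
#   delta(P, U) = 0.5 * sum_k |P(k) - 1/N|
# The example distributions are illustrative, not taken from the article.

def distance_from_uniform(p):
    n = len(p)
    return 0.5 * sum(abs(pk - 1.0 / n) for pk in p)

uniform = [0.25] * 4           # attacker knows nothing about the key
biased = [0.7, 0.1, 0.1, 0.1]  # attacker strongly favours one key value

print(distance_from_uniform(uniform))  # 0.0
print(distance_from_uniform(biased))   # 0.45
```

    The operational link the article scrutinises is of this kind: the attacker's maximum probability of identifying the whole key, max_k P(k), is bounded by 1/N + δ(P, U), so a small statistical distance alone does not pin down how much smaller than that bound her success probability actually is.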

  15. Demonstration of UV LED versatility when paired with molded UV transmitting glass optics to produce unique irradiance patterns

    NASA Astrophysics Data System (ADS)

    Jasenak, Brian

    2017-02-01

    Ultraviolet light-emitting diode (UV LED) adoption is accelerating; UV LEDs are being used in new applications such as UV curing, germicidal irradiation, nondestructive testing, and forensic analysis. In many of these applications it is critically important to produce a uniform light distribution and consistent surface irradiance. Flat panes of fused quartz, silica, or glass are commonly used to cover and protect UV LED arrays; however, they do not offer the advantages of an optical lens design. An investigation was conducted to determine the effect of a secondary glass optic on the uniformity of the light distribution and irradiance. Glass optics capable of transmitting UV-A, UV-B, and UV-C wavelengths can improve light distribution, uniformity, and intensity. In this work, two simulation studies were created to illustrate distinct irradiance patterns desirable for potential real-world applications. The first study investigates the use of a multi-UV-LED array and optic to create a uniform irradiance pattern on a flat, two-dimensional (2D) target surface. The uniformity was improved by designing both the LED array and the molded optic to produce a homogeneous pattern. The second study investigated the use of an LED light source and molded optic to improve the light uniformity on the inside of a canister. The case study illustrates the need for careful selection of LEDs based on light distribution and the subsequent design of the optics. The optic utilizes total internal reflection to create an optimized light distribution. The combination of the LED and molded optic showed significant improvement in uniformity on the inner surface of the canister. The simulations illustrate how the application of optics can significantly improve UV light distribution, which can be critical in applications such as UV curing and sterilization.

  16. Potential implications of the bystander effect on TCP and EUD when considering target volume dose heterogeneity.

    PubMed

    Balderson, Michael J; Kirkby, Charles

    2015-01-01

    In light of in vitro evidence suggesting that radiation-induced bystander effects may enhance non-local cell killing, there is potential for impact on radiotherapy treatment-planning paradigms such as the goal of delivering a uniform dose throughout the clinical target volume (CTV). This work applies a bystander-effect model to calculate equivalent uniform dose (EUD) and tumor control probability (TCP) for external beam prostate treatment and compares the results with a more common model where local response is dictated exclusively by local absorbed dose. The broad assumptions applied in the bystander-effect model are intended to place an upper limit on the extent of the results in a clinical context. EUD and TCP of a prostate cancer target volume under conditions of increasing dose heterogeneity were calculated using two models: one incorporating bystander effects derived from previously published in vitro bystander data (McMahon et al. 2012, 2013a), and one using a common linear-quadratic (LQ) response that relies exclusively on local absorbed dose. Dose through the CTV was modelled as a normal distribution, where the degree of heterogeneity was dictated by changing the standard deviation (SD). Also, a representative clinical dose distribution was examined as cold (low-dose) sub-volumes were systematically introduced. The bystander model suggests that a moderate degree of dose heterogeneity throughout a target volume will yield an outcome as good as or better than a uniform dose in terms of EUD and TCP. For a typical intermediate-risk prostate prescription of 78 Gy over 39 fractions, maxima in EUD and TCP as a function of increasing SD occurred at SD ∼ 5 Gy. The plots dropped below the uniform-dose values only for SD ∼ 10 Gy, almost 13% of the prescribed dose. Small but potentially significant differences in the outcome metrics between the models were identified in the clinically derived dose distribution as cold sub-volumes were introduced.
In terms of EUD and TCP, the bystander model demonstrates the potential to deviate from the predictions of the common local LQ model as dose heterogeneity through a prostate CTV varies. The results suggest, at least in a limiting sense, the potential for allowing some degree of dose heterogeneity within a CTV, although further investigation of the assumptions of the bystander model is warranted.
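    As a point of contrast with the bystander results above, the purely local picture can be sketched directly: draw voxel doses from a normal distribution and define EUD as the uniform dose giving the same mean cell survival under a simple exponential-kill model. The radiosensitivity value and sampling scheme below are illustrative assumptions, not the article's:

```python
import math, random

# Local-response sketch: voxel doses drawn from a normal distribution
# (mean 78 Gy, varying SD), survival from a simple exponential cell-kill
# model SF(D) = exp(-alpha*D), and EUD defined as the uniform dose giving
# the same mean survival. ALPHA and the grid are illustrative values.

random.seed(1)
ALPHA = 0.15  # Gy^-1, illustrative radiosensitivity

def eud_local(mean_dose, sd, n_voxels=100_000):
    mean_sf = sum(
        math.exp(-ALPHA * random.gauss(mean_dose, sd)) for _ in range(n_voxels)
    ) / n_voxels
    return -math.log(mean_sf) / ALPHA

for sd in (0.0, 5.0, 10.0):
    print(f"SD = {sd:4.1f} Gy -> EUD = {eud_local(78.0, sd):.1f} Gy")
```

    For a normal dose distribution this even has a closed form, EUD = μ − ασ²/2, so the local model penalises heterogeneity monotonically; it is the bystander mechanism in the article that introduces the non-monotone maxima near SD ∼ 5 Gy.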

  17. Ultra-thin carbon-fiber paper fabrication and carbon-fiber distribution homogeneity evaluation method

    NASA Astrophysics Data System (ADS)

    Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.

    2018-01-01

    A preparation technology for ultra-thin carbon-fiber paper is reported. Carbon-fiber distribution homogeneity has a great influence on the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fiber across two or more binary (two-value) images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the more uniform the distribution of carbon fiber. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.

  18. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
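    A classic instance of the construction: the standard normal is a scale mixture of uniforms, with X | V ~ Uniform(−√V, √V) and V ~ χ²₃ (i.e. Gamma with shape 3/2 and scale 2). A minimal simulation sketch of this identity:

```python
import math, random

# Scale-mixture-of-uniforms representation of the standard normal:
# if V ~ chi-squared with 3 degrees of freedom (Gamma(3/2, scale 2)) and
# X | V ~ Uniform(-sqrt(V), sqrt(V)), then X ~ N(0, 1).

random.seed(0)

def normal_via_uniform_mixture():
    v = random.gammavariate(1.5, 2.0)  # chi-squared(3) draw
    s = math.sqrt(v)
    return random.uniform(-s, s)

xs = [normal_via_uniform_mixture() for _ in range(100_000)]
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean ** 2
print(f"sample mean = {mean:.3f}, sample variance = {var:.3f}")  # near 0 and 1
```

    Bayesian samplers exploit exactly this structure: conditional on the latent scale V, the full conditional of the data is uniform, which is trivial to sample and to truncate, and the same device extends to the heteroscedastic and skewed models mentioned above.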

  19. A comparison of intensity modulated x-ray therapy to intensity modulated proton therapy for the delivery of non-uniform dose distributions

    NASA Astrophysics Data System (ADS)

    Flynn, Ryan

    2007-12-01

    The distribution of biological characteristics such as clonogen density, proliferation, and hypoxia throughout tumors is generally non-uniform; it therefore follows that optimal dose prescriptions should also be non-uniform and tumor-specific. Advances in intensity-modulated x-ray therapy (IMXT) technology have made the delivery of custom-made non-uniform dose distributions possible in practice. Intensity-modulated proton therapy (IMPT) has the potential to deliver non-uniform dose distributions as well, while significantly reducing normal-tissue and organ-at-risk dose relative to IMXT. In this work, a specialized treatment planning system was developed for the purpose of optimizing and comparing biologically based IMXT and IMPT plans. The IMXT systems of step-and-shoot (IMXT-SAS) and helical tomotherapy (IMXT-HT) and the IMPT systems of intensity-modulated spot scanning (IMPT-SS) and distal gradient tracking (IMPT-DGT) were simulated. A thorough phantom study was conducted in which several subvolumes contained within a base tumor region were boosted or avoided with IMXT and IMPT. Different boosting situations were simulated by varying the size, proximity, and doses prescribed to the subvolumes, and the size of the phantom. IMXT and IMPT were also compared for a whole-brain radiation therapy (WBRT) case, in which a brain metastasis was simultaneously boosted and the hippocampus was avoided. Finally, IMXT and IMPT dose distributions were compared for the case of a non-uniform dose prescription in a head and neck cancer patient based on PET imaging with the Cu(II)-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-ATSM) hypoxia marker. The non-uniform dose distributions within the tumor region were comparable for IMXT and IMPT.
IMPT, however, was capable of delivering the same non-uniform dose distributions within a tumor using a 180° arc as for a full 360° rotation, which resulted in the reduction of normal tissue integral dose by a factor of up to three relative to IMXT, and the complete sparing of organs at risk distal to the tumor region.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokovinin, Andrei, E-mail: atokovinin@ctio.noao.edu

    Radial velocity (RV) monitoring of solar-type visual binaries has been conducted at the CTIO/SMARTS 1.5 m telescope to study short-period systems. The data reduction is described, and mean and individual RVs of 163 observed objects are given. New spectroscopic binaries are discovered or suspected in 17 objects, and for some of them the orbital periods could be determined. Subsystems are efficiently detected even in a single observation by double lines and/or by the RV difference between the components of visual binaries. The potential of this detection technique is quantified by simulation and used for statistical assessment of 96 wide binaries within 67 pc. It is found that 43 binaries contain at least one subsystem, and the occurrence of subsystems is equally probable in either primary or secondary components. The frequency of subsystems and their periods matches the simple prescription proposed by the author. The remaining 53 simple wide binaries with a median projected separation of 1300 AU have an RV difference distribution between their components that is not compatible with the thermal eccentricity distribution f(e) = 2e but rather matches the uniform eccentricity distribution.

  1. Computational study on the behaviors of granular materials under mechanical cycling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaoliang; Ye, Minyou; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn

    2015-11-07

    Considering that fusion pebble beds will probably be subjected to cyclic compression excitation in their future applications, we present a computational study of the effect of mechanical cycling on the behaviors of granular matter. The correctness of our numerical experiments was confirmed by a comparison with the effective medium theory. Under the cyclic loads, the fast granular compaction was observed to evolve following a stretched exponential law. In addition, increasing stiffening of the packing structure, especially a decreasing pressure dependence of the moduli due to granular consolidation, was also observed. For the force chains inside the pebble beds, both the internal force distribution and the spatial distribution of force chains become increasingly uniform as the external force perturbation proceeds, thereby producing stress relief on the grains. In this case, the originally proposed 3-parameter Mueth function was found to fail to describe the internal force distribution. An improved functional form with 4 parameters is therefore proposed here and shown to better fit the data. These findings provide more detailed information on pebble beds for the relevant fusion design and analysis.

  2. Three-phase boundary length in solid-oxide fuel cells: A mathematical model

    NASA Astrophysics Data System (ADS)

    Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf

    A mathematical model to calculate the volume-specific three-phase boundary length in the porous composite electrodes of a solid-oxide fuel cell is presented. The model is based exclusively on geometrical considerations, accounting for porosity, particle diameter, particle size distribution, and solid-phase distribution. Results are presented for a uniform particle size distribution as well as for a non-uniform particle size distribution.

  3. Spatial Burnout in Water Reactors with Nonuniform Startup Distributions of Uranium and Boron

    NASA Technical Reports Server (NTRS)

    Fox, Thomas A.; Bogart, Donald

    1955-01-01

    Spatial burnout calculations have been made for two types of water-moderated cylindrical reactor using boron as a burnable poison to increase reactor life. The specific reactors studied were a version of the Submarine Advanced Reactor (SAR) and a supercritical water reactor (SCW). Burnout characteristics such as reactivity excursion, neutron-flux and heat-generation distributions, and uranium and boron distributions have been determined for core lives corresponding to a burnup of approximately 7 kilograms of fully enriched uranium. All reactivity calculations have been based on the actual nonuniform distribution of absorbers existing during intervals of core life. Spatial burnout of uranium and boron and spatial build-up of fission products and equilibrium xenon have been considered. Calculations were performed on the NACA nuclear reactor simulator using two-group diffusion theory. The following reactor burnout characteristics have been demonstrated: 1. A significantly lower excursion in reactivity during core life may be obtained by a nonuniform rather than uniform startup distribution of uranium. Results for the SCW with uranium distributed to provide constant radial heat generation and a core life corresponding to a uranium burnup of 7 kilograms indicated a maximum excursion in reactivity of 2.5 percent. This compares to a maximum excursion of 4.2 percent obtained for the same core life when uranium was uniformly distributed at startup. Boron was incorporated uniformly in these cores at startup. 2. It is possible to approach constant radial heat generation during the life of a cylindrical core by means of nonuniform startup radial and axial distributions of uranium and boron. Results for the SCW with a nonuniform radial distribution of uranium to provide constant radial heat generation at startup, and with boron for longevity, indicate relatively small departures from the initially constant radial heat-generation distribution during core life.
Results for the SAR with a sinusoidal rather than uniform axial distribution of boron indicate significant improvements in the axial heat-generation distribution during the greater part of core life. 3. Uranium investments for cylindrical reactors with nonuniform radial uranium distributions which provide constant radial heat generation per unit core volume are somewhat higher than for reactors with uniform uranium concentration at startup. On the other hand, uranium investments for reactors with axial boron distributions which approach constant axial heat generation are somewhat smaller than for reactors with uniform boron distributions at startup.

  4. The Chandra Source Catalog: X-ray Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) represents a reanalysis of the entire set of ACIS and HRC imaging observations over the 9-year Chandra mission. We describe here the method by which fluxes are measured for detected sources. Source detection is carried out on a uniform basis using the CIAO tool wavdetect. Source fluxes are estimated post facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which may be either non-informative or, when previous observations of the same source exist, strongly informative. The current implementation is, however, limited to non-informative priors. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it.
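    The gamma family is the conjugate prior for Poisson counts, which is what makes this kind of post-facto flux estimate tractable: a gamma(α, β) prior on the expected counts, updated by an observed count n, yields a gamma(α + n, β + 1) posterior. A minimal sketch; the non-informative choice α = 1, β = 0 below is one common convention, not necessarily the CSC's, and background and PSF-fraction terms of the actual method are omitted:

```python
# Conjugate gamma-Poisson update sketch: a gamma(alpha, beta) prior on the
# expected source counts lambda (beta is the rate parameter), combined with
# a Poisson-observed aperture count n, gives a gamma(alpha + n, beta + 1)
# posterior. The alpha = 1, beta = 0 "non-informative" choice is one
# convention, not necessarily the one used by the Chandra Source Catalog.

def posterior_params(alpha, beta, n_observed):
    return alpha + n_observed, beta + 1.0

def gamma_mean_mode(alpha, beta):
    mean = alpha / beta
    mode = (alpha - 1.0) / beta if alpha >= 1.0 else 0.0
    return mean, mode

a_post, b_post = posterior_params(1.0, 0.0, n_observed=42)
mean, mode = gamma_mean_mode(a_post, b_post)
print(f"posterior gamma({a_post:.0f}, {b_post:.0f}): mean = {mean}, mode = {mode}")
```

    The posterior density, not just its mean and mode, is what allows a robust credible range to be reported for each source, as the abstract describes.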

  5. Stability analysis for virus spreading in complex networks with quarantine and non-homogeneous transition rates

    NASA Astrophysics Data System (ADS)

    Alarcon-Ramos, L. A.; Schaum, A.; Rodríguez Lucatero, C.; Bernal Jaquez, R.

    2014-03-01

    Virus propagation in complex networks has been studied in the framework of discrete-time Markov-process dynamical systems. These studies have been carried out under the assumption of homogeneous transition rates, yielding conditions for virus extinction in terms of the transition probabilities and the largest eigenvalue of the connectivity matrix. Nevertheless, the assumption of homogeneous rates is rather restrictive. In the present study we consider non-homogeneous transition rates, assigned according to a uniform distribution, with susceptible, infected and quarantine states, thus generalizing the previous studies. A remarkable result of this analysis is that extinction depends on the weakest element in the network. Simulation results are presented for large scale-free networks that corroborate our theoretical findings.

  6. Bayesian assessment of uncertainty in aerosol size distributions and index of refraction retrieved from multiwavelength lidar measurements.

    PubMed

    Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir

    2008-04-01

    We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived, based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF, and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find that the uncertainty is greatly reduced by the addition of suitable extinction measurements, in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect; this strengthens similar observations based on numerical regularization methods.

  7. On Voxel-based Iso-Tumor Control Probability and Iso-Complication Maps for Selective Boosting and Selective Avoidance Intensity-Modulated Radiotherapy

    PubMed Central

    Kim, Yusung; Tomé, Wolfgang A.

    2010-01-01

    Summary: Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an equivalent uniform dose (EUD) of 84 Gy to the entire planning target volume (PTV) and selective boosting delivering an EUD of 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when voxel-based iso-TCP maps were employed, selective boosting exhibited a more uniform tumor control probability map than could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that, as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool for assessing the integrity of such treatment plans. PMID:21151734

  8. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit by a hyperbolic gamma distribution than by a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log-normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random-walk equation, which is the diffusion equation applied to a random-walk model of the impulse-generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.
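    The central observation, that the inter-impulse distribution barely depends on the noise distribution, is easy to reproduce with a minimal integrate-and-fire sketch. The threshold, drive, and step counts below are illustrative, not those of the original simulations:

```python
import random

# Integrate-and-fire sketch: the state integrates a constant drive plus
# zero-mean, unit-variance noise from a chosen distribution; an impulse is
# emitted (and the state reset to zero) when the threshold is crossed.
# Parameters are illustrative, not those of the original study.

random.seed(7)
THRESHOLD, DRIVE, STEPS = 10.0, 0.5, 200_000

NOISES = {
    "normal": lambda: random.gauss(0.0, 1.0),
    "uniform": lambda: random.uniform(-(3.0 ** 0.5), 3.0 ** 0.5),  # variance 1
}

results = {}
for name, noise in NOISES.items():
    state, last_spike, intervals = 0.0, 0, []
    for t in range(1, STEPS + 1):
        state += DRIVE + noise()
        if state >= THRESHOLD:
            intervals.append(t - last_spike)
            last_spike, state = t, 0.0
    mean = sum(intervals) / len(intervals)
    cv = (sum((i - mean) ** 2 for i in intervals) / len(intervals)) ** 0.5 / mean
    results[name] = (mean, cv)
    print(f"{name:8s}: {len(intervals)} intervals, mean = {mean:.2f} steps, CV = {cv:.2f}")
```

    With the noise variances matched, the interval mean and coefficient of variation come out nearly identical for the two noise distributions, mirroring the near-indistinguishability reported above: the integration to threshold averages over many noise draws and washes out the shape of the input distribution.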

  9. A steady-state model of the lunar ejecta cloud

    NASA Astrophysics Data System (ADS)

    Christou, Apostolos

    2014-05-01

    Every airless body in the solar system is surrounded by a cloud of ejecta produced by the impact of interplanetary meteoroids on its surface [1]. Such "dust exospheres" have been observed around the Galilean satellites of Jupiter [2,3]. The prospect of long-term robotic and human operations on the Moon by the US and other countries has rekindled interest in the subject [4]. This interest has culminated in the currently ongoing investigation of the Moon's dust exosphere by the LADEE spacecraft [5]. Here a model is presented of a ballistic, collisionless, steady-state population of ejecta launched vertically at randomly distributed times and velocities and moving under constant gravity. Assuming a uniform distribution of launch times, I derive closed-form solutions for the probability density functions (pdfs) of the height distribution of particles and the distribution of their speeds in a rest frame, both at the surface and at altitude. The treatment is then extended to particle motion with respect to a moving platform such as an orbiting spacecraft. These expressions are compared with numerical simulations under lunar surface gravity where the underlying ejection speed distribution is (a) uniform, (b) a power law. I discuss the predictions of the model, its limitations, and how it can be validated against near-surface and orbital measurements. [1] Gault, D., Shoemaker, E.M., Moore, H.J., 1963, NASA TN-D 1767. [2] Kruger, H., Krivov, A.V., Hamilton, D.P., Grun, E., 1999, Nature, 399, 558. [3] Kruger, H., Krivov, A.V., Sremcevic, M., Grun, E., 2003, Icarus, 164, 170. [4] Grun, E., Horanyi, M., Sternovsky, Z., 2011, Planetary and Space Science, 59, 1672. [5] Elphic, R.C., Hine, B., Delory, G.T., Salute, J.S., Noble, S., Colaprete, A., Horanyi, M., Mahaffy, P., and the LADEE Science Team, 2014, LPSC XLV, LPI Contr. 1777, 2677.
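    The steady-state construction can be checked numerically: a particle launched vertically at speed v stays aloft for T = 2v/g, so with launch times uniform in time a randomly observed particle is drawn with probability proportional to T, at an instant uniform over its flight. A minimal Monte Carlo sketch for case (a), a uniform ejection-speed distribution; the gravity value and speed range are illustrative choices, not the paper's:

```python
import random

# Steady-state ballistic ejecta sketch: flight time T = 2 v / g; with
# uniformly distributed launch times, a randomly observed particle is
# selected with probability proportional to T, at a uniform instant
# t in (0, T), at height h = v t - g t^2 / 2. Constant-gravity,
# collisionless assumptions as in the model; parameter values illustrative.

random.seed(3)
G = 1.62          # m/s^2, lunar surface gravity
V_MAX = 100.0     # m/s, upper bound of the uniform ejection-speed pdf
N = 100_000

def sample_height():
    # Rejection step: accept v with probability v / V_MAX, i.e. in
    # proportion to the flight time T = 2 v / g.
    while True:
        v = random.uniform(0.0, V_MAX)
        if random.uniform(0.0, V_MAX) < v:
            break
    t = random.uniform(0.0, 2.0 * v / G)  # uniform instant while aloft
    return v * t - 0.5 * G * t * t

heights = [sample_height() for _ in range(N)]
h_ceiling = V_MAX ** 2 / (2.0 * G)        # maximum attainable altitude
print(f"mean height = {sum(heights) / N:.0f} m, ceiling = {h_ceiling:.0f} m")
```

    The resulting histogram of heights is what the closed-form pdfs describe analytically; for these assumptions the mean steady-state height works out to V_MAX²/(6g), about a third of the ceiling, because particles linger near apex but most accepted speeds are well below V_MAX.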

  10. Improvement of illumination uniformity for LED flat panel light by using micro-secondary lens array.

    PubMed

    Lee, Hsiao-Wen; Lin, Bor-Shyh

    2012-11-05

    The LED flat panel light is an innovative lighting product of recent years. However, current flat panel light products still have some drawbacks, such as narrow lighting areas and hot spots. In this study, a micro-secondary lens array technique was proposed and applied to the design of the light-guide surface to improve the illumination uniformity. By using the micro-secondary lens array, the candela distribution of the LED flat panel light can be adjusted to approximate a batwing distribution, improving the illumination uniformity. The experimental results show that the enhancement of the floor illumination uniformity is about 61%, and that of the wall illumination uniformity is about 20.5%.

  11. The nonuniformity of antibody distribution in the kidney and its influence on dosimetry.

    PubMed

    Flynn, Aiden A; Pedley, R Barbara; Green, Alan J; Dearling, Jason L; El-Emir, Ethaar; Boxer, Geoffrey M; Boden, Robert; Begent, Richard H J

    2003-02-01

    The therapeutic efficacy of radiolabeled antibody fragments can be limited by nephrotoxicity, particularly when the kidney is the major route of extraction from the circulation. Conventional dose estimates in kidney assume uniform dose deposition, but we have shown increased antibody localization in the cortex after glomerular filtration. The purpose of this study was to measure the radioactivity in cortex relative to medulla for a range of antibodies and to assess the validity of the assumption of uniformity of dose deposition in the whole kidney and in the cortex for these antibodies with a range of radionuclides. Storage phosphor plate technology (radioluminography) was used to acquire images of the distributions of a range of antibodies of various sizes, labeled with 125I, in kidney sections. This allowed the calculation of the antibody concentration in the cortex relative to the medulla. Beta-particle point dose kernels were then used to generate the dose-rate distributions from 14C, 131I, 186Re, 32P and 90Y. The correlation between the actual dose-rate distribution and the corresponding distribution calculated assuming uniform antibody distribution throughout the kidney was used to test the validity of estimating dose by assuming uniformity in the kidney and in the cortex. There was a strong inverse relationship between the ratio of the radioactivity in the cortex relative to that in the medulla and the antibody size. The nonuniformity of dose deposition was greatest with the smallest antibody fragments but became more uniform as the range of the emissions from the radionuclide increased. Furthermore, there was a strong correlation between the actual dose-rate distribution and the distribution when assuming a uniform source in the kidney for intact antibodies along with medium- to long-range radionuclides, but there was no correlation for small antibody fragments with any radioisotope or for short-range radionuclides with any antibody. 
However, when the cortex was separated from the whole kidney, the correlation between the actual dose-rate distribution and the assumed dose-rate distribution, if the source was uniform, increased significantly. During radioimmunotherapy, the extent of nonuniformity of dose deposition in the kidney depends on the properties of the antibody and radionuclide. For dosimetry estimates, the cortex should be taken as a separate source region when the radiopharmaceutical is small enough to be filtered by the glomerulus.

  12. Determination of the microbolometric FPA's responsivity with imaging system's radiometric considerations

    NASA Astrophysics Data System (ADS)

    Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal

    2013-10-01

    Thermal imagers, and the infrared array sensors used in them, are subject to a calibration procedure and an evaluation of their voltage sensitivity to incident radiation during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represent temperature variations across the scene. Detectors used in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiation distribution produced by a uniform source. In the article a non-uniformity correction method is presented that takes the optical system's radiometry into account. Predictions of the irradiation distribution have been confronted with measured irradiance values. The presented radiometric model allows a fast and accurate non-uniformity correction to be carried out.
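    A common form of such a correction is the two-point (gain/offset) non-uniformity correction, extended so that the flat-field reference is divided by the predicted optical roll-off rather than assumed flat. A minimal 1-D sketch with synthetic data; the cos⁴ falloff, field of view, and gain/offset patterns are illustrative assumptions, not the article's model:

```python
import math

# Two-point non-uniformity correction (NUC) sketch, with the predicted
# optical roll-off of the lens (here an illustrative cos^4 falloff)
# removed from the flat-field reference so that detector gain is not
# confused with the optics.

N = 9
FOV_HALF = math.radians(20.0)  # half field of view, illustrative

def optics_rolloff(i):
    """Predicted relative irradiance at pixel i (cos^4 law)."""
    theta = (2.0 * i / (N - 1) - 1.0) * FOV_HALF
    return math.cos(theta) ** 4

# Synthetic detector: per-pixel gain and offset (unknown in practice).
gain = [1.0 + 0.1 * math.sin(i) for i in range(N)]
offset = [5.0 + 0.5 * i for i in range(N)]

def detector(scene):
    """Raw readout: optics roll-off, then per-pixel gain and offset."""
    return [gain[i] * optics_rolloff(i) * scene[i] + offset[i] for i in range(N)]

# Calibration: two uniform-source exposures at known levels T1 < T2.
T1, T2 = 20.0, 80.0
raw1, raw2 = detector([T1] * N), detector([T2] * N)
g_est = [(raw2[i] - raw1[i]) / ((T2 - T1) * optics_rolloff(i)) for i in range(N)]
o_est = [raw1[i] - g_est[i] * optics_rolloff(i) * T1 for i in range(N)]

# Corrected image of a uniform 50-unit scene: flat output, optics removed.
raw = detector([50.0] * N)
corrected = [(raw[i] - o_est[i]) / (g_est[i] * optics_rolloff(i)) for i in range(N)]
print([round(c, 6) for c in corrected])
```

    Without the `optics_rolloff` division, the estimated gains would silently absorb the lens falloff, and the corrected image of a scene viewed through different optics would be wrong; that separation of detector response from optical radiometry is the point the article makes.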

  13. Turbulent transport with intermittency: Expectation of a scalar concentration.

    PubMed

    Rast, Mark Peter; Pinton, Jean-François; Mininni, Pablo D

    2016-04-01

    Scalar transport by turbulent flows is best described in terms of Lagrangian parcel motions. Here we measure the Eulerian distance traveled along Lagrangian trajectories in a simple point-vortex flow to determine the probabilistic impulse response function for scalar transport in the absence of molecular diffusion. As expected, the mean squared Eulerian displacement scales ballistically at very short times and diffusively at very long times, with the displacement distribution at any given time approximating that of a random walk. However, significant deviations of the displacement distributions from Rayleigh are found. The probability of long-distance transport is reduced over inertial-range time scales due to spatial and temporal intermittency. This can be modeled as a series of trapping events with durations uniformly distributed below the Eulerian integral time scale. The probability of long-distance transport is, on the other hand, enhanced beyond that of the random walk both for times shorter than the Lagrangian integral time and for times longer than the Eulerian integral time. The very short-time enhancement reflects the underlying Lagrangian velocity distribution, while that at very long times results from the spatial and temporal variation of the flow at the largest scales. The probabilistic impulse response function, and with it the expectation value of the scalar concentration at any point in space and time, can be modeled using only the evolution of the lowest spatial wave-number modes (the mean and the lowest harmonic) and an eddy-based constrained random walk that captures the essential velocity phase relations associated with advection by vortex motions. Preliminary examination of Lagrangian tracers in three-dimensional homogeneous isotropic turbulence suggests that transport in that setting can be similarly modeled.

  14. Generating Within-Plant Spatial Distributions of an Insect Herbivore Based on Aggregation Patterns and Per-Node Infestation Probabilities.

    PubMed

    Rincon, Diego F; Hoy, Casey W; Cañas, Luis A

    2015-04-01

    Most predator-prey models extrapolate functional responses from small-scale experiments assuming spatially uniform within-plant predator-prey interactions. However, some predators focus their search in certain plant regions, and herbivores tend to select leaves to balance their nutrient uptake and exposure to plant defenses. Individual-based models that account for heterogeneous within-plant predator-prey interactions can be used to scale up functional responses, but they would require the generation of explicit prey spatial distributions within plant architecture models. The silverleaf whitefly, Bemisia tabaci biotype B (Gennadius) (Hemiptera: Aleyrodidae), is a significant pest of tomato crops worldwide that exhibits highly aggregated populations at several spatial scales, including within the plant. As part of an analytical framework to understand predator-silverleaf whitefly interactions, the objective of this research was to develop an algorithm to generate explicit spatial counts of silverleaf whitefly nymphs within tomato plants. The algorithm requires the plant size and the number of silverleaf whitefly individuals to distribute as inputs, and includes models that describe infestation probabilities per leaf nodal position and the aggregation pattern of the silverleaf whitefly within tomato plants and leaves. The output is a simulated number of silverleaf whitefly individuals for each leaf and leaflet on one or more plants. Parameter estimation was performed using nymph counts per leaflet censused from 30 artificially infested tomato plants. Validation revealed a substantial agreement between algorithm outputs and independent data that included the distribution of counts of both eggs and nymphs. This algorithm can be used in simulation models that explore the effect of local heterogeneity on whitefly-predator dynamics. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved.
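
    The distribution step can be sketched minimally as follows (the weights below are hypothetical; the paper's actual per-node infestation and aggregation models are fitted from census data): individuals are assigned to leaves with probability proportional to a per-leaf weight, producing non-uniform counts whose total matches the input.

```python
import random
from collections import Counter

random.seed(1)

def distribute_individuals(n_individuals, leaf_weights):
    """Assign individuals to leaves with probability proportional to a
    per-leaf infestation weight, producing aggregated, non-uniform counts."""
    picks = random.choices(range(len(leaf_weights)),
                           weights=leaf_weights, k=n_individuals)
    tally = Counter(picks)
    return [tally[i] for i in range(len(leaf_weights))]

# Hypothetical weights: mid-canopy leaf positions more likely to be infested.
weights = [0.05, 0.15, 0.30, 0.30, 0.15, 0.05]
counts = distribute_individuals(500, weights)
print(counts, sum(counts))
```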

  15. Agents Overcoming Resource Independent Scaling Threats (AORIST)

    DTIC Science & Technology

    2004-10-01

    Table 8: Tilted Consumer Preferences Experiment (m=8, N=61, G=2, C=60, Mean over 13 experiments...probabilities. Non-uniform consumer preferences create a new potential for sub-optimal system performance and thus require an additional adaptive...distribution of the capacities across the supplier population must match the non-uniform consumer preferences. The second plot in Table 8

  16. Effects on Subtalar Joint Stress Distribution After Cannulated Screw Insertion at Different Positions and Directions.

    PubMed

    Yuan, Cheng-song; Chen, Wan; Chen, Chen; Yang, Guang-hua; Hu, Chao; Tang, Kang-lai

    2015-01-01

    We investigated the effects on subtalar joint stress distribution after cannulated screw insertion at different positions and directions. After establishing a 3-dimensional geometric model of a normal subtalar joint, we analyzed the most ideal cannulated screw insertion position and approach for subtalar joint stress distribution and compared the differences in loading stress, antirotary strength, and anti-inversion/eversion strength among lateral-medial antiparallel screw insertion, traditional screw insertion, and ideal cannulated screw insertion. The screw insertion approach allowing the most uniform subtalar joint loading stress distribution was lateral screw insertion near the border of the talar neck plus medial screw insertion close to the ankle joint. For stress distribution uniformity, antirotary strength, and anti-inversion/eversion strength, lateral-medial antiparallel screw insertion was superior to traditional double-screw insertion. Compared with ideal cannulated screw insertion, slightly poorer stress distribution uniformity and better antirotary strength and anti-inversion/eversion strength were observed for lateral-medial antiparallel screw insertion. Traditional single-screw insertion was better than double-screw insertion for stress distribution uniformity but worse for antirotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion was slightly worse for stress distribution uniformity than was ideal cannulated screw insertion but superior to traditional screw insertion. It was better than both ideal cannulated screw insertion and traditional screw insertion for antirotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion is an approach with simple localization, convenient operation, and good safety. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  17. Circular, confined distribution for charged particle beams

    DOEpatents

    Garnett, Robert W.; Dobelbower, M. Christian

    1995-01-01

    A charged particle beam line is formed with magnetic optics that manipulate a charged particle beam from a generally rectangular configuration into a circular beam cross-section having a uniform particle distribution at a predetermined location. First magnetic optics form a charged particle beam to a generally uniform particle distribution over a square planar area at a known first location. Second magnetic optics receive the charged particle beam with the generally square configuration and affect the charged particle beam to output the charged particle beam with a phase-space distribution effective to fold corner portions of the beam toward the core region of the beam. The beam forms a circular configuration having a generally uniform spatial particle distribution over a target area at a predetermined second location.

  18. Circular, confined distribution for charged particle beams

    DOEpatents

    Garnett, R.W.; Dobelbower, M.C.

    1995-11-21

    A charged particle beam line is formed with magnetic optics that manipulate a charged particle beam from a generally rectangular configuration into a circular beam cross-section having a uniform particle distribution at a predetermined location. First magnetic optics form a charged particle beam to a generally uniform particle distribution over a square planar area at a known first location. Second magnetic optics receive the charged particle beam with the generally square configuration and affect the charged particle beam to output the charged particle beam with a phase-space distribution effective to fold corner portions of the beam toward the core region of the beam. The beam forms a circular configuration having a generally uniform spatial particle distribution over a target area at a predetermined second location. 26 figs.

  19. On the vertical distribution of water vapor in the Martian tropics

    NASA Technical Reports Server (NTRS)

    Haberle, Robert M.

    1988-01-01

    Although measurements of the column abundance of atmospheric water vapor on Mars have been made, measurements of its vertical distribution have not. How water is distributed in the vertical is fundamental to atmosphere-surface exchange processes, and especially to transport within the atmosphere. Several lines of evidence suggest that in the lowest several scale heights of the atmosphere, water vapor is nearly uniformly distributed. However, most of these arguments are suggestive rather than conclusive since they only demonstrate that the altitude of saturation is very high if the observed amount of water vapor is distributed uniformly. A simple argument is presented, independent of the saturation constraint, which suggests that in tropical regions, water vapor on Mars should be very nearly uniformly mixed on an annual and zonally averaged basis.

  20. A new approach to increase the two-dimensional detection probability of CSI algorithm for WAS-GMTI mode

    NASA Astrophysics Data System (ADS)

    Yan, H.; Zheng, M. J.; Zhu, D. Y.; Wang, H. T.; Chang, W. S.

    2015-07-01

    When using the clutter suppression interferometry (CSI) algorithm to perform signal processing in a three-channel wide-area surveillance radar system, the primary concern is to effectively suppress the ground clutter. However, a portion of the moving target's energy is also lost in the process of channel cancellation, which is often neglected in conventional applications. In this paper, we first investigate the two-dimensional (radial velocity dimension and squint angle dimension) residual amplitude of moving targets after channel cancellation with the CSI algorithm. Then, a new approach is proposed to increase the two-dimensional detection probability of moving targets by reserving the maximum value of the three channel cancellation results in a non-uniformly spaced channel system. In addition, a theoretical expression for the false alarm probability of the proposed approach is derived. Compared with the conventional approaches in a uniformly spaced channel system, simulation results validate the effectiveness of the proposed approach. To our knowledge, it is the first time that the two-dimensional detection probability of the CSI algorithm has been studied.

  1. Does the central limit theorem always apply to phase noise? Some implications for radar problems

    NASA Astrophysics Data System (ADS)

    Gray, John E.; Addison, Stephen R.

    2017-05-01

    The phase noise problem or Rayleigh problem occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence the superposition can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the characteristic function (CF) of a summation of identically distributed random variables is the product of the individual CFs. The product of the CFs for phase noise can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof as discussed in Papoulis to show the CLT can break down for receivers that gather limited amounts of data as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
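
    The setup can be sketched numerically (an illustration of the uniform-phase model, not the authors' CF analysis): summing m unit phasors with uniform random phases gives a resultant whose real part has zero mean and variance m/2; for small m the distribution is visibly non-Gaussian, which is where the CLT argument becomes delicate.

```python
import math
import random

random.seed(2)

def resultant_real(m):
    """Real part of a sum of m unit phasors with i.i.d. uniform phases."""
    return sum(math.cos(random.uniform(0.0, 2.0 * math.pi)) for _ in range(m))

n_trials = 20000
stats = {}
for m in (2, 50):
    xs = [resultant_real(m) for _ in range(n_trials)]
    mean = sum(xs) / n_trials
    var = sum((x - mean) ** 2 for x in xs) / n_trials
    # For uniform phases, E[cos] = 0 and Var[cos] = 1/2, so Var[sum] = m/2.
    stats[m] = (mean, var)
print(stats)
```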

  2. Logical optimization for database uniformization

    NASA Technical Reports Server (NTRS)

    Grant, J.

    1984-01-01

    Data base uniformization refers to the building of a common user interface facility to support uniform access to any or all of a collection of distributed heterogeneous data bases. Such a system should enable a user, situated anywhere along a set of distributed data bases, to access all of the information in the data bases without having to learn the various data manipulation languages. Furthermore, such a system should leave intact the component data bases, and in particular, their already existing software. A survey of various aspects of the data bases uniformization problem and a proposed solution are presented.

  3. The NUONCE engine for LEO networks

    NASA Technical Reports Server (NTRS)

    Lo, Martin W.; Estabrook, Polly

    1995-01-01

    Typical LEO networks use constellations which provide a uniform coverage. However, the demand for telecom service is dynamic and unevenly distributed around the world. We examine a more efficient and cost effective design by matching the satellite coverage with the cyclical demand for service around the world. Our approach is to use a non-uniform satellite distribution for the network. We have named this constellation design NUONCE for Non Uniform Optimal Network Communications Engine.

  4. Irreversible reactions and diffusive escape: Stationary properties

    DOE PAGES

    Krapivsky, Paul L.; Ben-Naim, Eli

    2015-05-01

    We study three basic diffusion-controlled reaction processes—annihilation, coalescence, and aggregation. We examine the evolution starting with the most natural inhomogeneous initial configuration where a half-line is uniformly filled by particles, while the complementary half-line is empty. We show that the total number of particles that infiltrate the initially empty half-line is finite and has a stationary distribution. We determine the evolution of the average density from which we derive the average total number N of particles in the initially empty half-line; e.g. for annihilation $\langle N\rangle = \frac{3}{16}+\frac{1}{4\pi}$. For the coalescence process, we devise a procedure that in principle allows one to compute P(N), the probability to find exactly N particles in the initially empty half-line; we complete the calculations in the first non-trivial case (N = 1). As a by-product we derive the distance distribution between the two leading particles.
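
    The annihilation case can be mimicked with a crude lattice sketch (a finite-size approximation, not the authors' exact calculation; lattice and boundary effects shift the numbers away from the continuum mean 3/16 + 1/(4π) ≈ 0.267, so only the order of magnitude is reproduced here):

```python
import random
from collections import Counter

random.seed(3)

def annihilation_trial(L=50, steps=1500):
    """A + A -> 0 with synchronous random hops on a 1D lattice:
    sites -L..-1 start occupied, sites 0..L-1 start empty.
    Returns the particle count in the initially empty half at the end."""
    pos = list(range(-L, 0))
    for _ in range(steps):
        # Each particle hops left or right; hops past the walls are clamped.
        pos = [min(L - 1, max(-L, p + random.choice((-1, 1)))) for p in pos]
        # Pairs landing on the same site annihilate; an odd count leaves one.
        pos = [p for p, c in Counter(pos).items() for _ in range(c % 2)]
    return sum(1 for p in pos if p >= 0)

trials = 200
mean_n = sum(annihilation_trial() for _ in range(trials)) / trials
print(mean_n)
```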

  5. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic perspective, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
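
    The underlying stress-strength idea can be illustrated with the standard normal interference formula (an illustrative sketch with made-up stress numbers, not the paper's specific criterion): a safety index β compares the resistive and applied stress distributions, and the corresponding reliability is Φ(β).

```python
import math

def reliability_index(mu_r, sd_r, mu_a, sd_a):
    """Stress-strength interference for normal distributions: the safety
    index beta and the reliability P(resistive stress > applied stress)."""
    beta = (mu_r - mu_a) / math.sqrt(sd_r ** 2 + sd_a ** 2)
    reliability = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))  # Phi(beta)
    return beta, reliability

# Made-up stress levels (arbitrary units), for illustration only.
beta, rel = reliability_index(mu_r=100.0, sd_r=8.0, mu_a=60.0, sd_a=6.0)
print(beta, rel)
```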

  6. A 64-pixel NbTiN superconducting nanowire single-photon detector array for spatially resolved photon detection.

    PubMed

    Miki, Shigehito; Yamashita, Taro; Wang, Zhen; Terai, Hirotaka

    2014-04-07

    We present the characterization of two-dimensionally arranged 64-pixel NbTiN superconducting nanowire single-photon detector (SSPD) array for spatially resolved photon detection. NbTiN films deposited on thermally oxidized Si substrates enabled the high-yield production of high-quality SSPD pixels, and all 64 SSPD pixels showed uniform superconducting characteristics within the small range of 7.19-7.23 K of superconducting transition temperature and 15.8-17.8 μA of superconducting switching current. Furthermore, all of the pixels showed single-photon sensitivity, and 60 of the 64 pixels showed a pulse generation probability higher than 90% after photon absorption. As a result of light irradiation from the single-mode optical fiber at different distances between the fiber tip and the active area, the variations of system detection efficiency (SDE) in each pixel showed reasonable Gaussian distribution to represent the spatial distributions of photon flux intensity.

  7. Random isotropic one-dimensional XY-model

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = ½ XY model (N sites), with random exchange interaction in a transverse random field is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).

  8. A users' manual for MCPRAM (Monte Carlo PReprocessor for AMEER) and for the fuze options in AMEER (Aero Mechanical Equation Evaluation Routines)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaFarge, R.A.

    1990-05-01

    MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
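
    The preprocessing idea can be sketched as follows (variable names and parameters are hypothetical, not MCPRAM's actual input syntax): each selected input variable is drawn once per trajectory from its specified normal, uniform, or Rayleigh distribution.

```python
import math
import random

random.seed(4)

def sample_variable(dist, a, b):
    """Draw one value: 'normal' (mean a, sd b), 'uniform' on [a, b],
    or 'rayleigh' with scale a (b unused)."""
    if dist == "normal":
        return random.gauss(a, b)
    if dist == "uniform":
        return random.uniform(a, b)
    if dist == "rayleigh":
        # Inverse-CDF sampling: F(x) = 1 - exp(-x^2 / (2 a^2)).
        return a * math.sqrt(-2.0 * math.log(1.0 - random.random()))
    raise ValueError(dist)

# Hypothetical input variables for a batch of trajectories.
spec = {"launch_az": ("normal", 90.0, 0.5),
        "wind_speed": ("rayleigh", 5.0, None),
        "cd_factor": ("uniform", 0.95, 1.05)}

runs = [{k: sample_variable(*v) for k, v in spec.items()} for _ in range(1000)]
mean_cd = sum(r["cd_factor"] for r in runs) / len(runs)
print(mean_cd)
```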

  9. Cooling water distribution system

    DOEpatents

    Orr, Richard

    1994-01-01

    A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using an interconnected series of radial guide elements, a plurality of circumferential collector elements and collector boxes to collect and feed the cooling water into distribution channels extending along the curved surface of the steel containment vessel. The cooling water is uniformly distributed over the curved surface by a plurality of weirs in the distribution channels.

  10. Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies.

    PubMed

    Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary

    2018-04-29

    Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (i.e., distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
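
    The ICC in question can be computed with the standard one-way ANOVA estimator; a small simulation (hypothetical scales and error levels, not the paper's design) reproduces the qualitative finding that higher rater disagreement lowers the ICC.

```python
import random

random.seed(5)

def icc_oneway(ratings):
    """One-way random-effects ICC(1) from a subjects-by-raters table."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    ss_between = k * sum((sum(row) / k - grand) ** 2 for row in ratings)
    ss_within = sum((x - sum(row) / k) ** 2 for row in ratings for x in row)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def simulate(n_subjects=80, k_raters=3, error_sd=0.5):
    """Subjects with uniform(0, 10) true scores plus normal rater error."""
    table = []
    for _ in range(n_subjects):
        true = random.uniform(0.0, 10.0)
        table.append([true + random.gauss(0.0, error_sd)
                      for _ in range(k_raters)])
    return table

icc_low_error = icc_oneway(simulate(error_sd=0.5))
icc_high_error = icc_oneway(simulate(error_sd=5.0))
print(icc_low_error, icc_high_error)  # disagreement drives the ICC down
```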

  11. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    PubMed

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
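
    The connectivity rule can be sketched directly (parameters here are illustrative, not the paper's fitted values): with P(connection) = exp(-d/λ), a small λ yields predominantly local connections, the regime in which nucleation centers appear, while a large λ approaches distance-independent wiring.

```python
import math
import random

random.seed(6)

def connect(neurons, lam):
    """Directed connections formed with probability exp(-d / lam),
    where d is the Euclidean distance between two neurons."""
    edges = []
    for i, (xi, yi) in enumerate(neurons):
        for j, (xj, yj) in enumerate(neurons):
            if i != j:
                d = math.hypot(xi - xj, yi - yj)
                if random.random() < math.exp(-d / lam):
                    edges.append((i, j))
    return edges

# Random homogeneous placement on a unit square, as in the cultures.
neurons = [(random.random(), random.random()) for _ in range(300)]
short_range = connect(neurons, lam=0.05)   # mostly local connections
long_range = connect(neurons, lam=0.5)     # many long-distance connections
print(len(short_range), len(long_range))
```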

  12. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves

    NASA Astrophysics Data System (ADS)

    Paraskevov, A. V.; Zendrikov, D. K.

    2017-04-01

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  13. Insights into the latent multinomial model through mark-resight data on female grizzly bears with cubs-of-the-year

    USGS Publications Warehouse

    Higgs, Megan D.; Link, William; White, Gary C.; Haroldson, Mark A.; Bjornlie, Daniel D.

    2013-01-01

    Mark-resight designs for estimation of population abundance are common and attractive to researchers. However, inference from such designs is very limited when faced with sparse data, either from a low number of marked animals, a low probability of detection, or both. In the Greater Yellowstone Ecosystem, yearly mark-resight data are collected for female grizzly bears with cubs-of-the-year (FCOY), and inference suffers from both limitations. To overcome difficulties due to sparseness, we assume homogeneity in sighting probabilities over 16 years of bi-annual aerial surveys. We model counts of marked and unmarked animals as multinomial random variables, using the capture frequencies of marked animals for inference about the latent multinomial frequencies for unmarked animals. We discuss undesirable behavior of the commonly used discrete uniform prior distribution on the population size parameter and provide OpenBUGS code for fitting such models. The application provides valuable insights into subtleties of implementing Bayesian inference for latent multinomial models. We tie the discussion to our application, though the insights are broadly useful for applications of the latent multinomial model.

  14. Statistics of Advective Stretching in Three-dimensional Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.

    2009-09-01

    We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of the Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian and that the coefficient of variation of the Gaussian distribution does not decrease with time as t^{-1/2}. However, it is observed that stretching becomes exponential, log S ∼ t, and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behavior of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing respectively. We find that for strongly chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^{-1/2}. This behavior is consistent with a random multiplicative stretching process.
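
    The t^{-1/2} behavior follows from modeling stretching as a random multiplicative process: log S is then a sum of i.i.d. increments, so its mean grows like t while its spread grows like sqrt(t). A sketch with an arbitrary stretching-factor distribution (not the paper's cellular flow):

```python
import math
import random

random.seed(7)

def log_stretch(t_steps):
    """log of cumulative stretching under i.i.d. multiplicative increments
    (the stretching-factor distribution here is arbitrary)."""
    return sum(math.log(random.uniform(1.1, 3.0)) for _ in range(t_steps))

def cv_of_log_s(t_steps, n_markers=4000):
    xs = [log_stretch(t_steps) for _ in range(n_markers)]
    mean = sum(xs) / n_markers
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n_markers)
    return sd / mean

# The coefficient of variation of log S should fall roughly as t^(-1/2):
# quadrupling t should halve it.
cv10, cv40 = cv_of_log_s(10), cv_of_log_s(40)
print(cv10, cv40, cv10 / cv40)
```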

  15. Good Practices in Free-energy Calculations

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed, demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In free-energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve efficiency and accuracy of free energy calculations without incurring any additional computational expense.

  16. 'Fracking', Induced Seismicity and the Critical Earth

    NASA Astrophysics Data System (ADS)

    Leary, P.; Malin, P. E.

    2012-12-01

    Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. 
    Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.

  17. Spatial effect of conical angle on optical-thermal distribution for circumferential photocoagulation

    PubMed Central

    Truong, Van Gia; Park, Suhyun; Tran, Van Nam; Kang, Hyun Wook

    2017-01-01

A uniformly diffusing applicator can be advantageous for laser treatment of tubular tissue. The current study investigated various conical angles for diffuser tips as a critical factor for achieving radially uniform light emission. A customized goniometer was employed to characterize the spatial uniformity of the light propagation. An ex vivo model was developed to quantitatively compare the temperature development and irreversible tissue coagulation. The 10-mm diffuser tip with a conical angle of 25° achieved a uniform longitudinal intensity profile (i.e., 0.90 ± 0.07) as well as a consistent thermal denaturation on the tissue. The proposed conical angle can be instrumental in determining the uniformity of light distribution for the photothermal treatment of tubular tissue. PMID:29296495

  18. optGpSampler: an improved tool for uniformly sampling the solution-space of genome-scale metabolic networks.

    PubMed

    Megchelenbrink, Wout; Huynen, Martijn; Marchiori, Elena

    2014-01-01

Constraint-based models of metabolic networks are typically underdetermined, because they contain more reactions than metabolites. Therefore, the solutions to this system do not consist of unique flux rates for each reaction, but rather a space of possible flux rates. By uniformly sampling this space, an estimated probability distribution for each reaction's flux in the network can be obtained. However, sampling a high dimensional network is time-consuming. Furthermore, the constraints imposed on the network give rise to an irregularly shaped solution space. Therefore more tailored, efficient sampling methods are needed. We propose an efficient sampling algorithm (called optGpSampler), which implements the Artificial Centering Hit-and-Run algorithm in a different manner than the sampling algorithm implemented in the COBRA Toolbox for metabolic network analysis, here called gpSampler. Results of extensive experiments on different genome-scale metabolic networks show that optGpSampler is up to 40 times faster than gpSampler. Application of existing convergence diagnostics on small network reconstructions indicates that optGpSampler converges roughly ten times faster than gpSampler towards similar sampling distributions. For networks of higher dimension (i.e. containing more than 500 reactions), we observed significantly better convergence of optGpSampler and a large deviation between the samples generated by the two algorithms. optGpSampler for Matlab and Python is available for non-commercial use at: http://cs.ru.nl/~wmegchel/optGpSampler/.
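The core idea behind such samplers can be sketched with a plain hit-and-run walk on a toy polytope (a unit square). This is an illustrative sketch only, not optGpSampler's Artificial Centering variant or its implementation; the constraint matrix, starting point, and tolerances below are assumptions.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng):
    """Uniformly sample the polytope {x : A @ x <= b} by hit-and-run."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)             # random direction on the unit sphere
        # Feasible step sizes t must keep A @ (x + t * d) <= b
        Ad, slack = A @ d, b - A @ x
        lo, hi = -np.inf, np.inf
        for ad, s in zip(Ad, slack):
            if ad > 1e-12:
                hi = min(hi, s / ad)
            elif ad < -1e-12:
                lo = max(lo, s / ad)
        x = x + rng.uniform(lo, hi) * d    # uniform point on the feasible chord
        samples.append(x.copy())
    return np.array(samples)

# Unit square [0, 1]^2 written as A @ x <= b
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
rng = np.random.default_rng(0)
pts = hit_and_run(A, b, [0.5, 0.5], 5000, rng)
print(pts.mean(axis=0))   # ≈ [0.5, 0.5] for a uniform sample
```

Each step picks a random direction, finds the feasible chord through the current point, and jumps to a uniform point on it; over many steps the chain approaches the uniform distribution on the polytope.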

  19. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  20. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  1. A tuneable approach to uniform light distribution for artificial daylight photodynamic therapy.

    PubMed

    O'Mahoney, Paul; Haigh, Neil; Wood, Kenny; Brown, C Tom A; Ibbotson, Sally; Eadie, Ewan

    2018-06-16

Implementation of daylight photodynamic therapy (dPDT) is somewhat limited by variable weather conditions. Light sources have been employed to provide artificial dPDT indoors, with low irradiances and longer treatment times. Uniform light distribution across the target area is key to ensuring effective treatment, particularly for large areas. A novel light source is developed with tuneable direction of light emission in order to meet this challenge. Wavelength composition of the novel light source is controlled such that the protoporphyrin-IX (PpIX)-weighted spectra of both the light source and daylight match. The uniformity of the light source is characterised on a flat surface, a model head and a model leg. For context, a typical conventional PDT light source is also characterised. Additionally, the wavelength uniformity across the treatment site is characterised. The PpIX-weighted spectrum of the novel light source matches the PpIX-weighted daylight spectrum, with irradiance values within the bounds for effective dPDT. By tuning the direction of light emission, improvements are seen in the uniformity across large anatomical surfaces. Wavelength uniformity is discussed. We have developed a light source that addresses the challenges in uniform, multiwavelength light distribution for large area artificial dPDT across curved anatomical surfaces. Copyright © 2018. Published by Elsevier B.V.

  2. Representing Color Ensembles.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.

  3. Trapping of Neutrinos in Extremely Compact Stars and the Influence of Brane Tension on This Process

    NASA Astrophysics Data System (ADS)

    Stuchlík, Zdeněk; Hladík, Jan; Urbanec, Martin

    We present estimates of the efficiency of neutrino trapping in braneworld extremely compact stars, using the simplest model with a uniform distribution of energy density, assuming massless neutrinos and a uniform distribution of neutrino emissivity. Computations have been carried out for two different uniform-density stellar solutions in the Randall-Sundrum II type braneworld, namely one with the Reissner-Nordström-type geometry and a second one derived by Germani and Maartens.

  4. High density, uniformly distributed W/UO2 for use in Nuclear Thermal Propulsion

    NASA Astrophysics Data System (ADS)

    Tucker, Dennis S.; Barnes, Marvin W.; Hone, Lance; Cook, Steven

    2017-04-01

    An inexpensive, quick method has been developed to obtain uniform distributions of UO2 particles in a tungsten matrix utilizing 0.5 wt percent low density polyethylene. Powders were sintered in a Spark Plasma Sintering (SPS) furnace at 1600 °C, 1700 °C, 1750 °C, 1800 °C and 1850 °C using a modified sintering profile. This resulted in a uniform distribution of UO2 particles in a tungsten matrix with high densities, reaching 99.46% of theoretical for the sample sintered at 1850 °C. The powder process is described and the results of this study are given below.

  5. Time-evolution of uniform momentum zones in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Laskari, Angeliki; Hearst, R. Jason; de Kat, Roeland; Ganapathisubramani, Bharathram

    2016-11-01

Time-resolved planar particle image velocimetry (PIV) is used to analyse the organisation and evolution of uniform momentum zones (UMZs) in a turbulent boundary layer. Experiments were performed in a recirculating water tunnel on a streamwise-wall-normal plane extending approximately 0.5δ × 1.8δ, in x and y, respectively. In total 400,000 images were captured and for each of the resulting velocity fields, local peaks in the probability density distribution of the streamwise velocity were detected, indicating the instantaneous presence of UMZs throughout the boundary layer. The main characteristics of these zones are outlined and more specifically their velocity range and wall-normal extent. The variation of these characteristics with wall normal distance and total number of zones are also discussed. Exploiting the time information available, time-scales of zones that have a substantial coherence in time are analysed and results show that the zones' lifetime is dependent on both their momentum deficit level and the total number of zones present. Conditional averaging of the flow statistics seems to further indicate that a large number of zones is the result of a wall-dominant mechanism, while the opposite implies an outer-layer dominance.
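The zone-detection step described above, finding local peaks in the probability density of streamwise velocity, can be sketched as follows. The bin count, the significance floor, and the synthetic two-zone velocity sample are illustrative assumptions, not the study's settings.

```python
import random

def umz_peaks(u, n_bins=30, min_fraction=0.02):
    """Return the modal velocities at local maxima of the velocity histogram;
    each significant peak marks a candidate uniform momentum zone."""
    lo, hi = min(u), max(u)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in u:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    floor = min_fraction * len(u)            # ignore insignificant bumps
    return [lo + (i + 0.5) * width for i in range(1, n_bins - 1)
            if counts[i] >= floor
            and counts[i] > counts[i - 1] and counts[i] >= counts[i + 1]]

# Synthetic field: two zones with modal velocities 0.4 and 0.8 (arbitrary units)
rng = random.Random(1)
u = [rng.gauss(0.4, 0.03) for _ in range(4000)] + \
    [rng.gauss(0.8, 0.03) for _ in range(6000)]
peaks = umz_peaks(u)
print(peaks)   # peaks near the two modal velocities
```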

  6. Incompleteness and limit of security theory of quantum key distribution

    NASA Astrophysics Data System (ADS)

    Hirota, Osamu; Murakami, Dan; Kato, Kentaro; Futami, Fumio

    2012-10-01

Many papers claim that the trace distance d guarantees universal composition security in quantum key distribution (QKD) protocols such as BB84. In this introductory paper, we first explain explicitly the main misconception in this claim of unconditional security for QKD theory. In general terms, the misunderstanding stems from a Lemma in the paper of Renner, which suggests that generation of a perfect random key is assured with probability (1-d) and fails with probability d. On that reading, the generated key is a perfectly random sequence whenever the protocol succeeds, so QKD would provide perfect secrecy for the one-time pad; this is the basis of the composition claim. However, the trace distance (or variational distance) is not the probability of such an event: if d is not small enough, the generated key sequence is never uniform. The evaluation of the trace distance therefore needs to be reconstructed before it can be used. One should first return to indistinguishability theory in the computational-complexity setting to clarify the meaning of the value of the variational distance; the same analysis is also necessary in the information-theoretic case. The recent series of papers by H. P. Yuen has answered these questions. Here we give a more concise description of Yuen's theory and clarify that the upper-bound theories for the trace distance by Tomamichel et al. and Hayashi et al. are built on Renner's flawed reasoning and are unsuitable for security analysis. Finally, we introduce a new macroscopic quantum communication scheme to replace qubit QKD.

  7. HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps

    NASA Astrophysics Data System (ADS)

    Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.

    2017-01-01

We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is developed within a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate the basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior, and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP) and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.

  8. The Chandra Source Catalog: X-ray Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog represents a reanalysis of the entire ACIS and HRC imaging observations over the 9-year Chandra mission. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect, and source fluxes are estimated post-facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which could be either non-informative, or in case there exist previous observations of the same source, strongly informative. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it. We also determine limiting sensitivities at arbitrary locations in the field using the same formulation. This work was supported by CXC NASA contracts NAS8-39073 (VK) and NAS8-03060 (CSC).
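The conjugate core of such a gamma-prior flux estimate is a textbook update: for a Poisson count process with rate s and a Gamma(α, β) prior on s, the posterior is Gamma(α + counts, β + exposure). The sketch below shows only this update, ignoring the background, spatial-resolution, and contamination terms the catalog pipeline models; the prior parameters and counts are invented for illustration.

```python
def gamma_poisson_update(alpha, beta, counts, exposure):
    """Conjugate update: if the source rate s ~ Gamma(alpha, beta) (shape,
    rate parametrization) and counts ~ Poisson(s * exposure), the posterior
    is Gamma(alpha + counts, beta + exposure)."""
    a, b = alpha + counts, beta + exposure
    mean = a / b                                 # posterior mean rate
    mode = (a - 1.0) / b if a >= 1.0 else 0.0    # posterior mode
    return a, b, mean, mode

# Jeffreys prior Gamma(0.5, 0) for a Poisson rate; 42 counts in 100 s
a, b, mean, mode = gamma_poisson_update(0.5, 0.0, 42, 100.0)
print(mean, mode)   # 0.425 0.415
```

A strongly informative prior from an earlier observation simply replaces the (0.5, 0) shape and rate arguments.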

  9. Periodic synchronization in a system of coupled phase oscillators with attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Yuan, Di; Tian, Jun-Long; Lin, Fang; Ma, Dong-Wei; Zhang, Jing; Cui, Hai-Tao; Xiao, Yi

    2018-06-01

    In this study we investigate the collective behavior of the generalized Kuramoto model with an external pinning force in which oscillators with positive and negative coupling strengths are conformists and contrarians, respectively. We focus on a situation in which the natural frequencies of the oscillators follow a uniform probability density. By numerically simulating the model, it is shown that the model supports multistable synchronized states such as a traveling wave state, π state and periodic synchronous state: an oscillating π state. The oscillating π state may be characterized by the phase distribution oscillating in a confined region and the phase difference between conformists and contrarians oscillating around π periodically. In addition, we present the parameter space of the oscillating π state and traveling wave state of the model.
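A minimal numerical sketch of mean-field Kuramoto dynamics with a uniform natural-frequency density is shown below. It deliberately omits the paper's pinning force and the conformist/contrarian coupling split (all oscillators here are conformists), and every parameter value is an illustrative assumption.

```python
import cmath
import math
import random

def kuramoto_order(n=200, coupling=3.0, dt=0.01, steps=3000, seed=2):
    """Euler-integrate the plain Kuramoto model with natural frequencies
    drawn from a uniform density on [-1, 1]; returns the final order
    parameter r = |<e^{i*theta}>| (r near 1 means synchronization)."""
    rng = random.Random(seed)
    omega = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n   # mean field r*e^{i*psi}
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

r_final = kuramoto_order()
print(round(r_final, 2))   # high r: coupling is well above the critical value
```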

  10. Dynamical topology and statistical properties of spatiotemporal chaos.

    PubMed

    Zhuang, Quntao; Gao, Xun; Ouyang, Qi; Wang, Hongli

    2012-12-01

For spatiotemporal chaos described by partial differential equations, there are generally locations where the dynamical variable achieves its local extremum or where the time partial derivative of the variable vanishes instantaneously. To a large extent, the location and movement of these topologically special points determine the qualitative structure of the disordered states. We analyze numerically the statistical properties of the topologically special points in one-dimensional spatiotemporal chaos. The probability distribution functions for the number of points, the lifespan, and the distance covered during their lifetimes are obtained from numerical simulations. Mathematically, we establish a probabilistic model to describe the dynamics of these topologically special points. In spite of their different definitions in different spatiotemporal chaotic systems, the dynamics of these special points can be described in a uniform approach.

  11. A new approach for the description of discharge extremes in small catchments

    NASA Astrophysics Data System (ADS)

    Pavia Santolamazza, Daniela; Lebrenz, Henning; Bárdossy, András

    2017-04-01

Small catchment basins in Northwestern Switzerland, characterized by small concentration times, are frequently targeted by floods. The peak and the volume of these floods are commonly estimated by a frequency analysis of occurrence and described by a random variable, assuming a uniform probability distribution and stationary input drivers (e.g. precipitation, temperature). For these small catchments, we attempt to describe and identify the underlying mechanisms and dynamics at the occurrence of extremes by means of available high-temporal-resolution (10 min) observations and to explore the possibilities of regionalizing hydrological parameters for short intervals. We therefore investigate new concepts for flood description, such as entropy as a measure of disorder and dispersion of precipitation. First findings and conclusions of this ongoing research are presented.

  12. Laser velocimetry measurements in a gas turbine research combustor

    NASA Technical Reports Server (NTRS)

    Driscoll, J. F.; Pelaccio, D. G.

    1979-01-01

The effects of turbulence on the production of pollutant species in a gas-turbine research combustor are studied using laser diffraction velocimetry (LDV) techniques. Measurements that were made in the primary combustion zone include mean velocity, rms velocity fluctuations, velocity probability distributions, and autocorrelation functions. A unique combustor design provides relatively uniform flow conditions and independent control of drop size, equivalence ratio, inlet temperature, and combustor pressure. Parameters which characterize the nature of the spray combustion (i.e., whether single-droplet or group combustion occurs) were determined from the LDV data. Turbulent diffusivity (eddy viscosity) reaches a value of 2930 sq cm/sec, corresponding to a convective integral length scale of 1.8 cm. The group combustion number, based on turbulent diffusivity, is measured to be 6.2.

  13. Passive containment cooling water distribution device

    DOEpatents

    Conway, Lawrence E.; Fanto, Susan V.

    1994-01-01

    A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using a series of radial guide elements and cascading weir boxes to collect and then distribute the cooling water into a series of distribution areas through a plurality of cascading weirs. The cooling water is then uniformly distributed over the curved surface by a plurality of weir notches in the face plate of the weir box.

  14. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    PubMed

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling site in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shape and size by four kinds of sampling methods. Gray correlation analysis was adopted to make the comprehensive evaluation by comparing it with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, which can be put into use in drug analysis.

  15. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Total 30-year data of wave height, wind speed, and current velocity in the Bohai Sea are hindcast and sampled for case study. Four kinds of distributions, namely, Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide close design parameters to actual sea state for ocean platform design.
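Sampling from a trivariate Archimedean copula such as Clayton (one of the four families used above) is straightforward via the Marshall-Olkin algorithm. The sketch below pairs it with Weibull margins purely for illustration; the paper selects Pearson Type III margins for the Bohai Sea data, and all parameter values here are invented.

```python
import math
import random

def clayton_trivariate(theta, n, seed=3):
    """Sample n triples from a trivariate Clayton copula via the
    Marshall-Olkin algorithm: V ~ Gamma(1/theta, 1), E_i ~ Exp(1),
    U_i = (1 + E_i / V) ** (-1 / theta); each U_i is Uniform(0, 1)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gammavariate(1.0 / theta, 1.0)
        out.append(tuple((1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta)
                         for _ in range(3)))
    return out

def weibull_inv(u, shape, scale):
    """Inverse CDF of a Weibull margin (illustrative marginal model)."""
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

# Hypothetical margins: wave height (m), wind speed (m/s), current (m/s)
triples = clayton_trivariate(theta=2.0, n=5000)
hs = [weibull_inv(u, 1.8, 2.5) for u, _, _ in triples]
print(round(sum(hs) / len(hs), 2))   # sample mean of simulated wave heights
```

The copula supplies the dependence structure and the inverse marginal CDFs supply the physical units, which is exactly the maximize-margins-plus-dependence decomposition the abstract describes.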

  16. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also derived.

  17. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the most simple model that best describes the data.
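Knuth's relative log posterior for the number of equal-width bins follows directly from the multinomial likelihood and non-informative prior described above. The sketch below evaluates it over a search range and picks the maximum; the data sample and search range are illustrative, and the code is a sketch rather than the optBINS package itself.

```python
import math
import random

def log_posterior_bins(data, m):
    """Relative log posterior for m equal-width bins (Knuth's form):
    N*ln(m) + lnG(m/2) - m*lnG(1/2) - lnG(N + m/2) + sum_k lnG(n_k + 1/2),
    where lnG is the log-gamma function and n_k are the bin counts."""
    n = len(data)
    lo, hi = min(data), max(data)
    counts = [0] * m
    for x in data:
        counts[min(int((x - lo) / (hi - lo) * m), m - 1)] += 1
    return (n * math.log(m) + math.lgamma(m / 2.0)
            - m * math.lgamma(0.5) - math.lgamma(n + m / 2.0)
            + sum(math.lgamma(c + 0.5) for c in counts))

rng = random.Random(4)
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]
best = max(range(1, 100), key=lambda m: log_posterior_bins(data, m))
print(best)   # posterior-optimal number of bins for this sample
```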

  18. Optimization and Calculation of Probability Performances of Processes of Storage and Processing of Refrigerator Containerized Cargoes

    NASA Astrophysics Data System (ADS)

    Nyrkov, A. P.; Sokolov, S. S.; Chernyi, S. G.; Shnurenko, A. A.; Pavlova, L. A.

    2016-08-01

    In this work, a queueing system of the disconnected multi-channel type is considered, to which irregular, uniform or non-uniform flows of requests with an unlimited latency period arrive. The system is studied using the example of a container terminal with conditional-functional sections of a definite mark-to-space ratio, on which an irregular inhomogeneous traffic flow with a resultant intensity acts.

  19. Chebyshev collocation approach for vibration analysis of functionally graded porous beams based on third-order shear deformation theory

    NASA Astrophysics Data System (ADS)

    Wattanasakulpong, Nuttawit; Chaikittiratana, Arisara; Pornpeerakeat, Sacharuck

    2018-06-01

    In this paper, vibration analysis of functionally graded porous beams is carried out using the third-order shear deformation theory. The beams have uniform and non-uniform porosity distributions across their thickness and both ends are supported by rotational and translational springs. The material properties of the beams such as elastic moduli and mass density can be related to the porosity and mass coefficient utilizing the typical mechanical features of open-cell metal foams. The Chebyshev collocation method is applied to solve the governing equations derived from Hamilton's principle, which is used in order to obtain the accurate natural frequencies for the vibration problem of beams with various general and elastic boundary conditions. Based on the numerical experiments, it is revealed that the natural frequencies of the beams with asymmetric and non-uniform porosity distributions are higher than those of other beams with uniform and symmetric porosity distributions.

  20. WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, H; Yeung, I; Milosevic, M

    2016-06-15

    Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic), the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and due to PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximizing likelihood optimization that does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize for the weights of both distributions, however, may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences to threshold-based HF-values. Conclusion: HF-values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
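The two-distribution mixture idea can be illustrated with a deliberately simplified stand-in: a one-dimensional two-component Gaussian mixture fitted by EM, where the weight of the higher-uptake component plays the role of HF. This is not the authors' imaging model; the synthetic voxel values and all parameters are assumptions.

```python
import math
import random

def em_two_gauss(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture; returns the weight of
    the higher-mean ('hypoxic') component as a simplified HF estimate."""
    mu = [min(x), max(x)]          # crude initialization at the data extremes
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each voxel value
        resp = []
        for xi in x:
            p = [w[k] / sd[k] * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2)
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            sd[k] = max(1e-6, math.sqrt(
                sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk))
    return w[1] if mu[1] > mu[0] else w[0]

# Synthetic tumor: 30% hypoxic voxels with higher tracer uptake
rng = random.Random(5)
x = [rng.gauss(1.0, 0.2) for _ in range(1400)] + \
    [rng.gauss(2.0, 0.3) for _ in range(600)]
hf = em_two_gauss(x)
print(round(hf, 2))   # ≈ 0.30, the simulated hypoxic fraction
```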

  1. In-plane modal frequencies and mode shapes of two stay cables interconnected by uniformly distributed cross-ties

    NASA Astrophysics Data System (ADS)

    Jing, Haiquan; He, Xuhui; Zou, Yunfeng; Wang, Hanfeng

    2018-03-01

    Stay cables are important load-bearing structural elements of cable-stayed bridges. Suppressing the large vibrations of stay cables under external excitations is of worldwide concern for bridge engineers and researchers. Over the past decade, the use of cross-ties has become one of the most practical and effective methods. Extensive research has led to a better understanding of the mechanics of cable networks, and the effects of different parameters, such as length ratio, mass-tension ratio, and segment ratio on the effectiveness of the cross-tie have been investigated. In this study, uniformly distributed elastic cross-ties serve to replace the traditional single, or several, cross-ties, aiming to delay "mode localization." A numerical method is developed by replacing the uniformly distributed, discrete elastic cross-tie model with an equivalent, continuously distributed, elastic cross-tie model in order to calculate the modal frequencies and mode shapes of the cable-crosstie system. The effectiveness of the proposed method is verified by comparing the elicited results with those obtained using the previous method. The uniformly distributed elastic cross-ties are shown to significantly delay "mode localization."

  2. Stacked waveguide reactors with gradient embedded scatterers for high-capacity water cleaning

    DOE PAGES

    Ahsan, Syed Saad; Gumus, Abdurrahman; Erickson, David

    2015-11-04

    We present a compact water-cleaning reactor with stacked layers of waveguides containing gradient patterns of optical scatterers that enable uniform light distribution and augmented water-cleaning rates. Previous photocatalytic reactors using immersion, external, or distributive lamps suffer from poor light distribution that impedes scalability. Here, we use an external UV source to direct photons into stacked waveguide reactors, where we scatter the photons uniformly over the length of the waveguide to thin films of TiO2 catalysts. We also show a 4.5-fold improvement in activity over uniform scatterer designs, demonstrate degradation of 67% of the organic dye, and characterize the degradation rate constant.

  3. Stacked waveguide reactors with gradient embedded scatterers for high-capacity water cleaning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahsan, Syed Saad; Gumus, Abdurrahman; Erickson, David

    We present a compact water-cleaning reactor with stacked layers of waveguides containing gradient patterns of optical scatterers that enable uniform light distribution and augmented water-cleaning rates. Previous photocatalytic reactors using immersion, external, or distributive lamps suffer from poor light distribution that impedes scalability. Here, we use an external UV source to direct photons into stacked waveguide reactors, where we scatter the photons uniformly over the length of the waveguide to thin films of TiO2 catalysts. We also show a 4.5-fold improvement in activity over uniform scatterer designs, demonstrate degradation of 67% of the organic dye, and characterize the degradation rate constant.

  4. Engineered surface scatterers in edge-lit slab waveguides to improve light delivery in algae cultivation.

    PubMed

    Ahsan, Syed Saad; Pereyra, Brandon; Jung, Erica E; Erickson, David

    2014-10-20

    Most existing photobioreactors do a poor job of distributing light uniformly due to shading effects. One method by which this could be improved is through the use of internal wave-guiding structures incorporating engineered light scattering schemes. By varying the density of these scatterers, one can control the spatial distribution of light inside the reactor enabling better uniformity of illumination. Here, we compare a number of light scattering schemes and evaluate their ability to enhance biomass accumulation. We demonstrate a design for a gradient distribution of surface scatterers with uniform lateral scattering intensity that is superior for algal biomass accumulation, resulting in a 40% increase in the growth rate.

  5. Elastic field of a spherical inclusion with non-uniform eigenfields in second strain gradient elasticity

    NASA Astrophysics Data System (ADS)

    Delfani, M. R.; Latifi Shahandashti, M.

    2017-09-01

    In this paper, within the complete form of Mindlin's second strain gradient theory, the elastic field of an isolated spherical inclusion embedded in an infinitely extended homogeneous isotropic medium due to a non-uniform distribution of eigenfields is determined. These eigenfields, in addition to eigenstrain, comprise eigen double and eigen triple strains. After the derivation of a closed-form expression for Green's function associated with the problem, two different cases of non-uniform distribution of the eigenfields are considered as follows: (i) radial distribution, i.e. the distributions of the eigenfields are functions of only the radial distance of points from the centre of inclusion, and (ii) polynomial distribution, i.e. the distributions of the eigenfields are polynomial functions in the Cartesian coordinates of points. While the obtained solution for the elastic field of the latter case takes the form of an infinite series, the solution to the former case is represented in a closed form. Moreover, Eshelby's tensors associated with the two mentioned cases are obtained.

  6. Fuzzy-Logic Based Distributed Energy-Efficient Clustering Algorithm for Wireless Sensor Networks.

    PubMed

    Zhang, Ying; Wang, Jun; Han, Dezhi; Wu, Huafeng; Zhou, Rundong

    2017-07-03

    Due to their high energy efficiency and scalability, clustering routing algorithms have been widely used in wireless sensor networks (WSNs). In order to gather information more efficiently, each sensor node transmits data to the Cluster Head (CH) to which it belongs, by multi-hop communication. However, multi-hop communication in the cluster brings the problem of excessive energy consumption at the relay nodes closer to the CH. These nodes' energy is consumed more quickly than that of the farther nodes, which degrades load balance across the whole network. Therefore, we propose an energy-efficient distributed clustering algorithm based on a fuzzy approach with non-uniform distribution (EEDCF). During CH election, we take nodes' energies, nodes' degrees and neighbor nodes' residual energies into consideration as the input parameters. In addition, we adopt a Takagi-Sugeno-Kang (TSK) fuzzy model instead of the traditional method as our inference system to make the quantitative analysis more reasonable. In our scheme, each sensor node calculates its probability of being elected CH with the help of the fuzzy inference system in a distributed way. The experimental results indicate that the EEDCF algorithm outperforms some current representative methods in terms of data transmission, energy consumption and network lifetime.
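    A zero-order TSK inference step can be sketched as follows; the membership functions, rules and consequents are invented for illustration and are not the parameters used by EEDCF.

```python
# Zero-order Takagi-Sugeno-Kang (TSK) sketch for a node's chance of becoming
# Cluster Head. Each rule's firing strength is the product of its membership
# values; the output is the strength-weighted average of the rule consequents.
def tsk_ch_probability(energy, degree, neighbor_energy):
    # inputs normalized to [0, 1]; simple linear "low"/"high" memberships
    low = lambda x: 1.0 - x
    high = lambda x: x
    rules = [
        (high(energy) * high(degree) * low(neighbor_energy), 0.9),  # strong candidate
        (high(energy) * low(degree), 0.6),                          # middling candidate
        (low(energy), 0.1),                                         # weak candidate
    ]
    den = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / den if den > 0 else 0.0

p_strong = tsk_ch_probability(0.9, 0.8, 0.2)   # energy-rich, well-connected node
p_weak = tsk_ch_probability(0.2, 0.8, 0.2)     # energy-poor node
```

    Because each node evaluates only its own inputs, the election runs fully distributed, as in the scheme described above.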

  7. Investigation on inlet recirculation characteristics of double suction centrifugal compressor with unsymmetrical inlet

    NASA Astrophysics Data System (ADS)

    Yang, Ce; Wang, Yingjun; Lao, Dazhong; Tong, Ding; Wei, Longyu; Liu, Yixiong

    2016-08-01

    The inlet recirculation characteristics of a double-suction centrifugal compressor with unsymmetrical inlet structures were studied numerically, focusing on three issues: the amounts and differences of the inlet recirculation in different working conditions, the circumferential non-uniform distributions of the inlet recirculation, and the recirculation velocity distributions of the upstream slot of the rear impeller. The results show that there are some differences between the recirculation of the front impeller and that of the rear impeller over the whole working range. At the design speed, the recirculation flow rate of the rear impeller is larger than that of the front impeller in the large-flow range, but in the small-flow range, the recirculation flow rate of the rear impeller is smaller than that of the front impeller. In different working conditions, the recirculation velocity distributions of the front and rear impellers are non-uniform along the circumferential direction and their non-uniform extents are quite different. The circumferential non-uniform extent of the recirculation velocity varies as the working conditions change. The circumferential non-uniform extent of the recirculation velocity of the front impeller and its distribution are determined by the static pressure distribution of the front impeller, but that of the rear impeller is decided by the coupling effects of the inlet flow distortion of the rear impeller, the circumferentially unsymmetrical distribution of the upstream slot and the asymmetric structure of the volute. In the design-flow and small-flow conditions, the recirculation velocities at different circumferential positions of the mean line of the upstream slot cross-section of the rear impeller are quite different, and the recirculation velocity distribution forms on the two sides of the mean line are different.
The recirculation velocity distributions in the cross-section of the upstream slot depend on the static pressure distributions in the intake duct.

  8. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimate based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
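    The simple analytical step can be sketched in closed form. For a circular orbit over a spherical Earth, the latitude history obeys sin(lat) = sin(i) sin(u), where the argument of latitude u advances uniformly in time, which directly gives the fraction of time spent in any latitude band. This is a sketch of the spherical-Earth step only, not the full numerical method; the numbers used are illustrative.

```python
import math

# Fraction of time a circular orbit of inclination inc_deg spends in the
# latitude band [lat1_deg, lat2_deg] (latitudes clamped to the reachable
# range +/- inc). Derived from sin(lat) = sin(inc) * sin(u), u uniform in time.
def band_fraction(inc_deg, lat1_deg, lat2_deg):
    s = math.sin(math.radians(inc_deg))
    lo = max(-s, min(s, math.sin(math.radians(lat1_deg))))
    hi = max(-s, min(s, math.sin(math.radians(lat2_deg))))
    return (math.asin(hi / s) - math.asin(lo / s)) / math.pi

# dwell time piles up near the extreme latitudes (lat ~ +/- inclination)
near_edge = band_fraction(51.6, 45.0, 51.6)
near_equator = band_fraction(51.6, 0.0, 6.6)
```

    The concentration of dwell time near the extreme latitudes is exactly why combining it with a latitude-resolved population distribution changes the casualty estimate.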

  9. Experimental design for dynamics identification of cellular processes.

    PubMed

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
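    The MINE selection rule can be sketched on a toy model, with a sample cloud standing in for the parameter distribution; the decay model, prior and grid below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

# Toy model y(t; k) = exp(-k t). MINE: the next measurement time is the one
# maximizing the model-output variance under the current parameter
# distribution; EDE is the expected output under that same distribution.
rng = np.random.default_rng(1)
k_samples = rng.normal(1.0, 0.3, 500)           # stand-in for the parameter posterior
t_grid = np.linspace(0.0, 5.0, 101)

outputs = np.exp(-np.outer(k_samples, t_grid))  # shape (n_samples, n_times)
variance = outputs.var(axis=0)
t_next = t_grid[np.argmax(variance)]            # MINE: most informative next time
ede = outputs.mean(axis=0)                      # Expected Dynamics Estimator
```

    At t = 0 every parameter draw gives y = 1, so the variance vanishes and nothing is learned there; the criterion instead picks a time where parameter uncertainty actually spreads the predictions.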

  10. Improving Estimation of Ground Casualty Risk from Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, C.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimate based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.

  11. Local Burn-Up Effects in the NBSR Fuel Element

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown N. R.; Hanson A.; Diamond, D.

    2013-01-31

    This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when incorporating local burn-up effects and has lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element. In the uniform burn-up case, the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel element average. A uniform burn-up distribution throughout a half-element also causes a bias in fuel element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.

  12. Apparatus and process to enhance the uniform formation of hollow glass microspheres

    DOEpatents

    Schumacher, Ray F

    2013-10-01

    A process and apparatus is provided for enhancing the formation of a uniform population of hollow glass microspheres. A burner head is used which directs incoming glass particles away from the cooler perimeter of the flame cone of the gas burner and distributes the glass particles in a uniform manner throughout the more evenly heated portions of the flame zone. As a result, as the glass particles are softened and expand by a released nucleating gas so as to form a hollow glass microsphere, the resulting hollow glass microspheres have a more uniform size and property distribution as a result of experiencing a more homogenous heat treatment process.

  13. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
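    The classical baseline that the paper complements can be illustrated with SciPy's Kolmogorov-Smirnov test (this is the standard KS test, not the authors' new density-based tests); the choice of distributions is an illustrative assumption.

```python
import numpy as np
from scipy import stats

# KS test of i.i.d. draws against a fully specified distribution: compare the
# empirical CDF with the hypothesized CDF and report a p-value.
rng = np.random.default_rng(0)
good = rng.standard_normal(1000)         # really drawn from the specified N(0, 1)
bad = rng.standard_t(df=2, size=1000)    # heavy-tailed draws that are not N(0, 1)

p_good = stats.kstest(good, "norm").pvalue
p_bad = stats.kstest(bad, "norm").pvalue   # should be small: reject N(0, 1)
```

    A low-density dip confined to a narrow region would move the CDF far less than this tail mismatch does, which is the smoothing deficiency the abstract describes.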

  14. Influence of a non-uniform free stream velocity distribution on performance/acoustics of counterrotating propeller configurations

    NASA Astrophysics Data System (ADS)

    Allen, C. S.; Korkan, K. D.

    1991-01-01

    A methodology for predicting the performance and acoustics of counterrotating propeller configurations was modified to take into account the effects of a non-uniform free stream velocity distribution entering the disk plane. The method utilizes the analytical techniques of Lock and Theodorsen as described by Davidson to determine the influence of the non-uniform free stream velocity distribution in the prediction of the steady aerodynamic loads. The unsteady load contribution is determined according to the procedure of Leseture with rigid helical tip vortices simulating the previous rotations of each propeller. The steady and unsteady loads are combined to obtain the total blade loading required for acoustic prediction employing the Ffowcs Williams-Hawkings equation as simplified by Succi with the assumption of compact sources. The numerical method is used to redesign the previous commuter-class counterrotating propeller configuration of Denner. The specifications, performance, and acoustics of the new design are compared with the results of Denner, thereby determining the influence of the non-uniform free stream velocity distribution on these metrics.

  15. A novel polyimide based micro heater with high temperature uniformity

    DOE PAGES

    Yu, Shifeng; Wang, Shuyu; Lu, Ming; ...

    2017-02-06

    MEMS based micro heaters are a key component in micro bio-calorimetry, nondispersive infrared gas sensors, semiconductor gas sensors and microfluidic actuators. A micro heater with a uniform temperature distribution in the heating area and a short response time is desirable in ultrasensitive temperature-dependent measurements. In this study, we propose a novel micro heater design that reaches a uniform temperature over a large heating area by optimizing the heating power density distribution in the heating area. A polyimide membrane is utilized as the substrate to reduce the thermal mass and heat loss, which allows for fast thermal response as well as a simplified fabrication process. A gold and titanium heating element is fabricated on the flexible polyimide substrate using standard MEMS techniques. The temperature distribution in the heating area for a given power input is measured by an IR camera and is consistent with FEA simulation results. This design achieves fast response and a uniform temperature distribution, which makes it well suited for programmable heating such as impulse and step driving.

  16. A novel polyimide based micro heater with high temperature uniformity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Shifeng; Wang, Shuyu; Lu, Ming

    MEMS based micro heaters are a key component in micro bio-calorimetry, nondispersive infrared gas sensors, semiconductor gas sensors and microfluidic actuators. A micro heater with a uniform temperature distribution in the heating area and a short response time is desirable in ultrasensitive temperature-dependent measurements. In this study, we propose a novel micro heater design that reaches a uniform temperature over a large heating area by optimizing the heating power density distribution in the heating area. A polyimide membrane is utilized as the substrate to reduce the thermal mass and heat loss, which allows for fast thermal response as well as a simplified fabrication process. A gold and titanium heating element is fabricated on the flexible polyimide substrate using standard MEMS techniques. The temperature distribution in the heating area for a given power input is measured by an IR camera and is consistent with FEA simulation results. This design achieves fast response and a uniform temperature distribution, which makes it well suited for programmable heating such as impulse and step driving.

  17. Regional methods for mapping major faults in areas of uniform low relief, as used in the London Basin, UK

    NASA Astrophysics Data System (ADS)

    Haslam, Richard; Aldiss, Donald

    2013-04-01

    Most of the London Basin, south-eastern UK, is underlain by the Palaeogene London Clay Formation, comprising a succession of rather uniform marine clay deposits up to 150 m thick, with widespread cover of Quaternary deposits and urban development. Therefore, in this area faults are difficult to delineate (or to detect) by conventional geological surveying methods in the field, and few are shown on the geological maps of the area. However, boreholes and excavations, especially those for civil engineering works, indicate that faults are probably widespread and numerous in the London area. A representative map of fault distribution and patterns of displacement is a pre-requisite for understanding the tectonic development of a region. Moreover, faulting is an important influence on the design and execution of civil engineering works, and on the hydrogeological characteristics of the ground. This paper reviews methods currently being used to map faults in the London Basin area. These are: the interpretation of persistent scatterer interferometry (PSI) data from time-series satellite-borne radar measurements; the interpretation of regional geophysical fields (Bouguer gravity anomaly and aeromagnetic), especially in combination with a digital elevation model; and the construction and interpretation of 3D geological models. Although these methods are generally not as accurate as large-scale geological field surveys, due to the availability of appropriate data in the London Basin they provide the means to recognise and delineate more faults, and with more confidence, than was possible using traditional geological mapping techniques. Together they reveal regional structures arising during Palaeogene crustal extension and subsidence in the North Sea, followed by inversion of a Mesozoic sedimentary basin in the south of the region, probably modified by strike-slip fault motion associated with the relative northward movement of the African Plate and the Alpine orogeny. 
This work is distributed under the Creative Commons Attribution 3.0 Unported License together with an NERC copyright. This license does not conflict with the regulations of the Crown Copyright.

  18. Distribution and regularity of injection from a multicylinder fuel-injection pump

    NASA Technical Reports Server (NTRS)

    Rothrock, A M; Marsh, E T

    1936-01-01

    This report presents the results of performance tests conducted on a six-cylinder commercial fuel-injection pump that was adjusted to give uniform fuel distribution among the cylinders at a throttle setting of 0.00038 pound per injection and a pump speed of 750 revolutions per minute. The throttle setting and pump speed were then varied through the operating range to determine the uniformity of distribution and regularity of injection.

  19. Flow coating apparatus and method of coating

    DOEpatents

    Hanumanthu, Ramasubrahmaniam; Neyman, Patrick; MacDonald, Niles; Brophy, Brenor; Kopczynski, Kevin; Nair, Wood

    2014-03-11

    Disclosed is a flow coating apparatus, comprising a slot that can dispense a coating material in an approximately uniform manner along a distribution blade that increases uniformity by means of surface tension and transfers the uniform flow of coating material onto an inclined substrate such as, for example, glass, solar panels, windows or part of an electronic display. Also disclosed is a method of flow coating a substrate using the apparatus such that the substrate is positioned correctly relative to the distribution blade, and a pre-wetting step is completed in which both the blade and substrate are completely wetted with a pre-wet solution prior to dispensing of the coating material onto the distribution blade from the slot and hence onto the substrate. Thereafter the substrate is removed from the distribution blade and allowed to dry, thereby forming a coating.

  20. Experimental and numerical modeling research of rubber material during microwave heating process

    NASA Astrophysics Data System (ADS)

    Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling

    2018-05-01

    This paper aims to investigate the heating behavior of block rubber by experimental and numerical methods. The COMSOL Multiphysics 5.0 software was utilized in the numerical simulation work. The effects of microwave frequency, power and sample size on the temperature distribution are examined. The effect of frequency on the temperature distribution is obvious: the maximum and minimum temperatures of the block rubber first increase and then decrease with increasing frequency. The microwave heating efficiency is highest at a microwave frequency of 2450 MHz; however, a more uniform temperature distribution is obtained at the other microwave frequencies. The influence of microwave power on the temperature distribution is also remarkable: the smaller the power, the more uniform the temperature distribution in the block rubber. The effect of power on microwave heating efficiency is not obvious. The effect of sample size on the temperature distribution is also evident: the smaller the sample size, the more uniform the temperature distribution in the block rubber. However, the smaller the sample size, the lower the microwave heating efficiency. The results can serve as references for research on heating rubber material by microwave technology.

  1. Factors affecting summer distributions of Bering Sea forage fish species: Assessing competing hypotheses

    NASA Astrophysics Data System (ADS)

    Parker-Stetter, Sandra; Urmy, Samuel; Horne, John; Eisner, Lisa; Farley, Edward

    2016-12-01

    Hypotheses on the factors affecting forage fish species distributions are often proposed but rarely evaluated using a comprehensive suite of indices. Using 24 predictor indices, we compared competing hypotheses and calculated average models for the distributions of capelin, age-0 Pacific cod, and age-0 pollock in the eastern Bering Sea from 2006 to 2010. Distribution was described using a two stage modeling approach: probability of occurrence ("presence") and density when fish were present. Both local (varying by location and year) and annual (uniform in space but varying by year) indices were evaluated, the latter accounting for the possibility that distributions were random but that overall presence or densities changed with annual conditions. One regional index, distance to the location of preflexion larvae earlier in the year, was evaluated for age-0 pollock. Capelin distributions were best predicted by local indices such as bottom depth, temperature, and salinity. Annual climate (May sea surface temperature (SST), sea ice extent anomaly) and wind (June wind speed cubed) indices were often important for age-0 Pacific cod in addition to local indices (temperature and depth). Surface, midwater, and water column age-0 pollock distributions were best described by a combination of local (depth, temperature, salinity, zooplankton) and annual (May SST, sea ice anomaly, June wind speed cubed) indices. Our results corroborated some of those in previous distribution studies, but suggested that presence and density may also be influenced by other factors. Even though there were common environmental factors that influenced all species' distributions, it is not possible to generalize conditions for forage fish as a group.
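    The two-stage modeling approach described above (probability of presence, then density given presence) can be sketched as a hurdle-type prediction; the covariates and all coefficients below are invented for illustration and are not fitted values from the study.

```python
import numpy as np

# Hurdle-type sketch of the two-stage approach:
# expected density = P(presence) * E[density | present].
def expected_density(depth, temp):
    logit = -1.0 + 0.02 * depth + 0.3 * temp        # stage 1: occurrence model
    p_presence = 1.0 / (1.0 + np.exp(-logit))
    dens_if_present = np.exp(2.0 - 0.01 * depth)    # stage 2: conditional density
    return p_presence * dens_if_present

d = expected_density(50.0, 5.0)   # hypothetical station: 50 m depth, 5 deg C
```

    Keeping the two stages separate lets local indices act on presence and density differently, which is why the study fits and averages the models for each stage on its own.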

  2. Post-processing of metal matrix composites by friction stir processing

    NASA Astrophysics Data System (ADS)

    Sharma, Vipin; Singla, Yogesh; Gupta, Yashpal; Raghuwanshi, Jitendra

    2018-05-01

    In metal matrix composites, a non-uniform distribution of reinforcement particles adversely affects the mechanical properties. It is therefore of great interest to explore post-processing techniques that can eliminate particle distribution heterogeneity. Friction stir processing is a relatively new technique used for post-processing of metal matrix composites to improve homogeneity of the particle distribution. In friction stir processing, the synergistic effect of stirring, extrusion and forging results in refinement of grains, reduction of reinforcement particle size, uniformity of the particle distribution, reduction of microstructural heterogeneity and elimination of defects.

  3. Analysis of incident-energy dependence of delayed neutron yields in actinides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasir, Mohamad Nasrun bin Mohd, E-mail: monasr211@gmail.com; Metorima, Kouhei, E-mail: kohei.m2420@hotmail.co.jp; Ohsawa, Takaaki, E-mail: ohsawa@mvg.biglobe.ne.jp

    The changes of the delayed neutron yields (ν_d) of actinides have been analyzed for incident energies up to 20 MeV using precursor data after prompt neutron emission from a semi-empirical model, together with delayed neutron emission probability data (P_n), to carry out a summation method. The evaluated nuclear data for the delayed neutron yields of actinide nuclides are still uncertain at present, and the cause of the energy dependence has not been fully understood. In this study, the fission yields of the precursors were calculated considering the change of the fission fragment mass yield based on the superposition of five Gaussian distributions, and the change of the prompt neutron number associated with the incident energy; thus, the incident-energy-dependent behavior of the delayed neutrons was analyzed. The total number of delayed neutrons is expressed as ν_d = Σ_i Y_i · P_n,i in the summation method, where Y_i is the mass yield of precursor i and P_n,i is the delayed neutron emission probability of precursor i. The value of Y_i is derived from a calculation of the post-neutron-emission mass distribution using five Gaussian equations with consideration of the large distribution of the fission fragments. The prompt neutron emission ν_p increases at higher incident energy, but there are two different models of the fission fragment mass dependence: one model says that prompt neutron emission increases uniformly regardless of the fission fragment mass, and the other says that the major increase occurs in the heavy fission fragment region. In this study, the changes of the delayed neutron yields predicted by the two models have been investigated.
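    The summation method itself reduces to one line of arithmetic. In this sketch the precursor yields Y_i and emission probabilities P_n,i are rough illustrative numbers, not evaluated nuclear data.

```python
# Summation method: nu_d = sum_i Y_i * P_n,i over the delayed-neutron
# precursors i. Yields and Pn values below are made up for illustration.
precursors = {
    "Br-87": {"Y": 0.020, "Pn": 0.026},
    "I-137": {"Y": 0.031, "Pn": 0.071},
    "Br-88": {"Y": 0.015, "Pn": 0.066},
}

nu_d = sum(p["Y"] * p["Pn"] for p in precursors.values())  # delayed neutrons per fission
```

    The energy dependence enters entirely through the Y_i terms: as the incident energy reshapes the five-Gaussian mass distribution and the prompt neutron number, the precursor yields, and hence ν_d, shift with it.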

  4. Improving the efficiency of configurational-bias Monte Carlo: A density-guided method for generating bending angle trials for linear and branched molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu

    2014-08-21

    A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function is required. This numerical table can be generated a priori from the distribution function. The method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables can still be well accounted for, as evident from the nearly perfect acceptance rates achieved. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial needs to be generated for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time-consuming process, is no longer the time-dominating component of the simulation.
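    The tabulated-density idea can be sketched with inverse-CDF sampling from a numerically tabulated bending-angle distribution. The harmonic force constant and 114-degree equilibrium angle are hypothetical stand-ins for a united-atom bending potential, and the sin(theta) factor is the usual angular volume element.

```python
import numpy as np

# Build the cumulative table once, a priori, then map each uniform variate
# straight to a bending angle where the density is large, so essentially
# every generated trial geometry is acceptable.
rng = np.random.default_rng(2)

theta = np.linspace(0.0, np.pi, 2001)
pdf = np.sin(theta) * np.exp(-50.0 * (theta - np.radians(114.0)) ** 2)

cdf = np.cumsum(pdf)
cdf /= cdf[-1]

def draw_angles(n):
    # inverse-CDF lookup against the precomputed table
    return np.interp(rng.random(n), cdf, theta)

samples = draw_angles(10000)
```

    Contrast this with drawing theta uniformly on [0, pi] and accepting or rejecting against the Boltzmann weight, where most trials would fall in low-density regions and be wasted.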

  5. Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation.

    PubMed

    Kong, Danyang; Asplund, Christopher L; Ling, Aiqing; Chee, Michael W L

    2015-08-01

    Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of temporal distribution on foreperiod and sequential effects. Sleep laboratory. There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Despite behavioral lapses and slower response times, sleep deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. 
© 2015 Associated Professional Sleep Societies, LLC.

  6. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    of uniform- versus nonuniform -pattern reconstruction, of transform function used, and of minimum randomly distributed measurements needed to...the radiation-frequency pattern’s reconstruction using uniform and nonuniform randomly distributed samples even though the pattern error manifests...5 Fig. 3 The nonuniform compressive-sensing reconstruction of the radiation

  7. V/V(max) test applied to SMM gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Matz, S. M.; Higdon, J. C.; Share, G. H.; Messina, D. C.; Iadicicco, A.

    1992-01-01

    We have applied the V/V(max) test to candidate gamma-ray bursts detected by the Gamma-Ray Spectrometer (GRS) aboard the SMM satellite to examine quantitatively the uniformity of the burst source population. For a sample of 132 candidate bursts identified in the GRS data by an automated search using a single uniform trigger criterion we find average V/V(max) = 0.40 +/- 0.025. This value is significantly different from 0.5, the average for a uniform distribution in space of the parent population of burst sources; however, the shape of the observed distribution of V/V(max) is unusual and our result conflicts with previous measurements. For these reasons we can currently draw no firm conclusion about the distribution of burst sources.
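The statistic itself is simple to compute: for each burst, V/V(max) = (C_lim/C_peak)^(3/2), where C_peak is the measured peak intensity and C_lim the detection threshold, and a parent population uniform in Euclidean space averages 0.5. A hedged sketch with synthetic data (illustrative only, not the GRS pipeline):

```python
import random

def v_over_vmax(c_peak, c_lim):
    # Volume ratio for an inverse-square flux: (C_lim / C_peak)^(3/2)
    return (c_lim / c_peak) ** 1.5

def mean_v_vmax(peaks, c_lim):
    vals = [v_over_vmax(c, c_lim) for c in peaks if c >= c_lim]
    return sum(vals) / len(vals)

# Synthetic check: sources uniform in a sphere around the detector,
# with inverse-square fluxes, should give <V/Vmax> ~ 0.5.
random.seed(1)
c_lim = 1.0
peaks = []
while len(peaks) < 20_000:
    r = random.random() ** (1 / 3)   # radius uniform in volume of unit sphere
    c = 1.0 / r**2                   # inverse-square flux, unit luminosity
    if c >= c_lim:
        peaks.append(c)
print(round(mean_v_vmax(peaks, c_lim), 2))   # close to 0.5
```

A measured average of 0.40 +/- 0.025, as reported above, sits several standard errors below the uniform expectation, which is why the shape of the observed distribution matters before drawing conclusions.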

  8. Residential Mobility, Age, and the Life Cycle

    ERIC Educational Resources Information Center

    Yee, William; Arsdol, Maurice D. Van, Jr.

    1977-01-01

    A life cycle explanation of residential mobility is presented. It posits that age-related events in a normative context influence moving probabilities for homogeneous populations who have relatively uniform socialization. (Author)

  9. Design and development of novel bandages for compression therapy.

    PubMed

    Rajendran, Subbiyan; Anand, Subhash

    2003-03-01

During the past few years there have been increasing concerns relating to the performance of bandages, especially their pressure distribution properties for the treatment of venous leg ulcers. This is because compression therapy is a complex system that requires two-layer or multi-layer bandages, and the performance properties of each layer differ from those of the others. The widely accepted sustained graduated compression mainly depends on the uniform pressure distribution of different layers of bandages, in which textile fibres and bandage structures play a major role. This article examines how the fibres, fibre blends and structures influence the absorption and pressure distribution properties of bandages. It is hoped that the research findings will help medical professionals, especially nurses, to gain an insight into the development of bandages. A total of 12 padding bandages have been produced using various fibres and fibre blends. A new technique that would facilitate good resilience and cushioning properties, higher and more uniform pressure distribution and enhanced water absorption and retention was adopted during the production. It has been found that the properties of the developed padding bandages, which include uniform pressure distribution around the leg, are superior to existing commercial bandages, and that they possess a number of additional properties required to meet the criteria stipulated for an ideal padding bandage. Results have indicated that none of the most widely used commercial padding bandages provides the required uniform pressure distribution around the limb.

  10. Shade tree spatial structure and pod production explain frosty pod rot intensity in cacao agroforests, Costa Rica.

    PubMed

    Gidoin, Cynthia; Avelino, Jacques; Deheuvels, Olivier; Cilas, Christian; Bieng, Marie Ange Ngo

    2014-03-01

    Vegetation composition and plant spatial structure affect disease intensity through resource and microclimatic variation effects. The aim of this study was to evaluate the independent effect and relative importance of host composition and plant spatial structure variables in explaining disease intensity at the plot scale. For that purpose, frosty pod rot intensity, a disease caused by Moniliophthora roreri on cacao pods, was monitored in 36 cacao agroforests in Costa Rica in order to assess the vegetation composition and spatial structure variables conducive to the disease. Hierarchical partitioning was used to identify the most causal factors. Firstly, pod production, cacao tree density and shade tree spatial structure had significant independent effects on disease intensity. In our case study, the amount of susceptible tissue was the most relevant host composition variable for explaining disease intensity by resource dilution. Indeed, cacao tree density probably affected disease intensity more by the creation of self-shading rather than by host dilution. Lastly, only regularly distributed forest trees, and not aggregated or randomly distributed forest trees, reduced disease intensity in comparison to plots with a low forest tree density. A regular spatial structure is probably crucial to the creation of moderate and uniform shade as recommended for frosty pod rot management. As pod production is an important service expected from these agroforests, shade tree spatial structure may be a lever for integrated management of frosty pod rot in cacao agroforests.

  11. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as Betabinomial(n, α + active days, β + inactive days) assuming that p is randomly distributed as Beta(α, β) where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
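The posterior-predictive step above can be sketched directly: the probability of being active on k of n days, given the observed active/inactive counts, is the Beta-binomial pmf with updated parameters. A minimal illustration (not Statistics Canada's code; the prior values α and β here are assumed, whereas the article fits them by maximum likelihood):

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: C(n,k) * B(k+a, n-k+b) / B(a, b)."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_comb + log_beta(k + a, n - k + b) - log_beta(a, b))

def p_meets_guidelines(active, inactive, alpha, beta, n_days=7):
    """P(active on all n_days of n_days) given observed active/inactive days."""
    return betabinom_pmf(n_days, n_days, alpha + active, beta + inactive)

# A child active on 4 of 6 assessed days, with an assumed prior Beta(2, 1):
print(round(p_meets_guidelines(active=4, inactive=2, alpha=2.0, beta=1.0), 3))
# -> 0.123
```

Because the pmf is evaluated at the discrete point k = n, this matches the discrete nature of the day counts, which is the advantage over the continuous Beta formulation noted in the abstract.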

  12. Locus of frequency-dependent depression identified with multiple-probability fluctuation analysis at rat climbing fibre-Purkinje cell synapses

    PubMed Central

    Silver, R Angus; Momiyama, Akiko; Cull-Candy, Stuart G

    1998-01-01

EPSCs were recorded under whole-cell voltage clamp at room temperature from Purkinje cells in slices of cerebellum from 12- to 14-day-old rats. EPSCs from individual climbing fibre (CF) inputs were identified on the basis of their large size, paired-pulse depression and all-or-none appearance in response to a graded stimulus. Synaptic transmission was investigated over a wide range of experimentally imposed release probabilities by analysing fluctuations in the peak of the EPSC. Release probability was manipulated by altering the extracellular [Ca2+] and [Mg2+]. Quantal parameters were estimated from plots of coefficient of variation (CV) or variance against mean conductance by fitting a multinomial model that incorporated both spatial variation in quantal size and non-uniform release probability. This ‘multiple-probability fluctuation’ (MPF) analysis gave an estimate of 510 ± 50 for the number of functional release sites (N) and a quantal size (q) of 0.5 ± 0.03 nS (n = 6). Control experiments, and simulations examining the effects of non-uniform release probability, indicate that MPF analysis provides a reliable estimate of quantal parameters. Direct measurement of quantal amplitudes in the presence of 5 mM Sr2+, which gave asynchronous release, yielded distributions with a mean quantal size of 0.55 ± 0.01 nS and a CV of 0.37 ± 0.01 (n = 4). Similar estimates of q were obtained in 2 mM Ca2+ when release probability was lowered with the calcium channel blocker Cd2+. The non-NMDA receptor antagonist 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX; 1 μM) reduced both the evoked current and the quantal size (estimated with MPF analysis) to a similar degree, but did not affect the estimate of N. We used MPF analysis to identify those quantal parameters that change during frequency-dependent depression at climbing fibre-Purkinje cell synaptic connections. 
At low stimulation frequencies, the mean release probability (P̄r) was unusually high (0.90 ± 0.03 at 0.033 Hz, n = 5), but as the frequency of stimulation was increased, P̄r fell dramatically (0.02 ± 0.01 at 10 Hz, n = 4) with no apparent change in either q or N. This indicates that the observed 50-fold depression in EPSC amplitude is presynaptic in origin. Presynaptic frequency-dependent depression was investigated with double-pulse and multiple-pulse protocols. EPSC recovery, following simultaneous release at practically all sites, was slow, being well fitted by the sum of two exponential functions (time constants of 0.35 ± 0.09 and 3.2 ± 0.4 s, n = 5). EPSC recovery following sustained stimulation was even slower. We propose that presynaptic depression at CF synapses reflects a slow recovery of release probability following release of each quantum of transmitter. The large number of functional release sites, relatively large quantal size, and unusual dynamics of transmitter release at the CF synapse appear specialized to ensure highly reliable olivocerebellar transmission at low frequencies but to limit transmission at higher frequencies. PMID:9660900

  13. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. 
The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
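Generalization (1) can be checked numerically. A sketch under stated assumptions (a thin elongate target shorter than the line spacing, random position and orientation; this is the classic Buffon-needle setting, not the paper's charts): the parallel-line intersection probability is 2L/(πs), proportional to the target's greatest dimension L over the line spacing s.

```python
import math
import random

def hit_probability(length, spacing, trials=200_000, seed=2):
    """Monte Carlo estimate: chance a randomly placed, randomly oriented
    thin target of given length crosses a set of parallel search lines."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0.0, spacing)       # target centre between two lines
        theta = rng.uniform(0.0, math.pi)   # orientation relative to the lines
        half_span = 0.5 * length * abs(math.sin(theta))  # extent across lines
        if y - half_span <= 0.0 or y + half_span >= spacing:
            hits += 1
    return hits / trials

p_mc = hit_probability(length=1.0, spacing=2.0)
p_exact = 2 * 1.0 / (math.pi * 2.0)   # Buffon's needle: 2L/(pi*s) ~ 0.318
print(round(p_mc, 3), round(p_exact, 3))
```

The factor 2/π is just the mean projected extent of a randomly oriented segment, which is why, to first approximation, only the greatest dimension of the target matters when it is small relative to the spacing.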

  14. Gamma rays of energy ≥ 10(15) eV from Cyg X-3

    NASA Technical Reports Server (NTRS)

    Kifune, T.; Nishijima, K.; Hara, T.; Hatano, Y.; Hayashida, N.; Honda, M.; Kamata, K.; Matsubara, Y.; Mori, M.; Nagano, M.

    1985-01-01

    The experimental data of extensive air showers observed at Akeno have been analyzed to detect the gamma ray signal from Cyg X-3. After muon-poor air showers are selected, the correlation of data acquisition time with the 4.8 hour X-ray period is studied, showing a concentration of data near phase 0.6, the time of X-ray maximum. The probability that uniform backgrounds create the distribution is 0.2%. The time-averaged integral gamma ray flux is estimated as (1.1 ± 0.4) × 10^-14 cm^-2 s^-1 for E0 ≥ 10^15 eV and (8.8 ± 5.0) × 10^-14 cm^-2 s^-1 for E0 ≥ 6 × 10^14 eV.

  15. An experimental investigation of the force network ensemble

    NASA Astrophysics Data System (ADS)

    Kollmer, Jonathan E.; Daniels, Karen E.

    2017-06-01

    We present an experiment in which a horizontal quasi-2D granular system with a fixed neighbor network is cyclically compressed and decompressed over 1000 cycles. We remove basal friction by floating the particles on a thin air cushion, so that particles only interact in-plane. As expected for a granular system, the applied load is not distributed uniformly, but is instead concentrated in force chains which form a network throughout the system. To visualize the structure of these networks, we use particles made from photoelastic material. The experimental setup and a new data-processing pipeline allow us to map out the evolution subject to the cyclic compressions. We characterize several statistical properties of the packing, including the probability density function of the contact force, and compare them with theoretical and numerical predictions from the force network ensemble theory.

  16. Complexity Induced Anisotropic Bimodal Intermittent Turbulence in Space Plasmas

    NASA Technical Reports Server (NTRS)

    Chang, Tom; Tam, Sunny W. Y.; Wu, Cheng-Chin

    2004-01-01

    The "physics of complexity" in space plasmas is the central theme of this exposition. It is demonstrated that the sporadic and localized interactions of magnetic coherent structures arising from the plasma resonances can be the source for the coexistence of nonpropagating spatiotemporal fluctuations and propagating modes. Non-Gaussian probability distribution functions of the intermittent fluctuations from direct numerical simulations are obtained and discussed. Power spectra and local intermittency measures using wavelet analyses are presented to display the spottiness of the small-scale turbulent fluctuations and the non-uniformity of coarse-grained dissipation that can lead to magnetic topological reconfigurations. The technique of the dynamic renormalization group is applied to the study of the scaling properties of such multiscale fluctuations. Charged particle interactions with both the propagating and nonpropagating portions of the intermittent turbulence are also described.

  17. Statistical distributions of avalanche size and waiting times in an inter-sandpile cascade model

    NASA Astrophysics Data System (ADS)

    Batac, Rene; Longjas, Anthony; Monterola, Christopher

    2012-02-01

    Sandpile-based models have successfully shed light on key features of nonlinear relaxational processes in nature, particularly the occurrence of fat-tailed magnitude distributions and exponential return times, from simple local stress redistributions. In this work, we extend the existing sandpile paradigm into an inter-sandpile cascade, wherein the avalanches emanating from a uniformly driven sandpile (first layer) are used to trigger the next (second layer), and so on, in a successive fashion. Statistical characterizations reveal that avalanche size distributions evolve from a power law p(S) ~ S^-1.3 for the first layer to gamma distributions p(S) ~ S^α exp(-S/S0) for layers far away from the uniformly driven sandpile. The resulting avalanche size statistics are found to be associated with the corresponding waiting time distribution, as explained in an accompanying analytic formulation. Interestingly, both the numerical and analytic models show good agreement with actual inventories of non-uniformly driven events in nature.
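The qualitative difference between the two regimes reported above (power-law first layer versus gamma-distributed deep layers) can be illustrated by sampling each form; the parameter values below are assumed for illustration, not fitted to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_power_law(n, exponent=1.3, s_min=1.0, s_max=1e4):
    """Inverse-CDF sampling of p(S) ~ S^-exponent truncated to [s_min, s_max]."""
    u = rng.random(n)
    a = 1.0 - exponent
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

power_sizes = sample_power_law(100_000)
# Gamma form p(S) ~ S^alpha * exp(-S/S0) with alpha = 1, S0 = 50
# (numpy's shape parameter is alpha + 1).
gamma_sizes = rng.gamma(shape=2.0, scale=50.0, size=100_000)

# The power law's fat tail produces far larger extreme events than the
# exponentially cut-off gamma distribution.
print(np.max(power_sizes) > np.max(gamma_sizes))   # True
```

The exponential cutoff S0 is what makes the deep-layer statistics qualitatively tamer: extreme avalanches are exponentially suppressed rather than merely rare.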

  18. Simulation of air velocity in a vertical perforated air distributor

    NASA Astrophysics Data System (ADS)

    Ngu, T. N. W.; Chu, C. M.; Janaun, J. A.

    2016-06-01

    Perforated pipes are utilized to divide a fluid flow into several smaller streams. Uniform flow distribution is of great concern in engineering applications because it has significant influence on the performance of fluidic devices. For industrial applications, it is crucial to provide a uniform velocity distribution through the orifices. In this research, the flow distribution patterns of a closed-end, multiple-outlet pipe standing vertically for air delivery in the horizontal direction were simulated. Computational Fluid Dynamics (CFD), a research tool for enhancing and understanding design, was used as the simulator, and the drawing software SolidWorks was used for geometry setup. The main purpose of this work is to establish the influence of orifice size, intervals between outlets, and tube length on the uniformity of exit flows through a multi-outlet perforated tube. However, because the gravitational effect causes the compactness of the paddy to increase gradually from top to bottom of the dryer, a uniform flow pattern was targeted for the top orifices and a larger flow for the bottom orifices.

  19. Bayesian analysis of the kinetics of quantal transmitter secretion at the neuromuscular junction.

    PubMed

    Saveliev, Anatoly; Khuzakhmetova, Venera; Samigullin, Dmitry; Skorinkin, Andrey; Kovyazina, Irina; Nikolsky, Eugeny; Bukharaeva, Ellya

    2015-10-01

    The timing of transmitter release from nerve endings is considered nowadays as one of the factors determining the plasticity and efficacy of synaptic transmission. In the neuromuscular junction, the moments of release of individual acetylcholine quanta are related to the synaptic delays of uniquantal endplate currents recorded under conditions of lowered extracellular calcium. Using Bayesian modelling, we performed a statistical analysis of synaptic delays in mouse neuromuscular junction with different patterns of rhythmic nerve stimulation and when the entry of calcium ions into the nerve terminal was modified. We have obtained a statistical model of the release timing which is represented as the summation of two independent statistical distributions. The first of these is the exponentially modified Gaussian distribution. The mixture of normal and exponential components in this distribution can be interpreted as a two-stage mechanism of early and late periods of phasic synchronous secretion. The parameters of this distribution depend on both the stimulation frequency of the motor nerve and the calcium ions' entry conditions. The second distribution was modelled as quasi-uniform, with parameters independent of nerve stimulation frequency and calcium entry. Two different probability density functions for the distribution of synaptic delays suggest at least two independent processes controlling the time course of secretion, one of them potentially involving two stages. The relative contribution of these processes to the total number of mediator quanta released depends differently on the motor nerve stimulation pattern and on calcium ion entry into nerve endings.
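The two-component model described above can be sketched as a sampler: an exponentially modified Gaussian for the phasic synchronous component (normal onset stage plus exponential late stage) mixed with a uniform component for asynchronous release. All parameter values here are assumed for illustration; the article fits its parameters to recorded delays.

```python
import random

def sample_delay(rng, p_phasic=0.9, mu=0.5, sigma=0.1, tau=0.3,
                 t_min=0.0, t_max=5.0):
    """Draw one synaptic delay (ms) from the two-component mixture."""
    if rng.random() < p_phasic:
        # Ex-Gaussian: Gaussian (early stage) + exponential (late stage)
        return rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)
    # Quasi-uniform asynchronous component
    return rng.uniform(t_min, t_max)

rng = random.Random(3)
delays = [sample_delay(rng) for _ in range(50_000)]
mean = sum(delays) / len(delays)
# Analytic mixture mean: 0.9*(mu + tau) + 0.1*(t_min + t_max)/2 = 0.97
print(round(mean, 2))
```

The convolution structure of the ex-Gaussian is what supports the abstract's two-stage interpretation: its normal and exponential components can be read as early and late periods of phasic synchronous secretion.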

  20. School Uniform Policies in Public Schools

    ERIC Educational Resources Information Center

    Brunsma, David L.

    2006-01-01

    The movement for school uniforms in public schools continues to grow despite the author's research indicating little if any impact on student behavior, achievement, and self-esteem. The author examines the distribution of uniform policies by region and demographics, the impact of these policies on perceptions of school climate and safety, and…

  1. Aging transition in systems of oscillators with global distributed-delay coupling.

    PubMed

    Rahman, B; Blyuss, K B; Kyrychko, Y N

    2017-09-01

    We consider a globally coupled network of active (oscillatory) and inactive (nonoscillatory) oscillators with distributed-delay coupling. Conditions for aging transition, associated with suppression of oscillations, are derived for uniform and gamma delay distributions in terms of coupling parameters and the proportion of inactive oscillators. The results suggest that for the uniform distribution increasing the width of distribution for the same mean delay allows aging transition to happen for a smaller coupling strength and a smaller proportion of inactive elements. For gamma distribution with sufficiently large mean time delay, it may be possible to achieve aging transition for an arbitrary proportion of inactive oscillators, as long as the coupling strength lies in a certain range.

  2. Range-azimuth decouple beamforming for frequency diverse array with Costas-sequence modulated frequency offsets

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Wang, Wen-Qin; Shao, Huaizong

    2016-12-01

    Different from the phased-array using the same carrier frequency for each transmit element, the frequency diverse array (FDA) uses a small frequency offset across the array elements to produce range-angle-dependent transmit beampattern. FDA radar provides new application capabilities and potentials due to its range-dependent transmit array beampattern, but the FDA using linearly increasing frequency offsets will produce a range and angle coupled transmit beampattern. In order to decouple the range-azimuth beampattern for FDA radar, this paper proposes a uniform linear array (ULA) FDA using Costas-sequence modulated frequency offsets to produce random-like energy distribution in the transmit beampattern and thumbtack transmit-receive beampattern. In doing so, the range and angle of targets can be unambiguously estimated through matched filtering and subspace decomposition algorithms in the receiver signal processor. Moreover, random-like energy distributed beampattern can also be utilized for low probability of intercept (LPI) radar applications. Numerical results show that the proposed scheme outperforms the standard FDA in focusing the transmit energy, especially in the range dimension.

  3. New data on the presence of hemocyanin in Plecoptera: recomposing a puzzle.

    PubMed

    Amore, Valentina; Gaetani, Brunella; Puig, Maria Angeles; Fochetti, Romolo

    2011-01-01

    The specific role of hemocyanin in Plecoptera (stoneflies) is still not completely understood, since none of the hypotheses advanced have proven fully convincing. Previous data show that mRNA hemocyanin sequences are not present in all Plecoptera, and that hemocyanin does not seem to be uniformly distributed within the order. All species possess hexamerins, which are multifunction proteins that probably originated from hemocyanin. In order to obtain an increasingly detailed picture on the presence and distribution of hemocyanin across the order, this study presents new data regarding nymphs and adults of selected Plecoptera species. Results confirm that the hemocyanin expression differs among nymphs in the studied stonefly species. Even though previous studies have found hemocyanin in adults of two stonefly species it was not detected in the present study, even in species where nymphs show hemocyanin, suggesting that the physiological need of this protein can change during life cycle. The phylogenetic pattern obtained using hemocyanin sequences matches the accepted scheme of traditional phylogeny based on morphology, anatomy, and biology. It is remarkable to note that the hemocyanin conserved region acts like a phylogenetic molecular marker within Plecoptera.

  4. Kyllinga brevifolia mediated greener silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Isa, Norain; Bakhari, Nor Aziyah; Sarijo, Siti Halimah; Aziz, Azizan; Lockman, Zainovia

    2017-12-01

    Kyllinga brevifolia extract (KBE) was studied in this research as a capping as well as reducing agent for the synthesis of greener, plant-mediated silver nanoparticles. This research was conducted in order to identify the compounds in the KBE that are likely to act as reductants for the synthesis of Kyllinga brevifolia-mediated silver nanoparticles (AgNPs). Screening tests such as Thin Layer Chromatography (TLC), Fourier Transform Infra-Red (FTIR) spectroscopy, Carlo Erba elemental analysis and Gas Chromatography-Mass Spectroscopy (GCMS) were used to identify the natural compounds in KBE. The as-prepared AgNPs were characterized by UV-vis spectroscopy (UV-vis), Transmission Electron Microscopy (TEM) and X-ray Diffraction (XRD). The TEM images showed that the as-synthesized silver nanoparticles are quasi-spherical and uniformly distributed, with a narrow size distribution from 5 nm to 40 nm. The XRD results demonstrated that the obtained AgNPs have a face-centred cubic (FCC) structure. The catalytic activity of the AgNPs in the reduction of methylene blue (MB) using sodium borohydride (SB) was analyzed using UV-vis spectroscopy. This study demonstrated the efficacy of the mediated AgNPs in catalysing the reduction of MB.

  5. Wind-Induced Reconfigurations in Flexible Branched Trees

    NASA Astrophysics Data System (ADS)

    Ojo, Oluwafemi; Shoele, Kourosh

    2017-11-01

    Wind-induced stresses are the major mechanical cause of failure in trees. The branching mechanism is known to have an important effect on the stress distribution and stability of a tree in the wind. Eloy (PRL, 2011) showed that Leonardo da Vinci's original observation, which states that the total cross section of branches is conserved across branching nodes, is the best configuration for resisting wind-induced fracture in rigid trees. However, predicting the fracture risk and pattern of a tree also depends on its reconfiguration capabilities and how it mitigates large wind-induced stresses. In this study, by developing an efficient numerical simulation of flexible branched trees, we explore the role of tree flexibility in the optimal branching. Our results show that the probability of a tree breaking at any point depends on both the cross-section changes at the branching nodes and the level of tree flexibility. It is found that the branching mechanism based on Leonardo da Vinci's original observation leads to a uniform stress distribution over a wide range of flexibilities, but the pattern changes for more flexible systems.
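Leonardo da Vinci's rule cited above has a simple arithmetic form: the total cross-sectional area is conserved across a branching node, i.e. the sum of squared child diameters equals the squared parent diameter (the π/4 factors cancel). A minimal sketch with illustrative diameters:

```python
def obeys_da_vinci(parent_diameter, child_diameters, tol=1e-9):
    """Check area conservation across a branching node:
    sum(d_child^2) == d_parent^2."""
    total_child_area = sum(d ** 2 for d in child_diameters)
    return abs(total_child_area - parent_diameter ** 2) <= tol

print(obeys_da_vinci(10.0, [8.0, 6.0]))   # 64 + 36 == 100 -> True
print(obeys_da_vinci(10.0, [9.0, 6.0]))   # 81 + 36 != 100 -> False
```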

  6. Mapping the distribution of vesicular textures on silicic lavas using the Thermal Infrared Multispectral Scanner

    NASA Technical Reports Server (NTRS)

    Ondrusek, Jaime; Christensen, Philip R.; Fink, Jonathan H.

    1993-01-01

    To investigate the effect of vesicularity on TIMS (Thermal Infrared Multispectral Scanner) imagery independent of chemical variations, we studied a large rhyolitic flow of uniform composition but textural heterogeneity. The imagery was recalibrated so that the digital number values for a lake in the scene matched a calculated ideal spectrum for water. TIMS spectra for the lava show useful differences in coarsely and finely vesicular pumice data, particularly in TIMS bands 3 and 4. Images generated by ratioing these bands accurately map out those areas known from field studies to be coarsely vesicular pumice. These texture-related emissivity variations are probably due to the larger vesicles being relatively deeper and separated by smaller septa leaving less smooth glass available to give the characteristic emission of the lava. In studies of inaccessible lava flows (as on Mars) areas of coarsely vesicular pumice must be identified and avoided before chemical variations can be interpreted. Remotely determined distributions of vesicular and glassy textures can also be related to the volatile contents and potential hazards associated with the emplacement of silicic lava flows on Earth.

  7. Colombia: A Country Under Constant Threat of Disasters

    DTIC Science & Technology

    2014-05-22

    Disasters strike every nation in the world, and although these events do not occur with uniformity of distribution, developing nations suffer the greatest...

  8. Controlling Growth High Uniformity Indium Selenide (In2Se3) Nanowires via the Rapid Thermal Annealing Process at Low Temperature.

    PubMed

    Hsu, Ya-Chu; Hung, Yu-Chen; Wang, Chiu-Yen

    2017-09-15

    High-uniformity Au-catalyzed indium selenide (In2Se3) nanowires are grown with a rapid thermal annealing (RTA) treatment via the vapor-liquid-solid (VLS) mechanism. The diameters of Au-catalyzed In2Se3 nanowires can be controlled with varied thicknesses of Au films, and the uniformity of the nanowires is improved via a fast pre-annealing rate of 100 °C/s. Compared with the slower heating rate of 0.1 °C/s, the average diameters and distributions (standard deviation, SD) of In2Se3 nanowires with and without the RTA process are 97.14 ± 22.95 nm (23.63%) and 119.06 ± 48.75 nm (40.95%), respectively. In situ annealing TEM is used to study the effect of heating rate on the formation of Au nanoparticles from the as-deposited Au film. The results demonstrate that the average diameters and distributions of Au nanoparticles with and without the RTA process are 19.84 ± 5.96 nm (30.00%) and 22.06 ± 9.00 nm (40.80%), respectively. This proves that the diameter size, distribution, and uniformity of Au-catalyzed In2Se3 nanowires are reduced and improved by the RTA pre-treatment. This systematic study could help to control the size distribution of other nanomaterials through tuning of the annealing rate and the temperatures of the precursor and growth substrate. Graphical Abstract: The rapid thermal annealing (RTA) process makes the size distribution of Au nanoparticles uniform, so it can be used to grow high-uniformity Au-catalyzed In2Se3 nanowires via the vapor-liquid-solid (VLS) mechanism. Under the general growth conditions, by contrast, the heating rate is slow (0.1 °C/s) and the growth temperature is relatively high (> 650 °C). An RTA pre-treated growth substrate forms smaller and more uniform Au nanoparticles that react with the In2Se3 vapor to produce high-uniformity In2Se3 nanowires. The byproduct of self-catalyzed In2Se3 nanoplates can be inhibited by lowering the precursor and growth temperatures.
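The diameter uniformity figures quoted above are relative standard deviations (SD divided by the mean). A minimal sketch of that calculation, using the values reported in the abstract (the helper name is ours):

```python
# Quantifying nanowire diameter uniformity as relative standard deviation (RSD).
# Mean and SD values are taken from the abstract above.

def relative_sd(mean_nm: float, sd_nm: float) -> float:
    """Return the relative standard deviation in percent."""
    return 100.0 * sd_nm / mean_nm

with_rta = relative_sd(97.14, 22.95)      # RTA-treated In2Se3 nanowires
without_rta = relative_sd(119.06, 48.75)  # conventional slow annealing

print(f"with RTA: {with_rta:.2f}%, without RTA: {without_rta:.2f}%")
```

The two results reproduce the 23.63% and 40.95% figures in the abstract, which is how the RTA-induced improvement in uniformity is quantified.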

  9. Processing of laser formed SiC powder

    NASA Technical Reports Server (NTRS)

    Haggerty, J. S.; Bowen, H. K.

    1985-01-01

    Superior SiC characteristics can be achieved through the use of ideal constituent powders and careful post-synthesis processing steps. High purity SiC powders of approx. 1000 Å uniform diameter, nonagglomerated and spherical, were produced. This required major revision of the particle formation and growth model from one based on classical nucleation and growth to one based on collision and coalescence of Si particles followed by their carburization. Dispersions based on pure organic solvents as well as steric stabilization were investigated. Although stable dispersions were formed by both, subsequent part fabrication emphasized the pure solvents since fewer problems with drying and residuals of the high purity particles were anticipated. Test parts were made by the colloidal pressing technique; both liquid filtration and consolidation (rearrangement) stages were modeled. Green densities corresponding to a random close packed structure (approx. 63%) were achieved; this highly perfect structure has a high, uniform coordination number (greater than 11) approaching the quality of an ordered structure without introducing domain boundary effects. After drying, parts were densified at temperatures ranging from 1800 to 2100 °C. Optimum densification temperatures will probably be in the 1900 to 2000 °C range based on these preliminary results, which showed that 2050 °C samples had experienced substantial grain growth. Although overfired, the 2050 °C samples exhibited excellent mechanical properties. Biaxial tensile strengths up to 714 MPa and Vickers hardness values of 2430 kg/mm² were both more typical of hot-pressed than sintered SiC. Both result from the absence of large defects and the confinement of residual porosity (less than 2.5%) to small diameter, uniformly distributed pores.

  10. Effect of the Temperature of the Moderator on the Velocity Distribution of Neutrons with Numerical Calculations for H as Moderator

    DOE R&D Accomplishments Database

    Wigner, E. P.; Wilkins, J. E. Jr.

    1944-09-14

    In this paper we set up an integral equation governing the energy distribution of neutrons that are being slowed down uniformly throughout the entire space by a uniformly distributed moderator whose atoms are in motion with a Maxwellian distribution of velocities. The effects of chemical binding and crystal reflection are ignored. When the moderator is hydrogen, the integral equation is reduced to a differential equation and solved by numerical methods. In this manner we obtain a refinement of the dv/v² law. (auth)
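For reference, the dv/v² law refined in the paper is the standard epithermal slowing-down spectrum (a textbook result, not taken from the paper itself): the familiar 1/E flux, rewritten as a neutron density per unit velocity.

```latex
% Epithermal slowing-down spectrum: the 1/E flux, expressed as a neutron
% density per unit velocity, gives the dv/v^2 law refined in the paper.
\[
  \phi(E)\,\mathrm{d}E \;\propto\; \frac{\mathrm{d}E}{E}
  \qquad\Longleftrightarrow\qquad
  n(v)\,\mathrm{d}v \;=\; \frac{\phi(v)}{v}\,\mathrm{d}v \;\propto\; \frac{\mathrm{d}v}{v^{2}},
\]
using $E=\tfrac{1}{2}mv^{2}$ and $\mathrm{d}E = mv\,\mathrm{d}v$.
```

The Wigner-Wilkins treatment corrects this asymptotic form near thermal energies, where the Maxwellian motion of the moderator atoms can no longer be neglected.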

  11. Characterization of Dispersive Ultrasonic Rayleigh Surface Waves in Asphalt Concrete

    NASA Astrophysics Data System (ADS)

    In, Chi-Won; Kim, Jin-Yeon; Jacobs, Laurence J.; Kurtis, Kimberly E.

    2008-02-01

    This research focuses on the application of ultrasonic Rayleigh surface waves to nondestructively characterize the mechanical properties and structural defects (non-uniformly distributed aggregate) in asphalt concrete. An efficient wedge technique for generating Rayleigh surface waves is developed in this study and shown to be effective in this highly viscoelastic (attenuating) and heterogeneous medium. Experiments are performed on an asphalt-concrete beam produced with uniformly distributed aggregate. Ultrasonic techniques using both contact and non-contact sensors are examined and their results are compared. Experimental results show that the wedge technique, along with an air-coupled sensor, is effective in characterizing Rayleigh waves in asphalt concrete. Hence, measurement of these material properties in non-uniformly distributed aggregate material needs to be investigated using these techniques.

  12. Three-Dimensional Radiobiologic Dosimetry: Application of Radiobiologic Modeling to Patient-Specific 3-Dimensional Imaging–Based Internal Dosimetry

    PubMed Central

    Prideaux, Andrew R.; Song, Hong; Hobbs, Robert F.; He, Bin; Frey, Eric C.; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George

    2010-01-01

    Phantom-based and patient-specific imaging-based dosimetry methodologies have traditionally yielded mean organ-absorbed doses or spatial dose distributions over tumors and normal organs. In this work, radiobiologic modeling is introduced to convert the spatial distribution of absorbed dose into biologically effective dose and equivalent uniform dose parameters. The methodology is illustrated using data from a thyroid cancer patient treated with radioiodine. Methods: Three registered SPECT/CT scans were used to generate 3-dimensional images of radionuclide kinetics (clearance rate) and cumulated activity. The cumulated activity image and corresponding CT scan were provided as input into an EGSnrc-based Monte Carlo calculation: the cumulated activity image was used to define the distribution of decays, and an attenuation image derived from CT was used to define the corresponding spatial tissue density and composition distribution. The rate images were used to convert the spatial absorbed dose distribution to a biologically effective dose distribution, which was then used to estimate a single equivalent uniform dose for segmented volumes of interest. Equivalent uniform dose was also calculated from the absorbed dose distribution directly. Results: We validate the method using simple models; compare the dose-volume histogram with a previously analyzed clinical case; and give the mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for an illustrative case of a pediatric thyroid cancer patient with diffuse lung metastases. The mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for the tumor were 57.7, 58.5, and 25.0 Gy, respectively. Corresponding values for normal lung tissue were 9.5, 9.8, and 8.3 Gy, respectively. Conclusion: The analysis demonstrates the impact of radiobiologic modeling on response prediction. The 57% reduction in the equivalent uniform dose value for the tumor reflects a high level of dose nonuniformity in the tumor and a correspondingly reduced likelihood of achieving a tumor response. Such analyses are expected to be useful in treatment planning for radionuclide therapy. PMID:17504874
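The penalty that dose nonuniformity imposes on the equivalent uniform dose (EUD) can be sketched with the linear-quadratic survival model, EUD = -(1/α)·ln(mean survival). This is a simplified illustration: the α value is an assumed illustrative radiosensitivity, and the quadratic and dose-rate terms that enter the paper's biologically-effective-dose calculation are omitted.

```python
import math

# Hedged sketch: equivalent uniform dose (EUD) from a voxel dose distribution
# via the linear-quadratic survival model S = exp(-alpha * D). The beta
# (quadratic) and dose-rate terms used in the paper's BED pipeline are
# ignored, and alpha = 0.3 /Gy is an assumed illustrative value.

def eud(doses_gy, alpha=0.3):
    """Uniform dose that gives the same mean cell survival as the input."""
    mean_survival = sum(math.exp(-alpha * d) for d in doses_gy) / len(doses_gy)
    return -math.log(mean_survival) / alpha

uniform = [50.0] * 4
nonuniform = [10.0, 30.0, 70.0, 90.0]   # same 50 Gy mean, highly nonuniform

print(eud(uniform))      # equals the mean for a uniform distribution
print(eud(nonuniform))   # far below the 50 Gy mean: cold spots dominate
```

Because mean survival is dominated by the least-irradiated voxels, a nonuniform distribution yields an EUD well below the mean dose, which is the qualitative effect behind the tumor's 25.0 Gy EUD versus its 57.7 Gy mean absorbed dose.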

  13. Reconciling Streamflow Uncertainty Estimation and River Bed Morphology Dynamics. Insights from a Probabilistic Assessment of Streamflow Uncertainties Using a Reliability Diagram

    NASA Astrophysics Data System (ADS)

    Morlot, T.; Mathevet, T.; Perret, C.; Favre Pugin, A. C.

    2014-12-01

    Streamflow uncertainty estimation has recently received considerable attention in the literature. A dynamic rating-curve assessment method has been introduced (Morlot et al., 2014). This dynamic method computes a rating curve for each gauging and a continuous streamflow time series, while calculating streamflow uncertainties. The streamflow uncertainty takes into account many sources of uncertainty (water level, rating-curve interpolation and extrapolation, gauging aging, etc.) and produces an estimated distribution of streamflow for each day. To characterize streamflow uncertainty, a probabilistic framework was applied to a large sample of hydrometric stations of the Division Technique Générale (DTG) of Électricité de France (EDF) hydrometric network (>250 stations) in France. A reliability diagram (Wilks, 1995) was constructed for some stations, based on the streamflow distribution estimated for a given day and compared to a real streamflow observation estimated via a gauging. To build a reliability diagram, we computed the probability of each observed streamflow (gauging) given the estimated streamflow distribution. The reliability diagram then allows us to check that the distribution of the probabilities of non-exceedance of the gaugings follows a uniform law (i.e., the quantiles should be equiprobable). From the shape of the reliability diagram, the probabilistic calibration is characterized (underdispersion, overdispersion, bias) (Thyer et al., 2009). In this paper, we present case studies where the reliability diagrams have different statistical properties for different periods. Compared to our knowledge of the river-bed morphology dynamics of these hydrometric stations, we show how the reliability diagram gives invaluable information on river-bed movements, such as continuous digging or backfilling of the hydraulic control due to erosion or sedimentation processes. 
Hence, careful analysis of reliability diagrams allows us to reconcile statistics and long-term river-bed morphology processes. This knowledge improves our real-time management of hydrometric stations through a better characterization of erosion/sedimentation processes and of the stability of each station's hydraulic control.
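The uniformity check at the heart of the reliability diagram can be sketched as follows: compute the non-exceedance probability (probability integral transform) of each observation under its estimated distribution, then compare the empirical frequency below each nominal level against the level itself. The distributions and "gaugings" below are synthetic stand-ins, not EDF data.

```python
from statistics import NormalDist
import random

# Hedged sketch of the reliability-diagram check described above: if the
# estimated daily streamflow distributions are well calibrated, the
# non-exceedance probabilities (PIT values) of the observed gaugings
# should follow a uniform law on [0, 1].

random.seed(1)
days = [NormalDist(mu=100 + 5 * i, sigma=10 + i) for i in range(500)]
# Calibrated case: observations actually drawn from the stated distributions.
gaugings = [d.mean + d.stdev * random.gauss(0.0, 1.0) for d in days]

pit = [d.cdf(obs) for d, obs in zip(days, gaugings)]  # non-exceedance probs

# Reliability diagram: empirical frequency below each nominal level p.
for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    freq = sum(v <= p for v in pit) / len(pit)
    print(f"nominal {p:.2f} -> empirical {freq:.2f}")
```

For a calibrated system the empirical frequencies track the nominal levels (the diagonal of the diagram); systematic bowing above or below the diagonal would indicate the over/underdispersion or bias the abstract mentions.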

  14. Identification and handling of artifactual gene expression profiles emerging in microarray hybridization experiments

    PubMed Central

    Brodsky, Leonid; Leontovich, Andrei; Shtutman, Michael; Feinstein, Elena

    2004-01-01

    Mathematical methods of analysis of microarray hybridizations treat gene expression profiles as elementary units. However, some of these profiles do not reflect a biologically relevant transcriptional response, but rather stem from technical artifacts. Here, we describe two technically independent but rationally interconnected methods for the identification of such artifactual profiles. Our diagnostics are based on the detection of deviations from uniformity, which is assumed to be the main underlying principle of microarray design. Method 1 is based on detection of non-uniformity in the microarray distribution of printed genes that are clustered based on the similarity of their expression profiles. Method 2 is based on evaluation of the presence of gene-specific microarray spots within slide areas characterized by an abnormal concentration of low/high differential expression values, which we define as ‘patterns of differentials’. Applying two novel algorithms, for nested clustering (method 1) and for pattern detection (method 2), we can make a dual estimation of profile quality for almost every printed gene. Genes with artifactual profiles detected by method 1 may then be removed from further analysis. Suspicious differential expression values detected by method 2 may be either removed or weighted according to the probabilities of the patterns that cover them, thus diminishing their influence on any further data analysis. PMID:14999086

  15. State of art of seismic design and seismic hazard analysis for oil and gas pipeline system

    NASA Astrophysics Data System (ADS)

    Liu, Aiwen; Chen, Kun; Wu, Jian

    2010-06-01

    The purpose of this paper is to adopt the uniform confidence method in both water pipeline design and oil-gas pipeline design. Based on the importance of a pipeline and the consequences of its failure, oil and gas pipelines can be classified into three pipe classes, with exceedance probabilities over 50 years of 2%, 5% and 10%, respectively. Performance-based design requires more information about ground motion, which should be obtained by evaluating seismic safety for the pipeline engineering site. Different from a city’s water pipeline network, a long-distance oil and gas pipeline system is a spatially linearly distributed system. For uniform confidence in seismic safety, a long-distance oil and gas pipeline composed of pump stations and different-class pipe segments should be considered as a whole system when analyzing seismic risk. Considering the uncertainty of earthquake magnitude, design-basis fault displacements corresponding to the different pipeline classes are proposed to improve deterministic seismic hazard analysis (DSHA). A new empirical relationship between the maximum fault displacement and the surface-wave magnitude is obtained with supplemented earthquake data from East Asia. The estimation of fault displacement for a refined-oil pipeline in the Wenchuan MS8.0 earthquake is introduced as an example in this paper.
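The three pipe classes above are defined by exceedance probabilities over a 50-year horizon. A minimal sketch of the standard conversion to mean return periods, assuming a Poisson occurrence model (an assumption of this sketch, not stated in the abstract):

```python
import math

# Hedged sketch: converting "p% exceedance probability in 50 years" into a
# mean return period, assuming Poisson (memoryless) event occurrence:
# P(at least one exceedance in T years) = 1 - exp(-T / RP).

def return_period(p_exceed: float, horizon_years: float = 50.0) -> float:
    """Return period RP consistent with the given exceedance probability."""
    return -horizon_years / math.log(1.0 - p_exceed)

for p in (0.02, 0.05, 0.10):
    print(f"{p:.0%} in 50 years -> ~{return_period(p):.0f}-year return period")
```

The 2% class thus corresponds to roughly a 2475-year ground motion, the 5% class to roughly 975 years, and the 10% class to roughly 475 years, which is how such pipe-class criteria map onto hazard-curve ordinates.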

  16. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. 
The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.

  17. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions

    PubMed Central

    Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas

    2015-01-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. 
The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191

  18. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  19. Enhancement of viability of muscle precursor cells on 3D scaffold in a perfusion bioreactor.

    PubMed

    Cimetta, E; Flaibani, M; Mella, M; Serena, E; Boldrin, L; De Coppi, P; Elvassore, N

    2007-05-01

    The aim of this study was to develop a methodology for the in vitro expansion of skeletal-muscle precursor cells (SMPC) in a three-dimensional (3D) environment in order to fabricate a cellularized artificial graft characterized by high density of viable cells and uniform cell distribution over the entire 3D domain. Cell seeding and culture within 3D porous scaffolds by conventional static techniques can lead to a uniform cell distribution only on the scaffold surface, whereas dynamic culture systems have the potential of allowing a uniform growth of SMPCs within the entire scaffold structure. In this work, we designed and developed a perfusion bioreactor able to ensure long-term culture conditions and uniform flow of medium through 3D collagen sponges. A mathematical model to assist the design of the experimental setup and of the operative conditions was developed. The effects of dynamic vs static culture in terms of cell viability and spatial distribution within 3D collagen scaffolds were evaluated at 1, 4 and 7 days and for different flow rates of 1, 2, 3.5 and 4.5 ml/min using C2C12 muscle cell line and SMPCs derived from satellite cells. C2C12 cells, after 7 days of culture in our bioreactor, perfused applying a 3.5 ml/min flow rate, showed a higher viability resulting in a three-fold increase when compared with the same parameter evaluated for cultures kept under static conditions. In addition, dynamic culture resulted in a more uniform 3D cell distribution. The 3.5 ml/min flow rate in the bioreactor was also applied to satellite cell-derived SMPCs cultured on 3D collagen scaffolds. The dynamic culture conditions improved cell viability leading to higher cell density and uniform distribution throughout the entire 3D collagen sponge for both C2C12 and satellite cells.

  20. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: as far back as the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, the distribution most relevant to statistical analysis.
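The "probability distribution" concept reviewed above can be illustrated with the normal distribution: for a continuous variable, probabilities are areas under the density between two outcomes. The parameters below are illustrative, not taken from the paper.

```python
from statistics import NormalDist

# A minimal illustration of a continuous probability distribution: the
# probability that an outcome falls in a range is the area under the
# density. The mean/SD here are hypothetical adult heights in cm.

height = NormalDist(mu=170, sigma=10)

# Within one standard deviation of the mean:
p_160_180 = height.cdf(180) - height.cdf(160)
print(f"P(160 <= X <= 180) = {p_160_180:.3f}")  # ~0.683 for any normal
```

The same ~68.3% figure holds for any normal distribution within one standard deviation of its mean, which is one reason the normal distribution plays such a central role in statistical inference.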

  1. High-resolution imaging of selenium in kidneys: a localized selenium pool associated with glutathione peroxidase 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinouski, M.; Kehr, S.; Finney, L.

    2012-04-17

    Recent advances in quantitative methods and sensitive imaging techniques of trace elements provide opportunities to uncover and explain their biological roles. In particular, the distribution of selenium in tissues and cells under both physiological and pathological conditions remains unknown. In this work, we applied high-resolution synchrotron X-ray fluorescence microscopy (XFM) to map selenium distribution in mouse liver and kidney. Liver showed a uniform selenium distribution that was dependent on selenocysteine tRNA[Ser]Sec and dietary selenium. In contrast, kidney selenium had both uniformly distributed and highly localized components, the latter visualized as thin circular structures surrounding proximal tubules. Other parts of the kidney, such as glomeruli and distal tubules, only manifested the uniformly distributed selenium pattern that co-localized with sulfur. We found that proximal tubule selenium localized to the basement membrane. It was preserved in Selenoprotein P knockout mice, but was completely eliminated in glutathione peroxidase 3 (GPx3) knockout mice, indicating that this selenium represented GPx3. We further imaged kidneys of another model organism, the naked mole rat, which showed a diminished uniformly distributed selenium pool, but preserved the circular proximal tubule signal. We applied XFM to image selenium in mammalian tissues and identified a highly localized pool of this trace element at the basement membrane of kidneys that was associated with GPx3. XFM allowed us to define and explain the tissue topography of selenium in mammalian kidneys at submicron resolution.

  2. Quantitative Microbial Risk Assessment for Clostridium perfringens in Natural and Processed Cheeses

    PubMed Central

    Lee, Heeyoung; Lee, Soomin; Kim, Sejeong; Lee, Jeeyeon; Ha, Jimyeong; Yoon, Yohan

    2016-01-01

    This study evaluated the risk of Clostridium perfringens (C. perfringens) foodborne illness from natural and processed cheeses. Microbial risk assessment in this study was conducted according to four steps: hazard identification, hazard characterization, exposure assessment, and risk characterization. The hazard identification of C. perfringens on cheese was identified through literature, and dose response models were utilized for hazard characterization of the pathogen. For exposure assessment, the prevalence of C. perfringens, storage temperatures, storage time, and annual amounts of cheese consumption were surveyed. Eventually, a simulation model was developed using the collected data and the simulation result was used to estimate the probability of C. perfringens foodborne illness by cheese consumption with @RISK. C. perfringens was determined to be low risk on cheese based on hazard identification, and the exponential model (r = 1.82×10−11) was deemed appropriate for hazard characterization. Annual amounts of natural and processed cheese consumption were 12.40±19.43 g and 19.46±14.39 g, respectively. Since the contamination levels of C. perfringens on natural (0.30 Log CFU/g) and processed cheeses (0.45 Log CFU/g) were below the detection limit, the initial contamination levels of natural and processed cheeses were estimated by beta distribution (α1 = 1, α2 = 91; α1 = 1, α2 = 309)×uniform distribution (a = 0, b = 2; a = 0, b = 2.8) to be −2.35 and −2.73 Log CFU/g, respectively. Moreover, no growth of C. perfringens was observed for exposure assessment to simulated conditions of distribution and storage. These data were used for risk characterization by a simulation model, and the mean values of the probability of C. perfringens foodborne illness by cheese consumption per person per day for natural and processed cheeses were 9.57×10−14 and 3.58×10−14, respectively. These results indicate that probability of C. 
perfringens foodborne illness from cheese consumption is low, and these results can be used to establish microbial criteria for C. perfringens on natural and processed cheeses. PMID:26954204
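The exposure-to-risk chain described above can be sketched as a Monte Carlo simulation: draw an initial contamination from the beta x uniform model, a daily consumption amount, and pass the resulting dose through the exponential dose-response model. This is a toy re-implementation for illustration, not the authors' @RISK model; the truncated-normal consumption draw is our assumption for turning the reported mean ± SD into a distribution.

```python
import math
import random

# Hedged Monte Carlo sketch of the QMRA chain above for natural cheese:
# contamination ~ beta(1, 91) x uniform(0, 2) CFU/g, daily consumption from
# a normal with the reported mean/SD (truncated at zero, our assumption),
# and the exponential dose-response model with r = 1.82e-11.

random.seed(42)
R = 1.82e-11  # exponential dose-response parameter from the abstract

risks = []
for _ in range(100_000):
    conc = random.betavariate(1, 91) * random.uniform(0.0, 2.0)  # CFU/g
    grams = max(0.0, random.gauss(12.40, 19.43))                 # g per day
    dose = conc * grams                                          # ingested CFU
    risks.append(1.0 - math.exp(-R * dose))                      # P(illness)

print(f"mean daily probability of illness: {sum(risks) / len(risks):.2e}")
```

With so small a dose-response parameter, the mean per-person, per-day risk comes out vanishingly small, consistent with the abstract's conclusion that the cheese-borne C. perfringens risk is low (the abstract's 9.57x10^-14 figure additionally reflects growth modeling during distribution and storage that this sketch omits).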

  3. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
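The Huffman-based non-uniform signaling idea can be sketched as follows: a prefix code over the symbol set maps uniform random bits to symbols, so that a symbol with an L-bit codeword is emitted with probability 2^-L. The 9 target probabilities below are illustrative dyadic values chosen so a Huffman code realizes them exactly; they are not the constellation probabilities of the paper.

```python
import heapq
import itertools
import random

# Hedged sketch of Huffman-based non-uniform signaling: driving a prefix
# code with uniform random bits emits symbol i with probability
# 2^-len(codeword_i). Target probabilities are illustrative (dyadic).

targets = {f"s{i}": p for i, p in enumerate(
    [0.25, 0.25, 0.125, 0.125, 0.0625, 0.0625, 0.0625, 0.03125, 0.03125])}

# Standard Huffman construction over (probability, tiebreak, subtree) triples.
counter = itertools.count()
heap = [(p, next(counter), {sym: ""}) for sym, p in targets.items()]
heapq.heapify(heap)
while len(heap) > 1:
    p0, _, t0 = heapq.heappop(heap)
    p1, _, t1 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in t0.items()}
    merged.update({s: "1" + c for s, c in t1.items()})
    heapq.heappush(heap, (p0 + p1, next(counter), merged))
code = heap[0][2]                      # symbol -> codeword
decode = {c: s for s, c in code.items()}

random.seed(0)

def next_symbol():
    """Consume uniform random bits until they spell a full codeword."""
    word = ""
    while word not in decode:
        word += random.choice("01")
    return decode[word]

draws = [next_symbol() for _ in range(50_000)]
for sym in sorted(targets):
    print(sym, draws.count(sym) / len(draws), "target", targets[sym])
```

Because the target probabilities are dyadic, the empirical symbol frequencies converge to the targets exactly; for non-dyadic targets (as in a real constellation design) the Huffman code only approximates them, which is the "approach the optimal performance" caveat in the abstract.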

  4. Visualizing Metal Content and Intracellular Distribution in Primary Hippocampal Neurons with Synchrotron X-Ray Fluorescence

    DOE PAGES

    Colvin, Robert A.; Jin, Qiaoling; Lai, Barry; ...

    2016-07-19

    Increasing evidence suggests that metal dyshomeostasis plays an important role in human neurodegenerative diseases. Although distinctive metal distributions are described for mature hippocampus and cortex, much less is known about metal levels and intracellular distribution in individual hippocampal neuronal somata. To solve this problem, we conducted quantitative metal analyses utilizing synchrotron radiation X-Ray fluorescence on frozen hydrated primary cultured neurons derived from rat embryonic cortex (CTX) and two regions of the hippocampus: dentate gyrus (DG) and CA1. Comparing average metal contents showed that the most abundant metals were calcium, iron, and zinc, whereas metals such as copper and manganese were less than 10% of zinc. Average metal contents were generally similar when compared across neurons cultured from CTX, DG, and CA1, except for manganese that was larger in CA1. However, each metal showed a characteristic spatial distribution in individual neuronal somata. Zinc was uniformly distributed throughout the cytosol, with no evidence for the existence of previously identified zinc-enriched organelles, zincosomes. Calcium showed a peri-nuclear distribution consistent with accumulation in endoplasmic reticulum and/or mitochondria. Iron showed 2-3 distinct highly concentrated puncta only in peri-nuclear locations. Notwithstanding the small sample size, these analyses demonstrate that primary cultured neurons show characteristic metal signatures. The iron puncta probably represent iron-accumulating organelles, siderosomes. Thus, the metal distributions observed in mature brain structures are likely the result of both intrinsic neuronal factors that control cellular metal content and extrinsic factors related to the synaptic organization, function, and contacts formed and maintained in each region.

  5. Visualizing Metal Content and Intracellular Distribution in Primary Hippocampal Neurons with Synchrotron X-Ray Fluorescence

    PubMed Central

    2016-01-01

    Increasing evidence suggests that metal dyshomeostasis plays an important role in human neurodegenerative diseases. Although distinctive metal distributions are described for mature hippocampus and cortex, much less is known about metal levels and intracellular distribution in individual hippocampal neuronal somata. To solve this problem, we conducted quantitative metal analyses utilizing synchrotron radiation X-Ray fluorescence on frozen hydrated primary cultured neurons derived from rat embryonic cortex (CTX) and two regions of the hippocampus: dentate gyrus (DG) and CA1. Comparing average metal contents showed that the most abundant metals were calcium, iron, and zinc, whereas metals such as copper and manganese were less than 10% of zinc. Average metal contents were generally similar when compared across neurons cultured from CTX, DG, and CA1, except for manganese that was larger in CA1. However, each metal showed a characteristic spatial distribution in individual neuronal somata. Zinc was uniformly distributed throughout the cytosol, with no evidence for the existence of previously identified zinc-enriched organelles, zincosomes. Calcium showed a peri-nuclear distribution consistent with accumulation in endoplasmic reticulum and/or mitochondria. Iron showed 2–3 distinct highly concentrated puncta only in peri-nuclear locations. Notwithstanding the small sample size, these analyses demonstrate that primary cultured neurons show characteristic metal signatures. The iron puncta probably represent iron-accumulating organelles, siderosomes. Thus, the metal distributions observed in mature brain structures are likely the result of both intrinsic neuronal factors that control cellular metal content and extrinsic factors related to the synaptic organization, function, and contacts formed and maintained in each region. PMID:27434052

  6. A Comprehensive Theory of Algorithms for Wireless Networks and Mobile Systems

    DTIC Science & Technology

    2016-06-08

    Erez Kantor, Zvi Lotker, Merav Parter, and David Peleg. Nonuniform SINR+Voronoi Diagrams are Effectively Uniform. In Yoram Moses, editor, Distributed Computing: 29th International Symposium, Lecture Notes in Computer Science, page 559. Springer, 2014.

  7. Electrophoretic sample insertion. [device for uniformly distributing samples in flow path]

    NASA Technical Reports Server (NTRS)

    Mccreight, L. R. (Inventor)

    1974-01-01

    Two conductive screens located in the flow path of an electrophoresis sample separation apparatus are charged electrically. The sample is introduced between the screens, and the charge is sufficient to disperse and hold the samples across the screens. When the charge is terminated, the samples are uniformly distributed in the flow path. Additionally, a first separation by charged properties has been accomplished.

  8. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
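    The two-step method reads directly as code: color a white Gaussian sample set in Fourier space to impose the desired power spectrum, then push the colored Gaussian marginal through a memoryless probability integral transform to the target amplitude distribution. The sketch below is illustrative only; the power-law spectrum and exponential target are our assumptions, not the paper's specific choices:

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
n = 128
white = rng.standard_normal((n, n))          # white Gaussian noise sample set

# Step 1: impose the desired power spectrum by filtering in Fourier space.
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k2 = kx**2 + ky**2
k2[0, 0] = np.inf                            # suppress the DC component
psd = 1.0 / k2                               # illustrative power-law spectrum
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)))
colored = (colored - colored.mean()) / colored.std()

# Step 2: probability integral transform from the (colored) Gaussian
# marginal to the desired amplitude distribution, here exponential.
u = norm.cdf(colored)
field = expon.ppf(u)                         # non-Gaussian field with colored spectrum
```

Because both transforms in step 2 are monotone, the spatial ordering of values is preserved while the marginal distribution changes, which is what lets the spectral shaping of step 1 largely survive the amplitude transformation.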

  9. A Smart Cage With Uniform Wireless Power Distribution in 3D for Enabling Long-Term Experiments With Freely Moving Animals.

    PubMed

    Mirbozorgi, S Abdollah; Bahrami, Hadi; Sawan, Mohamad; Gosselin, Benoit

    2016-04-01

    This paper presents a novel experimental chamber with uniform wireless power distribution in 3D for enabling long-term biomedical experiments with small freely moving animal subjects. The implemented power transmission chamber prototype is based on arrays of parallel resonators and multicoil inductive links, which form a novel and highly efficient wireless power transmission system. The power transmitter unit includes several identical resonators enclosed in a scalable array of overlapping square coils connected in parallel to provide uniform power distribution along the x and y axes. Moreover, the proposed chamber uses two arrays of primary resonators, facing each other and connected in parallel, to achieve uniform power distribution along the z axis. Each surface includes 9 overlapped coils connected in parallel and implemented in two layers of FR4 printed circuit board. The chamber features a natural power localization mechanism, which simplifies its implementation and eases its operation by avoiding the need for active detection and control mechanisms. A single power surface based on the proposed approach can provide a power transfer efficiency (PTE) of 69% and a power delivered to the load (PDL) of 120 mW for a separation distance of 4 cm, whereas the complete chamber prototype provides a uniform PTE of 59% and a PDL of 100 mW everywhere inside the 27×27×16 cm³ chamber.

  10. Numerical simulation for the magnetic force distribution in electromagnetic forming of small size flat sheet

    NASA Astrophysics Data System (ADS)

    Chen, Xiaowei; Wang, Wenping; Wan, Min

    2013-12-01

    It is essential to calculate the magnetic force when studying electromagnetic forming of flat sheets: the magnetic force is the basis for analyzing sheet deformation and optimizing process parameters. The magnetic force distribution on the sheet can be obtained by numerical simulation of the electromagnetic field, which offers significant advantages over other computing methods, such as higher calculation accuracy and easier use. In this paper, to study the magnetic force distribution on small flat sheets in electromagnetic forming with flat round spiral coils, flat rectangular spiral coils and uniform pressure coils, 3D finite element models were established in ANSYS/EMAG. The magnetic force distribution on the sheet was analyzed for sheet plane geometries equal to or smaller than the coil geometry under a fixed discharge impulse. The results showed that when the dimensions of the sheet are smaller than the corresponding dimensions of the coil, the variation of the induced-current channel width on the sheet causes an induced-current crowding effect that strongly influences the magnetic force distribution, and the degree of inhomogeneity of the magnetic force distribution increases nearly linearly with the variation of the induced-current channel width. The small uniform pressure coil produces an approximately uniform magnetic force distribution on the sheet but is prone to early failure. A desirable magnetic force distribution can be achieved with a unilaterally placed flat rectangular spiral coil, which can be taken as the preferred option because its working life is longer than that of the small uniform pressure coil.

  11. A statistical model for analyzing the rotational error of single isocenter for multiple targets technique.

    PubMed

    Chang, Jenghwa

    2017-06-01

    To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single-isocenter-for-multiple-targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with zero mean and standard deviations σx, σy, σz. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a 3D independent normal distribution with zero mean and a uniform standard deviation σδ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with zero mean and a uniform standard deviation σR equal to the product of σδ·(π/180) and dI⇔T, the distance between the isocenter and the CTV. The two random vectors (S and R) were summed, normalized, and transformed to spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. The PTV margin was determined using the critical value of this distribution at the 0.05 significance level, so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σR and dI⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σx = σy = σz = 0.715 mm), a σR = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, so an additional 0.2-mm PTV margin is needed to prevent this loss of coverage. 
If we choose 0.2 mm as the threshold, any σR > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σδ that can be ignored is 0.45° (or 0.0079 rad) for dI⇔T = 50 mm or 0.23° (or 0.004 rad) for dI⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and the target is large. © 2017 American Association of Physicists in Medicine.
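    The margin arithmetic in this model is easy to reproduce: the radial coordinate of a zero-mean isotropic 3D normal error follows a chi distribution with three degrees of freedom, and its 95% critical value times the combined standard deviation gives the PTV margin. A minimal sketch (the function name is ours; the σ values are those quoted in the abstract):

```python
import numpy as np
from scipy.stats import chi

# 95th percentile of the chi distribution with 3 degrees of freedom (~2.796):
# the radius containing 95% of a 3D isotropic unit-normal error.
c95 = chi.ppf(0.95, df=3)

def ptv_margin(sigma_setup, sigma_rot=0.0):
    """PTV margin covering the combined setup + rotational error 95% of the time."""
    return c95 * np.sqrt(sigma_setup**2 + sigma_rot**2)

base = ptv_margin(0.715)              # ~2.0 mm for setup error alone
with_rot = ptv_margin(0.715, 0.328)   # ~2.2 mm with the rotational error added
extra = with_rot - base               # ~0.2 mm additional margin, as in the abstract
```

Since σR scales with σδ·(π/180)·dI⇔T, the extra margin grows with the isocenter-to-target distance, which is why the rotational error matters most for targets far from the isocenter.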

  12. Cylindrically distributing optical fiber tip for uniform laser illumination of hollow organs

    NASA Astrophysics Data System (ADS)

    Buonaccorsi, Giovanni A.; Burke, T.; MacRobert, Alexander J.; Hill, P. D.; Essenpreis, Matthias; Mills, Timothy N.

    1993-05-01

    To predict the outcome of laser therapy it is important to possess, among other things, an accurate knowledge of the intensity and distribution of the laser light incident on the tissue. For irradiation of the internal surfaces of hollow organs, modified fiber tips can be used to shape the light distribution to best suit the treatment geometry. There exist bulb-tipped optical fibers emitting a uniform isotropic distribution of light suitable for the treatment of organs which approximate a spherical geometry--the bladder, for example. For the treatment of organs approximating a cylindrical geometry--e.g. the oesophagus--an optical fiber tip which emits a uniform cylindrical distribution of light is required. We report on the design, development and testing of such a device, the CLD fiber tip. The device was made from a solid polymethylmethacrylate (PMMA) rod, 27 mm in length and 4 mm in diameter. One end was shaped and 'silvered' to form a mirror which reflected the light emitted from the delivery fiber positioned at the other end of the rod. The shape of the mirror was such that the light fell with uniform intensity on the circumferential surface of the rod. This surface was coated with BaSO4 reflectance paint to couple the light out of the rod and onto the surface of the tissue.

  13. Radiobiological evaluation of the influence of dwell time modulation restriction in HIPO optimized HDR prostate brachytherapy implants.

    PubMed

    Mavroidis, Panayiotis; Katsilieri, Zaira; Kefala, Vasiliki; Milickovic, Natasa; Papanikolaou, Nikos; Karabis, Andreas; Zamboglou, Nikolaos; Baltas, Dimos

    2010-09-01

    One of the issues that a planner often faces in HDR brachytherapy is the selective existence of high-dose volumes around a few dominating dwell positions. If there is no information available about their necessity (e.g. the location of a GTV), then it is reasonable to investigate whether they can be avoided. This effect can be eliminated by limiting the free modulation of the dwell times. HIPO, an inverse treatment plan optimization algorithm, offers this option. In treatment plan optimization there are various methods that try to regularize the variation of dose non-uniformity using purely dosimetric measures. However, although these methods can help in finding a good dose distribution, they do not provide any information regarding the expected treatment outcome as described by radiobiology-based indices. The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO and modulation restriction (MR) was compared to alternative plans with HIPO and free modulation (without MR). All common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by calculating the response probabilities of the tumors and organs-at-risk (OARs) involved in these prostate cancer cases. The radiobiological models used are the Poisson and the relative seriality models. Furthermore, the complication-free tumor control probability P+ and the biologically effective uniform dose (D̿) were used for treatment plan evaluation and comparison. Our results demonstrate that HIPO with a modulation restriction value of 0.1-0.2 delivers high-quality plans that are practically equivalent, with regard to the clinically used dosimetric indices, to those achieved with free modulation. In the comparison, many of the dosimetric and radiobiological indices showed significantly different results. 
The modulation-restricted clinical plans demonstrated a lower total dwell time, by a mean of 1.4%, which proved statistically significant (p = 0.002). The HIPO with MR treatment plans produced a higher P+ by 0.5%, which stemmed from a better sparing of the OARs by 1.0%. Both the dosimetric and the radiobiological comparisons show that modulation-restricted optimization gives, on average, results similar to optimization without modulation restriction in the examined clinical cases. In conclusion, based on our results, the applied dwell time regularization technique is expected to introduce only a minor improvement in the effectiveness of the optimized HDR dose distributions.

  14. Radiobiological evaluation of the influence of dwell time modulation restriction in HIPO optimized HDR prostate brachytherapy implants

    PubMed Central

    Katsilieri, Zaira; Kefala, Vasiliki; Milickovic, Natasa; Papanikolaou, Nikos; Karabis, Andreas; Zamboglou, Nikolaos; Baltas, Dimos

    2010-01-01

    Purpose One of the issues that a planner often faces in HDR brachytherapy is the selective existence of high-dose volumes around a few dominating dwell positions. If there is no information available about their necessity (e.g. the location of a GTV), then it is reasonable to investigate whether they can be avoided. This effect can be eliminated by limiting the free modulation of the dwell times. HIPO, an inverse treatment plan optimization algorithm, offers this option. In treatment plan optimization there are various methods that try to regularize the variation of dose non-uniformity using purely dosimetric measures. However, although these methods can help in finding a good dose distribution, they do not provide any information regarding the expected treatment outcome as described by radiobiology-based indices. Material and methods The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO and modulation restriction (MR) was compared to alternative plans with HIPO and free modulation (without MR). All common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by calculating the response probabilities of the tumors and organs-at-risk (OARs) involved in these prostate cancer cases. The radiobiological models used are the Poisson and the relative seriality models. Furthermore, the complication-free tumor control probability P+ and the biologically effective uniform dose (D̿) were used for treatment plan evaluation and comparison. Results Our results demonstrate that HIPO with a modulation restriction value of 0.1-0.2 delivers high-quality plans that are practically equivalent, with regard to the clinically used dosimetric indices, to those achieved with free modulation. In the comparison, many of the dosimetric and radiobiological indices showed significantly different results. 
The modulation-restricted clinical plans demonstrated a lower total dwell time, by a mean of 1.4%, which proved statistically significant (p = 0.002). The HIPO with MR treatment plans produced a higher P+ by 0.5%, which stemmed from a better sparing of the OARs by 1.0%. Conclusions Both the dosimetric and the radiobiological comparisons show that modulation-restricted optimization gives, on average, results similar to optimization without modulation restriction in the examined clinical cases. In conclusion, based on our results, the applied dwell time regularization technique is expected to introduce only a minor improvement in the effectiveness of the optimized HDR dose distributions. PMID:27853473

  15. Acid Hydrolysis and Molecular Density of Phytoglycogen and Liver Glycogen Helps Understand the Bonding in Glycogen α (Composite) Particles

    PubMed Central

    Powell, Prudence O.; Sullivan, Mitchell A.; Sheehy, Joshua J.; Schulz, Benjamin L.; Warren, Frederick J.; Gilbert, Robert G.

    2015-01-01

    Phytoglycogen (from certain mutant plants) and animal glycogen are highly branched glucose polymers with similarities in structural features and molecular size range. Both appear to form composite α particles from smaller β particles. The molecular size distribution of liver glycogen is bimodal, with distinct α and β components, while that of phytoglycogen is monomodal. This study aims to enhance our understanding of the nature of the link between liver-glycogen β particles that results in the formation of large α particles. It examines the time evolution of the size distribution of these molecules during acid hydrolysis, and the size dependence of the molecular density of both glucans. The monomodal distribution of phytoglycogen decreases uniformly in time with hydrolysis, while with glycogen the large particles degrade significantly more quickly. The size dependence of the molecular density shows qualitatively different shapes for these two types of molecules. The data, combined with a quantitative model for the evolution of the distribution during degradation, suggest that the bonding of β particles into α particles differs between phytoglycogen and liver glycogen: most likely a glycosidic linkage for phytoglycogen, and a covalent or strong non-covalent linkage, probably involving a protein, for glycogen. This finding is of importance for diabetes, where α-particle structure is impaired. PMID:25799321

  16. Kinetic market models with single commodity having price fluctuations

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Chakrabarti, B. K.

    2006-12-01

    We study numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the steady-state distributions of money, commodity and total wealth as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as the money distribution for a market with agents having random saving propensities.
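    The trading kernel of this class of kinetic exchange models can be sketched in a few lines. The sketch below keeps only the money exchange with saving propensity (the commodity and its price fluctuations are omitted, and the parameter values are illustrative); each pairwise trade conserves total money exactly:

```python
import numpy as np

def trade(money, lam, steps, seed=1):
    """Pairwise random money exchange with saving propensity lam (scalar or per-agent)."""
    rng = np.random.default_rng(seed)
    money = money.astype(float).copy()
    n = len(money)
    lam = np.broadcast_to(np.asarray(lam, dtype=float), (n,))
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)   # pick two distinct agents
        eps = rng.random()                            # random split of the pooled money
        pot = (1 - lam[i]) * money[i] + (1 - lam[j]) * money[j]
        money[i] = lam[i] * money[i] + eps * pot      # agent i keeps lam[i] of its money
        money[j] = lam[j] * money[j] + (1 - eps) * pot
    return money

n = 200
uniform_saving = trade(np.ones(n), lam=0.5, steps=50_000)
random_saving = trade(np.ones(n), lam=np.random.default_rng(2).random(n), steps=50_000)
```

Histogramming `uniform_saving` and `random_saving` after equilibration is how the gamma-like shape and the power-law tail, respectively, are typically exhibited in this model family.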

  17. Small violations of Bell inequalities for multipartite pure random states

    NASA Astrophysics Data System (ADS)

    Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.

    2018-05-01

    For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.

  18. Future changes of precipitation characteristics in China

    NASA Astrophysics Data System (ADS)

    Wu, S.; Wu, Y.; Wen, J.

    2017-12-01

    Global warming has the potential to alter the hydrological cycle, with significant impacts on human society, the environment and ecosystems. This study provides a detailed assessment of potential changes in precipitation characteristics in China using a suite of 12 high-resolution CMIP5 climate models under a medium and a high Representative Concentration Pathway: RCP4.5 and RCP8.5. We examine future changes over the entire distribution of precipitation, and identify any shift in the shape and/or scale of the distribution. In addition, we use extreme-value theory to evaluate the change in probability and magnitude of extreme precipitation events. Overall, China is going to experience an increase in total precipitation (by 8% under RCP4.5 and 12% under RCP8.5). This increase is spatially uneven, with more increase in the west and less in the east. Precipitation frequency is projected to increase in the west and decrease in the east. Under RCP4.5, the overall precipitation frequency for the entire country remains largely unchanged (0.08%). However, RCP8.5 projects a more significant decrease in frequency over large parts of China, resulting in an overall decrease of 2.08%. Precipitation intensity is likely to increase more uniformly, with an overall increase of 11% for RCP4.5 and 19% for RCP8.5. Precipitation increases for all parts of the distribution, but the increase is greater for higher quantiles, i.e. strong events. The relative contribution of small quantiles is likely to decrease, whereas the contribution from heavy events is likely to increase. Extreme precipitation increases at much higher rates than average precipitation, and higher rates of increase are expected for more extreme events. 1-year events are likely to increase by 15% and 20-year events by 21% under RCP4.5, and by 26% and 40%, respectively, under RCP8.5. The increase of extreme events is likely to be more spatially uniform.

  19. Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps

    NASA Astrophysics Data System (ADS)

    Brooks, E. M.; Stein, S. A.; Spencer, B. D.

    2015-12-01

    The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions about earthquake-resistant construction, how well they actually perform is unknown. We explore this hotly contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between the maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform and why would be valuable in making more effective policy.
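    The two competing metrics can be stated compactly. The sketch below uses synthetic shaking data and hypothetical function names, not the study's 510-year Japanese record; a uniform map is modeled as a single predicted value tuned so that the target fraction of sites exceeds it:

```python
import numpy as np

def exceedance_metric(observed_max, predicted, target_fraction):
    """Metric implicit in the hazard maps: deviation of the fraction of sites
    where observed shaking exceeded the prediction from the target fraction."""
    return abs(np.mean(observed_max > predicted) - target_fraction)

def squared_misfit(observed_max, predicted):
    """Alternative metric: mean squared misfit between observed and predicted shaking."""
    return np.mean((observed_max - predicted) ** 2)

# Synthetic per-site maximum shaking, plus two candidate 'maps'.
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.0, size=1000)        # synthetic observed maxima
site_map = rng.gamma(2.0, 1.0, size=1000)   # a map uncorrelated with the observations
uniform = np.full(1000, np.quantile(obs, 0.9))  # uniform map tuned to 10% exceedance
```

A map can score well on one metric and poorly on the other: the tuned uniform map is essentially perfect on the exceedance metric while ignoring all spatial variation, which is the tension the abstract describes.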

  20. Uniformly sized gold nanoparticles derived from PS-b-P2VP block copolymer templates for the controllable synthesis of Si nanowires.

    PubMed

    Lu, Jennifer Q; Yi, Sung Soo

    2006-04-25

    A monolayer of gold-containing surface micelles has been produced by spin-coating solution micelles formed by the self-assembly of the gold-modified polystyrene-b-poly(2-vinylpyridine) block copolymer in toluene. After removal of the block copolymer template by oxygen plasma, highly ordered and uniformly sized nanoparticles were generated. Unlike other published methods that require reduction treatments to form gold nanoparticles in the zero-valent state, these as-synthesized nanoparticles are in the form of metallic gold. These gold nanoparticles have been demonstrated to be an excellent catalyst system for growing small-diameter silicon nanowires. The uniformly sized gold nanoparticles have promoted the controllable synthesis of silicon nanowires with a narrow diameter distribution. Because of the ability to form a monolayer of surface micelles with a high degree of order, evenly distributed gold nanoparticles have been produced on a surface. As a result, uniformly distributed, high-density silicon nanowires have been generated. The process described herein is fully compatible with existing semiconductor processing techniques and can be readily integrated into device fabrication.

  1. Spatial model of the gecko foot hair: functional significance of highly specialized non-uniform geometry.

    PubMed

    Filippov, Alexander E; Gorb, Stanislav N

    2015-02-06

    One of the important problems appearing in experimental realizations of artificial adhesives inspired by gecko foot hair is so-called clusterization. If an artificially produced structure is flexible enough to allow efficient contact with natural rough surfaces, after a few attachment-detachment cycles the fibres of the structure tend to adhere to one another and form clusters. Normally, such clusters are much larger than the original fibres and, because they are less flexible, form much worse adhesive contacts, especially with rough surfaces. The main problem here is that the forces responsible for the clusterization are the same intermolecular forces that attract fibres to the fractal surface of the substrate. However, arrays of real gecko setae are much less susceptible to this problem. One possible reason is that the ends of the setae have a more sophisticated, non-uniformly distributed three-dimensional structure than existing artificial systems. In this paper, we numerically simulated the three-dimensional spatial geometry of the non-uniformly distributed branches of nanofibres of the setal tip, studied its attachment-detachment dynamics and discussed its advantages over a uniformly distributed geometry.

  2. Pattern optimization of compound optical film for uniformity improvement in liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Huang, Bing-Le; Lin, Jin-tang; Ye, Yun; Xu, Sheng; Chen, En-guo; Guo, Tai-Liang

    2017-12-01

    The density dynamic adjustment algorithm (DDAA) is designed to efficiently improve the uniformity of the integrated backlight module (IBLM) by adjusting the distribution of microstructures on the compound optical film (COF); the COF is constructed in SolidWorks and simulated in TracePro. To demonstrate the universality of the proposed algorithm, the initial distribution is allocated by a Bezier curve instead of an empirical value. Simulation results show that the uniformity of the IBLM exceeds 90% after only four iterations. Moreover, the vertical and horizontal full widths at half maximum of the angular intensity are collimated to 24 deg and 14 deg, respectively. Compared with the current industry requirement, the IBLM has an 85% higher luminance uniformity of the emerging light, which demonstrates the feasibility and universality of the proposed algorithm.

  3. Grain velocity of bedload movement in an armored non-uniform mobile bed

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2015-12-01

    The velocity of bedload particles, which directly reflects the interaction between flow and sediment, is one of the important parameters for predicting sediment transport rate and one of the fundamental problems of sediment transport. Much excellent work has been done in this field. However, existing research is mostly based on artificial fixed beds, and the few movable-bed studies focus on uniform sediment beds; these boundary conditions differ from a real river. In this research, an experiment on non-uniform sediment with an armored, movable bed was carried out in a flume, with bed material ranging from 0.2 mm to 20 mm. With a special hanging glass and illumination system, moving particles on the bed were clearly filmed from the top of the flume by a video camera, avoiding interference from waves at the flow surface. The camera speed was 50 frames per second. About 7000 unique coordinates of moving particles were determined from 3000 successive frames, from which the longitudinal and crosswise particle velocities were obtained. The results show that the probability density distributions of grain velocities in both directions are similar to those in uniform sediment, with an exponential decay trend, whereas the cross velocity of particles is clearly greater than in the uniform sediment condition. Negative particle velocities were recognized in the experiment; they may occur in two situations: backflow of fine particles behind coarser particles, and a change in the state of movement, such as a particle passing from rest to motion or vice versa. Furthermore, particle movement was strongly affected by the arrangement of local coarse particles. 
The influence of coarser particles on the movement of fine particles was also identified through two opposite effects: acceleration in a 'tunnel' between a pair of adjacent coarse particles, and deceleration outside the tunnel, or capture of fine particles by the backwater flow just behind a coarse particle. In addition, the ensemble particle velocity in the armored bed is distinctly less than in the fixed-bed and uniform-bed conditions at the same particle Reynolds number and Shields parameter. (Supported by 2012BAB04B01 and NSFC 11472310.)

  4. Terawatt x-ray free-electron-laser optimization by transverse electron distribution shaping

    DOE PAGES

    Emma, C.; Wu, J.; Fang, K.; ...

    2014-11-03

    We study the dependence of the peak power of a 1.5 Å Terawatt (TW), tapered x-ray free-electron laser (FEL) on the transverse electron density distribution. Multidimensional optimization schemes for TW hard x-ray free-electron lasers are applied to the cases of transversely uniform and parabolic electron beam distributions and compared to a Gaussian distribution. The optimizations are performed for a 200 m undulator and a resonant wavelength of λr = 1.5 Å using the fully three-dimensional FEL particle code GENESIS. The study shows that the flatter transverse electron distributions enhance optical guiding in the tapered section of the undulator and increase the maximum radiation power from 1.56 TW for a transversely Gaussian beam to 2.26 TW for the parabolic case and 2.63 TW for the uniform case. Spectral data also show a 30%–70% reduction in energy deposited in the sidebands for the uniform and parabolic beams compared with a Gaussian. An analysis of the transverse coherence of the radiation shows the coherence area to be much larger than the beam spot size for all three distributions, making coherent diffraction imaging experiments possible.

  5. Integrated Joule switches for the control of current dynamics in parallel superconducting strips

    NASA Astrophysics Data System (ADS)

    Casaburi, A.; Heath, R. M.; Cristiano, R.; Ejrnaes, M.; Zen, N.; Ohkubo, M.; Hadfield, R. H.

    2018-06-01

    Understanding and harnessing the physics of the dynamic current distribution in parallel superconducting strips holds the key to creating next-generation sensors for single-molecule and single-photon detection. Non-uniformity in the current distribution in parallel superconducting strips leads to low detection efficiency and unstable operation, preventing the scale-up to large-area sensors. Recent studies indicate that non-uniform current distributions occurring in parallel strips can be understood and modeled in the framework of the generalized London model. Here we build on this important physical insight, investigating an innovative design with integrated superconducting-to-resistive Joule switches to break the superconducting loops between the strips and thus control the current dynamics. Employing precision low-temperature nano-optical techniques, we map the uniformity of the current distribution before and after the resistive-strip switching event, confirming the effectiveness of our design. These results provide important insights for the development of next-generation large-area superconducting strip-based sensors.

  6. Keeping an eye on the ring: COMS plaque loading optimization for improved dose conformity and homogeneity.

    PubMed

    Gagne, Nolan L; Cutright, Daniel R; Rivard, Mark J

    2012-09-01

    To improve tumor dose conformity and homogeneity for COMS plaque brachytherapy by investigating the dosimetric effects of varying component source ring radionuclides and source strengths. The MCNP5 Monte Carlo (MC) radiation transport code was used to simulate plaque heterogeneity-corrected dose distributions for individually-activated source rings of 14, 16 and 18 mm diameter COMS plaques, populated with (103)Pd, (125)I and (131)Cs sources. Ellipsoidal tumors were contoured for each plaque size and MATLAB programming was developed to generate tumor dose distributions for all possible ring weighting and radionuclide permutations for a given plaque size and source strength resolution, assuming a 75 Gy apical prescription dose. These dose distributions were analyzed for conformity and homogeneity and compared to reference dose distributions from uniformly-loaded (125)I plaques. The most conformal and homogeneous dose distributions were reproduced within a reference eye environment to assess organ-at-risk (OAR) doses in the Pinnacle(3) treatment planning system (TPS). The gamma-index analysis method was used to quantitatively compare MC and TPS-generated dose distributions. Concentrating > 97% of the total source strength in a single or pair of central (103)Pd seeds produced the most conformal dose distributions, with tumor basal doses a factor of 2-3 higher and OAR doses a factor of 2-3 lower than those of corresponding uniformly-loaded (125)I plaques. Concentrating 82-86% of the total source strength in peripherally-loaded (131)Cs seeds produced the most homogeneous dose distributions, with tumor basal doses 17-25% lower and OAR doses typically 20% higher than those of corresponding uniformly-loaded (125)I plaques. Gamma-index analysis found > 99% agreement between MC and TPS dose distributions. 
A method was developed to select intra-plaque ring radionuclide compositions and source strengths to deliver more conformal and homogeneous tumor dose distributions than uniformly-loaded (125)I plaques. This method may support coordinated investigations of an appropriate clinical target for eye plaque brachytherapy.

  7. Impact of deformed extreme-ultraviolet pellicle in terms of CD uniformity

    NASA Astrophysics Data System (ADS)

    Kim, In-Seon; Yeung, Michael; Barouch, Eytan; Oh, Hye-Keun

    2015-07-01

    The usage of the extreme ultraviolet (EUV) pellicle is regarded as the solution for defect control, since it can protect the mask from airborne debris. However, obstacles such as structural weakness and thermal damage hinder real-world application of the pellicle, so flawless fabrication is impossible. In this paper, we discuss the influence of a deformed pellicle in terms of non-uniform intensity distribution and critical dimension (CD) uniformity. It was found that the non-uniform intensity distribution is proportional to the local tilt angle of the pellicle, and that the CD variation is linearly proportional to the transmission difference. For a 16 nm line-and-space pattern with dipole illumination (σc = 0.8, σr = 0.1, NA = 0.33), a transmission difference (max-min) of 0.7% causes 0.1 nm CD non-uniformity. The deflection caused by gravity has a negligible effect on the aerial image: CD non-uniformity is less than 0.1 nm even for the current 2 mm gap between mask and pellicle. However, heat-induced wrinkling of the EUV pellicle can cause serious image distortion, because a wrinkle produces a transmission-loss variation as well as CD non-uniformity. In conclusion, the local angle of a wrinkle, not its period or amplitude, is the main factor in CD uniformity, and a local angle of less than ~270 mrad is needed to achieve 0.1 nm CD uniformity for the 16 nm L/S pattern.

  8. Hydrostatic bearings for a turbine fluid flow metering device

    DOEpatents

    Fincke, J.R.

    1980-05-02

    A rotor assembly fluid metering device has been improved by development of a hydrostatic bearing fluid system which provides bearing fluid at a common pressure to rotor assembly bearing surfaces. The bearing fluid distribution system produces a uniform film of fluid between bearing surfaces and allows rapid replacement of bearing fluid between bearing surfaces, thereby minimizing bearing wear and corrosion.

  9. Development of extended release dosage forms using non-uniform drug distribution techniques.

    PubMed

    Huang, Kuo-Kuang; Wang, Da-Peng; Meng, Chung-Ling

    2002-05-01

    Development of an extended release oral dosage form for nifedipine using the non-uniform drug distribution matrix method was conducted. The process conducted in a fluid bed processing unit was optimized by controlling the concentration gradient of nifedipine in the coating solution and the spray rate applied to the non-pareil beads. The concentration of nifedipine in the coating was controlled by instantaneous dilutions of coating solution with polymer dispersion transported from another reservoir into the coating solution at a controlled rate. The USP dissolution method equipped with paddles at 100 rpm in 0.1 N hydrochloric acid solution maintained at 37 degrees C was used for the evaluation of release rate characteristics. Results indicated that (1) an increase in the ethyl cellulose content in the coated beads decreased the nifedipine release rate, (2) incorporation of water-soluble sucrose into the formulation increased the release rate of nifedipine, and (3) adjustment of the spray coating solution and the transport rate of polymer dispersion could achieve a dosage form with a zero-order release rate. Since zero-order release rate and constant plasma concentration were achieved in this study using the non-uniform drug distribution technique, further studies to determine in vivo/in vitro correlation with various non-uniform drug distribution dosage forms will be conducted.

  10. Studies of Transient X-Ray Sources with the Ariel 5 All-Sky Monitor. Ph.D. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Kaluzienski, L. J.

    1977-01-01

    The All-Sky Monitor, an imaging X-ray detector launched aboard the Ariel 5 satellite, was used to obtain detailed light curves of three new sources. Additional data essential to the determination of the characteristic luminosities, rates of occurrence (and possible recurrence), and spatial distribution of these objects was also obtained. The observations are consistent with a roughly uniform galactic disk population consisting of at least two source sub-classes, with the second group (Type 2) at least an order of magnitude less luminous and correspondingly more frequent than the first (Type 1). While both subtypes are probably unrelated to the classical optical novae (or supernovae), they are most readily interpreted within the standard mass exchange X-ray binary model, with outbursts triggered by Roche-lobe overflow (Type 1) or enhancements in the stellar wind density of the companion (Type 2), respectively.

  11. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    NASA Technical Reports Server (NTRS)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
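    The Gaussian approximation mentioned above treats multiple-access interference (MAI) as additional Gaussian noise. As a rough illustration only, for plain BPSK DS-CDMA with equal-power users and random signature sequences, not the AQPSK system analyzed in the article, the standard Gaussian approximation of the bit-error rate can be sketched as:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_sga(k_users, n_chips, ebn0_db):
    """Approximate BPSK DS-CDMA bit-error rate under the standard
    Gaussian approximation: MAI from (k_users - 1) equal-power users is
    treated as extra Gaussian noise with variance (K-1)/(3N)."""
    ebn0 = 10 ** (ebn0_db / 10)
    sinr = 1.0 / ((k_users - 1) / (3.0 * n_chips) + 1.0 / (2.0 * ebn0))
    return q_function(math.sqrt(sinr))
```

With a single user the expression reduces to the familiar BPSK result Q(sqrt(2 Eb/N0)); adding users degrades the error rate, mirroring the near-far sensitivity discussed in the abstract.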

  12. Evidence for calcium soaps in human hair shaft revealed by sub-micrometer X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Briki, F.; Mérigoux, C.; Sarrot-Reynauld, F.; Salomé, M.; Fayard, B.; Susini, J.; Doucet, J.

    2003-03-01

    New information about calcium status in the human scalp hair shaft, deduced from X-ray microfluorescence imaging, is presented, including its distribution over the hair section, the existence of one or several binding types, and its variation between people. The existence of two different calcium types is inferred. The first corresponds to atoms (or ions) easily removable by hydrochloric acid, located in the cortex (granules), in the cuticle zone and also in the core of the medulla, which are identified as calcium soaps by comparison with X-ray diffraction and IR spectromicroscopy data. The second type consists of non-easily removable calcium atoms (or ions) located in the medulla wall, probably also the cuticle, and rather uniformly in the cortex; these calcium atoms may be involved in Ca^{2+}-binding proteins, and their concentration is fairly constant from one subject to another.

  13. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. First, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of the image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: instead of the homography matrix, the fundamental matrix generated by the eight-point algorithm is used as the model; the sample is selected by a random block-selection method, which ensures both uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
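    The SPRT speed-up can be illustrated with a toy sketch. The code below runs RANSAC on a 2-D line-fitting problem (a deliberately simplified stand-in for the paper's fundamental-matrix model) and abandons the verification of a hypothesis early once the sequential likelihood ratio indicates a bad model. All parameter values here are illustrative assumptions, not the paper's:

```python
import random

def sprt_ransac_line(points, n_iter=500, tol=0.1, eps=0.5, delta=0.05, A=50.0):
    """RANSAC line fit with SPRT early termination.
    eps: assumed inlier fraction under a good model;
    delta: inlier fraction under a bad model; A: rejection threshold."""
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)   # slope of hypothesized line
        b = y1 - a * x1             # intercept
        lam, inliers, rejected = 1.0, 0, False
        for (x, y) in points:
            good = abs(y - (a * x + b)) < tol
            inliers += good
            # SPRT likelihood-ratio update for "model is bad" vs "model is good"
            lam *= (delta / eps) if good else ((1 - delta) / (1 - eps))
            if lam > A:             # model is probably bad: abandon early
                rejected = True
                break
        if not rejected and inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

On data with 50 collinear points and 15 outliers, the verification loop is cut short for almost every contaminated hypothesis, which is where the running-time saving comes from.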

  14. Quantification of brain tissue through incorporation of partial volume effects

    NASA Astrophysics Data System (ADS)

    Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.

    1992-06-01

    This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
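    The modeling idea, Gaussian pure-tissue classes plus a uniform component for partial-volume pixels, can be sketched as follows. The intensities, weights, and the restriction to two tissue classes are hypothetical simplifications for illustration, not the study's fitted parameters:

```python
from statistics import NormalDist

# Schematic intensity model: two pure tissue classes plus a uniform
# partial-volume component spanning the gap between their means.
mu_gm, mu_wm, sigma = 100.0, 140.0, 8.0     # hypothetical T1 intensities
w_gm, w_wm, w_pv = 0.45, 0.45, 0.10         # mixture weights (sum to 1)

gm, wm = NormalDist(mu_gm, sigma), NormalDist(mu_wm, sigma)

def mixture_pdf(x):
    """Density of the three-component model at intensity x."""
    pv = 1.0 / (mu_wm - mu_gm) if mu_gm <= x <= mu_wm else 0.0
    return w_gm * gm.pdf(x) + w_wm * wm.pdf(x) + w_pv * pv

def decision_point():
    """Optimal threshold between the two tissue classes: where their
    weighted densities cross (the midpoint for equal weights/variances)."""
    lo, hi = mu_gm, mu_wm
    for _ in range(60):                      # bisection on the density ratio
        mid = (lo + hi) / 2
        if w_gm * gm.pdf(mid) > w_wm * wm.pdf(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Once such a model is fitted to the image histogram, voxels are classified by these decision points and the class proportions quantified.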

  15. Statistical characterization of a large geochemical database and effect of sample size

    USGS Publications Warehouse

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. 
Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for a rough judgement of the probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
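    The Q-Q diagnostics described above can be sketched in a few lines. This is a generic normal Q-Q construction (plotting positions against the standard normal inverse CDF), not the authors' exact procedure; for lognormally distributed concentrations, the log-scale plot is markedly straighter than the raw-scale one:

```python
import math, random
from statistics import NormalDist, mean

def qq_points(sample, log_scale=False):
    """Paired (theoretical, observed) quantiles for a normal Q-Q plot."""
    data = sorted(math.log(x) for x in sample) if log_scale else sorted(sample)
    n = len(data)
    std = NormalDist()
    # Plotting positions (i + 0.5)/n mapped through the standard normal inverse CDF
    theo = [std.inv_cdf((i + 0.5) / n) for i in range(n)]
    return theo, data

def qq_correlation(sample, log_scale=False):
    """Pearson correlation of the Q-Q points: near 1 means a straight plot."""
    theo, obs = qq_points(sample, log_scale)
    mt, mo = mean(theo), mean(obs)
    num = sum((t - mt) * (o - mo) for t, o in zip(theo, obs))
    den = math.sqrt(sum((t - mt) ** 2 for t in theo) *
                    sum((o - mo) ** 2 for o in obs))
    return num / den
```

Kinks (changes of slope) in such plots are what the authors use to delineate subpopulations with coherent chemical affinities.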

  16. Quantitative Analyses of Pediatric Cervical Spine Ossification Patterns Using Computed Tomography

    PubMed Central

    Yoganandan, Narayan; Pintar, Frank A.; Lew, Sean M.; Rao, Raj D.; Rangarajan, Nagarajan

    2011-01-01

    The objective of the present study was to quantify ossification processes of the human pediatric cervical spine. Computed tomography images were obtained from a high resolution scanner according to clinical protocols. Bone window images were used to identify the presence of the primary synchondroses of the atlas, axis, and C3 vertebrae in 101 children. Principles of logistic regression were used to determine probability distributions as a function of subject age for each synchondrosis for each vertebra. The mean and 95% upper and 95% lower confidence intervals are given for each dataset delineating probability curves. Posterior ossifications preceded bilateral anterior closures of the synchondroses in all vertebrae. However, ossifications occurred at different ages. Logistic regression results for closures of different synchondroses indicated p-values of <0.001 for the atlas, ranging from 0.002 to <0.001 for the axis, and 0.021 to 0.005 for the C3 vertebra. Fifty percent probability of three, two, and one synchondroses occurred at 2.53, 6.97, and 7.57 years of age for the atlas; 3.59, 4.74, and 5.7 years of age for the axis; and 1.28, 2.22, and 3.17 years of age for the third cervical vertebrae, respectively. Ossifications occurring at different ages indicate non-uniform maturations of bone growth/strength. They provide an anatomical rationale to reexamine dummies, scaling processes, and injury metrics for improved understanding of pediatric neck injuries. PMID:22105393
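    A minimal sketch of the logistic model underlying such probability curves, with hypothetical coefficients chosen only so that the 50% closure age matches the atlas value quoted above:

```python
import math

def closure_probability(age, b0, b1):
    """Logistic model: probability a synchondrosis has closed by a given age."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * age)))

def age_at_probability(p, b0, b1):
    """Invert the logistic curve: age at which closure probability equals p."""
    return (math.log(p / (1.0 - p)) - b0) / b1

# Hypothetical coefficients: slope chosen arbitrarily, intercept set so the
# 50% closure age is 2.53 years (the atlas value reported in the abstract).
b1 = 1.8
b0 = -b1 * 2.53
```

The 50% crossing age reported for each synchondrosis is simply the point where the fitted curve reaches p = 0.5, i.e. age = -b0/b1.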

  17. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-29

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost prohibitive to perform surveys over 100% of the site. In that case, it is particularly important to be explicit about the performance required of the survey. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine 1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, 2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and 3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.
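    For the simplest case, a circular TA crossed by parallel transects whose placement is random relative to the TA, the traversal probability of item 1) reduces to a one-line geometric result. This is a simplified illustration, not the full VSP algorithms, which also account for detection within the TA:

```python
def traversal_probability(diameter, spacing):
    """P(at least one transect crosses a circular target area) when parallel
    transects with the given spacing are laid down and the TA centre is
    uniformly distributed between two adjacent transects."""
    if spacing <= diameter:
        return 1.0                  # transects are closer than the TA width
    return diameter / spacing

def spacing_for_probability(diameter, p):
    """Transect spacing needed to traverse a circular TA with probability p."""
    return diameter / p
```

So a 100 m TA surveyed with 200 m transect spacing is traversed with probability 0.5, and guaranteeing traversal requires spacing no larger than the TA diameter.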

  18. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-28

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost prohibitive to perform surveys over 100% of the site. In such cases, it is particularly important to be explicit about the performance required of the surveys. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine (1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, (2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and (3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.

  19. WE-DE-201-12: Thermal and Dosimetric Properties of a Ferrite-Based Thermo-Brachytherapy Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warrell, G; Shvydka, D; Parsai, E I

    Purpose: The novel thermo-brachytherapy (TB) seed provides a simple means of adding hyperthermia to LDR prostate permanent implant brachytherapy. The high blood perfusion rate (BPR) within the prostate motivates the use of the ferrite and conductive outer layer design for the seed cores. We describe the results of computational analyses of the thermal properties of this ferrite-based TB seed in modelled patient-specific anatomy, as well as studies of the interseed and scatter (ISA) effect. Methods: The anatomies (including the thermophysical properties of the main tissue types) and seed distributions of 6 prostate patients who had been treated with LDR brachytherapy seeds were modelled in the finite element analysis software COMSOL, using ferrite-based TB and additional hyperthermia-only (HT-only) seeds. The resulting temperature distributions were compared to those computed for patient-specific seed distributions, but in uniform anatomy with a constant blood perfusion rate. The ISA effect was quantified in the Monte Carlo software package MCNP5. Results: Compared with temperature distributions calculated in modelled uniform tissue, temperature distributions in the patient-specific anatomy were higher and more heterogeneous. Moreover, the maximum temperature to the rectal wall was typically ∼1 °C greater for patient-specific anatomy than for uniform anatomy. The ISA effect of the TB and HT-only seeds caused a reduction in D90 similar to that found for previously-investigated NiCu-based seeds, but of a slightly smaller magnitude. Conclusion: The differences between temperature distributions computed for uniform and patient-specific anatomy for ferrite-based seeds are significant enough that heterogeneous anatomy should be considered. Both types of modelling indicate that ferrite-based seeds provide sufficiently high and uniform hyperthermia to the prostate, without excessively heating surrounding tissues. The ISA effect of these seeds is slightly less than that for the previously-presented NiCu-based seeds.

  20. The distribution of cigarette prices under different tax structures: findings from the International Tobacco Control Policy Evaluation (ITC) Project

    PubMed Central

    Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T

    2013-01-01

    Background The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. Objective This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. Methods We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Findings Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax. PMID:23792324

  1. The distribution of cigarette prices under different tax structures: findings from the International Tobacco Control Policy Evaluation (ITC) Project.

    PubMed

    Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T

    2014-03-01

    The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax.

  2. Aneurysm permeability following coil embolization: packing density and coil distribution

    PubMed Central

    Chueh, Ju-Yu; Vedantham, Srinivasan; Wakhloo, Ajay K; Carniato, Sarena L; Puri, Ajit S; Bzura, Conrad; Coffin, Spencer; Bogdanov, Alexei A; Gounis, Matthew J

    2015-01-01

    Background Rates of durable aneurysm occlusion following coil embolization vary widely, and a better understanding of coil mass mechanics is desired. The goal of this study is to evaluate the impact of packing density and coil uniformity on aneurysm permeability. Methods Aneurysm models were coiled using either Guglielmi detachable coils or Target coils. The permeability was assessed by taking the ratio of microspheres passing through the coil mass to those in the working fluid. Aneurysms containing coil masses were sectioned for image analysis to determine surface area fraction and coil uniformity. Results All aneurysms were coiled to a packing density of at least 27%. Packing density, surface area fraction of the dome and neck, and uniformity of the dome were significantly correlated (p<0.05). Hence, multivariate principal components-based partial least squares regression models were used to predict permeability. Similar loading vectors were obtained for packing and uniformity measures. Coil mass permeability was modeled better with the inclusion of packing and uniformity measures of the dome (r2=0.73) than with packing density alone (r2=0.45). The analysis indicates the importance of including a uniformity measure for coil distribution in the dome along with packing measures. Conclusions A densely packed aneurysm with a high degree of coil mass uniformity will reduce permeability. PMID:25031179

  3. Nonadditive entropies yield probability distributions with biases not warranted by the data.

    PubMed

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A

    2013-11-01

    Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
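    The multiplication-rule requirement can be made concrete: given only marginal constraints, maximizing the Boltzmann-Gibbs entropy yields the product distribution, and any other coupling with the same marginals has strictly lower entropy. A minimal numerical check (the marginals and the 0.1 mass shift are arbitrary illustrative choices):

```python
import math
from itertools import product

def entropy(p):
    """Boltzmann-Gibbs (Shannon) entropy of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Two binary events with fixed marginals.
px = [0.3, 0.7]
py = [0.6, 0.4]

# Boltzmann-Gibbs maximum-entropy joint given only the marginals:
# the product distribution (the multiplication rule for independent events).
p_prod = [px[i] * py[j] for i, j in product(range(2), repeat=2)]

# A different coupling with the same marginals (shift 0.1 of mass).
eps = 0.1
p_alt = [px[0] * py[0] + eps, px[0] * py[1] - eps,
         px[1] * py[0] - eps, px[1] * py[1] + eps]
```

The alternative coupling builds in a correlation the (marginal-only) data do not warrant, and its entropy is correspondingly lower; entropies violating the Shore-Johnson axioms can select such biased couplings.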

  4. Normal uniform mixture differential gene expression detection for cDNA microarrays

    PubMed Central

    Dean, Nema; Raftery, Adrian E

    2005-01-01

    Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807
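    The heart of such a method, a two-component normal-uniform mixture on normalized log-ratios, can be sketched as a posterior-probability calculation. The weight and spread parameters below are illustrative assumptions, not NUDGE's fitted values:

```python
from statistics import NormalDist

def diff_expression_prob(m, w=0.1, sigma=0.3, a=3.0):
    """Posterior probability that a gene with normalized log-ratio m is
    differentially expressed, under a two-component mixture:
    null genes ~ Normal(0, sigma); differential genes ~ Uniform(-a, a)
    with prior weight w. All parameter values are hypothetical."""
    null = NormalDist(0.0, sigma)
    f_diff = 1.0 / (2.0 * a) if -a <= m <= a else 0.0
    f_null = null.pdf(m)
    return w * f_diff / (w * f_diff + (1.0 - w) * f_null)
```

Genes far from zero fall in the normal component's tail, where the flat uniform component dominates, so they receive a posterior probability of differential expression near one; genes near zero receive a probability near the prior weight or below.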

  5. Effects of non-uniform root zone salinity on water use, Na+ recirculation, and Na+ and H+ flux in cotton

    PubMed Central

    Kong, Xiangqiang; Luo, Zhen; Dong, Hezhong; Eneji, A. Egrinya

    2012-01-01

    A new split-root system was established through grafting to study cotton response to non-uniform salinity. Each root half was treated with either uniform (100/100 mM) or non-uniform NaCl concentrations (0/200 and 50/150 mM). In contrast to uniform control, non-uniform salinity treatment improved plant growth and water use, with more water absorbed from the non- and low salinity side. Non-uniform treatments decreased Na+ concentrations in leaves. The [Na+] in the ‘0’ side roots of the 0/200 treatment was significantly higher than that in either side of the 0/0 control, but greatly decreased when the ‘0’ side phloem was girdled, suggesting that the increased [Na+] in the ‘0’ side roots was possibly due to transportation of foliar Na+ to roots through phloem. Plants under non-uniform salinity extruded more Na+ from the root than those under uniform salinity. Root Na+ efflux in the low salinity side was greatly enhanced by the higher salinity side. NaCl-induced Na+ efflux and H+ influx were inhibited by amiloride and sodium orthovanadate, suggesting that root Na+ extrusion was probably due to active Na+/H+ antiport across the plasma membrane. Improved plant growth under non-uniform salinity was thus attributed to increased water use, reduced leaf Na+ concentration, transport of excessive foliar Na+ to the low salinity side, and enhanced Na+ efflux from the low salinity root. PMID:22200663

  6. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. 
The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
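    The parallel tempering technique described above can be sketched in a few lines. This is an illustrative toy on a 1D double-well density; the temperature ladder, step sizes, and swap schedule are assumptions, not values from the text.

```python
import numpy as np

def energy(x):
    # double-well "potential": two modes at x = +/-1, barrier at x = 0
    return 4.0 * (x * x - 1.0) ** 2

rng = np.random.default_rng(1)
temps = np.array([1.0, 4.0, 16.0])     # T = 1 is the target distribution
betas = 1.0 / temps
x = np.full(3, -1.0)                   # all chains start in the left well
samples = []

for step in range(30000):
    # Metropolis update within each chain (hotter chains take larger steps)
    for i in range(3):
        prop = x[i] + rng.normal(0, 0.5 * np.sqrt(temps[i]))
        if rng.random() < np.exp(min(0.0, betas[i] * (energy(x[i]) - energy(prop)))):
            x[i] = prop
    # occasionally exchange a random adjacent pair using the Metropolis criterion
    if step % 10 == 0:
        i = rng.integers(0, 2)
        if rng.random() < np.exp(min(0.0, (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1])))):
            x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])

samples = np.array(samples)
frac_right = (samples > 0).mean()      # the cold chain should visit both wells
```

    A single chain at T = 1 would stay trapped in the starting well; the hot chains cross the barrier freely, and swaps carry those crossings down to the target temperature.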

  7. Persistence of canine distemper virus in the Greater Yellowstone ecosystem's carnivore community.

    PubMed

    Almberg, Emily S; Cross, Paul C; Smith, Douglas W

    2010-10-01

    Canine distemper virus (CDV) is an acute, highly immunizing pathogen that should require high densities and large populations of hosts for long-term persistence, yet CDV persists among terrestrial carnivores with small, patchily distributed groups. We used CDV in the Greater Yellowstone ecosystem's (GYE) wolves (Canis lupus) and coyotes (Canis latrans) as a case study for exploring how metapopulation structure, host demographics, and multi-host transmission affect the critical community size and spatial scale required for CDV persistence. We illustrate how host spatial connectivity and demographic turnover interact to affect both local epidemic dynamics, such as the length and variation in inter-epidemic periods, and pathogen persistence using stochastic, spatially explicit susceptible-exposed-infectious-recovered simulation models. Given the apparent absence of other known persistence mechanisms (e.g., a carrier or environmental state, densely populated host, chronic infection, or a vector), we suggest that CDV requires either large spatial scales or multi-host transmission for persistence. Current GYE wolf populations are probably too small to support endemic CDV. Coyotes are a plausible reservoir host, but CDV would still require 50000-100000 individuals for moderate persistence (> 50% over 10 years), which would equate to an area of 1-3 times the size of the GYE (60000-200000 km2). Coyotes, and carnivores in general, are not uniformly distributed; therefore, this is probably a gross underestimate of the spatial scale of CDV persistence. However, the presence of a second competent host species can greatly increase the probability of long-term CDV persistence at much smaller spatial scales. Although no management of CDV is currently recommended for the GYE, wolf managers in the region should expect periodic but unpredictable CDV-related population declines as often as every 2-5 years. 
Awareness and monitoring of such outbreaks will allow corresponding adjustments in management activities such as regulated public harvest, creating a smooth transition to state wolf management and conservation after > 30 years of being protected by the Endangered Species Act.
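    The stochastic susceptible-exposed-infectious-recovered dynamics discussed above can be illustrated with a minimal Gillespie-style SEIR simulation of fadeout in a small, well-mixed host group. All rates and the population size below are illustrative assumptions, not the authors' spatially explicit parameterization.

```python
import numpy as np

def gillespie_seir(N=200, I0=2, beta=1.2, sigma=0.5, gamma=0.4, seed=3):
    """Exact stochastic simulation of SEIR dynamics in a closed population.
    Runs until the infection fades out (E + I == 0); returns (times, trajectory)."""
    rng = np.random.default_rng(seed)
    S, E, I, R = N - I0, 0, I0, 0
    t, times, traj = 0.0, [0.0], [(S, E, I, R)]
    while E + I > 0:
        rates = np.array([beta * S * I / N,   # S -> E (transmission)
                          sigma * E,          # E -> I (end of latency)
                          gamma * I])         # I -> R (recovery)
        total = rates.sum()
        t += rng.exponential(1.0 / total)     # waiting time to next event
        event = rng.choice(3, p=rates / total)
        if event == 0:   S, E = S - 1, E + 1
        elif event == 1: E, I = E - 1, I + 1
        else:            I, R = I - 1, R + 1
        times.append(t)
        traj.append((S, E, I, R))
    return np.array(times), np.array(traj)

times, traj = gillespie_seir()
```

    In a closed group the infection always goes extinct once susceptibles are depleted, which is the fadeout behavior that motivates the paper's metapopulation and multi-host persistence arguments.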

  8. Nonstationarity in timing of extreme precipitation across China and impact of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Gu, Xihui; Zhang, Qiang; Singh, Vijay P.; Shi, Peijun

    2017-02-01

    This study examines the seasonality and nonstationarity in the timing of extreme precipitation obtained by annual maximum (AM) sampling and peak-over-threshold (POT) sampling techniques using circular statistics. Daily precipitation data from 728 stations with record lengths of at least 55 years across China were analyzed. In general, the average seasonality is dominated by the summer season (June-July-August), which is potentially related to East Asian monsoon and Indian monsoon activities. The strength of precipitation seasonality varied across China, with the highest strength in northeast, north, and central-north China, whereas the weakest seasonality was found in southeast China. There are three seasonality types: circular uniform, reflective symmetric, and asymmetric. However, circular uniform seasonality of extreme precipitation was not detected at any station across China. The asymmetric distribution was observed mainly in southeast China, and the reflective symmetric distribution of precipitation extremes was identified in the remaining regions. Furthermore, a strong signal of nonstationarity in the seasonality was detected at half of the weather stations considered in the study, exhibiting a significant shift in the timing of extreme precipitation as well as significant trends in the average and strength of seasonality. Seasonal vapor flux and related delivery pathways, together with tropical cyclones (TCs), are most probably the driving factors for the shifts or changes in the seasonality of extreme precipitation across China. The timing of precipitation extremes is closely related to seasonal shifts of floods and droughts, which has important implications for agricultural irrigation and water resources management. This study sheds new light on nonstationarity in the timing of precipitation extremes, in contrast to existing studies that focused on the magnitude and intensity of precipitation extremes.
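    The circular statistics used above reduce, at their simplest, to a mean event date and a resultant length r that measures seasonality strength (r near 1: strongly seasonal timing; r near 0: circular uniform timing). A minimal sketch, assuming day-of-year inputs:

```python
import numpy as np

def circular_seasonality(days, year_length=365.25):
    """Mean timing and strength of seasonality from event days-of-year.
    Returns (mean_day, r), with r in [0, 1] and r ~ 0 for uniform timing."""
    theta = 2 * np.pi * np.asarray(days, float) / year_length  # map days to angles
    x, y = np.cos(theta).mean(), np.sin(theta).mean()
    r = np.hypot(x, y)                                          # resultant length
    mean_day = (np.arctan2(y, x) % (2 * np.pi)) * year_length / (2 * np.pi)
    return mean_day, r

# annual-maximum dates clustered in summer -> strong seasonality
mean_day, r = circular_seasonality([175, 180, 185, 190, 200])
# dates spread evenly through the year -> near circular uniform (r ~ 0)
_, r_uniform = circular_seasonality([0, 91.3125, 182.625, 273.9375])
```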

  9. Origin and heterogeneity of pore sizes in the Mount Simon Sandstone and Eau Claire Formation: Implications for multiphase fluid flow

    DOE PAGES

    Mozley, Peter S.; Heath, Jason E.; Dewers, Thomas A.; ...

    2016-01-01

    The Mount Simon Sandstone and Eau Claire Formation represent a principal reservoir-caprock system for wastewater disposal, geologic CO2 storage, and compressed air energy storage (CAES) in the Midwestern United States. Of primary concern to site performance is heterogeneity in flow properties that could lead to non-ideal injectivity and distribution of injected fluids (e.g., poor sweep efficiency). Using core samples from the Dallas Center Structure, Iowa, we investigate the pore structure that governs flow properties of major lithofacies of these formations. Methods include gas porosimetry and permeametry, mercury intrusion porosimetry, thin section petrography, and X-ray diffraction. The lithofacies exhibit highly variable intra- and inter-formational distributions of pore-throat and pore-body sizes. Based on pore-throat size, samples fall into four distinct groups. Micropore-throat dominated samples are from the Eau Claire Formation, whereas the macropore-, mesopore-, and uniform-dominated samples are from the Mount Simon Sandstone. Complex paragenesis governs the high degree of pore and pore-throat size heterogeneity, due to an interplay of precipitation, non-uniform compaction, and later dissolution of cements. Furthermore, the cement dissolution event probably accounts for much of the current porosity in the unit. The unusually heterogeneous nature of the pore networks in the Mount Simon Sandstone indicates that there is a greater-than-normal opportunity for reservoir capillary trapping of non-wetting fluids, as quantified by CO2 and air column heights, which should be taken into account when assessing the potential of the reservoir-caprock system for CO2 storage and CAES.

  10. CFD simulation of the gas flow in a pulse tube cryocooler with two pulse tubes

    NASA Astrophysics Data System (ADS)

    Yin, C. L.

    2015-12-01

    In this paper, to guide subsequent optimization work, a two-dimensional Computational Fluid Dynamics (CFD) model is developed to simulate the temperature and velocity distributions of the oscillating fluid in the DPTC with individual phase-shifting. It is found that the axial temperature distribution of the regenerator is generally uniform, and the temperatures near the center of the same cross section of the two pulse tubes are noticeably higher than the corresponding near-wall temperatures. A wall temperature difference of about 0-7 K exists between the two pulse tubes. The velocity distribution near the center of the regenerator is uniform, and an obvious injection stream enters the pulse tubes at their centers from the hot end. The causes of these temperature and velocity distributions are explained.

  11. Mean-field calculations of chain packing and conformational statistics in lipid bilayers: comparison with experiments and molecular dynamics studies.

    PubMed Central

    Fattal, D R; Ben-Shaul, A

    1994-01-01

    A molecular, mean-field theory of chain packing statistics in aggregates of amphiphilic molecules is applied to calculate the conformational properties of the lipid chains comprising the hydrophobic cores of dipalmitoyl-phosphatidylcholine (DPPC), dioleoyl-phosphatidylcholine (DOPC), and palmitoyl-oleoyl-phosphatidylcholine (POPC) bilayers in their fluid state. The central quantity in this theory, the probability distribution of chain conformations, is evaluated by minimizing the free energy of the bilayer assuming only that the segment density within the hydrophobic region is uniform (liquidlike). Using this distribution we calculate chain conformational properties such as bond orientational order parameters and spatial distributions of the various chain segments. The lipid chains, both the saturated palmitoyl (-(CH2)14-CH3) and the unsaturated oleoyl (-(CH2)7-CH = CH-(CH2)7-CH3) chains are modeled using rotational isomeric state schemes. All possible chain conformations are enumerated and their statistical weights are determined by the self-consistency equations expressing the condition of uniform density. The hydrophobic core of the DPPC bilayer is treated as composed of single (palmitoyl) chain amphiphiles, i.e., the interactions between chains originating from the same lipid headgroup are assumed to be the same as those between chains belonging to different molecules. Similarly, the DOPC system is treated as a bilayer of oleoyl chains. The POPC bilayer is modeled as an equimolar mixture of palmitoyl and oleoyl chains. Bond orientational order parameter profiles, and segment spatial distributions are calculated for the three systems above, for several values of the bilayer thickness (or, equivalently, average area/headgroup) chosen, where possible, so as to allow for comparisons with available experimental data and/or molecular dynamics simulations. 
In most cases the agreement between the mean-field calculations, which are relatively easy to perform, and the experimental and simulation data is very good, supporting their use as an efficient tool for analyzing a variety of systems subject to varying conditions (e.g., bilayers of different compositions or thicknesses at different temperatures). PMID:7811955
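    The bond orientational order parameters mentioned above are conventionally S = (3⟨cos²θ⟩ − 1)/2, with θ the angle between a bond vector and the bilayer normal; a minimal sketch (taking the unit normal along z is an assumption for illustration):

```python
import numpy as np

def order_parameter(bond_vectors, axis=np.array([0.0, 0.0, 1.0])):
    """Bond orientational order parameter S = (3<cos^2 theta> - 1)/2,
    with theta the angle between each bond and the (unit) bilayer normal.
    S = 1: perfectly aligned; S = 0: isotropic; S = -1/2: perpendicular."""
    v = np.asarray(bond_vectors, float)
    cos2 = (v @ axis) ** 2 / np.einsum('ij,ij->i', v, v)   # cos^2 of each bond angle
    return 0.5 * (3.0 * cos2.mean() - 1.0)

aligned = np.tile([0.0, 0.0, 1.0], (100, 1))      # all bonds along the normal
rng = np.random.default_rng(0)
isotropic = rng.normal(size=(200000, 3))          # uniformly oriented directions
```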

  12. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

    Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, none provides a suitably detailed and comprehensive ontology, nor a database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. PRODIGEN: visualizing the probability landscape of stochastic gene regulatory networks in state and time space.

    PubMed

    Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta

    2017-02-15

    Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.

  14. Comparison of different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    In this study we examined and compared three different probability distribution methods to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 and magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated using the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using the three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was the most suitable of the three distributions for this region.
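    The Weibull fitting and K-S evaluation can be sketched as follows. This is a simple regression-based fit on the Weibull probability plot as a stand-in for the Easyfit/Matlab procedures the authors used, with synthetic inter-event times replacing the catalogue:

```python
import numpy as np

def fit_weibull_ks(t):
    """Fit Weibull(shape k, scale lam) by linear regression on the Weibull
    plot, then return (k, lam, D) with D the K-S goodness-of-fit statistic."""
    t = np.sort(np.asarray(t, float))
    n = t.size
    F_emp = (np.arange(1, n + 1) - 0.5) / n        # median-rank plotting positions
    # Weibull CDF F = 1 - exp(-(t/lam)^k)  =>  ln(-ln(1-F)) = k ln t - k ln lam
    k, c = np.polyfit(np.log(t), np.log(-np.log(1 - F_emp)), 1)
    lam = np.exp(-c / k)
    F_fit = 1 - np.exp(-(t / lam) ** k)
    # K-S statistic: largest gap between the empirical step CDF and the fit
    D = np.max(np.maximum(np.abs(np.arange(1, n + 1) / n - F_fit),
                          np.abs(np.arange(0, n) / n - F_fit)))
    return k, lam, D

rng = np.random.default_rng(42)
t = 2.0 * rng.weibull(1.5, size=500)   # synthetic inter-event times, k=1.5, lam=2
k, lam, D = fit_weibull_ks(t)
```

    Repeating the fit for each candidate distribution and comparing the D statistics is the model-selection step the abstract describes.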

  15. Parametrization of Backbone Flexibility in a Coarse-Grained Force Field for Proteins (COFFDROP) Derived from All-Atom Explicit-Solvent Molecular Dynamics Simulations of All Possible Two-Residue Peptides.

    PubMed

    Frembgen-Kesner, Tamara; Andrews, Casey T; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A; Jain, Aakash; Olayiwola, Oluwatoni J; Weishaar, Mitch R; Elcock, Adrian H

    2015-05-12

    Recently, we reported the parametrization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral, and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral, and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downward in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multidomain proteins connected by flexible linkers.
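    The iterative Boltzmann inversion update used above is V ← V + kT ln(P_sim/P_target). A minimal grid-based sketch, with the caveat that the CG "simulation" step is replaced by an exact Boltzmann average for clarity (real IBI re-runs a BD/MD simulation each cycle, so convergence is slower and noisier):

```python
import numpy as np

# tabulated coordinate grid (e.g., a pseudo-bond angle) and a bimodal target
x = np.linspace(-2.0, 2.0, 401)
dx = x[1] - x[0]
P_target = np.exp(-(x**2 - 1.0) ** 2)      # target distribution (kT = 1 units)
P_target /= P_target.sum() * dx

V = np.zeros_like(x)                        # initial guess: flat potential
for _ in range(5):
    # stand-in for a CG simulation: exact Boltzmann distribution under V
    P_sim = np.exp(-V)
    P_sim /= P_sim.sum() * dx
    # IBI update: V <- V + kT * ln(P_sim / P_target)
    V = V + np.log(P_sim / P_target)

P_final = np.exp(-V)
P_final /= P_final.sum() * dx
```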

  16. High-Resolution Imaging of Selenium in Kidneys: A Localized Selenium Pool Associated with Glutathione Peroxidase 3

    PubMed Central

    Malinouski, Mikalai; Kehr, Sebastian; Finney, Lydia; Vogt, Stefan; Carlson, Bradley A.; Seravalli, Javier; Jin, Richard; Handy, Diane E.; Park, Thomas J.; Loscalzo, Joseph; Hatfield, Dolph L.

    2012-01-01

    Abstract Aim: Recent advances in quantitative methods and sensitive imaging techniques of trace elements provide opportunities to uncover and explain their biological roles. In particular, the distribution of selenium in tissues and cells under both physiological and pathological conditions remains unknown. In this work, we applied high-resolution synchrotron X-ray fluorescence microscopy (XFM) to map selenium distribution in mouse liver and kidney. Results: Liver showed a uniform selenium distribution that was dependent on selenocysteine tRNA[Ser]Sec and dietary selenium. In contrast, kidney selenium had both uniformly distributed and highly localized components, the latter visualized as thin circular structures surrounding proximal tubules. Other parts of the kidney, such as glomeruli and distal tubules, only manifested the uniformly distributed selenium pattern that co-localized with sulfur. We found that proximal tubule selenium localized to the basement membrane. It was preserved in Selenoprotein P knockout mice, but was completely eliminated in glutathione peroxidase 3 (GPx3) knockout mice, indicating that this selenium represented GPx3. We further imaged kidneys of another model organism, the naked mole rat, which showed a diminished uniformly distributed selenium pool, but preserved the circular proximal tubule signal. Innovation: We applied XFM to image selenium in mammalian tissues and identified a highly localized pool of this trace element at the basement membrane of kidneys that was associated with GPx3. Conclusion: XFM allowed us to define and explain the tissue topography of selenium in mammalian kidneys at submicron resolution. Antioxid. Redox Signal. 16, 185–192. PMID:21854231

  17. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. 
It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
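    The contrast between plain PDF-weighted random selection and a stratified, jitter-style selection can be sketched as follows. This is a hypothetical scheme (one point drawn per equal-probability bin of the sampling PDF), not necessarily the paper's exact algorithm; the point is that the seed-to-seed spread of the captured signal weight is smaller for the stratified schedule:

```python
import numpy as np

N, m = 256, 32                         # grid points in the indirect dimension; points kept
pdf = np.exp(-np.arange(N) / 80.0)     # decaying NMR signal envelope as sampling weights
pdf /= pdf.sum()
cdf = np.cumsum(pdf)

def iid_schedule(rng):
    """Plain NUS: m distinct points drawn with PDF-weighted probabilities."""
    return rng.choice(N, size=m, replace=False, p=pdf)

def jittered_schedule(rng):
    """Stratified NUS: one grid point per equal-probability bin of the PDF."""
    picks = []
    for j in range(m):
        lo = np.searchsorted(cdf, j / m)
        hi = max(np.searchsorted(cdf, (j + 1) / m), lo + 1)
        picks.append(rng.integers(lo, min(hi, N - 1) + 1))
    return np.array(picks)

# seed-to-seed variability of the total captured signal weight
iid_w = [pdf[iid_schedule(np.random.default_rng(s))].sum() for s in range(200)]
jit_w = [pdf[jittered_schedule(np.random.default_rng(s))].sum() for s in range(200)]
```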

  18. Sunspots and the Newcomb-Benford Law. (Spanish Title: Manchas Solares y la Ley de Newcomb-Benford.) Manchas Solares e a Lei de Newcomb-Benford

    NASA Astrophysics Data System (ADS)

    Alves, Mauro A.; Lyra, Cássia S.

    2008-12-01

    The Newcomb-Benford Law (LNB) of first digits is introduced to high school students in an extracurricular activity through the study of sunspots. The LNB establishes that the first digits of various sets of data describing natural occurrences are not distributed uniformly, but according to a logarithmic probability distribution. The LNB is counter-intuitive and is a good example of how mathematics applied to the study of natural phenomena can provide surprising and unexpected results, serving also as a motivating agent in the study of physical sciences.
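    The LNB first-digit probabilities P(d) = log10(1 + 1/d) can be checked numerically; a minimal sketch using powers of 2 as a Benford-distributed stand-in for sunspot counts:

```python
import numpy as np

def first_digits(x):
    """Leading decimal digit of each positive value."""
    x = np.asarray(x, float)
    return np.floor(x / 10.0 ** np.floor(np.log10(x))).astype(int)

benford = np.log10(1 + 1 / np.arange(1, 10))     # P(d) = log10(1 + 1/d), d = 1..9

# stand-in data: powers of 2 are a classic Benford-distributed sequence
series = 2.0 ** np.arange(1, 801)
freqs = np.bincount(first_digits(series), minlength=10)[1:] / series.size
```

    Roughly 30% of the leading digits come out as 1 and under 5% as 9, rather than the uniform 1/9 each that intuition suggests.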

  19. Semantic Importance Sampling for Statistical Model Checking

    DTIC Science & Technology

    2015-01-16

    … SMT calls while maintaining correctness. Finally, we implement SIS in a tool called osmosis and use it to verify a number of stochastic systems with … Section 2 surveys related work. Section 3 presents background definitions and concepts. Section 4 presents SIS, and Section 5 presents our tool osmosis. … for which I∗M |= Φ(x) = 1. We do this by first randomly selecting a cube c from C∗ with uniform probability, since each cube has equal probability.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvin, Robert A.; Jin, Qiaoling; Lai, Barry

    Increasing evidence suggests that metal dyshomeostasis plays an important role in human neurodegenerative diseases. Although distinctive metal distributions are described for mature hippocampus and cortex, much less is known about metal levels and intracellular distribution in individual hippocampal neuronal somata. To address this problem, we conducted quantitative metal analyses utilizing synchrotron radiation X-ray fluorescence on frozen hydrated primary cultured neurons derived from rat embryonic cortex (CTX) and two regions of the hippocampus: dentate gyrus (DG) and CA1. Comparing average metal contents showed that the most abundant metals were calcium, iron, and zinc, whereas metals such as copper and manganese were less than 10% of zinc. Average metal contents were generally similar when compared across neurons cultured from CTX, DG, and CA1, except for manganese, which was larger in CA1. However, each metal showed a characteristic spatial distribution in individual neuronal somata. Zinc was uniformly distributed throughout the cytosol, with no evidence for the existence of previously identified zinc-enriched organelles, zincosomes. Calcium showed a peri-nuclear distribution consistent with accumulation in endoplasmic reticulum and/or mitochondria. Iron showed 2-3 distinct highly concentrated puncta only in peri-nuclear locations. Notwithstanding the small sample size, these analyses demonstrate that primary cultured neurons show characteristic metal signatures. The iron puncta probably represent iron-accumulating organelles, siderosomes. Thus, the metal distributions observed in mature brain structures are likely the result of both intrinsic neuronal factors that control cellular metal content and extrinsic factors related to the synaptic organization, function, and contacts formed and maintained in each region.

  1. High-voltage electrode optimization towards uniform surface treatment by a pulsed volume discharge

    NASA Astrophysics Data System (ADS)

    Ponomarev, A. V.; Pedos, M. S.; Scherbinin, S. V.; Mamontov, Y. I.; Ponomarev, S. V.

    2015-11-01

    In this study, the shape and material of the high-voltage electrode of an atmospheric-pressure plasma generation system were optimised. The research was performed with the goal of achieving maximum uniformity of plasma treatment of the surface of the low-voltage electrode, which has a diameter of 100 mm. To generate low-temperature plasma with a volume of roughly 1 cubic decimetre, a pulsed volume discharge initiated by a corona discharge was used. The uniformity of the plasma in the region of the low-voltage electrode was assessed using a system for measuring the distribution of discharge current density. The system's low-voltage electrode (the collector) was a disc 100 mm in diameter, the conducting surface of which was divided into 64 radially located segments of equal surface area. The current at each segment was registered by a high-speed measuring system controlled by an ARM™-based 32-bit microcontroller. To facilitate the interpretation of the results obtained, a computer program was developed that provides a 3D visualisation of the current density distribution on the surface of the low-voltage electrode. Based on the results obtained, an optimum shape for the high-voltage electrode was determined, and the uniformity of the distribution of discharge current density in relation to the distance between electrodes was studied. It was shown that the level of non-uniformity of the current density distribution depends on the size of the gap between the electrodes. Experiments indicated that it is advantageous to use graphite felt VGN-6 (Russian abbreviation) as the material of the high-voltage electrode's emitting surface.
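    A simple way to quantify the non-uniformity measured by the 64-segment collector is the coefficient of variation of the per-segment currents. The metric choice and the synthetic readings below are assumptions for illustration; the paper does not specify its metric.

```python
import numpy as np

def nonuniformity(segment_currents):
    """Coefficient of variation of current across the 64 equal-area
    collector segments: 0 for perfectly uniform treatment."""
    j = np.asarray(segment_currents, float)
    return j.std() / j.mean()

rng = np.random.default_rng(0)
uniform_discharge = np.full(64, 1.0) + rng.normal(0, 0.02, 64)  # nearly even current
filamentary = np.full(64, 0.2)
filamentary[::8] = 5.0           # current bunched into a few segments
```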

  2. Incorporating Skew into RMS Surface Roughness Probability Distribution

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
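    The claim that a Gaussian fit overestimates the mode of right-skewed roughness data can be illustrated with a lognormal stand-in for the proposed asymmetric distribution (the specific distribution and the synthetic data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic right-skewed RMS surface roughness measurements (nm)
rms = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=5000)

gaussian_mode = rms.mean()                  # symmetric Gaussian fit: mode = mean
mu, s = np.log(rms).mean(), np.log(rms).std()
lognormal_mode = np.exp(mu - s**2)          # asymmetric fit: mode = exp(mu - sigma^2)
```

    For any right-skewed distribution the mode lies below the mean, so the Gaussian treatment systematically overstates the most probable RMS value, which is the polishing-time argument made above.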

  3. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    PubMed

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.

  4. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    PubMed Central

    Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution reduced the power of the MIMIC model for detecting uniform-DIF by 0.33% and 0.47%, respectively. The findings indicated that increasing the scale length, the number of response categories, and the DIF magnitude improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detecting uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small. PMID:28713828

  5. Enhancement of a 2D front-tracking algorithm with a non-uniform distribution of Lagrangian markers

    NASA Astrophysics Data System (ADS)

    Febres, Mijail; Legendre, Dominique

    2018-04-01

    The 2D front tracking method is enhanced to control the development of spurious velocities for non-uniform distributions of markers. The hybrid formulation of Shin et al. (2005) [7] is considered. A new tangent calculation is proposed for the calculation of the tension force at markers. A new reconstruction method is also proposed to manage non-uniform distributions of markers. We show that for both the static and the translating spherical drop test cases the spurious currents are reduced to machine precision. We also show that the ratio of the Lagrangian grid size Δs to the Eulerian grid size Δx must satisfy Δs / Δx > 0.2 to ensure such a low level of spurious velocity. The method is found to provide very good agreement with benchmark test cases from the literature.

  6. Results on Vertex Degree and K-Connectivity in Uniform S-Intersection Graphs

    DTIC Science & Technology

    2014-01-01

    A uniform s-intersection graph models the topology of a secure wireless sensor network employing the widely used s-composite key predistribution scheme. Our theoretical findings are also confirmed by numerical results.

  7. 3D reconstruction from non-uniform point clouds via local hierarchical clustering

    NASA Astrophysics Data System (ADS)

    Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo

    2017-07-01

    Raw scanned 3D point clouds are usually irregularly distributed due to the essential shortcomings of laser sensors, which therefore poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.

  8. Visualization of self-heating of an all climate battery by infrared thermography

    NASA Astrophysics Data System (ADS)

    Zhang, Guangsheng; Tian, Hua; Ge, Shanhai; Marple, Dan; Sun, Fengchun; Wang, Chao-Yang

    2018-02-01

    Self-heating Li-ion battery (SHLB), a.k.a. all climate battery, has provided a novel and practical solution to the low temperature power loss challenge. During its rapid self-heating, it is critical to keep the heating process and temperature distributions uniform for superior battery performance, durability and safety. Through infrared thermography of an experimental SHLB cell activated from various low ambient temperatures, we find that temperature distribution is uniform over the active electrode area, suggesting uniform heating. We also find that a hot spot exists at the activation terminal during self-heating, which provides diagnostics for improvement of next generation SHLB cells without the hot spot.

  9. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  10. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
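    The depth trend described above can be sketched numerically with the standard sample estimators of skewness and excess kurtosis (a toy illustration with assumed stand-in distributions, not the measured irradiance data):

    ```python
    import random

    def skew_exkurt(xs):
        """Sample coefficient of skewness and excess kurtosis from central moments."""
        n = len(xs)
        m = sum(xs) / n
        m2 = sum((x - m) ** 2 for x in xs) / n
        m3 = sum((x - m) ** 3 for x in xs) / n
        m4 = sum((x - m) ** 4 for x in xs) / n
        return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

    random.seed(1)
    # Toy stand-ins: near-surface flash-dominated irradiance as a heavy-tailed
    # lognormal, fluctuations near 10 m depth as nearly Gaussian.
    surface = [random.lognormvariate(0.0, 0.9) for _ in range(20000)]
    deep = [random.gauss(1.0, 0.05) for _ in range(20000)]

    s_surf, k_surf = skew_exkurt(surface)  # strongly positive skew and kurtosis
    s_deep, k_deep = skew_exkurt(deep)     # both close to zero
    ```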

  11. Sci—Fri PM: Topics — 04: What if bystander effects influence cell kill within a target volume? Potential consequences of dose heterogeneity on TCP and EUD on intermediate risk prostate patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balderson, M.J.; Kirkby, C.; Department of Medical Physics, Tom Baker Cancer Centre, Calgary, Alberta

    In vitro evidence has suggested that radiation induced bystander effects may enhance non-local cell killing which may influence radiotherapy treatment planning paradigms. This work applies a bystander effect model, which has been derived from published in vitro data, to calculate equivalent uniform dose (EUD) and tumour control probability (TCP) and compare them with predictions from standard linear quadratic (LQ) models that assume a response due only to local absorbed dose. Comparisons between the models were made under increasing dose heterogeneity scenarios. Dose throughout the CTV was modeled with normal distributions, where the degree of heterogeneity was then dictated by changing the standard deviation (SD). The broad assumptions applied in the bystander effect model are intended to place an upper limit on the extent of the results in a clinical context. The bystander model suggests a moderate degree of dose heterogeneity yields as good or better outcome compared to a uniform dose in terms of EUD and TCP. Intermediate risk prostate prescriptions of 78 Gy over 39 fractions had maximum EUD and TCP values at SD of around 5 Gy. The plots only dropped below the uniform dose values for SD ∼ 10 Gy, almost 13% of the prescribed dose. The bystander model demonstrates the potential to deviate from the common local LQ model predictions as dose heterogeneity through a prostate CTV varies. The results suggest the potential for allowing some degree of dose heterogeneity within a CTV, although further investigations of the assumptions of the bystander model are warranted.

  12. Predicting the probability of slip in gait: methodology and distribution study.

    PubMed

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
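    The single-integral form described above, P(slip) = ∫ f_req(x) F_avail(x) dx, evaluated with the trapezoidal rule, can be sketched as follows. The normal friction statistics are hypothetical placeholders, and the closed-form check applies only to this all-normal special case:

    ```python
    import math

    def npdf(x, m, s):
        """Normal probability density."""
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    def ncdf(x, m, s):
        """Normal cumulative distribution via the error function."""
        return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2))))

    def p_slip(pdf_req, cdf_avail, lo, hi, n=2000):
        """P(slip) = P(available < required) = integral of f_req(x)*F_avail(x),
        computed with the trapezoidal rule on [lo, hi]."""
        h = (hi - lo) / n
        ys = [pdf_req(lo + i * h) * cdf_avail(lo + i * h) for i in range(n + 1)]
        return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

    # Hypothetical friction coefficients (illustrative only).
    m_req, s_req = 0.25, 0.05      # required friction during gait
    m_avail, s_avail = 0.40, 0.08  # available friction of the walking surface

    p = p_slip(lambda x: npdf(x, m_req, s_req),
               lambda x: ncdf(x, m_avail, s_avail),
               0.0, 1.0)

    # For two normals, P(avail - req < 0) has the closed form below,
    # which the trapezoidal result should reproduce closely.
    p_exact = ncdf(0.0, m_avail - m_req, math.hypot(s_req, s_avail))
    ```

    The same `p_slip` call accepts any pair of density and distribution functions, which is the point of the paper's generalization beyond the normality assumption.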

  13. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
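    The diagram-crossing statistic can be sketched as follows (a minimal 2D illustration; the vertex count, trial count, and thresholds are arbitrary choices, not the paper's): sample closed polygons with vertices uniform in the unit square and count self-crossings of each diagram, which grow roughly quadratically with n.

    ```python
    import random

    def orient(a, b, c):
        """Sign of the cross product (b - a) x (c - a)."""
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)

    def proper_cross(p, q, r, s):
        """True if segments pq and rs cross properly (degenerate cases
        occur with probability zero for continuous random coordinates)."""
        return (orient(p, q, r) * orient(p, q, s) < 0 and
                orient(r, s, p) * orient(r, s, q) < 0)

    def diagram_crossings(poly):
        """Count self-crossings of the closed polygon's 2D diagram."""
        n = len(poly)
        edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
        count = 0
        for i in range(n):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share the closing vertex
                if proper_cross(*edges[i], *edges[j]):
                    count += 1
        return count

    random.seed(7)
    n, trials = 40, 30
    total = sum(diagram_crossings([(random.random(), random.random())
                                   for _ in range(n)]) for _ in range(trials))
    avg = total / trials  # empirically on the order of n^2 crossings
    ```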

  14. Do Red and Blue Uniforms Matter in Football and Handball Penalties?

    PubMed

    Krenn, Bjoern; Pernhaupt, Niklas; Handsteiner, Markus

    2017-12-01

    Past research has revealed ambiguous results on the impact of red uniforms in sports competition. The current study was aimed at analyzing the role of red and blue uniforms in football and handball penalties. Two experiments were conducted using a within-subjects design, where participants rated uniform color-manipulated video clips. In the first study, participants (n = 39) watched footage of football players kicking a penalty, whereas in the second study (n = 118) videos of handball penalty takers, handball goalkeepers and football goalkeepers preparing themselves to score/save a penalty were shown. Participants rated the player's/goalkeeper's level of confidence and the expected position of the ball crossing the goal line in the first experiment, and additionally the probability of scoring the penalty against the goalkeepers in the second experiment. The videos stopped at the point where the ball was leaving the foot and hand respectively. Results did not show any beneficial impact of red uniforms. Rather, football players wearing blue were rated to kick the ball higher. The study contradicts any positive effect of red versus blue uniforms in the context of football and handball penalties, which emphasizes the need to search for potential moderators of color's impact on human behavior.

  15. Design and testing of a uniformly solar energy TIR-R concentration lenses for HCPV systems.

    PubMed

    Shen, S C; Chang, S J; Yeh, C Y; Teng, P C

    2013-11-04

    In this paper, a uniform total internal reflection-refraction (TIR-R) concentration (U-TIR-R-C) lens module was designed using the energy configuration method to eliminate hot spots on the surface of the solar cell and increase conversion efficiency. The design of most current solar concentrators emphasizes high-power concentration of solar energy but neglects the conversion inefficiency resulting from hot spots generated by uneven distributions of solar energy concentrated on solar cells. The energy configuration method proposed in this study employs the concept of ray tracing to uniformly distribute solar energy to solar cells through a U-TIR-R-C lens module. The U-TIR-R-C lens module adopted in this study possessed a 76-mm diameter, a 41-mm thickness, a concentration ratio of 1134 Suns, 82.6% optical efficiency, and 94.7% uniformity. The experiments demonstrated that the U-TIR-R-C lens module reduced the core temperature of the solar cell from 108 °C to 69 °C and the overall temperature difference from 45 °C to 10 °C, and effectively increased the relative conversion efficiency by approximately 3.8%. Therefore, the U-TIR-R-C lens module designed can effectively concentrate a large area of sunlight onto a small solar cell, and the concentrated solar energy can be evenly distributed in the solar cell to achieve uniform irradiance and effectively eliminate hot spots.

  16. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
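    The rectangular-region probabilities mentioned above can be sketched with a simple midpoint rule over the bivariate normal density (an illustrative stand-in for the report's routines, not their actual implementation):

    ```python
    import math

    def phi2(x, y, rho):
        """Standard bivariate normal density with correlation rho."""
        z = (x * x - 2 * rho * x * y + y * y) / (1 - rho * rho)
        return math.exp(-0.5 * z) / (2 * math.pi * math.sqrt(1 - rho * rho))

    def rect_prob(a, b, c, d, rho, n=200):
        """P(a < X < b, c < Y < d) by a midpoint rule over the rectangle."""
        hx, hy = (b - a) / n, (d - c) / n
        total = 0.0
        for i in range(n):
            x = a + (i + 0.5) * hx
            for j in range(n):
                y = c + (j + 0.5) * hy
                total += phi2(x, y, rho)
        return total * hx * hy

    def ncdf(x):
        """Standard normal CDF."""
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    # Sanity check: with rho = 0 the rectangle probability factorizes.
    p = rect_prob(-1, 1, -1, 1, 0.0)
    p_indep = (ncdf(1) - ncdf(-1)) ** 2

    # Positive correlation concentrates mass along the diagonal, raising the
    # probability of a symmetric rectangle (a Slepian-type monotonicity).
    p_corr = rect_prob(-1, 1, -1, 1, 0.5)
    ```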

  17. Comparison of Turbulent Heat-Transfer Results for Uniform Wall Heat Flux and Uniform Wall Temperature

    NASA Technical Reports Server (NTRS)

    Siegel, R.; Sparrow, E. M.

    1960-01-01

    The purpose of this note is to examine in a more precise way how the Nusselt numbers for turbulent heat transfer in both the fully developed and thermal entrance regions of a circular tube are affected by two different wall boundary conditions. The comparisons are made for: (a) Uniform wall temperature (UWT); and (b) uniform wall heat flux (UHF). Several papers have been concerned with the turbulent thermal entrance region problem. Although these analyses have all utilized an eigenvalue formulation for the thermal entrance region, there were differences in the choices of eddy diffusivity expressions, velocity distributions, and methods for carrying out the numerical solutions. These differences were also found in the fully developed analyses. Hence, when making a comparison of the analytical results for uniform wall temperature and uniform wall heat flux, it was not known if differences in the Nusselt numbers could be wholly attributed to the difference in wall boundary conditions, since all the analytical results were not obtained in a consistent way. To have results which could be directly compared, computations were carried out for the uniform wall temperature case, using the same eddy diffusivity, velocity distribution, and digital computer program employed for uniform wall heat flux. In addition, the previous work was extended to a lower Reynolds number range so that comparisons could be made over a wide range of both Reynolds and Prandtl numbers.

  18. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.

  19. Variable area fuel cell process channels

    DOEpatents

    Kothmann, Richard E.

    1981-01-01

    A fuel cell arrangement having a non-uniform distribution of fuel and oxidant flow paths, on opposite sides of an electrolyte matrix, sized and positioned to provide approximately uniform fuel and oxidant utilization rates, and cell conditions, across the entire cell.

  20. 10 CFR Appendix A to Subpart K of... - Uniform Test Method for Measuring the Energy Consumption of Distribution Transformers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Uniform Test Method is used to test more than one unit of a basic model to determine the efficiency of... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2014-01-01 2014-01-01 false Uniform Test Method for Measuring the Energy Consumption...

  1. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  2. Confined energy distribution for charged particle beams

    DOEpatents

    Jason, Andrew J.; Blind, Barbara

    1990-01-01

    A charged particle beam is formed to a relatively larger area beam which is well-contained and has a beam area which relatively uniformly deposits energy over a beam target. Linear optics receive an accelerator beam and output a first beam with a first waist defined by a relatively small size in a first dimension normal to a second dimension. Nonlinear optics, such as an octupole magnet, are located about the first waist and output a second beam having a phase-space distribution which folds the beam edges along the second dimension toward the beam core to develop a well-contained beam and a relatively uniform particle intensity across the beam core. The beam may then be expanded along the second dimension to form the uniform ribbon beam at a selected distance from the nonlinear optics. Alternately, the beam may be passed through a second set of nonlinear optics to fold the beam edges in the first dimension. The beam may then be uniformly expanded along the first and second dimensions to form a well-contained, two-dimensional beam for illuminating a two-dimensional target with a relatively uniform energy deposition.

  3. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to +0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
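    The weighting idea behind such a correction can be sketched on a discretized angular grid. The functional form, zone values, and names below are our illustrative assumptions, not the authors' exact formula: the lamp's relative intensity distribution weights the sphere's relative spatial responsivity, and the correction is the inverse of that weighted response.

    ```python
    def spatial_correction(intensity, responsivity):
        """Illustrative spatial non-uniformity correction: total relative
        intensity divided by the responsivity-weighted intensity sum.
        A perfectly uniform sphere (responsivity 1 everywhere) gives 1."""
        assert len(intensity) == len(responsivity)
        return sum(intensity) / sum(i * r for i, r in zip(intensity, responsivity))

    # Hypothetical coarse angular zones: a narrow-beam LED lamp measured in a
    # sphere whose relative responsivity sags toward one region.
    intensity = [1.0, 0.8, 0.3, 0.05]        # relative intensity per zone
    responsivity = [1.00, 0.99, 0.97, 0.90]  # relative spatial responsivity

    scf = spatial_correction(intensity, responsivity)   # slightly above 1
    uniform = spatial_correction(intensity, [1.0] * 4)  # exactly 1
    ```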

  4. Positive phase space distributions and uncertainty relations

    NASA Technical Reports Server (NTRS)

    Kruger, Jan

    1993-01-01

    In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.
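    The necessary-and-sufficient condition stated above can be written as a one-line check for the 1D case (a sketch with ħ = 1 in natural units; the variable names are ours): a Gaussian phase-space distribution with variances var_x, var_p and covariance cov_xp corresponds to a Wigner distribution exactly when the Schrödinger-form relation var_x·var_p − cov_xp² ≥ ħ²/4 holds.

    ```python
    HBAR = 1.0  # natural units

    def is_valid_wigner_gaussian(var_x, var_p, cov_xp, hbar=HBAR):
        """Schrödinger-form uncertainty relation (including correlations):
        var_x * var_p - cov_xp**2 >= hbar**2 / 4."""
        return var_x * var_p - cov_xp ** 2 >= hbar ** 2 / 4

    # A correlated but sufficiently broad Gaussian passes the check...
    ok = is_valid_wigner_gaussian(1.0, 1.0, 0.5)       # 1 - 0.25 >= 0.25
    # ...while one sharper than the uncertainty bound fails.
    too_sharp = is_valid_wigner_gaussian(0.1, 0.1, 0.0)  # 0.01 < 0.25
    ```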

  5. On the Number of Non-equivalent Ancestral Configurations for Matching Gene Trees and Species Trees.

    PubMed

    Disanto, Filippo; Rosenberg, Noah A

    2017-09-14

    An ancestral configuration is one of the combinatorially distinct sets of gene lineages that, for a given gene tree, can reach a given node of a specified species tree. Ancestral configurations have appeared in recursive algebraic computations of the conditional probability that a gene tree topology is produced under the multispecies coalescent model for a given species tree. For matching gene trees and species trees, we study the number of ancestral configurations, considered up to an equivalence relation introduced by Wu (Evolution 66:763-775, 2012) to reduce the complexity of the recursive probability computation. We examine the largest number of non-equivalent ancestral configurations possible for a given tree size n. Whereas the smallest number of non-equivalent ancestral configurations increases polynomially with n, we show that the largest number increases with [Formula: see text], where k is a constant that satisfies [Formula: see text]. Under a uniform distribution on the set of binary labeled trees with a given size n, the mean number of non-equivalent ancestral configurations grows exponentially with n. The results refine an earlier analysis of the number of ancestral configurations considered without applying the equivalence relation, showing that use of the equivalence relation does not alter the exponential nature of the increase with tree size.

  6. The relationship between flesh quality and numbers of Kudoa thyrsites plasmodia and spores in farmed Atlantic salmon, Salmo salar L.

    PubMed

    Dawson-Coates, J A; Chase, J C; Funk, V; Booy, M H; Haines, L R; Falkenberg, C L; Whitaker, D J; Olafson, R W; Pearson, T W

    2003-08-01

    Atlantic salmon, Salmo salar L., were exposed to Kudoa thyrsites (Myxozoa, Myxosporea)-containing sea water for 15 months, and then harvested and assessed for parasite burden and fillet quality. At harvest, parasites were enumerated in muscle samples from a variety of somatic and opercular sites, and mean counts were determined for each fish. After 6 days storage at 4 °C, fillet quality was determined by visual assessment and by analysis of muscle firmness using a texture analyzer. Fillet quality could best be predicted by determining mean parasite numbers and spore counts in all eight tissue samples (somatic and opercular) or in four fillet samples, as the counts from opercular samples alone showed greater variability and thus decreased reliability. The variability in both plasmodia and spore numbers between tissue samples taken from an individual fish indicated that the parasites were not uniformly distributed in the somatic musculature. Therefore, to best predict the probable level of fillet degradation caused by K. thyrsites infections, multiple samples must be taken from each fish. If this is performed, a mean plasmodia count of 0.3 mm⁻² or a mean spore count of 4.0 × 10⁵ g⁻¹ of tissue are the levels where the probability of severe myoliquefaction becomes a significant risk.

  7. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding μ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^{(μ,d)}(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_∞^{(1,d)}(t,p) = [Γ(1 + I_d⁻¹)(t + I_d⁻¹)/Γ(t + p + I_d⁻¹)] δ_{p,2}, where t = 0, 1, 2, …, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model which, even for μ = 0, presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) ∝ p⁻¹]: S_N^{(0,rm)}(t,p) = Γ(N)/{Γ[N + 1 − (t + p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly and they have been validated by numerical experiments.

  8. Combined Loads Test Fixture for Thermal-Structural Testing Aerospace Vehicle Panel Concepts

    NASA Technical Reports Server (NTRS)

    Fields, Roger A.; Richards, W. Lance; DeAngelis, Michael V.

    2004-01-01

    A structural test requirement of the National Aero-Space Plane (NASP) program has resulted in the design, fabrication, and implementation of a combined loads test fixture. Principal requirements for the fixture are testing a 4- by 4-ft hat-stiffened panel with combined axial (either tension or compression) and shear load at temperatures ranging from room temperature to 915 F, keeping the test panel stresses caused by the mechanical loads uniform, and thermal stresses caused by non-uniform panel temperatures minimized. The panel represents the side fuselage skin of an experimental aerospace vehicle, and was produced for the NASP program. A comprehensive mechanical loads test program using the new test fixture has been conducted on this panel from room temperature to 500 F. Measured data have been compared with finite-element analyses predictions, verifying that uniform load distributions were achieved by the fixture. The overall correlation of test data with analysis is excellent. The panel stress distributions and temperature distributions are very uniform and fulfill program requirements. This report provides details of an analytical and experimental validation of the combined loads test fixture. Because of its simple design, this unique test fixture can accommodate panels from a variety of aerospace vehicle designs.

  9. Global Flow Instability and Control IV Held in Crete, Greece on September 28-October 2, 2009: A Synthesis of Presentations and Discussions

    DTIC Science & Technology

    2009-09-01

    non-uniform, stationary rotation / non-stationary rotation, mass...Cayley spectral transformation as a means of rotating the basin of convergence of the Arnoldi algorithm. Instead of doing the inversion of the large...pair of counter-rotating streamwise vortices embedded in uniform shear flow. Consistently with earlier work by the same group, the main present finding

  10. Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1991-01-01

    The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three-body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.
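    The origin of such an enhancement can be illustrated (not reproduced; the paper's factor of ~3 comes from the full triaxial three-body calculation) with a toy Monte Carlo in the two-body approximation, where the accretion rate per planetesimal scales as v times a gravitationally focused cross-section proportional to 1 + (v_esc/v)^2. All numbers below are illustrative assumptions.

    ```python
    import math
    import random

    def accretion_rate(v, v_esc=1.0):
        # two-body rate ~ v * cross-section, with gravitational focusing
        # factor 1 + (v_esc / v)**2 (units arbitrary)
        return v * (1.0 + (v_esc / v) ** 2)

    random.seed(1)
    sigma = 0.1  # per-axis velocity dispersion, well below v_esc
    speeds = [math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)))
              for _ in range(100_000)]
    v_rms = math.sqrt(sum(v * v for v in speeds) / len(speeds))
    mean_rate = sum(accretion_rate(v) for v in speeds) / len(speeds)

    # averaging the rate over the Gaussian distribution beats evaluating
    # it at the single RMS speed, because 1/v is convex (Jensen's inequality)
    enhancement = mean_rate / accretion_rate(v_rms)
    ```

    The toy model already gives an enhancement above unity in the focusing-dominated regime; the larger factor quoted in the abstract reflects the full eccentricity/inclination dynamics.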

  11. Therapeutic analysis of high-dose-rate {sup 192}Ir vaginal cuff brachytherapy for endometrial cancer using a cylindrical target volume model and varied cancer cell distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu; Donnelly, Eric D.; Strauss, Jonathan B.

    Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either a varied or a constant cancer cell distribution using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which varied with depth and resembled a half-Gaussian distribution, was used in radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and the prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, type of cancer cell distribution, and the sensitivity of normal tissue.
With a half-Gaussian distribution of cancer cells, which are concentrated most densely at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs ranged from (5.0){sup 4} to (13.4){sup 4} for radiosensitive normal tissue, depending on the cylinder size, treatment length, prescription depth, and dose. However, for a uniform cancer cell distribution, the EUDs were between 6.3 Gy × 4 and 7.1 Gy × 4, and the TRs were found to be between (1.4){sup 4} and (1.7){sup 4}. For uniformly interspersed cancer and radio-resistant normal cells, the TRs were less than 1. The two VCBT prescription regimens were found to be equivalent in terms of EUDs and TRs. Conclusions: HDR VCBT strongly favors a cylindrical target volume in which the cancer cell distribution follows its dosimetric trend. Assuming a half-Gaussian distribution of cancer cells, HDR VCBT provides a considerable radiobiological advantage over external beam radiotherapy (EBRT) in terms of sparing more normal tissue while maintaining the same level of cancer cell killing. But for a uniform cancer cell distribution and radio-resistant normal tissue, the radiobiology outcome of HDR VCBT does not show an advantage over EBRT. This study strongly suggests that radiation therapy design should consider the cancer cell distribution inside the target volume in addition to the shape of the target.
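    The EUD concept used above can be sketched as follows: average the LQ survival over the cell-density-weighted dose distribution, then invert the LQ curve to find the uniform dose giving the same survival. The alpha/beta values, depth-dose falloff, and discretized half-Gaussian weights below are assumptions for illustration, not the paper's parameters.

    ```python
    import math

    ALPHA, BETA = 0.3, 0.03  # LQ parameters in Gy^-1, Gy^-2 (assumed values)

    def lq_survival(d):
        return math.exp(-ALPHA * d - BETA * d * d)

    def eud(doses, weights):
        """Equivalent uniform dose: the uniform dose giving the same
        cell-density-weighted survival as the non-uniform distribution."""
        s_bar = (sum(w * lq_survival(d) for d, w in zip(doses, weights))
                 / sum(weights))
        lethality = -math.log(s_bar)
        # invert S(d) = exp(-ALPHA*d - BETA*d^2) with the quadratic formula
        return (-ALPHA + math.sqrt(ALPHA ** 2 + 4 * BETA * lethality)) / (2 * BETA)

    # dose falls off with depth into the vaginal wall; cell density follows
    # a discretized half-Gaussian peaking at the mucosal surface (invented)
    depths = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]           # cm
    doses = [8.8, 7.9, 7.1, 6.4, 5.9, 5.5]            # Gy per fraction
    weights = [math.exp(-0.5 * (z / 0.25) ** 2) for z in depths]
    d_eq = eud(doses, weights)
    ```

    Because survival is dominated by the underdosed cells, the EUD always lies between the minimum and maximum dose, weighted toward wherever the cell density is high.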

  12. VizieR Online Data Catalog: Abundances of M33 HII regions (Magrini+, 2010)

    NASA Astrophysics Data System (ADS)

    Magrini, L.; Stanghellini, L.; Corbelli, E.; Galli, D.; Villaver, E.

    2009-11-01

    We analyze the spatial distribution of metals in M33 using a new sample and literature data of HII regions, constraining a model of galactic chemical evolution with HII region and planetary nebula (PN) abundances. We consider chemical abundances of a new sample of HII regions complemented with previous literature data-sets. Supported by a uniform sample of nebular spectroscopic observations, we conclude that: i) the metallicity distribution in M33 is very complex, showing a central depression in metallicity probably due to observational bias; ii) the metallicity gradient in the disk of M33 has a slope of -0.037+/-0.009dex/kpc in the whole radial range up to ~8kpc, and -0.044+/-0.009dex/kpc excluding the central kpc; iii) there is a small evolution of the slope with time from the epoch of PN progenitor formation to the present time. Description: Observed and dereddened emission-line fluxes of 33 HII regions are presented. Physical and chemical properties, such as electron temperatures and densities, and ionic and total chemical abundances of He, O, N, Ne, Ar, S, are derived. (3 data files).

  13. Vascularization of bioprosthetic valve material

    NASA Astrophysics Data System (ADS)

    Boughner, Derek R.; Dunmore-Buyze, Joy; Heenatigala, Dino; Lohmann, Tara; Ellis, Chris G.

    1999-04-01

    Cell membrane remnants represent a probable nucleation site for calcium deposition in bioprosthetic heart valves. Calcification is a primary failure mode of both bovine pericardial and porcine aortic heterograft bioprostheses, but the nonuniform pattern of calcium distribution within the tissue remains unexplained. Searching for a likely cellular source, we considered the possibility of a previously overlooked small blood vessel network. Using a videomicroscopy technique, we examined 5 matched pairs of porcine aortic and pulmonary valves and 14 samples from 6 bovine pericardia. Tissue was placed on a Leitz Metallux microscope and transilluminated with a 75 watt mercury lamp. Video images were obtained using a silicon intensified target camera equipped with a 431 nm interference filter to maximize contrast of red cells trapped in a capillary microvasculature. Video images were recorded for analysis on a Silicon Graphics Image Analysis work station equipped with a video frame grabber. For porcine valves, the technique demonstrated a vascular bed in the central spongiosa at cusp bases with vessel sizes from 6-80 micrometers. Bovine pericardium differed, with a more uniform distribution of 7-100 micrometer vessels residing centrally. Thus, small blood vessel endothelial cells provide a potential explanation for the patterns of bioprosthetic calcification.

  14. Leukemia and other cancers following radiation treatment of pelvic disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, P.G.

    1977-04-01

    Follow-up studies of patients treated for cancer of the cervix with radiotherapy have shown such women to be at little or no increased risk of leukemia subsequent to the radiation exposure. However, women exposed to lower doses of radiation in the pelvic area, in the induction of an artificial menopause, appear to show increased risks of both leukemia and cancers of those sites directly in the radiation field. The studies of these two types of radiation exposure are reviewed. The findings may possibly be reconciled with each other on the basis of the distribution of radiation dose to the bone marrow. Irradiation for cancer of the cervix delivers radiation doses to a small portion of the marrow which are probably lethal for most marrow cells. The mean dose to cells distant from the cervix may be too small to produce a detectable increase in leukemia incidence. The lower and more uniformly distributed radiation dose used to induce an artificial menopause will be less lethal for marrow cells and may consequently deliver a higher ''effective'' marrow dose to surviving cells, resulting in an increased leukemia risk.
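    The reconciliation argument can be made concrete with a toy model in which the leukemia risk contributed by a marrow compartment scales as dose (induction) times exp(-k*D) (the probability that a transformed cell survives the exposure). The dose values and the cell-killing constant k below are purely illustrative assumptions, not fitted to the reviewed studies.

    ```python
    import math

    def marrow_leukemia_risk(doses, k=1.0):
        """Toy risk model: per-compartment risk ~ D * exp(-k*D), i.e.
        linear induction damped by cell killing at high local dose
        (illustrative only; D in arbitrary dose units)."""
        return sum(d * math.exp(-k * d) for d in doses) / len(doses)

    # cervix radiotherapy: near-lethal dose to a small marrow fraction,
    # near-zero dose elsewhere (100 equal-mass marrow compartments)
    concentrated = [20.0] * 5 + [0.1] * 95
    # artificial menopause: lower but nearly uniform marrow dose
    uniform = [2.0] * 100

    risk_concentrated = marrow_leukemia_risk(concentrated)
    risk_uniform = marrow_leukemia_risk(uniform)
    # the lower, uniform exposure yields the higher "effective" marrow risk
    ```

    The heavily irradiated compartments contribute almost nothing because their cells are killed, which is exactly the mechanism the abstract invokes.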

  15. Tracking of plus-ends reveals microtubule functional diversity in different cell types

    NASA Astrophysics Data System (ADS)

    Shaebani, M. Reza; Pasula, Aravind; Ott, Albrecht; Santen, Ludger

    2016-07-01

    Many cellular processes are tightly connected to the dynamics of microtubules (MTs). While in neuronal axons MTs mainly regulate intracellular trafficking, they participate in cytoskeleton reorganization in many other eukaryotic cells, enabling the cell to efficiently adapt to changes in the environment. We show that the functional differences of MTs in different cell types and regions are reflected in the dynamic properties of MT tips. Using the plus-end tracking protein EB1 to monitor growing MT plus-ends, we show that MT dynamics and life cycle in axons of human neurons significantly differ from those of fibroblast cells. The density of plus-ends and the rescue and catastrophe frequencies increase, while the growth rate decreases, toward the fibroblast cell margin. This results in a rather stable filamentous network structure and maintains the connection between nucleus and membrane. In contrast, plus-ends are uniformly distributed along the axons and exhibit diverse polymerization run times and spatially homogeneous rescue and catastrophe frequencies, leading to MT segments of various lengths. The probability distributions of the excursion length of polymerization and of the MT length both have nearly exponential tails, in agreement with the analytical predictions of a two-state model of MT dynamics.
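    The exponential tails follow from the two-state picture: growth at a constant speed interrupted by catastrophes at a constant rate gives exponentially distributed run times, hence exponentially distributed excursion lengths with mean v_g/f_c. A minimal sketch (the speed and rate below are arbitrary values, not fits to the EB1 data):

    ```python
    import random

    V_G = 0.1   # growth speed, um/s (assumed)
    F_C = 0.02  # catastrophe frequency, 1/s (assumed)

    def excursion_lengths(n, rng):
        """Sample polymerization excursion lengths in the two-state model:
        run time ~ Exp(F_C), length = V_G * run time, so lengths are
        exponentially distributed with mean V_G / F_C."""
        return [V_G * rng.expovariate(F_C) for _ in range(n)]

    rng = random.Random(7)
    lengths = excursion_lengths(100_000, rng)
    mean_len = sum(lengths) / len(lengths)  # approaches V_G / F_C = 5 um
    ```

    Spatially varying catastrophe frequencies, as observed toward the fibroblast margin, would instead produce position-dependent (non-exponential) length statistics.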

  16. New Data on the Presence of Hemocyanin in Plecoptera: Recomposing a Puzzle

    PubMed Central

    Amore, Valentina; Gaetani, Brunella; Angeles Puig, Maria; Fochetti, Romolo

    2011-01-01

    The specific role of hemocyanin in Plecoptera (stoneflies) is still not completely understood, since none of the hypotheses advanced have proven fully convincing. Previous data show that mRNA hemocyanin sequences are not present in all Plecoptera, and that hemocyanin does not seem to be uniformly distributed within the order. All species possess hexamerins, which are multifunction proteins that probably originated from hemocyanin. In order to obtain an increasingly detailed picture of the presence and distribution of hemocyanin across the order, this study presents new data regarding nymphs and adults of selected Plecoptera species. Results confirm that hemocyanin expression differs among nymphs of the studied stonefly species. Even though previous studies have found hemocyanin in adults of two stonefly species, it was not detected in the present study, even in species whose nymphs show hemocyanin, suggesting that the physiological need for this protein can change during the life cycle. The phylogenetic pattern obtained using hemocyanin sequences matches the accepted scheme of traditional phylogeny based on morphology, anatomy, and biology. It is remarkable that the conserved region of hemocyanin acts as a phylogenetic molecular marker within Plecoptera. PMID:22236413

  17. Rotational modulation of the chromospheric activity in the young solar-type star, X-1 Orionis

    NASA Technical Reports Server (NTRS)

    Boesgaard, A. M.; Simon, T.

    1982-01-01

    The IUE satellite was used to observe one of the youngest G stars (G0 V), for which Duncan (1981) derives an age of 6 x 10^8 years from the Li abundance. Rotational modulation was looked for in the emission flux in the chromospheric and transition region lines of this star. Variations in the Ca II K-line profile were studied with the CFH telescope at Mauna Kea. Results show that the same modulation of the emission flux of Ca II due to stellar rotation is present in the transition region feature of C IV and probably of He II. For other UV lines the modulation is not apparent, owing to a more complex surface distribution of the active areas or supergranulation network, a shorter lifetime of the conditions which give rise to these features, or uncertainties in the measured line strengths. The Mg II emission flux is constant to within +/- 3.4%, implying a rather uniform distribution of Mg II emission areas. The Ca II emission not only shows a measurable variation in intensity but also variations in detailed line profile shape when observed at high resolution.

  18. Spatial Probability Distribution of Strata's Lithofacies and its Impacts on Land Subsidence in Huairou Emergency Water Resources Region of Beijing

    NASA Astrophysics Data System (ADS)

    Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.

    2016-12-01

    Continuous over-exploitation of groundwater causes dramatic drawdown, and leads to regional land subsidence in the Huairou Emergency Water Resources region, which is located in the up-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of strata's lithofacies of the alluvial fan, ground deformation has no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of strata's lithofacies. The transition probability geostatistics approach provides potential for characterizing the distribution of heterogeneous lithofacies in the subsurface. Combined the thickness of clay layer extracted from the simulation, with deformation field acquired from PS-InSAR technology, the influence of strata's lithofacies on land subsidence can be analyzed quantitatively. The strata's lithofacies derived from borehole data were generalized into four categories and their probability distribution in the observe space was mined by using the transition probability geostatistics, of which clay was the predominant compressible material. Geologically plausible realizations of lithofacies distribution were produced, accounting for complex heterogeneity in alluvial plain. At a particular probability level of more than 40 percent, the volume of clay defined was 55 percent of the total volume of strata's lithofacies. This level, equaling nearly the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and uncompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Some similarities of patterns were indicated between the spatial distribution of deformation field and clay layer. 
In areas with roughly similar water-table decline, subsidence occurs more where the subsurface has a higher probability of containing compressible material than where that probability is lower. Such an estimate of the spatial probability distribution is useful for analyzing the uncertainty of land subsidence.
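    The Markov-chain core of the transition probability approach can be sketched with a one-dimensional vertical facies chain. The facies set and transition matrix below are invented for illustration; the actual model conditions three-dimensional juxtaposition tendencies and mean lengths on the Huairou borehole data.

    ```python
    import random

    FACIES = ["clay", "silt", "sand", "gravel"]
    # row-stochastic transition probabilities between successive depth
    # intervals (illustrative values, not derived from borehole data)
    T = {
        "clay":   [0.60, 0.20, 0.15, 0.05],
        "silt":   [0.30, 0.40, 0.25, 0.05],
        "sand":   [0.20, 0.20, 0.50, 0.10],
        "gravel": [0.10, 0.10, 0.40, 0.40],
    }

    def simulate_column(start, n_cells, rng):
        """Markov-chain simulation of a vertical lithofacies column."""
        column = [start]
        for _ in range(n_cells - 1):
            column.append(rng.choices(FACIES, weights=T[column[-1]])[0])
        return column

    rng = random.Random(3)
    col = simulate_column("clay", 10_000, rng)
    # proportion of the compressible material (clay) in the realization
    clay_fraction = col.count("clay") / len(col)
    ```

    Repeating such realizations and counting, cell by cell, how often clay occurs gives exactly the kind of spatial occurrence probability thresholded at 40 percent in the study.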

  19. The exact probability distribution of the rank product statistics for replicated experiments.

    PubMed

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
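    For small problem sizes the exact null distribution of the rank product can be obtained by direct enumeration; the paper derives a closed form that scales to realistic sizes, so the brute-force sketch below serves only to make the statistic concrete (n and k values are arbitrary).

    ```python
    from collections import Counter
    from itertools import product

    def rank_product_pmf(n, k):
        """Exact null PMF of the rank product statistic: the product of k
        independent ranks, each uniform on 1..n (one rank per replicate)."""
        counts = Counter()
        for ranks in product(range(1, n + 1), repeat=k):
            rp = 1
            for r in ranks:
                rp *= r
            counts[rp] += 1
        total = n ** k
        return {rp: c / total for rp, c in sorted(counts.items())}

    pmf = rank_product_pmf(n=5, k=3)
    # exact tail probability P(RP <= 4) under the null
    p_value = sum(p for rp, p in pmf.items() if rp <= 4)
    ```

    It is precisely these small tail probabilities that the permutation and gamma approximations estimate poorly, motivating the exact derivation.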

  20. Polymer powder processing of cryomilled polycaprolactone for solvent-free generation of homogeneous bioactive tissue engineering scaffolds.

    PubMed

    Lim, Jing; Chong, Mark Seow Khoon; Chan, Jerry Kok Yen; Teoh, Swee-Hin

    2014-06-25

    Synthetic polymers used in tissue engineering require functionalization with bioactive molecules to elicit specific physiological reactions. These additives must be homogeneously dispersed in order to achieve enhanced composite mechanical performance and uniform cellular response. This work demonstrates the use of a solvent-free powder processing technique to form osteoinductive scaffolds from cryomilled polycaprolactone (PCL) and tricalcium phosphate (TCP). Cryomilling is performed to achieve micrometer-sized PCL particles and reduce melt viscosity, thus improving TCP distribution and structural integrity. A breakthrough is achieved in the successful incorporation of 70 weight percent TCP into a continuous film structure. Following compaction and melting, PCL/TCP composite scaffolds are found to display uniform distribution of TCP throughout the PCL matrix regardless of composition. Homogeneous spatial distribution is also achieved in fabricated 3D scaffolds. When seeded onto powder-processed PCL/TCP films, mesenchymal stem cells are found to undergo robust and uniform osteogenic differentiation, indicating the potential of this approach for biofunctionalizing scaffolds for tissue engineering applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Helical Tomotherapy vs. Intensity-Modulated Proton Therapy for Whole Pelvis Irradiation in High-Risk Prostate Cancer Patients: Dosimetric, Normal Tissue Complication Probability, and Generalized Equivalent Uniform Dose Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widesott, Lamberto, E-mail: widesott@yahoo.it; Pierelli, Alessio; Fiorino, Claudio

    2011-08-01

    Purpose: To compare intensity-modulated proton therapy (IMPT) and helical tomotherapy (HT) treatment plans for high-risk prostate cancer (HRPCa) patients. Methods and Materials: The plans of 8 patients with HRPCa treated with HT were compared with IMPT plans with a two quasi-lateral field setup (-100{sup o}; 100{sup o}) optimized with the Hyperion treatment planning system. Both techniques were optimized to simultaneously deliver 74.2 Gy/Gy relative biologic effectiveness (RBE) in 28 fractions to planning target volume (PTV) 3-4 (prostate + proximal seminal vesicles), 65.5 Gy/Gy(RBE) to PTV2 (distal seminal vesicles and rectum/prostate overlap), and 51.8 Gy/Gy(RBE) to PTV1 (pelvic lymph nodes). Normal tissue complication probability (NTCP) calculations were performed for the rectum, and generalized equivalent uniform dose (gEUD) was estimated for the bowel cavity, penile bulb and bladder. Results: A slightly better PTV coverage and homogeneity of target dose distribution with IMPT was found: the percentage of PTV volume receiving {>=}95% of the prescribed dose (V{sub 95%}) was on average >97% in HT and >99% in IMPT. The conformity indexes were significantly lower for protons than for photons, and there was a statistically significant reduction of the IMPT dosimetric parameters, up to 50 Gy/Gy(RBE) for the rectum and bowel and 60 Gy/Gy(RBE) for the bladder. The NTCP values for the rectum were higher in HT for all the sets of parameters, but the gain was small and in only a few cases statistically significant. Conclusions: Comparable PTV coverage was observed. Based on NTCP calculation, IMPT is expected to allow a small reduction in rectal toxicity, and a significant dosimetric gain with IMPT, both in the medium-dose and in the low-dose range in all OARs, was observed.
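    The gEUD figure of merit used in this comparison has a simple closed form: a power-law average over the dose-volume histogram (Niemierko's formulation). The dose bins and the volume-effect parameter a below are illustrative, not the study's organ-specific values.

    ```python
    def geud(doses, volumes, a):
        """Generalized equivalent uniform dose over a dose-volume histogram:
        gEUD = (sum_i v_i * d_i**a) ** (1/a), with v_i fractional volumes.
        a = 1 gives the mean dose; large positive a approaches the maximum
        dose (serial organs); negative a emphasizes cold spots (targets)."""
        v_tot = sum(volumes)
        return sum((v / v_tot) * d ** a
                   for d, v in zip(doses, volumes)) ** (1.0 / a)

    # illustrative rectum DVH: dose bins (Gy) and fractional volumes
    doses = [10.0, 30.0, 50.0, 70.0]
    volumes = [0.40, 0.30, 0.20, 0.10]

    mean_dose = geud(doses, volumes, a=1.0)        # = 30.0 Gy (the mean dose)
    serial_weighted = geud(doses, volumes, a=8.0)  # pulled toward the hot bins
    ```

    Raising a shifts the summary dose toward the hottest bins, which is why a large a is chosen for serially organized organs at risk.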

  2. A spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an operational earthquake forecast

    USGS Publications Warehouse

    Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximilian J.

    2017-01-01

    We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. 
Because UCERF3‐ETAS has many sources of uncertainty, as will any subsequent version or competing model, potential usefulness needs to be considered in the context of actual applications.
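    The ETAS component can be sketched as a branching (cluster) process: each event triggers a Poisson number of offspring with Gutenberg-Richter magnitudes and Omori-law waiting times. All parameter values below are illustrative and chosen subcritical; they are not UCERF3's calibrated values, and the fault-based, elastic-rebound, and spatial components of UCERF3-ETAS are omitted.

    ```python
    import math
    import random

    B_GR = 1.0                    # Gutenberg-Richter b-value (assumed)
    M_MIN = 2.5                   # minimum simulated magnitude
    K_PROD, ALPHA = 0.02, 0.8     # productivity (subcritical for ALPHA < B_GR)
    C_OMORI, P_OMORI = 0.01, 1.2  # Omori-law parameters, in days

    def poisson_sample(lam, rng):
        # Knuth's algorithm; adequate for the modest rates used here
        limit, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= limit:
                return k
            k += 1

    def sample_magnitude(rng):
        # Gutenberg-Richter: magnitudes above M_MIN are exponential
        return M_MIN + rng.expovariate(B_GR * math.log(10.0))

    def sample_omori_dt(rng):
        # inverse-CDF draw from f(t) ~ (t + c)**(-p), t >= 0
        u = rng.random()
        return C_OMORI * ((1.0 - u) ** (-1.0 / (P_OMORI - 1.0)) - 1.0)

    def etas_catalog(mainshock_mag, horizon_days, rng):
        """Branching simulation of one triggered sequence after a
        mainshock at t = 0 (background seismicity omitted for brevity)."""
        catalog = []
        stack = [(0.0, mainshock_mag)]
        while stack:
            t0, m0 = stack.pop()
            lam = K_PROD * 10.0 ** (ALPHA * (m0 - M_MIN))
            for _ in range(poisson_sample(lam, rng)):
                t = t0 + sample_omori_dt(rng)
                if t < horizon_days:
                    child = (t, sample_magnitude(rng))
                    catalog.append(child)
                    stack.append(child)
        return sorted(catalog)

    rng = random.Random(42)
    catalog = etas_catalog(mainshock_mag=7.0, horizon_days=365.0, rng=rng)
    ```

    Because triggered events themselves trigger, the sequence is a cascade; subcriticality (expected offspring per event below one when integrated over the magnitude distribution) guarantees it terminates.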

  3. Solute concentration at a well in non-Gaussian aquifers under constant and time-varying pumping schedule

    NASA Astrophysics Data System (ADS)

    Libera, Arianna; de Barros, Felipe P. J.; Riva, Monica; Guadagnini, Alberto

    2017-10-01

    Our study is keyed to the analysis of the interplay between engineering factors (i.e., transient pumping rates versus less realistic but commonly analyzed uniform extraction rates) and the heterogeneous structure of the aquifer (as expressed by the probability distribution characterizing transmissivity) on contaminant transport. We explore the joint influence of diverse (a) groundwater pumping schedules (constant and variable in time) and (b) representations of the stochastic heterogeneous transmissivity (T) field on temporal histories of solute concentrations observed at an extraction well. The stochastic nature of T is rendered by modeling its natural logarithm, Y = ln T, through a typical Gaussian representation and the recently introduced Generalized sub-Gaussian (GSG) model. The latter has the unique property of embedding scale-dependent non-Gaussian features of the main statistics of Y and its (spatial) increments, which have been documented in a variety of studies. We rely on numerical Monte Carlo simulations and compute the temporal evolution at the well of low-order moments of the solute concentration (C), as well as statistics of the peak concentration (Cp), identified as the environmental performance metric of interest in this study. We show that the pumping schedule strongly affects the pattern of the temporal evolution of the first two statistical moments of C, regardless of the nature (Gaussian or non-Gaussian) of the underlying Y field, whereas the latter quantitatively influences their magnitude. Our results show that uncertainty associated with C and Cp estimates is larger when operating under a transient extraction scheme than under the action of a uniform withdrawal schedule. The probability density function (PDF) of Cp displays a long positive tail in the presence of a time-varying pumping schedule. All these aspects are magnified in the presence of non-Gaussian Y fields.
Additionally, the PDF of Cp displays a bimodal shape for all types of pumping schemes analyzed, independent of the type of heterogeneity considered.

  4. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    NASA Astrophysics Data System (ADS)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that with increasing probability, the expected effect of ASMC on increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems small for any probability. For a set of parameters, the derived probability distribution of peak discharge seems to be fitted well by the gamma distribution. Finally, an application to a small watershed, aimed at testing whether runoff coefficient tables for the rational method can be prepared in advance, and a comparison of peak discharges obtained by the GABS model with those measured in an experimental flume for a loamy-sand soil, were carried out.
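    The Green-Ampt component of the GABS chain solves an implicit equation for cumulative infiltration F(t); a fixed-point sketch under ponded conditions is shown below (the parameter values are arbitrary, loosely loamy-sand-like, and not taken from the paper).

    ```python
    import math

    def green_ampt_cumulative(t, K, psi, dtheta, iters=200):
        """Cumulative infiltration F(t) under ponded conditions from the
        implicit Green-Ampt equation
            F = K*t + psi*dtheta * ln(1 + F/(psi*dtheta)),
        solved by fixed-point iteration (a contraction: the right-hand
        side has slope psi*dtheta / (psi*dtheta + F) < 1)."""
        s = psi * dtheta          # suction head times moisture deficit
        F = K * t                 # starting guess: no suction contribution
        for _ in range(iters):
            F = K * t + s * math.log(1.0 + F / s)
        return F

    def green_ampt_rate(F, K, psi, dtheta):
        # infiltration capacity f = K * (1 + psi*dtheta / F)
        return K * (1.0 + psi * dtheta / F)

    # illustrative values: K in cm/h, psi in cm, dtheta dimensionless;
    # dtheta is where the ASMC enters (wetter soil -> smaller deficit)
    F1 = green_ampt_cumulative(t=1.0, K=3.0, psi=6.0, dtheta=0.3)
    ```

    The antecedent soil moisture condition acts through the moisture deficit dtheta, which is exactly why ASMC propagates into the derived peak-discharge distribution.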

  5. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
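    The construction being compared can be sketched for the simplest case: a stabilized weight built from two normal densities, marginal over conditional. The data-generating model and parameters below are invented for illustration, and in practice both densities are estimated from fitted regression models rather than known.

    ```python
    import math
    import random

    def normal_pdf(x, mu, sd):
        return (math.exp(-0.5 * ((x - mu) / sd) ** 2)
                / (sd * math.sqrt(2.0 * math.pi)))

    rng = random.Random(11)
    n = 50_000
    covariate = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # continuous exposure depends on the covariate (homoscedastic normal case)
    exposure = [0.5 * c + rng.gauss(0.0, 1.0) for c in covariate]

    # stabilized inverse probability weight:
    # sw = f(A) / f(A | L), marginal density over conditional density
    marg_sd = math.sqrt(0.5 ** 2 + 1.0)  # true marginal SD of the exposure
    weights = [normal_pdf(a, 0.0, marg_sd) / normal_pdf(a, 0.5 * c, 1.0)
               for a, c in zip(exposure, covariate)]
    mean_weight = sum(weights) / n  # close to 1 when the model is correct
    ```

    A mean weight far from 1, or a handful of extreme weights, is the usual diagnostic that the chosen conditional density (normal here; gamma, t, or quantile binning in the paper) fits the exposure poorly.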

  6. Investigation of the Spatiotemporal Responses of Nanoparticles in Tumor Tissues with a Small-Scale Mathematical Model

    PubMed Central

    Chou, Cheng-Ying; Huang, Chih-Kang; Lu, Kuo-Wei; Horng, Tzyy-Leng; Lin, Win-Li

    2013-01-01

    The transport and accumulation of anticancer nanodrugs in tumor tissues are affected by many factors including particle properties, vascular density and leakiness, and interstitial diffusivity. It is important to understand the effects of these factors on the detailed drug distribution in the entire tumor for an effective treatment. In this study, we developed a small-scale mathematical model to systematically study the spatiotemporal responses and accumulative exposures of macromolecular carriers in localized tumor tissues. We chose various dextrans as model carriers and studied the effects of vascular density, permeability, diffusivity, and half-life of dextrans on their spatiotemporal concentration responses and accumulative exposure distribution to tumor cells. The relevant biological parameters were obtained from experimental results previously reported by the Dreher group. The area under concentration-time response curve (AUC) quantified the extent of tissue exposure to a drug and therefore was considered more reliable in assessing the extent of the overall drug exposure than individual concentrations. The results showed that 1) a small macromolecule can penetrate deep into the tumor interstitium and produce a uniform but low spatial distribution of AUC; 2) large macromolecules produce high AUC in the perivascular region, but low AUC in the distal region away from vessels; 3) medium-sized macromolecules produce a relatively uniform and high AUC in the tumor interstitium between two vessels; 4) enhancement of permeability can elevate the level of AUC, but have little effect on its uniformity while enhancement of diffusivity is able to raise the level of AUC and improve its uniformity; 5) a longer half-life can produce a deeper penetration and a higher level of AUC distribution. 
The numerical results indicate that a long half-life carrier in plasma and a high interstitial diffusivity are the key factors to produce a high and relatively uniform spatial AUC distribution in the interstitium. PMID:23565142
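    The AUC metric used throughout is simply the time integral of the local concentration; a minimal trapezoidal sketch follows (the concentration-time values are invented, not the dextran data).

    ```python
    def auc_trapezoid(times, conc):
        """Area under the concentration-time curve by the trapezoidal rule."""
        return sum(0.5 * (c0 + c1) * (t1 - t0)
                   for t0, t1, c0, c1 in zip(times, times[1:], conc, conc[1:]))

    # illustrative interstitial concentration history at one tissue location
    times = [0.0, 1.0, 2.0, 4.0, 8.0]   # hours
    conc = [0.0, 0.8, 1.0, 0.6, 0.2]    # arbitrary units

    exposure = auc_trapezoid(times, conc)
    ```

    Evaluating this integral at every spatial location turns the transient concentration field into the spatial AUC distribution whose level and uniformity the study compares across carrier sizes.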

  7. Nonimaging polygonal mirrors achieving uniform irradiance distributions on concentrating photovoltaic cells.

    PubMed

    Schmitz, Max; Dähler, Fabian; Elvinger, François; Pedretti, Andrea; Steinfeld, Aldo

    2017-04-10

    We introduce a design methodology for nonimaging, single-reflection mirrors with polygonal inlet apertures that generate a uniform irradiance distribution on a polygonal outlet aperture, enabling a multitude of applications within the domain of concentrated photovoltaics. Notably, we present single-mirror concentrators of square and hexagonal perimeter that achieve very high irradiance uniformity on a square receiver at concentrations ranging from 100 to 1000 suns. These optical designs can be assembled in compound concentrators with maximized active area fraction by leveraging tessellation. More advanced multi-mirror concentrators, where each mirror individually illuminates the whole area of the receiver, allow for improved performance while permitting greater flexibility for the concentrator shape and robustness against partial shading of the inlet aperture.

  8. Use of Radon for Evaluation of Atmospheric Transport Models: Sensitivity to Emissions

    NASA Technical Reports Server (NTRS)

    Gupta, Mohan L.; Douglass, Anne R.; Kawa, S. Randolph; Pawson, Steven

    2004-01-01

    This paper presents comparative analyses of atmospheric radon (Rn) distributions simulated using different emission scenarios and compared with observations. Results indicate that the model generally reproduces observed distributions of Rn, but there are some biases in the model related to differences in large-scale and convective transport. Simulations presented here use an off-line three-dimensional chemical transport model driven by assimilated winds and two scenarios of Rn fluxes (atoms cm^-2 s^-1) from ice-free land surfaces: (A) globally uniform flux of 1.0, and (B) uniform flux of 1.0 between 60 deg. S and 30 deg. N followed by a sharp linear decrease to 0.2 at 70 deg. N. We considered an additional scenario (C) where Rn emissions for case A were uniformly reduced by 28%. Results show that case A overpredicts observed Rn distributions in both hemispheres. Simulated northern hemispheric (NH) Rn distributions from cases B and C compare better with the observations, but are not discernible from each other. In the southern hemisphere, surface Rn distributions from case C compare better with the observations. We performed a synoptic-scale source-receptor analysis for surface Rn to locate regions with ratios B/A and B/C less than 0.5. Considering an uncertainty in regional Rn emissions of a factor of two, our analysis indicates that additional measurements of surface Rn, particularly during April-October and north of 50 deg. N over the Pacific as well as Atlantic regions, would make it possible to determine if the proposed latitude gradient in Rn emissions is superior to a uniform flux scenario.
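
    Scenario B's latitude-dependent flux can be written as a simple piecewise function; a sketch (behaviour poleward of 70 deg. N and south of 60 deg. S is held constant here as an assumption, since the abstract only specifies the range between):

```python
def rn_flux_scenario_b(lat_deg):
    """Rn flux (relative units, atoms per unit area per second) vs. latitude
    for scenario B: 1.0 from 60 S to 30 N, linear decrease to 0.2 at 70 N."""
    if lat_deg <= 30.0:
        return 1.0                                  # uniform tropical/southern flux
    if lat_deg >= 70.0:
        return 0.2                                  # assumed constant poleward of 70 N
    return 1.0 - 0.8 * (lat_deg - 30.0) / 40.0      # linear ramp from 30 N to 70 N

print(rn_flux_scenario_b(0.0), rn_flux_scenario_b(50.0), rn_flux_scenario_b(70.0))
```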

  9. Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction

    NASA Technical Reports Server (NTRS)

    Cohen, A. C.

    1971-01-01

    A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
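
    The two building blocks of the model can be sketched as probability mass functions; the parameter values below are illustrative only (the fitted Cape Kennedy parameters are not reproduced here):

```python
import math

def zt_poisson_pmf(k, lam):
    """Zero-truncated Poisson: e.g. thunderstorms per event, k >= 1."""
    if k < 1:
        return 0.0
    return math.exp(-lam) * lam**k / (math.factorial(k) * (1.0 - math.exp(-lam)))

def neg_binomial_pmf(k, r, p):
    """Negative binomial: P(K = k) for k = 0, 1, 2, ..."""
    return math.comb(k + r - 1, k) * p**r * (1.0 - p)**k

# Sanity check: both pmfs sum to ~1 over their support (lam, r, p illustrative).
print(sum(zt_poisson_pmf(k, 1.8) for k in range(1, 60)))
print(sum(neg_binomial_pmf(k, 3, 0.4) for k in range(0, 200)))
```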

  10. An elusive paleodemography? A comparison of two methods for estimating the adult age distribution of deaths at late Classic Copan, Honduras.

    PubMed

    Storey, Rebecca

    2007-01-01

    Comparison of different adult age estimation methods on the same skeletal sample with unknown ages could forward paleodemographic inference, while researchers sort out various controversies. The original aging method for the auricular surface (Lovejoy et al., 1985a) assigned an age estimation based on several separate characteristics. Researchers have found this original method hard to apply. It is usually forgotten that before assigning an age, there was a seriation, an ordering of all available individuals from youngest to oldest. Thus, age estimation reflected the place of an individual within its sample. A recent article (Buckberry and Chamberlain, 2002) proposed a revised method that scores these various characteristics into age stages, which can then be used with a Bayesian method to estimate an adult age distribution for the sample. Both methods were applied to the adult auricular surfaces of a Pre-Columbian Maya skeletal population from Copan, Honduras, and resulted in age distributions with significant numbers of older adults. However, contrary to the usual paleodemographic distribution, one Bayesian estimation based on uniform prior probabilities yielded a population with 57% of the ages at death over 65, while another based on a high-mortality life table still had 12% of the individuals aged over 75 years. The seriation method yielded an age distribution more similar to that known from preindustrial historical situations, without excessive longevity of adults. Paleodemography must still wrestle with its elusive goal of accurate adult age estimation from skeletons, a necessary base for demographic study of past populations. © 2006 Wiley-Liss, Inc.
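
    The sensitivity to the prior can be sketched with a one-step Bayes update; the age classes, likelihoods, and priors below are hypothetical illustrations, not data from the Copan sample:

```python
# Hypothetical likelihoods P(observed stage | age class) for one auricular-surface
# stage, combined with two different priors over age classes via Bayes' rule.
age_classes = ["20-34", "35-49", "50-64", "65+"]
likelihood = [0.05, 0.15, 0.35, 0.45]             # illustrative, not from the paper

uniform_prior = [0.25, 0.25, 0.25, 0.25]
high_mortality_prior = [0.40, 0.30, 0.20, 0.10]   # few survivors to old age

def posterior(prior):
    joint = [lk * p for lk, p in zip(likelihood, prior)]
    z = sum(joint)
    return [j / z for j in joint]

print(posterior(uniform_prior))         # uniform prior: mass follows the likelihood
print(posterior(high_mortality_prior))  # same data, much less mass in the 65+ class
```

    The point of the sketch is that the same stage data yield very different old-age proportions depending on the prior, which is exactly the contrast reported above.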

  11. A landscape perspective of the stream corridor invasion and habitat characteristics of an exotic (Dioscorea oppositifolia) in a pristine watershed in Illinois

    USGS Publications Warehouse

    Thomas, J.R.; Middleton, B.; Gibson, D.J.

    2006-01-01

    The spatial distribution of exotics across riparian landscapes is not uniform, and research elaborating the environmental constraints and dispersal behavior that underlie these patterns of distribution is warranted. This study examined the spatial distribution, growth patterns, and habitat constraints of populations of the invasive Dioscorea oppositifolia in a forested stream corridor of a tributary of Drury Creek in Giant City State Park, IL. The distribution of D. oppositifolia was determined at the watershed scale mainly by floodplain structure and connectivity. Populations of D. oppositifolia were confined to the floodplain, with overbank flooding from the stream. Dioscorea oppositifolia probably originates in disturbed areas upstream of natural corridors, and subsequently, the species disperses downstream into pristine canyons or ravines via bulbils dispersing in the water. In Giant City State Park, populations of D. oppositifolia were distributed on the floodplain across broad gradients of soil texture, light, slope, and potential radiation. The study also examined the longevity of bulbils in various micro-environments to illuminate strategies for the management of the species in invaded watersheds. After 1 year, the highest percentages of bulbils were viable under leaves, and much lower percentages were viable over leaves, in soil, and in the creek (76.0 ± 6.8, 21.2 ± 9.6, 21.6 ± 3.6, and 5.2 ± 5.2%, respectively). This study suggests that management procedures that reduce leaf litter on the forest floor (e.g., prescribed burning) could reduce the number of bulbils of D. oppositifolia stored in the watershed. © Springer 2006.

  12. Heterogeneities in Axonal Structure and Transporter Distribution Lower Dopamine Reuptake Efficiency

    PubMed Central

    Block, Ethan R.; Bartol, Tom M.; Sorkin, Alexander

    2018-01-01

    Efficient clearance of dopamine (DA) from the synapse is key to regulating dopaminergic signaling. This role is fulfilled by DA transporters (DATs). Recent advances in the structural characterization of DAT from Drosophila (dDAT) and in high-resolution imaging of DA neurons and the distribution of DATs in living cells now permit us to gain a mechanistic understanding of DA reuptake events in silico. Using electron microscopy images and immunofluorescence of transgenic knock-in mouse brains that express hemagglutinin-tagged DAT in DA neurons, we reconstructed a realistic environment for MCell simulations of DA reuptake, wherein the identity, population and kinetics of homology-modeled human DAT (hDAT) substates were derived from molecular simulations. The complex morphology of axon terminals near active zones was observed to give rise to large variations in DA reuptake efficiency, and thereby in extracellular DA density. Comparison of the effect of different firing patterns showed that phasic firing would increase the probability of reaching local DA levels sufficiently high to activate low-affinity DA receptors, mainly owing to high DA levels transiently attained during the burst phase. The experimentally observed nonuniform surface distribution of DATs emerged as a major modulator of DA signaling: reuptake was slower, and the peaks/width of transient DA levels were sharper/wider under nonuniform distribution of DATs, compared with uniform. Overall, the study highlights the importance of accurate descriptions of extrasynaptic morphology, DAT distribution, and conformational kinetics for quantitative evaluation of dopaminergic transmission and for providing deeper understanding of the mechanisms that regulate DA transmission. PMID:29430519

  13. Advanced Technology for Ultra-Low Power System-on-Chip (SoC)

    DTIC Science & Technology

    2017-06-01

    design at IDS = 1 mA/μm compared with that in experimental 14 nm-node FinFET. The redistributed electric field along the channel length direction can... design can result in more uniform electron density and electron velocity distributions compared to a homojunction device. This uniform electron...

  14. World cup soccer players tend to be born with sun and moon in adjacent zodiacal signs

    PubMed Central

    Verhulst, J

    2000-01-01

    The ecliptic elongation of the moon with respect to the sun does not show uniform distribution on the birth dates of the 704 soccer players selected for the 1998 World Cup. However, a uniform distribution is expected on astronomical grounds. The World Cup players show a very pronounced tendency (p = 0.00001) to be born on days when the sun and moon are in adjacent zodiacal signs. Key Words: soccer; World Cup; astrology; moon PMID:11131239
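
    Testing birth-date elongations against uniformity over the 12 zodiacal signs amounts to a chi-square goodness-of-fit test; a simulated sketch (the elongations here are synthetic draws under the null hypothesis, not the players' data):

```python
import random

random.seed(1)
N, BINS = 704, 12                            # 704 players, twelve zodiacal signs
counts = [0] * BINS
for _ in range(N):
    elong = random.random() * 360.0          # elongation uniform under the null
    counts[int(elong // 30.0)] += 1          # 30-degree zodiacal bins

expected = N / BINS
chi2 = sum((c - expected) ** 2 / expected for c in counts)
dof = BINS - 1
print(round(chi2, 2), "on", dof, "dof")      # compare to chi-square(11) quantiles
```

    On the real birth dates, a large statistic relative to the chi-square distribution with 11 degrees of freedom would quantify the claimed departure from uniformity.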

  15. Electromagnetic Fields of a Uniform Sphere in a Uniform Conducting Medium with Application to Dipole Sources

    DTIC Science & Technology

    1991-09-01

    Approved for public release; distribution is unlimited. Vector spherical harmonic expansions are... electric and magnetic field vectors from E·r and B·r alone. General expressions are given relating the scattered field expansion coefficients to the source...

  16. Electron kinematics in a plasma focus

    NASA Technical Reports Server (NTRS)

    Hohl, F.; Gary, S. P.

    1977-01-01

    The results of numerical integrations of the three-dimensional relativistic equations of motion of electrons subject to given electric and magnetic fields are presented. Fields due to two different models are studied: (1) a circular distribution of current filaments, and (2) a uniform current distribution; both the collapse and the current reduction phases are studied in each model. Decreasing current in the uniform current model yields 100 keV electrons accelerated toward the anode and, as for earlier ion computations, provides general agreement with experimental results.

  17. Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation

    PubMed Central

    Kong, Danyang; Asplund, Christopher L.; Ling, Aiqing; Chee, Michael W.L.

    2015-01-01

    Study Objectives: Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Design: Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of temporal distribution on foreperiod and sequential effects. Setting: Sleep laboratory. Participants: There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Measurements and Results: Despite behavioral lapses and slower response times, sleep deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. Conclusions: The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. 
Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. Citation: Kong D, Asplund CL, Ling A, Chee MWL. Increased automaticity and altered temporal preparation following sleep deprivation. SLEEP 2015;38(8):1219–1227. PMID:25845689
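
    The foreperiod effect reflects the rising conditional probability (hazard) of an imminent stimulus when foreperiods are uniformly distributed, in contrast to the flat hazard of an exponential distribution; a sketch with illustrative foreperiod ranges (not the task's actual timing parameters):

```python
def uniform_hazard(t, a=1.0, b=9.0):
    """Hazard (density of the stimulus occurring now, given it has not yet
    occurred) for a foreperiod uniform on [a, b]: h(t) = 1 / (b - t)."""
    if t < a or t >= b:
        raise ValueError("hazard defined on [a, b)")
    return 1.0 / (b - t)

def exponential_hazard(t, lam=0.25):
    """An exponential (memoryless) foreperiod has constant hazard."""
    return lam

ts = (1.0, 3.0, 5.0, 7.0)
print([round(uniform_hazard(t), 3) for t in ts])   # rises as waiting time elapses
print([exponential_hazard(t) for t in ts])         # flat: no temporal cue
```

    The rising hazard under uniform foreperiods is what allows participants to increase preparation as the foreperiod elapses, the effect reported to survive sleep deprivation.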

  18. Indexing the relative abundance of age-0 white sturgeons in an impoundment of the lower Columbia River from highly skewed trawling data

    USGS Publications Warehouse

    Counihan, T.D.; Miller, Allen I.; Parsley, M.J.

    1999-01-01

    The development of recruitment monitoring programs for age-0 white sturgeons Acipenser transmontanus is complicated by the statistical properties of catch-per-unit-effort (CPUE) data. We found that age-0 CPUE distributions from bottom trawl surveys violated assumptions of statistical procedures based on normal probability theory. Further, no single data transformation uniformly satisfied these assumptions because CPUE distribution properties varied with the sample mean (μ(CPUE)). Given these analytic problems, we propose that an additional index of age-0 white sturgeon relative abundance, the proportion of positive tows (Ep), be used to estimate sample sizes before conducting age-0 recruitment surveys and to evaluate statistical hypothesis tests comparing the relative abundance of age-0 white sturgeons among years. Monte Carlo simulations indicated that Ep was consistently more precise than μ(CPUE), and because Ep is binomially rather than normally distributed, surveys can be planned and analyzed without violating the assumptions of procedures based on normal probability theory. However, we show that Ep may underestimate changes in relative abundance at high levels and confound our ability to quantify responses to management actions if relative abundance is consistently high. If data suggest that most samples will contain age-0 white sturgeons, estimators of relative abundance other than Ep should be considered. Because Ep may also obscure correlations to climatic and hydrologic variables if high abundance levels are present in time series data, we recommend μ(CPUE) be used to describe relations to environmental variables. The use of both Ep and μ(CPUE) will facilitate the evaluation of hypothesis tests comparing relative abundance levels and correlations to variables affecting age-0 recruitment. Estimated sample sizes for surveys should therefore be based on detecting predetermined differences in Ep, but data necessary to calculate μ(CPUE) should also be collected.
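
    Because Ep is a binomial proportion, pre-survey sample sizes can be computed with the standard normal-approximation formula for comparing two proportions; a sketch (the significance and power levels below are conventional choices, and the Ep values are illustrative, not from the paper):

```python
import math

def tows_needed(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Tows per year needed to detect a change in the proportion of positive
    tows from p1 to p2 (two-sided alpha = 0.05, power = 0.80), using the
    standard normal-approximation formula for two binomial proportions."""
    pbar = (p1 + p2) / 2.0
    num = (z_alpha * math.sqrt(2.0 * pbar * (1.0 - pbar))
           + z_beta * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(tows_needed(0.40, 0.20))   # a large drop in Ep needs a modest sample
print(tows_needed(0.40, 0.30))   # halving the effect size roughly quadruples n
```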

  19. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. 
Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on understanding the distributional characteristics of such uncertainty. Our approach provides a tool to improve decision making. © 2013 Society for Conservation Biology.
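
    The core idea (maximizing the probability that the outcome clears a threshold of acceptability, rather than the expected outcome) can be sketched for two management actions with independent, normally distributed returns; this is a simplified stand-in for the paper's matrix-model framework, and all numbers are illustrative:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_exceed(a, mu1, sd1, mu2, sd2, threshold):
    """P(a*X1 + (1-a)*X2 >= threshold) for independent normal outcomes X1, X2."""
    mu = a * mu1 + (1.0 - a) * mu2
    sd = math.sqrt((a * sd1) ** 2 + ((1.0 - a) * sd2) ** 2)
    return 1.0 - norm_cdf((threshold - mu) / sd)

def best_split(mu1, sd1, mu2, sd2, threshold, steps=1000):
    """Grid-search the budget fraction a given to action 1."""
    return max((i / steps for i in range(steps + 1)),
               key=lambda a: p_exceed(a, mu1, sd1, mu2, sd2, threshold))

# Action 1: high mean, high variance. Action 2: lower mean, low variance.
print(best_split(10.0, 6.0, 7.0, 1.0, threshold=5.0))  # low bar: mostly the safe action
print(best_split(10.0, 6.0, 7.0, 1.0, threshold=9.0))  # high bar: chase the risky action
```

    This reproduces the qualitative result above: low aspirations favor a diversified, safety-weighted allocation, while high aspirations push the whole budget toward the highest-potential action.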

  20. High level continuity for coordinate generation with precise controls

    NASA Technical Reports Server (NTRS)

    Eiseman, P. R.

    1982-01-01

    Coordinate generation techniques with precise local controls have been derived and analyzed for continuity requirements up to both the first and second derivatives, and have been projected to higher level continuity requirements from the established pattern. The desired local control precision was obtained when a family of coordinate surfaces could be uniformly distributed without a consequent creation of flat spots on the coordinate curves transverse to the family. Relative to the uniform distribution, the family could be redistributed from an a priori distribution function or from a solution adaptive approach, both without distortion from the underlying transformation which may be independently chosen to fit a nontrivial geometry and topology.

  1. It's a Holiday!!

    ERIC Educational Resources Information Center

    Ratliff, Michael I.; Mc Shane, Janet M.

    2008-01-01

    This article studies various holiday distributions, the most interesting one being Easter. Gauss' Easter algorithm and Microsoft Excel are used to determine that the Easter distribution can be closely approximated by the convolution of two well-known uniform distributions. (Contains 8 figures.)
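
    The Easter date itself is computable in closed form; the sketch below uses the anonymous Gregorian ("Meeus/Jones/Butcher") algorithm, a standard variant of the Gauss algorithm mentioned above:

```python
def gregorian_easter(year):
    """Month and day of Western Easter (anonymous Gregorian algorithm)."""
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-related term
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2008))   # (3, 23): March 23, 2008
```

    Tabulating this function over many years is one way to generate the empirical Easter distribution the article approximates by a convolution of uniforms.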

  2. Theoretical study of liquid droplet dispersion in a venturi scrubber.

    PubMed

    Fathikalajahi, J; Talaie, M R; Taheri, M

    1995-03-01

    The droplet concentration distribution in an atomizing scrubber was calculated based on droplet eddy diffusion by a three-dimensional dispersion model. This model is also capable of predicting the liquid flowing on the wall. The theoretical distribution of droplet concentration agrees well with experimental data given by Viswanathan et al. for droplet concentration distribution in a venturi-type scrubber. The results obtained by the model show a non-uniform distribution of drops over the cross section of the scrubber, as noted by the experimental data. While the maximum of droplet concentration distribution may depend on many operating parameters of the scrubber, the results of this study show that the highest uniformity of drop distribution will be reached when penetration length is approximately equal to one-fourth of the depth of the scrubber. The results of this study can be applied to evaluate the removal efficiency of a venturi scrubber.

  3. Uniform Si nano-dot fabrication using reconstructed structure of Si(110)

    NASA Astrophysics Data System (ADS)

    Yano, Masahiro; Uozumi, Yuki; Yasuda, Satoshi; Asaoka, Hidehito

    2018-06-01

    Si nano-dot (ND) formation on Si(110) is observed by means of a scanning tunneling microscope (STM). The initial Si-NDs are Si crystals that are continuous from the substrate and grow during the oxide layer desorption. The NDs fabricated on the flat surface of Si(110)-1 × 1 are surrounded by four types of facets with almost identical appearance probabilities. An increase in the size of the NDs increases the variety of their morphologies. In contrast, most Si-NDs fabricated on the straight-stepped surface of the Si(110)-16 × 2 reconstructed structure are surrounded by only a single type of facet, namely the Si(17,15,1)-2 × 1 plane. The appearance probability of the facet whose base line runs along the step of Si(110)-16 × 2 exceeds 75%. This finding provides a technique for fabricating structurally uniform Si-NDs by using the reconstructed structure of Si(110).

  4. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models. [probability density function

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1992-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_{j-1} + γ_{j+1}. A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  5. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies, load is taken as constant. But load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
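
    The sampling step can be sketched for a toy two-bus feeder; the line constants, load means, and standard deviations below are illustrative assumptions, and a constant-power approximation stands in for the paper's ZIP load model:

```python
import random
import statistics

random.seed(7)

def two_bus_voltage(p_pu, q_pu, v1=1.0, r=0.02, x=0.04):
    """Approximate receiving-end voltage (per unit) for a toy two-bus feeder
    using the linearized drop V2 ~= V1 - (P*R + Q*X)/V1."""
    return v1 - (p_pu * r + q_pu * x) / v1

# Probable realistic load: active/reactive demand drawn from mean and std dev,
# one deterministic solve per Monte Carlo sample.
samples = [two_bus_voltage(random.gauss(1.0, 0.15), random.gauss(0.5, 0.08))
           for _ in range(5000)]
print(round(statistics.mean(samples), 4), round(statistics.stdev(samples), 4))
```

    The empirical mean and spread of the sampled voltages are exactly the kind of probabilistic solution the abstract describes reconstructing from repeated deterministic runs.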

  6. Probability Distribution of Turbulent Kinetic Energy Dissipation Rate in Ocean: Observations and Approximations

    NASA Astrophysics Data System (ADS)

    Lozovatsky, I.; Fernando, H. J. S.; Planella-Morato, J.; Liu, Zhiyu; Lee, J.-H.; Jinadasa, S. U. P.

    2017-10-01

    The probability distribution of turbulent kinetic energy dissipation rate in the stratified ocean usually deviates from the classic lognormal distribution that has been formulated for and often observed in unstratified homogeneous layers of atmospheric and oceanic turbulence. Our measurements of vertical profiles of micro-scale shear, collected in the East China Sea, northern Bay of Bengal, to the south and east of Sri Lanka, and in the Gulf Stream region, show that the probability distributions of the dissipation rate ε̃_r in the pycnoclines (r ≈ 1.4 m is the averaging scale) can be successfully modeled by the Burr (type XII) probability distribution. In weakly stratified boundary layers, the lognormal distribution of ε̃_r is preferable, although the Burr is an acceptable alternative. The skewness Sk_ε and the kurtosis K_ε of the dissipation rate appear to be well correlated in a wide range of Sk_ε and K_ε variability.
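
    Both candidate models for the dissipation-rate distribution have simple closed-form densities; a sketch that checks each integrates to one numerically (the shape and scale parameters are illustrative, not the fitted ocean values):

```python
import math

def burr12_pdf(x, c, k):
    """Burr type XII density: f(x) = c*k*x^(c-1) * (1 + x^c)^(-(k+1)), x > 0."""
    return c * k * x ** (c - 1) * (1.0 + x ** c) ** (-(k + 1)) if x > 0 else 0.0

def lognormal_pdf(x, mu, sigma):
    """Lognormal density, the classical model for unstratified layers."""
    if x <= 0:
        return 0.0
    return (math.exp(-((math.log(x) - mu) ** 2) / (2.0 * sigma ** 2))
            / (x * sigma * math.sqrt(2.0 * math.pi)))

# Numerical normalization check on (0, 50] with a fine grid.
h = 0.001
xs = [i * h for i in range(1, 50001)]
print(round(h * sum(burr12_pdf(x, 2.0, 3.0) for x in xs), 3))      # ~1
print(round(h * sum(lognormal_pdf(x, -1.0, 0.5) for x in xs), 3))  # ~1
```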

  7. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    NASA Astrophysics Data System (ADS)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests. The area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase the diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results, taking the dependence structure into account using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations. NPI uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a well-known statistical concept for modelling dependence of random variables: it is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density with a parametric method, maximum likelihood estimation (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
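
    The defining property of a copula (uniform marginals carrying a separate dependence structure) can be sketched by sampling from a bivariate Gaussian copula; the correlation parameter below is illustrative, and the paper's NPI framework and MLE fitting step are not reproduced:

```python
import math
import random

random.seed(3)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pair(rho):
    """One draw (u, v) from a bivariate Gaussian copula: each margin is
    uniform on (0, 1), while rho controls the dependence between them."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
    return norm_cdf(z1), norm_cdf(z2)

pairs = [gaussian_copula_pair(0.8) for _ in range(2000)]
mean_u = sum(u for u, _ in pairs) / len(pairs)
print(round(mean_u, 3))   # margin behaves like a uniform on (0, 1)
```

    Feeding each uniform margin through an inverse CDF would attach arbitrary marginal distributions to the same dependence structure, which is the separation the abstract describes.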

  8. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter and depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns, and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random uniform uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution has been proposed. We applied this modified centroid method to the aeromagnetic data of the central Indian region and selected 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are found to be as shallow as the middle crust in the southwest Deccan trap and probably deeper than the Moho in the Chhatisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS indicates the complex nature of the Indian crust.
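
    The final step of a centroid method combines two spectrally derived depths; a sketch, assuming (as in standard centroid formulations) that the depth to top z_t and the centroid depth z_0 have already been read from slopes of the radially averaged power spectrum, with any fractal correction applied at that earlier spectral stage:

```python
def dbms_centroid(z_top_km, z_centroid_km):
    """Depth to the bottom of magnetic sources from the centroid relation
    z_b = 2*z_0 - z_t, where z_0 is the centroid depth of the magnetic
    layer and z_t the depth to its top (both in km)."""
    return 2.0 * z_centroid_km - z_top_km

# Illustrative depths, not values from the Central India blocks.
print(dbms_centroid(5.0, 20.0))   # 35.0 km
```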

  9. Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials

    DTIC Science & Technology

    1978-03-01

    for the risk of rupture for a unidirectionally laminated composite subjected to pure bending. This equation can be simplified further by use of... EVALUATION OF THE THREE PARAMETER WEIBULL DISTRIBUTION FUNCTION FOR PREDICTING FRACTURE PROBABILITY IN COMPOSITE MATERIALS. THESIS / AFIT/GAE...
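
    The three-parameter Weibull fracture probability named in the title has the standard form F(s) = 1 - exp(-((s - s_u)/s_0)^m); a sketch with illustrative shape, scale, and threshold values, not fitted composite data:

```python
import math

def weibull3_fracture_prob(stress, shape_m, scale_s0, location_su):
    """Three-parameter Weibull CDF: probability of fracture at or below
    `stress`. No failures occur below the location (threshold) stress s_u."""
    if stress <= location_su:
        return 0.0
    return 1.0 - math.exp(-(((stress - location_su) / scale_s0) ** shape_m))

# Illustrative parameters: shape m = 8, scale 300 MPa, threshold 350 MPa.
for s in (200.0, 400.0, 600.0, 800.0):
    print(s, round(weibull3_fracture_prob(s, 8.0, 300.0, 350.0), 4))
```

    The location parameter is what distinguishes the three-parameter form from the two-parameter Weibull: it encodes a stress below which fracture probability is exactly zero.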

  10. A recirculation aerosol wind tunnel for evaluating aerosol samplers and measuring particle penetration through protective clothing materials.

    PubMed

    Jaques, Peter A; Hsiao, Ta-Chih; Gao, Pengfei

    2011-08-01

    A recirculation aerosol wind tunnel was designed to maintain a uniform airflow and stable aerosol size distribution for evaluating aerosol sampler performance and determining particle penetration through protective clothing materials. The oval-shaped wind tunnel was designed to be small enough to fit onto a lab bench, to have optimized dimensions for uniformity in wind speed and particle size distributions, sufficient mixing for even distribution of particles, and minimum particle losses. Performance evaluation demonstrates a relatively high level of spatial uniformity, with a coefficient of variation of 1.5-6.2% for wind velocities between 0.4 and 2.8 m s(-1) and, in this range, 0.8-8.5% for particles between 50 and 450 nm. Aerosol concentration stabilized within the first 5-20 min, with a count median diameter of approximately 135 nm and a geometric standard deviation of 2.20. Negligible agglomerate growth and particle loss are suggested. The recirculation design appears to provide the unique features needed for our research.

  11. A method for improving the light intensity distribution in dental light-curing units.

    PubMed

    Arikawa, Hiroyuki; Takahashi, Hideo; Minesaki, Yoshito; Muraguchi, Kouichi; Matsuyama, Takashi; Kanie, Takahito; Ban, Seiji

    2011-01-01

    A method for improving the uniformity of the radiation light from dental light-curing units (LCUs), and the effect on the polymerization of light-activated composite resin are investigated. Quartz-tungsten halogen, plasma-arc, and light-emitting diode LCUs were used, and additional optical elements such as a mixing tube and diffusing screen were employed to reduce the inhomogeneity of the radiation light. The distribution of the light intensity from the light guide tip was measured across the guide tip, as well as the distribution of the surface hardness of the light-activated resin emitted with the LCUs. Although the additional optical elements caused 13.2-25.9% attenuation of the light intensity, the uniformity of the light intensity of the LCUs was significantly improved in the modified LCUs, and the uniformity of the surface hardness of the resin was also improved. Our results indicate that the addition of optical elements to the LCU may be a simple and effective method for reducing inhomogeneity in radiation light from the LCUs.

  12. Synthesis of uniformly distributed single- and double-sided zinc oxide (ZnO) nanocombs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altintas Yildirim, Ozlem; Liu, Yuzi; Petford-Long, Amanda K.

    Uniformly distributed single- and double-sided zinc oxide (ZnO) nanocomb structures have been prepared by a vapor-liquid-solid technique from a mixture of ZnO nanoparticles and graphene nanoplatelets. The ZnO seed nanoparticles were synthesized via a simple precipitation method. The structure of the ZnO nanocombs could easily be controlled by tuning the carrier-gas flow rate during growth. Higher flow rate resulted in the formation of uniformly distributed single-sided comb structures with nanonail-shaped teeth, as a result of the self-catalysis effect of the catalytically active Zn-terminated polar (0001) surface. Lower gas flow rate was favorable for production of double-sided comb structures with the two sets of teeth at an angle of approximately 110° to each other along the comb ribbon, which was attributed to the formation of a bicrystal nanocomb ribbon. Finally, the formation of such a double-sided structure with nanonail-shaped teeth has not previously been reported.

  13. Synthesis of uniformly distributed single- and double-sided zinc oxide (ZnO) nanocombs

    DOE PAGES

    Altintas Yildirim, Ozlem; Liu, Yuzi; Petford-Long, Amanda K.

    2015-08-21

    Uniformly distributed single- and double-sided zinc oxide (ZnO) nanocomb structures have been prepared by a vapor-liquid-solid technique from a mixture of ZnO nanoparticles and graphene nanoplatelets. The ZnO seed nanoparticles were synthesized via a simple precipitation method. The structure of the ZnO nanocombs could easily be controlled by tuning the carrier-gas flow rate during growth. Higher flow rate resulted in the formation of uniformly distributed single-sided comb structures with nanonail-shaped teeth, as a result of the self-catalysis effect of the catalytically active Zn-terminated polar (0001) surface. Lower gas flow rate was favorable for production of double-sided comb structures with the two sets of teeth at an angle of approximately 110° to each other along the comb ribbon, which was attributed to the formation of a bicrystal nanocomb ribbon. Finally, the formation of such a double-sided structure with nanonail-shaped teeth has not previously been reported.

  14. Redundancy and Reduction: Speakers Manage Syntactic Information Density

    ERIC Educational Resources Information Center

    Jaeger, T. Florian

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel…

  15. Diagnostic accuracy of a uniform research case definition for TBM in children: a prospective study.

    PubMed

    Solomons, R S; Visser, D H; Marais, B J; Schoeman, J F; van Furth, A M

    2016-07-01

    Bacteriological confirmation of tuberculous meningitis (TBM) is problematic, and rarely guides initial clinical management. A uniform TBM case definition has been proposed for research purposes. We prospectively enrolled patients aged 3 months to 13 years with meningitis confirmed using cerebrospinal fluid analysis at Tygerberg Hospital, Cape Town, South Africa. Criteria that differentiated TBM from other causes were explored and the accuracy of a probable TBM score assessed by comparing bacteriologically confirmed cases to 'non-TBM' controls. Of 139 meningitis patients, 79 were diagnosed with TBM (35 bacteriologically confirmed), 10 with bacterial meningitis and 50 with viral meningitis. Among those with bacteriologically confirmed TBM, 15 were Mycobacterium tuberculosis culture-positive and 20 were culture-negative but positive on GenoType® MTBDRplus or Xpert® MTB/RIF; 18 were positive on only a single commercial nucleic acid amplification test. A probable TBM score provided a sensitivity of 74% (95%CI 57-88) and a specificity of 97% (95%CI 86-99) compared to bacteriologically confirmed TBM. A probable TBM score demonstrated excellent specificity compared to bacteriological confirmation. However, 26% of children with TBM would be missed due to the limited accuracy of the case definition. Further prospective testing of an algorithm-based approach to TBM is advisable before recommendation for general clinical practice.
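    As a minimal illustration of how such accuracy figures are derived from a 2×2 confusion table (the cell counts below are hypothetical, chosen only to be consistent with the reported cohort sizes and percentages):

```python
def sens_spec(tp, fn, tn, fp):
    # Sensitivity: fraction of bacteriologically confirmed cases scored
    # as probable TBM. Specificity: fraction of non-TBM controls
    # correctly scored negative.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts: 35 confirmed TBM cases, 60 non-TBM controls;
# the exact cell split is an assumption matching the reported 74% / 97%.
sens, spec = sens_spec(tp=26, fn=9, tn=58, fp=2)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```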

  16. Improved high power/high frequency inductor

    NASA Technical Reports Server (NTRS)

    Mclyman, W. T. (Inventor)

    1990-01-01

    A toroidal core is mounted on an alignment disc having uniformly distributed circumferential notches or holes therein. Wire is then wound about the toroidal core in a uniform pattern defined by the notches or holes. Prior to winding, the wire may be placed within shrink tubing. The shrink tubing is then wound about the alignment disc and core and then heat-shrunk to positively retain the wire in the uniform position on the toroidal core.

  17. Improved Zirconia Oxygen-Separation Cell

    NASA Technical Reports Server (NTRS)

    Walsh, John V.; Zwissler, James G.

    1988-01-01

    Cell structure distributes feed gas more evenly for more efficient oxygen production. Multilayer cell structure containing passages, channels, tubes, and pores helps distribute pressure evenly over zirconia electrolytic membrane. Resulting more uniform pressure distribution expected to improve efficiency of oxygen production.

  18. On a neutral particle with permanent magnetic dipole moment in a magnetic medium

    NASA Astrophysics Data System (ADS)

    Bakke, K.; Salvador, C.

    2018-03-01

    We investigate quantum effects that stem from the interaction of the permanent magnetic dipole moment of a neutral particle with an electric field in a magnetic medium. We consider a long non-conducting cylinder that possesses a uniform distribution of electric charges and a non-uniform magnetization. We discuss the possibility of achieving this non-uniform magnetization from the experimental point of view. Moreover, due to this non-uniform magnetization, the permanent magnetic dipole moment of the neutral particle also interacts with a non-uniform magnetic field. This interaction gives rise to a linear scalar potential. We then show that bound-state solutions to the Schrödinger-Pauli equation can be achieved.

  19. The influence of patient positioning uncertainties in proton radiotherapy on proton range and dose distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebl, Jakob, E-mail: jakob.liebl@medaustron.at; Francis H. Burr Proton Therapy Center, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114; Department of Therapeutic Radiology and Oncology, Medical University of Graz, 8036 Graz

    2014-09-15

    Purpose: Proton radiotherapy allows radiation treatment delivery with high dose gradients. The nature of such dose distributions increases the influence of patient positioning uncertainties on their fidelity when compared to photon radiotherapy. The present work quantitatively analyzes the influence of setup uncertainties on proton range and dose distributions. Methods: Thirty-eight clinical passive scattering treatment fields for small lesions in the head were studied. Dose distributions for shifted and rotated patient positions were Monte Carlo-simulated. Proton range uncertainties at the 50%- and 90%-dose falloff position were calculated considering 18 arbitrary combinations of maximal patient position shifts and rotations for two patient positioning methods. Normal tissue complication probabilities (NTCPs), equivalent uniform doses (EUDs), and tumor control probabilities (TCPs) were studied for organs at risk (OARs) and target volumes of eight patients. Results: The authors identified a median 1σ proton range uncertainty at the 50%-dose falloff of 2.8 mm for anatomy-based patient positioning and 1.6 mm for fiducial-based patient positioning, as well as 7.2 and 5.8 mm for the 90%-dose falloff position, respectively. These range uncertainties were correlated with heterogeneity indices (HIs) calculated for each treatment field (38% < R2 < 50%). An NTCP increase of more than 10% (absolute) was observed for less than 2.9% (anatomy-based positioning) and 1.2% (fiducial-based positioning) of the studied OARs and patient shifts. For target volumes, TCP decreases of more than 10% (absolute) occurred in less than 2.2% of the considered treatment scenarios for anatomy-based patient positioning and were nonexistent for fiducial-based patient positioning. EUD changes for target volumes were up to 35% (anatomy-based positioning) and 16% (fiducial-based positioning).
Conclusions: The influence of patient positioning uncertainties on proton range in therapy of small lesions in the human brain, as well as on target and OAR dosimetry, was studied. Observed range uncertainties were correlated with HIs. The clinical practice of using multiple fields with smeared compensators while avoiding distal OAR sparing is considered to be safe.
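    The EUD endpoint reported above is commonly computed as Niemierko's generalized EUD; a minimal sketch, assuming a differential DVH and the standard power-law form (the bin values and the parameter a below are illustrative, not from the study):

```python
def generalized_eud(doses, volumes, a):
    """Niemierko-style generalized EUD from a differential DVH.

    doses   -- representative dose of each DVH bin (Gy)
    volumes -- fractional volume of each bin (must sum to 1)
    a       -- tissue-specific parameter: a >> 1 emphasizes hot spots
               (serial OARs), a < 0 emphasizes cold spots (tumors),
               a = 1 reduces to the mean dose.
    """
    assert abs(sum(volumes) - 1.0) < 1e-9
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# Hypothetical 3-bin DVH.
doses, volumes = [60.0, 65.0, 70.0], [0.2, 0.5, 0.3]
print(generalized_eud(doses, volumes, a=1))   # the mean dose
print(generalized_eud(doses, volumes, a=12))  # weighted toward hot spots
```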

  20. Predictive Modeling and Mapping of Fish Distributions in Small Streams of the Canadian Rocky Mountain Foothills

    NASA Astrophysics Data System (ADS)

    McCleary, R. J.; Hassan, M. A.

    2006-12-01

    An automated procedure was developed to model spatial fish distributions within small streams in the Foothills of Alberta. Native fish populations and their habitats are susceptible to impacts arising from both industrial forestry and rapid development of petroleum resources in the region. Knowledge of fish distributions and of the effects of industrial activities on their habitats is required to help conserve native fish populations. Resource selection function (RSF) models were used to explain presence/absence of fish in small streams. Target species were bull trout, rainbow trout, and non-native brook trout. Using GIS, the drainage network was divided into reaches with uniform slope and drainage area, and polygons were created for each reach. Predictor variables described stream size, stream energy, climate, and land use. We identified a set of candidate models and selected the best model using a standard Akaike Information Criterion approach. The best models were validated with two external data sets. Drainage area and basin slope parameters were included in all best models. This finding emphasizes the importance of controlling for the energy dimension at the basin scale in investigations of the effects of land use on aquatic resources in this transitional landscape between the mountains and plains. The best model for bull trout indicated a relation between the presence of artificial migration barriers in downstream areas and the extirpation of the species from headwater reaches. We produced reach-scale maps by species and summarized this information within all small catchments across the 12,000 km² study area. These maps included three categories based on the predicted probability of capture for individual reaches. The high-probability category achieved 78% accuracy in predicting both fish-present and fish-absent reaches.
Basin-scale maps highlight specific watersheds likely to support both native bull trout and invasive brook trout, while reach-scale maps indicate specific reaches where interactions between these two species are likely to occur. With regional calibration, this automated modeling and mapping procedure could be applied in headwater catchments throughout the Rocky Mountain Foothills and in other areas where sporadic waterfalls or other natural migration barriers are not an important feature limiting fish distribution.
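    The AIC-based selection step can be sketched in a few lines; the candidate models, log-likelihoods, and parameter counts below are hypothetical placeholders, not values from the study:

```python
def aic(log_likelihood, k):
    # Akaike Information Criterion: 2k - 2 ln(L); lower is better.
    return 2 * k - 2 * log_likelihood

# Hypothetical candidate RSF models:
# (name, maximized log-likelihood, number of fitted parameters)
candidates = [
    ("drainage_area",             -210.4, 2),
    ("drainage_area+basin_slope", -198.7, 3),
    ("full_landuse",              -197.9, 6),
]
scored = sorted((aic(ll, k), name) for name, ll, k in candidates)
best_aic, best_name = scored[0]
# AIC penalizes extra parameters, so the richer "full_landuse" model can
# lose to a simpler one despite its slightly higher likelihood.
print(best_name, round(best_aic, 1))
```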

  1. Moment Analysis Characterizing Water Flow in Repellent Soils from On- and Sub-Surface Point Sources

    NASA Astrophysics Data System (ADS)

    Xiong, Yunwu; Furman, Alex; Wallach, Rony

    2010-05-01

    Water repellency has a significant impact on water flow patterns in the soil profile. Flow tends to become unstable in such soils, which affects water availability to plants and subsurface hydrology. In this paper, water flow in repellent soils was studied experimentally using the light reflection method. The transient 2D moisture profiles were monitored by CCD camera for tested soils packed in a transparent flow chamber. Water infiltration experiments and subsequent redistribution from on-surface and subsurface point sources with different flow rates were conducted for two soils of different repellency degrees as well as for a wettable soil. We used spatio-statistical analysis (moments) to characterize the flow patterns. The zeroth moment is related to the total volume of water inside the moisture plume, while the first and second moments correspond to the center of mass and the spatial variances of the moisture plume, respectively. The experimental results demonstrate that both the general shape and size of the wetting plume and the moisture distribution within the plume for the repellent soils are significantly different from those for the wettable soil. The wetting plume of the repellent soils is smaller, narrower, and longer (finger-like) than that of the wettable soil, which tended to roundness. Compared to the wettable soil, where the soil water content decreases radially from the source, the moisture content in the water-repellent soils is higher, relatively uniform horizontally, and gradually increases with depth (saturation overshoot), indicating that flow tends to become unstable. Ellipses, defined around the mass center and whose semi-axes represent a given number of spatial variances, were successfully used to simulate the spatial and temporal variation of the moisture distribution in the soil profiles. Cumulative probability functions were defined for the water enclosed in these ellipses.
Practically identical cumulative probability functions (beta distribution) were obtained for all soils, source types, and flow rates. Further, the same distributions were obtained for the infiltration and redistribution processes. This attractive result demonstrates the capability and advantage of the moment analysis method.
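    The moment analysis itself is compact; a minimal sketch on a toy gridded moisture field (the grid coordinates and water contents are invented for illustration):

```python
def plume_moments(theta):
    """Spatial moments of a 2D moisture plume.

    theta -- dict mapping (x, z) grid points to water content.
    Returns total water M0 (zeroth moment), center of mass (xc, zc)
    (first moments), and variances (sxx, szz) (second central moments).
    """
    m0 = sum(theta.values())
    xc = sum(x * w for (x, z), w in theta.items()) / m0
    zc = sum(z * w for (x, z), w in theta.items()) / m0
    sxx = sum((x - xc) ** 2 * w for (x, z), w in theta.items()) / m0
    szz = sum((z - zc) ** 2 * w for (x, z), w in theta.items()) / m0
    return m0, (xc, zc), (sxx, szz)

# Toy plume: elongated with depth (finger-like), as in the repellent soils.
theta = {(0, 0): 1.0, (0, 1): 1.0, (0, 2): 1.0, (1, 1): 0.5, (-1, 1): 0.5}
print(plume_moments(theta))
```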

  2. The influence of patient positioning uncertainties in proton radiotherapy on proton range and dose distributions

    PubMed Central

    Liebl, Jakob; Paganetti, Harald; Zhu, Mingyao; Winey, Brian A.

    2014-01-01

    Purpose: Proton radiotherapy allows radiation treatment delivery with high dose gradients. The nature of such dose distributions increases the influence of patient positioning uncertainties on their fidelity when compared to photon radiotherapy. The present work quantitatively analyzes the influence of setup uncertainties on proton range and dose distributions. Methods: Thirty-eight clinical passive scattering treatment fields for small lesions in the head were studied. Dose distributions for shifted and rotated patient positions were Monte Carlo-simulated. Proton range uncertainties at the 50%- and 90%-dose falloff position were calculated considering 18 arbitrary combinations of maximal patient position shifts and rotations for two patient positioning methods. Normal tissue complication probabilities (NTCPs), equivalent uniform doses (EUDs), and tumor control probabilities (TCPs) were studied for organs at risk (OARs) and target volumes of eight patients. Results: The authors identified a median 1σ proton range uncertainty at the 50%-dose falloff of 2.8 mm for anatomy-based patient positioning and 1.6 mm for fiducial-based patient positioning as well as 7.2 and 5.8 mm for the 90%-dose falloff position, respectively. These range uncertainties were correlated to heterogeneity indices (HIs) calculated for each treatment field (38% < R2 < 50%). A NTCP increase of more than 10% (absolute) was observed for less than 2.9% (anatomy-based positioning) and 1.2% (fiducial-based positioning) of the studied OARs and patient shifts. For target volumes TCP decreases by more than 10% (absolute) occurred in less than 2.2% of the considered treatment scenarios for anatomy-based patient positioning and were nonexistent for fiducial-based patient positioning. EUD changes for target volumes were up to 35% (anatomy-based positioning) and 16% (fiducial-based positioning). 
Conclusions: The influence of patient positioning uncertainties on proton range in therapy of small lesions in the human brain as well as target and OAR dosimetry were studied. Observed range uncertainties were correlated with HIs. The clinical practice of using multiple fields with smeared compensators while avoiding distal OAR sparing is considered to be safe. PMID:25186386

  3. Dosimetric treatment course simulation based on a statistical model of deformable organ motion

    NASA Astrophysics Data System (ADS)

    Söhn, M.; Sobotta, B.; Alber, M.

    2012-06-01

    We present a method of modeling dosimetric consequences of organ deformation and correlated motion of adjacent organ structures in radiotherapy. Based on a few organ geometry samples and the respective deformation fields as determined by deformable registration, principal component analysis (PCA) is used to create a low-dimensional parametric statistical organ deformation model (Söhn et al 2005 Phys. Med. Biol. 50 5893-908). PCA determines the most important geometric variability in terms of eigenmodes, which represent 3D vector fields of correlated organ deformations around the mean geometry. Weighted sums of a few dominating eigenmodes can be used to simulate synthetic geometries, which are statistically meaningful inter- and extrapolations of the input geometries, and predict their probability of occurrence. We present the use of PCA as a versatile treatment simulation tool, which allows comprehensive dosimetric assessment of the detrimental effects that deformable geometric uncertainties can have on a planned dose distribution. For this, a set of random synthetic geometries is generated by a PCA model for each simulated treatment course, and the dose of a given treatment plan is accumulated in the moving tissue elements via dose warping. This enables the calculation of average voxel doses, local dose variability, dose-volume histogram uncertainties, marginal as well as joint probability distributions of organ equivalent uniform doses and thus of TCP and NTCP, and other dosimetric and biologic endpoints. The method is applied to the example of deformable motion of prostate/bladder/rectum in prostate IMRT. Applications include dosimetric assessment of the adequacy of margin recipes, adaptation schemes, etc, as well as prospective ‘virtual’ evaluation of the possible benefits of new radiotherapy schemes.
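    The core of the PCA model can be sketched without dependencies; the power-iteration routine and two-coordinate toy geometries below are illustrative stand-ins for a full eigenmode decomposition of real organ surfaces:

```python
import random

def pca_mode(samples, iters=300, seed=1):
    """Leading PCA eigenmode of a set of geometry samples.

    Each sample is a flat vector of stacked surface-point coordinates.
    Power iteration on the sample covariance finds the dominant mode.
    Returns (mean geometry, unit eigenmode, std deviation along it).
    """
    rng = random.Random(seed)
    n, d = len(samples), len(samples[0])
    mean = [sum(col) / n for col in zip(*samples)]
    dev = [[s[j] - mean[j] for j in range(d)] for s in samples]

    v = [rng.random() for _ in range(d)]
    for _ in range(iters):
        # cv = C v, with C = (1/n) * sum_i dev_i dev_i^T
        cv = [0.0] * d
        for row in dev:
            proj = sum(r * x for r, x in zip(row, v)) / n
            for j in range(d):
                cv[j] += proj * row[j]
        norm = sum(x * x for x in cv) ** 0.5
        v = [x / norm for x in cv]

    # Variance along the mode (Rayleigh quotient).
    var = sum(sum(r * x for r, x in zip(row, v)) ** 2 for row in dev) / n
    return mean, v, var ** 0.5

def synthesize(mean, mode, sigma, c):
    # Synthetic geometry: mean + c standard deviations along the eigenmode.
    return [m + c * sigma * e for m, e in zip(mean, mode)]

# Toy example: four 2-coordinate "geometries" varying along one direction.
samples = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
mean, mode, sigma = pca_mode(samples)
print(synthesize(mean, mode, sigma, c=1.0))
```

In the full model, several such modes are kept and the weights c are drawn at random for each simulated treatment fraction.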

  4. Parameterization of backbone flexibility in a coarse-grained force field for proteins (COFFDROP) derived from all-atom explicit-solvent molecular dynamics simulations of all possible two-residue peptides

    PubMed Central

    Frembgen-Kesner, Tamara; Andrews, Casey T.; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A.; Jain, Aakash; Olayiwola, Oluwatoni; Weishaar, Mitch R.; Elcock, Adrian H.

    2015-01-01

    Recently, we reported the parameterization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs, and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (R_hydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downwards in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multi-domain proteins connected by flexible linkers. PMID:26574429
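    The IBI step referred to above is typically the update U_{n+1}(x) = U_n(x) + kT ln[P_n(x)/P_target(x)]; a minimal sketch on a tabulated potential (the grid, units, and kT value are assumptions for illustration):

```python
import math

KT = 0.593  # kcal/mol at ~298 K (assumed units)

def ibi_update(u, p_sim, p_target):
    """One iterative Boltzmann inversion step for a tabulated potential.

    u        -- current CG potential on a grid (angle, dihedral, or distance)
    p_sim    -- distribution produced by simulating with u
    p_target -- target distribution from all-atom MD
    Over-sampled bins (p_sim > p_target) get a more repulsive potential.
    """
    return [ui + KT * math.log(ps / pt)
            for ui, ps, pt in zip(u, p_sim, p_target)]

# If the simulated distribution already matches the target, the potential
# is a fixed point of the update.
u0 = [0.1, -0.2, 0.3]
p = [0.2, 0.5, 0.3]
print(ibi_update(u0, p, p))
```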

  5. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.
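    In the notation assumed here (state space Ω, linear observable L, macroscopic variable m), the singular potential of the Ball-Majumdar type discussed above can be written as

```latex
\[
  \psi(m) \;=\; \min\Big\{\, \int_{\Omega} \rho \ln \rho \,\mathrm{d}x \;:\;
  \rho \ge 0,\; \int_{\Omega} \rho \,\mathrm{d}x = 1,\;
  \int_{\Omega} L\,\rho \,\mathrm{d}x = m \,\Big\},
\]
```

i.e. minus the maximum entropy attainable among probability densities on Ω whose statistical average of L equals m; ψ is finite exactly on the set of admissible moments and blows up as m approaches its boundary.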

  6. Dosimetric treatment course simulation based on a statistical model of deformable organ motion.

    PubMed

    Söhn, M; Sobotta, B; Alber, M

    2012-06-21

    We present a method of modeling dosimetric consequences of organ deformation and correlated motion of adjacent organ structures in radiotherapy. Based on a few organ geometry samples and the respective deformation fields as determined by deformable registration, principal component analysis (PCA) is used to create a low-dimensional parametric statistical organ deformation model (Söhn et al 2005 Phys. Med. Biol. 50 5893-908). PCA determines the most important geometric variability in terms of eigenmodes, which represent 3D vector fields of correlated organ deformations around the mean geometry. Weighted sums of a few dominating eigenmodes can be used to simulate synthetic geometries, which are statistically meaningful inter- and extrapolations of the input geometries, and predict their probability of occurrence. We present the use of PCA as a versatile treatment simulation tool, which allows comprehensive dosimetric assessment of the detrimental effects that deformable geometric uncertainties can have on a planned dose distribution. For this, a set of random synthetic geometries is generated by a PCA model for each simulated treatment course, and the dose of a given treatment plan is accumulated in the moving tissue elements via dose warping. This enables the calculation of average voxel doses, local dose variability, dose-volume histogram uncertainties, marginal as well as joint probability distributions of organ equivalent uniform doses and thus of TCP and NTCP, and other dosimetric and biologic endpoints. The method is applied to the example of deformable motion of prostate/bladder/rectum in prostate IMRT. Applications include dosimetric assessment of the adequacy of margin recipes, adaptation schemes, etc, as well as prospective 'virtual' evaluation of the possible benefits of new radiotherapy schemes.

  7. Luminescence imaging of water during uniform-field irradiation by spot scanning proton beams

    NASA Astrophysics Data System (ADS)

    Komori, Masataka; Sekihara, Eri; Yabe, Takuya; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi

    2018-06-01

    Luminescence has been observed during pencil-beam proton irradiation of a water phantom, and the range can be estimated from the luminescence images. However, it was not yet clear whether luminescence imaging can be applied to uniform fields produced by spot-scanning proton-beam irradiation. For this purpose, imaging was conducted for uniform fields with a spread-out Bragg peak (SOBP) made by spot-scanning proton beams. We designed six types of uniform fields with different ranges, SOBP widths, and irradiation fields. The designed fields were delivered to a water phantom, and a cooled charge-coupled device camera was used to measure the luminescence image during irradiation. We estimated the ranges, field widths, and luminescence intensities from the luminescence images and compared them with the dose distribution calculated by a treatment planning system. For all types of uniform fields, we obtained clear luminescence images showing the SOBPs. The ranges and field widths evaluated from the luminescence were consistent with those of the calculated dose distribution, within differences of ‑4 mm and ‑11 mm, respectively. Luminescence intensities were almost proportional to the SOBP widths perpendicular to the beam direction. Luminescence imaging can thus be applied to uniform fields made by spot-scanning proton-beam irradiation, and the ranges and widths of uniform fields with SOBP can be estimated from the images. Luminescence imaging is promising for range and field-width estimation in proton therapy.

  8. Aneurysm permeability following coil embolization: packing density and coil distribution.

    PubMed

    Chueh, Ju-Yu; Vedantham, Srinivasan; Wakhloo, Ajay K; Carniato, Sarena L; Puri, Ajit S; Bzura, Conrad; Coffin, Spencer; Bogdanov, Alexei A; Gounis, Matthew J

    2015-09-01

    Rates of durable aneurysm occlusion following coil embolization vary widely, and a better understanding of coil mass mechanics is desired. The goal of this study is to evaluate the impact of packing density and coil uniformity on aneurysm permeability. Aneurysm models were coiled using either Guglielmi detachable coils or Target coils. Permeability was assessed by taking the ratio of microspheres passing through the coil mass to those in the working fluid. Aneurysms containing coil masses were sectioned for image analysis to determine surface area fraction and coil uniformity. All aneurysms were coiled to a packing density of at least 27%. Packing density, surface area fraction of the dome and neck, and uniformity of the dome were significantly correlated (p<0.05). Hence, multivariate principal components-based partial least squares regression models were used to predict permeability. Similar loading vectors were obtained for packing and uniformity measures. Coil mass permeability was modeled better with the inclusion of packing and uniformity measures of the dome (r² = 0.73) than with packing density alone (r² = 0.45). The analysis indicates the importance of including a uniformity measure for coil distribution in the dome along with packing measures. A densely packed aneurysm with a high degree of coil mass uniformity will reduce permeability.

  9. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
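    A concrete instance of the dependence-function construction is the logistic (Gumbel-Hougaard) model; a minimal sketch with standard Gumbel marginals (the parameter values are illustrative, not from the report):

```python
import math

def gumbel_cdf(x, mu=0.0, beta=1.0):
    # Marginal Gumbel (type I extreme value) distribution function.
    return math.exp(-math.exp(-(x - mu) / beta))

def bivariate_logistic_cdf(x, y, alpha):
    """Bivariate extreme value CDF with the logistic dependence model.

    alpha in (0, 1]: alpha = 1 gives independent marginals; alpha -> 0
    approaches complete dependence. Standard Gumbel marginals assumed.
    """
    u = -math.log(gumbel_cdf(x))
    v = -math.log(gumbel_cdf(y))
    return math.exp(-((u ** (1 / alpha) + v ** (1 / alpha)) ** alpha))

print(bivariate_logistic_cdf(0.5, 0.5, alpha=0.7))
```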

  10. Non-axisymmetric flow characteristics in centrifugal compressor

    NASA Astrophysics Data System (ADS)

    Wang, Leilei; Lao, Dazhong; Liu, Yixiong; Yang, Ce

    2015-06-01

    The flow field distribution in a centrifugal compressor is significantly affected by the non-axisymmetric geometry of the volute. Experimental and numerical simulation methods were adopted in this work to study the compressor flow field distribution under different flow conditions. The results show that the pressure distribution in the volute is characterized by circumferential non-uniformity, and that the pressure fluctuation in the high static pressure zone propagates upstream, resulting in non-axisymmetric flow inside the compressor. The non-uniformity of the pressure distribution is higher in the large flow condition than in the small flow condition, and its effect on the upstream flow field is correspondingly stronger. Additionally, the non-uniform circumferential pressure distribution in the volute produces non-axisymmetric flow at the impeller outlet; the circumferential variation of the absolute flow angle at the impeller outlet also differs between flow conditions. Meanwhile, the non-axisymmetric flow characteristics within the impeller are reflected in the distribution of the mass flow: the high static pressure region of the volute corresponds to a decrease of mass flow in the upstream blade channel, while the low static pressure zone corresponds to an increase. In the small flow condition, the mass flow difference between blade channels is larger than in the large flow condition.

  11. Streamline similarity method for flow distributions and shock losses at the impeller inlet of the centrifugal pump

    NASA Astrophysics Data System (ADS)

    Zhang, Zh.

    2018-02-01

    An analytical method is presented that enables the non-uniform velocity and pressure distributions at the impeller inlet of a pump to be accurately computed. The analyses are based on potential flow theory and the geometrical similarity of the streamline distribution along the leading edge of the impeller blades; the method is thus called the streamline similarity method (SSM). The obtained geometrical form of the flow distribution is then simply described by the geometrical variable G(s) and the first structural constant G_I. As clearly demonstrated and validated by experiments, both the flow velocity and the pressure distributions at the impeller inlet are usually highly non-uniform. This knowledge is indispensable for impeller blade designs intended to fulfill the shockless inlet flow condition. By introducing the second structural constant G_II, the paper also presents a simple and accurate computation of the shock loss that occurs at the impeller inlet. The introduction of the two structural constants contributes greatly to the computational accuracy. All computations presented in this paper can also be applied to the non-uniform exit flow of a Francis turbine impeller for accurately computing the related mean values.

  12. Probability density function shape sensitivity in the statistical modeling of turbulent particle dispersion

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.
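As a minimal illustration of the pdf-shape sensitivity studied above (a sketch, not the paper's dispersion model), the following draws fluctuating-velocity samples from uniform and symmetric (isosceles) triangular pdfs on the same support and compares their spreads:

```python
import random
import statistics

random.seed(1)
N = 100_000
lo, hi = -1.0, 1.0

# Candidate pdfs for a velocity fluctuation: uniform vs. symmetric triangular,
# sharing the same support and mean (0 here).
uniform_samples = [random.uniform(lo, hi) for _ in range(N)]
triangle_samples = [random.triangular(lo, hi) for _ in range(N)]  # mode at centre

# Same mean, different variance:
# uniform: (hi-lo)^2/12 ~ 0.333, triangular: (hi-lo)^2/24 ~ 0.167.
print(statistics.mean(uniform_samples), statistics.pvariance(uniform_samples))
print(statistics.mean(triangle_samples), statistics.pvariance(triangle_samples))
```

With matched first moments, the pdf shape still changes the spread of the sampled fluctuations, which is the kind of sensitivity the dispersion statistics inherit.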

  13. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10^-4 for the exponential distribution and 2.3×10^-4 for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10^-4, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (≳5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes ≳5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate an annual probability of a large eruption of 4.6×10^-4. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate an annual probability of a large eruption in the next year of 1.4×10^-5.
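The exponential-distribution probabilities quoted above follow from a memoryless recurrence model; a minimal sketch (the mean recurrence intervals below are assumed for illustration, not the report's fitted values):

```python
import math

def annual_probability(mean_interval_years: float) -> float:
    """P(at least one event in the next year) under an exponential
    (memoryless) model of inter-event times."""
    rate = 1.0 / mean_interval_years
    return 1.0 - math.exp(-rate * 1.0)

# Illustrative mean recurrence intervals (assumed values):
for mu in (7000.0, 1500.0):
    print(f"mean interval {mu:>6.0f} yr -> annual probability {annual_probability(mu):.2e}")
```

For rates this small, 1 - exp(-λ) is essentially λ, so the annual probability is close to the reciprocal of the mean recurrence interval.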

  14. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. In the delayed super-critical case, however, the non-extinction probability does not give access to the wait time distribution; it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte Carlo calculations based on the algorithm presented in [7]. (authors)
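The generating-function machinery referenced above can be illustrated on a much simpler cousin of the master-equation problem: the extinction probability of a Galton-Watson branching chain is the smallest fixed point q = f(q) of the offspring probability generating function. A toy sketch (the offspring probabilities are illustrative, not CALIBAN data):

```python
def extinction_probability(p_offspring, tol=1e-12, max_iter=10_000):
    """Smallest fixed point q = f(q) of the offspring pgf f,
    found by iterating q_{n+1} = f(q_n) from q_0 = 0."""
    def pgf(s):
        return sum(p * s**k for k, p in enumerate(p_offspring))
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Toy supercritical chain: 0, 1 or 2 "neutrons" per event.
# Mean offspring = 0.3*1 + 0.5*2 = 1.3 > 1, so extinction probability < 1.
q = extinction_probability([0.2, 0.3, 0.5])
print(q)  # smaller root of 0.5*q^2 - 0.7*q + 0.2 = 0, i.e. q = 0.4
```

The non-extinction probability is then 1 - q; the paper's method extends this idea to time-dependent count distributions with delayed precursors.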

  15. Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension

    NASA Astrophysics Data System (ADS)

    Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek

    2018-04-01

    We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
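A Monte Carlo sketch of the model described above (with illustrative parameters): a 1D run-and-tumble particle with exponential run times reproduces the long-time diffusive Gaussian limit, with positional variance approaching 2*D_eff*t where D_eff = v0^2/(2*gamma):

```python
import random

random.seed(42)

def rtp_position(v0, gamma, t_total):
    """Final position of a 1D run-and-tumble particle: it moves at speed
    v0 and reverses direction at rate gamma (exponential run times)."""
    x, s, t = 0.0, random.choice((-1.0, 1.0)), 0.0
    while t < t_total:
        run = min(random.expovariate(gamma), t_total - t)
        x += s * v0 * run
        t += run
        s = -s
    return x

# At long times the RTP behaves diffusively with D_eff = v0**2 / (2*gamma),
# so the positional variance approaches 2*D_eff*t.
n, t_total = 20_000, 50.0
xs = [rtp_position(1.0, 1.0, t_total) for _ in range(n)]
var = sum(x * x for x in xs) / n
print(var)  # close to 2*(1/2)*50 = 50
```

The multi-modal intermediate-time structure noted in the abstract appears at times of order a few run lengths, before this diffusive limit sets in.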

  16. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    ERIC Educational Resources Information Center

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  17. Ion distribution in dry polyelectrolyte multilayers: a neutron reflectometry study

    DOE PAGES

    Ghoussoub, Yara E.; Zerball, Maximilian; Fares, Hadi M.; ...

    2018-02-09

    Counterions were found to be uniformly distributed in polycation-terminated films of poly(diallyldimethylammonium) and poly(styrenesulfonate) prepared on silicon wafers using layer-by-layer adsorption.

  18. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

    ,

    2008-01-01

California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).
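A rupture forecast of this kind converts an event rate into a time-window probability; under a simple Poisson assumption this is a one-liner (the rate below is illustrative, not a number from the UCERF 2 report):

```python
import math

def interval_probability(annual_rate: float, years: float) -> float:
    """Poisson model: P(one or more events in a window of given length)."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative only: an annual rate of 0.033 events/yr gives the
# probability of at least one event within a 30-year window.
print(round(interval_probability(0.033, 30.0), 3))
```

Note the window probability is not simply rate x years: for a 30-year window at 0.033/yr the naive product would exceed 0.99, while the Poisson value stays below it because multiple events in one window are counted once.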

  19. Probability distributions of the electroencephalogram envelope of preterm infants.

    PubMed

    Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro

    2015-06-01

    To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
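The lognormal fit used above can be sketched with a method-of-moments estimate on log-transformed envelope values; the samples below are synthetic stand-ins for EEG envelope data, not recordings:

```python
import math
import random
import statistics

random.seed(0)

# Synthetic "EEG envelope" samples: lognormal with shape sigma_true and
# log-scale mu_true (assumed values, not the paper's fitted parameters).
mu_true, sigma_true = 1.0, 0.5
envelope = [random.lognormvariate(mu_true, sigma_true) for _ in range(50_000)]

# For a lognormal, log(X) is Gaussian, so the mean/std of the logs
# recover the scale (mu) and shape (sigma) parameters directly.
logs = [math.log(v) for v in envelope]
mu_hat = statistics.mean(logs)
sigma_hat = statistics.pstdev(logs)
mode_hat = math.exp(mu_hat - sigma_hat**2)  # mode of a lognormal
print(mu_hat, sigma_hat, mode_hat)
```

The mode statistic computed this way is the quantity the authors found to vary linearly with post-conceptional age.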

  20. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be efficiently derived. I show that the maximum a posteriori (MAP) solution can be obtained using a non-negative least-squares algorithm in the singly truncated case, or the bounded-variable least-squares algorithm in the doubly truncated case, and that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the MCMC-based Bayesian approach, the marginal pdfs, means, variances and covariances are obtained independently of each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
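The non-negative least-squares step behind the MAP estimate can be sketched with a simple projected-gradient solver on a toy system; this is a stand-in for the NNLS/BVLS algorithms the abstract refers to, not the paper's implementation:

```python
def nnls_projected_gradient(A, b, iters=5000, lr=None):
    """Minimise ||Ax - b||^2 subject to x >= 0 by projected gradient
    descent (a simple stand-in for dedicated NNLS solvers)."""
    m, n = len(A), len(A[0])
    if lr is None:
        # crude safe step size from a bound on the curvature of ||Ax-b||^2
        lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]  # project onto x >= 0
    return x

# Toy example: the unconstrained least-squares solution has a negative
# component; the non-negativity constraint pins it to zero instead.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -0.5, 1.0]
print(nnls_projected_gradient(A, b))  # approximately [1.0, 0.0]
```

The clipping at zero is exactly the effect of the singly truncated Gaussian prior on the MAP: components that would go negative under the untruncated posterior end up on the constraint boundary.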

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higa, Kenneth; Zhao, Hui; Parkinson, Dilworth Y.

The internal structure of a porous electrode strongly influences battery performance. Understanding the dynamics of electrode slurry drying could aid in engineering electrodes with desired properties. For instance, one might monitor the dynamic, spatially-varying thickness near the edge of a slurry coating, as it should lead to non-uniform thickness of the dried film. This work examines the dynamic behavior of drying slurry drops consisting of SiOx and carbon black particles in a solution of carboxymethylcellulose and deionized water, as an experimental model of drying behavior near the edge of a slurry coating. An X-ray radiography-based procedure is developed to calculate the evolving spatial distribution of active material particles from images of the drying slurry drops. To the authors' knowledge, this study is the first to use radiography to investigate battery slurry drying, as well as the first to determine particle distributions from radiography images of drying suspensions. The dynamic results are consistent with tomography reconstructions of the static, fully-dried films. It is found that active material particles can rapidly become non-uniformly distributed within the drops. Heating can promote distribution uniformity, but seemingly must be applied very soon after slurry deposition. Higher slurry viscosity is found to strongly restrain particle redistribution.

  2. Facile synthesis of concentrated gold nanoparticles with low size-distribution in water: temperature and pH controls

    NASA Astrophysics Data System (ADS)

    Li, Chunfang; Li, Dongxiang; Wan, Gangqiang; Xu, Jie; Hou, Wanguo

    2011-07-01

The citrate reduction method for the synthesis of gold nanoparticles (GNPs) has well-known advantages but usually yields products with low nanoparticle concentration, which limits its application. Herein, we report a facile method to synthesize GNPs from concentrated chloroauric acid (2.5 mM) by adding sodium hydroxide and controlling the temperature. It was found that adding a proper amount of sodium hydroxide produces uniform, concentrated GNPs with a narrow size distribution; otherwise, broadly distributed nanoparticles or unstable colloids were obtained. A low reaction temperature helps to control the nanoparticle formation rate, and uniform GNPs can be obtained in the presence of optimized NaOH concentrations. The pH values of the obtained uniform GNPs were found to be very near neutral, and the pH influence on the particle size distribution may reveal different formation mechanisms of GNPs at high and low pH. Moreover, this modified synthesis method saves more than 90% of the energy in the heating step. Such an environmentally friendly synthesis method for gold nanoparticles may have great potential for large-scale manufacturing to meet commercial and industrial demand.

  3. IR-camera methods for automotive brake system studies

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Lee, Kwangjin

    1998-03-01

Automotive brake systems are energy conversion devices that convert kinetic energy into heat. Several mechanisms, mostly related to noise and vibration problems, can occur during brake operation and are often related to non-uniform temperature distribution on the brake disk. These problems are of significant cost to the industry and are a quality concern to automotive companies and brake system vendors. One such problem is thermo-elastic instability in brake systems. During the occurrence of these instabilities, several localized hot spots form around the circumferential direction of the brake disk. The temperature distribution and the time dependence of these hot spots, a critical factor in analyzing this problem and in developing a fundamental understanding of this phenomenon, were recorded. Other modes of non-uniform temperature distribution, including hot banding and extreme localized heating, were also observed. All of these modes were observed on automotive brake systems using a high-speed IR camera operating in snap-shot mode. The camera was synchronized with the rotation of the brake disk so that the time evolution of hot regions could be studied. This paper discusses the experimental approach in detail.

  4. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    PubMed Central

    Lin, Wei-Chun; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Chao, Pei-Ju

    2015-01-01

The aim was to develop logistic and probit models to analyse the electromyographic (EMG) equivalent uniform voltage (EUV) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. The surface EMG (sEMG) signal was obtained by an innovative device with electrodes over the forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow, reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ50 = 0.84 (CI: 0.78–0.90) for the logistic model and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for the probit model. When the EUV ≥ 153 mV, the DP of the patient is greater than 50%, and vice versa. The logistic and probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281
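Assuming the common log-logistic dose-response parameterisation (the paper's exact functional form may differ), the fitted TV50 and γ50 define a diseased-probability curve that crosses 50% at the threshold voltage:

```python
def logistic_dp(v_mV, tv50=153.0, gamma50=0.84):
    """Log-logistic diseased-probability curve: DP(TV50) = 0.5 and
    gamma50 is the normalised slope at that point. This is a common
    parameterisation, assumed here; not necessarily the paper's form."""
    return 1.0 / (1.0 + (tv50 / v_mV) ** (4.0 * gamma50))

print(logistic_dp(153.0))  # 0.5 exactly at the threshold voltage
print(logistic_dp(200.0) > 0.5, logistic_dp(100.0) < 0.5)
```

By construction, an EUV above TV50 gives DP > 50% and an EUV below it gives DP < 50%, matching the decision rule stated in the abstract.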

  5. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

Investment in infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in the area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure assumes that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does a watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions range from estimating trend and shift and removing them from early data (thereby forming a homogeneous data set) to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically based analysis produces a "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
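The method-of-moments step of a Log-Pearson Type III fit reduces to the mean, standard deviation, and skew of the log-transformed annual maxima; a sketch on synthetic flows (a stand-in for a gaged record, not real data):

```python
import math
import random
import statistics

random.seed(7)

# Synthetic annual-maximum flows standing in for an 80-year gaged record.
flows = [random.lognormvariate(8.0, 0.6) for _ in range(80)]

# Method-of-moments parameters of a Log-Pearson Type III fit: mean,
# standard deviation and (bias-corrected) skew of log10 annual maxima.
y = [math.log10(q) for q in flows]
mean_y = statistics.mean(y)
std_y = statistics.stdev(y)
n = len(y)
skew_y = (n / ((n - 1) * (n - 2))) * sum(((v - mean_y) / std_y) ** 3 for v in y)
print(mean_y, std_y, skew_y)
```

The skew estimate is the quantity most sensitive to short records, which is one reason U.S. practice blends the at-site skew with a regional value.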

  6. Marcus-Hush-Chidsey theory of electron transfer to and from species bound at a non-uniform electrode surface: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Henstridge, Martin C.; Batchelor-McAuley, Christopher; Gusmão, Rui; Compton, Richard G.

    2011-11-01

Two simple models of electrode surface inhomogeneity based on Marcus-Hush theory are considered: a distribution in formal potentials and a distribution in electron tunnelling distances. Cyclic voltammetry simulated using these models is compared with that simulated using Marcus-Hush theory for a flat, uniform and homogeneous electrode surface, with the two models of surface inhomogeneity yielding broadened peaks with decreased peak-currents. An edge-plane pyrolytic graphite electrode is covalently modified with ferrocene via 'click' chemistry and the resulting voltammetry compared with each of the three previously considered models. The distribution of formal potentials is seen to fit the experimental data most closely.
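The "distribution of formal potentials" idea can be sketched by averaging an idealised surface-confined voltammetric wave over a Gaussian spread of formal potentials, which lowers and broadens the peak; this is an illustrative sketch with assumed parameters, not the paper's Marcus-Hush simulation:

```python
import math

F_RT = 96485.0 / (8.314 * 298.0)  # F/RT at 298 K, in 1/V

def peak_shape(E, E0):
    """Idealised surface-confined voltammetric wave (dimensionless)."""
    th = F_RT * (E - E0)
    return math.exp(th) / (1.0 + math.exp(th)) ** 2

def dispersed_peak(E, sigma=0.1, n=2001):
    """Average the ideal wave over a Gaussian spread of formal
    potentials (sigma in volts): the surface-inhomogeneity model."""
    total, norm = 0.0, 0.0
    for i in range(n):
        e0 = -4 * sigma + 8 * sigma * i / (n - 1)
        w = math.exp(-e0 * e0 / (2 * sigma * sigma))
        total += w * peak_shape(E, e0)
        norm += w
    return total / norm

# Dispersion of formal potentials lowers the peak current and broadens
# the wave relative to the homogeneous-surface value of 0.25.
print(peak_shape(0.0, 0.0), dispersed_peak(0.0))
```

The same qualitative signature, broader waves with smaller peak currents, is what the abstract reports for the inhomogeneous-surface models.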

  7. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016
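The adolescent height/weight question above reduces, for a bivariate normal, to a one-dimensional conditional probability: conditioning on H = h shifts the mean of W and shrinks its variance. A sketch with assumed parameters (not the application's real dataset):

```python
import math

def normal_cdf(x, mean, std):
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def conditional_interval_prob(w_lo, w_hi, h, mu_h, sd_h, mu_w, sd_w, rho):
    """P(w_lo < W < w_hi | H = h) for bivariate normal (H, W): the
    conditional is normal with shifted mean and shrunk variance."""
    mu_c = mu_w + rho * sd_w * (h - mu_h) / sd_h
    sd_c = sd_w * math.sqrt(1.0 - rho ** 2)
    return normal_cdf(w_hi, mu_c, sd_c) - normal_cdf(w_lo, mu_c, sd_c)

# Illustrative height/weight parameters (assumed): height in inches,
# weight in pounds, correlation 0.6.
p = conditional_interval_prob(120, 140, h=65, mu_h=65, sd_h=3.5,
                              mu_w=130, sd_w=25, rho=0.6)
print(round(p, 3))  # chance an average-height adolescent weighs 120-140 lb
```

At average height the conditional mean equals the marginal mean, but the conditional standard deviation shrinks by the factor sqrt(1 - rho^2), which is what makes conditioning informative.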

  9. Uniform-related infection control practices of dental students

    PubMed Central

    Aljohani, Yazan; Almutadares, Mohammed; Alfaifi, Khalid; El Madhoun, Mona; Albahiti, Maysoon H; Al-Hazmi, Nadia

    2017-01-01

Background: Uniform-related infection control practices are sometimes overlooked and underemphasized. In Saudi Arabia, personal protective equipment must meet global standards for infection control, but the country's Islamic legislation also needs to be taken into account. Aim: To assess the uniform-related infection control practices of a group of dental students in a dental school in Saudi Arabia and compare the results with the existing literature on cross-contamination through uniforms in the dental field. Method: A questionnaire was formulated and distributed to dental students at King Abdulaziz University Faculty of Dentistry in Jeddah, Saudi Arabia, querying the students about their uniform-related infection control practices and their methods and frequency of laundering and sanitizing their uniforms, footwear, and name tags. Results: There is a significant difference between genders with regard to daily uniform habits. The frequency of uniform washing was below standard, and almost 30% of students were not aware of how their uniforms are washed. In addition, there is no consensus on a unified uniform for male and female students. Conclusion: Information on preventing cross-contamination through worn uniforms must be supplied, reinforced, and emphasized while taking into consideration the cultural needs of Saudi society. PMID:28490894

  10. Geology of the Teakettle Creek watersheds

    Treesearch

    Robert S. LaMotte

    1937-01-01

    The Teakettle Creek Experimental Watersheds lie for the most part on quartzites of probable Triassic age. However one of the triplicate drainages has a considerable acreage developed on weathered granodiorite. Topography is relatively uniform and lends itself to triplicate watershed studies. Locations for dams are suitable if certain engineering precautions...

  11. Stylized facts in internal rates of return on stock index and its derivative transactions

    NASA Astrophysics Data System (ADS)

    Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya

    2007-08-01

Universal features in stock markets and their derivative markets are studied by means of probability distributions of internal rates of return on buy-sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single-asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. The resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S&P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of the NIKKEI 225 and its futures market reveals an interesting difference in the behavior of the two probability distributions when a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum that results from noise-trader strategies, as opposed to the smooth distributions observed for fundamentalist strategies in single-encounter transactions, may be useful in deducing the type of investment strategy from the trading revenues of small portfolio investors.
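The internal rate of return used above, defined as continuous compound interest, is computed per buy-sell pair as r = ln(P_sell / P_buy) / Δt; a sketch with made-up prices (annualised on a 365-day convention):

```python
import math

def internal_rate(buy_price, sell_price, holding_days):
    """Continuously compounded internal rate of return of one
    buy-sell transaction pair, annualised (365-day convention)."""
    return math.log(sell_price / buy_price) / (holding_days / 365.0)

# Illustrative index round trips (made-up prices):
print(round(internal_rate(1500.0, 1545.0, 30), 4))  # 3% gain held 30 days
print(round(internal_rate(1500.0, 1485.0, 5), 4))   # 1% loss held 5 days
```

Because the holding time divides the log return, short holdings amplify the annualised rate, which is why the distributions in the paper depend on the minimal-duration threshold.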

  12. Probabilistic Reasoning for Robustness in Automated Planning

    NASA Technical Reports Server (NTRS)

    Schaffer, Steven; Clement, Bradley; Chien, Steve

    2007-01-01

A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as the times taken to perform tasks and the amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and the resources they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain the probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive the search toward planning low-risk actions. An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.
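The combination step described above can be approximated, ignoring the bounds on the Gaussians, by summing means and variances of the independent per-action distributions; a sketch of the resulting constraint-violation probability (illustrative numbers, not the planner's models):

```python
import math

def overuse_probability(uses, capacity):
    """Probability that total consumption exceeds capacity when each
    action's use is an independent Gaussian (mean, std) pair: the sum
    is Gaussian with summed means and summed variances (bounds ignored)."""
    mean = sum(m for m, _ in uses)
    std = math.sqrt(sum(s * s for _, s in uses))
    z = (capacity - mean) / std
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Three actions drawing on one resource (illustrative numbers):
uses = [(10.0, 2.0), (5.0, 1.0), (8.0, 3.0)]
print(round(overuse_probability(uses, capacity=30.0), 4))  # roughly 0.03
```

A planner can then score candidate plans by such conflict probabilities and prefer the low-risk ones, which is the key idea stated in the abstract.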

  13. Development of on-site PAFC stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hotta, K.; Matsumoto, Y.; Horiuchi, H.

    1996-12-31

PAFC (phosphoric acid fuel cell) technology has been researched for commercial use, and demonstration plants have been installed at various sites. However, PAFCs do not yet have sufficient stability, so more research and development will be required. In particular, the cell stack needs a proper state of the three-phase (liquid, gas and solid) interface, and maintaining this condition for a long time is technologically very difficult. In small cells with an electrode area of 100 cm^2, the gas flow and temperature distributions are uniform, but in large cells with an electrode area of 4000 cm^2, the temperature distributions are non-uniform. Such distributions shorten cell life, because they create hot spots and gas starvation in limited parts of the cell. We therefore inserted thermocouples into a short stack to measure three-dimensional temperature distributions and observed the effects of current density and gas utilization on temperature.

  14. Some limit theorems for ratios of order statistics from uniform random variables.

    PubMed

    Xu, Shou-Fang; Miao, Yu

    2017-01-01

    In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
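One classical fact behind such limit theorems: for two iid Uniform(0,1) draws, the ratio of the smaller to the larger order statistic is itself Uniform(0,1). A quick Monte Carlo check of this distributional identity:

```python
import random
import statistics

random.seed(3)

# Ratio of order statistics min(U,V)/max(U,V) for iid Uniform(0,1) pairs:
# this ratio is itself Uniform(0,1), so mean ~ 1/2 and variance ~ 1/12.
ratios = []
for _ in range(100_000):
    u, v = random.random(), random.random()
    ratios.append(min(u, v) / max(u, v))

print(statistics.mean(ratios), statistics.pvariance(ratios))
```

The identity follows from P(min/max ≤ r) = 2·P(U ≤ rV, U < V) = r for 0 ≤ r ≤ 1, and it is the kind of exact structure the paper's almost-sure limit theorems build on.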

  15. Tensile testing grips ensure uniform loading of bimetal tubing specimens

    NASA Technical Reports Server (NTRS)

    Driscol, S. D.; Hunt, V.

    1968-01-01

    Tensile testing grip uniformly distributes stresses to the internal and external tube of bimetal tubing specimens. The grip comprises a slotted external tube grip, a slotted internal tube grip, a machine bolt and nut, an internal grip expansion cone, and an external grip compression nut.

  16. The mixing of rain with near-surface water

    Treesearch

    Dennis F. Houk

    1976-01-01

    Rain experiments were run with various temperature differences between the warm rain and the cool receiving water. The rain intensities were uniform and the raindrop sizes were usually uniform (2.2 mm, 3.6 mm, and 5.5 mm diameter drops). Two drop size distributions were also used.

  17. Anode current density distribution in a cusped field thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Huan, E-mail: wuhuan58@qq.com; Liu, Hui, E-mail: hlying@gmail.com; Meng, Yingchao

    2015-12-15

    The cusped field thruster is a new electric propulsion device that is expected to have a non-uniform radial current density at the anode. To further study the anode current density distribution, a multi-annulus anode is designed to directly measure the anode current density for the first time. The anode current density decreases sharply at larger radii; the magnitude of collected current density at the center is far higher than at the outer annuli. The non-uniformity of the anode current density does not change significantly with varying working conditions.

  18. Mathematical Model to estimate the wind power using four-parameter Burr distribution

    NASA Astrophysics Data System (ADS)

    Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu

    2018-03-01

    When the real probability distribution of wind speed at a given location needs to be described, the four-parameter Burr distribution is more suitable than other common distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution to wind speed prediction is also discussed, and an expression for the probability distribution of the output power of a wind turbine is derived.
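A minimal sketch of the kind of calculation the abstract describes, assuming the Burr type XII family as implemented in SciPy's `burr12` (two shape parameters `c` and `d`, plus `loc` and `scale`, giving four parameters). The synthetic wind-speed sample and the constants below are illustrative only, not from the paper.

```python
# Hedged sketch: fit a four-parameter Burr XII distribution to wind-speed
# data and compute the implied expected wind power density E[0.5 * rho * v^3].
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Synthetic "observed" wind speeds (m/s) drawn from a Burr XII model.
rng = np.random.default_rng(0)
speeds = stats.burr12.rvs(c=4.0, d=1.2, scale=8.0, size=2000, random_state=rng)

# Fit the four parameters; loc is pinned at 0 since speeds are non-negative.
c, d, loc, scale = stats.burr12.fit(speeds, floc=0)
dist = stats.burr12(c, d, loc=loc, scale=scale)

rho = 1.225  # air density, kg/m^3
# Expected kinetic power per unit rotor area (W/m^2), by numerical integration.
power_density, _ = quad(lambda v: 0.5 * rho * v**3 * dist.pdf(v), 0, np.inf)
```

The same expectation, taken against a turbine power curve instead of 0.5·ρ·v³, yields the distribution of turbine output power that the paper derives.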

  19. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss limitations of the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
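The flat-over-each-interval shape mentioned above follows directly from the maximum entropy principle: given assessed fractiles, entropy is maximized by spreading each interval's probability mass uniformly over that interval, giving a piecewise-constant density. A minimal sketch (function names are illustrative, not from the paper):

```python
# Sketch of the fractile-constrained maximum entropy distribution (FMED):
# given assessed fractiles (x_i, F(x_i)), the maximum entropy density is
# piecewise constant, flat over each fractile interval.


def fmed_density(fractiles):
    """fractiles: sorted (x, cumulative probability) pairs, starting at
    (x_min, 0.0) and ending at (x_max, 1.0). Returns a density f(x)."""
    def f(x):
        for (x0, p0), (x1, p1) in zip(fractiles, fractiles[1:]):
            if x0 <= x <= x1:
                return (p1 - p0) / (x1 - x0)  # uniform over the interval
        return 0.0  # outside the assessed bounds
    return f


# Decision-maker assesses the 25th, 50th, and 75th percentiles on [0, 10]:
f = fmed_density([(0.0, 0.0), (2.0, 0.25), (3.0, 0.50), (5.0, 0.75), (10.0, 1.0)])
```

The discontinuities at the assessed fractiles are visible here: f jumps from 0.125 to 0.25 at x = 2, which is exactly the limitation the paper's continuous heuristic approximation addresses.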

  20. Work probability distribution for a ferromagnet with long-ranged and short-ranged correlations

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, J. K.; Kirkpatrick, T. R.; Sengers, J. V.

    2018-04-01

    Work fluctuations and work probability distributions are fundamentally different in systems with short-ranged versus long-ranged correlations. Specifically, in systems with long-ranged correlations the work distribution is extraordinarily broad compared to systems with short-ranged correlations. This difference profoundly affects the possible applicability of fluctuation theorems like the Jarzynski fluctuation theorem. The Heisenberg ferromagnet, well below its Curie temperature, is a system with long-ranged correlations in very low magnetic fields due to the presence of Goldstone modes. As the magnetic field is increased the correlations gradually become short ranged. Hence, such a ferromagnet is an ideal system for elucidating the changes of the work probability distribution as one goes from a domain with long-ranged correlations to a domain with short-ranged correlations by tuning the magnetic field. A quantitative analysis of this crossover behavior of the work probability distribution and the associated fluctuations is presented.
