Sample records for exact feature probabilities

  1. Exact transition probabilities in a 6-state Landau–Zener system with path interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinitsyn, Nikolai A.

    2015-04-23

    In this paper, we identify a nontrivial multistate Landau–Zener (LZ) model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. In the semiclassical picture, this model features the possibility of interference of different trajectories that connect the same initial and final states. Hence, transition probabilities are generally not described by the incoherent successive application of the LZ formula. Finally, we discuss reasons for integrability of this system and provide numerical tests of the suggested expression for the transition probability matrix.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curchod, Basile F. E.; Agostini, Federica, E-mail: agostini@mpi-halle.mpg.de; Gross, E. K. U.

    Nonadiabatic quantum interferences emerge whenever nuclear wavefunctions in different electronic states meet and interact in a nonadiabatic region. In this work, we analyze how nonadiabatic quantum interferences translate in the context of the exact factorization of the molecular wavefunction. In particular, we focus our attention on the shape of the time-dependent potential energy surface—the exact surface on which the nuclear dynamics takes place. We use a one-dimensional exactly solvable model to reproduce different conditions for quantum interferences, whose characteristic features already appear in one-dimension. The time-dependent potential energy surface develops complex features when strong interferences are present, in clear contrastmore » to the observed behavior in simple nonadiabatic crossing cases. Nevertheless, independent classical trajectories propagated on the exact time-dependent potential energy surface reasonably conserve a distribution in configuration space that mimics one of the exact nuclear probability densities.« less

  3. Calculating pH-dependent free energy of proteins by using Monte Carlo protonation probabilities of ionizable residues.

    PubMed

    Huang, Qiang; Herrmann, Andreas

    2012-03-01

    Protein folding, stability, and function are usually influenced by pH, and free energy plays a fundamental role in the analysis of such pH-dependent properties. An electrostatics-based theoretical framework, which models the solvent as a dielectric continuum and solves the Poisson-Boltzmann equation numerically, has been shown to be very successful in understanding the pH-dependent properties. However, in this approach the exact computation of pH-dependent free energy becomes impractical for proteins possessing more than several tens of ionizable sites (e.g. > 30), because exact evaluation of the partition function requires a summation over a vast number of possible protonation microstates. Here we present a method which computes the free energy using the average energy and the protonation probabilities of ionizable sites obtained by the well-established Monte Carlo sampling procedure. The key feature is to calculate the entropy from the protonation probabilities. We applied this method to a well-studied protein (lysozyme) and obtained results that agree very well with the exact calculations. Applications to the optimum pH of maximal protein stability and to protein-DNA interactions have also yielded good agreement with experimental data. These examples recommend our method for elucidating the pH-dependent properties of proteins.
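
The entropy-from-probabilities step described above can be sketched in a few lines (a minimal illustration, not the authors' code: sites are treated as independent two-state systems, and the energy, temperature, and site probabilities below are made-up placeholders for values that Monte Carlo sampling would supply):

```python
import math

def protonation_entropy(probs):
    """Mean-field entropy (in units of k_B) from per-site protonation
    probabilities p_i, treating each ionizable site as an independent
    two-state (protonated/deprotonated) system."""
    s = 0.0
    for p in probs:
        for q in (p, 1.0 - p):
            if q > 0.0:
                s -= q * math.log(q)
    return s

def free_energy(mean_energy, temperature, probs, k_B=1.0):
    """F = <E> - T*S, with S estimated from the sampled probabilities."""
    return mean_energy - temperature * k_B * protonation_entropy(probs)

# Hypothetical numbers standing in for Monte Carlo output:
print(free_energy(mean_energy=-10.0, temperature=1.0, probs=[0.5, 0.9, 1.0]))
```

A fully (de)protonated site (p = 0 or 1) contributes no entropy, and a half-protonated site contributes the maximal k_B ln 2, which is why the entropy term is recoverable from the sampled probabilities alone.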

  4. Isotopic effects in the collinear reactive FHH system

    NASA Technical Reports Server (NTRS)

    Lepetit, B.; Launay, J. M.; Le Dourneuf, M.

    1986-01-01

    Exact quantum reaction probabilities for a collinear model of the F + HH, HD, DD and DH reactions on the MV potential energy surface have been computed using hyperspherical coordinates. The results, obtained up to a total energy of 1.8 eV, show three main features: (1) resonances, whose positions and widths are analyzed simply in the hyperspherical formalism; (2) a slowly varying background increasing for FHD, decreasing for FDH, and oscillating for FHH and FDD, whose variations are interpreted by classical dynamics; and (3) partial reaction probabilities revealing decreasing vibrational adiabaticity in the order FHH-FDD-FHD-FDH.

  5. Exact one-sided confidence limits for the difference between two correlated proportions.

    PubMed

    Lloyd, Chris J; Moldovan, Max V

    2007-08-15

    We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs using the well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.

  6. Are there common mathematical structures in economics and physics?

    NASA Astrophysics Data System (ADS)

    Mimkes, Jürgen

    2016-12-01

    Economics is a field that looks into the future. We may know a few things ahead (ex ante), but most things we only know afterwards (ex post). How can we work in a field where much of the important information is missing? Mathematics gives two answers: 1. Probability theory leads to microeconomics: the Lagrange function optimizes utility under constraints of economic terms (like costs). The utility function is the entropy, the logarithm of probability. The optimal result is given by a probability distribution and an integrating factor. 2. Calculus leads to macroeconomics: in economics we have two production factors, capital and labour. This requires two-dimensional calculus with exact and not-exact differentials, which represent the "ex ante" and "ex post" terms of economics. An integrating factor turns a not-exact term (like income) into an exact term (entropy, the natural production function). The integrating factor is the same as in microeconomics and turns the not-exact field of economics into an exact physical science.

  7. A large class of solvable multistate Landau–Zener models and quantum integrability

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Sinitsyn, Nikolai A.; Sun, Chen

    2018-06-01

    The concept of quantum integrability has been introduced recently for quantum systems with explicitly time-dependent Hamiltonians (Sinitsyn et al 2018 Phys. Rev. Lett. 120 190402). Within the multistate Landau–Zener (MLZ) theory, however, there has been a successful alternative approach to identify and solve complex time-dependent models (Sinitsyn and Chernyak 2017 J. Phys. A: Math. Theor. 50 255203). Here we compare both methods by applying them to a new class of exactly solvable MLZ models. This class contains systems with an arbitrary number of interacting states and shows quick growth with N number of exact adiabatic energy crossing points, which appear at different moments of time. At each N, transition probabilities in these systems can be found analytically and exactly but complexity and variety of solutions in this class also grow with N quickly. We illustrate how common features of solvable MLZ systems appear from quantum integrability and develop an approach to further classification of solvable MLZ problems.

  8. A new exact method for line radiative transfer

    NASA Astrophysics Data System (ADS)

    Elitzur, Moshe; Asensio Ramos, Andrés

    2006-01-01

    We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the 'effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the ³P lines of OI.

  9. Exact Tests for the Rasch Model via Sequential Importance Sampling

    ERIC Educational Resources Information Center

    Chen, Yuguo; Small, Dylan

    2005-01-01

    Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…

  10. Exact Solution of Mutator Model with Linear Fitness and Finite Genome Length

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2017-08-01

    We considered the infinite-population version of the mutator phenomenon in evolutionary dynamics, focusing on uni-directional mutations in the mutator-specific genes and linear selection. We solved the model exactly for the finite genome length case, looking at the quasispecies version of the phenomenon. We calculated the mutator probability in both statics and dynamics. The exact solution is important because the mutator probability depends on the genome length in a highly non-trivial way.

  11. Computing exact bundle compliance control charts via probability generating functions.

    PubMed

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
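
The generating-function idea is easy to illustrate for a bundle of independent items: multiplying the per-item PGFs G_i(z) = (1 − p_i) + p_i·z by coefficient convolution gives the exact distribution of the number of compliant items (a toy sketch under an independence assumption; the per-item compliance probabilities below are invented):

```python
def exact_sum_distribution(ps):
    """Exact pmf of S = sum of independent Bernoulli(p_i) indicators,
    obtained by multiplying the probability generating functions
    G_i(z) = (1 - p_i) + p_i * z via coefficient convolution."""
    pmf = [1.0]                           # PGF of the empty sum: G(z) = 1
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, c in enumerate(pmf):
            new[k] += c * (1.0 - p)       # this item non-compliant
            new[k + 1] += c * p           # this item compliant
        pmf = new
    return pmf

# Hypothetical per-item compliance probabilities for a 3-item bundle:
pmf = exact_sum_distribution([0.9, 0.8, 0.95])
print(pmf)         # exact P(S = 0), ..., P(S = 3)
print(sum(pmf))    # total probability, ≈ 1.0
```

Because the result is exact, the tail probabilities used for control-chart limits need no series expansion; the cost is one convolution pass per item.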

  12. Valuing options in shot noise market

    NASA Astrophysics Data System (ADS)

    Laskin, Nick

    2018-07-01

    A new exactly solvable option pricing model has been introduced and elaborated. It is assumed that a stock price follows a Geometric shot noise process. An arbitrage-free integro-differential option pricing equation has been obtained and solved. The new Greeks have been analytically calculated. It has been shown that in diffusion approximation the developed option pricing model incorporates the well-known Black-Scholes equation and its solution. The stochastic dynamic origin of the Black-Scholes volatility has been uncovered. To model the observed market stock price patterns consisting of high frequency small magnitude and low frequency large magnitude jumps, the superposition of two Geometric shot noises has been implemented. A new generalized option pricing equation has been obtained and its exact solution was found. Merton's jump-diffusion formula for option price was recovered in diffusion approximation. Despite the non-Gaussian nature of probability distributions involved, the new option pricing model has the same degree of analytical tractability as the Black-Scholes model and the Merton jump-diffusion model. This attractive feature allows one to derive exact formulas to value options and option related instruments in the market with jump-like price patterns.
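
For reference, the Black-Scholes price that the shot-noise model recovers in the diffusion approximation can be evaluated with the standard library alone (a textbook sketch, not the paper's shot-noise solution; the parameter values are arbitrary):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price: the diffusion-approximation
    benchmark against which jump models are compared."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Arbitrary illustrative parameters: at-the-money call, 1 year to expiry.
print(round(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 4))
```

A jump model with the "same degree of analytical tractability" means formulas of comparable closed-form simplicity to this one, just with modified distributional inputs.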

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows deriving the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.

  14. The exact probability distribution of the rank product statistics for replicated experiments.

    PubMed

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
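
The null distribution in question is that of a product of independent uniform ranks, and for small problems it can be enumerated exactly by brute force, which is a useful check against any closed form (a sketch only; the paper's number-theoretic derivation is precisely what makes realistic gene counts tractable):

```python
from itertools import product as iproduct
from math import prod

def rank_product_pmf(n, k):
    """Exact null distribution of the rank product R = r_1 * ... * r_k,
    where each rank r_i is uniform on {1, ..., n} (brute-force enumeration,
    feasible only for small n and k)."""
    counts = {}
    for ranks in iproduct(range(1, n + 1), repeat=k):
        r = prod(ranks)
        counts[r] = counts.get(r, 0) + 1
    total = n ** k
    return {r: c / total for r, c in sorted(counts.items())}

def tail_prob(pmf, r_obs):
    """Exact left-tail p-value P(R <= r_obs)."""
    return sum(p for r, p in pmf.items() if r <= r_obs)

pmf = rank_product_pmf(n=10, k=3)
print(tail_prob(pmf, 5))   # exact small-tail probability
```

It is exactly these small left-tail probabilities that the permutation and gamma approximations estimate poorly.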

  15. Bound state and localization of excitation in many-body open systems

    NASA Astrophysics Data System (ADS)

    Cui, H. T.; Shen, H. Z.; Hou, S. C.; Yi, X. X.

    2018-04-01

    We study the exact bound state and time evolution for single excitations in one-dimensional X X Z spin chains within a non-Markovian reservoir. For the bound state, a common feature is the localization of single excitations, which means the spontaneous emission of excitations into the reservoir is prohibited. Exceptionally, the pseudo-bound state can be found, for which the single excitation has a finite probability of emission into the reservoir. In addition, a critical energy scale for bound states is also identified, below which only one bound state exists, and it is also the pseudo-bound state. The effect of quasirandom disorder in the spin chain is also discussed; such disorder induces the single excitation to locate at some spin sites. Furthermore, to display the effect of bound state and disorder on the preservation of quantum information, the time evolution of single excitations in spin chains is studied exactly. An interesting observation is that the excitation can stay at its initial location with high probability only when the bound state and disorder coexist. In contrast, when either one of them is absent, the information of the initial state can be erased completely or becomes mixed. This finding shows that the combination of bound state and disorder can provide an ideal mechanism for quantum memory.

  16. Does the probability of developing ocular trauma-related visual deficiency differ between genders?

    PubMed

    Blanco-Hernández, Dulce Milagros Razo; Valencia-Aguirre, Jessica Daniela; Lima-Gómez, Virgilio

    2011-01-01

    Ocular trauma affects males more often than females, but the impact of this condition regarding visual prognosis is unknown. We undertook this study to compare the probability of developing ocular trauma-related visual deficiency between genders, as estimated by the ocular trauma score (OTS). We designed an observational, retrospective, comparative, cross-sectional and open-label study. Female patients aged ≥6 years with ocular trauma were included and matched by age and ocular wall status with male patients at a 1:2 female-to-male ratio. Initial trauma features and the probability of developing visual deficiency (best corrected visual acuity <20/40) 6 months after the injury, as estimated by the OTS, were compared between genders. The proportion and 95% confidence intervals (95% CI) of visual deficiency 6 months after the injury were estimated. Ocular trauma features and the probability of developing visual deficiency were compared between genders (χ² and Fisher's exact test); p value <0.05 was considered significant. Included were 399 eyes (133 from females and 266 from males). Mean age of patients was 25.7 ± 14.6 years. Statistical differences existed in the proportion of zone III in closed globe trauma (p = 0.01) and of type A (p = 0.04) and type B (p = 0.02) in open globe trauma. The distribution of the OTS categories was similar for both genders (category 5: p = 0.9); the probability of developing visual deficiency was 32.6% (95% CI = 24.6 to 40.5) in females and 33.2% (95% CI = 27.6 to 38.9) in males (p = 0.9). The probability of developing ocular trauma-related visual deficiency was similar for both genders. The same standard is required.
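
The Fisher's exact test used for such 2×2 comparisons can be computed from first principles with the hypergeometric distribution (a self-contained sketch; the counts below are invented and are not the study's data):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Probability of a table with x in the top-left cell, margins fixed.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Small tolerance guards against floating-point ties.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical 2x2 outcome-by-group table:
print(fisher_exact_two_sided(8, 2, 1, 5))
```

The exact test is preferred over χ² when expected cell counts are small, which is the usual reason the two appear side by side as here.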

  17. Fast algorithm for probabilistic bone edge detection (FAPBED)

    NASA Astrophysics Data System (ADS)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. 
The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean value of 0.38, indicating clear identification of surface points on average. The segmentation was also sufficiently crisp, with a full width at half maximum (FWHM) value of 1.51 voxels.

  18. Screening of copy number variants in the 22q11.2 region of congenital heart disease patients from the São Miguel Island, Azores, revealed the second patient with a triplication.

    PubMed

    Pires, Renato; Pires, Luís M; Vaz, Sara O; Maciel, Paula; Anjos, Rui; Moniz, Raquel; Branco, Claudia C; Cabral, Rita; Carreira, Isabel M; Mota-Vieira, Luisa

    2014-11-07

    The rearrangements in the 22q11.2 chromosomal region, responsible for the 22q11.2 deletion and microduplication syndromes, are frequently associated with congenital heart disease (CHD). The present work aimed to identify the genetic basis of CHD in 87 patients from the São Miguel Island, Azores, through the detection of copy number variants (CNVs) in the 22q11.2 region. These structural variants were searched using multiplex ligation-dependent probe amplification (MLPA). In patients with CNVs, we additionally performed fluorescent in situ hybridization (FISH) for the assessment of the exact number of 22q11.2 copies among each chromosome, and array comparative genomic hybridization (array-CGH) for the determination of the exact length of CNVs. We found that four patients (4.6%; A to D) carried CNVs. Patients A and D, both affected with a ventricular septal defect, carried a de novo 2.5 Mb deletion of the 22q11.2 region, which was probably originated by inter-chromosomal (inter-chromatid) non-allelic homologous recombination (NAHR) events in the regions containing low-copy repeats (LCRs). Patient C, with an atrial septal defect, carried a de novo 2.5 Mb duplication of 22q11.2 region, which could have been probably generated during gametogenesis by NAHR or by unequal crossing-over; additionally, this patient presented a benign 288 Kb duplication, which included the TOP3B gene inherited from her healthy mother. Finally, patient B showed a 3 Mb triplication associated with dysmorphic facial features, cognitive deficit and heart defects, a clinical feature not reported in the only case described so far in the literature. The evaluation of patient B's parents revealed a 2.5 Mb duplication in her father, suggesting a paternal inheritance with an extra copy. This report allowed the identification of rare deletion and microduplication syndromes in Azorean CHD patients. 
Moreover, we report the second patient with a 22q11.2 triplication, and we suggest that patients with triplications of chromosome 22q11.2, although they share some characteristic features with the deletion and microduplication syndromes, present a more severe phenotype probably due to the major dosage of implicated genes.

  19. Analysis of the Westland Data Set

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2001-01-01

    The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters can improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify the probability of error in an exact manner, such that features may be discarded or coarsened appropriately.

  20. Quantum work in the Bohmian framework

    NASA Astrophysics Data System (ADS)

    Sampaio, R.; Suomela, S.; Ala-Nissila, T.; Anders, J.; Philbin, T. G.

    2018-01-01

    At nonzero temperature classical systems exhibit statistical fluctuations of thermodynamic quantities arising from the variation of the system's initial conditions and its interaction with the environment. The fluctuating work, for example, is characterized by the ensemble of system trajectories in phase space and, by including the probabilities for various trajectories to occur, a work distribution can be constructed. However, without phase-space trajectories, the task of constructing a work probability distribution in the quantum regime has proven elusive. Here we use quantum trajectories in phase space and define fluctuating work as power integrated along the trajectories, in complete analogy to classical statistical physics. The resulting work probability distribution is valid for any quantum evolution, including cases with coherences in the energy basis. We demonstrate the quantum work probability distribution and its properties with an exactly solvable example of a driven quantum harmonic oscillator. An important feature of the work distribution is its dependence on the initial statistical mixture of pure states, which is reflected in higher moments of the work. The proposed approach introduces a fundamentally different perspective on quantum thermodynamics, allowing full thermodynamic characterization of the dynamics of quantum systems, including the measurement process.

  1. Compact perturbative expressions for neutrino oscillations in matter

    DOE PAGES

    Denton, Peter B.; Minakata, Hisakazu; Parke, Stephen J.

    2016-06-08

    We further develop and extend a recent perturbative framework for neutrino oscillations in uniform matter density so that the resulting oscillation probabilities are accurate for the complete matter potential versus baseline divided by neutrino energy plane. This extension also gives the exact oscillation probabilities in vacuum for all values of baseline divided by neutrino energy. The expansion parameter used is related to the ratio of the solar to the atmospheric $\Delta m^2$ scales but with a unique choice of the atmospheric $\Delta m^2$ such that certain first-order effects are taken into account in the zeroth-order Hamiltonian. Using a mixing matrix formulation, this framework has the exceptional feature that the neutrino oscillation probability in matter has the same structure as in vacuum, to all orders in the expansion parameter. It also contains all orders in the matter potential and $\sin\theta_{13}$. It facilitates immediate physical interpretation of the analytic results, and makes the expressions for the neutrino oscillation probabilities extremely compact and very accurate even at zeroth order in our perturbative expansion. Furthermore, the first and second order results are also given which improve the precision by approximately two or more orders of magnitude per perturbative order.
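
For orientation, the exact vacuum probabilities recovered by such frameworks reduce, in the two-flavor limit, to the standard textbook oscillation formula (a standard result quoted for context, not an expression taken from the paper):

```latex
P_{\nu_\mu \to \nu_e} \;=\; \sin^2(2\theta)\,
\sin^2\!\left(\frac{\Delta m^2 \, L}{4E}\right),
```

where $\theta$ is the mixing angle, $\Delta m^2$ the mass-squared splitting, $L$ the baseline, and $E$ the neutrino energy; matter effects modify the effective $\theta$ and $\Delta m^2$ while, in the mixing-matrix formulation described above, preserving this vacuum structure.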

  2. The Plausibility of a String Quartet Performance in Virtual Reality.

    PubMed

    Bergstrom, Ilias; Azevedo, Sergio; Papiotis, Panos; Saldanha, Nuno; Slater, Mel

    2017-04-01

    We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, or the musicians sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, or birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at its highest setting. Then, five times, participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work as both a contribution to the methodology of assessing presence without questionnaires, and a demonstration of how various aspects of a musical performance can influence plausibility.
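
Estimating such a Markov transition matrix from observed moves is a simple row-normalized count (a minimal sketch; the state names and observed moves below are invented, not the experiment's configurations):

```python
def transition_matrix(transitions):
    """Row-normalized Markov transition matrix estimated from observed
    (from_state, to_state) pairs, as used to summarize participants'
    moves between system configurations."""
    counts = {}
    for s, t in transitions:
        counts.setdefault(s, {}).setdefault(t, 0)
        counts[s][t] += 1
    matrix = {}
    for s, row in counts.items():
        n = sum(row.values())
        matrix[s] = {t: c / n for t, c in row.items()}
    return matrix

# Hypothetical observed moves between feature configurations:
moves = [("low", "env+"), ("low", "env+"), ("low", "gaze+"), ("env+", "gaze+")]
print(transition_matrix(moves))
```

Rows with many observations give reliable estimates; rows visited rarely do not, which is one reason the authors also report match probabilities conditional on configuration.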

  3. On the degree distribution of horizontal visibility graphs associated with Markov processes and dynamical systems: diagrammatic and variational approaches

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas

    2014-09-01

    Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
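The horizontal visibility mapping itself, and the empirical degree distribution it induces, can be computed directly from a series (a brute-force O(n²) sketch; for an uncorrelated random series the known exact result is P(k) = (1/3)(2/3)^(k−2), which such code can be used to check numerically):

```python
def hvg_degrees(series):
    """Degrees of the horizontal visibility graph of a time series:
    points i < j are linked iff every intermediate value lies strictly
    below min(series[i], series[j])."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

def degree_distribution(deg):
    """Empirical degree distribution P(k) from a list of degrees."""
    hist = {}
    for k in deg:
        hist[k] = hist.get(k, 0) + 1
    return {k: c / len(deg) for k, c in sorted(hist.items())}

print(degree_distribution(hvg_degrees([0.7, 0.2, 0.9, 0.4, 0.6, 0.1, 0.8])))
```

Adjacent points are always linked (the intermediate range is empty), so every interior point has degree at least 2.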

  4. Exact probability distribution functions for Parrondo's games

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Saakian, David B.; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
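Exact distributions for the capital-dependent game can also be cross-checked by brute-force propagation of the probability vector over capital states (a sketch using the commonly quoted game-B parameters, win probability 1/10 − ε when the capital is a multiple of 3 and 3/4 − ε otherwise; these are the standard textbook values, not necessarily the ones used in the paper):

```python
def parrondo_capital_pmf(rounds, eps=0.005):
    """Exact pmf of the capital after a given number of rounds of the
    capital-dependent Parrondo game B, starting from capital 0, obtained
    by propagating the full probability vector one round at a time."""
    pmf = {0: 1.0}
    for _ in range(rounds):
        new = {}
        for c, p in pmf.items():
            # Python's % returns a non-negative result for negative capital.
            w = (0.1 - eps) if c % 3 == 0 else (0.75 - eps)
            new[c + 1] = new.get(c + 1, 0.0) + p * w
            new[c - 1] = new.get(c - 1, 0.0) + p * (1.0 - w)
        pmf = new
    return pmf

pmf = parrondo_capital_pmf(50)
print(sum(pmf.values()))                    # total probability, ≈ 1.0
print(sum(c * p for c, p in pmf.items()))   # mean capital (game B alone loses)
```

The two limiting distributions for odd and even numbers of rounds mentioned in the abstract are visible here as a parity constraint: after n rounds the capital has the parity of n.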

  5. Exact probability distribution functions for Parrondo's games.

    PubMed

    Zadourian, Rubina; Saakian, David B; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.

  6. Fingerprints of exceptional points in the survival probability of resonances in atomic spectra

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Moiseyev, Nimrod

    2011-07-01

    The unique time signature of the survival probability exactly at the exceptional point parameters is studied here for the hydrogen atom in strong static magnetic and electric fields. We show that indeed the survival probability S(t) = |⟨ψ(0)|ψ(t)⟩|² decays exactly as |1 - at|² e^(-Γ_EP t/ℏ), where Γ_EP is associated with the decay rate at the exceptional point and a is a complex constant depending solely on the initial wave packet that populates exclusively the two almost degenerate states of the non-Hermitian Hamiltonian. This may open the possibility for a first experimental detection of exceptional points in a quantum system.

  7. Single-molecule stochastic times in a reversible bimolecular reaction

    NASA Astrophysics Data System (ADS)

    Keller, Peter; Valleriani, Angelo

    2012-08-01

    In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudo-bimolecular reactions in biology, such as those involving molecular motors. We derive the exact probability density for the stochastic waiting time that a molecule of species A needs until the reaction with a molecule of species B takes place, taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic treatment and in the approximate one.
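A quick way to see the departure from mass-action behavior is to compare the waiting time under a fluctuating B copy number with a single exponential. The rate constant, the mean copy number, and the Poisson choice below are illustrative assumptions, not the paper's model; the point is that a mixture of exponentials has a coefficient of variation greater than 1, whereas the mass-action exponential has exactly 1.

```python
import math, random

random.seed(1)
K = 1.0        # per-pair reaction rate (illustrative)
MEAN_B = 2.0   # small mean copy number of species B (illustrative)

def sample_poisson(lam):
    """Knuth's Poisson sampler, adequate for small lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def waiting_time():
    """Waiting time of one A molecule when the B copy number is itself
    random (Poisson, conditioned on at least one B being present);
    given n_B, the time is exponential with rate K * n_B."""
    n_b = 0
    while n_b == 0:
        n_b = sample_poisson(MEAN_B)
    return random.expovariate(K * n_b)

samples = [waiting_time() for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((t - mean) ** 2 for t in samples) / len(samples)
cv = math.sqrt(var) / mean   # = 1 for a single exponential; > 1 for a mixture
```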

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn; Ide, Yusuke

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, the deviation and mixing time of symmetric quantum walks are also investigated.
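A minimal numerical sketch of a coined DTQW on a cycle (a Hadamard coin with a moving shift, not the general coin or swapping shift operator treated in the paper; the cycle length, initial state, and step count are arbitrary):

```python
import math

N = 8                                         # cycle length (illustrative)
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard coin

# amp[position][coin]; start localized at site 0 with a balanced coin state
amp = [[0j, 0j] for _ in range(N)]
amp[0][0] = 1 / math.sqrt(2)
amp[0][1] = 1j / math.sqrt(2)

def step(amp):
    """One DTQW step: apply the coin, then shift coin-0 left, coin-1 right."""
    new = [[0j, 0j] for _ in range(N)]
    for x in range(N):
        a0 = H[0][0] * amp[x][0] + H[0][1] * amp[x][1]
        a1 = H[1][0] * amp[x][0] + H[1][1] * amp[x][1]
        new[(x - 1) % N][0] += a0
        new[(x + 1) % N][1] += a1
    return new

for _ in range(50):
    amp = step(amp)

prob = [abs(a0) ** 2 + abs(a1) ** 2 for a0, a1 in amp]
```

Because the coin is unitary and the shift is a permutation, the position distribution stays normalized; on an even cycle, an even number of steps leaves support only on even sites.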

  9. A Comparison of the Exact Kruskal-Wallis Distribution to Asymptotic Approximations for All Sample Sizes up to 105

    ERIC Educational Resources Information Center

    Meyer, J. Patrick; Seaman, Michael A.

    2013-01-01

    The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…
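For very small designs, the exact null distribution can be generated directly by enumerating rank assignments; the sketch below (illustrative, not the authors' code) does this for three groups of two and compares the extreme tail probability with the closed-form chi-square tail e^(-H/2) for two degrees of freedom:

```python
import itertools, math

groups = (2, 2, 2)                 # three groups of two observations
n = sum(groups)

def kruskal_h(assignment):
    """Kruskal-Wallis H for one assignment of the ranks 1..n to groups."""
    sums = [0.0] * len(groups)
    for rank, g in zip(range(1, n + 1), assignment):
        sums[g] += rank
    return (12 / (n * (n + 1))
            * sum(s * s / groups[g] for g, s in enumerate(sums))
            - 3 * (n + 1))

labels = [g for g, size in enumerate(groups) for _ in range(size)]
hs = [kruskal_h(a) for a in set(itertools.permutations(labels))]

h_obs = max(hs)                                   # most extreme achievable H
exact_p = sum(h >= h_obs - 1e-9 for h in hs) / len(hs)
chisq_p = math.exp(-h_obs / 2)                    # chi-square tail, df = 2
```

For this design there are 6!/(2!2!2!) = 90 equally likely rank assignments, the largest achievable H is 32/7, and its exact tail probability (6/90) is noticeably smaller than the chi-square approximation, in line with the caution above about approximations at tiny sample sizes.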

  10. Singular solution of the Feller diffusion equation via a spectral decomposition.

    PubMed

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.


  12. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures

    PubMed Central

    Sloma, Michael F.; Mathews, David H.

    2016-01-01

    RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924

  13. Condition Monitoring for Helicopter Data. Appendix A

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2000-01-01

    In this paper the classical "Westland" set of empirical accelerometer helicopter data is analyzed with the aim of condition monitoring for diagnostic purposes. The goal is to determine features for failure events from these data, via a proprietary signal processing toolbox, and to weigh these according to a variety of classification algorithms. As regards signal processing, it appears that the autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements; it has also been found that augmentation of these by harmonic and other parameters can improve classification significantly. As regards classification, several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.

  14. Constructing the Exact Significance Level for a Person-Fit Statistic.

    ERIC Educational Resources Information Center

    Liou, Michelle; Chang, Chih-Hsin

    1992-01-01

    An extension is proposed for the network algorithm introduced by C.R. Mehta and N.R. Patel to construct exact tail probabilities for testing the general hypothesis that item responses are distributed according to the Rasch model. A simulation study indicates the efficiency of the algorithm. (SLD)

  15. Exact valence bond entanglement entropy and probability distribution in the XXX spin chain and the Potts model.

    PubMed

    Jacobsen, J L; Saleur, H

    2008-02-29

    We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L ≫ 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy S_VB = ⟨N_c⟩ ln 2 = (4 ln 2/π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.

  16. Exact joint density-current probability function for the asymmetric exclusion process.

    PubMed

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society

  17. Burning mouth syndrome: an enigmatic disorder.

    PubMed

    Javali, M A

    2013-01-01

    Burning mouth syndrome (BMS) is a chronic oral pain or burning sensation affecting the oral mucosa, often unaccompanied by mucosal lesions or other evident clinical signs. It is observed principally in middle-aged patients and postmenopausal women and may be accompanied by xerostomia and altered taste. BMS is characterized by an intense burning or stinging sensation, most often on the tongue or in other areas of the mouth, and is one of the most common disorders encountered in clinical practice. The condition is probably of multifactorial origin; however, the exact underlying etiology remains uncertain. This article discusses several aspects of BMS, updates current knowledge about its etiopathogenesis, and describes the clinical features as well as the diagnosis and management of BMS patients.

  18. Rings in above-threshold ionization: A quasiclassical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewenstein, M.; Kulander, K.C.; Schafer, K.J.

    1995-02-01

    A generalized strong-field approximation is formulated to describe atoms interacting with intense laser fields. We apply it to determine angular distributions of electrons in above-threshold ionization (ATI). The theory treats the effects of an electron rescattering from its parent ion core in a systematic perturbation series. Probability amplitudes for ionization are interpreted in terms of quasiclassical electron trajectories. We demonstrate that contributions from the direct tunneling processes in the absence of rescattering are not sufficient to describe the observed ATI spectra. We show that the high-energy portion of the spectrum, including recently discovered rings (i.e., complex features in the angular distributions of outgoing electrons), is due to rescattering processes. We compare our quasiclassical results with exact numerical solutions.

  19. An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems

    PubMed Central

    Dawson, Kevin J.; Belkhir, Khalid

    2009-01-01

    Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306

  20. Performance of cellular frequency-hopped spread-spectrum radio networks

    NASA Astrophysics Data System (ADS)

    Gluck, Jeffrey W.; Geraniotis, Evaggelos

    1989-10-01

    Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.


  2. Computer program determines exact two-sided tolerance limits for normal distributions

    NASA Technical Reports Server (NTRS)

    Friedman, H. A.; Webb, S. R.

    1968-01-01

    Computer program determines by numerical integration the exact statistical two-sided tolerance limits such that the proportion of the population between the limits is at least a specified value. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.

  3. Knowing where is different from knowing what: Distinct response time profiles and accuracy effects for target location, orientation, and color probability.

    PubMed

    Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt

    2017-11-01

    When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute.

  4. Performance analysis of OOK-based FSO systems in Gamma-Gamma turbulence with imprecise channel models

    NASA Astrophysics Data System (ADS)

    Feng, Jianfeng; Zhao, Xiaohui

    2017-11-01

    For an FSO communication system with an imprecise channel model, we investigate the system performance in terms of outage probability, average BEP, and ergodic capacity. The exact FSO links are modeled as a Gamma-Gamma fading channel accounting for both atmospheric turbulence and pointing errors, and the imprecise channel model is treated as the superposition of the exact channel gain and a Gaussian random variable. After deriving the PDF, CDF, and nth moment of the imprecise channel gain, we obtain expressions for the outage probability, the average BEP, and the ergodic capacity in terms of Meijer G-functions. Both numerical and analytical results are presented. The simulation results show that the communication performance deteriorates under the imprecise channel model and approaches the exact performance curves as the channel model becomes accurate.

  5. General Exact Solution to the Problem of the Probability Density for Sums of Random Variables

    NASA Astrophysics Data System (ADS)

    Tribelsky, Michael I.

    2002-07-01

    The exact explicit expression for the probability density pN(x) for a sum of N random, arbitrary correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of pN(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts as well as that for the entire profile is obtained. A number of particular examples are considered in detail.
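For independent, identically distributed discrete summands, p_N(x) can be built exactly by repeated convolution, and its core compared with the Gaussian profile expected from the central limit theorem; the Bernoulli summand and the value of N below are illustrative, not taken from the paper:

```python
import math

def convolve(p, q):
    """Distribution of X + Y for independent integer-valued X, Y,
    each given as a dict {value: probability}."""
    out = {}
    for x, px in p.items():
        for y, py in q.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

one = {0: 0.5, 1: 0.5}        # a single Bernoulli(1/2) summand
N = 20
pN = {0: 1.0}
for _ in range(N):
    pN = convolve(pN, one)    # exact p_N by N-fold convolution

# Core of the profile vs. the Gaussian (central-limit) prediction
var = N * 0.25
gauss_peak = 1 / math.sqrt(2 * math.pi * var)
```

Already at N = 20 the exact peak p_N(10) agrees with the Gaussian core to about one percent, while the tails (large x) deviate from the Gaussian form, consistent with the core/tail decomposition described above.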


  7. A short note on probability in clinical medicine.

    PubMed

    Upshur, Ross E G

    2013-06-01

    Probability claims are ubiquitous in clinical medicine, yet exactly how clinical events relate to interpretations of probability has not been well explored. This brief essay examines the major interpretations of probability and how these interpretations may account for the probabilistic nature of clinical events. It is argued that there are significant problems with the unquestioned application of interpretations of probability to clinical events. The essay concludes by suggesting other avenues to understand uncertainty in clinical medicine. © 2013 John Wiley & Sons Ltd.

  8. Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.

    2017-12-01

    We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.


  10. Faster computation of exact RNA shape probabilities.

    PubMed

    Janssen, Stefan; Giegerich, Robert

    2010-03-01

    Abstract shape analysis allows efficient computation of a representative sample of low-energy foldings of an RNA molecule. More comprehensive information is obtained by computing shape probabilities, accumulating the Boltzmann probabilities of all structures within each abstract shape. Such information is superior to free energies because it is independent of sequence length and base composition. However, up to this point, computation of shape probabilities evaluates all shapes simultaneously and comes with a computation cost which is exponential in the length of the sequence. We devise an approach called RapidShapes that computes the shapes above a specified probability threshold T by generating a list of promising shapes and constructing specialized folding programs for each shape to compute its share of Boltzmann probability. This aims at a heuristic improvement of runtime, while still computing exact probability values. Evaluating this approach and several substrategies, we find that only a small proportion of shapes have to be actually computed. For an RNA sequence of length 400, this leads, depending on the threshold, to a 10-138 fold speed-up compared with the previous complete method. Thus, probabilistic shape analysis has become feasible in medium-scale applications, such as the screening of RNA transcripts in a bacterial genome. RapidShapes is available via http://bibiserv.cebitec.uni-bielefeld.de/rnashapes

  11. Some New Twists to Problems Involving the Gaussian Probability Integral

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1997-01-01

    Using an alternate form of the Gaussian probability integral discovered a number of years ago, it is shown that the solution to a number of previously considered communication problems can be simplified and in some cases made more accurate(i.e., exact rather than bounded).
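One such alternate form is Craig's finite-range representation of the Gaussian Q-function, Q(x) = (1/π) ∫₀^(π/2) exp(−x²/(2 sin²θ)) dθ for x ≥ 0, whose finite integration range makes many communication-theory averages tractable. It can be checked numerically against the complementary error function (the quadrature rule and grid size below are arbitrary choices):

```python
import math

def q_craig(x, n=4000):
    """Gaussian Q-function from Craig's finite-range integral:
    Q(x) = (1/pi) * int_0^{pi/2} exp(-x^2 / (2 sin^2 t)) dt, x >= 0."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h              # midpoint quadrature
        total += math.exp(-x * x / (2 * math.sin(t) ** 2))
    return total * h / math.pi

# Compare against the classical form Q(x) = erfc(x / sqrt(2)) / 2.
errors = [abs(q_craig(x) - 0.5 * math.erfc(x / math.sqrt(2)))
          for x in (0.5, 1.0, 2.0, 3.0)]
```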

  12. Soft inclusion in a confined fluctuating active gel

    NASA Astrophysics Data System (ADS)

    Singh Vishen, Amit; Rupprecht, J.-F.; Shivashankar, G. V.; Prost, J.; Rao, Madan

    2018-03-01

    We study stochastic dynamics of a point and extended inclusion within a one-dimensional confined active viscoelastic gel. We show that the dynamics of a point inclusion can be described by a Langevin equation with a confining potential and multiplicative noise. Using a systematic adiabatic elimination over the fast variables, we arrive at an overdamped equation with a proper definition of the multiplicative noise. To highlight various features and to appeal to different biological contexts, we treat the inclusion in turn as a rigid extended element, an elastic element, and a viscoelastic (Kelvin-Voigt) element. The dynamics for the shape and position of the extended inclusion can be described by coupled Langevin equations. Deriving exact expressions for the corresponding steady-state probability distributions, we find that the active noise induces an attraction to the edges of the confining domain. In the presence of a competing centering force, we find that the shape of the probability distribution exhibits a sharp transition upon varying the amplitude of the active noise. Our results could help understanding the positioning and deformability of biological inclusions, e.g., organelles in cells, or nucleus and cells within tissues.

  13. Hybrid Approaches and Industrial Applications of Pattern Recognition,

    DTIC Science & Technology

    1980-10-01

    emphasized that the probability distribution in (9) is correct only under the assumption that P(w|x) is known exactly. In practice this assumption will...sufficient precision. The alternative would be to take the probability distribution of estimates of P(w|x) into account in the analysis. However, from the

  14. Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    1995-01-01

    Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
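A minimal simulation of a success-breeds-success rule (a Yule-Simon-type urn, with an assumed probability `ALPHA` that a new item founds a new source; parameters and seed are illustrative, not from the article) shows the skewed source-item distributions such principles generate:

```python
import random

random.seed(2)
ALPHA = 0.2            # chance a new item founds a new source (assumed)

sizes = [1]            # item counts per source; start: one source, one item
owners = [0]           # owners[i] = source index of item i
for _ in range(20000):
    if random.random() < ALPHA:
        sizes.append(1)                    # new source
        owners.append(len(sizes) - 1)
    else:
        s = random.choice(owners)          # success breeds success:
        sizes[s] += 1                      # pick a source with prob ~ its size
        owners.append(s)

mean_size = sum(sizes) / len(sizes)
frac_singletons = sum(s == 1 for s in sizes) / len(sizes)
```

The result is the familiar informetric picture: most sources hold a single item while a few sources accumulate far more than the mean.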

  15. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
      §2. The large deviation principle and logarithmic asymptotics of continual integrals
      §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
        3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
        3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
        3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
        3.4. Exact asymptotics of large deviations of Gaussian norms
      §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
        4.1. The case of a non-degenerate minimum point ([137], I)
        4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
      §5. Further examples
        5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
        5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
        5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
        5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method: a version of the Laplace method in the space of continuous functions
      §6. Pickands' method of double sums
        6.1. General situations
        6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
        6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
      §7. Probabilities of large deviations of trajectories of Gaussian fields
        7.1. Homogeneous fields and fields with constant dispersion
        7.2. Finitely many maximum points of dispersion
        7.3. Manifold of maximum points of dispersion
        7.4. Asymptotics of distributions of maxima of Wiener fields
      §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
        8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
        8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
        8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space ([74])
        8.4. Asymptotics of distributions of maxima of the norms of l²-valued Gaussian processes
        8.5. Exact asymptotics of large deviations for the l²-valued Ornstein-Uhlenbeck process
    Bibliography

  16. Exact and Monte Carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.

    PubMed

    Berry, K J; Mielke, P W

    2000-12-01

    Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
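The resampling idea can be sketched in Python (the described programs are FORTRAN; the function and helper names below are illustrative, not the authors' code). Ranks are assigned with mid-ranks so that tied values are handled, and the Monte Carlo p-value counts permutations whose rank-sum statistic is at least as extreme as the observed one:

```python
import numpy as np

def rank_with_ties(values):
    """Assign 1-based ranks, averaging (mid-ranks) over tied values."""
    order = np.argsort(values, kind="mergesort")
    ranks = np.empty(len(values))
    sorted_vals = values[order]
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and sorted_vals[j + 1] == sorted_vals[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # mid-rank of the tied run
        i = j + 1
    return ranks

def mc_rank_sum_test(x, y, n_resamples=20000, seed=0):
    """Two-sided Monte Carlo resampling p-value for the rank-sum statistic."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    ranks = rank_with_ties(pooled)
    n = len(x)
    observed = ranks[:n].sum()
    mean_w = ranks.sum() * n / len(pooled)   # expected rank sum under H0
    count = 0
    for _ in range(n_resamples):
        perm = rng.permutation(ranks)
        if abs(perm[:n].sum() - mean_w) >= abs(observed - mean_w):
            count += 1
    return count / n_resamples

x = np.array([1.2, 3.4, 2.2, 5.1, 4.0])
y = np.array([6.3, 7.7, 5.9, 8.1, 6.6])
p = mc_rank_sum_test(x, y)
```

For such small groups the exact procedure would instead enumerate all C(10, 5) = 252 splits of the pooled ranks.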

  17. Dynamical Response of Networks Under External Perturbations: Exact Results

    NASA Astrophysics Data System (ADS)

    Chinellato, David D.; Epstein, Irving R.; Braha, Dan; Bar-Yam, Yaneer; de Aguiar, Marcus A. M.

    2015-04-01

    We give exact statistical distributions for the dynamic response of influence networks subjected to external perturbations. We consider networks whose nodes have two internal states labeled 0 and 1. We let N_0 nodes be frozen in state 0, N_1 in state 1, and the remaining nodes change by adopting the state of a connected node with a fixed probability per time step. The frozen nodes can be interpreted as external perturbations to the subnetwork of free nodes. Analytically extending N_0 and N_1 to values smaller than 1 enables modeling of the case of weak coupling. We solve the dynamical equations exactly for fully connected networks, obtaining the equilibrium distribution, the transition probabilities between any two states, and the characteristic time to equilibration. Our exact results are excellent approximations for other topologies, including random, regular lattice, scale-free, and small-world networks, when the numbers of fixed nodes are adjusted to account for the effect of topology on coupling to the environment. This model can describe a variety of complex systems, from magnetic spins to social networks to population genetics, and was recently applied as a framework for early warning signals for real-world self-organized economic market crises.
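The fully connected case can be illustrated with a short Monte Carlo sketch (parameter values here are arbitrary, and the copy-a-random-neighbor update is a minimal reading of the model, not the authors' code). With equal numbers of frozen 0- and 1-nodes, the long-run mean fraction of free nodes in state 1 should hover around 1/2:

```python
import random

def simulate(n_free=50, n0=3, n1=3, steps=200000, seed=1):
    """Fully connected influence network: a random free node copies the
    state of a random *other* node each step; n0/n1 nodes are frozen in
    state 0/1 and act as the external perturbation."""
    rng = random.Random(seed)
    free = [0] * n_free
    frozen = [0] * n0 + [1] * n1
    total = n_free + n0 + n1
    ones = 0                                  # running count of free 1-nodes
    acc = samples = 0
    for t in range(steps):
        i = rng.randrange(n_free)             # free node to update
        j = rng.randrange(total - 1)
        idx = j if j < i else j + 1           # exclude self
        new = frozen[idx - n_free] if idx >= n_free else free[idx]
        ones += new - free[i]
        free[i] = new
        if t >= steps // 2:                   # sample after burn-in
            acc += ones
            samples += 1
    return acc / (samples * n_free)           # mean fraction in state 1

frac_ones = simulate()
```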

  18. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.

  19. Influence of the random walk finite step on the first-passage probability

    NASA Astrophysics Data System (ADS)

    Klimenkova, Olga; Menshutin, Anton; Shchur, Lev

    2018-01-01

    We study the well-known connection between the first-passage probability of a random walk and the distribution of the electrical potential described by the Laplace equation. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length and measure the first-passage probability of touching the absorbing sphere of radius R in 2D. We find a systematic deviation of the first-passage probability from the exact function, which we attribute to the finiteness of the random walk step.
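The comparison being made can be sketched as follows (an illustrative setup, not the authors' simulation): a fixed-step walker starts off-center inside a disk, and the empirical distribution of its first-passage angle is compared with the exact continuum answer, the harmonic measure (Poisson kernel) that solves the Laplace problem. The residual difference is the finite-step effect the paper studies:

```python
import math, random

def hit_angle(x, y, R, step, rng):
    """Fixed-step-length walk until it leaves the disk of radius R;
    returns the angle of the first-passage point."""
    while x * x + y * y < R * R:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(phi)
        y += step * math.sin(phi)
    return math.atan2(y, x)

def poisson_kernel(theta, rho):
    """Exact (continuum) first-passage density on the circle for a walker
    started at radius rho*R, angle 0: the harmonic measure of the disk."""
    return (1 - rho * rho) / (2 * math.pi * (1 - 2 * rho * math.cos(theta) + rho * rho))

rng = random.Random(2)
rho, n_walkers = 0.5, 1500
hits = [hit_angle(rho, 0.0, 1.0, 0.02, rng) for _ in range(n_walkers)]
frac_near = sum(abs(t) < math.pi / 2 for t in hits) / n_walkers

# exact continuum probability of exiting through the near half-circle
m = 2000
exact = sum(poisson_kernel(-math.pi / 2 + math.pi * (k + 0.5) / m, rho) * math.pi / m
            for k in range(m))
```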

  20. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
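The discrete-time, binary-aggregation setting can be illustrated with a direct finite-system simulation for the constant kernel (any pair of clusters is equally likely to merge); names and parameters here are illustrative. Two exact bookkeeping facts hold by construction: after t merger events the system has exactly N - t clusters, and total mass is conserved:

```python
import random

def coagulate(n_monomers=100, n_steps=60, seed=3):
    """Finite coagulating system with a constant kernel: starting from
    monodisperse initial conditions, each discrete step merges one
    uniformly chosen pair of clusters (binary aggregation only)."""
    rng = random.Random(seed)
    clusters = [1] * n_monomers            # monodisperse initial condition
    for _ in range(n_steps):
        i, j = rng.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        # remove the two merging clusters (larger index first), add the merger
        for k in sorted((i, j), reverse=True):
            clusters.pop(k)
        clusters.append(merged)
    return clusters

clusters = coagulate()
```

Averaging the size distribution over many such runs is what the paper's combinatorial expressions describe exactly.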

  1. More on the decoder error probability for Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1987-01-01

    The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatorial technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P_E(u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived and shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and that some laws of large numbers come into play.

  2. Solution to a gene divergence problem under arbitrary stable nucleotide transition probabilities

    NASA Technical Reports Server (NTRS)

    Holmquist, R.

    1976-01-01

    A nucleic acid chain, L nucleotides in length, with the specific base sequence B(1)B(2) ... B(L) is defined by the L-dimensional vector B = (B(1), B(2), ..., B(L)). Given twelve constant non-negative transition probabilities that, at a specified position, base B is replaced by base B' in a single step, an exact analytical expression is derived for the probability that the position goes from base B to B' in X steps. Assuming that each base mutates independently of the others, an exact expression is derived for the probability that the initial gene sequence B goes to a sequence B' = (B'(1), B'(2), ..., B'(L)) after X = (X(1), X(2), ..., X(L)) base replacements. The resulting equations allow a more precise accounting for the effects of Darwinian natural selection in molecular evolution than does the idealized (biologically less accurate) assumption that each of the four nucleotides is equally likely to mutate to and be fixed as one of the other three. Illustrative applications of the theory to some problems of biological evolution are given.
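Per site, this is a four-state Markov chain, so the X-step probabilities are entries of the X-th power of the single-step substitution matrix, and site independence makes the sequence probability a product over positions. A minimal sketch (the matrix entries below are made-up illustrative values, not from the paper):

```python
import numpy as np

# Hypothetical single-step substitution matrix P[b, b'] over bases A, C, G, T:
# each row sums to one; the twelve off-diagonal entries are the twelve
# transition probabilities of the model.
P = np.array([
    [0.94, 0.02, 0.03, 0.01],
    [0.02, 0.94, 0.01, 0.03],
    [0.03, 0.01, 0.94, 0.02],
    [0.01, 0.03, 0.02, 0.94],
])
bases = "ACGT"

def prob_after_steps(b_from, b_to, x):
    """Exact probability that a site goes from base b_from to b_to in x steps."""
    Px = np.linalg.matrix_power(P, x)
    return Px[bases.index(b_from), bases.index(b_to)]

def sequence_prob(seq_from, seq_to, steps):
    """Independent-sites product over positions; steps[i] replacements at site i."""
    return float(np.prod([prob_after_steps(a, b, x)
                          for a, b, x in zip(seq_from, seq_to, steps)]))

p_site = prob_after_steps("A", "G", 10)
p_seq = sequence_prob("ACG", "ACA", [5, 5, 5])
```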

  3. Simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks.

    PubMed

    Song, H Francis; Wang, Xiao-Jing

    2014-12-01

    Small-world networks-complex networks characterized by a combination of high clustering and short path lengths-are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
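The distance-dependent formulation can be sketched generatively: each pair of nodes is linked independently with a probability that depends only on their ring (lattice) distance. The two-level probabilities below (`q_near`, `q_far`) are illustrative placeholders; the paper expresses them in terms of the WS parameters:

```python
import random

def distance_dependent_ring(n=200, k=3, q_near=0.9, q_far=0.01, seed=4):
    """Undirected graph in which nodes i, j are linked with probability
    q_near when their ring distance is <= k and q_far otherwise
    (illustrative values; see the paper for the WS-equivalent choices)."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            d = min(j - i, n - (j - i))      # ring (lattice) distance
            q = q_near if d <= k else q_far
            if rng.random() < q:
                edges.add((i, j))
    return edges

edges = distance_dependent_ring()
mean_degree = 2 * len(edges) / 200
# expected degree: 2*k*q_near + (n - 1 - 2*k)*q_far = 5.4 + 1.93
```

Because every edge is an independent Bernoulli trial with a distance-determined probability, quantities such as the degree distribution follow in closed form, which is the practical payoff of this formulation.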

  4. Simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks

    NASA Astrophysics Data System (ADS)

    Song, H. Francis; Wang, Xiao-Jing

    2014-12-01

    Small-world networks—complex networks characterized by a combination of high clustering and short path lengths—are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.

  5. Role of quantum statistics in multi-particle decay dynamics

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'el

    2015-04-01

    The role of quantum statistics in the decay dynamics of a multi-particle state, which is suddenly released from a confining potential, is investigated. For an initially confined two-particle state, the exact dynamics is presented for both bosons and fermions. The time evolution of the two-particle detection probability is evaluated and some counterintuitive features are discussed. For instance, it is shown that although there is a higher chance of finding the two bosons (as opposed to fermions, or even distinguishable particles) at the initial trap region, there is a higher chance (higher than for fermions) of finding them on two opposite sides of the trap, as if the repulsion between bosons were higher than the repulsion between fermions. The results are demonstrated by numerical simulations and are calculated analytically in the short-time approximation. Furthermore, experimental validation is suggested.
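The statistics-dependent two-particle density at the heart of such effects can be illustrated with a generic textbook example (not the paper's trap-release dynamics): (anti)symmetrizing a product of two 1D Gaussian orbitals makes the fermionic density vanish at coincident positions while the bosonic one is enhanced there:

```python
import math

def phi(x, x0, sigma=1.0):
    """Normalized 1D Gaussian orbital centered at x0."""
    return math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) / (math.pi * sigma ** 2) ** 0.25

def two_particle_density(x1, x2, sign):
    """|Psi(x1, x2)|^2 for the (anti)symmetrized product of orbitals at -1 and +1;
    sign = +1 for bosons, -1 for fermions (overall normalization omitted)."""
    a = phi(x1, -1.0) * phi(x2, 1.0)
    b = phi(x1, 1.0) * phi(x2, -1.0)
    return (a + sign * b) ** 2

boson_same = two_particle_density(0.0, 0.0, +1)    # exchange enhancement
fermion_same = two_particle_density(0.0, 0.0, -1)  # Pauli suppression
```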

  6. Validation of the SURE Program, phase 1

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase of the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities, and these solutions were compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.

  7. Gaussian and Airy wave packets of massive particles with orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Karlovets, Dmitry V.

    2015-01-01

    While wave-packet solutions for relativistic wave equations are oftentimes thought to be approximate (paraxial), we demonstrate, by employing a null-plane- (light-cone-) variable formalism, that there is a family of such solutions that are exact. A scalar Gaussian wave packet in the transverse plane is generalized so that it acquires a well-defined z component of the orbital angular momentum (OAM), while it may not acquire a typical "doughnut" spatial profile. Such quantum states and beams, in contrast to the Bessel states, may have an azimuthal-angle-dependent probability density and finite uncertainty of the OAM, which is determined by the packet's width. We construct a well-normalized Airy wave packet, which can be interpreted as a one-particle state for a relativistic massive boson, show that its center moves along the same quasiclassical straight path, and, more importantly, that it spreads with time and distance exactly as a Gaussian wave packet does, in accordance with the uncertainty principle. It is explained that this fact does not contradict the well-known "nonspreading" feature of the Airy beams. While the effective OAM for such states is zero, its uncertainty (or the beam's OAM bandwidth) is found to be finite, and it depends on the packet's parameters. A link between exact solutions for the Klein-Gordon equation in the null-plane-variable formalism and the approximate ones in the usual approach is indicated; generalizations of these states for a boson in the external field of a plane electromagnetic wave are also presented.

  8. The probability of being identified as an outlier with commonly used funnel plot control limits for the standardised mortality ratio.

    PubMed

    Seaton, Sarah E; Manktelow, Bradley N

    2012-07-16

    Emphasis is increasingly being placed on the monitoring of clinical outcomes for health care providers. Funnel plots have become an increasingly popular graphical methodology used to identify potential outliers. It is assumed that a provider only displaying expected random variation (i.e. 'in-control') will fall outside a control limit with a known probability. In reality, the discrete count nature of these data, and the differing methods, can lead to true probabilities quite different from the nominal value. This paper investigates the true probability of an 'in control' provider falling outside control limits for the Standardised Mortality Ratio (SMR). The true probabilities of an 'in control' provider falling outside control limits for the SMR were calculated and compared for three commonly used limits: Wald confidence interval; 'exact' confidence interval; probability-based prediction interval. The probability of falling above the upper limit, or below the lower limit, often varied greatly from the nominal value. This was particularly apparent when there were a small number of expected events: for expected events ≤ 50 the median probability of an 'in-control' provider falling above the upper 95% limit was 0.0301 (Wald), 0.0121 ('exact'), 0.0201 (prediction). It is important to understand the properties and probability of being identified as an outlier by each of these different methods to aid the correct identification of poorly performing health care providers. The limits obtained using probability-based prediction limits have the most intuitive interpretation and their properties can be defined a priori. Funnel plot control limits for the SMR should not be based on confidence intervals.
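The discreteness effect can be sketched for the probability-based prediction limit under a simplified in-control model in which the observed count is O ~ Poisson(E) (the paper additionally compares Wald and 'exact' interval limits): because O is integer-valued, the true probability of exceeding the nominal 97.5% upper limit is at most, and typically well below, 0.025:

```python
import math

def poisson_sf(k, mu):
    """P(O >= k) for O ~ Poisson(mu), via the complementary CDF."""
    term, cdf = math.exp(-mu), 0.0
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)
    return 1.0 - cdf

def true_upper_tail(expected, nominal=0.025):
    """True probability that an in-control provider (O ~ Poisson(expected))
    falls above the upper probability-based prediction limit: the limit is
    the smallest count o with P(O >= o) <= nominal, and the attained tail
    probability is generally below nominal because counts are discrete."""
    o = 0
    while poisson_sf(o, expected) > nominal:
        o += 1
    return poisson_sf(o, expected)

p10 = true_upper_tail(10.0)   # attained tail probability for E = 10
```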

  9. A study of some non-equilibrium driven models and their contribution to the understanding of molecular motors

    NASA Astrophysics Data System (ADS)

    Mazilu, Irina; Gonzalez, Joshua

    2008-03-01

    From the point of view of a physicist, a bio-molecular motor represents an interesting non-equilibrium system that is directly amenable to analysis using standard methods of non-equilibrium statistical physics. We conduct a rigorous Monte Carlo study of three different driven lattice gas models that retain the basic behavior of three types of cytoskeletal molecular motors. Our models incorporate novel features such as realistic dynamics rules and complex motor-motor interactions. We seek a deeper understanding of how various parameters influence the macroscopic behavior of these systems, what the density profile is, and whether the system undergoes a phase transition. On the analytical front, we computed the steady-state probability distributions exactly for one of the models using the matrix method established in 1993 by B. Derrida et al. We also explored the possibilities offered by the "Bethe ansatz" method, mapping some well-studied spin models onto asymmetric simple exclusion models (already analyzed using computer simulations) and using the results obtained for the spin models to find an exact solution for our problem. We have carried out exhaustive computational studies of the kinesin and dynein molecular motor models, which prove very useful in checking our analytical work.

  10. The gravitational law of social interaction

    NASA Astrophysics Data System (ADS)

    Levy, Moshe; Goldenberg, Jacob

    2014-01-01

    While a great deal is known about the topology of social networks, there is much less agreement about the geographical structure of these networks. The fundamental question in this context is: how does the probability of a social link between two individuals depend on the physical distance between them? While it is clear that the probability decreases with the distance, various studies have found different functional forms for this dependence. The exact form of the distance dependence has crucial implications for network searchability and dynamics: Kleinberg (2000) [15] shows that the small-world property holds if the probability of a social link is a power-law function of the distance with power -2, but not with any other power. We investigate the distance dependence of link probability empirically by analyzing four very different sets of data: Facebook links, data from the electronic version of the Small-World experiment, email messages, and data from detailed personal interviews. All four datasets reveal the same empirical regularity: the probability of a social link is proportional to the inverse of the square of the distance between the two individuals, analogously to the distance dependence of the gravitational force. Thus, it seems that social networks spontaneously converge to the exact unique distance dependence that ensures the Small-World property.

  11. The probability of misassociation between neighboring targets

    NASA Astrophysics Data System (ADS)

    Areta, Javier A.; Bar-Shalom, Yaakov; Rothrock, Ronald

    2008-04-01

    This paper presents procedures to calculate the probability that the measurement originating from an extraneous target will be (mis)associated with a target of interest for the cases of Nearest Neighbor and Global association. It is shown that these misassociation probabilities depend, under certain assumptions, on a particular covariance-weighted norm of the difference between the targets' predicted measurements. For Nearest Neighbor association, the exact solution, obtained for the case of equal innovation covariances, is based on a noncentral chi-square distribution. An approximate solution is also presented for the case of unequal innovation covariances. For the Global case, an approximation is presented for the case of "similar" innovation covariances. In the general case of unequal innovation covariances, where this approximation fails, an exact method based on the inversion of the characteristic function is presented. The theoretical results, confirmed by Monte Carlo simulations, quantify the benefit of Global vs. Nearest Neighbor association. These results are applied to problems of single sensor as well as centralized fusion architecture multiple sensor tracking.

  12. A study of the dynamics of multi-player games on small networks using territorial interactions.

    PubMed

    Broom, Mark; Lafaye, Charlotte; Pattni, Karan; Rychtář, Jan

    2015-12-01

    Recently, the study of structured populations using models of evolutionary processes on graphs has begun to incorporate a more general type of interaction between individuals, allowing multi-player games to be played among the population. In this paper, we develop a birth-death dynamics for use in such models and consider the evolution of populations for special cases of very small graphs where we can easily identify all of the population states and carry out exact analyses. To do so, we study two multi-player games, a Hawk-Dove game and a public goods game. Our focus is on finding the fixation probability of an individual from one type, cooperator or defector in the case of the public goods game, within a population of the other type. We compare this value for both games on several graphs under different parameter values and assumptions, and identify some interesting general features of our model. In particular there is a very close relationship between the fixation probability and the mean temperature, with high temperatures helping fitter individuals and punishing unfit ones and so enhancing selection, whereas low temperatures give a levelling effect which suppresses selection.
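As a point of comparison for such fixation probabilities, the classical well-mixed (complete-graph) Moran process admits a closed form; the sketch below is this standard baseline, not the paper's multi-player birth-death dynamics. The closed form agrees with the value obtained by solving the underlying birth-death chain, whose backward ratio is 1/r at every state:

```python
def moran_fixation(r, n):
    """Exact fixation probability of a single mutant of relative fitness r
    in a well-mixed Moran process of population size n."""
    if r == 1.0:
        return 1.0 / n                       # neutral drift
    return (1 - 1 / r) / (1 - r ** (-n))

def moran_fixation_chain(r, n):
    """Same quantity from the birth-death chain: rho_1 = 1 / sum_k gamma^k,
    where gamma = (death rate)/(birth rate) = 1/r for every interior state."""
    gamma = 1.0 / r
    denom = sum(gamma ** k for k in range(n))
    return 1.0 / denom

rho = moran_fixation(2.0, 10)
rho_chain = moran_fixation_chain(2.0, 10)
```

Game-structured models replace the constant fitness r with state-dependent payoffs, which is where the exact small-graph analyses of the paper come in.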

  13. Statistics of Optical Coherence Tomography Data From Human Retina

    PubMed Central

    de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo

    2010-01-01

    Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, fit parameters of this model are relatively constant along retinal layers but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733

  14. Potential postwildfire debris-flow hazards - A prewildfire evaluation for the Jemez Mountains, north-central New Mexico

    Treesearch

    Anne C. Tillery; Jessica Haas

    2016-01-01

    Wildfire can substantially increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although the exact location, extent, and severity of wildfire or subsequent rainfall intensity and duration cannot be known, probabilities of fire and debris‑flow...

  15. Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto

    2016-06-01

    Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by only a negligible amount during a time interval. Small-population species involved in the firings of fast reactions significantly affect both the performance and the accuracy of this simulation approach, and the problem is even worse when these species participate in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firing. Each reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if this probability is set to one. Our new algorithm reduces the computational cost both of selecting the next reaction firing and of updating the reaction propensities.
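The accept-reject selection at the core of rejection-based simulation can be sketched in Python (a minimal illustration of the selection step only, with hand-picked propensities and bounds; the full algorithm also maintains the bounds over state intervals and handles firing times):

```python
import random

def select_reaction(a_exact, a_bound, rng):
    """Rejection-based selection: draw a candidate reaction from the
    propensity *upper bounds*, then accept it with probability
    a_exact[j] / a_bound[j]. Conditional on acceptance, reaction j is
    chosen with the exact probability a_exact[j] / sum(a_exact)."""
    total = sum(a_bound)
    while True:
        u = rng.random() * total
        j, acc = 0, a_bound[0]
        while u >= acc:                      # locate the candidate reaction
            j += 1
            acc += a_bound[j]
        if rng.random() <= a_exact[j] / a_bound[j]:
            return j                         # accepted
        # rejected: retry; no exact propensities of other reactions needed

rng = random.Random(5)
a_exact = [2.0, 1.0, 3.0]                    # exact propensities
a_bound = [3.0, 2.0, 4.0]                    # any bounds >= the exact values
counts = [0, 0, 0]
trials = 30000
for _ in range(trials):
    counts[select_reaction(a_exact, a_bound, rng)] += 1
freqs = [c / trials for c in counts]         # approaches [1/3, 1/6, 1/2]
```

The saving comes from the rejected candidates: they only touch the cheap bounds, so exact propensities need to be recomputed far less often.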

  16. Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento

    Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by only a negligible amount during a time interval. Small-population species involved in the firings of fast reactions significantly affect both the performance and the accuracy of this simulation approach, and the problem is even worse when these species participate in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firing. Each reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if this probability is set to one. Our new algorithm reduces the computational cost both of selecting the next reaction firing and of updating the reaction propensities.

  17. Spread of epidemic disease on networks

    NASA Astrophysics Data System (ADS)

    Newman, M. E.

    2002-07-01

    The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible/infective/removed (SIR) models, can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks.
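The fixed, uncorrelated transmission case maps onto bond percolation: each edge transmits independently with probability T, and the outbreak reached from a seed is the seed's percolation cluster. A minimal simulation sketch (the Erdos-Renyi substrate and all parameter values here are illustrative, not from the paper):

```python
import random

def outbreak_size(adj, t_prob, patient_zero, rng):
    """Final size of an SIR outbreak when each susceptible contact is
    infected independently with probability t_prob (fixed, uncorrelated
    transmissibility); breadth-first spread from a single seed."""
    infected = {patient_zero}
    frontier = [patient_zero]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < t_prob:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(infected)

# Erdos-Renyi graph as a simple test substrate (mean degree ~ 6)
rng = random.Random(6)
n, p_edge = 300, 0.02
adj = {i: [] for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p_edge:
            adj[i].append(j)
            adj[j].append(i)

sizes = [outbreak_size(adj, 0.5, rng.randrange(n), rng) for _ in range(200)]
mean_size = sum(sizes) / len(sizes)
```

The bimodal outcome (small outbreaks vs. epidemics spanning a finite fraction of the network) is exactly what the generating-function solutions predict above the epidemic threshold.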

  18. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
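For Onemax the exact fitness distribution after mutation can be obtained by a direct elementary computation (shown here instead of the paper's Krawtchouk-polynomial machinery): starting from a string with k ones, the number of ones lost is Binomial(k, p), the number gained is Binomial(n - k, p), and the two are independent:

```python
from math import comb

def binom_pmf(n, p, k):
    """Binomial(n, p) probability mass at k."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def onemax_offspring_dist(n, k, p):
    """Exact pmf of the Onemax fitness after bit-flip mutation with rate p,
    starting from a length-n string with k ones: convolve the independent
    losses ~ Bin(k, p) and gains ~ Bin(n - k, p)."""
    dist = {}
    for lost in range(k + 1):
        for gained in range(n - k + 1):
            f = k - lost + gained
            dist[f] = dist.get(f, 0.0) + binom_pmf(k, p, lost) * binom_pmf(n - k, p, gained)
    return dist

dist = onemax_offspring_dist(n=10, k=6, p=0.1)
mean_fitness = sum(f * q for f, q in dist.items())   # = k(1-p) + (n-k)p
```

Each probability in `dist` is a polynomial in p, consistent with the paper's general result.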

  19. Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice

    NASA Astrophysics Data System (ADS)

    Chen, Haiyan; Zhang, Fuji

    2013-08-01

    In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with certain given boundary condition, which was found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990)], 10.1051/jphys:0199000510110107700 but without a proof.

  20. A new estimator of the discovery probability.

    PubMed

    Favaro, Stefano; Lijoi, Antonio; Prünster, Igor

    2012-12-01

    Species sampling problems have a long history in ecological and biological studies, and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, need to be addressed. Such inferential problems have recently emerged in genomic applications as well; however, they exhibit some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes), of which only a small portion has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows one to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, the focus is on predicting a key aspect of the outcome of an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be evaluated exactly. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries, and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
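For orientation, the classical frequentist baseline for the discovery probability is the Good-Turing estimator (this is not the paper's Bayesian nonparametric estimator, just the standard point of comparison): the probability that the next draw is a species seen exactly r times is estimated as (r+1) n_{r+1} / N, with r = 0 giving the new-species probability n_1 / N:

```python
from collections import Counter

def good_turing_discovery(sample):
    """Classical Good-Turing estimate of the probability that the next
    observation belongs to a species seen exactly r times in the sample:
    (r + 1) * n_{r+1} / N, where n_j is the number of species observed
    j times and N is the sample size. r = 0 gives the discovery
    (new-species) probability n_1 / N."""
    n = len(sample)
    freq = Counter(sample)                  # species -> count
    freq_of_freq = Counter(freq.values())   # r -> number of species seen r times
    def prob_next_has_count(r):
        return (r + 1) * freq_of_freq.get(r + 1, 0) / n
    return prob_next_has_count

sample = list("AAABBCDDEFG")   # 11 draws; the singletons are C, E, F, G
p_new = good_turing_discovery(sample)(0)   # n_1 / N = 4/11
```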

  1. Collisionless tearing instability of a bi-Maxwellian neutral sheet - An integrodifferential treatment with exact particle orbits

    NASA Technical Reports Server (NTRS)

    Burkhart, G. R.; Chen, J.

    1989-01-01

    The integrodifferential equation describing the linear tearing instability in the bi-Maxwellian neutral sheet is solved without approximating the particle orbits or the eigenfunction psi. Results of this calculation are presented. Comparison between the exact solution and the three-region approximation motivates the piecewise-straight-line approximation, a simplification that allows faster solution of the integrodifferential equation, yet retains the important features of the exact solution.

  2. A subcortical inhibitory signal for behavioral arrest in the thalamus

    PubMed Central

    Dugué, Guillaume P.; Bokor, Hajnalka; Rousseau, Charly V.; Maglóczky, Zsófia; Havas, László; Hangya, Balázs; Wildner, Hendrik; Zeilhofer, Hanns Ulrich; Dieudonné, Stéphane; Acsády, László

    2016-01-01

    Organization of behavior requires rapid coordination of brainstem and forebrain activity. The exact mechanisms of effective communication between these regions are presently unclear. The intralaminar thalamus (IL) probably serves as a central hub in this circuit by connecting the critical brainstem and forebrain areas. Here we found that GABAergic/glycinergic fibers ascending from the pontine reticular formation (PRF) of the brainstem evoke fast and reliable inhibition in the IL thalamus via large, multisynaptic terminals. This inhibition was fine-tuned through heterogeneous GABAergic/glycinergic receptor ratios expressed at individual synapses. Optogenetic activation of PRF axons in the IL of freely moving mice led to behavioral arrest and transient interruption of awake cortical activity. An afferent system with comparable morphological features was also found in the human IL. These data reveal an evolutionarily conserved ascending system which gates forebrain activity through fast and powerful synaptic inhibition of the IL thalamus. PMID:25706472

  3. An exact computational method for performance analysis of sequential test algorithms for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Lacy, Fred; Carriere, Patrick

    2015-05-01

Sequential test algorithms play an increasingly important role in quickly detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for their performance analysis. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.
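The abstract does not specify which sequential test is analyzed; as an illustrative stand-in (an assumption, not the paper's method), Wald's sequential probability ratio test (SPRT), a common basis for portscan detectors, trades off the false-alarm rate alpha and miss rate beta through two log-likelihood thresholds:

```python
import math

def sprt(observations, p0, p1, alpha=0.01, beta=0.01):
    """Wald's SPRT for Bernoulli data: decide between H0 (success
    prob p0) and H1 (success prob p1). Returns the decision
    ('H0', 'H1' or 'undecided') and the number of samples consumed."""
    upper = math.log((1 - beta) / alpha)    # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))    # cross below -> accept H0
    llr = 0.0
    for i, x in enumerate(observations, 1):
        # log-likelihood ratio contribution of one observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", i
        if llr <= lower:
            return "H0", i
    return "undecided", len(observations)

# A run of failed connection attempts quickly pushes the test to H1
decision, n = sprt([1] * 20, p0=0.2, p1=0.8, alpha=0.01, beta=0.01)
print(decision, n)
```

The quantities the paper computes exactly, false-alarm probability and average detection time, are precisely the error rate and the expected value of `n` for such a test.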

  4. Exact Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data.

    PubMed

    Lin, Yan; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett; Lipshultz, Steven

    2017-01-01

    Altham (Altham PME. Exact Bayesian analysis of a 2 × 2 contingency table, and Fisher's "exact" significance test. J R Stat Soc B 1969; 31: 261-269) showed that a one-sided p-value from Fisher's exact test of independence in a 2 × 2 contingency table is equal to the posterior probability of negative association in the 2 × 2 contingency table under a Bayesian analysis using an improper prior. We derive an extension of Fisher's exact test p-value in the presence of missing data, assuming the missing data mechanism is ignorable (i.e., missing at random or completely at random). Further, we propose Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data using alternative priors; we also present results from a simulation study exploring the Type I error rate and power of the proposed exact test p-values. An example, using data on the association between blood pressure and a cardiac enzyme, is presented to illustrate the methods.
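For the complete-data case, Altham's identity can be checked numerically: the one-sided Fisher exact p-value is a hypergeometric tail sum that needs only binomial coefficients. A minimal sketch (the table entries are illustrative, not data from the paper):

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """Left-tail Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    P(X <= a) for X hypergeometric with the observed margins fixed."""
    r1, r2, c1 = a + b, c + d, a + c          # row and column totals
    n = r1 + r2
    denom = comb(n, c1)
    return sum(comb(r1, k) * comb(r2, c1 - k)
               for k in range(max(0, c1 - r2), a + 1)) / denom

# Strong negative association in this illustrative table -> small p-value
print(fisher_one_sided(1, 9, 11, 3))
```

Per Altham's result, this p-value equals the posterior probability of negative association under the improper prior she identifies; the paper extends this correspondence to tables with ignorable missing data.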

  5. Constructing exact symmetric informationally complete measurements from numerical solutions

    NASA Astrophysics Data System (ADS)

    Appleby, Marcus; Chien, Tuan-Yow; Flammia, Steven; Waldron, Shayne

    2018-04-01

    Recently, several intriguing conjectures have been proposed connecting symmetric informationally complete quantum measurements (SIC POVMs, or SICs) and algebraic number theory. These conjectures relate the SICs to their minimal defining algebraic number field. Testing or sharpening these conjectures requires that the SICs are expressed exactly, rather than as numerical approximations. While many exact solutions of SICs have been constructed previously using Gröbner bases, this method has probably been taken as far as is possible with current computer technology (except in special cases where there are additional symmetries). Here, we describe a method for converting high-precision numerical solutions into exact ones using an integer relation algorithm in conjunction with the Galois symmetries of an SIC. Using this method, we have calculated 69 new exact solutions, including nine new dimensions, where previously only numerical solutions were known—which more than triples the number of known exact solutions. In some cases, the solutions require number fields with degrees as high as 12 288. We use these solutions to confirm that they obey the number-theoretic conjectures, and address two questions suggested by the previous work.

  6. Potential postwildfire debris-flow hazards - a prewildfire evaluation for the Sandia and Manzano Mountains and surrounding areas, central New Mexico

    Treesearch

    Anne C. Tillery; Jessica R. Haas; Lara W. Miller; Joe H. Scott; Matthew P. Thompson

    2014-01-01

    Wildfire can drastically increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although there is no way to know the exact location, extent, and severity of wildfire, or the subsequent rainfall intensity and duration before it happens, probabilities...

  7. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

  8. Renyi Entropy of the Ideal Gas in Finite Momentum Intervals

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.

    2003-06-01

Coincidence probabilities of multiparticle events, measured in finite momentum intervals for ideal Bose and Fermi gases, are calculated and compared with the exact expressions given in statistical physics.

  9. Method of self-consistent evaluation of absolute emission probabilities of particles and gamma rays

    NASA Astrophysics Data System (ADS)

    Badikov, Sergei; Chechev, Valery

    2017-09-01

Under the assumption of a well-established decay scheme, the method provides (a) exact balance relationships, (b) lower uncertainties of the recommended absolute emission probabilities of particles and gamma rays compared to traditional techniques, and (c) evaluation of correlations between the recommended emission probabilities (for the same and different decay modes). Application of the method to the decay data evaluation for even curium isotopes led to paradoxical results: the multidimensional confidence regions for the probabilities of the most intensive alpha transitions constructed on the basis of the present and the ENDF/B-VII.1, JEFF-3.1, and DDEP evaluations are inconsistent, whereas the confidence intervals for the evaluated probabilities of single transitions agree with each other.

  10. Multifractals embedded in short time series: An unbiased estimation of probability moment

    NASA Astrophysics Data System (ADS)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it provides an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series several hundred points in length, a comparison with well-established tools displays significant performance advantages over the other methods. The factorial-moment-based estimation can evaluate correctly the scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Beyond the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various other uses, such as detecting nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.

  11. Does really Born-Oppenheimer approximation break down in charge transfer processes? An exactly solvable model

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Alexander M.; Medvedev, Igor G.

    2006-05-01

    Effects of deviation from the Born-Oppenheimer approximation (BOA) on the non-adiabatic transition probability for the transfer of a quantum particle in condensed media are studied within an exactly solvable model. The particle and the medium are modeled by a set of harmonic oscillators. The dynamic interaction of the particle with a single local mode is treated explicitly without the use of BOA. Two particular situations (symmetric and non-symmetric systems) are considered. It is shown that the difference between the exact solution and the true BOA is negligibly small at realistic parameters of the model. However, the exact results differ considerably from those of the crude Condon approximation (CCA) which is usually considered in the literature as a reference point for BOA (Marcus-Hush-Dogonadze formula). It is shown that the exact rate constant can be smaller (symmetric system) or larger (non-symmetric one) than that obtained in CCA. The non-Condon effects are also studied.

  12. Memory effects on a resonate-and-fire neuron model subjected to Ornstein-Uhlenbeck noise

    NASA Astrophysics Data System (ADS)

    Paekivi, S.; Mankin, R.; Rekker, A.

    2017-10-01

    We consider a generalized Langevin equation with an exponentially decaying memory kernel as a model for the firing process of a resonate-and-fire neuron. The effect of temporally correlated random neuronal input is modeled as Ornstein-Uhlenbeck noise. In the noise-induced spiking regime of the neuron, we derive exact analytical formulas for the dependence of some statistical characteristics of the output spike train, such as the probability distribution of the interspike intervals (ISIs) and the survival probability, on the parameters of the input stimulus. Particularly, on the basis of these exact expressions, we have established sufficient conditions for the occurrence of memory-time-induced transitions between unimodal and multimodal structures of the ISI density and a critical damping coefficient which marks a dynamical transition in the behavior of the system.

  13. How mutation affects evolutionary games on graphs

    PubMed Central

    Allen, Benjamin; Traulsen, Arne; Tarnita, Corina E.; Nowak, Martin A.

    2011-01-01

    Evolutionary dynamics are affected by population structure, mutation rates and update rules. Spatial or network structure facilitates the clustering of strategies, which represents a mechanism for the evolution of cooperation. Mutation dilutes this effect. Here we analyze how mutation influences evolutionary clustering on graphs. We introduce new mathematical methods to evolutionary game theory, specifically the analysis of coalescing random walks via generating functions. These techniques allow us to derive exact identity-by-descent (IBD) probabilities, which characterize spatial assortment on lattices and Cayley trees. From these IBD probabilities we obtain exact conditions for the evolution of cooperation and other game strategies, showing the dual effects of graph topology and mutation rate. High mutation rates diminish the clustering of cooperators, hindering their evolutionary success. Our model can represent either genetic evolution with mutation, or social imitation processes with random strategy exploration. PMID:21473871

  14. Product of Ginibre matrices: Fuss-Catalan and Raney distributions

    NASA Astrophysics Data System (ADS)

    Penson, Karol A.; Życzkowski, Karol

    2011-06-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions Ps(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions Ps(x) in terms of a combination of s hypergeometric functions of the type sFs-1. The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.

  15. Product of Ginibre matrices: Fuss-Catalan and Raney distributions.

    PubMed

    Penson, Karol A; Zyczkowski, Karol

    2011-06-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions P(s)(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions P(s)(x) in terms of a combination of s hypergeometric functions of the type (s)F(s-1). The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.
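The Fuss-Catalan numbers referred to in both records above are FC_s(n) = C((s+1)n, n)/(sn+1). A short sketch computes them and checks that s = 1 recovers the Catalan numbers, the moments of the Marchenko-Pastur distribution:

```python
from math import comb

def fuss_catalan(s, n):
    """Fuss-Catalan number of order s: C((s+1)n, n) / (s*n + 1).
    These are the asymptotic moments of the squared singular values
    of a product of s square Ginibre matrices."""
    return comb((s + 1) * n, n) // (s * n + 1)   # always an exact integer

# s = 1 recovers the Catalan numbers 1, 1, 2, 5, 14, ...
print([fuss_catalan(1, n) for n in range(5)])   # -> [1, 1, 2, 5, 14]
print([fuss_catalan(2, n) for n in range(5)])   # -> [1, 1, 3, 12, 55]
```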

  16. An Exactly Soluble Model for Hopping Particles Moving with Correlations Between States due to Exchange Sites

    NASA Astrophysics Data System (ADS)

    Zhao, Xian-Geng; Jia, Sue-Tang

    1992-09-01

The motion of hopping particles on an infinite chain is investigated. The model is characterized by correlations between states due to exchange sites. The analytic solutions for this system are discussed in the general case. For some special cases, exact results are obtained with the help of explicit calculations of propagators and the mean square displacement deviation. Probability propagators for both the creation and annihilation of two particles and for the deformation and formation of Frenkel excitons are presented.

  17. Exact numerical calculation of fixation probability and time on graphs.

    PubMed

    Hindersin, Laura; Möller, Marius; Traulsen, Arne; Bauer, Benedikt

    2016-12-01

    The Moran process on graphs is a popular model to study the dynamics of evolution in a spatially structured population. Exact analytical solutions for the fixation probability and time of a new mutant have been found for only a few classes of graphs so far. Simulations are time-expensive and many realizations are necessary, as the variance of the fixation times is high. We present an algorithm that numerically computes these quantities for arbitrary small graphs by an approach based on the transition matrix. The advantage over simulations is that the calculation has to be executed only once. Building the transition matrix is automated by our algorithm. This enables a fast and interactive study of different graph structures and their effect on fixation probability and time. We provide a fast implementation in C with this note (Hindersin et al., 2016). Our code is very flexible, as it can handle two different update mechanisms (Birth-death or death-Birth), as well as arbitrary directed or undirected graphs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
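For the well-mixed case (complete graph) the transition-matrix approach can be cross-checked against the known closed form rho = (1 - 1/r)/(1 - r^(-N)). A minimal Python sketch (not the authors' C implementation) builds the birth-death chain over the number of mutants and solves the resulting linear system:

```python
import numpy as np

def moran_fixation(N, r):
    """Fixation probability of a single mutant of fitness r in a
    well-mixed Moran process of size N, by solving the linear system
    phi_i = T+ phi_{i+1} + T- phi_{i-1} + (1 - T+ - T-) phi_i."""
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0                        # absorbing: phi_0 = 0
    A[N, N] = 1.0; b[N] = 1.0            # absorbing: phi_N = 1
    for i in range(1, N):
        tp = (r * i / (r * i + N - i)) * (N - i) / N   # i -> i + 1
        tm = ((N - i) / (r * i + N - i)) * i / N       # i -> i - 1
        A[i, i] = tp + tm
        A[i, i + 1] = -tp
        A[i, i - 1] = -tm
    return np.linalg.solve(A, b)[1]      # start from a single mutant

N, r = 10, 1.5
exact = (1 - 1 / r) / (1 - r ** -N)
print(moran_fixation(N, r), exact)       # the two values agree
```

For general graphs the state space is the set of mutant configurations rather than just the mutant count, which is exactly why the automated transition-matrix construction of the paper is useful.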

  18. A Gleason-Type Theorem for Any Dimension Based on a Gambling Formulation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Benavoli, Alessio; Facchini, Alessandro; Zaffalon, Marco

    2017-07-01

    Based on a gambling formulation of quantum mechanics, we derive a Gleason-type theorem that holds for any dimension n of a quantum system, and in particular for n=2. The theorem states that the only logically consistent probability assignments are exactly the ones that are definable as the trace of the product of a projector and a density matrix operator. In addition, we detail the reason why dispersion-free probabilities are actually not valid, or rational, probabilities for quantum mechanics, and hence should be excluded from consideration.
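The trace rule singled out by the theorem is easy to illustrate numerically for n = 2: the probability of a projective outcome is Tr(P rho), non-negative and summing to one for any density matrix. A minimal sketch with an illustrative mixed state:

```python
import numpy as np

# Projectors onto the computational basis of a qubit (n = 2)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

# An illustrative density matrix: equal mixture of |0><0| and |+><+|
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = 0.5 * P0 + 0.5 * np.outer(plus, plus.conj())

# Born-rule probabilities Tr(P rho): non-negative, summing to 1
probs = [np.trace(P @ rho).real for P in (P0, P1)]
print(probs)
```

Note that these probabilities are dispersive (strictly between 0 and 1), in line with the paper's point that dispersion-free assignments are not rational probabilities for quantum mechanics.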

  19. Diagnosis of combined faults in Rotary Machinery by Non-Naive Bayesian approach

    NASA Astrophysics Data System (ADS)

    Asr, Mahsa Yazdanian; Ettefagh, Mir Mohammad; Hassannejad, Reza; Razavi, Seyed Naser

    2017-02-01

When combined faults occur in different parts of rotating machines, their features are strongly interdependent. Experts are familiar with the characteristics of individual faults, and ample data are available for single faults; the problem arises when faults are combined and the separation of their characteristics becomes complex. The experts therefore cannot give exact information about the symptoms and nature of a combined fault. In this paper, a novel method is proposed to overcome this drawback. The core idea is to identify combined faults without using combined-fault features as training data; only individual-fault features are used in the training step. For this purpose, after data acquisition and resampling of the obtained vibration signals, Empirical Mode Decomposition (EMD) is used to decompose the multicomponent signals into Intrinsic Mode Functions (IMFs). Proper IMFs for feature extraction are selected using the correlation coefficient. In the feature-extraction step, the Shannon energy entropy of the IMFs is extracted along with statistical features. Since most of the extracted features are strongly dependent, a Non-Naive Bayesian Classifier (NNBC) is employed, which relaxes the fundamental assumption of naive Bayes, i.e., the independence among features. To demonstrate the superiority of NNBC, counterpart methods, including the normal naive Bayesian classifier, the kernel naive Bayesian classifier, and back-propagation neural networks, were applied and the classification results compared. Experimental vibration signals collected from an automobile gearbox were used to verify the effectiveness of the proposed method. During classification, only the features related individually to the healthy state, bearing failure, and gear failures were used for training, while combined-fault features (combined gear and bearing failures) were used as test data. The achieved probabilities for the test data show that the combined fault can be identified with a high success rate.
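The NNBC itself models dependence among features; as a hedged baseline sketch of the simplest counterpart method mentioned above (a normal, i.e. Gaussian, naive Bayesian classifier), with purely illustrative class names and toy features:

```python
import math

def fit_gnb(X, y):
    """Fit Gaussian naive Bayes: per-class feature means, variances
    and class priors, assuming feature independence within a class."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[c] = (means, varis, len(rows) / len(y))
    return model

def predict_gnb(model, x):
    """Return the class with the highest Gaussian log-posterior."""
    def log_post(c):
        means, varis, prior = model[c]
        return math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, varis))
    return max(model, key=log_post)

# Toy training data: two well-separated illustrative "fault" classes
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
y = ["healthy", "healthy", "gear_fault", "gear_fault"]
model = fit_gnb(X, y)
print(predict_gnb(model, [0.95, 0.95]))  # -> gear_fault
```

The independence assumption baked into `log_post` is precisely what the paper's NNBC relaxes for strongly dependent features.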

  20. Suppression effects in feature-based attention

    PubMed Central

    Wang, Yixue; Miller, James; Liu, Taosheng

    2015-01-01

Attending to a feature enhances visual processing of that feature, but it is less clear what occurs to unattended features. Single-unit recording studies in the middle temporal area (MT) have shown that neuronal modulation is a monotonic function of the difference between the attended direction and the neuron's preferred direction. Such a relationship should predict a monotonic suppressive effect in psychophysical performance. However, past research on suppressive effects of feature-based attention has remained inconclusive. We investigated the suppressive effect for motion direction, orientation, and color in three experiments. We asked participants to detect a weak signal among noise and provided a partially valid feature cue to manipulate attention. We measured performance as a function of the offset between the cued and signal feature. We also included neutral trials, where no feature cues were presented, to provide a baseline measure of performance. Across three experiments, we consistently observed enhancement effects when the target feature and cued feature coincided and suppression effects when the target feature deviated from the cued feature. The exact profile of suppression differed across feature dimensions: whereas the profile for direction exhibited a “rebound” effect, the profiles for orientation and color were monotonic. These results demonstrate that unattended features are suppressed during feature-based attention, but the exact suppression profile depends on the specific feature. Overall, the results are largely consistent with neurophysiological data and support the feature-similarity gain model of attention. PMID:26067533

  1. Exact calculations of survival probability for diffusion on growing lines, disks, and spheres: The role of dimension.

    PubMed

    Simpson, Matthew J; Baker, Ruth E

    2015-09-07

    Unlike standard applications of transport theory, the transport of molecules and cells during embryonic development often takes place within growing multidimensional tissues. In this work, we consider a model of diffusion on uniformly growing lines, disks, and spheres. An exact solution of the partial differential equation governing the diffusion of a population of individuals on the growing domain is derived. Using this solution, we study the survival probability, S(t). For the standard non-growing case with an absorbing boundary, we observe that S(t) decays to zero in the long time limit. In contrast, when the domain grows linearly or exponentially with time, we show that S(t) decays to a constant, positive value, indicating that a proportion of the diffusing substance remains on the growing domain indefinitely. Comparing S(t) for diffusion on lines, disks, and spheres indicates that there are minimal differences in S(t) in the limit of zero growth and minimal differences in S(t) in the limit of fast growth. In contrast, for intermediate growth rates, we observe modest differences in S(t) between different geometries. These differences can be quantified by evaluating the exact expressions derived and presented here.

  2. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant, like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10⁻⁷.
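The deletion-contraction recursion mentioned above is the same one used for the Tutte polynomial. It can be illustrated on a related percolation quantity, the all-terminal reliability R(G; p), the probability that a graph stays connected when each bond is open independently with probability p, which satisfies R = p R(G/e) + (1 - p) R(G - e). This sketch illustrates only the recursion; it does not compute the critical polynomial of the paper:

```python
from fractions import Fraction

def reliability(vertices, edges, p):
    """All-terminal reliability R(G; p) by deletion-contraction:
    R = p * R(G/e) + (1 - p) * R(G - e), with base case: a graph
    with no edges is connected iff it has a single vertex."""
    if not edges:
        return Fraction(1) if len(vertices) == 1 else Fraction(0)
    (u, v), rest = edges[0], edges[1:]
    # contract e = (u, v): merge v into u, drop resulting self-loops
    contracted = [(a if a != v else u, b if b != v else u) for a, b in rest]
    contracted = [(a, b) for a, b in contracted if a != b]
    return (p * reliability(vertices - {v}, contracted, p)
            + (1 - p) * reliability(vertices, rest, p))

# Triangle: R = 3p^2 - 2p^3, so R(1/2) = 1/2 exactly
triangle = [(0, 1), (1, 2), (0, 2)]
print(reliability({0, 1, 2}, triangle, Fraction(1, 2)))  # -> 1/2
```

Exact rational arithmetic via `Fraction` keeps the recursion free of rounding error, mirroring the exact-polynomial spirit of the paper.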

  3. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    NASA Astrophysics Data System (ADS)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

In epidemiological studies as well as in clinical practice, the amount of produced medical image data has strongly increased in the last decade. In this context organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  4. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-21

In epidemiological studies as well as in clinical practice, the amount of produced medical image data has strongly increased in the last decade. In this context organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
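The Fourier descriptors used as shape features above can be sketched generically (this is the standard construction, not necessarily the authors' exact normalization): treat the boundary points as a complex signal, take the FFT, drop the DC term for translation invariance, and divide the magnitudes by the first harmonic for scale invariance:

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """Invariant Fourier descriptors of a closed 2-D contour given
    as an (N, 2) array of boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                                # drop DC term: translation invariance
    mags = np.abs(F)                        # magnitudes: rotation invariance
    mags /= mags[1]                         # scale by first harmonic: scale invariance
    return mags[1:k + 1]

# A crude square-like contour; shifting or scaling it leaves the
# descriptors unchanged
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
square = np.c_[np.sign(np.cos(t)), np.sign(np.sin(t))]
d1 = fourier_descriptors(square)
d2 = fourier_descriptors(3.0 * square + 7.0)
print(np.allclose(d1, d2))  # -> True
```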

  5. Return probabilities and hitting times of random walks on sparse Erdös-Rényi graphs.

    PubMed

    Martin, O C; Sulc, P

    2010-03-01

    We consider random walks on random graphs, focusing on return probabilities and hitting times for sparse Erdös-Rényi graphs. Using the tree approach, which is expected to be exact in the large graph limit, we show how to solve for the distribution of these quantities and we find that these distributions exhibit a form of self-similarity.
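For any finite graph the n-step return probability is exact and elementary: it is the diagonal of the n-th power of the transition matrix P = D⁻¹A. A minimal sketch on a small sparse Erdös-Rényi graph (size, mean degree, and seed are illustrative; the tree approach of the paper concerns the large-graph limit):

```python
import numpy as np

rng = np.random.default_rng(0)
N, c = 50, 3.0                                  # vertices, mean degree
A = (rng.random((N, N)) < c / N).astype(float)  # G(N, c/N) adjacency
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops
deg = A.sum(axis=1)
keep = deg > 0                                  # drop isolated vertices
A, deg = A[np.ix_(keep, keep)], deg[keep]

P = A / deg[:, None]                            # row-stochastic transition matrix
Pn = np.linalg.matrix_power(P, 8)               # 8-step transition probabilities
print(Pn.diagonal().mean())                     # mean 8-step return probability
```

The distributional self-similarity the paper derives describes how these diagonal entries fluctuate from vertex to vertex as N grows.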

  6. The role of fractional time-derivative operators on anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Tateishi, Angel A.; Ribeiro, Haroldo V.; Lenzi, Ervin K.

    2017-10-01

Generalized diffusion equations with fractional-order derivatives have proven quite efficient at describing diffusion in complex systems, with the advantage of producing exact expressions for the underlying diffusive properties. Recently, researchers have proposed different fractional-time operators (namely, the Caputo-Fabrizio and Atangana-Baleanu operators) which, differently from the well-known Riemann-Liouville operator, are defined by non-singular memory kernels. Here we propose to use these new operators to generalize the usual diffusion equation. By analyzing the corresponding fractional diffusion equations within the continuous time random walk framework, we obtained waiting time distributions characterized by exponential, stretched exponential, and power-law functions, as well as a crossover between two behaviors. For the mean square displacement, we found crossovers between usual and confined diffusion, and between usual and sub-diffusion. We obtained the exact expressions for the probability distributions, where non-Gaussian and stationary distributions emerged. The latter feature is remarkable because the fractional diffusion equation is solved without external forces and subject to free-diffusion boundary conditions. We have further shown that these new fractional diffusion equations are related to diffusive processes with stochastic resetting, and to fractional diffusion equations with derivatives of distributed order. Thus, our results suggest that these new operators may be a simple and efficient way of incorporating different structural aspects into the system, opening new possibilities for modeling and investigating anomalous diffusive processes.

  7. The transmission process: A combinatorial stochastic process for the evolution of transmission trees over networks.

    PubMed

    Sainudiin, Raazesh; Welch, David

    2016-12-07

    We derive a combinatorial stochastic process for the evolution of the transmission tree over the infected vertices of a host contact network in a susceptible-infected (SI) model of an epidemic. Models of transmission trees are crucial to understanding the evolution of pathogen populations. We provide an explicit description of the transmission process on the product state space of (rooted planar ranked labelled) binary transmission trees and labelled host contact networks with SI-tags as a discrete-state continuous-time Markov chain. We give the exact probability of any transmission tree when the host contact network is a complete, star or path network - three illustrative examples. We then develop a biparametric Beta-splitting model that directly generates transmission trees with exact probabilities as a function of the model parameters, but without explicitly modelling the underlying contact network, and show that for specific values of the parameters we can recover the exact probabilities for our three example networks through the Markov chain construction that explicitly models the underlying contact network. We use the maximum likelihood estimator (MLE) to consistently infer the two parameters driving the transmission process based on observations of the transmission trees and use the exact MLE to characterize equivalence classes over the space of contact networks with a single initial infection. An exploratory simulation study of the MLEs from transmission trees sampled from three other deterministic and four random families of classical contact networks is conducted to shed light on the relation between the MLEs of these families with some implications for statistical inference along with pointers to further extensions of our models. 
The insights developed here are also applicable to the simplest models of "meme" evolution in online social media networks through transmission events that can be distilled from observable actions such as "likes", "mentions", "retweets" and "+1s", along with any concomitant comments. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
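The SI transmission process described above can be sketched as a discrete-state continuous-time Markov chain on a complete contact network; the code below (our own minimal illustration, with a hypothetical function name and rate parameter) records the transmission tree as a parent map:

```python
import random

def si_transmission_tree(n=8, rate=1.0, seed=42):
    # Continuous-time SI process on a complete contact network: each
    # infected-susceptible pair transmits at rate `rate`.  Vertex 0 is
    # the initial infection; parent[v] records who infected v.
    rng = random.Random(seed)
    infected = {0}
    parent = {0: None}
    t = 0.0
    while len(infected) < n:
        sus = [v for v in range(n) if v not in infected]
        total = rate * len(infected) * len(sus)   # total infection rate
        t += rng.expovariate(total)               # exponential holding time
        u = rng.choice(sorted(infected))          # transmitting vertex
        v = rng.choice(sus)                       # newly infected vertex
        infected.add(v)
        parent[v] = u
    return parent, t
```

Each run yields one transmission tree over the infected vertices; on the complete network every infected-susceptible pair is equally likely to transmit next.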

  8. Analytical study of nano-scale logical operations

    NASA Astrophysics Data System (ADS)

    Patra, Moumita; Maiti, Santanu K.

    2018-07-01

    A complete analytical prescription is given for performing three basic (OR, AND, NOT) and two universal (NAND, NOR) logic gate operations at the nano-scale using simple tailor-made geometries. Two different geometries, ring-like and chain-like, are taken into account, where in each case the bridging conductor is coupled to a local atomic site through a dangling bond whose site energy can be controlled by means of an external gate electrode. The main idea is that when the energy of the injected electron matches the site energy of the local atomic site, the transmission probability drops exactly to zero, whereas the junction exhibits finite transmission at other energies. Utilizing this prescription we perform the logical operations, and we strongly believe that the proposed results can be verified in the laboratory. Finally, we numerically compute the two-terminal transmission probability for general models, and the numerical results match exactly with our analytical findings.
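The anti-resonance mechanism (transmission pinned to zero when the carrier energy equals the dangling-site energy) can be checked with a small tight-binding Green's-function calculation. The geometry and parameter values below are our own illustrative sketch of that mechanism, not the authors' exact setup:

```python
import math

def transmission(E, eps0=0.0, eps_d=0.5, t_dangle=0.5, t_lead=1.0):
    # One site between two semi-infinite 1D leads, with a dangling atom
    # (site energy eps_d) attached by hopping t_dangle.
    # Retarded lead self-energy of a 1D chain, valid for |E| < 2*t_lead:
    sigma = (E - 1j * math.sqrt(4.0 * t_lead**2 - E**2)) / 2.0
    gamma = -2.0 * sigma.imag                    # lead broadening
    # The dangling bond renormalizes the site energy by t^2/(E - eps_d);
    # this diverges at E = eps_d, which forces the transmission to zero.
    g = 1.0 / (E - eps0 - t_dangle**2 / (E - eps_d) - 2.0 * sigma)
    return gamma * gamma * abs(g) ** 2
```

Evaluating near E = eps_d gives a vanishing transmission, while energies away from the anti-resonance transmit almost perfectly.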

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinitsyn, Nikolai A.

    In this paper, I identify a nontrivial four-state Landau-Zener model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. The model describes an experimentally accessible system of two interacting qubits, such as a localized state in a Dirac material with both valley and spin degrees of freedom or a singly charged quantum dot (QD) molecule with spin-orbit coupling. Application of a linearly time-dependent magnetic field induces a sequence of quantum level crossings with the possibility of interference of different trajectories in a semiclassical picture. I argue that this system satisfies the criteria of integrability in the multistate Landau-Zener theory, which allows one to derive explicit exact analytical expressions for the transition probability matrix. Finally, I also argue that this model is likely a special case of a larger class of solvable systems, and present a six-state generalization as an example.

  10. Exact transition probabilities for a linear sweep through a Kramers-Kronig resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Chen; Sinitsyn, Nikolai A.

    2015-11-19

    We consider a localized electronic spin controlled by a circularly polarized optical beam and an external magnetic field. When the frequency of the beam is tuned near an optical resonance with a continuum of higher-energy states, effective magnetic fields are induced on the two-level system via the inverse Faraday effect. We explore the process in which the frequency of the beam is made linearly time-dependent so that it sweeps through the optical resonance, starting and ending at values far away from it. In addition to changes of spin states, Kramers-Kronig relations guarantee that a localized electron can also escape into a continuum of states. We argue that the probabilities of transitions between different possible electronic states after such a sweep of the optical frequency can be found exactly, regardless of the shape of the resonance. In conclusion, we also discuss the extension of our results to multistate systems.

  11. Exact Solutions of Burnt-Bridge Models for Molecular Motor Transport

    NASA Astrophysics Data System (ADS)

    Morozov, Alexander; Pronina, Ekaterina; Kolomeisky, Anatoly; Artyomov, Maxim

    2007-03-01

    Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called ``bridges''), is investigated theoretically by analyzing discrete-state stochastic ``burnt-bridge'' models. When an unbiased diffusing particle crosses a bridge, the link can be destroyed (``burned'') with a probability p, creating a biased directed motion for the particle. It is shown that for burning probability p=1 the system can be mapped onto a one-dimensional single-particle hopping model on an infinite periodic lattice, which allows one to calculate all dynamic properties exactly. For the general case of p<1, a new theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics, periodic and random distributions of bridges, and different burning dynamics are analyzed and compared. Theoretical predictions are supported by extensive Monte Carlo computer simulations. The theoretical results are applied to the analysis of experiments on collagenase motor proteins.
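The rectification mechanism (an unbiased walker acquiring drift from burnt bridges) is easy to reproduce in a minimal discrete-time Monte Carlo; the variant below, with our own parameter choices and a reflecting rule at burnt bridges, is an illustration of the idea rather than the paper's exact model:

```python
import random

def burnt_bridge_velocity(p=1.0, spacing=5, steps=20000, seed=3):
    # Unbiased +/-1 walker; a "bridge" sits on every bond (k*spacing - 1,
    # k*spacing).  Crossing a bridge left-to-right burns it with
    # probability p; a burnt bridge cannot be recrossed leftward, which
    # rectifies the diffusion into a net drift.
    rng = random.Random(seed)
    x = 0
    burnt = set()
    for _ in range(steps):
        dx = rng.choice((-1, 1))
        bond = min(x, x + dx)                  # left site of crossed bond
        is_bridge = (bond + 1) % spacing == 0
        if dx == -1 and is_bridge and bond in burnt:
            continue                           # burnt bridge blocks the step
        x += dx
        if dx == 1 and is_bridge and rng.random() < p:
            burnt.add(bond)
    return x / steps
```

For p=1 every crossed bridge becomes a one-way ratchet tooth, giving the largest drift velocity; smaller p gives slower but still directed motion.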

  12. Exact evaluations of some Meijer G-functions and probability of all eigenvalues real for the product of two Gaussian matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2015-11-01

    We provide a proof to a recent conjecture by Forrester (2014 J. Phys. A: Math. Theor. 47 065202) regarding the algebraic and arithmetic structure of Meijer G-functions which appear in the expression for probability of all eigenvalues real for the product of two real Gaussian matrices. In the process we come across several interesting identities involving Meijer G-functions.

  13. Electron number probability distributions for correlated wave functions.

    PubMed

    Francisco, E; Martín Pendás, A; Blanco, M A

    2007-03-07

    Efficient formulas for computing the probability of finding exactly an integer number of electrons in an arbitrarily chosen volume are only known for single-determinant wave functions [E. Cances et al., Theor. Chem. Acc. 111, 373 (2004)]. In this article, an algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes. The derived expressions are applied to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules. Results for a series of test molecules are presented, paying particular attention to the effects of electron correlation and of some numerical approximations on the computed probabilities.

  14. Computational method for exact frequency-dependent rays on the basis of the solution of the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Protasov, M.; Gadylshin, K.

    2017-07-01

    A numerical method is proposed for the calculation of exact frequency-dependent rays when the solution of the Helmholtz equation is known. The properties of frequency-dependent rays are analysed and compared with classical ray theory and with the method of finite-difference modelling for the first time. In this paper, we study the dependence of these rays on the frequency of signals and show the convergence of the exact rays to the classical rays with increasing frequency. A number of numerical experiments demonstrate the distinctive features of exact frequency-dependent rays, in particular, their ability to penetrate into shadow zones that are impenetrable for classical rays.

  15. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.

  16. The relationship study between image features and detection probability based on psychology experiments

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei

    2011-04-01

    Detection probability is an important index for representing and estimating target viability, providing a basis for target recognition and decision-making. However, obtaining detection probabilities in practice consumes a great deal of time and manpower, and because interpreters differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perceived quantity through psychology experiments, a probability model was established as follows. First, four image features that directly affect detection were extracted and quantified, and four feature-similarity degrees between target and background were defined. Second, the relationship between each single-feature similarity degree and perceived quantity was established on psychological principles, and target-interpretation experiments were designed involving about five hundred interpreters and two hundred images. To reduce correlation among the image features, a large number of synthetic images were produced, each differing from its background in a single feature: brightness, chromaticity, texture, or shape. The model quantities were then determined by analyzing and fitting the large body of experimental data. Finally, applying statistical decision theory to the experimental results yielded the relationship between perceived quantity and target detection probability. Verified against a great deal of target interpretation in practice, the model provides target detection probabilities quickly and objectively.

  17. A linear model of population dynamics

    NASA Astrophysics Data System (ADS)

    Lushnikov, A. A.; Kagan, A. I.

    2016-08-01

    The Malthus process of population growth is reformulated in terms of the probability w(n,t) of finding exactly n individuals at time t, assuming that both the birth and the death rates are linear functions of the population size. The master equation for w(n,t) is solved exactly. It is shown that w(n,t) strongly deviates from the Poisson distribution and is expressed in terms of either Laguerre polynomials or a modified Bessel function. The latter expression allows for considerable simplifications of the asymptotic analysis of w(n,t).
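The master equation for w(n,t) with linear rates λn (birth) and μn (death) can be integrated numerically and checked against the Malthusian mean n(0)e^((λ−μ)t). The discretization below is our own illustrative sketch, not the paper's exact solution:

```python
def birth_death_mean(lam=0.3, mu=0.1, n0=5, t_end=2.0, n_max=400, dt=1e-3):
    # Forward-Euler integration of the master equation
    #   dw_n/dt = lam*(n-1)*w_{n-1} + mu*(n+1)*w_{n+1} - (lam+mu)*n*w_n
    # on a truncated state space n = 0..n_max.
    w = [0.0] * (n_max + 1)
    w[n0] = 1.0                      # start with exactly n0 individuals
    for _ in range(int(t_end / dt)):
        new = [0.0] * (n_max + 1)
        for n in range(n_max + 1):
            gain = 0.0
            if n > 0:
                gain += lam * (n - 1) * w[n - 1]   # birth into state n
            if n < n_max:
                gain += mu * (n + 1) * w[n + 1]    # death into state n
            new[n] = w[n] + dt * (gain - (lam + mu) * n * w[n])
        w = new
    mean = sum(n * p for n, p in enumerate(w))
    return mean, w
```

With λ = 0.3, μ = 0.1, n(0) = 5, the mean at t = 2 should match 5·e^0.4 ≈ 7.459, while the full distribution w(n,t) is markedly non-Poissonian.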

  18. Conserved directed percolation: exact quasistationary distribution of small systems and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    César Mansur Filho, Júlio; Dickman, Ronald

    2011-05-01

    We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32 000 sites. The resulting estimates for critical exponents β, β/ν⊥, …

  19. Evolution and selection of river networks: Statics, dynamics, and complexity

    PubMed Central

    Rinaldo, Andrea; Rigon, Riccardo; Banavar, Jayanth R.; Maritan, Amos; Rodriguez-Iturbe, Ignacio

    2014-01-01

    Moving from the exact result that drainage network configurations minimizing total energy dissipation are stationary solutions of the general equation describing landscape evolution, we review the static properties and the dynamic origins of the scale-invariant structure of optimal river patterns. Optimal channel networks (OCNs) are feasible optimal configurations of a spanning network mimicking landscape evolution and network selection through imperfect searches for dynamically accessible states. OCNs are spanning loopless configurations, however, only under precise physical requirements that arise under the constraints imposed by river dynamics—every spanning tree is exactly a local minimum of total energy dissipation. It is remarkable that dynamically accessible configurations, the local optima, stabilize into diverse metastable forms that are nevertheless characterized by universal statistical features. Such universal features explain very well the statistics of, and the linkages among, the scaling features measured for fluvial landforms across a broad range of scales regardless of geology, exposed lithology, vegetation, or climate, and differ significantly from those of the ground state, known exactly. Results are provided on the emergence of criticality through adaptive evolution and on the yet-unexplored range of applications of the OCN concept. PMID:24550264

  20. Degeneracy of energy levels of pseudo-Gaussian oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iacob, Theodor-Felix; Iacob, Felix, E-mail: felix@physics.uvt.ro; Lute, Marina

    2015-12-07

    We study the main spectral properties of the isotropic radial pseudo-Gaussian oscillators, focusing on the degeneracy of the energy levels with respect to the orbital angular momentum quantum number. In a previous work [6] we showed that the pseudo-Gaussian oscillators belong to the class of quasi-exactly solvable models, and an exact solution has been found.

  1. Exact Solutions for Wind-Driven Coastal Upwelling and Downwelling over Sloping Topography

    NASA Astrophysics Data System (ADS)

    Choboter, P.; Duke, D.; Horton, J.; Sinz, P.

    2009-12-01

    The dynamics of wind-driven coastal upwelling and downwelling are studied using a simplified dynamical model. Exact solutions are examined as a function of time and over a family of sloping topographies. Assumptions in the two-dimensional model include a frictionless ocean interior below the surface Ekman layer, and no alongshore dependence of the variables; however, dependence in the cross-shore and vertical directions is retained. Additionally, density and alongshore momentum are advected by the cross-shore velocity in order to maintain thermal wind. The time-dependent initial-value problem is solved with constant initial stratification and no initial alongshore flow. An alongshore pressure gradient is added to allow the cross-shore flow to be geostrophically balanced far from shore. Previously, this model has been used to study upwelling over flat-bottom and sloping topographies, but the novel feature in this work is the discovery of exact solutions for downwelling. These exact solutions are compared to numerical solutions from a primitive-equation ocean model, based on the Princeton Ocean Model, configured in a similar two-dimensional geometry. Many typical features of the evolution of density and velocity during downwelling are displayed by the analytical model.

  2. Autocorrelation of the susceptible-infected-susceptible process on networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Van Mieghem, Piet

    2018-06-01

    In this paper, we focus on the autocorrelation of the susceptible-infected-susceptible (SIS) process on networks. The N -intertwined mean-field approximation (NIMFA) is applied to calculate the autocorrelation properties of the exact SIS process. We derive the autocorrelation of the infection state of each node and the fraction of infected nodes both in the steady and transient states as functions of the infection probabilities of nodes. Moreover, we show that the autocorrelation can be used to estimate the infection and curing rates of the SIS process. The theoretical results are compared with the simulation of the exact SIS process. Our work fully utilizes the potential of the mean-field method and shows that NIMFA can indeed capture the autocorrelation properties of the exact SIS process.
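The NIMFA governing equation, dv_i/dt = −δ v_i + β (1 − v_i) Σ_j a_ij v_j, can be sketched as a small ODE integration. On a regular graph of degree r the NIMFA steady-state infection probability above the epidemic threshold is 1 − δ/(βr); the function name, integration scheme, and parameters below are our own illustrative choices:

```python
def nimfa_steady_state(adj, beta, delta, t_end=50.0, dt=0.01):
    # Forward-Euler integration of the NIMFA mean-field equations:
    #   dv_i/dt = -delta*v_i + beta*(1 - v_i) * sum_j a_ij * v_j
    # where v_i is the infection probability of node i.
    n = len(adj)
    v = [0.5] * n                      # initial infection probabilities
    for _ in range(int(t_end / dt)):
        v = [vi + dt * (-delta * vi + beta * (1.0 - vi) *
                        sum(adj[i][j] * v[j] for j in range(n)))
             for i, vi in enumerate(v)]
    return v
```

On the complete graph K5 (degree r = 4) with β = δ = 1, every node converges to 1 − 1/4 = 0.75.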

  3. A Note on Monotonicity Assumptions for Exact Unconditional Tests in Binary Matched-pairs Designs

    PubMed Central

    Li, Xiaochun; Liu, Mengling; Goldberg, Judith D.

    2011-01-01

    Summary Exact unconditional tests have been widely applied to test the difference between two probabilities for 2×2 matched-pairs binary data with small sample sizes. In this context, Lloyd (2008, Biometrics 64, 716–723) proposed an E + M p-value, which showed better performance than the existing M p-value and C p-value. However, the analytical calculation of the E + M p-value requires that the Barnard convexity condition be satisfied, which can be challenging to prove theoretically. In this paper, by a simple reformulation, we show that a weaker condition, conditional monotonicity, is sufficient to calculate all three p-values (M, C and E + M) and their corresponding exact sizes. Moreover, this conditional monotonicity condition is applicable to non-inferiority tests. PMID:21466507
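The core idea of an exact unconditional matched-pairs test (maximize the exact tail probability over the nuisance parameter) can be illustrated by brute-force enumeration. The sketch below is our own simplified one-sided M-type p-value, not Lloyd's E + M procedure; under H0 the two discordant-pair probabilities are equal:

```python
from math import comb

def exact_unconditional_pvalue(n, b_obs, c_obs, grid=100):
    # n matched pairs; b_obs and c_obs are the observed discordant counts.
    # Under H0 each discordant pair is equally likely to fall in either
    # direction.  Nuisance parameter theta = P(pair is discordant);
    # the M p-value is the supremum over theta of P(B - C >= b_obs - c_obs).
    t_obs = b_obs - c_obs
    pmax = 0.0
    for i in range(1, grid):
        theta = i / grid
        total = 0.0
        for d in range(n + 1):            # d = number of discordant pairs
            pd = comb(n, d) * theta**d * (1.0 - theta)**(n - d)
            for b in range(d + 1):        # b of them in one direction
                if b - (d - b) >= t_obs:
                    total += pd * comb(d, b) * 0.5**d
        pmax = max(pmax, total)
    return pmax
```

A strongly one-sided table such as (b, c) = (5, 0) with n = 10 yields a much smaller p-value than a nearly balanced one such as (3, 2).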

  4. Exact results for models of multichannel quantum nonadiabatic transitions

    DOE PAGES

    Sinitsyn, N. A.

    2014-12-11

    We consider nonadiabatic transitions in explicitly time-dependent systems with Hamiltonians of the form Ĥ(t) = Â + B̂t + Ĉ/t, where t is time and Â, B̂, Ĉ are Hermitian N × N matrices. We show that in any model of this type, scattering matrix elements satisfy nontrivial exact constraints that follow from the absence of the Stokes phenomenon for solutions with specific conditions at t → −∞. This allows one to continue such solutions analytically to t → +∞, and connect their asymptotic behavior at t → −∞ and t → +∞. This property becomes particularly useful when a model shows additional discrete symmetries. Specifically, we derive a number of simple exact constraints and explicit expressions for scattering probabilities in such systems.

  5. Probability mapping of scarred myocardium using texture and intensity features in CMR images

    PubMed Central

    2013-01-01

    Background The myocardium exhibits a heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of the scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using texture and intensity features to describe the heterogeneous nature of the scarred myocardium in CMR images after MI. Scarred and non-scarred tissue are represented with high and low probabilities, respectively; intermediate values possibly indicate areas where scarred and healthy tissue are interwoven. The probability map of the scarred myocardium is calculated using a probability function based on Bayes' rule, and any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features, one based on the mean pixel intensity and the other on the underlying texture information of the scarred and non-scarred myocardium, and present examples of probability maps computed from each. We hypothesize that probability mapping of the myocardium offers an alternative visualization, possibly showing physiologically significant details that are difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define different cardiac segments and thereby to identify areas of the myocardium of diagnostic importance (such as core and border areas in the scarred myocardium). PMID:24053280
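A Bayes-rule probability map of the kind described can be sketched with Gaussian intensity likelihoods per tissue class; the class means, standard deviations, and prior below are hypothetical values of our own, not fitted to CMR data:

```python
import math

def probability_map(pixels, mu_scar, sd_scar, mu_norm, sd_norm, prior_scar=0.5):
    # Posterior probability that each pixel belongs to scarred tissue:
    #   P(scar | x) = p(x|scar)P(scar) / (p(x|scar)P(scar) + p(x|norm)P(norm))
    # with Gaussian intensity likelihoods for each class.
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    out = []
    for x in pixels:
        ls = gauss(x, mu_scar, sd_scar) * prior_scar
        ln = gauss(x, mu_norm, sd_norm) * (1.0 - prior_scar)
        out.append(ls / (ls + ln))
    return out
```

Bright pixels map to probabilities near 1, dark pixels near 0, and intensities halfway between the two class means (with equal spreads and priors) map to exactly 0.5, the "interwoven" regime.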

  6. A graph lattice approach to maintaining and learning dense collections of subgraphs as image features.

    PubMed

    Saund, Eric

    2013-10-01

    Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice—a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching, instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.

  7. Entanglement dynamics in a non-Markovian environment: An exactly solvable model

    NASA Astrophysics Data System (ADS)

    Wilson, Justin H.; Fregoso, Benjamin M.; Galitski, Victor M.

    2012-05-01

    We study the non-Markovian effects on the dynamics of entanglement in an exactly solvable model that involves two independent oscillators, each coupled to its own stochastic noise source. First, we develop Lie algebraic and functional integral methods to find an exact solution to the single-oscillator problem which includes an analytic expression for the density matrix and the complete statistics, i.e., the probability distribution functions for observables. For long bath time correlations, we see nonmonotonic evolution of the uncertainties in observables. Further, we extend this exact solution to the two-particle problem and find the dynamics of entanglement in a subspace. We find the phenomena of “sudden death” and “rebirth” of entanglement. Interestingly, all memory effects enter via the functional form of the energy and hence the time of death and rebirth is controlled by the amount of noisy energy added into each oscillator. If this energy increases above (decreases below) a threshold, we obtain sudden death (rebirth) of entanglement.

  8. Task probability and report of feature information: what you know about what you 'see' depends on what you expect to need.

    PubMed

    Pilling, Michael; Gellatly, Angus

    2013-07-01

    We investigated the influence of dimensional set on report of object feature information using an immediate memory probe task. Participants viewed displays containing up to 36 coloured geometric shapes which were presented for several hundred milliseconds before one item was abruptly occluded by a probe. A cue presented simultaneously with the probe instructed participants to report either about the colour or shape of the probe item. A dimensional set towards the colour or shape of the presented items was induced by manipulating task probability - the relative probability with which the two feature dimensions required report. This was done across two participant groups: One group was given trials where there was a higher report probability of colour, the other a higher report probability of shape. Two experiments showed that features were reported most accurately when they were of high task probability, though in both cases the effect was largely driven by the colour dimension. Importantly the task probability effect did not interact with display set size. This is interpreted as tentative evidence that this manipulation influences feature processing in a global manner and at a stage prior to visual short term memory. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Growing optimal scale-free networks via likelihood

    NASA Astrophysics Data System (ADS)

    Small, Michael; Li, Yingying; Stemler, Thomas; Judd, Kevin

    2015-04-01

    Preferential attachment, by which new nodes attach to existing nodes with probability proportional to the existing nodes' degree, has become the standard growth model for scale-free networks, where the asymptotic probability of a node having degree k is proportional to k^(-γ). However, the motivation for this model is entirely ad hoc. We use exact likelihood arguments and show that the optimal way to build a scale-free network is to attach most new links to nodes of low degree. Curiously, this leads to a scale-free network with a single dominant hub: a starlike structure we call a superstar network. Asymptotically, the optimal strategy is to attach each new node to one of the nodes of degree k with probability proportional to 1/(N + ζ(γ)(k+1)^γ) (in an N-node network): a stronger bias toward high-degree nodes than exhibited by standard preferential attachment. Our algorithm generates optimally scale-free networks (the superstar networks) as well as randomly sampling the space of all scale-free networks with a given degree exponent γ. We generate viable realizations with finite N for 1 ≪ γ < 2 as well as γ > 2. We observe an apparently discontinuous transition at γ ≈ 2 between so-called superstar networks and more treelike realizations. Gradually increasing γ further leads to reemergence of a superstar hub. To quantify these structural features, we derive a new analytic expression for the expected degree exponent of a pure preferential attachment process and introduce alternative measures of network entropy. Our approach is generic and can also be applied to an arbitrary degree distribution.
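The baseline the authors compare against, standard preferential attachment, can be sketched in a few lines using the degree-weighted-list trick (each node appears in the list once per unit of degree); the function name and parameters are our own, and this is the classical model, not the paper's likelihood-optimal rule:

```python
import random

def preferential_attachment(n=500, seed=7):
    # Standard preferential attachment with one link per new node: the
    # target is chosen with probability proportional to its degree.
    rng = random.Random(seed)
    targets = [0, 1]            # node i appears deg(i) times in this list
    deg = {0: 1, 1: 1}          # start from a single edge 0-1
    for new in range(2, n):
        old = rng.choice(targets)   # degree-proportional choice
        deg[new] = 1
        deg[old] += 1
        targets += [new, old]
    return deg
```

The resulting degree sequence is heavy-tailed: the mean degree stays near 2 while the largest hub grows far beyond it, the "rich get richer" signature.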

  10. A methodology for the transfer of probabilities between accident severity categories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlow, J. D.; Neuhauser, K. S.

    A methodology has been developed which allows the accident probabilities associated with one accident-severity category scheme to be transferred to another severity category scheme. The methodology requires that the schemes use a common set of parameters to define the categories. The transfer of accident probabilities is based on the relationships between probability of occurrence and each of the parameters used to define the categories. Because of the lack of historical data describing accident environments in engineering terms, these relationships may be difficult to obtain directly for some parameters. Numerical models or experienced judgement are often needed to obtain the relationships. These relationships, even if they are not exact, allow the accident probability associated with any severity category to be distributed within that category in a manner consistent with accident experience, which in turn allows the accident probability to be appropriately transferred to a different category scheme.

  11. Statistical theory of combinatorial libraries of folding proteins: energetic discrimination of a target structure.

    PubMed

    Zou, J; Saven, J G

    2000-02-11

    A self-consistent theory is presented that can be used to estimate the number and composition of sequences satisfying a predetermined set of constraints. The theory is formulated so as to examine the features of sequences having a particular value of Δ = E_f − Ē_u, where E_f is the energy of a sequence in the target structure and Ē_u is an average energy over non-target structures. The theory yields the probabilities w_i(α) that each position i in the sequence is occupied by a particular monomer type α. The theory is applied to a simple lattice model of proteins. Excellent agreement is observed between the theory and the results of exact enumerations. The theory provides a quantitative framework for the design and interpretation of combinatorial experiments involving proteins, where a library of amino acid sequences is searched for sequences that fold to a desired structure. Copyright 2000 Academic Press.

  12. Gem and mineral identification using GL Gem Raman and comparison with other portable instruments

    NASA Astrophysics Data System (ADS)

    Culka, Adam; Hyršl, Jaroslav; Jehlička, Jan

    2016-11-01

    Several mainly silicate minerals in their gemstone varieties were analysed with the GL Gem Raman portable system by Gemlab R&T (Vancouver, Canada) in order to assess the general performance of this relatively inexpensive tool, developed specifically for gemstone identification. The Raman spectra of gemstones acquired with this system were then critically compared with data obtained with several other portable or handheld Raman instruments. The spectra acquired with the Gem Raman instrument were typically of lower quality: characteristic shortcomings included a steep baseline, probably due to fluorescence of the minerals; broader Raman bands, and therefore poorer resolution of closely spaced bands; and generally larger shifts of band positions from the reference values. Some gemstone groups, such as rubies, did not provide useful Raman spectra at all. Nevertheless, general identification was possible for a selection of gemstones.

  13. Non Kolmogorov Probability Models Outside Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi

    2009-03-01

    This paper is devoted to analysis of main conceptual problems in the interpretation of QM: reality, locality, determinism, physical state, Heisenberg principle, "deterministic" and "exact" theories, laws of chance, notion of event, statistical invariants, adaptive realism, EPR correlations and, finally, the EPR-chameleon experiment.

  14. [Comments on the use of the "life-table method" in orthopedics].

    PubMed

    Hassenpflug, J; Hahne, H J; Hedderich, J

    1992-01-01

    In the description of long-term results, e.g., of joint replacements, survivorship analysis is used increasingly in orthopaedic surgery. Survivorship analysis describes the frequency of failure more usefully than global percentage statements. The relative probability of failure for fixed intervals is drawn from the number of patients followed up and the frequency of failure. The complementary probabilities of success are linked in their temporal sequence, thus representing the probability of survival at a fixed endpoint. A necessary condition for the use of this procedure is the exact definition of the moment and manner of failure. It is described how to establish survivorship tables.
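The linking of interval success probabilities described above is the actuarial (life-table) method. The sketch below is our own minimal version, using the standard actuarial convention of counting patients withdrawn during an interval as at risk for half of it:

```python
def life_table_survival(at_risk, failures, withdrawals):
    # Actuarial life-table estimate: for each interval i, the failure
    # probability q_i = d_i / (n_i - w_i/2), and the cumulative survival
    # is the running product of the interval success probabilities (1 - q_i).
    surv = []
    s = 1.0
    for n, d, w in zip(at_risk, failures, withdrawals):
        eff = n - w / 2.0              # effective number at risk
        q = d / eff if eff > 0 else 0.0
        s *= (1.0 - q)
        surv.append(s)
    return surv
```

For example, 100 implants at risk with 5 failures and 10 withdrawals in the first interval gives an interval success probability of 1 − 5/95, and subsequent intervals multiply onto that running product.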

  15. Quantum particle displacement by a moving localized potential trap

    NASA Astrophysics Data System (ADS)

    Granot, E.; Marchewka, A.

    2009-04-01

    We describe the dynamics of a bound state of an attractive δ-well under displacement of the potential. Exact analytical results are presented for the suddenly moved potential. Since this is a quantum system, only a fraction of the initially confined wave function remains confined to the moving potential. However, it is shown that besides the probability of remaining confined to the moving barrier and the probability of remaining in the initial position, there is also a certain probability for the particle to move at double speed. A quasi-classical interpretation of this effect is suggested. The temporal and spectral dynamics of each of the scenarios are investigated.

  16. Wave vector modification of the infinite order sudden approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sachs, J.G.; Bowman, J.M.

    1980-10-15

    A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pn1→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=|nf-ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.

  17. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α>0 and β>0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting PI(d) as the probability of infection at a given mean dose d, the widely used dose-response model PI(d)=1-(1+d/β)^-α is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α≪β and β≫1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and the constraint condition β̂>(22α̂)^0.50 for 0.02<α̂<2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
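
    A minimal sketch comparing the approximate formula with the exact beta-Poisson model, evaluating the Kummer function 1F1 by its defining series (numerically adequate only for the modest doses used here; α = 0.2, β = 40 are made-up illustration values, not from the paper):

```python
def hyp1f1(a, b, x, tol=1e-12, kmax=500):
    """Kummer confluent hypergeometric 1F1(a; b; x) by its defining series
    (adequate for the modest |x| used below; not a general-purpose routine)."""
    term, total = 1.0, 1.0
    for k in range(kmax):
        term *= (a + k) / (b + k) * x / (k + 1)
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def p_infect_exact(d, alpha, beta):
    """Exact beta-Poisson: P(d) = 1 - 1F1(alpha; alpha + beta; -d)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def p_infect_approx(d, alpha, beta):
    """Widely used approximation: P(d) = 1 - (1 + d/beta)**(-alpha),
    accurate when alpha << beta and beta >> 1."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

# With alpha small relative to beta, the two curves nearly coincide:
for d in (0.5, 2.0, 10.0):
    print(d, p_infect_exact(d, 0.2, 40.0), p_infect_approx(d, 0.2, 40.0))
```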

  18. Wave vector modification of the infinite order sudden approximation

    NASA Astrophysics Data System (ADS)

    Sachs, Judith Grobe; Bowman, Joel M.

    1980-10-01

    A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pn1→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=|nf-ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.

  19. Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface

    NASA Astrophysics Data System (ADS)

    Liu, Tianhui; Fu, Bina; Zhang, Dong H.

    2013-11-01

    The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. A new interesting site-averaged effect is found for the title molecule-surface system that one can essentially reproduce the six-dimensional dissociation probability by averaging the four-dimensional dissociation probabilities over 25 fixed sites.

  20. Zone clearance in an infinite TASEP with a step initial condition

    NASA Astrophysics Data System (ADS)

    Cividini, Julien; Appert-Rolland, Cécile

    2017-06-01

    The TASEP is a paradigmatic model of out-of-equilibrium statistical physics, for which many quantities have been computed, either exactly or by approximate methods. In this work we study two new kinds of observables that have some relevance in biological or traffic models. They represent the probability for a given clearance zone of the lattice to be empty (for the first time) at a given time, starting from a step density profile. Exact expressions are obtained for single-time quantities, while more involved history-dependent observables are studied by Monte Carlo simulation, and partially predicted by a phenomenological approach.
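
    The single-time empty-zone probability can also be estimated by direct Monte Carlo simulation of the continuous-time TASEP; this is an illustrative sketch on a small lattice with an assumed clearance zone near the initial step, not the authors' exact or phenomenological machinery:

```python
import random

def simulate_tasep(L=40, t_max=10.0, zone=(18, 20), trials=500, seed=7):
    """Estimate the probability that the clearance zone (sites zone[0]..zone[1])
    is empty at time t_max, starting from a step profile (left half occupied).
    Continuous-time dynamics: each bond i -> i+1 is attempted at rate 1, and a
    hop occurs if site i is occupied and site i+1 is empty."""
    random.seed(seed)
    empty_count = 0
    for _ in range(trials):
        occ = [i < L // 2 for i in range(L)]   # step initial condition
        t = 0.0
        while True:
            t += random.expovariate(L - 1)     # next attempted hop (any bond)
            if t >= t_max:
                break
            i = random.randrange(L - 1)        # candidate bond (i -> i+1)
            if occ[i] and not occ[i + 1]:
                occ[i], occ[i + 1] = False, True
        a, b = zone
        if not any(occ[a:b + 1]):
            empty_count += 1
    return empty_count / trials

print(simulate_tasep())   # empty-zone probability near the initial front
```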

  1. Diffusion of finite-sized hard-core interacting particles in a one-dimensional box: Tagged particle dynamics.

    PubMed

    Lizana, L; Ambjörnsson, T

    2009-11-01

    We solve a nonequilibrium statistical-mechanics problem exactly, namely, the single-file dynamics of N hard-core interacting particles (the particles cannot pass each other) of size Δ diffusing in a one-dimensional system of finite length L with reflecting boundaries at the ends. We obtain an exact expression for the conditional probability density function ρT(yT,t|yT,0) that a tagged particle T (T=1,...,N) is at position yT at time t given that it at time t=0 was at position yT,0. Using a Bethe ansatz we obtain the N-particle probability density function and, by integrating out the coordinates (and averaging over initial positions) of all particles but particle T, we arrive at an exact expression for ρT(yT,t|yT,0) in terms of Jacobi polynomials or hypergeometric functions. Going beyond previous studies, we consider the asymptotic limit of large N, maintaining L finite, using a nonstandard asymptotic technique. We derive an exact expression for ρT(yT,t|yT,0) for a tagged particle located roughly in the middle of the system, from which we find that there are three time regimes of interest for finite-sized systems: (A) for times much smaller than the collision time, t≪τcoll, the tagged particle diffuses freely; (B) for times much larger than the collision time but smaller than the equilibrium time, τcoll≪t≪τeq, the dynamics is single-file (subdiffusive); and (C) for times larger than the equilibrium time, t≫τeq, ρT(yT,t|yT,0) approaches a polynomial-type equilibrium probability density function. Notably, only regimes (A) and (B) are found in the previously considered infinite systems.

  2. Solvable four-state Landau-Zener model of two interacting qubits with path interference

    DOE PAGES

    Sinitsyn, Nikolai A.

    2015-11-30

    In this paper, I identify a nontrivial four-state Landau-Zener model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. The model describes an experimentally accessible system of two interacting qubits, such as a localized state in a Dirac material with both valley and spin degrees of freedom or a singly charged quantum dot (QD) molecule with spin orbit coupling. Application of the linearly time-dependent magnetic field induces a sequence of quantum level crossings with possibility of interference of different trajectories in a semiclassical picture. I argue that this system satisfies the criteria of integrability in the multistate Landau-Zener theory, which allows one to derive explicit exact analytical expressions for the transition probability matrix. Finally, I also argue that this model is likely a special case of a larger class of solvable systems, and present a six-state generalization as an example.

  3. Quantum entanglement of a harmonic oscillator with an electromagnetic field.

    PubMed

    Makarov, Dmitry N

    2018-05-29

    At present, there are many methods for obtaining quantum entanglement of particles with an electromagnetic field. Most methods yield a low probability of quantum entanglement and lack an exact theoretical apparatus, relying instead on approximate solutions of the Schrödinger equation. There is a need for new methods for obtaining quantum-entangled particles and mathematically accurate studies of such methods. In this paper, a quantum harmonic oscillator (for example, an electron in a magnetic field) interacting with a quantized electromagnetic field is considered. Based on the exact solution of the Schrödinger equation for this system, it is shown that for certain parameters there can be a large quantum entanglement between the electron and the electromagnetic field. Quantum entanglement is analyzed on the basis of a mathematically exact expression for the Schmidt modes and the von Neumann entropy.
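
    For any bipartite pure state, the Schmidt-mode and von Neumann entropy analysis mentioned above reduces to a singular value decomposition; a generic numerical sketch (toy two-level states, not the paper's oscillator-field system):

```python
import numpy as np

def entanglement_entropy(state):
    """Von Neumann entanglement entropy (in bits) of a bipartite pure state
    given as a coefficient matrix state[i, j] = <i|<j|psi>. The Schmidt
    coefficients are the singular values of this matrix."""
    coeffs = np.linalg.svd(state, compute_uv=False)
    probs = coeffs**2 / np.sum(coeffs**2)      # normalized Schmidt weights
    probs = probs[probs > 1e-15]               # drop numerically zero modes
    return float(-np.sum(probs * np.log2(probs)))

# A product state has zero entropy; a Bell-like state carries one bit.
product = np.array([[1.0, 0.0], [0.0, 0.0]])
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)
print(entanglement_entropy(product), entanglement_entropy(bell))
```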

  4. Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers.

    PubMed

    Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred

    2017-01-25

    The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs inferiorly to exact computation overall, others, particularly the normal approximation, perform well, except in the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
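
    For very small designs the exact null distribution of the rank sum difference can be checked by brute-force enumeration of all per-block rankings; the paper's combinatorial method scales far better, so this is only a correctness sketch under the null of random rankings:

```python
from itertools import product, permutations
from collections import Counter

def rank_sum_diff_distribution(n_blocks, k_treatments):
    """Exact null distribution of the difference between two treatments'
    Friedman rank sums, by enumerating every combination of rank permutations
    across blocks (feasible only for tiny n and k)."""
    rankings = list(permutations(range(1, k_treatments + 1)))
    counts = Counter()
    for assignment in product(rankings, repeat=n_blocks):
        r1 = sum(block[0] for block in assignment)   # rank sum, treatment 1
        r2 = sum(block[1] for block in assignment)   # rank sum, treatment 2
        counts[r1 - r2] += 1
    total = len(rankings) ** n_blocks
    return {d: c / total for d, c in sorted(counts.items())}

dist = rank_sum_diff_distribution(n_blocks=3, k_treatments=3)
# exact two-sided p-value for an observed absolute rank sum difference of 6:
p = sum(pr for d, pr in dist.items() if abs(d) >= 6)
print(p)
```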

  5. Shaping Relations: Exploiting Relational Features for Visuospatial Priming

    ERIC Educational Resources Information Center

    Livins, Katherine A.; Doumas, Leonidas A. A.; Spivey, Michael J.

    2016-01-01

    Although relational reasoning has been described as a process at the heart of human cognition, the exact character of relational representations remains an open debate. Symbolic-connectionist models of relational cognition suggest that relations are structured representations, but that they are ultimately grounded in feature sets; thus, they…

  6. iVPIC: A low-dispersion, energy-conserving relativistic PIC solver for LPI simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis

    We have developed a novel low-dispersion, exactly energy-conserving PIC algorithm for the relativistic Vlasov-Maxwell system. The approach features an exact energy conservation theorem while preserving the favorable performance and numerical dispersion properties of explicit PIC. The new algorithm has the potential to enable much longer laser-plasma-interaction (LPI) simulations than are currently possible.

  7. Grading system to categorize breast MRI using BI-RADS 5th edition: a statistical study of non-mass enhancement descriptors in terms of probability of malignancy.

    PubMed

    Asada, Tatsunori; Yamada, Takayuki; Kanemaki, Yoshihide; Fujiwara, Keishi; Okamoto, Satoko; Nakajima, Yasuo

    2018-03-01

    To analyze the association of breast non-mass enhancement descriptors in the BI-RADS 5th edition with malignancy, and to establish a grading system and categorization of descriptors. This study was approved by our institutional review board. A total of 213 patients were enrolled. Breast MRI was performed with a 1.5-T MRI scanner using a 16-channel breast radiofrequency coil. Two radiologists determined internal enhancement and distribution of non-mass enhancement by consensus. Corresponding pathologic diagnoses were obtained by either biopsy or surgery. The probability of malignancy by descriptor was analyzed using Fisher's exact test and multivariate logistic regression analysis. The probability of malignancy by category was analyzed using Fisher's exact and multi-group comparison tests. One hundred seventy-eight lesions were malignant. Multivariate model analysis showed that internal enhancement (homogeneous vs others, p < 0.001, heterogeneous and clumped vs clustered ring, p = 0.003) and distribution (focal and linear vs segmental, p < 0.001) were the significant explanatory variables. The descriptors were classified into three grades of suspicion, and the categorization (3, 4A, 4B, 4C, and 5) by sum-up grades showed an incremental increase in the probability of malignancy (p < 0.0001). The three-grade criteria and categorization by sum-up grades of descriptors appear valid for non-mass enhancement.
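
    The descriptor-by-outcome association testing reported above rests on Fisher's exact test for a 2×2 table, which can be computed directly from the hypergeometric distribution; the counts below are invented for illustration, not the study's data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins whose
    hypergeometric probability does not exceed that of the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):   # P(first cell = x) under fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# e.g. 9/10 malignant with one descriptor vs 2/10 with another (made up):
print(fisher_exact_two_sided(9, 1, 2, 8))
```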

  8. Using dynamic geometry software for teaching conditional probability with area-proportional Venn diagrams

    NASA Astrophysics Data System (ADS)

    Radakovic, Nenad; McDougall, Douglas

    2012-10-01

    This classroom note illustrates how dynamic visualization can be used to teach conditional probability and Bayes' theorem. There are two features of the visualization that make it an ideal pedagogical tool in probability instruction. The first feature is the use of area-proportional Venn diagrams that, along with showing qualitative relationships, describe the quantitative relationship between two sets. The second feature is the slider and animation component of dynamic geometry software enabling students to observe how the change in the base rate of an event influences conditional probability. A hypothetical instructional sequence using a well-known breast cancer example is described.
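
    The base-rate effect that the slider demonstrates can be reproduced numerically with Bayes' theorem; the screening rates below are hypothetical stand-ins for the classroom example:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: P(condition | positive test), from the base rate
    (prior), test sensitivity, and false-positive rate."""
    p_positive = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / p_positive

# Sliding the base rate (as with the dynamic geometry slider) changes the
# conditional probability dramatically:
for base_rate in (0.01, 0.1, 0.3):
    print(base_rate, round(posterior(base_rate, 0.8, 0.096), 3))
```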

  9. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  10. Reserve design to maximize species persistence

    Treesearch

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  11. Return probability after a quench from a domain wall initial state in the spin-1/2 XXZ chain

    NASA Astrophysics Data System (ADS)

    Stéphan, Jean-Marie

    2017-10-01

    We study the return probability and its imaginary (τ) time continuation after a quench from a domain wall initial state in the XXZ spin chain, focusing mainly on the region with anisotropy |Δ| < 1. We establish exact Fredholm determinant formulas for those, by exploiting a connection to the six-vertex model with domain wall boundary conditions. In imaginary time, we find the expected scaling for a partition function of a statistical mechanical model of area proportional to τ², which reflects the fact that the model exhibits the limit shape phenomenon. In real time, we observe that in the region |Δ| < 1 the decay for long time t is nowhere continuous as a function of anisotropy: it is Gaussian at roots of unity and exponential otherwise. We also determine that the front moves as x_f(t) = t√(1-Δ²), by the analytic continuation of known arctic curves in the six-vertex model. Exactly at |Δ| = 1, we find the return probability decays as e^{-ζ(3/2)√(t/π)} t^{1/2} O(1). It is argued that this result provides an upper bound on spin transport. In particular, it suggests that transport should be diffusive at the isotropic point for this quench.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donangelo, R.J.

    An integral representation for the classical limit of the quantum mechanical S-matrix is developed and applied to heavy-ion Coulomb excitation and Coulomb-nuclear interference. The method combines the quantum principle of superposition with exact classical dynamics to describe the projectile-target system. A detailed consideration of the classical trajectories and of the dimensionless parameters that characterize the system is carried out. The results are compared, where possible, to exact quantum mechanical calculations and to conventional semiclassical calculations. It is found that in the case of backscattering the classical limit S-matrix method is able to almost exactly reproduce the quantum-mechanical S-matrix elements, and therefore the transition probabilities, even for projectiles as light as protons. The results also suggest that this approach should be a better approximation for heavy-ion multiple Coulomb excitation than earlier semiclassical methods, due to a more accurate description of the classical orbits in the electromagnetic field of the target nucleus. Calculations using this method indicate that the rotational excitation probabilities in the Coulomb-nuclear interference region should be very sensitive to the details of the potential at the surface of the nucleus, suggesting that heavy-ion rotational excitation could constitute a sensitive probe of the nuclear potential in this region. The application to other problems as well as the present limits of applicability of the formalism are also discussed.

  13. Bi-Exact Groups, Strongly Ergodic Actions and Group Measure Space Type III Factors with No Central Sequence

    NASA Astrophysics Data System (ADS)

    Houdayer, Cyril; Isono, Yusuke

    2016-12-01

    We investigate the asymptotic structure of (possibly type III) crossed product von Neumann algebras M = B ⋊ Γ arising from arbitrary actions Γ ↷ B of bi-exact discrete groups (e.g. free groups) on amenable von Neumann algebras. We prove a spectral gap rigidity result for the central sequence algebra N′ ∩ M^ω of any nonamenable von Neumann subalgebra with normal expectation N ⊂ M. We use this result to show that for any strongly ergodic essentially free nonsingular action Γ ↷ (X, μ) of any bi-exact countable discrete group on a standard probability space, the corresponding group measure space factor L^∞(X) ⋊ Γ has no nontrivial central sequence. Using recent results of Boutonnet et al. (Local spectral gap in simple Lie groups and applications, 2015), we construct, for every 0 < λ ≤ 1, a type III_λ strongly ergodic essentially free nonsingular action F_∞ ↷ (X_λ, μ_λ) of the free group F_∞ on a standard probability space so that the corresponding group measure space type III_λ factor L^∞(X_λ, μ_λ) ⋊ F_∞ has no nontrivial central sequence by our main result. In particular, we obtain the first examples of group measure space type III factors with no nontrivial central sequence.

  14. The Sequential Probability Ratio Test: An efficient alternative to exact binomial testing for Clean Water Act 303(d) evaluation.

    PubMed

    Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry

    2017-05-01

    The United States's Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to have comparable Type I and Type II error rates as the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as SPRT to current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
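
    A minimal sketch of Wald's SPRT for a binomial exceedance probability, with illustrative hypotheses and error rates (this is the generic textbook procedure, not California's actual listing policy):

```python
from math import log

def sprt_binomial(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's Sequential Probability Ratio Test for a binary outcome.
    H0: exceedance probability p0; H1: p1 (> p0). Samples are processed one
    at a time, and testing stops early when a boundary is crossed.
    Boundaries use Wald's approximate thresholds."""
    lower, upper = log(beta / (1 - alpha)), log((1 - beta) / alpha)
    llr = 0.0                                  # cumulative log-likelihood ratio
    for n, x in enumerate(samples, start=1):   # x = 1 if sample exceeds limit
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr <= lower:
            return ("accept H0 (not impaired)", n)
        if llr >= upper:
            return ("accept H1 (impaired)", n)
    return ("continue sampling", len(samples))

# Two consecutive exceedances already cross the upper boundary here:
print(sprt_binomial([1, 1, 1, 1, 1, 1], p0=0.1, p1=0.5))
```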

  15. Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Tianhui; Fu, Bina; Zhang, Dong H., E-mail: zhangdh@dicp.ac.cn

    The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. A new interesting site-averaged effect is found for the title molecule-surface system that one can essentially reproduce the six-dimensional dissociation probability by averaging the four-dimensional dissociation probabilities over 25 fixed sites.

  16. Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension

    NASA Astrophysics Data System (ADS)

    Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek

    2018-04-01

    We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
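
    Because tumble times are exponentially distributed, the RTP dynamics can be sampled exactly event by event, with no time discretization; a simulation sketch with illustrative parameters (not tied to the paper's analytical results):

```python
import random

def rtp_positions(v=1.0, gamma=1.0, t_max=10.0, trials=1000, seed=1):
    """Sample final positions of a 1D run-and-tumble particle: it moves at
    speed v in a fixed direction and reverses direction ('tumbles') at
    Poisson rate gamma. Event-driven, hence exact for this process."""
    random.seed(seed)
    positions = []
    for _ in range(trials):
        x, t, direction = 0.0, 0.0, random.choice((-1, 1))
        while True:
            dt = random.expovariate(gamma)   # waiting time to next tumble
            if t + dt >= t_max:
                x += direction * v * (t_max - t)
                break
            x += direction * v * dt
            t += dt
            direction = -direction
        positions.append(x)
    return positions

xs = rtp_positions()
print(min(xs), max(xs), sum(xs) / len(xs))
```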

  17. Clinical, pathological and thin-section CT features of persistent multiple ground-glass opacity nodules: comparison with solitary ground-glass opacity nodule.

    PubMed

    Kim, Tae Jung; Goo, Jin Mo; Lee, Kyung Won; Park, Chang Min; Lee, Hyun Ju

    2009-05-01

    To retrospectively compare the clinical, pathological, and thin-section CT features of persistent multiple ground-glass opacity (GGO) nodules with those of solitary GGO nodules. Histopathologic specimens were obtained from 193 GGO nodules in 136 patients (87 women, 49 men; mean age, 57 years; age range, 33-81 years). The clinical data, pathologic findings, and thin-section CT features of multiple and solitary GGO nodules were compared by using the t-test or Fisher's exact test. Multiple GGO nodules (n=105) included atypical adenomatous hyperplasia (AAH) (n=31), bronchioloalveolar carcinoma (BAC) (n=33), adenocarcinoma (n=34) and focal interstitial fibrosis (n=7). Solitary GGO nodules included AAH (n=8), BAC (n=15), adenocarcinoma (n=55) and focal interstitial fibrosis (n=10). AAH (P=.001) and BAC (P=.029) were more frequent in multiple GGO nodules, whereas adenocarcinoma (P<.001) was more frequent in solitary GGO nodules. Female sex (P<.001), nonsmoking status (P=.012) and multiple primary lung cancers (P<.001) were more frequent for multiple GGO nodules, which were smaller (12 mm ± 7.9) than solitary GGO nodules (17 mm ± 8.1) (P<.001). Air bronchogram (P=.019), bubble lucency (P=.004), and pleural retraction (P<.001) were more frequent in solitary GGO nodules. There was no postoperative recurrence except for one patient with multiple GGO nodules and one with a solitary GGO nodule. Clinical, pathological, and thin-section CT features of persistent multiple GGO nodules were found to differ from those of solitary GGO nodules. Nevertheless, the two nodule types can probably be followed up and managed in a similar manner because their prognoses were found to be similar.

  18. Neutrino oscillation processes in a quantum-field-theoretical approach

    NASA Astrophysics Data System (ADS)

    Egorov, Vadim O.; Volobuev, Igor P.

    2018-05-01

    It is shown that neutrino oscillation processes can be consistently described in the framework of quantum field theory using only the plane wave states of the particles. Namely, the oscillating electron survival probabilities in experiments with neutrino detection by charged-current and neutral-current interactions are calculated in the quantum field-theoretical approach to neutrino oscillations based on a modification of the Feynman propagator in the momentum representation. The approach is most similar to the standard Feynman diagram technique. It is found that the oscillating distance-dependent probabilities of detecting an electron in experiments with neutrino detection by charged-current and neutral-current interactions exactly coincide with the corresponding probabilities calculated in the standard approach.

  19. Using Dynamic Geometry Software for Teaching Conditional Probability with Area-Proportional Venn Diagrams

    ERIC Educational Resources Information Center

    Radakovic, Nenad; McDougall, Douglas

    2012-01-01

    This classroom note illustrates how dynamic visualization can be used to teach conditional probability and Bayes' theorem. There are two features of the visualization that make it an ideal pedagogical tool in probability instruction. The first feature is the use of area-proportional Venn diagrams that, along with showing qualitative relationships,…

  20. Semi-Local DFT Functionals with Exact-Exchange-Like Features: Beyond the AK13

    NASA Astrophysics Data System (ADS)

    Armiento, Rickard

    The Armiento-Kümmel functional from 2013 (AK13) is a non-empirical semi-local exchange functional on generalized gradient approximation form (GGA) in Kohn-Sham (KS) density functional theory (DFT). Recent works have established that AK13 gives improved electronic-structure exchange features over other semi-local methods, with a qualitatively improved orbital description and band structure. For example, the Kohn-Sham band gap is greatly extended, as it is for exact exchange. This talk outlines recent efforts towards new exchange-correlation functionals based on, and extending, the AK13 design ideas. The aim is to improve the quantitative accuracy, the description of energetics, and to address other issues found with the original formulation. Swedish e-Science Research Centre (SeRC).

  1. Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation

    ERIC Educational Resources Information Center

    Ayre, Colin; Scally, Andrew John

    2014-01-01

    The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
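A minimal sketch of the exact-binomial calculation of critical values, assuming a one-tailed test at alpha = 0.05 and panellists rating an item "essential" by chance with probability 1/2 (the function name and defaults are illustrative, not the authors'):

```python
from math import comb

def critical_cvr(n_panel, alpha=0.05):
    """Smallest content validity ratio whose 'essential' count n_e is
    significant under X ~ Binomial(n_panel, 0.5), one-tailed."""
    for n_e in range(n_panel + 1):
        # Exact upper-tail probability P(X >= n_e).
        tail = sum(comb(n_panel, k) for k in range(n_e, n_panel + 1)) / 2**n_panel
        if tail <= alpha:
            # Lawshe's CVR = (n_e - N/2) / (N/2).
            return (n_e - n_panel / 2) / (n_panel / 2)
    return 1.0  # even unanimity is not significant at this alpha

print(critical_cvr(8))
```

For a panel of 5, only unanimous agreement (CVR = 1.0) clears the 0.05 threshold, which matches the intuition that small panels need perfect consensus.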

  2. Optimal Keno Strategies and the Central Limit Theorem

    ERIC Educational Resources Information Center

    Johnson, Roger W.

    2006-01-01

    For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
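The exact catch probabilities referred to above are hypergeometric; assuming the standard game (casino draws 20 of 80 numbers, player marks m of them), a minimal sketch:

```python
from math import comb

def p_catch(m, k):
    """Exact (hypergeometric) probability of matching exactly k of the
    m marked numbers when 20 of 80 are drawn."""
    return comb(20, k) * comb(60, m - k) / comb(80, m)

# The catch probabilities for a 4-spot ticket sum to 1, as they must.
print(sum(p_catch(4, k) for k in range(5)))
```

Combining these exact probabilities with a payout table gives the expected value of each ticket, which is the basis for comparing playing strategies.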

  3. Data Interpretation: Using Probability

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2011-01-01

    Experimental data are analysed statistically to allow researchers to draw conclusions from a limited set of measurements. The hard fact is that researchers can never be certain that measurements from a sample will exactly reflect the properties of the entire group of possible candidates available to be studied (although using a sample is often the…

  4. Exact probability distribution function for the volatility of cumulative production

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for cumulative production based on the experience curve hypothesis. This work generalizes the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Given its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Băloi, Mihaela-Andreea, E-mail: mihaela.baloi88@e-uvt.ro; Crucean, Cosmin

    The production of fermions in dipolar electric fields on the de Sitter universe is studied. The amplitude and probability of pair production are computed using the exact solution of the Dirac equation in de Sitter spacetime. The form of the dipolar fields is established using the conformal invariance of the Maxwell equations. We find that the momentum conservation law is broken in the process of pair production in dipolar electric fields. We also establish that there are nonvanishing probabilities both for helicity-conserving and for helicity-nonconserving processes. The Minkowski limit is recovered when the expansion factor becomes zero.

  6. Capturing Students' Abstraction While Solving Organic Reaction Mechanism Problems across a Semester

    ERIC Educational Resources Information Center

    Weinrich, M. L.; Sevian, H.

    2017-01-01

    Students often struggle with solving mechanism problems in organic chemistry courses. They frequently focus on surface features, have difficulty attributing meaning to symbols, and do not recognize tasks that are different from the exact tasks practiced. To be more successful, students need to be able to extract salient features, map similarities…

  7. Spontaneous light emission by atomic hydrogen: Fermi's golden rule without cheating

    NASA Astrophysics Data System (ADS)

    Debierre, V.; Durt, T.; Nicolet, A.; Zolla, F.

    2015-10-01

    Focusing on the 2p-1s transition in atomic hydrogen, we investigate through first-order perturbation theory the time evolution of the survival probability of an electron initially taken to be in the excited (2p) state. We examine both the results yielded by the standard dipole approximation for the coupling between the atom and the electromagnetic field - for which we propose a cutoff-independent regularisation - and those yielded by the exact coupling function. In both cases, Fermi's golden rule is shown to be an excellent approximation for the system at hand: we found its maximal deviation from the exact behaviour of the system to be of order 10^-8 to 10^-7. Our treatment also yields a rigorous prescription for the choice of the optimal cutoff frequency in the dipole approximation. With our cutoff, the predictions of the dipole approximation are almost indistinguishable at all times from the exact dynamics of the system.

  8. Exact PDF equations and closure approximations for advective-reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venturi, D.; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.

    2013-06-01

    Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations, and it provides an upper bound of predictive uncertainty.

  9. Exact Solution of a Two-Species Quantum Dimer Model for Pseudogap Metals

    NASA Astrophysics Data System (ADS)

    Feldmeier, Johannes; Huber, Sebastian; Punk, Matthias

    2018-05-01

    We present an exact ground state solution of a quantum dimer model introduced by Punk, Allais, and Sachdev [Quantum dimer model for the pseudogap metal, Proc. Natl. Acad. Sci. U.S.A. 112, 9552 (2015)., 10.1073/pnas.1512206112], which features ordinary bosonic spin-singlet dimers as well as fermionic dimers that can be viewed as bound states of spinons and holons in a hole-doped resonating valence bond liquid. Interestingly, this model captures several essential properties of the metallic pseudogap phase in high-Tc cuprate superconductors. We identify a line in parameter space where the exact ground state wave functions can be constructed at an arbitrary density of fermionic dimers. At this exactly solvable line the ground state has a huge degeneracy, which can be interpreted as a flat band of fermionic excitations. Perturbing around the exactly solvable line, this degeneracy is lifted and the ground state is a fractionalized Fermi liquid with a small pocket Fermi surface in the low doping limit.

  10. Some Exact Results for the Schroedinger Wave Equation with a Time Dependent Potential

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2009-01-01

    The time dependent Schroedinger equation with a time dependent delta function potential is solved exactly for many special cases. In all other cases the problem can be reduced to an integral equation of the Volterra type. It is shown that by knowing the wave function at the origin, one may derive the wave function everywhere. Thus, the problem is reduced from a PDE in two variables to an integral equation in one. These results are used to compare adiabatic versus sudden changes in the potential. It is shown that adiabatic changes in the potential lead to conservation of the normalization of the probability density.

  11. Design of an activity landscape view taking compound-based feature probabilities into account.

    PubMed

    Zhang, Bijun; Vogt, Martin; Bajorath, Jürgen

    2014-09-01

    Activity landscapes (ALs) of compound data sets are rationalized as graphical representations that integrate similarity and potency relationships between active compounds. ALs enable the visualization of structure-activity relationship (SAR) information and are thus computational tools of interest for medicinal chemistry. For AL generation, similarity and potency relationships are typically evaluated in a pairwise manner and major AL features are assessed at the level of compound pairs. In this study, we add a conditional probability formalism to AL design that makes it possible to quantify the probability of individual compounds to contribute to characteristic AL features. Making this information graphically accessible in a molecular network-based AL representation is shown to further increase AL information content and helps to quickly focus on SAR-informative compound subsets. This feature probability-based AL variant extends the current spectrum of AL representations for medicinal chemistry applications.

  12. Two statistical mechanics aspects of complex networks

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Biely, Christoly

    2006-12-01

    By adopting an ensemble interpretation of non-growing rewiring networks, network theory can be reduced to a counting problem of possible network states and an identification of their associated probabilities. We present two scenarios of how different rewiring schemes can be used to control the state probabilities of the system. In particular, we review how by generalizing the linking rules of random graphs, in combination with superstatistics and quantum mechanical concepts, one can establish an exact relation between the degree distribution of any given network and the nodes’ linking probability distributions. In a second approach, we control state probabilities by a network Hamiltonian, whose characteristics are motivated by biological and socio-economical statistical systems. We demonstrate that a thermodynamics of networks becomes a fully consistent concept, allowing one to study, e.g., ‘phase transitions’ and to compute entropies through thermodynamic relations.

  13. Applications of finite-size scaling for atomic and non-equilibrium systems

    NASA Astrophysics Data System (ADS)

    Antillon, Edwin A.

    We apply the theory of finite-size scaling (FSS) to an atomic and a non-equilibrium system in order to extract critical parameters. In atomic systems, we look at the energy dependence on the binding charge near the threshold between bound and free states, where we seek the critical nuclear charge for stability. We use different ab initio methods, such as Hartree-Fock, Density Functional Theory, and exact formulations implemented numerically with the finite-element method (FEM). Using the finite-size scaling formalism, where in this case the size of the system is related to the number of elements used in the basis expansion of the wavefunction, we predict critical parameters in the large-basis limit. Results prove to be in good agreement with previous Slater-basis-set calculations and demonstrate that this combined approach provides a promising first-principles route to describing quantum phase transitions in materials and extended systems. In the second part we look at a one-dimensional non-equilibrium model, the raise and peel model, describing a surface that grows locally and desorbs non-locally. For specific values of the adsorption (ua) and desorption (ud) rates the model shows interesting features. At ua = ud, the model is described by a conformal field theory (with conformal charge c = 0); its stationary probability can be mapped to the ground state of a quantum chain and can also be related to a two-dimensional statistical model. For ua ≥ ud, the model shows a scale-invariant phase in the avalanche distribution. In this work we study the surface dynamics by looking at avalanche distributions using the FSS formalism and explore the effect of changing the boundary conditions of the model. The model shows the same universality with and without the wall for an odd number of tiles removed, but we find a new exponent in the presence of a wall for an even number of avalanches released. We provide a new conjecture for the probability distribution of avalanches with a wall, obtained using exact diagonalization of small lattices and Monte Carlo simulations.

  14. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
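The aggregate replication probability in such a model can be illustrated with a small Monte Carlo sketch under the stated distributional assumptions; the parameter values and function name below are illustrative, not taken from the article:

```python
import random

# Sketch: true effect size, replication jitter, and measurement error are
# all normally distributed, as in the model described above. We estimate
# the average probability that a replication lands in the + direction.
def aggregate_replication_prob(mu, sd_true, sd_jitter, sd_err,
                               n=200_000, seed=1):
    random.seed(seed)
    same_direction = 0
    for _ in range(n):
        delta = random.gauss(mu, sd_true)    # effect drawn from the research context
        obs = delta + random.gauss(0.0, sd_jitter) + random.gauss(0.0, sd_err)
        if obs > 0:                          # replication shows an effect in the + direction
            same_direction += 1
    return same_direction / n

print(aggregate_replication_prob(0.3, 0.1, 0.1, 0.2))
```

Even this simplified version makes the paper's point visible: the answer depends on context parameters (sd_true, sd_jitter, sd_err) that are rarely known in practice.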

  15. Exact Turbulence Law in Collisionless Plasmas: Hybrid Simulations

    NASA Astrophysics Data System (ADS)

    Hellinger, P.; Verdini, A.; Landi, S.; Franci, L.; Matteini, L.

    2017-12-01

    An exact vectorial law for turbulence in homogeneous incompressible Hall-MHD is derived and tested in two-dimensional hybrid simulations of plasma turbulence. The simulations confirm the validity of the MHD exact law in the kinetic regime: the simulated turbulence exhibits a clear inertial range at large scales where the MHD cascade flux dominates. The simulation results also indicate that in the sub-ion range the cascade continues via the Hall term and that the total cascade rate tends to decrease around the ion scales, especially in high-beta plasmas. This decrease is likely owing to the formation of non-thermal features, such as collisionless ion energization, that cannot be retained in the Hall-MHD approximation.

  16. Probability 1/e

    ERIC Educational Resources Information Center

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
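One classic example of this phenomenon is the derangement (matching) problem, whose exact probability follows from inclusion-exclusion; a minimal sketch:

```python
from math import factorial

# n letters placed into n envelopes at random: the probability that no
# letter lands in its own envelope (a derangement) tends to 1/e.
def p_derangement(n):
    """Exact probability via inclusion-exclusion: sum_{k=0}^{n} (-1)^k / k!."""
    return sum((-1)**k / factorial(k) for k in range(n + 1))

for n in (5, 10, 15):
    print(n, p_derangement(n))  # converges rapidly to 1/e ~ 0.3679
```

The convergence is so fast (the error is below 1/(n+1)!) that the 1/e answer is already accurate to four decimals for n = 7.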

  17. The One Micron Fe II Lines in Active Galaxies and Emission Line Stars

    NASA Astrophysics Data System (ADS)

    Rudy, R. J.; Mazuk, S.; Puetter, R. C.; Hamann, F. W.

    1999-05-01

    The infrared multiplet of Fe II lines at 0.9997, 1.0501, 1.0863, and 1.1126 microns is particularly strong relative to other red and infrared Fe II features. These lines reach their greatest strength, relative to the hydrogen lines, in the Seyfert 1 galaxy I Zw 1, and are a common, although not ubiquitous, feature in the broad line regions of active galaxies. In addition, they are seen in a diverse assortment of Galactic sources including young stars, Herbig Ae and Be stars, luminous blue variables, proto-planetary nebulae, and symbiotic novae. They are probably excited by Lyman alpha fluorescence, but the exact path of the cascade to their upper levels is uncertain. They arise in dense, sheltered regions of low ionization and are frequently observed together with the infrared Ca II triplet and the Lyman-beta-excited O I lines 8446 and 11287. The strengths of the four Fe II features, relative to each other, are nearly constant from object to object, suggesting a statistical population of their common upper multiplet. Their intensities, in comparison to the Paschen lines, indicate that they can be important coolants for regions with high optical depths in the hydrogen lines. In addition to I Zw 1 and other active galaxies, we present spectra for the Galactic sources MWC 17, MWC 84, MWC 340, MWC 922, PU Vul, and M 1-92. We review the status of the Fe II observations and discuss the excitation process and possible implications. This work was supported by the IR&D program of the Aerospace Corporation. RCP and FWH acknowledge support from NASA.

  18. New S control chart using skewness correction method for monitoring process dispersion of skewed distributions

    NASA Astrophysics Data System (ADS)

    Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha

    2017-11-01

    The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industries. Conventional control charts rely on the normality assumption, which is not always the case for industrial data. This paper proposes a new S control chart for monitoring process dispersion using the skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S); skewness correction R chart (SC-R); weighted variance R chart (WV-R); weighted variance S chart (WV-S); and standard S chart (STD-S). Comparison with the exact S control chart with regard to the probability of out-of-control detection is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control performance (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than all the existing control charts for monitoring process dispersion with respect to both Type I error and the probability of detecting a shift.

  19. Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    In quantum gravity theories, when the scattering energy is comparable to the Planck energy the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation on one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally-periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies in the positive direction. Scattering through a locally-periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows down when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and Gaussian barrier is smaller in the minimal length cases compared to the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases compared to the non-minimal length case. The approach is exact and applicable to many types of scattering potential.

  20. Sensing and perception research for space telerobotics at JPL

    NASA Technical Reports Server (NTRS)

    Gennery, Donald B.; Litwin, Todd; Wilcox, Brian; Bon, Bruce

    1987-01-01

    PIFEX is a pipelined image processor that can perform elaborate computations whose exact nature is not fixed in the hardware, and that can handle multiple images. A wire-wrapped prototype PIFEX module has been produced and debugged, using a version of the convolver composed of three custom VLSI chips (plus the line buffers). A printed circuit layout is being designed for use with a single-chip convolver, leading to production of a PIFEX with about 120 modules. A high-level language for programming PIFEX has been designed, and a compiler will be written for it. The camera calibration software has been completed and tested. Two more terms in the camera model, for lens distortion, will probably be added later. The acquisition and tracking system has been designed and most of it has been coded in Pascal for the MicroVAX-II. The feature tracker, motion stereo module, and stereo matcher have executed successfully. The model matcher is still under development, and coding has begun on the tracking initializer. The object tracker was running on a different computer from the VAX, and preliminary runs on real images have been performed there. Once all modules are working, optimization and integration will begin. Finally, when a sufficiently large PIFEX is available, appropriate parts of acquisition and tracking, including much of the feature tracker, will be programmed into PIFEX, thus increasing the speed and robustness of the system.

  1. Clinical features and natural history of von Hippel-Lindau disease.

    PubMed

    Maher, E R; Yates, J R; Harries, R; Benjamin, C; Harris, R; Moore, A T; Ferguson-Smith, M A

    1990-11-01

    The clinical features, age at onset and survival of 152 patients with von Hippel-Lindau disease were studied. Mean age at onset was 26.3 years and 97 per cent of patients had presented by age 60 years. Retinal angioma was the first manifestation in 65 patients (43 per cent), followed by cerebellar haemangioblastoma (n = 60, 39 per cent) and renal cell carcinoma (n = 15, 10 per cent). Overall, 89 patients (59 per cent) developed a cerebellar haemangioblastoma, 89 (59 per cent) a retinal angioma, 43 (28 per cent) renal cell carcinoma, 20 (13 per cent) spinal haemangioblastoma and 11 (7 per cent) a phaeochromocytoma. Renal, pancreatic and epididymal cysts were frequent findings but their exact incidence was not accurately assessed. Mean age at diagnosis of renal cell carcinoma (44.0 +/- 10.9 years) was significantly older than that for cerebellar haemangioblastoma (29.0 +/- 10.0 years) and retinal angioma (25.4 +/- 12.7 years). The probability of a patient with von Hippel-Lindau disease developing a cerebellar haemangioblastoma, retinal angioma or renal cell carcinoma by age 60 years was 0.84, 0.7 and 0.69, respectively. A comprehensive screening protocol for affected patients and at-risk relatives is presented, based on detailed analysis of age at onset data for each of the major complications. Median actuarial survival was 49 years, with renal cell carcinoma the leading cause of death.

  2. Two-state theory of binned photon statistics for a large class of waiting time distributions and its application to quantum dot blinking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkán-Kacsó, Sándor

    2014-06-14

    A theoretical method is proposed for the calculation of the photon counting probability distribution during a bin time. Two-state fluorescence and steady excitation are assumed. A key feature is a kinetic scheme that allows for an extensive class of stochastic waiting time distribution functions, including power laws, expanded as a sum of weighted decaying exponentials. The solution is analytic in certain conditions, and an exact and simple expression is found for the integral contribution of “bright” and “dark” states. As an application for power law kinetics, theoretical results are compared with experimental intensity histograms from a number of blinking CdSe/ZnS quantum dots. The histograms are consistent with distributions of intensity states around a “bright” and a “dark” maximum. A gap of states is also revealed in the more-or-less flat inter-peak region. The slope and to some extent the flatness of the inter-peak feature are found to be sensitive to the power-law exponents. Possible models consistent with these findings are discussed, such as the combination of multiple charging and fluctuating non-radiative channels or the multiple recombination center model. A fitting of the latter to experiment provides constraints on the interaction parameter between the recombination centers. Further extensions and applications of the photon counting theory are also discussed.

  3. Estimating Independent Locally Shifted Random Utility Models for Ranking Data

    ERIC Educational Resources Information Center

    Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans

    2011-01-01

    We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…

  4. Theory and analysis of statistical discriminant techniques as applied to remote sensing data

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1973-01-01

    Classification of remote earth resources sensing data according to normed exponential density statistics is reported. The use of density models appropriate for several physical situations provides an exact solution for the probabilities of classifications associated with the Bayes discriminant procedure even when the covariance matrices are unequal.

  5. Energy Distributions in Small Populations: Pascal versus Boltzmann

    ERIC Educational Resources Information Center

    Kugel, Roger W.; Weiner, Paul A.

    2010-01-01

    The theoretical distributions of a limited amount of energy among small numbers of particles with discrete, evenly-spaced quantum levels are examined systematically. The average populations of energy states reveal the pattern of Pascal's triangle. An exact formula for the probability that a particle will be in any given energy state is derived.…

  6. Mathematical Analysis of a Multiple-Look Concept Identification Model.

    ERIC Educational Resources Information Center

    Cotton, John W.

    The behavior of focus samples central to the multiple-look model of Trabasso and Bower is examined by three methods. First, exact probabilities of success conditional upon a certain brief history of stimulation are determined. Second, possible states of the organism during the experiment are defined and a transition matrix for those states…

  7. Asymptotics of small deviations of the Bogoliubov processes with respect to a quadratic norm

    NASA Astrophysics Data System (ADS)

    Pusev, R. S.

    2010-10-01

    We obtain results on small deviations of Bogoliubov’s Gaussian measure occurring in the theory of the statistical equilibrium of quantum systems. For some random processes related to Bogoliubov processes, we find the exact asymptotic probability of their small deviations with respect to a Hilbert norm.

  8. In Defense of the Chi-Square Continuity Correction.

    ERIC Educational Resources Information Center

    Veldman, Donald J.; McNemar, Quinn

    Published studies of the sampling distribution of chi-square with and without Yates' correction for continuity have been interpreted as discrediting the correction. Yates' correction actually produces a biased chi-square value which in turn yields a better estimate of the exact probability of the discrete event concerned when used in conjunction…

  9. Review of probabilistic analysis of dynamic response of systems with random parameters

    NASA Technical Reports Server (NTRS)

    Kozin, F.; Klosner, J. M.

    1989-01-01

    The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities appear as coefficients, determining the exact distributions will be difficult at best; thus, certain approximations have to be made. A number of available techniques are discussed, including the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems approximated by linear systems.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agostini, Federica; Abedi, Ali; Suzuki, Yasumitsu

    The decomposition of electronic and nuclear motion presented in Abedi et al. [Phys. Rev. Lett. 105, 123002 (2010)] yields a time-dependent potential that drives the nuclear motion and fully accounts for the coupling to the electronic subsystem. Here, we show that propagation of an ensemble of independent classical nuclear trajectories on this exact potential yields dynamics that are essentially indistinguishable from the exact quantum dynamics for a model non-adiabatic charge transfer problem. We point out the importance of step and bump features in the exact potential that are critical in obtaining the correct splitting of the quasiclassical nuclear wave packet in space after it passes through an avoided crossing between two Born-Oppenheimer surfaces and analyze their structure. Finally, an analysis of the exact potentials in the context of trajectory surface hopping is presented, including preliminary investigations of velocity-adjustment and the force-induced decoherence effect.

  11. Exact Identification of a Quantum Change Point

    NASA Astrophysics Data System (ADS)

    Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Ramon

    2017-10-01

    The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive where one aims at identifying the point where a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty—naturally at the price of having a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point show an unexpected oscillatory behavior. We also discuss local (online) protocols and compare them with the optimal procedure.

  12. Exact Identification of a Quantum Change Point.

    PubMed

    Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Ramon

    2017-10-06

    The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive where one aims at identifying the point where a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty-naturally at the price of having a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point show an unexpected oscillatory behavior. We also discuss local (online) protocols and compare them with the optimal procedure.

  13. An Unconditional Test for Change Point Detection in Binary Sequences with Applications to Clinical Registries.

    PubMed

    Ellenberger, David; Friede, Tim

    2016-08-05

    Methods for change point (also sometimes referred to as threshold or breakpoint) detection in binary sequences are not new and were introduced as early as 1955. Much of the research in this area has focussed on asymptotic and exact conditional methods. Here we develop an exact unconditional test, which treats the total number of events as random instead of conditioning on the number of observed events. The new test is shown to be uniformly more powerful than Worsley's exact conditional test, and means for its efficient numerical calculation are given. Adaptations of methods by Berger and Boos are made to deal with the issue that the unknown event probability introduces a nuisance parameter. The methods are compared in a Monte Carlo simulation study and applied to a cohort of patients undergoing traumatic orthopaedic surgery involving external fixators, where a change in pin site infections is investigated. The unconditional test controls the type I error rate at the nominal level and is uniformly more powerful than (more precisely, uniformly at least as powerful as) Worsley's exact conditional test, which is very conservative for small sample sizes. In the application, a beneficial effect associated with the introduction of a new treatment procedure for pin site care could be revealed. We consider the new test an effective and easy-to-use exact test, recommended for small-sample change point problems in binary sequences.
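
    To illustrate the change-point setting the abstract describes (not Worsley's test or the new unconditional test themselves), a maximum-likelihood scan over split points of a 0/1 sequence can be sketched as follows; the function names are hypothetical.

```python
import math

def seg_loglik(s, n):
    """Bernoulli log-likelihood of a segment with s events in n trials,
    evaluated at the segment's own MLE p = s/n."""
    if n == 0 or s == 0 or s == n:
        return 0.0  # degenerate segments contribute zero log-likelihood
    p = s / n
    return s * math.log(p) + (n - s) * math.log(1 - p)

def ml_change_point(x):
    """Return the split index k (change between positions k-1 and k, 0-based)
    maximizing the two-segment Bernoulli likelihood of the 0/1 list x."""
    n, total = len(x), sum(x)
    best_k, best_ll = 1, float("-inf")
    s = 0
    for k in range(1, n):
        s += x[k - 1]          # events in x[:k]
        ll = seg_loglik(s, k) + seg_loglik(total - s, n - k)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```

    An exact test would additionally calibrate this statistic against its null distribution; the scan above only locates the most likely split.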

  14. Structural Features of Algebraic Quantum Notations

    ERIC Educational Resources Information Center

    Gire, Elizabeth; Price, Edward

    2015-01-01

    The formalism of quantum mechanics includes a rich collection of representations for describing quantum systems, including functions, graphs, matrices, histograms of probabilities, and Dirac notation. The varied features of these representations affect how computations are performed. For example, identifying probabilities of measurement outcomes…

  15. Exercise and immunity

    MedlinePlus

    ... medlineplus.gov/ency/article/007165.htm Exercise and immunity ... know exactly if or how exercise increases your immunity to certain illnesses. There are several theories. However, ...

  16. Delphi definition of the EADC-ADNI Harmonized Protocol for hippocampal segmentation on magnetic resonance

    PubMed Central

    Boccardi, Marina; Bocchetta, Martina; Apostolova, Liana G.; Barnes, Josephine; Bartzokis, George; Corbetta, Gabriele; DeCarli, Charles; deToledo-Morrell, Leyla; Firbank, Michael; Ganzola, Rossana; Gerritsen, Lotte; Henneman, Wouter; Killiany, Ronald J.; Malykhin, Nikolai; Pasqualetti, Patrizio; Pruessner, Jens C.; Redolfi, Alberto; Robitaille, Nicolas; Soininen, Hilkka; Tolomeo, Daniele; Wang, Lei; Watson, Craig; Wolf, Henrike; Duvernoy, Henri; Duchesne, Simon; Jack, Clifford R.; Frisoni, Giovanni B.

    2015-01-01

    Background This study aimed to have international experts converge on a harmonized definition of whole hippocampus boundaries and segmentation procedures, to define standard operating procedures for magnetic resonance (MR)-based manual hippocampal segmentation. Methods The panel received a questionnaire regarding whole hippocampus boundaries and segmentation procedures. Quantitative information was supplied to allow evidence-based answers. A recursive and anonymous Delphi procedure was used to achieve convergence. Significance of agreement among panelists was assessed by exact probability on Fisher’s and binomial tests. Results Agreement was significant on the inclusion of alveus/fimbria (P =.021), whole hippocampal tail (P =.013), medial border of the body according to visible morphology (P =.0006), and on this combined set of features (P =.001). This definition captures 100% of hippocampal tissue, 100% of Alzheimer’s disease-related atrophy, and demonstrated good reliability on preliminary intrarater (0.98) and inter-rater (0.94) estimates. Discussion Consensus was achieved among international experts with respect to hippocampal segmentation using MR resulting in a harmonized segmentation protocol. PMID:25130658

  17. Inhomogeneous diffusion and ergodicity breaking induced by global memory effects

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2016-11-01

    We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urnlike memory mechanism. The characteristic function is calculated in an exact way, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that differ strongly from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are analytically studied through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is shown explicitly through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the global memory. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.
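
    The urnlike memory mechanism above is reminiscent of the classic elephant random walk, in which each new step copies or flips a uniformly chosen past step. The sketch below implements that classic variant purely for intuition; it is not necessarily the exact mechanism of this paper, and the function name is hypothetical.

```python
import random

def elephant_walk(steps, p, seed=0):
    """Elephant random walk: with probability p repeat a uniformly chosen
    past step, otherwise reverse it (the first step is +1 by convention).
    Returns the final position after `steps` steps."""
    rng = random.Random(seed)
    history = [1]
    x = 1
    for _ in range(steps - 1):
        past = rng.choice(history)            # global memory: any past step
        step = past if rng.random() < p else -past
        history.append(step)
        x += step
    return x
```

    For p > 3/4 such walks are known to become superdiffusive, which is the kind of memory-induced anomaly the abstract analyzes through time-averaged moments.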

  18. Delphi definition of the EADC-ADNI Harmonized Protocol for hippocampal segmentation on magnetic resonance.

    PubMed

    Boccardi, Marina; Bocchetta, Martina; Apostolova, Liana G; Barnes, Josephine; Bartzokis, George; Corbetta, Gabriele; DeCarli, Charles; deToledo-Morrell, Leyla; Firbank, Michael; Ganzola, Rossana; Gerritsen, Lotte; Henneman, Wouter; Killiany, Ronald J; Malykhin, Nikolai; Pasqualetti, Patrizio; Pruessner, Jens C; Redolfi, Alberto; Robitaille, Nicolas; Soininen, Hilkka; Tolomeo, Daniele; Wang, Lei; Watson, Craig; Wolf, Henrike; Duvernoy, Henri; Duchesne, Simon; Jack, Clifford R; Frisoni, Giovanni B

    2015-02-01

    This study aimed to have international experts converge on a harmonized definition of whole hippocampus boundaries and segmentation procedures, to define standard operating procedures for magnetic resonance (MR)-based manual hippocampal segmentation. The panel received a questionnaire regarding whole hippocampus boundaries and segmentation procedures. Quantitative information was supplied to allow evidence-based answers. A recursive and anonymous Delphi procedure was used to achieve convergence. Significance of agreement among panelists was assessed by exact probability on Fisher's and binomial tests. Agreement was significant on the inclusion of alveus/fimbria (P = .021), whole hippocampal tail (P = .013), medial border of the body according to visible morphology (P = .0006), and on this combined set of features (P = .001). This definition captures 100% of hippocampal tissue, 100% of Alzheimer's disease-related atrophy, and demonstrated good reliability on preliminary intrarater (0.98) and inter-rater (0.94) estimates. Consensus was achieved among international experts with respect to hippocampal segmentation using MR resulting in a harmonized segmentation protocol. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  19. Integrable perturbed magnetic fields in toroidal geometry: An exact analytical flux surface label for large aspect ratio

    NASA Astrophysics Data System (ADS)

    Kallinikos, N.; Isliker, H.; Vlahos, L.; Meletlidou, E.

    2014-06-01

    An analytical description of magnetic islands is presented for the typical case of a single perturbation mode introduced to tokamak plasma equilibrium in the large aspect ratio approximation. Following the Hamiltonian structure directly in terms of toroidal coordinates, the well known integrability of this system is exploited, laying out a precise and practical way for determining the island topology features, as required in various applications, through an analytical and exact flux surface label.

  20. Exact analytic solutions of Maxwell's equations describing propagating nonparaxial electromagnetic beams.

    PubMed

    Garay-Avendaño, Roger L; Zamboni-Rached, Michel

    2014-07-10

    In this paper, we propose a method that is capable of describing in exact and analytic form the propagation of nonparaxial scalar and electromagnetic beams. The main features of the method presented here are its mathematical simplicity and the fast convergence in the cases of highly nonparaxial electromagnetic beams, enabling us to obtain high-precision results without the necessity of lengthy numerical simulations or other more complex analytical calculations. The method can be used in electromagnetism (optics, microwaves) as well as in acoustics.

  1. Integrable perturbed magnetic fields in toroidal geometry: An exact analytical flux surface label for large aspect ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallinikos, N.; Isliker, H.; Vlahos, L.

    2014-06-15

    An analytical description of magnetic islands is presented for the typical case of a single perturbation mode introduced to tokamak plasma equilibrium in the large aspect ratio approximation. Following the Hamiltonian structure directly in terms of toroidal coordinates, the well known integrability of this system is exploited, laying out a precise and practical way for determining the island topology features, as required in various applications, through an analytical and exact flux surface label.

  2. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.

  3. Incidental learning of probability information is differentially affected by the type of visual working memory representation.

    PubMed

    van Lamsweerde, Amanda E; Beck, Melissa R

    2015-12-01

    In this study, we investigated whether the ability to learn probability information is affected by the type of representation held in visual working memory. Across 4 experiments, participants detected changes to displays of coloured shapes. While participants detected changes in 1 dimension (e.g., colour), a feature from a second, nonchanging dimension (e.g., shape) predicted which object was most likely to change. In Experiments 1 and 3, items could be grouped by similarity in the changing dimension across items (e.g., colours and shapes were repeated in the display), while in Experiments 2 and 4 items could not be grouped by similarity (all features were unique). Probability information from the predictive dimension was learned and used to increase performance, but only when all of the features within a display were unique (Experiments 2 and 4). When it was possible to group by feature similarity in the changing dimension (e.g., 2 blue objects appeared within an array), participants were unable to learn probability information and use it to improve performance (Experiments 1 and 3). The results suggest that probability information can be learned in a dimension that is not explicitly task-relevant, but only when the probability information is represented with the changing dimension in visual working memory. (c) 2015 APA, all rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinitsyn, N. A.

    We consider nonadiabatic transitions in explicitly time-dependent systems with Hamiltonians of the form Ĥ(t) = Â + B̂t + Ĉ/t, where t is time and Â, B̂, Ĉ are Hermitian N × N matrices. We show that in any model of this type, scattering matrix elements satisfy nontrivial exact constraints that follow from the absence of the Stokes phenomenon for solutions with specific conditions at t → −∞. This allows one to continue such solutions analytically to t → +∞, and connect their asymptotic behavior at t → −∞ and t → +∞. This property becomes particularly useful when a model shows additional discrete symmetries. Specifically, we derive a number of simple exact constraints and explicit expressions for scattering probabilities in such systems.

  5. Exact solutions for mass-dependent irreversible aggregations.

    PubMed

    Son, Seung-Woo; Christensen, Claire; Bizhani, Golnoosh; Grassberger, Peter; Paczuski, Maya

    2011-10-01

    We consider the mass-dependent aggregation process (k+1)X → X, given a fixed number of unit-mass particles in the initial state. One cluster is chosen with probability proportional to its mass and merged either with its k neighbors in one dimension or, in the well-mixed case, with k other clusters picked at random. We find the same combinatorial exact solutions for the probability to find any given configuration of particles on a ring or line, and in the well-mixed case. The mass distribution of a single cluster exhibits scaling laws, and the finite-size scaling form is given. The relation to the classical sum kernel of irreversible aggregation is discussed.
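
    A direct stochastic simulation of the well-mixed (k+1)X → X process described above can be sketched as follows (function name hypothetical); it is useful for checking the exact configuration probabilities numerically.

```python
import random

def aggregate(n, k, seed=0):
    """Well-mixed (k+1)X -> X aggregation: repeatedly pick one cluster with
    probability proportional to its mass, merge it with k other randomly
    chosen clusters, and stop when fewer than k+1 clusters remain.
    Returns the sorted list of final cluster masses."""
    rng = random.Random(seed)
    masses = [1] * n                       # n unit-mass particles initially
    while len(masses) > k:                 # need at least k+1 clusters to merge
        i = rng.choices(range(len(masses)), weights=masses)[0]
        others = rng.sample([j for j in range(len(masses)) if j != i], k)
        merged = masses[i] + sum(masses[j] for j in others)
        masses = [m for idx, m in enumerate(masses)
                  if idx != i and idx not in others]
        masses.append(merged)
    return sorted(masses)
```

    Each merge removes k clusters, so total mass is conserved while the cluster count decreases in steps of k.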

  6. Exact computation of the maximum-entropy potential of spiking neural-network models.

    PubMed

    Cofré, R; Cessac, B

    2014-05-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.

  7. Concise calculation of the scaling function, exponents, and probability functional of the Edwards-Wilkinson equation with correlated noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y.; Pang, N.; Halpin-Healy, T.

    1994-12-01

    The linear Langevin equation proposed by Edwards and Wilkinson [Proc. R. Soc. London A 381, 17 (1982)] is solved in closed form for noise of arbitrary space and time correlation. Furthermore, the temporal development of the full probability functional describing the height fluctuations is derived exactly, exhibiting an interesting evolution between two distinct Gaussian forms. We determine explicitly the dynamic scaling function for the interfacial width for any given initial condition, isolate the early-time behavior, and discover an invariance that was unsuspected in this problem of arbitrary spatiotemporal noise.

  8. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is said to be true when such a probability is maximum. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF) which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable.
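
    The Parzen-window estimator referenced above is the standard kernel density estimate. A minimal one-dimensional Gaussian-kernel sketch (names hypothetical; the paper's estimator operates on the four-attribute feature space):

```python
import math

def parzen_pdf(x, samples, h):
    """Parzen-window (kernel density) estimate at point x from a list of
    1-D samples, using a Gaussian kernel of bandwidth h."""
    n = len(samples)
    norm = 1.0 / (n * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
```

    The estimated density is then evaluated at a candidate pair's attribute vector to yield its matching probability; the bandwidth h controls the smoothness of the estimate.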

  9. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling based, deterministic, exact algorithm to compute efficiently genotype probabilities for every member of a pedigree with loops without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551

  10. An extended car-following model considering random safety distance with different probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi

    2018-02-01

    Because of differences in vehicle type or driving skill, driving strategies are not exactly the same: different vehicles may choose different speeds for the same headway. Since the optimal velocity function is determined by the safety distance as well as the maximum velocity and headway, an extended car-following model accounting for a random safety distance with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function. The cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances selected with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
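
    The abstract does not spell out its optimal velocity function, so the sketch below uses the common Bando tanh form on a ring road, with each car's safety distance drawn from a discrete distribution as the extension suggests. All parameter values and names are hypothetical illustrations.

```python
import math
import random

def V(headway, v_max=2.0, h_safe=2.0):
    """Optimal velocity function in the common Bando (tanh) form;
    h_safe plays the role of the safety distance."""
    return (v_max / 2.0) * (math.tanh(headway - h_safe) + math.tanh(h_safe))

def step(positions, velocities, a=1.0, dt=0.1, L=40.0,
         h_choices=((2.0, 0.7), (3.0, 0.3))):
    """One Euler step of the car-following model dv/dt = a (V(headway) - v)
    on a ring of length L; each car's safety distance is drawn from
    h_choices = ((value, probability), ...)."""
    n = len(positions)
    new_v = []
    for i in range(n):
        headway = (positions[(i + 1) % n] - positions[i]) % L
        h_safe = random.choices([h for h, _ in h_choices],
                                weights=[p for _, p in h_choices])[0]
        new_v.append(velocities[i] + a * (V(headway, h_safe=h_safe) - velocities[i]) * dt)
    new_x = [(positions[i] + new_v[i] * dt) % L for i in range(n)]
    return new_x, new_v
```

    Iterating `step` from a slightly perturbed uniform state is the kind of numerical experiment used to probe the stop-and-go instability discussed above.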

  11. Clinical Phenotype of Dementia after Traumatic Brain Injury

    PubMed Central

    Sayed, Nasreen; Culver, Carlee; Dams-O'Connor, Kristen; Hammond, Flora

    2013-01-01

    Abstract Traumatic brain injury (TBI) in early to mid-life is associated with an increased risk of dementia in late life. It is unclear whether TBI results in acceleration of Alzheimer's disease (AD)-like pathology or has features of another dementing condition, such as chronic traumatic encephalopathy, which is associated with more-prominent mood, behavior, and motor disturbances than AD. Data from the National Alzheimer's Coordinating Center (NACC) Uniform Data Set was obtained over a 5-year period. Categorical data were analyzed using Fisher's exact test. Continuous parametric data were analyzed using the Student's t-test. Nonparametric data were analyzed using the Mann-Whitney test. Overall, 877 individuals with dementia who had sustained TBI were identified in the NACC database. Only TBI with chronic deficit or dysfunction was associated with increased risk of dementia. Patients with dementia after TBI (n=62) were significantly more likely to experience depression, anxiety, irritability, and motor disorders than patients with probable AD. Autopsy data were available for 20 of the 62 TBI patients. Of the patients with TBI, 62% met National Institute on Aging-Reagan Institute “high likelihood” criteria for AD. We conclude that TBI with chronic deficit or dysfunction is associated with an increased odds ratio for dementia. Clinically, patients with dementia associated with TBI were more likely to have symptoms of depression, agitation, irritability, and motor dysfunction than patients with probable AD. These findings suggest that dementia in individuals with a history of TBI may be distinct from AD. PMID:23374007

  12. Land use and land cover classification for rural residential areas in China using soft-probability cascading of multifeatures

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin

    2017-10-01

    A multifeature soft-probability cascading scheme is proposed to solve the problem of land use and land cover (LULC) classification of rural residential areas in China using high-spatial-resolution images. The proposed method is used to build midlevel LULC features. Local features are frequently considered as low-level feature descriptors in a midlevel feature learning method. However, spectral and textural features, which are very effective low-level features, are neglected. The acquisition of the dictionary of sparse coding is unsupervised, which reduces the discriminative power of the midlevel feature. Thus, we propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model to utilize the different effective low-level features and improve the discriminability of midlevel feature descriptors. First, three kinds of typical low-level features, namely, dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of a unary potential and a pairwise potential, is employed to construct an LULC classification map. Experimental results show that the proposed classification scheme can achieve impressive performance, with a total accuracy of about 87%.

  13. Traveling wavefront solutions to nonlinear reaction-diffusion-convection equations

    NASA Astrophysics Data System (ADS)

    Indekeu, Joseph O.; Smets, Ruben

    2017-08-01

    Physically motivated modified Fisher equations are studied in which nonlinear convection and nonlinear diffusion are allowed for besides the usual growth and spread of a population. It is pointed out that in a large variety of cases separable functions in the form of exponentially decaying sharp wavefronts solve the differential equation exactly provided a co-moving point source or sink is active at the wavefront. The velocity dispersion and front steepness may differ from those of some previously studied exact smooth traveling wave solutions. For an extension of the reaction-diffusion-convection equation, featuring a memory effect in the form of a maturity delay for growth and spread, smooth exact wavefront solutions are also obtained. The stability of the solutions is verified analytically and numerically.

  14. Exact axisymmetric solutions of the Maxwell equations in a nonlinear nondispersive medium.

    PubMed

    Petrov, E Yu; Kudrin, A V

    2010-05-14

    The features of propagation of intense waves are of great interest for theory and experiment in electrodynamics and acoustics. The behavior of nonlinear waves in a bounded volume is of special importance and, at the same time, is an extremely complicated problem. It seems almost impossible to find a rigorous solution to such a problem even for any model of nonlinearity. We obtain the first exact solution of this type. We present a new method for deriving exact solutions of the Maxwell equations in a nonlinear medium without dispersion and give examples of the obtained solutions that describe propagation of cylindrical electromagnetic waves in a nonlinear nondispersive medium and free electromagnetic oscillations in a cylindrical cavity resonator filled with such a medium.

  15. Quantum decay model with exact explicit analytical solution

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, contrary to what classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from the predictions of quantum perturbation methods. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  16. Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models

    NASA Astrophysics Data System (ADS)

    Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido

    2016-06-01

    We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size, which is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
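
    For contrast, the standard rejection-free kinetic Monte Carlo step that such algorithms are compared against (which does assume exponentially distributed waiting times) can be sketched as follows; the function name is hypothetical.

```python
import math
import random

def kmc_step(rates, t):
    """One rejection-free kinetic Monte Carlo (BKL/Gillespie) step:
    choose an event with probability proportional to its rate, and
    advance time by an exponentially distributed increment with mean
    1 / (total rate). Returns (chosen event index, new time)."""
    total = sum(rates)
    r = random.random() * total
    cum, chosen = 0.0, len(rates) - 1
    for i, w in enumerate(rates):
        cum += w
        if r < cum:
            chosen = i
            break
    dt = -math.log(random.random()) / total
    return chosen, t + dt
```

    The event-driven algorithm of the abstract replaces the assumed exponential waiting-time distribution with the exact one implied by the master equation.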

  17. Large deviations of a long-time average in the Ehrenfest urn model

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T, , takes any specified value aN, where . For long observation time, , a Donsker–Varadhan large deviation principle holds: , where … denote additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for . The WKB method also uncovers the (very simple) time history of the system which dominates the contribution of different time histories to .
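
    For intuition about the time-averaged occupation studied above, the discrete-time, non-interacting Ehrenfest urn (one of N balls chosen uniformly moves to the other urn each step) can be simulated directly; this is a minimal sketch with hypothetical names, not the paper's continuous-time treatment or its exact rate functions.

```python
import random

def ehrenfest_time_average(N, T, seed=0):
    """Simulate the two-urn discrete-time Ehrenfest model starting with all
    N balls in urn 1, and return the occupation fraction of urn 1 averaged
    over T steps (the equilibrium value is 1/2)."""
    rng = random.Random(seed)
    n1 = N
    acc = 0
    for _ in range(T):
        if rng.random() < n1 / N:   # a ball in urn 1 is picked and moves out
            n1 -= 1
        else:                        # a ball in urn 2 moves into urn 1
            n1 += 1
        acc += n1
    return acc / (T * N)
```

    The large-deviation analysis in the abstract quantifies how unlikely it is for this long-time average to deviate from 1/2.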

  18. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.

  19. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.

  20. Formal properties of the probability of fixation: identities, inequalities and approximations.

    PubMed

    McCandlish, David M; Epstein, Charles L; Plotkin, Joshua B

    2015-02-01

    The formula for the probability of fixation of a new mutation is widely used in theoretical population genetics and molecular evolution. Here we derive a series of identities, inequalities and approximations for the exact probability of fixation of a new mutation under the Moran process (equivalent results hold for the approximate probability of fixation under the Wright-Fisher process, after an appropriate change of variables). We show that the logarithm of the fixation probability has particularly simple behavior when the selection coefficient is measured as a difference of Malthusian fitnesses, and we exploit this simplicity to derive inequalities and approximations. We also present a comprehensive comparison of both existing and new approximations for the fixation probability, highlighting those approximations that induce a reversible Markov chain when used to describe the dynamics of evolution under weak mutation. To demonstrate the power of these results, we consider the classical problem of determining the total substitution rate across an ensemble of biallelic loci and prove that, at equilibrium, a strict majority of substitutions are due to drift rather than selection. Copyright © 2014 Elsevier Inc. All rights reserved.
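    As a concrete anchor for the discussion, the exact fixation probability of a single mutant of relative fitness r = 1 + s under the Moran process has the classical closed form (1 - 1/r) / (1 - r^(-N)); the population size and selection coefficients below are illustrative values, used only to check the neutral limit 1/N and the ordering in s.

```python
def moran_fixation(N, s):
    """Exact fixation probability of one mutant of relative fitness
    r = 1 + s in a Moran population of constant size N."""
    if abs(s) < 1e-12:          # neutral limit: p = 1/N
        return 1.0 / N
    r = 1.0 + s
    return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

p_neutral = moran_fixation(100, 0.0)     # exactly 1/N = 0.01
p_benef   = moran_fixation(100, 0.05)    # approaches 1 - 1/r for large N
p_delet   = moran_fixation(100, -0.05)   # exponentially small
```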

  1. Predicting Quarantine Failure Rates

    PubMed Central

    2004-01-01

    Preemptive quarantine through contact-tracing effectively controls emerging infectious diseases. Occasionally this quarantine fails, however, and infected persons are released. The probability of quarantine failure is typically estimated from disease-specific data. Here a simple, exact estimate of the failure rate is derived that does not depend on disease-specific parameters. This estimate is universally applicable to all infectious diseases. PMID:15109418

  2. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  3. On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1984-01-01

    Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)

  4. Studies in Mathematics, Volume IV. Geometry.

    ERIC Educational Resources Information Center

    Kutuzov, B. V.

    This book is a translation of a Russian text. The translation is exact, and the language used by the author has not been brought up to date. The volume is probably most useful as a source of supplementary materials for high school mathematics. It is also useful for teachers to broaden their mathematical background. Chapters included in the text…

  5. The role of chemical transport in the brown-rot decay resistance of modified wood

    Treesearch

    Samuel Zelinka; R. Ringman; A. Pilgard; E. E. Thybring; Joseph Jakes; K. Richter

    2016-01-01

    Chemical modification of wood increases decay resistance but the exact mechanisms remain poorly understood. Recently, Ringman and coauthors examined established theories addressing why modified wood has increased decay resistance and concluded that the most probable cause of inhibition and/or delay of initiation of brown-rot decay is lowering the equilibrium moisture...

  6. ALARA: The next link in a chain of activation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, P.P.H.; Henderson, D.L.

    1996-12-31

    The Adaptive Laplace and Analytic Radioactivity Analysis [ALARA] code has been developed as the next link in the chain of DKR radioactivity codes. Its methods address the criticisms of DKR while retaining its best features. While DKR ignored loops in the transmutation/decay scheme to preserve the exactness of the mathematical solution, ALARA incorporates new computational approaches without jeopardizing the most important features of DKR's physical modelling and mathematical methods. The physical model uses 'straightened-loop, linear chains' to achieve the same accuracy in the loop solutions as is demanded in the rest of the scheme. In cases where a chain has no loops, the exact DKR solution is used. Otherwise, ALARA adaptively chooses between a direct Laplace inversion technique and a Laplace expansion inversion technique to optimize the accuracy and speed of the solution. All of these methods result in matrix solutions which allow the fastest and most accurate solution of exact pulsing histories. Since the entire history is solved for each chain as it is created, ALARA achieves the optimum combination of high accuracy, high speed and low memory usage. 8 refs., 2 figs.

  7. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines

    PubMed Central

    Press, William H.

    2006-01-01

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude. PMID:17159155
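    The object being computed can be illustrated with a deliberately naive O(N³) quadrant of the transform, summing along one simple digitization of the lines; the fast O(N² log N) recursion computes these same sums, and the digital-line definition below is only an illustrative stand-in, not the exact line family used by Götz, Druckmüller, or Brady.

```python
def naive_drt_quadrant(img):
    """Sum pixel values along digital lines j = t + floor(s*i/(N-1))
    for slopes s and intercepts t in 0..N-1 (one quadrant only).
    This is the O(N^3) definition that a fast DRT evaluates recursively."""
    N = len(img)
    out = [[0.0] * N for _ in range(N)]
    for s in range(N):
        for t in range(N):
            total = 0.0
            for i in range(N):
                j = t + (s * i) // (N - 1) if N > 1 else t
                if 0 <= j < N:
                    total += img[i][j]
            out[s][t] = total
    return out

# A vertical line of ones is picked up whole at slope 0, intercept 3.
N = 8
img = [[1.0 if j == 3 else 0.0 for j in range(N)] for i in range(N)]
R = naive_drt_quadrant(img)
```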

  8. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines.

    PubMed

    Press, William H

    2006-12-19

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude.

  9. Potential clinical applications of photoacoustics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosencwaig, A.

    1982-09-01

    Photoacoustic spectroscopy offers the opportunity for extending the exact science of noninvasive spectral analysis to intact medical substances such as tissues. Thermal-wave imaging offers the potential for microscopic imaging of thermal features in biological matter.

  10. Multiple-solution problems in a statistics classroom: an example

    NASA Astrophysics Data System (ADS)

    Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing

    2017-11-01

    The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact probability mass distribution for the sum of face values. Four different ways of solving the problem are discussed. The solutions span various basic concepts in different mathematical disciplines (sample space in probability theory, the probability generating function in statistics, integer partition in basic combinatorics and individual risk model in actuarial science) and thus promote upper undergraduate students' awareness of knowledge connections between their courses. All solutions of the example are implemented using the R statistical software package.
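    The generating-function route amounts to multiplying one polynomial per die and reading off coefficients. Below is a sketch using exact rational convolution; the specific non-traditional dice are the classical Sicherman pair (faces 1,2,2,3,3,4 and 1,3,4,5,6,8), chosen as an illustration because they reproduce the pmf of two standard dice (the paper's own dice set is not specified here).

```python
from collections import Counter
from fractions import Fraction

def pmf_of_die(faces):
    """Exact pmf of a single fair die with the given face values."""
    n = len(faces)
    return {v: Fraction(k, n) for v, k in Counter(faces).items()}

def pmf_of_sum(*dice):
    """Convolve per-die pmfs: the probability generating function of the
    sum is the product of the dice's generating functions."""
    pmf = {0: Fraction(1)}
    for faces in dice:
        d = pmf_of_die(faces)
        new = {}
        for s, p in pmf.items():
            for v, q in d.items():
                new[s + v] = new.get(s + v, Fraction(0)) + p * q
        pmf = new
    return pmf

standard  = pmf_of_sum([1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6])
sicherman = pmf_of_sum([1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8])
```

    The two distributions coincide exactly, e.g. P(sum = 7) = 6/36 in both cases.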

  11. Photon escape probabilities in a semi-infinite plane-parallel medium. [from electron plasma surrounding galactic X-ray sources

    NASA Technical Reports Server (NTRS)

    Williams, A. C.; Elsner, R. F.; Weisskopf, M. C.; Darbro, W.

    1984-01-01

    It is shown in this work how to obtain the probabilities of photons escaping from a cold electron plasma environment after having undergone an arbitrary number of scatterings. This is done by retaining the exact differential cross section for Thomson scattering as opposed to using its polarization and angle averaged form. The results are given in the form of recursion relations. The geometry used is the semi-infinite plane-parallel geometry with a photon source located on a plane at an arbitrary optical depth below the surface. Analytical expressions are given for the probabilities which are accurate over a wide range of initial optical depth. These results can be used to model compact X-ray galactic sources which are surrounded by an electron-rich plasma.

  12. Characteristics of a Two-Dimensional Hydrogenlike Atom

    NASA Astrophysics Data System (ADS)

    Skobelev, V. V.

    2018-06-01

    Using the customary and well-known representation of the radiation probability of a hydrogen-like atom in the three-dimensional case, a general expression for the probability of single-photon emission of a two-dimensional atom has been obtained along with an expression for the particular case of the transition from the first excited state to the ground state, in the latter case in comparison with corresponding expressions for the three-dimensional atom and the one-dimensional atom. Arguments are presented in support of the claim that this method of calculation gives a value of the probability that is identical to the value given by exact methods of QED extended to the subspace {0, 1, 2}. Relativistic corrections of order (Zα)⁴ to the usual Schrödinger value of the energy (of order (Zα)²) are also discussed.

  13. Ice Flow in Debris Aprons and Central Peaks, and the Application of Crater Counts

    NASA Astrophysics Data System (ADS)

    Hartmann, W. K.; Quantin, C.; Werner, S. C.; Popova, O.

    2009-03-01

    We apply studies of decameter-scale craters to studies of probable ice-flow-related features on Mars, to interpret both chronometry and geological processes among the features. We find losses of decameter-scale craters relative to nearby plains, probably due to sublimation.

  14. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
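    The exact probabilities above are expressed through the distribution function of a weighted sum of independent chi-square variables. Where the exact form is inconvenient, that distribution function is easy to sanity-check by Monte Carlo; the weights, degrees of freedom, and quantile below are arbitrary illustrative values, not ones from the paper.

```python
import random

def weighted_chisq_cdf(weights, dfs, t, n=100_000, seed=1):
    """Monte Carlo estimate of P(sum_i w_i * X_i <= t), where the X_i
    are independent chi-square variables with dfs[i] degrees of freedom,
    each sampled as a sum of squared standard normals."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        s = sum(w * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(k))
                for w, k in zip(weights, dfs))
        hits += s <= t
    return hits / n

# Check against a known case: chi-square with 2 df is Exponential(1/2),
# so P(X <= 2) = 1 - exp(-1) ≈ 0.6321.
p = weighted_chisq_cdf([1.0], [2], 2.0)
```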

  15. Statistics on continuous IBD data: Exact distribution evaluation for a pair of full(half)-sibs and a pair of a (great-) grandchild with a (great-) grandparent

    PubMed Central

    Stefanov, Valeri T

    2002-01-01

    Background Pairs of related individuals are widely used in linkage analysis. Most of the tests for linkage analysis are based on statistics associated with identity by descent (IBD) data. Current biotechnology provides data on very densely packed loci and may therefore yield almost continuous IBD data for pairs of closely related individuals. The distribution theory for statistics on continuous IBD data is therefore of interest. In particular, distributional results which allow the evaluation of p-values for relevant tests are of importance. Results A technology is provided for numerical evaluation, with any given accuracy, of the cumulative probabilities of some statistics on continuous genome data for pairs of closely related individuals. In the case of a pair of full-sibs, the following statistics are considered: (i) the proportion of genome with 2 (at least 1) haplotypes shared identical-by-descent (IBD) on a chromosomal segment, (ii) the number of distinct pieces (subsegments) of a chromosomal segment, on each of which exactly 2 (at least 1) haplotypes are shared IBD. The natural counterparts of these statistics for the other relationships are also considered. Relevant Maple codes are provided for a rapid evaluation of the cumulative probabilities of such statistics. The genomic continuum model, with Haldane's model for the crossover process, is assumed. Conclusions A technology, together with relevant software codes for its automated implementation, is provided for exact evaluation of the distributions of relevant statistics associated with continuous genome data on closely related individuals. PMID:11996673

  16. An exact solution for a thick domain wall in general relativity

    NASA Technical Reports Server (NTRS)

    Goetz, Guenter; Noetzold, Dirk

    1989-01-01

    An exact solution of the Einstein equations for a static, planar domain wall with finite thickness is presented. At infinity, density and pressure vanish and the space-time tends to the Minkowski vacuum on one side of the wall and to the Taub vacuum on the other side. A surprising feature of this solution is that the density and pressure distributions are symmetric about the central plane of the wall, whereas the space-time metric, and therefore also the gravitational field experienced by a test particle, is asymmetric.

  17. Towards an exact correlated orbital theory for electrons

    NASA Astrophysics Data System (ADS)

    Bartlett, Rodney J.

    2009-12-01

    The formal and computational attraction of effective one-particle theories like Hartree-Fock and density functional theory raises the question of how far such approaches can be taken to offer exact results for selected properties of electrons in atoms, molecules, and solids. Some properties can be exactly described within an effective one-particle theory, like principal ionization potentials and electron affinities. This fact can be used to develop equations for a correlated orbital theory (COT) that guarantees a correct one-particle energy spectrum. They are built upon a coupled-cluster based frequency-independent self-energy operator presented here, which distinguishes the approach from Dyson theory. The COT also offers an alternative to Kohn-Sham density functional theory (DFT), whose objective is to represent the electronic density exactly as a single determinant, while paying less attention to the energy spectrum. For any estimate of two-electron terms, COT offers a litmus test of its accuracy for principal IPs and EAs. This feature for approximating the COT equations is illustrated numerically.

  18. Recovery time in quantum dynamics of wave packets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strekalov, M. L., E-mail: strekalov@kinetics.nsc.ru

    2017-01-15

    A wave packet formed by a linear superposition of bound states with an arbitrary energy spectrum returns arbitrarily close to the initial state after a quite long time. A method in which quantum recovery times are calculated exactly is developed. In particular, an exact analytic expression is derived for the recovery time in the limiting case of a two-level system. In the general case, the reciprocal recovery time is proportional to the Gauss distribution that depends on two parameters (mean value and variance of the return probability). The dependence of the recovery time on the mean excitation level of the system is established. The recovery time is the longest for the maximal excitation level.

  19. Spin-related origin of the magnetotransport feature at filling factor 7/11

    NASA Astrophysics Data System (ADS)

    Gamez, Gerardo; Muraki, Koji

    2010-03-01

    Experiments by Pan et al. disclosed quantum Hall (QH) effect-like features at unconventional filling fractions, such as 4/11 and 7/11, not included in the Jain sequence [1]. These features were considered as evidence for a new class of fractional quantum Hall (FQH) states whose origin, unlike ordinary FQH states, is linked to interactions between composite fermions (CFs). However, the exact origin of these features is not well established yet. Here we focus on 7/11, where a minimum in the longitudinal resistance and a plateau-like structure in the Hall resistance are observed at a much higher field, 11.4 T, in a 30-nm quantum well (QW). Our density-dependent studies show that at this field, the FQH states flanking 7/11, viz. the 2/3 and 3/5 states, are both fully spin polarized. Despite this fact, tilted-field experiments reveal that the 7/11 feature weakens and then disappears upon tilting. Using a CF model, we show that the spin degree of freedom may not be completely frozen in the region between the 2/3 and 3/5 states even when both states are fully polarized. Systematic studies unveil that the exact location of the 7/11 feature depends on the electron density and the QW width, in accordance with the model. Our model can also account for the reported contrasting behavior upon tilting of 7/11 and its electron-hole counterpart 4/11. [1] Pan et al., Phys. Rev. Lett. 90, 016801 (2003).

  20. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
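    For the simplest of the four benchmark networks, the birth-death process, the stationary solution of the chemical master equation is exact and closed-form, which is what makes such problems useful references for checking a sampler. The sketch below uses illustrative rates (constant birth, linear death), not the parameter values from the paper.

```python
def birth_death_stationary(birth, death, n_max):
    """Stationary distribution of the CME for a constant birth rate k
    and linear death rate g*n, truncated at n_max.  Detailed balance
    gives pi(n) ∝ prod_{m=1..n} k/(g*m) = (k/g)^n / n!, i.e. a Poisson
    law with mean k/g."""
    pi = [1.0]
    for n in range(1, n_max + 1):
        pi.append(pi[-1] * birth / (death * n))
    z = sum(pi)
    return [p / z for p in pi]

pi = birth_death_stationary(birth=10.0, death=1.0, n_max=60)
mean = sum(n * p for n, p in enumerate(pi))  # ≈ k/g = 10 for a Poisson law
```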

  1. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  2. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  3. Model Selection for the Multiple Model Adaptive Algorithm for In-Flight Simulation.

    DTIC Science & Technology

    1987-12-01

    of the two models, while the other model was given a probability of approximately zero. If the probabilities were exactly one and zero for the...Figures 6-103 through 6-107. From Figure 6-103, it can be seen that the probability of the model associated with the 10,000 ft, 0.35 Mach flight con

  4. Measuring Forest Area Loss Over Time Using FIA Plots and Satellite Imagery

    Treesearch

    Michael L. Hoppus; Andrew J. Lister

    2005-01-01

    How accurately can FIA plots, scattered at 1 per 6,000 acres, identify often rare forest land loss, estimated at less than 1 percent per year in the Northeast? Here we explore this question mathematically, empirically, and by comparing FIA plot estimates of forest change with satellite image based maps of forest loss. The mathematical probability of exactly estimating...

  5. Mechanisms of stochastic focusing and defocusing in biological reaction networks: insight from accurate chemical master equation (ACME) solutions.

    PubMed

    Gursoy, Gamze; Terebus, Anna; Youfang Cao; Jie Liang

    2016-08-01

    Stochasticity plays important roles in the regulation of biochemical reaction networks when the copy numbers of molecular species are small. Studies based on the Stochastic Simulation Algorithm (SSA) have shown that a basic reaction system can display stochastic focusing (SF) by increasing the sensitivity of the network as a result of the signal noise. Although SSA has been widely used to study stochastic networks, it is ineffective in examining rare events, and this becomes a significant issue when the tails of probability distributions are relevant, as is the case for SF. Here we use the ACME method to obtain the exact solution of the discrete Chemical Master Equations and to study a network where SF was reported. We showed that the level of SF depends on the degree of the fluctuations of the signal molecule. We discovered that signaling noise under certain conditions in the same reaction network can lead to a decrease in the system sensitivities, and thus the network can experience stochastic defocusing. These results highlight the fundamental role of stochasticity in biological reaction networks and the need for exact computation of the probability landscape of the molecules in the system.

  6. Column generation algorithms for virtual network embedding in flexi-grid optical networks.

    PubMed

    Lin, Rongping; Luo, Shan; Zhou, Jingwei; Wang, Sheng; Chen, Bin; Zhang, Xiaoning; Cai, Anliang; Zhong, Wen-De; Zukerman, Moshe

    2018-04-16

    Network virtualization provides means for efficient management of network resources by embedding multiple virtual networks (VNs) to share efficiently the same substrate network. Such virtual network embedding (VNE) gives rise to a challenging problem of how to optimize resource allocation to VNs and to guarantee their performance requirements. In this paper, we provide VNE algorithms for efficient management of flexi-grid optical networks. We provide an exact algorithm aiming to minimize the total embedding cost in terms of spectrum cost and computation cost for a single VN request. Then, to achieve scalability, we also develop a heuristic algorithm for the same problem. We apply these two algorithms for a dynamic traffic scenario where many VN requests arrive one-by-one. We first demonstrate by simulations for the case of a six-node network that the heuristic algorithm obtains blocking probabilities very close to those of the exact algorithm (about 0.2% higher). Then, for a network of realistic size (namely, USnet) we demonstrate that the blocking probability of our new heuristic algorithm is about one order of magnitude lower than that of a simpler heuristic algorithm, which was a component of an earlier published algorithm.

  7. Distribution of the largest aftershocks in branching models of triggered seismicity: theory of the universal Båth law.

    PubMed

    Saichev, A; Sornette, D

    2005-05-01

    Using the epidemic-type aftershock sequence (ETAS) branching model of triggered seismicity, we apply the formalism of generating probability functions to calculate exactly the average difference between the magnitude of a mainshock and the magnitude of its largest aftershock over all generations. This average magnitude difference is found empirically to be independent of the mainshock magnitude and equal to 1.2, a universal behavior known as Båth's law. Our theory shows that Båth's law holds only sufficiently close to the critical regime of the ETAS branching process. Allowing for error bars of ±0.1 around Båth's constant value of 1.2, our exact analytical treatment of Båth's law provides new constraints on the productivity exponent α and the branching ratio n: 0.9 ≲ α ≤ 1. We propose a method for measuring α based on the predicted renormalization of the Gutenberg-Richter distribution of the magnitudes of the largest aftershocks. We also introduce the "second Båth law for foreshocks": the probability that a main earthquake turns out to be a foreshock does not depend on its magnitude ρ.

  8. Reconciling uncertain costs and benefits in Bayes nets for invasive species management

    USGS Publications Warehouse

    Burgman, M.A.; Wintle, B.A.; Thompson, C.A.; Moilanen, A.; Runge, M.C.; Ben-Haim, Y.

    2010-01-01

    Bayes nets are used increasingly to characterize environmental systems and to formalize probabilistic reasoning in support of decision making. These networks treat probabilities as exact quantities. Sensitivity analysis can be used to evaluate the importance of assumptions and parameter estimates. Here, we outline an application of info-gap theory to Bayes nets that evaluates the sensitivity of decisions to possibly large errors in the underlying probability estimates and utilities. We apply it to an example of the management and eradication of Red Imported Fire Ants in southern Queensland, Australia, and show how changes in management decisions can be justified when uncertainty is considered. © 2009 Society for Risk Analysis.

  9. Decision theory for computing variable and value ordering decisions for scheduling problems

    NASA Technical Reports Server (NTRS)

    Linden, Theodore A.

    1993-01-01

    Heuristics that guide search are critical when solving large planning and scheduling problems, but most variable and value ordering heuristics are sensitive to only one feature of the search state. One wants to combine evidence from all features of the search state into a subjective probability that a value choice is best, but there has been no solid semantics for merging evidence when it is conceived in these terms. Instead, variable and value ordering decisions should be viewed as problems in decision theory. This led to two key insights: (1) The fundamental concept that allows heuristic evidence to be merged is the net incremental utility that will be achieved by assigning a value to a variable. Probability distributions about net incremental utility can merge evidence from the utility function, binary constraints, resource constraints, and other problem features. The subjective probability that a value is the best choice is then derived from probability distributions about net incremental utility. (2) The methods used for rumor control in Bayesian Networks are the primary way to prevent cycling in the computation of probable net incremental utility. These insights lead to semantically justifiable ways to compute heuristic variable and value ordering decisions that merge evidence from all available features of the search state.

  10. Probability and the changing shape of response distributions for orientation.

    PubMed

    Anderson, Britt

    2014-11-18

    Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.

  11. Weak localization of electromagnetic waves and opposition phenomena exhibited by high-albedo atmosphereless solar system objects.

    PubMed

    Mishchenko, Michael I; Rosenbush, Vera K; Kiselev, Nikolai N

    2006-06-20

    The totality of new and previous optical observations of a class of high-albedo solar system objects at small phase angles reveals a unique combination of extremely narrow brightness and polarization features centered exactly at opposition. The specific morphological parameters of these features provide almost unequivocal evidence that they are caused by the renowned effect of coherent backscattering.

  12. First-passage and risk evaluation under stochastic volatility

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Perelló, Josep

    2009-07-01

    We solve the first-passage problem for the Heston random diffusion model. We obtain exact analytical expressions for the survival and hitting probabilities to a given level of return. We study several asymptotic behaviors and obtain approximate forms of these probabilities which prove, among other interesting properties, the nonexistence of a mean first-passage time. One significant result is the evidence of extreme deviations, implying a high risk of default, when a certain dimensionless parameter, related to the strength of the volatility fluctuations, increases. We confront the model with empirical daily data and observe that it is able to capture a very broad domain of the hitting probability. We believe that this may provide an effective tool for risk control, readily applicable to real markets for both portfolio management and trading strategies.

  13. Finite-size scaling of survival probability in branching processes

    NASA Astrophysics Data System (ADS)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, G(y) = 2y e^y / (e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
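    The survival probability studied here can be computed exactly for any offspring law by iterating its probability generating function, since the extinction probability after n generations is the n-fold iterate of the pgf at 0. A short sketch; the critical Poisson offspring distribution is an illustrative choice, not taken from the paper.

```python
import math

def survival_probability(pgf, n):
    """P(a Galton-Watson process survives at least n generations):
    iterate the offspring pgf, q_{k+1} = pgf(q_k), from q_0 = 0,
    and return 1 - q_n (q_n = extinction probability by generation n)."""
    q = 0.0
    for _ in range(n):
        q = pgf(q)
    return 1.0 - q

# Poisson(m) offspring: pgf(s) = exp(m (s - 1)); m = 1 is the critical point
poisson_pgf = lambda s, m=1.0: math.exp(m * (s - 1.0))

p50 = survival_probability(poisson_pgf, 50)
p100 = survival_probability(poisson_pgf, 100)
```

    At criticality the decay follows Kolmogorov's asymptotic P_n ≈ 2/(nσ²), the regime in which the scaling function G(y) above applies.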

  14. Knot probabilities in random diagrams

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Chapman, Harrison; Mastin, Matt

    2016-10-01

    We consider a natural model of random knotting: choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10-crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data show a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.

  15. The probabilistic convolution tree: efficient exact Bayesian inference for faster LC-MS/MS protein inference.

    PubMed

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log²(k)) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions.
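    A toy sketch of the pairwise-convolution idea behind the probabilistic convolution tree: pmfs of count variables are merged two at a time in a balanced tree, so the exact distribution of their sum is built in logarithmic depth. Naive O(n²) convolutions are used for clarity; the runtime bound quoted above relies on FFT-based convolution, which this sketch omits.

```python
def convolve(p, q):
    """Exact convolution of two pmfs over counts 0..len-1."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def convolution_tree(pmfs):
    """Merge k pmfs pairwise, layer by layer, into the pmf of their sum."""
    layer = list(pmfs)
    while len(layer) > 1:
        nxt = [convolve(layer[i], layer[i + 1])
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:               # odd element carries over to next layer
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Sum of 8 independent Bernoulli(0.5) counts is Binomial(8, 0.5)
pmf = convolution_tree([[0.5, 0.5]] * 8)
```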

  16. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    PubMed Central

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log²(k)) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234

  17. Impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via multiple integral approach.

    PubMed

    Chandrasekar, A; Rakkiyappan, R; Cao, Jinde

    2015-10-01

    This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria that are suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time delay. The considered impulsive effects can achieve synchronization under partly unknown transition probabilities. In addition, a multiple integral approach is proposed to strengthen the analysis of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then made solvable as a set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Parabolic features and the erosion rate on Venus

    NASA Technical Reports Server (NTRS)

    Strom, Robert G.

    1993-01-01

    The impact cratering record on Venus consists of 919 craters covering 98 percent of the surface. These craters are remarkably well preserved, and most show pristine structures including fresh ejecta blankets. Only 35 craters (3.8 percent) have had their ejecta blankets embayed by lava, and most of these occur in the Atla-Beta Regio region, an area thought to be recently active. Parabolic features are associated with 66 of the 919 craters. These craters range in size from 6 to 105 km in diameter. The parabolic features are thought to be the result of the deposition of fine-grained ejecta by winds in the dense venusian atmosphere. The deposits cover about 9 percent of the surface and none appear to be embayed by younger volcanic materials. However, there appears to be a paucity of these deposits in the Atla-Beta Regio region, which may be due to the more recent volcanism in this area of Venus. Since parabolic features are probably fine-grained, wind-deposited ejecta, all impact craters on Venus probably had these deposits at some time in the past. The older deposits have probably been either eroded or buried by eolian processes. Therefore, the present population of these features is probably associated with the most recent impact craters on the planet. Furthermore, the size/frequency distribution of craters with parabolic features is virtually identical to that of the total crater population. This suggests that there has been little loss of small parabolic features compared to large ones; otherwise there should be a significant and systematic paucity of craters with parabolic features at decreasing sizes compared to the total crater population. Whatever is erasing the parabolic features apparently does so uniformly, regardless of the areal extent of the deposit. The lifetime of parabolic features and the eolian erosion rate on Venus can be estimated from the average age of the surface and the present population of parabolic features.

  19. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating, over the feature space, the difference between the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.

  20. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.

  1. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E. T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
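    For a single mean constraint, the classic MaxEnt solution described above is an exponentially tilted distribution p_i ∝ exp(λx_i), with λ fixed by the constraint. A sketch of that point-probability computation using Jaynes' standard die example (observed mean 4.5); this illustrates classic MaxEnt only, not the paper's generalized density.

```python
import math

def maxent_mean(values, target_mean, lo=-5.0, hi=5.0, tol=1e-12):
    """Classic MaxEnt with one mean constraint: p_i ~ exp(lam * x_i).
    The implied mean is increasing in lam, so solve for lam by bisection.
    (Bracket [-5, 5] is ample for the die example and avoids overflow.)"""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Die faces 1..6 with observed mean 4.5 (a uniform die would give 3.5)
p = maxent_mean(range(1, 7), 4.5)
```

    The generalized approach of the record would replace the exact target mean by a density over it, propagating that uncertainty through this map.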

  2. Ictal semiology in hippocampal versus extrahippocampal temporal lobe epilepsy.

    PubMed

    Gil-Nagel, A; Risinger, M W

    1997-01-01

    We have analysed retrospectively the clinical features and electroencephalograms in 35 patients with complex partial seizures of temporal lobe origin who were seizure-free after epilepsy surgery. Two groups were differentiated for statistical analysis: 16 patients had hippocampal temporal lobe seizures (HTS) and 19 patients had extrahippocampal temporal lobe seizures (ETS) associated with a small tumour of the lateral or inferior temporal cortex. All patients in the HTS group had ictal onset verified with intracranial recordings (depth or subdural electrodes). In the ETS group, extrahippocampal onset was verified with intracranial recordings in eight patients and assumed, because of failure of a previous amygdalohippocampectomy, in one patient. Historical information, ictal semiology and ictal EEG of typical seizures were analysed in each patient. The occurrence of early and late oral automatisms and dystonic posturing of an upper extremity was analysed separately. A prior history of febrile convulsions was obtained in 13 HTS patients (81.3%) but in none with ETS (P < 0.0001, Fisher's exact test). An epigastric aura preceded seizures in five patients with HTS (31.3%) and none with ETS (P = 0.0135, Fisher's exact test), while an aura with experiential content was recalled by nine patients with ETS (47.4%) and none with HTS (P = 0.0015, Fisher's exact test). Early oral automatisms occurred in 11 patients with HTS (68.8%) and in two with ETS (10.5%) (P = 0.0005, Fisher's exact test). Early motor involvement of the contralateral upper extremity without oral automatisms occurred in three patients with HTS (18.8%) and in 10 with ETS (52.6%) (P = 0.0298, Fisher's exact test). Arrest reaction, vocalization, speech, facial grimace, postictal cough, late oral automatisms and late motor involvement of the contralateral arm and hand occurred with similar frequency in both groups. These observations show that the early clinical features of HTS and ETS are different.

  3. A model of jam formation in congested traffic

    NASA Astrophysics Data System (ADS)

    Bunzarova, N. Zh; Pesheva, N. C.; Priezzhev, V. B.; Brankov, J. G.

    2017-12-01

    We study a model of irreversible jam formation in congested vehicular traffic on an open segment of a single-lane road. The vehicles obey a stochastic discrete-time dynamics which is a limiting case of the generalized Totally Asymmetric Simple Exclusion Process. Its characteristic features are: (a) the existing clusters of jammed cars cannot break into parts; (b) when the leading vehicle of a cluster hops to the right, the whole cluster follows it deterministically, and (c) any two clusters of vehicles, occupying consecutive positions on the chain, may become nearest-neighbors and merge irreversibly into a single cluster. The above dynamics was used in a one-dimensional model of irreversible aggregation by Bunzarova and Pesheva [Phys. Rev. E 95, 052105 (2017)]. The model has three stationary non-equilibrium phases, depending on the probabilities of injection (α), ejection (β), and hopping (p) of particles: a many-particle one, MP, a completely jammed phase CF, and a mixed MP+CF phase. An exact expression for the stationary probability P(1) of a completely jammed configuration in the mixed MP+CF phase is obtained. The gap distribution between neighboring clusters of jammed cars at large lengths L of the road is studied. Three regimes of evolution of the width of a single gap are found: (i) growing gaps with length of the order O(L) when β > p; (ii) shrinking gaps with length of the order O(1) when β < p; and (iii) critical gaps at β = p, of the order O(L^(1/2)). These results are supported by extensive Monte Carlo calculations.

  4. Camera trap placement and the potential for bias due to trails and other features

    PubMed Central

    Forrester, Tavis D.

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11–33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9–38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design. PMID:29045478

  5. Camera trap placement and the potential for bias due to trails and other features.

    PubMed

    Kolowski, Joseph M; Forrester, Tavis D

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11-33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9-38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design.

  6. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
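    Under the program's assumptions, the one-dimensional probability of misclassification after a linear projection has a closed form for two equal-prior Gaussian classes with shared covariance. A sketch of that evaluation for a shared diagonal covariance; the class parameters are hypothetical, and this shows only the criterion, not the LFSPMC search over linear combinations.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def misclassification_probability(mu1, mu2, var, w):
    """Error of the Bayes rule on the projection w.x for two equal-prior
    Gaussians with shared diagonal covariance `var`: phi(-delta/2), where
    delta separates the projected means in projected-sd units."""
    proj = lambda v: sum(wi * vi for wi, vi in zip(w, v))
    sigma = math.sqrt(sum(wi * wi * vi for wi, vi in zip(w, var)))
    delta = abs(proj(mu2) - proj(mu1)) / sigma
    return phi(-delta / 2.0)

mu1, mu2, var = (0.0, 0.0), (2.0, 1.0), (1.0, 4.0)
# Fisher direction for a shared diagonal covariance: w_i = (mu2_i - mu1_i)/var_i
w = [(b - a) / v for a, b, v in zip(mu1, mu2, var)]
err = misclassification_probability(mu1, mu2, var, w)
```

    Minimizing this quantity over the linear combination w is the criterion described above; for equal covariances the Fisher direction is optimal, and the harder multi-class case is what the program is described as handling numerically.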

  7. Observations of running penumbral waves.

    NASA Technical Reports Server (NTRS)

    Zirin, H.; Stein, A.

    1972-01-01

    Quiet sunspots with well-developed penumbrae show running intensity waves with periods of about 300 sec. The waves appear connected with umbral flashes of exactly half the period. The waves are concentric and regular, with a nearly constant velocity of about 10 km/sec. They are probably sound waves and show intensity fluctuations of 10 to 20% in the H alpha centerline or wing. Their energy is tiny compared to the heat deficit of the umbra.

  8. A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure

    DTIC Science & Technology

    1989-04-14

    element methods for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. The second...exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibit significant...study is to demonstrate and validate some discrete numerical methods essential for analyzing linear and nonlinear seismic effects in the surficial

  9. The transition probability and the probability for the left-most particle's position of the q-totally asymmetric zero range process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korhonen, Marko; Lee, Eunghyun

    2014-01-15

    We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz, and the resulting model is the q-boson model by Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula of the transition probability of the q-TAZRP via the Bethe ansatz. By using the transition probability we find the probability distribution of the left-most particle's position at time t. To find the probability for the left-most particle's position we find a new identity corresponding to the identity for the asymmetric simple exclusion process by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.

  10. Comparison of νμ->νe Oscillation calculations with matter effects

    NASA Astrophysics Data System (ADS)

    Gordon, Michael; Toki, Walter

    2013-04-01

    An introduction to neutrino oscillations in vacuum is presented, followed by a survey of various techniques for obtaining either exact or approximate expressions for νμ->νe oscillations in matter. The method devised by Mann, Kafka, Schneps, and Altinok produces an exact expression for the oscillation probability by determining explicitly the evolution operator. The method used by Freund yields an approximate oscillation probability by diagonalizing the Hamiltonian, finding the eigenvalues and eigenvectors, and then using those to find modified mixing angles with the matter effect taken into account. The method developed by Arafune, Koike, and Sato uses an alternate approach to find an approximation of the evolution operator. These methods are compared to each other using parameters from both the T2K and LBNE experiments.
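    For contrast with the matter-effect methods surveyed, the two-flavor vacuum oscillation probability has a simple closed form. The parameter values below are illustrative numbers of roughly T2K scale, not the inputs used in the comparison.

```python
import math

def p_numu_to_nue_vacuum(theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor vacuum approximation:
    P = sin^2(2 theta) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L in km, E in GeV (1.267 collects the unit factors)."""
    return (math.sin(2.0 * theta) ** 2
            * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2)

# Illustrative: L ~ 295 km, E ~ 0.6 GeV, dm2 ~ 2.5e-3 eV^2, theta ~ 0.16 rad
p = p_numu_to_nue_vacuum(theta=0.16, dm2_eV2=2.5e-3, L_km=295.0, E_GeV=0.6)
```

    Matter effects replace theta and dm2 with effective in-medium values, which is what the diagonalization- and evolution-operator-based methods above compute.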

  11. Maps on statistical manifolds exactly reduced from the Perron-Frobenius equations for solvable chaotic maps

    NASA Astrophysics Data System (ADS)

    Goto, Shin-itiro; Umeno, Ken

    2018-03-01

    Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps defined on a subset of the real line, whose invariant probability distribution is a Cauchy distribution with parameters depending on the family parameter. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and it is shown that the derived maps can be characterized in information-geometric terms. Also, with a symplectic structure induced from the statistical structure, symplectic and information-geometric aspects of the derived maps are discussed.
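The unparametrized core of this family is the classical fact that the half-Boole map T(x) = (x - 1/x)/2 leaves the standard Cauchy distribution exactly invariant (via x = cot θ, T doubles θ). A Monte Carlo sketch of that invariance (the paper's parametrized family and its statistical-manifold reduction are more general than this):

```python
import math, random

def boole_half(x):
    """T(x) = (x - 1/x)/2.  Writing x = cot(theta) turns T into
    theta -> 2*theta (mod pi), so the standard Cauchy law is invariant."""
    return 0.5 * (x - 1.0 / x)

def cauchy_cdf(x):
    return 0.5 + math.atan(x) / math.pi

random.seed(1)
# draw standard Cauchy variates and push them once through the map
xs = (math.tan(math.pi * (random.random() - 0.5)) for _ in range(200000))
ys = [boole_half(x) for x in xs if x != 0.0]

# the empirical CDF of the mapped sample should still match the Cauchy CDF
emp = sum(y <= 1.0 for y in ys) / len(ys)
print(emp, cauchy_cdf(1.0))  # both near 0.75
```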

  12. Exact solutions for rate and synchrony in recurrent networks of coincidence detectors.

    PubMed

    Mikula, Shawn; Niebur, Ernst

    2008-11-01

    We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity, with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations.
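The discrete-time Markov chain machinery underlying the solutions can be illustrated on the smallest possible example, a two-state chain whose stationary distribution is found by power iteration (a toy sketch, not the authors' coincidence-detector network):

```python
def stationary(P, iters=2000):
    """Stationary distribution of a row-stochastic transition matrix,
    obtained by repeatedly applying pi <- pi * P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# two-state chain: stay-probabilities 0.9 and 0.8
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = stationary(P)
print(pi)  # converges to [2/3, 1/3]
```

Solving pi = pi * P directly gives pi = (2/3, 1/3), which the iteration reproduces; mean firing rates in the paper's networks play the role of such stationary quantities.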

  13. Feasibility of streamlining an interactive Bayesian-based diagnostic support tool designed for clinical practice

    NASA Astrophysics Data System (ADS)

    Chen, Po-Hao; Botzolakis, Emmanuel; Mohan, Suyash; Bryan, R. N.; Cook, Tessa

    2016-03-01

    In radiology, diagnostic errors occur either through the failure of detection or incorrect interpretation. Errors are estimated to occur in 30-35% of all exams and contribute to 40-54% of medical malpractice litigations. In this work, we focus on reducing incorrect interpretation of known imaging features. Existing literature categorizes the cognitive biases that lead a radiologist to an incorrect diagnosis despite correctly recognized abnormal imaging features: anchoring bias, framing effect, availability bias, and premature closure. Computational methods make a unique contribution, as they do not exhibit the same cognitive biases as a human. Bayesian networks formalize the diagnostic process. They modify pre-test diagnostic probabilities using clinical and imaging features, arriving at a post-test probability for each possible diagnosis. To translate Bayesian networks to clinical practice, we implemented an entirely web-based open-source software tool. In this tool, the radiologist first selects a network of choice (e.g. basal ganglia). Then, large, clearly labeled buttons displaying salient imaging features are displayed on the screen, serving both as a checklist and as input. As the radiologist inputs the value of an extracted imaging feature, the conditional probabilities of each possible diagnosis are updated. The software presents its level of diagnostic discrimination using a Pareto distribution chart, updated with each additional imaging feature. Active collaboration with the clinical radiologist is a feasible approach to software design and leads to design decisions closely coupling the complex mathematics of conditional probability in Bayesian networks with practice.
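The pre-test to post-test update at the heart of such a tool can be sketched as a single Bayes step. The diagnoses, feature name, and numbers below are hypothetical, and a real Bayesian network would also model dependencies between features rather than this naive per-feature update:

```python
def update(prior, likelihood, finding):
    """Multiply each diagnosis' prior by P(finding | dx) and renormalize.
    One naive-Bayes step; a full network handles feature dependencies."""
    post = {dx: p * likelihood[dx][finding] for dx, p in prior.items()}
    z = sum(post.values())
    return {dx: p / z for dx, p in post.items()}

# hypothetical two-diagnosis example with one imaging feature
prior = {"dx_A": 0.7, "dx_B": 0.3}
likelihood = {"dx_A": {"restricted_diffusion": 0.1},
              "dx_B": {"restricted_diffusion": 0.8}}
post = update(prior, likelihood, "restricted_diffusion")
print(post)  # dx_B overtakes dx_A after the finding
```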

  14. A three-dimensional autonomous nonlinear dynamical system modelling equatorial ocean flows

    NASA Astrophysics Data System (ADS)

    Ionescu-Kruse, Delia

    2018-04-01

    We investigate a nonlinear three-dimensional model for equatorial flows, finding exact solutions that capture the most relevant geophysical features: depth-dependent currents, poleward or equatorial surface drift and a vertical mixture of upward and downward motions.

  15. Features of sound propagation through and stability of a finite shear layer

    NASA Technical Reports Server (NTRS)

    Koutsoyannis, S. P.

    1976-01-01

    The plane wave propagation, the stability and the rectangular duct mode problems of a compressible inviscid linearly sheared parallel, but otherwise homogeneous flow, are shown to be governed by Whittaker's equation. The exact solutions for the perturbation quantities are essentially Whittaker M-functions. A number of known results are obtained as limiting cases of exact solutions. For the compressible finite thickness shear layer it is shown that no resonances and no critical angles exist for all Mach numbers, frequencies and shear layer velocity profile slopes except in the singular case of the vortex sheet.

  16. Exact coupling threshold for structural transition reveals diversified behaviors in interconnected networks.

    PubMed

    Darabi Sahneh, Faryad; Scoglio, Caterina; Van Mieghem, Piet

    2015-10-01

    An interconnected network features a structural transition between two regimes [F. Radicchi and A. Arenas, Nat. Phys. 9, 717 (2013)]: one where the network components are structurally distinguishable and one where the interconnected network functions as a whole. Our exact solution for the coupling threshold uncovers network topologies with unexpected behaviors. Specifically, we show conditions under which superdiffusion, introduced by Gómez et al. [Phys. Rev. Lett. 110, 028701 (2013)], can occur despite the network components functioning distinctly. Moreover, we find that components of certain interconnected network topologies are indistinguishable despite very weak coupling between them.
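One exact ingredient of this analysis can be checked by hand: for two one-to-one coupled layers with coupling weight p, the supra-Laplacian always has 2p as an eigenvalue, with the layer-antisymmetric vector as eigenvector; the structural transition occurs where 2p stops being the second-smallest eigenvalue. A pure-Python sketch of that eigenvector identity (the specific two path-graph layers are an illustrative choice):

```python
def path_laplacian(n):
    """Combinatorial Laplacian of an n-node path graph."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        L[i][i] += 1.0; L[i + 1][i + 1] += 1.0
        L[i][i + 1] -= 1.0; L[i + 1][i] -= 1.0
    return L

def supra_laplacian(L1, L2, p):
    """Two one-to-one coupled layers: [[L1 + p*I, -p*I], [-p*I, L2 + p*I]]."""
    n = len(L1)
    S = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            S[i][j] = L1[i][j]
            S[n + i][n + j] = L2[i][j]
        S[i][i] += p; S[n + i][n + i] += p
        S[i][n + i] -= p; S[n + i][i] -= p
    return S

n, p = 3, 0.01
S = supra_laplacian(path_laplacian(n), path_laplacian(n), p)
v = [1.0] * n + [-1.0] * n            # antisymmetric across the two layers
Sv = [sum(S[i][j] * v[j] for j in range(2 * n)) for i in range(2 * n)]
print(Sv)  # equals 2p * v, since each Laplacian annihilates the all-ones vector
```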

  17. Rapid extraction of image texture by co-occurrence using a hybrid data structure

    NASA Astrophysics Data System (ADS)

    Clausi, David A.; Zhao, Yongping

    2002-07-01

    Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve the preferred computational speed, the list must be kept sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ=3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
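The core idea, storing only the non-zero co-occurring probabilities in a hashed structure so that statistics never touch zero entries, can be sketched with a plain dictionary (a sketch of the principle, not the paper's C-language GLCHS with its chained linked lists):

```python
from collections import defaultdict
import math

def cooccurrence_stats(img, dx=1, dy=0):
    """Accumulate grey-level co-occurrence counts for displacement (dx, dy)
    in a hash table, then compute texture statistics over the non-zero
    probabilities only."""
    counts = defaultdict(int)
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(img[r][c], img[r2][c2])] += 1
    total = sum(counts.values())
    probs = {pair: n / total for pair, n in counts.items()}
    contrast = sum(p * (i - j) ** 2 for (i, j), p in probs.items())
    entropy = -sum(p * math.log(p) for p in probs.values())
    return probs, contrast, entropy

img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
probs, contrast, entropy = cooccurrence_stats(img)
print(contrast, entropy)
```

For a window with G grey levels, the dense GLCM holds G*G cells, while the hash holds only the pairs that actually occur, which is the source of the reported speedups.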

  18. Spanish Jesuits in the Philippines: geophysical research and synergies between science, education and trade, 1865-1898.

    PubMed

    Anduaga, Aitor

    2014-10-01

    In 1865, Spanish Jesuits founded the Manila Observatory, the earliest of the Far East centres devoted to typhoon and earthquake studies. Also on Philippine soil and under the direction of the Jesuits, in 1884 the Madrid government inaugurated the first Meteorological Service in the Spanish Kingdom, and most probably in the Far East. Nevertheless, these achievements went practically unnoticed in the historiography of science, and the process of geophysical dissemination that unfolded fits neither of the two types of transmitter of knowledge that historians have identified in the missionary diffusion of the exact sciences in colonial contexts. Rather than regarding science as merely a stimulus to their functionary and missionary tasks, Spanish Jesuits used their overseas posting to produce and publish original research, a feature that would place them within the typology of the 'seeker' rather than the 'functionary' (in stark contrast to what the standard typology sustains). This paper also analyses examples of synergies between science, education and trade, which denote, inter alia, the existence of a broad and solid educational structure in the Manila Mission that sustained the strength of the research enterprise.

  19. Poster - 18: New features in EGSnrc for photon cross sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Elsayed; Mainegra-Hing, Ernesto; Rogers, Davi

    2016-08-15

    Purpose: To implement two new features in the EGSnrc Monte Carlo system. The first is an option to account for photonuclear attenuation, which can contribute a few percent to the total cross section at the higher end of the energy range of interest to medical physics. The second is an option to use exact NIST XCOM photon cross sections. Methods: For the first feature, the photonuclear total cross sections are generated from the IAEA evaluated data. In the current, first-order implementation, after a photonuclear event, there is no energy deposition or secondary particle generation. The implementation is validated against deterministic calculations and experimental measurements of transmission signals. For the second feature, before this work, if the user explicitly requested XCOM photon cross sections, EGSnrc still used its own internal incoherent scattering cross sections. These differ by up to 2% from XCOM data between 30 keV and 40 MeV. After this work, exact XCOM incoherent scattering cross sections are an available option. Minor interpolation artifacts in pair and triplet XCOM cross sections are also addressed. The default for photon cross sections in EGSnrc is XCOM except for the new incoherent scattering cross sections, which have to be explicitly requested. The photonuclear, incoherent, pair and triplet data from this work are available for elements and compounds for photon energies from 1 keV to 100 GeV. Results: Both features are implemented and validated in EGSnrc. Conclusions: The two features are part of the standard EGSnrc distribution as of version 4.2.3.2.

  20. Coastal Asia as seen from the ISS

    NASA Image and Video Library

    2001-03-30

    ISS01-E-5082 (December 2000) --- This image of coastal Asia was taken from the International Space Station with a digital still camera and a 400mm lens with a very narrow field of view. Early in the Space Station Program, communications with the crew are less direct, and the exact time that this image was taken could not be determined. Because there are relatively few photographs of Earth taken with this long lens, and because the times are not available to calculate the exact position of the Station over the Earth when the photograph was taken, the exact location of the photograph cannot be determined. Many of these logistical problems will be resolved as camera equipment is replaced and communications with the crew improve. Catalogers believe the coast most resembles Indonesia, and this determination will be maintained until future images allow correction and refinement of the location. The photograph is a striking example of the degree to which humans modify coastal environments. The large green squares in the image probably represent a combination of rice cultivation and aquaculture.

  1. Harnessing Computational Biology for Exact Linear B-Cell Epitope Prediction: A Novel Amino Acid Composition-Based Feature Descriptor.

    PubMed

    Saravanan, Vijayakumar; Gautham, Namasivayam

    2015-10-01

    Proteins embody epitopes that serve as their antigenic determinants. Epitopes occupy a central place in integrative biology, not to mention as targets for novel vaccine, pharmaceutical, and systems diagnostics development. The presence of T-cell and B-cell epitopes has been extensively studied due to their potential in synthetic vaccine design. However, reliable prediction of linear B-cell epitopes remains a formidable challenge. Earlier studies have reported a discrepancy in amino acid composition between epitopes and non-epitopes. Hence, this study proposed and developed a novel amino acid composition-based feature descriptor, Dipeptide Deviation from Expected Mean (DDE), to distinguish linear B-cell epitopes from non-epitopes effectively. In this study, for the first time, only exact linear B-cell epitopes and non-epitopes have been utilized for developing the prediction method, unlike the use of epitope-containing regions in earlier reports. To evaluate the performance of the DDE feature vector, models have been developed with two widely used machine-learning techniques, Support Vector Machine and AdaBoost-Random Forest. Five-fold cross-validation performance of the proposed method with the error-free dataset and datasets from other studies achieved an overall accuracy between nearly 61% and 73%, with balance between the sensitivity and specificity metrics. Performance of the DDE feature vector was better (by about 2% to 12% in accuracy) in comparison to other amino acid-derived features on different datasets. This study reflects the efficiency of the DDE feature vector in enhancing linear B-cell epitope prediction performance, compared to other feature representations. The proposed method is freely available to researchers as a stand-alone tool, particularly for those interested in vaccine design and novel molecular target development for systems therapeutics and diagnostics: https://github.com/brsaran/LBEEP.
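The DDE descriptor can be sketched from its published definition: the observed dipeptide fraction Dc is compared with a theoretical mean Tm derived from codon counts in the standard genetic code, normalized by the variance Tv = Tm(1 - Tm)/(L - 1). This is a sketch under that reading of the definition; the exact conventions in the authors' LBEEP tool may differ:

```python
from itertools import product

# sense codons per amino acid in the standard genetic code (61 total)
CODONS = {'A': 4, 'C': 2, 'D': 2, 'E': 2, 'F': 2, 'G': 4, 'H': 2, 'I': 3,
          'K': 2, 'L': 6, 'M': 1, 'N': 2, 'P': 4, 'Q': 2, 'R': 6, 'S': 6,
          'T': 4, 'V': 4, 'W': 1, 'Y': 2}

def dde(seq):
    """Dipeptide Deviation from Expected Mean:
    DDE(i,j) = (Dc - Tm) / sqrt(Tv), with Dc the observed dipeptide
    fraction, Tm the codon-usage expected mean, Tv = Tm*(1-Tm)/(L-1)."""
    L = len(seq)
    pairs = [seq[i:i + 2] for i in range(L - 1)]
    out = {}
    for a, b in product(CODONS, repeat=2):
        dp = a + b
        dc = pairs.count(dp) / (L - 1)
        tm = (CODONS[a] / 61) * (CODONS[b] / 61)
        tv = tm * (1 - tm) / (L - 1)
        out[dp] = (dc - tm) / tv ** 0.5
    return out

feats = dde("ACDEFACDA")     # toy peptide; real epitopes are the inputs
print(feats["AC"])           # strongly over-represented in this toy sequence
```

The result is a fixed-length 400-dimensional vector regardless of peptide length, which is what makes it usable with SVM or AdaBoost-Random Forest classifiers.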

  2. Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex

    PubMed Central

    Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire

    2015-01-01

    Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269
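The transition-probability statistics manipulated in this study are, computationally, maximum-likelihood bigram estimates over sound segments. A toy illustration (the hyphen-segmented "corpus" below is hypothetical, not the study's stimulus set):

```python
from collections import Counter

def transition_probs(words):
    """Maximum-likelihood transition probabilities P(next | current)
    between successive segments of hyphen-segmented words."""
    bigrams = Counter()
    history = Counter()
    for w in words:
        segs = w.split("-")
        for a, b in zip(segs, segs[1:]):
            bigrams[(a, b)] += 1
            history[a] += 1
    return {(a, b): n / history[a] for (a, b), n in bigrams.items()}

# hypothetical phoneme-segmented mini-corpus
corpus = ["k-ae-t", "k-ae-p", "b-ae-t"]
P = transition_probs(corpus)
print(P[("ae", "t")])  # 2 of the 3 'ae' tokens are followed by 't'
```

Words and nonwords in the experiment differ precisely in how typical their segment-to-segment values of such a table are.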

  3. Exact analytical solution of irreversible binary dynamics on networks.

    PubMed

    Laurence, Edward; Young, Jean-Gabriel; Melnik, Sergey; Dubé, Louis J

    2018-03-01

    In binary cascade dynamics, the nodes of a graph are in one of two possible states (inactive, active), and nodes in the inactive state make an irreversible transition to the active state as soon as their precursors satisfy a predetermined condition. We introduce a set of recursive equations to compute the probability of reaching any final state, given an initial state and a specification of the transition probability function of each node. Because the naive recursive approach for solving these equations takes factorial time in the number of nodes, we also introduce an accelerated algorithm, built around a breadth-first search procedure. This algorithm solves the equations as efficiently as possible in exponential time.

  4. Exact analytical solution of irreversible binary dynamics on networks

    NASA Astrophysics Data System (ADS)

    Laurence, Edward; Young, Jean-Gabriel; Melnik, Sergey; Dubé, Louis J.

    2018-03-01

    In binary cascade dynamics, the nodes of a graph are in one of two possible states (inactive, active), and nodes in the inactive state make an irreversible transition to the active state as soon as their precursors satisfy a predetermined condition. We introduce a set of recursive equations to compute the probability of reaching any final state, given an initial state and a specification of the transition probability function of each node. Because the naive recursive approach for solving these equations takes factorial time in the number of nodes, we also introduce an accelerated algorithm, built around a breadth-first search procedure. This algorithm solves the equations as efficiently as possible in exponential time.

  5. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for both the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k values associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
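The defining relation between k and the noncentral t-distribution can be illustrated numerically: for the one-sided case, k = t'_gamma(n-1, z_p*sqrt(n)) / sqrt(n), where t'_gamma is the gamma-quantile of a noncentral t. The sketch below estimates that quantile by Monte Carlo rather than by the paper's integration and root-solving codes:

```python
import math, random

def k_one_sided(n, p=0.95, gamma=0.95, draws=100000, seed=0):
    """One-sided normal tolerance factor via the noncentral-t relation
    k = t'_gamma(n-1, z_p*sqrt(n)) / sqrt(n); the noncentral t quantile
    is estimated by Monte Carlo sampling of T = (Z + nc) / sqrt(V/df)."""
    rng = random.Random(seed)
    z_p = 1.6448536269514722  # standard normal 0.95 quantile
    nc = z_p * math.sqrt(n)
    df = n - 1
    samples = []
    for _ in range(draws):
        z = rng.gauss(0.0, 1.0)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
        samples.append((z + nc) / math.sqrt(chi2 / df))
    samples.sort()
    return samples[int(gamma * draws)] / math.sqrt(n)

print(k_one_sided(10))  # tabulated exact value is about 2.911
```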

  6. Quantization of the Szekeres system

    NASA Astrophysics Data System (ADS)

    Paliathanasis, A.; Zampeli, Adamantia; Christodoulakis, T.; Mustafa, M. T.

    2018-06-01

    We study the quantum corrections on the Szekeres system in the context of canonical quantization in the presence of symmetries. We start from an effective point-like Lagrangian with two integrals of motion, one corresponding to the Hamiltonian and the other to a second-rank Killing tensor. Imposing their quantum versions on the wave function results in a solution that is then interpreted in the context of Bohmian mechanics. In this semiclassical approach, it is shown that there are no quantum corrections; thus the classical trajectories of the Szekeres system are not affected at this level. Finally, we define a probability function which shows that a stationary surface of the probability corresponds to a classical exact solution.

  7. Performance of synchronous optical receivers using atmospheric compensation techniques.

    PubMed

    Belmonte, Aniceto; Khan, Joseph

    2008-09-01

    We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.

  8. Modeling H2O and CO2 in Optically Thick Comets Using Asymmetric Spherical Coupled Escape Probability and Application to Comet C/2009 P1 Garradd Observations of CO, H2O, and CO2

    NASA Astrophysics Data System (ADS)

    Gersch, Alan M.; Feaga, Lori M.; A’Hearn, Michael F.

    2018-02-01

    We have adapted Coupled Escape Probability, a new exact method of solving radiative transfer problems, to asymmetric spherical geometries in order to model optically thick cometary comae. Here we present the extension of our model and corresponding results for two additional primary volatile species of interest, H2O and CO2, in purely theoretical comets. We also present detailed modeling and results for the specific examples of CO, H2O, and CO2 observations of C/2009 P1 Garradd by the Deep Impact flyby spacecraft.

  9. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
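The role of a finite buffer can be illustrated on the simplest birth-death network, 0 -> X at rate k_birth and X -> 0 at rate n*k_death, whose truncated steady state is known in closed form (a toy sketch of state-space truncation, not the ACME algorithm itself):

```python
import math

def birth_death_steady_state(k_birth, k_death, buffer_size):
    """Steady state of the dCME for a birth-death process, truncated to a
    finite buffer.  Detailed balance gives
    pi_{n+1} = pi_n * k_birth / ((n+1) * k_death),
    i.e. a Poisson distribution with mean k_birth/k_death, renormalized
    over the buffer."""
    w = [1.0]
    for n in range(buffer_size):
        w.append(w[-1] * k_birth / ((n + 1) * k_death))
    z = sum(w)
    return [x / z for x in w]

pi = birth_death_steady_state(5.0, 1.0, buffer_size=40)
mean = sum(n * p for n, p in enumerate(pi))
print(mean)  # close to 5 once the buffer covers the probability mass
```

Choosing buffer_size too small visibly biases the mean; the paper's contribution is an a priori bound on exactly this truncation error for multi-scale networks.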

  10. Analytical expressions for the closure probability of a stiff wormlike chain for finite capture radius.

    PubMed

    Guérin, T

    2017-08-01

    Estimating the probability that two monomers of the same polymer chain are close together is a key ingredient to characterize intramolecular reactions and polymer looping. In the case of stiff wormlike polymers (rigid fluctuating elastic rods), for which end-to-end encounters are rare events, we derive an explicit analytical formula for the probability η(r_{c}) that the distance between the chain extremities is smaller than some capture radius r_{c}. The formula is asymptotically exact in the limit of stiff chains, and it leads to the identification of two distinct scaling regimes for the closure factor, originating from a strong variation of the fluctuations of the chain orientation at closure. Our theory is compatible with existing analytical results from the literature that cover the cases of a vanishing capture radius and of nearly fully extended chains.

  11. Diffusion in shear flow

    NASA Astrophysics Data System (ADS)

    Dufty, J. W.

    1984-09-01

    Diffusion of a tagged particle in a fluid with uniform shear flow is described. The continuity equation for the probability density describing the position of the tagged particle is considered. The diffusion tensor is identified by expanding the irreversible part of the probability current to first order in the gradient of the probability density, but with no restriction on the shear rate. The tensor is expressed as the time integral of a nonequilibrium autocorrelation function for the velocity of the tagged particle in its local fluid rest frame, generalizing the Green-Kubo expression to the nonequilibrium state. The tensor is evaluated from results obtained previously for the velocity autocorrelation function that are exact for Maxwell molecules in the Boltzmann limit. The effects of viscous heating are included and the dependence on frequency and shear rate is displayed explicitly. The mode-coupling contributions to the frequency and shear-rate dependent diffusion tensor are calculated.

  12. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
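The underlying MDP objective, a scheduler that maximizes the probability of reaching a target, can be sketched with plain value iteration on a tiny hand-built model (an illustration of the objective only, not the paper's symbolic-execution machinery):

```python
def max_reach_probability(states, target, actions, iters=100):
    """Value iteration for the maximal reachability probability in an MDP:
    v(s) = max over actions of sum_s' P(s' | s, a) * v(s'),
    with v(target) fixed at 1.  `actions[s]` lists one next-state
    distribution {next_state: prob} per available action."""
    v = {s: (1.0 if s == target else 0.0) for s in states}
    for _ in range(iters):
        for s in states:
            if s == target or not actions[s]:
                continue
            v[s] = max(sum(p * v[t] for t, p in dist.items())
                       for dist in actions[s])
    return v

# toy scheduler-synthesis instance: in s0 the scheduler chooses between
# a detour through s1 (then goal with prob 0.6) and a direct 50/50 gamble
states = ["s0", "s1", "fail", "goal"]
actions = {
    "s0": [{"s1": 1.0}, {"goal": 0.5, "fail": 0.5}],
    "s1": [{"goal": 0.6, "fail": 0.4}],
    "fail": [],
    "goal": [],
}
v = max_reach_probability(states, "goal", actions)
print(v["s0"])  # 0.6: the optimal scheduler routes through s1
```

The argmax at each state is precisely the synthesized scheduler; the paper's contribution is obtaining the transition probabilities themselves from symbolic execution rather than from a hand-built model.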

  13. New Image-Based Techniques for Prostate Biopsy and Treatment

    DTIC Science & Technology

    2012-04-01

    C-arm fluoroscopy, MICCAI 2011, Toronto, Canada, 2011. 4) Poster Presentation: Prostate Cancer Probability Estimation Based on DCE-DTI Features...and P. Kozlowski, “Prostate Cancer Probability Estimation Based on DCE-DTI Features and Support Vector Machine Classification,” Annual Meeting of... DTI), which characterize the de-phasing of the MR signal caused by molecular diffusion. Prostate cancer causes a pathological change in the tissue

  14. Human Papilloma Virus Infection Does Not Predict Response to Interferon Therapy in Ocular Surface Squamous Neoplasia.

    PubMed

    Galor, Anat; Garg, Nisha; Nanji, Afshan; Joag, Madhura; Nuovo, Gerard; Palioura, Sotiria; Wang, Gaofeng; Karp, Carol L

    2015-11-01

    To identify the frequency of human papilloma virus (HPV) in ocular surface squamous neoplasia (OSSN) and to evaluate differences in clinical features and treatment response of tumors with positive versus negative HPV results. Retrospective case series. Twenty-seven patients with OSSN. Ocular surface squamous neoplasia specimens were analyzed for the presence of HPV. Clinical features and response to interferon were determined retrospectively and linked to the presence (versus absence) of HPV. Clinical characteristics of OSSN by HPV status. Twenty-one of 27 tumors (78%) demonstrated positive HPV results. The HPV genotypes identified included HPV-16 in 10 tumors (48%), HPV-31 in 5 tumors, HPV-33 in 1 tumor, HPV-35 in 2 tumors, HPV-51 in 2 tumors, and a novel HPV in 3 tumors (total of 23 tumors because 1 tumor had 3 identified genotypes). Tumors found in the superior limbus were more likely to show positive HPV results (48% vs. 0%; P=0.06, Fisher exact test). Tumors with positive HPV-16 results were larger (68 vs. 34 mm2; P=0.08, Mann-Whitney U test) and were more likely to have papillomatous morphologic features (50% vs. 12%; P=0.07, Fisher exact test) compared with tumors showing negative results for HPV-16. Human papilloma virus status was not found to be associated with response to interferon therapy (P=1.0, Fisher exact test). Metrics found to be associated with a nonfavorable response to interferon were male gender and tumors located in the superior conjunctivae. The presence of HPV in OSSN seems to be more common in lesions located in the nonexposed, superior limbus. Human papilloma virus presence does not seem to be required for a favorable response to interferon therapy. Copyright © 2015 American Academy of Ophthalmology. All rights reserved.

  15. Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner.

    PubMed

    Gordon, H R; Brown, J W; Evans, R H

    1988-03-01

    For improved analysis of Coastal Zone Color Scanner (CZCS) imagery, the radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols (Rayleigh radiance) has been computed with an exact multiple scattering code, i.e., including polarization. The results indicate that the single scattering approximation normally used to compute this radiance can cause errors of up to 5% for small and moderate solar zenith angles. At large solar zenith angles, such as those encountered in the analysis of high-latitude imagery, the errors can become much larger, e.g., >10% in the blue band. The single scattering error also varies along individual scan lines. Comparison with multiple scattering computations using scalar transfer theory, i.e., ignoring polarization, shows that scalar theory can yield errors of approximately the same magnitude as single scattering when compared with exact computations at small to moderate values of the solar zenith angle. The exact computations can be easily incorporated into CZCS processing algorithms, and, for application to future instruments with higher radiometric sensitivity, a scheme is developed with which the effect of variations in the surface pressure could be easily and accurately included in the exact computation of the Rayleigh radiance. Direct application of these computations to CZCS imagery indicates that accurate atmospheric corrections can be made with solar zenith angles at least as large as 65 degrees and probably up to at least 70 degrees with a more sensitive instrument. This suggests that the new Rayleigh radiance algorithm should produce more consistent pigment retrievals, particularly at high latitudes.

  16. Worldline approach to helicity flip in plane waves

    NASA Astrophysics Data System (ADS)

    Ilderton, Anton; Torgrimsson, Greger

    2016-04-01

    We apply worldline methods to the study of vacuum polarization effects in plane wave backgrounds, in both scalar and spinor QED. We calculate helicity-flip probabilities to one-loop order, treating the background field exactly, and provide a toolkit of methods for use in investigations of higher-order processes. We also discuss the connections between the worldline, S-matrix, and lightfront approaches to vacuum polarization effects.

  17. Wide localized solutions of the parity-time-symmetric nonautonomous nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Meza, L. E. Arroyo; Dutra, A. de Souza; Hott, M. B.; Roy, P.

    2015-01-01

    By using canonical transformations we obtain localized (in space) exact solutions of the nonlinear Schrödinger equation (NLSE) with cubic and quintic space- and time-modulated nonlinearities and in the presence of time-dependent and inhomogeneous external potentials and amplification or absorption (source or drain) coefficients. We obtain a class of wide localized exact solutions of the NLSE in the presence of a number of non-Hermitian parity-time (PT)-symmetric external potentials, which are constituted by a mixing of external potentials and source or drain terms. The exact solutions found here can be applied to theoretical studies of ultrashort pulse propagation in optical fibers with focusing and defocusing nonlinearities. We show that, even in the presence of gain or loss terms, stable solutions can be found and that the PT symmetry is an important feature to guarantee the conservation of the average energy of the system.

  18. An algorithm for computing the gene tree probability under the multispecies coalescent and its application in the inference of population tree

    PubMed Central

    2016-01-01

    Motivation: A gene tree represents the evolutionary history of gene lineages that originate from multiple related populations. Under the multispecies coalescent model, lineages may coalesce outside the species (population) boundary. Given a species tree (with branch lengths), the gene tree probability is the probability of observing a specific gene tree topology under the multispecies coalescent model. There are two existing algorithms for computing the exact gene tree probability. The first, due to Degnan and Salter, enumerates all the so-called coalescent histories for the given species tree and gene tree topology; it runs in time exponential in the number of gene lineages in general. The second is the STELLS algorithm (2012), which is usually faster but also runs in exponential time in almost all cases. Results: In this article, we present a new algorithm, called CompactCH, for computing the exact gene tree probability. This new algorithm is based on the notion of compact coalescent histories: multiple coalescent histories are represented by a single compact coalescent history. The key advantage of our new algorithm is that it runs in polynomial time in the number of gene lineages if the number of populations is fixed to a constant. The new algorithm is more efficient than the STELLS algorithm both in theory and in practice when the number of populations is small and there are multiple gene lineages from each population. As an application, we show that CompactCH can be applied to the inference of a population tree (i.e. the population divergence history) from population haplotypes. Simulation results show that the CompactCH algorithm enables efficient and accurate inference of population trees with many more haplotypes than a previous approach. Availability: The CompactCH algorithm is implemented in the STELLS software package, which is available for download at http://www.engr.uconn.edu/ywu/STELLS.html. Contact: ywu@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307621
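    The smallest nontrivial instance of the gene tree probability is the classic three-taxon case, which has a closed form under the multispecies coalescent (a toy illustration, not the CompactCH algorithm): for a species tree ((A,B),C) whose internal branch has length T in coalescent units, the matching gene tree has probability 1 - (2/3)e^(-T) and each discordant topology has probability (1/3)e^(-T).

```python
import math

# Toy three-taxon gene tree probabilities under the multispecies
# coalescent (standard textbook result, not the CompactCH algorithm).
def three_taxon_gene_tree_probs(T: float) -> dict:
    """T: internal branch length of species tree ((A,B),C), in coalescent units."""
    discordant = math.exp(-T) / 3.0
    return {
        "((A,B),C)": 1.0 - 2.0 * discordant,  # concordant topology
        "((A,C),B)": discordant,
        "((B,C),A)": discordant,
    }
```

For any T > 0 the concordant topology is the most probable, and the three probabilities sum to one.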

  19. The Q Exactive HF, a Benchtop Mass Spectrometer with a Pre-filter, High-performance Quadrupole and an Ultra-high-field Orbitrap Analyzer*

    PubMed Central

    Scheltema, Richard Alexander; Hauschild, Jan-Peter; Lange, Oliver; Hornburg, Daniel; Denisov, Eduard; Damoc, Eugen; Kuehn, Andreas; Makarov, Alexander; Mann, Matthias

    2014-01-01

    The quadrupole Orbitrap mass spectrometer (Q Exactive) made a powerful proteomics instrument available in a benchtop format. It significantly boosted the number of proteins analyzable per hour and has now evolved into a proteomics analysis workhorse for many laboratories. Here we describe the Q Exactive Plus and Q Exactive HF mass spectrometers, which feature several innovations in comparison to the original Q Exactive instrument. A low-resolution pre-filter has been implemented within the injection flatapole, preventing unwanted ions from entering deep into the system, and thereby increasing its robustness. A new segmented quadrupole, with higher fidelity of isolation efficiency over a wide range of isolation windows, provides an almost 2-fold improvement of transmission at narrow isolation widths. Additionally, the Q Exactive HF has a compact Orbitrap analyzer, leading to higher field strength and almost doubling the resolution at the same transient times. With its very fast isolation and fragmentation capabilities, the instrument achieves overall cycle times of 1 s for a top 15 to 20 higher energy collisional dissociation method. We demonstrate the identification of 5000 proteins in standard 90-min gradients of tryptic digests of mammalian cell lysate, an increase of over 40% for detected peptides and over 20% for detected proteins. Additionally, we tested the instrument on peptide phosphorylation enriched samples, for which an improvement of up to 60% class I sites was observed. PMID:25360005

  20. Ponder This

    ERIC Educational Resources Information Center

    Yevdokimov, Oleksiy

    2009-01-01

    This article presents a problem set which includes a selection of probability problems. Probability theory started essentially as an empirical science and developed on the mathematical side later. The problems featured in this article demonstrate diversity of ideas and different concepts of probability, in particular, they refer to Laplace and…

  1. Therapeutic approaches against common structural features of toxic oligomers shared by multiple amyloidogenic proteins.

    PubMed

    Guerrero-Muñoz, Marcos J; Castillo-Carranza, Diana L; Kayed, Rakez

    2014-04-15

    Impaired proteostasis is one of the main features of all amyloid diseases, which are associated with the formation of insoluble aggregates from amyloidogenic proteins. The aggregation process can be caused by overproduction or poor clearance of these proteins. However, numerous reports suggest that amyloid oligomers are the most toxic species, rather than insoluble fibrillar material, in Alzheimer's, Parkinson's, and Prion diseases, among others. Although the exact protein that aggregates varies between amyloid disorders, they all share common structural features that can be used as therapeutic targets. In this review, we focus on therapeutic approaches against shared features of toxic oligomeric structures and future directions. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Anisotropic strange star with Tolman V potential

    NASA Astrophysics Data System (ADS)

    Shee, Dibyendu; Deb, Debabrata; Ghosh, Shounak; Ray, Saibal; Guha, B. K.

    In this paper, we present a strange stellar model using a Tolman V-type metric potential, employing the simplest form of the MIT bag equation of state (EOS) for the quark matter. We consider that the stellar system is spherically symmetric, compact and made of an anisotropic fluid. Choosing different values of n, we obtain exact solutions of the Einstein field equations and finally conclude that for a specific value of the parameter n = 1/2, we find physically acceptable features of the stellar object. Further, we conduct different physical tests, viz., the energy condition, generalized Tolman-Oppenheimer-Volkoff (TOV) equation, Herrera’s cracking concept, etc., to confirm the physical validity of the presented model. Matching conditions provide expressions for different constants, whereas maximization of the anisotropy parameter provides the bag constant. By using the observed data of several compact stars, we derive exact values of some of the physical parameters and exhibit their features in tabular form. Note that our predicted value of the bag constant satisfies the report of CERN-SPS and RHIC.

  3. The double slit experiment and the time reversed fire alarm

    NASA Astrophysics Data System (ADS)

    Halabi, Tarek

    2011-03-01

    When both slits of the double slit experiment are open, closing one paradoxically increases the detection rate at some points on the detection screen. Feynman famously warned that temptation to "understand" such a puzzling feature only draws us into blind alleys. Nevertheless, we gain insight into this feature by drawing an analogy between the double slit experiment and a time reversed fire alarm. Much as closing the slit increases probability of a future detection, ruling out fire drill scenarios, having heard the fire alarm, increases probability of a past fire (using Bayesian inference). Classically, Bayesian inference is associated with computing probabilities of past events. We therefore identify this feature of the double slit experiment with a time reversed thermodynamic arrow. We believe that much of the enigma of quantum mechanics is simply due to some variation of time's arrow.
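    The Bayesian step in the fire-alarm analogy is ordinary conditioning on the alarm. A tiny numerical illustration (all numbers invented for the example, not taken from the article):

```python
# Toy Bayes-rule illustration of the fire-alarm analogy: hearing the
# alarm raises the posterior probability of a past fire, just as
# closing a slit changes detection probabilities. Numbers are invented.
p_fire = 0.01           # prior probability of a real fire
p_alarm_fire = 0.95     # P(alarm | fire)
p_alarm_no_fire = 0.10  # P(alarm | no fire), e.g. a drill

# Total probability of hearing the alarm, then Bayes' rule:
p_alarm = p_alarm_fire * p_fire + p_alarm_no_fire * (1 - p_fire)
p_fire_given_alarm = p_alarm_fire * p_fire / p_alarm
```

With these numbers the posterior probability of a past fire is roughly nine times the prior, mirroring how ruling out drill scenarios shifts probability toward the fire hypothesis.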

  4. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  5. Bayesian statistical inference enhances the interpretation of contemporary randomized controlled trials.

    PubMed

    Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas

    2009-01-01

    Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
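    The "exact conjugate analysis" above can be sketched for a dichotomous outcome: with Beta priors, binomial event counts give Beta posteriors in closed form, and the posterior probability of any benefit follows by Monte Carlo. A hedged sketch (the event counts are illustrative, not from any trial in the review):

```python
import numpy as np

# Beta-binomial conjugate sketch: posterior probability that the
# intervention reduces event risk. Illustrative only.
rng = np.random.default_rng(0)

def prob_benefit(events_c, n_c, events_t, n_t, a=1.0, b=1.0, draws=200_000):
    """P(risk_treatment < risk_control | data) under Beta(a, b) priors."""
    post_c = rng.beta(a + events_c, b + n_c - events_c, draws)  # control posterior
    post_t = rng.beta(a + events_t, b + n_t - events_t, draws)  # treatment posterior
    return float(np.mean(post_t < post_c))

# Hypothetical counts: 30/100 events on control, 15/100 on treatment.
p = prob_benefit(events_c=30, n_c=100, events_t=15, n_t=100)
```

The same posterior draws can be thresholded at any clinically relevant effect size (e.g. relative risk below 0.8), which is how the probabilities of "larger benefits" in the abstract are obtained.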

  6. Exact results in the large system size limit for the dynamics of the chemical master equation, a one dimensional chain of equations.

    PubMed

    Martirosyan, A; Saakian, David B

    2011-08-01

    We apply the Hamilton-Jacobi equation (HJE) formalism to solve the dynamics of the chemical master equation (CME). We find exact analytical expressions (in the large system-size limit) for the probability distribution, including an explicit expression for the dynamics of the variance of the distribution. We also give the solution for some simple cases of the model with time-dependent rates. We derive the results of the Van Kampen method from the HJE approach using a special ansatz. Using the Van Kampen method, we give a system of ordinary differential equations (ODEs) to define the variance in a two-dimensional case. We perform numerics for the CME with stationary noise and give analytical criteria for the disappearance of bistability in the case of stationary noise in one-dimensional CMEs.

  7. Black holes are almost optimal quantum cloners

    NASA Astrophysics Data System (ADS)

    Adami, Christoph; Ver Steeg, Greg

    2015-06-01

    If black holes were able to clone quantum states, a number of paradoxes in black hole physics would disappear. However, the linearity of quantum mechanics forbids exact cloning of quantum states. Here we show that black holes indeed clone incoming quantum states with a fidelity that depends on the black hole’s absorption coefficient, without violating the no-cloning theorem because the clones are only approximate. Perfectly reflecting black holes are optimal universal ‘quantum cloning machines’ and operate on the principle of stimulated emission, exactly as their quantum optical counterparts. In the limit of perfect absorption, the fidelity of clones is only equal to what can be obtained via quantum state estimation methods. But for any absorption probability less than one, the cloning fidelity is nearly optimal as long as ω/T ≥ 10, a common parameter for modest-sized black holes.

  8. Exact Solutions for Rate and Synchrony in Recurrent Networks of Coincidence Detectors

    PubMed Central

    Mikula, Shawn; Niebur, Ernst

    2009-01-01

    We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations. PMID:18439133
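    The core machinery above is a discrete-time finite-state Markov chain over network states. A minimal sketch of that ingredient (a generic two-state chain with a known stationary distribution, not the authors' network model):

```python
import numpy as np

# Discrete-time finite-state Markov chain: the stationary distribution
# pi satisfies pi = pi @ P and is found here by power iteration.
# Generic illustration, not the coincidence-detector network itself.
def stationary(P: np.ndarray, iters: int = 1000) -> np.ndarray:
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # start uniform
    for _ in range(iters):
        pi = pi @ P                              # one transition step
    return pi

# Two-state chain with known answer pi = (b, a) / (a + b):
a, b = 0.2, 0.1
P = np.array([[1 - a, a],
              [b, 1 - b]])
pi = stationary(P)
```

Mean firing rates and cross-correlations in the paper are then read off from such state probabilities rather than estimated by simulation.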

  9. Exact renormalization group equations: an introductory review

    NASA Astrophysics Data System (ADS)

    Bagnuls, C.; Bervillier, C.

    2001-07-01

    We critically review the use of the exact renormalization group equations (ERGE) in the framework of the scalar theory. We lay emphasis on the existence of different versions of the ERGE and on an approximation method to solve it: the derivative expansion. The leading order of this expansion appears as an excellent textbook example to underline the nonperturbative features of the Wilson renormalization group theory. We limit ourselves to the consideration of the scalar field (this is why it is an introductory review) but the reader will find (at the end of the review) a set of references to existing studies on more complex systems.

  10. Features of sound propagation through and stability of a finite shear layer

    NASA Technical Reports Server (NTRS)

    Koutsoyannis, S. P.

    1977-01-01

    The plane wave propagation, the stability, and the rectangular duct mode problems of a compressible, inviscid, linearly sheared, parallel, homogeneous flow are shown to be governed by Whittaker's equation. The exact solutions for the perturbation quantities are essentially the Whittaker M-functions where the nondimensional quantities have precise physical meanings. A number of known results are obtained as limiting cases of the exact solutions. For the compressible finite thickness shear layer it is shown that no resonances and no critical angles exist for all Mach numbers, frequencies, and shear layer velocity profile slopes except in the singular case of the vortex sheet.

  11. Functional mechanisms of probabilistic inference in feature- and space-based attentional systems.

    PubMed

    Dombert, Pascasie L; Kuhns, Anna; Mengotti, Paola; Fink, Gereon R; Vossel, Simone

    2016-11-15

    Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Rapidly assessing the probability of exceptionally high natural hazard losses

    NASA Astrophysics Data System (ADS)

    Gollini, Isabella; Rougier, Jonathan

    2014-05-01

    One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. But the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company's current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive. If a probabilistic model is supplied for the loss process, then this tail probability can be computed, either directly, or by simulation. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. We present several different bounds, all of which can be computed nearly instantly from a very general event loss table. We provide a numerical illustration, and discuss the conditions under which the bound is tight. Although we consider the perspective of insurance and reinsurance companies, exactly the same issues concern the risk manager, who is typically very sensitive to large losses.
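    One simple instance of an instantly computable bound from an event loss table is Markov's inequality, P(L ≥ c) ≤ E[L]/c (a generic sketch; the paper's actual bounds are not reproduced here, and the table entries are invented):

```python
from itertools import product

# Event loss table of independent events: (occurrence probability, loss).
# Illustrative numbers only.
table = [(0.10, 50.0), (0.05, 200.0), (0.01, 1000.0)]

def markov_bound(table, c):
    """Markov's inequality: P(L >= c) <= E[L] / c, computed from the table."""
    mean_loss = sum(p * l for p, l in table)
    return min(1.0, mean_loss / c)

def exact_tail(table, c):
    """Exact P(L >= c) by enumerating on/off patterns (small tables only)."""
    prob = 0.0
    for pattern in product([0, 1], repeat=len(table)):
        pr, loss = 1.0, 0.0
        for hit, (p, l) in zip(pattern, table):
            pr *= p if hit else (1.0 - p)
            loss += l * hit
        if loss >= c:
            prob += pr
    return prob
```

The bound costs one pass over the table, while the exact enumeration is exponential in the number of events, which is exactly the trade-off motivating quick bounds for sensitivity analysis.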

  13. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
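    The central effect, that plug-in thresholds yield more failures than the nominal rate, is easy to reproduce by simulation (an illustrative sketch with a log-normal risk factor; sample sizes and the nominal rate are invented, and this is not the paper's exact calculation):

```python
import numpy as np

# A decision-maker estimates mu and sigma from n observations of the
# log risk factor and sets the threshold at the estimated (1 - eps)
# quantile. Averaged over replications, the realized failure frequency
# exceeds the nominal eps. Illustrative sketch only.
rng = np.random.default_rng(1)
n, eps, trials = 20, 0.05, 20_000
z = 1.6448536269514722  # standard normal (1 - eps) quantile for eps = 0.05

failures = 0
for _ in range(trials):
    sample = rng.normal(0.0, 1.0, n)          # log of the risk factor
    mu_hat, sigma_hat = sample.mean(), sample.std(ddof=1)
    threshold = mu_hat + z * sigma_hat        # plug-in quantile estimate
    failures += rng.normal() > threshold      # next-period failure?
realized = failures / trials
```

With n = 20 the realized frequency comes out noticeably above the nominal 5%, which is the bias the paper quantifies exactly for location-scale families.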

  14. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data.

    PubMed

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J

    2011-01-01

    Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for analysis of case-control studies for the case when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function, and therefore conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produced parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake with the risk of colorectal adenoma development.

  15. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be recently used for biometric and multimedia information retrieval systems. This technology is attained from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as a feature extraction method itself for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF result values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
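    A per-frame empirical PDF can be sketched as a normalized histogram over signal frames (a minimal illustration of the idea; frame length, bin count, and the random test signal are assumptions, not the authors' settings):

```python
import numpy as np

# Split a signal into fixed-length frames and use a normalized
# histogram as an empirical PDF, one per frame. Illustrative sketch.
def frame_pdfs(signal, frame_len=256, bins=32):
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    # Shared bin edges so PDFs from different frames are comparable:
    edges = np.linspace(signal.min(), signal.max(), bins + 1)
    pdfs = np.stack([np.histogram(f, bins=edges, density=True)[0]
                     for f in frames])
    return pdfs, edges

rng = np.random.default_rng(0)
pdfs, edges = frame_pdfs(rng.standard_normal(4096))  # stand-in for audio
```

Each row of `pdfs` integrates to one over the shared bin edges, so rows can be plotted and compared across speakers as in the paper.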

  16. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
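    The mixing rule described above can be sketched numerically: a softmax-weighted expectation over candidate feature mappings ("exponential probabilities"), gated against the plain maximum by a Bernoulli draw (an illustrative NumPy sketch of the described rule, not the authors' implementation; `p_max` is an assumed name):

```python
import numpy as np

# Mixout-style pooling sketch: expected value of k candidate feature
# maps under softmax weights, mixed with the maxout maximum via a
# Bernoulli gate. Illustrative only.
rng = np.random.default_rng(0)

def mixout(candidates: np.ndarray, p_max: float = 0.5) -> np.ndarray:
    """candidates: shape (k, n), k candidate feature maps per position."""
    w = np.exp(candidates - candidates.max(axis=0))      # stable softmax
    expected = (w / w.sum(axis=0) * candidates).sum(axis=0)
    maximum = candidates.max(axis=0)                     # maxout value
    mask = rng.random(candidates.shape[1]) < p_max       # Bernoulli gate
    return np.where(mask, maximum, expected)

z = rng.standard_normal((4, 10))
out = mixout(z)
```

Because the expectation is a convex combination of the candidates, the mixed output always lies between the minimum and maximum candidate values, so non-maximal features contribute without ever dominating.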

  17. On the probability of extinction of the Haiti cholera epidemic

    NASA Astrophysics Data System (ADS)

    Bertuzzo, Enrico; Finger, Flavio; Mari, Lorenzo; Gatto, Marino; Rinaldo, Andrea

    2014-05-01

    Nearly 3 years after its appearance in Haiti, cholera has already exacted more than 8,200 deaths and 670,000 reported cases and it is feared to become endemic. However, no clear evidence of a stable environmental reservoir of pathogenic Vibrio cholerae, the infective agent of the disease, has emerged so far, suggesting that the transmission cycle of the disease is being maintained by bacteria freshly shed by infected individuals. Thus in principle cholera could possibly be eradicated from Haiti. Here, we develop a framework for the estimation of the probability of extinction of the epidemic based on current epidemiological dynamics and health-care practice. Cholera spreading is modelled by an individual-based spatially-explicit stochastic model that accounts for the dynamics of susceptible, infected and recovered individuals hosted in different local communities connected through hydrologic and human mobility networks. Our results indicate that the probability that the epidemic goes extinct before the end of 2016 is of the order of 1%. This low probability of extinction highlights the need for more targeted and effective interventions to possibly stop cholera in Haiti.
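    For intuition about how such an extinction probability arises, a branching-process caricature is useful (a toy analogue, not the paper's spatially explicit model; R0 = 1.5 is an invented value): if each case generates Poisson(R0) secondary cases, the extinction probability q is the smallest fixed point of q = exp(R0(q - 1)).

```python
import math

# Galton-Watson caricature of epidemic extinction: q solves
# q = exp(R0 * (q - 1)), found by fixed-point iteration from q = 0,
# which converges to the smallest root. Toy illustration only.
def extinction_probability(R0: float, iters: int = 200) -> float:
    q = 0.0
    for _ in range(iters):
        q = math.exp(R0 * (q - 1.0))
    return q

q = extinction_probability(1.5)
```

For R0 > 1 the fixed point lies strictly between 0 and 1, so even a supercritical epidemic has a nonzero chance of dying out, which is the quantity the paper estimates with its far richer model.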

  18. q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.; Tian, Li

    2013-10-01

    We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1

  19. Quantum return probability of a system of N non-interacting lattice fermions

    NASA Astrophysics Data System (ADS)

    Krapivsky, P. L.; Luck, J. M.; Mallick, K.

    2018-02-01

    We consider N non-interacting fermions performing continuous-time quantum walks on a one-dimensional lattice. The system is launched from a most compact configuration where the fermions occupy neighboring sites. We calculate exactly the quantum return probability (sometimes referred to as the Loschmidt echo) of observing the very same compact state at a later time t. Remarkably, this probability depends on the parity of the fermion number—it decays as a power of time for even N, while for odd N it exhibits periodic oscillations modulated by a decaying power law. The exponent also slightly depends on the parity of N, and is roughly twice smaller than what it would be in the continuum limit. We also consider the same problem, and obtain similar results, in the presence of an impenetrable wall at the origin constraining the particles to remain on the positive half-line. We derive closed-form expressions for the amplitudes of the power-law decay of the return probability in all cases. The key point in the derivation is the use of Mehta integrals, which are limiting cases of the Selberg integral.

  20. Probability: A Matter of Life and Death

    ERIC Educational Resources Information Center

    Hassani, Mehdi; Kippen, Rebecca; Mills, Terence

    2016-01-01

    Life tables are mathematical tables that document probabilities of dying and life expectancies at different ages in a society. Thus, the life table contains some essential features of the health of a population. Probability is often regarded as a difficult branch of mathematics. Life tables provide an interesting approach to introducing concepts…

  1. On the origin of heavy-tail statistics in equations of the Nonlinear Schrödinger type

    NASA Astrophysics Data System (ADS)

    Onorato, Miguel; Proment, Davide; El, Gennady; Randoux, Stephane; Suret, Pierre

    2016-09-01

    We study the formation of extreme events in incoherent systems described by the Nonlinear Schrödinger type of equations. We consider an exact identity that relates the evolution of the normalized fourth-order moment of the probability density function of the wave envelope to the rate of change of the width of the Fourier spectrum of the wave field. We show that, given an initial condition characterized by some distribution of the wave envelope, an increase of the spectral bandwidth in the focusing/defocusing regime leads to an increase/decrease of the probability of formation of rogue waves. Extensive numerical simulations in 1D+1 and 2D+1 are also performed to confirm the results.

  2. Graph-theoretic approach to quantum correlations.

    PubMed

    Cabello, Adán; Severini, Simone; Winter, Andreas

    2014-01-31

    Correlations in Bell and noncontextuality inequalities can be expressed as a positive linear combination of probabilities of events. Exclusive events can be represented as adjacent vertices of a graph, so correlations can be associated to a subgraph. We show that the maximum value of the correlations for classical, quantum, and more general theories is the independence number, the Lovász number, and the fractional packing number of this subgraph, respectively. We also show that, for any graph, there is always a correlation experiment such that the set of quantum probabilities is exactly the Grötschel-Lovász-Schrijver theta body. This identifies these combinatorial notions as fundamental physical objects and provides a method for singling out experiments with quantum correlations on demand.

  3. Outage Analysis of Dual-hop Cognitive Networks with Relay Selection over Nakagami-m Fading Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Zongsheng; Pi, Xurong

    2014-09-01

    In this paper, we investigate the outage performance of decode-and-forward cognitive relay networks over Nakagami-m fading channels, considering both best relay selection and interference constraints. Focusing on relay selection and making use of the underlay cognitive approach, an exact closed-form outage probability expression is derived in an independent, non-identically distributed Nakagami-m environment. The closed-form outage probability provides an efficient means to evaluate the effects of the maximum allowable interference power, the number of cognitive relays, and the channel conditions between the primary user and cognitive users. Finally, we present numerical results to validate the theoretical analysis. Moreover, the simulation results show that the system achieves full diversity.

  4. The Pearson walk with shrinking steps in two dimensions

    NASA Astrophysics Data System (ADS)

    Serino, C. A.; Redner, S.

    2010-01-01

    We study the shrinking Pearson random walk in two and higher dimensions, in which the direction of the Nth step is random and its length equals λ^(N-1), with λ < 1. As λ increases past a critical value λ_c, the endpoint distribution in two dimensions, P(r), changes from having a global maximum away from the origin to being peaked at the origin. The probability distribution for a single coordinate, P(x), undergoes a similar transition, but exhibits multiple maxima on a fine length scale for λ close to λ_c. We numerically determine P(r) and P(x) by applying a known algorithm that accurately inverts the exact Bessel-function product form of the Fourier transform of the probability distributions.
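    The walk itself is easy to sample directly; the following minimal Monte Carlo sketch (illustrative only, not the Fourier-inversion algorithm used in the paper) generates endpoints and checks them against the geometric bound Σ λ^n = 1/(1 - λ):

    ```python
    import math
    import random

    def shrinking_pearson_walk(lam, n_steps, rng):
        """One 2D Pearson walk whose Nth step has length lam**(N-1), lam < 1.
        Returns the final (x, y) endpoint."""
        x = y = 0.0
        for n in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)  # isotropic step direction
            step = lam ** n                          # geometrically shrinking length
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        return x, y

    rng = random.Random(0)
    lam = 0.5
    endpoints = [shrinking_pearson_walk(lam, 50, rng) for _ in range(1000)]

    # By the triangle inequality every endpoint lies within radius 1/(1 - lam).
    r_max = max(math.hypot(x, y) for x, y in endpoints)
    print(r_max <= 1.0 / (1.0 - lam))
    ```

    Histogramming the sampled radii (or a single coordinate) gives empirical estimates of P(r) and P(x) whose shape change near λ_c can then be inspected.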

  5. Frozen into stripes: fate of the critical Ising model after a quench.

    PubMed

    Blanchard, T; Picco, M

    2013-09-01

    In this article we study numerically the final state of the two-dimensional ferromagnetic critical Ising model after a quench to zero temperature. Beginning from equilibrium at T_c, the system can be blocked in a variety of infinitely long-lived stripe states in addition to the ground state. Similar results have already been obtained for an infinite-temperature initial condition, and an interesting connection to exact percolation crossing probabilities has emerged. Here we complete this picture by providing an example of stripe states precisely related to initial crossing probabilities for various boundary conditions. We thus show that this behavior is not specific to percolation, but rather that it depends on the properties of spanning clusters in the initial state.

  6. Target annihilation by diffusing particles in inhomogeneous geometries

    NASA Astrophysics Data System (ADS)

    Cassi, Davide

    2009-09-01

    The survival probability of immobile targets annihilated by a population of random walkers on inhomogeneous discrete structures, such as disordered solids, glasses, fractals, polymer networks, and gels, is analytically investigated. It is shown that, while it cannot in general be related to the number of distinct visited points as in the case of homogeneous lattices, in the case of bounded coordination numbers its asymptotic behavior at large times can still be expressed in terms of the spectral dimension d̃, and its exact analytical expression is given. The results show that the asymptotic survival probability is site-independent on recurrent structures (d̃ ≤ 2), while on transient structures (d̃ > 2) it can strongly depend on the target position; this dependence is explicitly calculated.

  7. Occupation probabilities and fluctuations in the asymmetric simple inclusion process

    NASA Astrophysics Data System (ADS)

    Reuveni, Shlomi; Hirschberg, Ori; Eliazar, Iddo; Yechiali, Uri

    2014-04-01

    The asymmetric simple inclusion process (ASIP), a lattice-gas model of unidirectional transport and aggregation, was recently proposed as an "inclusion" counterpart of the asymmetric simple exclusion process. In this paper we present an exact closed-form expression for the probability that a given number of particles occupies a given set of consecutive lattice sites. Our results are expressed in terms of the entries of Catalan's trapezoids—number arrays which generalize Catalan's numbers and Catalan's triangle. We further prove that the ASIP is asymptotically governed by the following: (i) an inverse square-root law of occupation, (ii) a square-root law of fluctuation, and (iii) a Rayleigh law for the distribution of interexit times. The universality of these results is discussed.
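    The Catalan's-triangle entries underlying these occupation probabilities satisfy a simple additive recurrence; the sketch below builds the triangle (the simplest member of the trapezoid family, using the standard definition as an assumption rather than the paper's exact arrays) and recovers the Catalan numbers on its diagonal:

    ```python
    def catalan_triangle(rows):
        """Catalan's triangle: C(n, 0) = 1 and C(n, k) = C(n, k-1) + C(n-1, k),
        with entries beyond the diagonal treated as zero. The diagonal entries
        C(n, n) are the Catalan numbers."""
        tri = []
        for n in range(rows):
            row = [1]
            for k in range(1, n + 1):
                above = tri[n - 1][k] if k < n else 0  # C(n-1, k); zero past diagonal
                row.append(row[k - 1] + above)
            tri.append(row)
        return tri

    tri = catalan_triangle(6)
    print([row[-1] for row in tri])   # Catalan numbers: [1, 1, 2, 5, 14, 42]
    ```

    Catalan's trapezoids generalize this construction by widening the first column; the ASIP results quoted above are expressed in terms of those generalized entries.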

  8. Monolayer phosphorene under time-dependent magnetic field

    NASA Astrophysics Data System (ADS)

    Nascimento, J. P. G.; Aguiar, V.; Guedes, I.

    2018-02-01

    We obtain the exact wave function of monolayer phosphorene under a low-intensity time-dependent magnetic field using the dynamical invariant method. We calculate the quantum-mechanical energy expectation value and the transition probability for a constant and for an oscillatory magnetic field. For the former, we observe that the Landau-level energy varies linearly with the quantum numbers n and m and with the magnetic field intensity B_0; no transitions take place. For the latter, we observe that the energy oscillates in time, increasing linearly with the Landau-level quantum numbers n and m and nonlinearly with the magnetic field. The (k, l) → (n, m) transitions take place only for l = m. We investigate the (0,0) → (n,0), (1,l), and (2,l) transition probabilities.

  9. Development of Commercially Useable Codes to Simulate Aluminized Propellant Combustion and Related Issues

    DTIC Science & Technology

    2009-11-03

    …functions and the second derivative of Green's function. We exploit the geometrical characteristics of our integrand, i.e., we use spherical coordinates… statistically equivalent medium. Both the fully resolved probability spectrum and the geometrically exact particle shapes are considered in this…

  10. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  11. Bayesian feature selection for high-dimensional linear regression via the Ising approximation with applications to genomics.

    PubMed

    Fisher, Charles K; Mehta, Pankaj

    2015-06-01

    Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach, the Bayesian Ising Approximation (BIA), to rapidly calculate posterior probabilities for feature relevance in L2-penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30,000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, is freely available at http://physics.bu.edu/∼pankajm/BIACode. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
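    A generic mean-field magnetization iteration of the kind invoked here can be sketched as follows; the fields, couplings, and the reading of magnetizations as feature-relevance scores are schematic stand-ins, not the paper's actual BIA mapping:

    ```python
    import numpy as np

    def mean_field_magnetizations(h, J, beta=1.0, iters=200):
        """Self-consistent mean-field magnetizations of an Ising model:
            m_i = tanh(beta * (h_i + sum_j J_ij * m_j)).
        In a BIA-like picture each m_i would be read as a relevance score for
        feature i (the mapping here is schematic, not the paper's formulas)."""
        m = np.zeros(len(h))
        for _ in range(iters):
            m = np.tanh(beta * (h + J @ m))
        return m

    # Hypothetical fields and weak couplings: feature 0 strongly favored,
    # feature 1 weakly favored, feature 2 disfavored.
    h = np.array([2.0, 0.3, -1.5])
    J = 0.05 * (np.ones((3, 3)) - np.eye(3))   # weak couplings, as in the BIA regime
    m = mean_field_magnetizations(h, J)
    print(np.all(np.abs(m) <= 1.0), m[0] > m[2])
    ```

    In the weak-coupling regime this fixed-point iteration converges quickly, which is what makes the mean-field route to the selection path computationally cheap.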

  12. What Do We Learn from Binding Features? Evidence for Multilevel Feature Integration

    ERIC Educational Resources Information Center

    Colzato, Lorenza S.; Raffone, Antonino; Hommel, Bernhard

    2006-01-01

    Four experiments were conducted to investigate the relationship between the binding of visual features (as measured by their after-effects on subsequent binding) and the learning of feature-conjunction probabilities. Both binding and learning effects were obtained, but they did not interact. Interestingly, (shape-color) binding effects…

  13. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, an 11-node phage-lambda epigenetic circuit, and a 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  14. The DeBakey classification exactly reflects late outcome and re-intervention probability in acute aortic dissection with a slightly modified type II definition.

    PubMed

    Tsagakis, Konstantinos; Tossios, Paschalis; Kamler, Markus; Benedik, Jaroslav; Natour, Dorgam; Eggebrecht, Holger; Piotrowski, Jarowit; Jakob, Heinz

    2011-11-01

    The DeBakey classification was used to discriminate the extent of acute aortic dissection (AD) and was correlated with long-term outcome and re-intervention rate. A slight modification of the type II subgroup definition was applied, incorporating the aortic arch when the dissection process was fully resectable. Between January 2001 and March 2010, 118 patients (64% male, mean age 59 years) underwent surgery for acute AD. Of these, 74 were operated on for type I and 44 for type II AD. Complete resection of all entry sites was performed, including antegrade stent grafting for proximal descending lesions. Patients were comparable with respect to demographics and preoperative hemodynamic status. They underwent isolated ascending replacement, hemiarch, or total arch replacement in 7%, 26%, and 67% in type I, versus 27%, 37%, and 36% in type II, respectively. Additional descending stent grafting was performed in 33/74 (45%) type I patients. In-hospital mortality was 14% overall: 16% (12/74) in type I versus 9% (4/44) in type II, p=0.405. After 5 years, the estimated survival rate was 63% in type I versus 80% in type II, p=0.135. In type II, no distal aortic re-intervention was required. In type I, freedom from distal re-intervention was 82% in patients with additional stent grafting versus 53% in patients without, p=0.022. The slightly modified DeBakey classification exactly reflects late outcome and aortic re-intervention probability. Thus, in type II patients, the aorta appears to heal, with negligible probability of later re-operation or re-intervention. Copyright © 2011 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.

  16. Exploring the complexity of quantum control optimization trajectories.

    PubMed

    Nanduri, Arun; Shir, Ofer M; Donovan, Ashley; Ho, Tak-San; Rabitz, Herschel

    2015-01-07

    The control of quantum system dynamics is generally performed by seeking a suitable applied field. The physical objective as a functional of the field forms the quantum control landscape, whose topology, under certain conditions, has been shown to contain no critical point suboptimal traps, thereby enabling effective searches for fields that give the global maximum of the objective. This paper addresses the structure of the landscape as a complement to topological critical point features. Recent work showed that landscape structure is highly favorable for optimization of state-to-state transition probabilities, in that gradient-based control trajectories to the global maximum value are nearly straight paths. The landscape structure is codified in the metric R ≥ 1.0, defined as the ratio of the length of the control trajectory to the Euclidean distance between the initial and optimal controls. A value of R = 1 would indicate an exactly straight trajectory to the optimal observable value. This paper extends the state-to-state transition probability results to the quantum ensemble and unitary transformation control landscapes. Again, nearly straight trajectories predominate, and we demonstrate that R can take values approaching 1.0 with high precision. However, the interplay of optimization trajectories with critical saddle submanifolds is found to influence landscape structure. A fundamental relationship necessary for perfectly straight gradient-based control trajectories is derived, wherein the gradient on the quantum control landscape must be an eigenfunction of the Hessian. This relation is an indicator of landscape structure and may provide a means to identify physical conditions when control trajectories can achieve perfect linearity. The collective favorable landscape topology and structure provide a foundation to understand why optimal quantum control can be readily achieved.

  17. Biased and greedy random walks on two-dimensional lattices with quenched randomness: The greedy ant within a disordered environment

    NASA Astrophysics Data System (ADS)

    Mitran, T. L.; Melchert, O.; Hartmann, A. K.

    2013-12-01

    The main characteristics of biased greedy random walks (BGRWs) on two-dimensional lattices with real-valued quenched disorder on the lattice edges are studied. Here the disorder allows for negative edge weights. In previous studies of the negative-weight percolation (NWP) problem, this was shown to change the universality class of the existing, static percolation transition. In the present study, four different types of BGRWs and an algorithm based on the ant-colony-optimization heuristic were considered. Regarding the BGRWs, the precise configurations of the lattice walks constructed during the numerical simulations were influenced by two parameters: a disorder parameter ρ that controls the amount of negative edge weights on the lattice and a bias strength B that governs the drift of the walkers along a certain lattice direction. The random walks are “greedy” in the sense that the locally optimal choice of the walker is to preferentially traverse edges with a negative weight (associated with a net gain of “energy” for the walker). Here, the pivotal observable is the probability that, after termination, a lattice walk exhibits a total negative weight, which is here considered as percolating. The behavior of this observable as a function of ρ for different bias strengths B is put under scrutiny. Upon tuning ρ, the probability to find such a feasible lattice walk increases from zero to one. This is the key feature of the percolation transition in the NWP model. Here, we address the question of how well the transition point ρ_c, resulting from numerically exact and “static” simulations in terms of the NWP model, can be resolved using simple dynamic algorithms that have only local information available; this is one of the basic questions in the physics of glassy systems.

  18. Oceanic migration and spawning of anguillid eels.

    PubMed

    Tsukamoto, K

    2009-06-01

    Many aspects of the life histories of anguillid eels have been revealed in recent decades, but the spawning migrations of their silver eels in the open ocean still remain poorly understood. This paper reviews what is known about the migration and spawning of anguillid species in the ocean. The factors that determine exactly when anguillid eels will begin their migrations are not known, although environmental influences such as lunar cycle, rainfall and river discharge seem to affect their patterns of movement as they migrate towards the ocean. Once in the ocean on their way to the spawning area, silver eels probably migrate in the upper few hundred metres, while reproductive maturation continues. Although involvement of a magnetic sense or olfactory cues seems probable, how they navigate or what routes they take are still a matter of speculation. There are few landmarks in the open ocean to define their spawning areas, other than oceanographic or geological features such as oceanic fronts or seamounts in some cases. Spawning of silver eels in the ocean has never been observed, but artificially matured eels of several species have exhibited similar spawning behaviours in the laboratory. Recent collections of mature adults and newly spawned preleptocephali in the spawning area of the Japanese eel Anguilla japonica have shown that spawning occurs during new moon periods in the North Equatorial Current region near the West Mariana Ridge. These data, however, show that the latitude of the spawning events can change among months and years depending on oceanographic conditions. Changes in spawning location of this and other anguillid species may affect their larval transport and survival, and appear to have the potential to influence recruitment success. A greater understanding of the spawning migration and the choice of spawning locations by silver eels is needed to help conserve declining anguillid species.

  19. Risk Management in Complex Construction Projects that Apply Renewable Energy Sources: A Case Study of the Realization Phase of the Energis Educational and Research Intelligent Building

    NASA Astrophysics Data System (ADS)

    Krechowicz, Maria

    2017-10-01

    Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, with its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources are applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from project complexity, construction of a fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out by applying fuzzy fault tree analysis to the example of one top critical risk. Application of fuzzy set theory to the proposed model reduced uncertainty and eliminated the problems, common in expert risk assessment, of obtaining crisp values for basic event probabilities, with the objective of giving an exact risk score for each unwanted event.

  20. Wave packet and statistical quantum calculations for the He + NeH⁺ → HeH⁺ + Ne reaction on the ground electronic state.

    PubMed

    Koner, Debasish; Barrios, Lizandra; González-Lezana, Tomás; Panda, Aditya N

    2014-09-21

    A real wave packet based time-dependent method and a statistical quantum method have been used to study the He + NeH(+) (v, j) reaction with the reactant in various ro-vibrational states, on a recently calculated ab initio ground state potential energy surface. Both the wave packet and statistical quantum calculations were carried out within the centrifugal sudden approximation as well as using the exact Hamiltonian. Quantum reaction probabilities exhibit a dense oscillatory pattern for smaller total angular momentum values, which is a signature of resonances in a complex-forming mechanism for the title reaction. Significant differences between exact and approximate quantum reaction cross sections highlight the importance of including Coriolis coupling in the calculations. Statistical results are in fairly good agreement with the exact quantum results for the ground ro-vibrational state of the reactant. Vibrational excitation greatly enhances the reaction cross sections, whereas rotational excitation has a relatively small effect on the reaction. The shape of the reaction cross section curves depends on the initial vibrational state of the reactant and is typical of a late-barrier potential energy profile.

  1. An exact and efficient first passage time algorithm for reaction-diffusion processes on a 2D-lattice

    NASA Astrophysics Data System (ADS)

    Bezzola, Andri; Bales, Benjamin B.; Alkire, Richard C.; Petzold, Linda R.

    2014-01-01

    We present an exact and efficient algorithm for reaction-diffusion-nucleation processes on a 2D-lattice. The algorithm makes use of first passage times (FPT) to replace the computationally intensive simulation of diffusion hops in kinetic Monte Carlo (KMC) by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in closed form, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of the resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.
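    The use of a 1D transition matrix's eigensystem to obtain closed-form propagation can be illustrated on a toy problem; the sketch below is a schematic continuous-time walk on an interval with absorbing endpoints, not the authors' KMC setting, and it checks the spectral formula against direct matrix exponentiation:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Continuous-time hopping on interior sites 0..L-1 of a 1D interval with
    # absorbing endpoints just outside; the generator restricted to the interior
    # is a symmetric tridiagonal matrix.
    L, rate = 8, 1.0
    A = np.zeros((L, L))
    for i in range(L):
        A[i, i] = -2.0 * rate
        if i > 0:
            A[i, i - 1] = rate
        if i < L - 1:
            A[i, i + 1] = rate

    # A is symmetric, so eigh gives the spectral decomposition A = V diag(w) V^T.
    w, V = np.linalg.eigh(A)

    def survival(t, start):
        """Probability the walker is still unabsorbed at time t, via the
        eigensystem: exp(A t) applied to the start basis vector, then summed."""
        return (V @ (np.exp(w * t) * V[start, :])).sum()

    # The spectral formula agrees with direct matrix exponentiation.
    t, start = 1.3, 3
    print(abs(survival(t, start) - expm(A * t)[start, :].sum()) < 1e-10)
    ```

    The same eigensystem yields first-passage (absorption) time distributions in closed form, which is the ingredient the algorithm above exploits to replace many small hops with one large jump.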

  3. Random matrix theory of singular values of rectangular complex matrices I: Exact formula of one-body distribution function in fixed-trace ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adachi, Satoshi; Toda, Mikito; Kubotani, Hiroto

    The fixed-trace ensemble of random complex matrices is the fundamental model that excellently describes the entanglement in the quantum states realized in a coupled system by its strongly chaotic dynamical evolution [see H. Kubotani, S. Adachi, M. Toda, Phys. Rev. Lett. 100 (2008) 240501]. The fixed-trace ensemble fully takes into account the conservation of probability for quantum states. The present paper derives for the first time the exact analytical formula for the one-body distribution function of singular values of random complex matrices in the fixed-trace ensemble. The distribution function of singular values (i.e., Schmidt eigenvalues) of a quantum state is so important because it describes characteristics of the entanglement in the state. The derivation of the exact analytical formula utilizes two achievements in mathematics from the 1990s. The first is the Kaneko theory, which extends the famous Selberg integral by inserting a hypergeometric-type weight factor into the integrand to obtain an analytical formula for the extended integral. The second is the Petkovsek-Wilf-Zeilberger theory, which calculates definite hypergeometric sums in closed form.

  4. An evaluation of exact matching and propensity score methods as applied in a comparative effectiveness study of inhaled corticosteroids in asthma.

    PubMed

    Burden, Anne; Roche, Nicolas; Miglio, Cristiana; Hillyer, Elizabeth V; Postma, Dirkje S; Herings, Ron Mc; Overbeek, Jetty A; Khalid, Javaria Mona; van Eickels, Daniela; Price, David B

    2017-01-01

    Cohort matching and regression modeling are used in observational studies to control for confounding factors when estimating treatment effects. Our objective was to evaluate exact matching and propensity score methods by applying them in a 1-year pre-post historical database study to investigate asthma-related outcomes by treatment. We drew on longitudinal medical record data in the PHARMO database for asthma patients prescribed the treatments to be compared (ciclesonide and fine-particle inhaled corticosteroid [ICS]). Propensity score methods that we evaluated were propensity score matching (PSM) using two different algorithms, the inverse probability of treatment weighting (IPTW), covariate adjustment using the propensity score, and propensity score stratification. We defined balance, using standardized differences, as differences of <10% between cohorts. Of 4064 eligible patients, 1382 (34%) were prescribed ciclesonide and 2682 (66%) fine-particle ICS. The IPTW and propensity score-based methods retained more patients (96%-100%) than exact matching (90%); exact matching selected less severe patients. Standardized differences were >10% for four variables in the exact-matched dataset and <10% for both PSM algorithms and the weighted pseudo-dataset used in the IPTW method. With all methods, ciclesonide was associated with better 1-year asthma-related outcomes, at one-third the prescribed dose, than fine-particle ICS; results varied slightly by method, but direction and statistical significance remained the same. We found that each method has its particular strengths, and we recommend at least two methods be applied for each matched cohort study to evaluate the robustness of the findings. Balance diagnostics should be applied with all methods to check the balance of confounders between treatment cohorts. 
If exact matching is used, the calculation of a propensity score could be useful to identify variables that require balancing, thereby informing the choice of matching criteria together with clinical considerations.
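The two balance tools this record leans on, standardized differences and IPTW weights, can be sketched in a few lines. The following is a minimal illustration using hypothetical covariate data (not the PHARMO data); the 10% balance threshold matches the abstract.

```python
import numpy as np

def standardized_difference(x_treat, x_ctrl):
    """Standardized difference (in %) of one covariate between two cohorts."""
    pooled_sd = np.sqrt((x_treat.var(ddof=1) + x_ctrl.var(ddof=1)) / 2)
    return 100 * (x_treat.mean() - x_ctrl.mean()) / pooled_sd

def iptw_weights(treated, propensity):
    """Inverse probability of treatment weights: 1/e for treated, 1/(1-e) for controls."""
    return np.where(treated == 1, 1 / propensity, 1 / (1 - propensity))

# hypothetical covariate (age) in two unbalanced cohorts
rng = np.random.default_rng(0)
age_treat = rng.normal(45, 12, 500)
age_ctrl = rng.normal(50, 12, 1000)
d = standardized_difference(age_treat, age_ctrl)
print(f"standardized difference: {d:.1f}% (|d| < 10% counts as balanced)")
```

Applying `iptw_weights` to a fitted propensity score produces the weighted pseudo-dataset on which the standardized differences are re-checked, which is the balance diagnostic the abstract recommends for every method.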

  5. An evaluation of exact matching and propensity score methods as applied in a comparative effectiveness study of inhaled corticosteroids in asthma

    PubMed Central

    Burden, Anne; Roche, Nicolas; Miglio, Cristiana; Hillyer, Elizabeth V; Postma, Dirkje S; Herings, Ron MC; Overbeek, Jetty A; Khalid, Javaria Mona; van Eickels, Daniela; Price, David B

    2017-01-01

    Background Cohort matching and regression modeling are used in observational studies to control for confounding factors when estimating treatment effects. Our objective was to evaluate exact matching and propensity score methods by applying them in a 1-year pre–post historical database study to investigate asthma-related outcomes by treatment. Methods We drew on longitudinal medical record data in the PHARMO database for asthma patients prescribed the treatments to be compared (ciclesonide and fine-particle inhaled corticosteroid [ICS]). Propensity score methods that we evaluated were propensity score matching (PSM) using two different algorithms, the inverse probability of treatment weighting (IPTW), covariate adjustment using the propensity score, and propensity score stratification. We defined balance, using standardized differences, as differences of <10% between cohorts. Results Of 4064 eligible patients, 1382 (34%) were prescribed ciclesonide and 2682 (66%) fine-particle ICS. The IPTW and propensity score-based methods retained more patients (96%–100%) than exact matching (90%); exact matching selected less severe patients. Standardized differences were >10% for four variables in the exact-matched dataset and <10% for both PSM algorithms and the weighted pseudo-dataset used in the IPTW method. With all methods, ciclesonide was associated with better 1-year asthma-related outcomes, at one-third the prescribed dose, than fine-particle ICS; results varied slightly by method, but direction and statistical significance remained the same. Conclusion We found that each method has its particular strengths, and we recommend at least two methods be applied for each matched cohort study to evaluate the robustness of the findings. Balance diagnostics should be applied with all methods to check the balance of confounders between treatment cohorts. 
If exact matching is used, the calculation of a propensity score could be useful to identify variables that require balancing, thereby informing the choice of matching criteria together with clinical considerations. PMID:28356782

  6. Exact Solutions of Linear Reaction-Diffusion Processes on a Uniformly Growing Domain: Criteria for Successful Colonization

    PubMed Central

    Simpson, Matthew J

    2015-01-01

Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion process on 0

  7. Exact solutions of linear reaction-diffusion processes on a uniformly growing domain: criteria for successful colonization.

    PubMed

    Simpson, Matthew J

    2015-01-01

    Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion process on 0

  8. Seeing the Forest when Entry Is Unlikely: Probability and the Mental Representation of Events

    ERIC Educational Resources Information Center

    Wakslak, Cheryl J.; Trope, Yaacov; Liberman, Nira; Alony, Rotem

    2006-01-01

    Conceptualizing probability as psychological distance, the authors draw on construal level theory (Y. Trope & N. Liberman, 2003) to propose that decreasing an event's probability leads individuals to represent the event by its central, abstract, general features (high-level construal) rather than by its peripheral, concrete, specific features…

  9. Optimized Diffusion of Run-and-Tumble Particles in Crowded Environments

    NASA Astrophysics Data System (ADS)

    Bertrand, Thibault; Zhao, Yongfeng; Bénichou, Olivier; Tailleur, Julien; Voituriez, Raphaël

    2018-05-01

    We study the transport of self-propelled particles in dynamic complex environments. To obtain exact results, we introduce a model of run-and-tumble particles (RTPs) moving in discrete time on a d -dimensional cubic lattice in the presence of diffusing hard-core obstacles. We derive an explicit expression for the diffusivity of the RTP, which is exact in the limit of low density of fixed obstacles. To do so, we introduce a generalization of Kac's theorem on the mean return times of Markov processes, which we expect to be relevant for a large class of lattice gas problems. Our results show the diffusivity of RTPs to be nonmonotonic in the tumbling probability for low enough obstacle mobility. These results prove the potential for the optimization of the transport of RTPs in crowded and disordered environments with applications to motile artificial and biological systems.
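The lattice run-and-tumble dynamics described above can be illustrated with a minimal Monte-Carlo sketch. This free-particle version omits the paper's diffusing hard-core obstacles, so it only shows the baseline diffusivity and its dependence on the tumbling probability; all parameter values are illustrative.

```python
import numpy as np

def rtp_diffusivity(alpha, steps=5000, trials=400, seed=1):
    """Monte-Carlo diffusivity of free run-and-tumble particles on a 2D square
    lattice: each time step a particle tumbles (redraws its direction uniformly)
    with probability alpha, then hops one site along its current direction.
    The obstacles of the paper are omitted in this sketch."""
    rng = np.random.default_rng(seed)
    dirs = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    pos = np.zeros((trials, 2))
    d = dirs[rng.integers(4, size=trials)]
    for _ in range(steps):
        tumble = rng.random(trials) < alpha
        d = np.where(tumble[:, None], dirs[rng.integers(4, size=trials)], d)
        pos += d
    msd = (pos ** 2).sum(axis=1).mean()
    return msd / (4 * steps)  # MSD = 2 * dim * D * t with dim = 2
```

For alpha = 1 every step is an independent hop and D approaches the simple random-walk value 1/4; smaller tumbling probabilities give a persistent walk with larger D, the regime in which the obstacle-induced nonmonotonicity reported above appears.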

  10. Exact Derivation of a Finite-Size Scaling Law and Corrections to Scaling in the Geometric Galton-Watson Process

    PubMed Central

    Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc

    2016-01-01

The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities at the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
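For the geometric offspring distribution mentioned above, the extinction probability is exactly solvable, which makes a small simulation easy to verify. A sketch, assuming the offspring law P(k) = (1-p)·p^k:

```python
import numpy as np

def extinction_prob_geometric(p, generations=100, trials=20000, cap=10**6, seed=2):
    """Monte-Carlo extinction probability of a Galton-Watson process with
    geometric offspring law P(k) = (1 - p) * p**k, k = 0, 1, 2, ...
    Its generating function f(s) = (1-p)/(1 - p*s) has fixed points 1 and
    (1-p)/p, so the exact extinction probability is q = (1-p)/p when p > 1/2."""
    rng = np.random.default_rng(seed)
    pop = np.ones(trials, dtype=np.int64)
    for _ in range(generations):
        alive = pop > 0
        if not alive.any():
            break
        # total offspring of n parents = sum of n geometrics = negative binomial
        # (failures with prob p before the n-th success with prob 1 - p)
        pop[alive] = rng.negative_binomial(pop[alive], 1 - p)
        np.minimum(pop, cap, out=pop)  # cap huge populations; they never die out
    return (pop == 0).mean()
```

For p = 0.6 the exact value is q = 2/3, which the simulation reproduces to within sampling error.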

  11. Gravitational lensing effects of vacuum strings - Exact solutions

    NASA Technical Reports Server (NTRS)

    Gott, J. R., III

    1985-01-01

Exact interior and exterior solutions to Einstein's field equations are derived for vacuum strings. The exterior solution for a uniform-density vacuum string corresponds to a conical space, while the interior solution is that of a spherical cap. For 0 < Mu < 1/4 the external metric is ds^2 = -dt^2 + dr^2 + (1 - 4 Mu)^2 r^2 dphi^2 + dz^2, where Mu is the mass per unit length of the string in Planck masses per Planck length. The maximum mass per unit length for a string is 6.73 x 10^27 g/cm. It is shown that strings cause temperature fluctuations in the cosmic microwave background and produce equal-brightness double QSO images separated by up to several minutes of arc. Formulae for lensing probabilities, image splittings, and time delays are derived for strings in a realistic cosmological setting. String searches using ST, the VLA, and the COBE satellite are discussed.
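The conical exterior metric quoted above implies a deficit angle delta = 8*pi*G*Mu/c^2 (equivalently 8*pi*Mu in Planck units), which sets the scale of the image splitting. A quick numerical illustration with a hypothetical string tension (the value below is illustrative, not from the paper):

```python
import math

G = 6.674e-8   # gravitational constant, cgs units
c = 2.998e10   # speed of light, cm/s

def deficit_angle(mu_grams_per_cm):
    """Conical deficit angle (radians) around a straight cosmic string:
    delta = 8 * pi * G * mu / c**2, consistent with the (1 - 4*Mu)^2 factor
    in the exterior metric when mu is expressed in Planck units."""
    return 8 * math.pi * G * mu_grams_per_cm / c**2

# hypothetical string tension of 1e22 g/cm
delta = deficit_angle(1e22)
print(delta, "rad =", delta * 206265, "arcsec")
```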

  12. The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2007-01-01

    A generalized single step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
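The discrete propagator that this record contrasts with the Gaussian continuum limit can be written, for the simplest continuous-time symmetric one-step walk, as p_n(t) = e^(-t) I_n(t), with I_n a modified Bessel function of the first kind. A numpy-only sketch (this is the textbook single-step propagator, not the paper's full boundary-value solution):

```python
import numpy as np

def discrete_propagator(n, t, quad_points=2048):
    """p_n(t) = exp(-t) * I_n(t): probability that a continuous-time symmetric
    one-step walk (unit total jump rate) has net displacement n at time t.
    I_n is evaluated via trapezoid quadrature of its periodic integral
    representation, which is spectrally accurate on a uniform grid."""
    theta = np.linspace(0.0, 2 * np.pi, quad_points, endpoint=False)
    integrand = np.exp(t * (np.cos(theta) - 1.0)) * np.cos(np.multiply.outer(n, theta))
    return integrand.mean(axis=-1)

def gaussian_propagator(n, t):
    """Continuum (Gaussian) approximation with the same variance t."""
    return np.exp(-np.asarray(n, float) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
```

At short times the discrete and Gaussian forms differ noticeably; at large t they converge at the origin, mirroring the continuum-versus-discrete comparison in the abstract.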

  13. Quantum Chemistry on Quantum Computers: A Polynomial-Time Quantum Algorithm for Constructing the Wave Functions of Open-Shell Molecules.

    PubMed

    Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji

    2016-08-18

Quantum computers can efficiently perform full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of the QPE depends on the overlap between the approximate and exact wave functions, efficient methods to prepare initial guess wave functions accurate enough to have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm to construct a wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules, based on the addition theorem of angular momentum. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants with only a polynomial number of quantum operations.

  14. Eigenvalue statistics for the sum of two complex Wishart matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2014-09-01

The sum of independent Wishart matrices, taken from distributions with unequal covariance matrices, plays a crucial role in multivariate statistics, and has applications in the fields of quantitative finance and telecommunication. However, analytical results concerning the corresponding eigenvalue statistics have remained unavailable, even for the sum of two Wishart matrices. This can be attributed to the complicated and rotationally noninvariant nature of the matrix distribution that makes extracting the information about eigenvalues a nontrivial task. Using a generalization of the Harish-Chandra-Itzykson-Zuber integral, we find an exact solution to this problem for the complex Wishart case when one of the covariance matrices is proportional to the identity matrix, while the other is arbitrary. We derive exact and compact expressions for the joint probability density and marginal density of eigenvalues. The analytical results are compared with numerical simulations, and we find perfect agreement.
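The matrix ensemble studied here is straightforward to sample, which is presumably how the analytical densities were cross-checked. A Monte-Carlo sketch with illustrative dimensions, one covariance proportional to the identity as in the exact solution:

```python
import numpy as np

def wishart_sum_eigs(n=8, m1=12, m2=16, a=2.0, trials=400, seed=3):
    """Sample eigenvalues of W1 + W2, where W1 is a complex Wishart matrix
    built from n x m1 Gaussian entries with covariance a*I (proportional to
    the identity, as in the exact solution) and W2 has covariance I."""
    rng = np.random.default_rng(seed)
    all_eigs = []
    for _ in range(trials):
        g1 = np.sqrt(a / 2) * (rng.standard_normal((n, m1)) + 1j * rng.standard_normal((n, m1)))
        g2 = np.sqrt(0.5) * (rng.standard_normal((n, m2)) + 1j * rng.standard_normal((n, m2)))
        w = g1 @ g1.conj().T + g2 @ g2.conj().T
        all_eigs.append(np.linalg.eigvalsh(w))
    return np.concatenate(all_eigs)

eigs = wishart_sum_eigs()
# sanity check: the mean eigenvalue should match tr(E[W]) / n = a*m1 + m2
print(eigs.mean())
```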

  15. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.

  16. The SMM model as a boundary value problem using the discrete diffusion equation.

    PubMed

    Campbell, Joel

    2007-12-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.

  17. Prior probability and feature predictability interactively bias perceptual decisions

    PubMed Central

    Dunovan, Kyle E.; Tremel, Joshua J.; Wheeler, Mark E.

    2014-01-01

    Anticipating a forthcoming sensory experience facilitates perception for expected stimuli but also hinders perception for less likely alternatives. Recent neuroimaging studies suggest that expectation biases arise from feature-level predictions that enhance early sensory representations and facilitate evidence accumulation for contextually probable stimuli while suppressing alternatives. Reasonably then, the extent to which prior knowledge biases subsequent sensory processing should depend on the precision of expectations at the feature level as well as the degree to which expected features match those of an observed stimulus. In the present study we investigated how these two sources of uncertainty modulated pre- and post-stimulus bias mechanisms in the drift-diffusion model during a probabilistic face/house discrimination task. We tested several plausible models of choice bias, concluding that predictive cues led to a bias in both the starting-point and rate of evidence accumulation favoring the more probable stimulus category. We further tested the hypotheses that prior bias in the starting-point was conditional on the feature-level uncertainty of category expectations and that dynamic bias in the drift-rate was modulated by the match between expected and observed stimulus features. Starting-point estimates suggested that subjects formed a constant prior bias in favor of the face category, which exhibits less feature-level variability, that was strengthened or weakened by trial-wise predictive cues. Furthermore, we found that the gain on face/house evidence was increased for stimuli with less ambiguous features and that this relationship was enhanced by valid category expectations. These findings offer new evidence that bridges psychological models of decision-making with recent predictive coding theories of perception. PMID:24978303
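The starting-point bias discussed above has a simple signature in simulation: with zero drift, a shifted starting point alone skews choice proportions. A minimal drift-diffusion sketch (illustrative parameters, not fits from the study):

```python
import numpy as np

def simulate_ddm(drift, start=0.0, bound=1.0, dt=2e-3, noise=1.0, max_t=5.0, rng=None):
    """One drift-diffusion trial: evidence starts at `start` (the prior bias),
    drifts at rate `drift` with Gaussian noise, and a response is produced
    when it reaches +bound (e.g. "face") or -bound (e.g. "house")."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = start, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= bound else 0), t

rng = np.random.default_rng(4)
# zero drift: any asymmetry in choices comes from the starting point alone
choices = [simulate_ddm(0.0, start=0.3, rng=rng)[0] for _ in range(400)]
print(sum(choices) / len(choices))  # near (start + bound) / (2 * bound) = 0.65
```

A drift-rate bias would instead appear as a speedup of correct responses rather than a shift of the choice proportion at zero drift, which is how the two mechanisms are dissociated in model fits like those described above.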

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modak, Viraj P., E-mail: virajmodak@gmail.com; Wyslouzil, Barbara E., E-mail: wyslouzil.1@osu.edu; Department of Chemistry and Biochemistry, Ohio State University, Columbus, Ohio 43210

The crystal-vapor surface free energy γ is an important physical parameter governing physical processes, such as wetting and adhesion. We explore exact and approximate routes to calculate γ based on cleaving an intact crystal into non-interacting sub-systems with crystal-vapor interfaces. We do this by turning off the interactions, ΔV, between the sub-systems. Using the soft-core scheme for turning off ΔV, we find that the free energy varies smoothly with the coupling parameter λ, and a single thermodynamic integration yields the exact γ. We generate another exact method, and a cumulant expansion for γ, by expressing the surface free energy in terms of an average of e^(-βΔV) in the intact crystal. The second-cumulant, or Gaussian, approximation for γ is surprisingly accurate in most situations, even though we find that the underlying probability distribution for ΔV is clearly not Gaussian. We account for this fact by developing a non-Gaussian theory for γ and find that the difference between the non-Gaussian and Gaussian expressions for γ consists of terms that are negligible in many situations. The exact and approximate methods are applied to the (111) surface of a Lennard-Jones crystal and are also tested on more complex molecular solids, the surfaces of octane and nonadecane. Alkane surfaces were chosen for study because their crystal-vapor surface free energy has been of particular interest for understanding surface freezing in these systems.
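The second-cumulant (Gaussian) approximation mentioned above truncates -(1/β) ln⟨e^(-βΔV)⟩ at ⟨ΔV⟩ - (β/2) Var(ΔV), and is exact when ΔV is Gaussian-distributed. A toy numerical check with synthetic samples (not the Lennard-Jones or alkane data):

```python
import numpy as np

rng = np.random.default_rng(5)
beta = 1.0

# synthetic samples standing in for the cleaving interaction energy dV
dV = rng.normal(loc=2.0, scale=0.5, size=200_000)

# free-energy (exponential) average, evaluated directly by sampling
exact = -np.log(np.mean(np.exp(-beta * dV))) / beta

# second-cumulant ("Gaussian") approximation
gaussian = dV.mean() - 0.5 * beta * dV.var()

print(exact, gaussian)  # both near mu - beta*sigma^2/2 = 1.875 for Gaussian dV
```

For a skewed ΔV distribution the two numbers separate, which is the gap the non-Gaussian theory of the abstract is built to quantify.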

  19. Secret Bit Transmission Using a Random Deal of Cards

    DTIC Science & Technology

    1990-05-01

conversation between sender and receiver is public and is heard by all. A correct protocol always succeeds in transmitting the secret bit, and the other player...s), who receive the remaining cards and are assumed to have unlimited computing power, gain no information whatsoever about the value of the secret bit...In other words, their probability of correctly guessing the secret bit is exactly the same after listening to a run of the protocol as it was

  20. Exact Recovery of Chaotic Systems from Highly Corrupted Data

    DTIC Science & Technology

    2016-08-01

dimension to reconstruct a state space which preserves the topological properties of the original system. In [CM87, RS92], the authors use the singular...in high dimensional nonlinear functional spaces [Spr94, SL00, LCC04]. In this work, we bring together connections between compressed sensing, splitting...compact, connected attractor Λ and the flow admits a unique so-called "physical" measure µ with supp(µ) = Λ. An invariant probability measure µ for a flow

  1. Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph

    DTIC Science & Technology

    2014-07-01

    distribution of the random walk. This process can also be applied to other models, incomplete graphs, or to multiple dimensions. An advantage of this...since any multiple of an eigenvector remains an eigenvector. Without any loss, let bk = 1. Now we can ascertain the explicit solution for bj when k < j...this bound is valid for all initial probability distributions. However, without detailed information about the eigenvectors, we cannot extract more

2. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks.

    PubMed

    He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei

    2014-01-01

Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented by a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines the analysis of circuit topology with related physical properties of voltage in order to evaluate probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism test avoids the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers.
The algorithm for evaluating probability graph isomorphism based on circuit simulation excludes most subgraphs that are not probability-isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies.
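For orientation, the baseline quantity in uncertain-network motif mining is the existence probability of a candidate subgraph under independent edge probabilities; the contribution of the paper above is to avoid possible-world enumeration when comparing such subgraphs. A sketch of the baseline only, not the circuit-simulation method:

```python
def subgraph_probability(edge_probs):
    """Probability that all edges of a candidate subgraph are present in an
    uncertain network, assuming independent edge existence probabilities.
    (The paper replaces possible-world enumeration with a circuit-simulation
    test; this is only the baseline existence probability.)"""
    prob = 1.0
    for p in edge_probs:
        prob *= p
    return prob

# hypothetical three-edge candidate subgraph with uncertain edges
print(round(subgraph_probability([0.9, 0.8, 0.7]), 6))  # 0.504
```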

3. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented by a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines the analysis of circuit topology with related physical properties of voltage in order to evaluate probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism test avoids the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers.
Conclusions The algorithm for evaluating probability graph isomorphism based on circuit simulation excludes most subgraphs that are not probability-isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies. PMID:25350277

  4. New Aspects of Probabilistic Forecast Verification Using Information Theory

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution, and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can likewise be generalized to a ranked version (RIGN). Here, the IGN, its generalizations, and its decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the new measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
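The two scores being compared can be stated in a few lines for binary events, which makes the difference between the quadratic and logarithmic penalties concrete. A sketch (binary case only, without the reliability/resolution/uncertainty decomposition):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts for binary events."""
    probs = np.asarray(probs, float)
    outcomes = np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

def ignorance_score(probs, outcomes):
    """Mean negative log2 probability assigned to the observed outcome."""
    probs = np.asarray(probs, float)
    outcomes = np.asarray(outcomes, float)
    p_observed = np.where(outcomes == 1, probs, 1 - probs)
    return float(-np.mean(np.log2(p_observed)))
```

A forecast of 0.5 for an event that occurs scores IGN = 1 bit; pushing the forecast toward 0 drives IGN to infinity while the Brier penalty stays bounded at 1, illustrating why the BS is only a second-order approximation of the IGN.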

  5. Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.

    PubMed

    Krishnamurthy, V; Krishnamurthy, E V

    1999-03-01

A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, computations are interpreted as the outcome of interactions of elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among subsets of elements, so that the elements evolve toward an equilibrium, unstable, or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. It can also handle a probabilistic mode of computation when we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.

  6. Survival probability for a diffusive process on a growing domain

    NASA Astrophysics Data System (ADS)

    Simpson, Matthew J.; Sharp, Jesse A.; Baker, Ruth E.

    2015-04-01

    We consider the motion of a diffusive population on a growing domain, 0

  7. Some new exact solitary wave solutions of the van der Waals model arising in nature

    NASA Astrophysics Data System (ADS)

    Bibi, Sadaf; Ahmed, Naveed; Khan, Umar; Mohyud-Din, Syed Tauseef

    2018-06-01

This work applies two well-known methods, the Exponential rational function method (ERFM) and the Generalized Kudryashov method (GKM), to seek new exact solutions of the van der Waals normal form for fluidized granular matter, which is linked to natural phenomena and industrial applications. New soliton solutions, such as kink, periodic, and solitary wave solutions, are established, coupled with 2D and 3D graphical patterns for clarity of physical features. Our comparison reveals that these methods outperform several existing methods. The worked-out solutions show that the suggested methods are simple and reliable compared to many other approaches for tackling nonlinear equations stemming from the applied sciences.

  8. Monogamy equalities for qubit entanglement from Lorentz invariance.

    PubMed

    Eltschka, Christopher; Siewert, Jens

    2015-04-10

    A striking result from nonrelativistic quantum mechanics is the monogamy of entanglement, which states that a particle can be maximally entangled only with one other party, not with several ones. While there is the exact quantitative relation for three qubits and also several inequalities describing monogamy properties, it is not clear to what extent exact monogamy relations are a general feature of quantum mechanics. We prove that in all many-qubit systems there exist strict monogamy laws for quantum correlations. They come about through the curious relationship between the nonrelativistic quantum mechanics of qubits and Minkowski space. We elucidate the origin of entanglement monogamy from this symmetry perspective and provide recipes to construct new families of such equalities.

  9. Post-deformational relocation of mica grains in calcite-dolomite marbles identified by cathodoluminescence microscopy

    NASA Astrophysics Data System (ADS)

    Kuehn, Rebecca; Duschl, Florian; Leiss, Bernd

    2017-04-01

Hot-cathodoluminescence microscopy (CL) reveals micas within a calcite fabric that have been rotated or shifted from a foliation-parallel to a random orientation. This feature has been recognized in calcite-dolomite marble samples from Hammerunterwiesenthal, Erzgebirge, Germany and from the Alpi Apuane, Italy. As seen in petrographic thin sections, the micas either moved entirely within a single calcite grain or from a grain-boundary position, in which case calcite grain growth was dragged along with the movement of the mica grain. In the moved-through grain, features like fluid inclusions, twins, or cleavage faces are erased and a new, clear calcite phase developed. This indicates dissolution-precipitation as the process that led to the new calcite phase. Since former deformation features are erased, it can be assumed that the mica relocation is a fluid-driven, post-deformational equilibration process. In CL, the new calcite phase shows a zonation indicating a polycyclic process. The calcite CL gradually changes from a very dark purple, exactly as in the surrounding grains, to a bright orange CL, supporting the idea of fluid-induced relocation. We suppose a specific lattice relationship between mica and calcite is the initial driving factor for mica relocation. This recrystallization mechanism is probably supported by fluids, either from an external source or developed during retrograde metamorphism. Fluid-inclusion studies shall identify the formation temperatures and origin of the involved fluids and thereby clarify the timing of the post-deformational mica rotation. EBSD analysis of the involved calcite and mica grains shall reveal a possible systematic relationship between the orientation of the hosting grains, the orientation of the mica, and the final position of the mica. It will be interesting to learn whether this kind of calcite-mica microstructure is a general phenomenon and how it can contribute to the understanding of fabric development.

  10. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features of the model. For this reason, they perform less effectively on models with similar global shapes but different local shapes. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structural feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system realizing this approach was built and tested on a database of 200 objects and achieved the expected results. The results show that the proposed algorithm effectively improves precision and recall.

  11. Security Threat Assessment of an Internet Security System Using Attack Tree and Vague Sets

    PubMed Central

    2014-01-01

    Security threat assessment of the Internet security system has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of the bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, when the malfunction data of the system's elementary events are incomplete, the traditional approach for calculating reliability is no longer applicable. Moreover, it does not consider the failure probability of the bottom events suffered in the attack, which may bias conclusions. To effectively solve this problem, this paper proposes a novel technique integrating attack trees and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system security threat assessment is adopted in this paper. The result of the proposed method is compared with those of existing security threat assessment approaches. PMID:25405226
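
    The exact-value baseline that the paper improves on can be made concrete: with independent bottom events, an attack tree combines exact probabilities through AND/OR gates exactly as a fault tree does. A minimal sketch; the gate formulas are the standard fault-tree ones, and the example tree (password guessing vs. phishing plus weak 2FA) is hypothetical, not taken from the paper:

```python
def or_gate(probs):
    # An attack succeeds through ANY child (independent events):
    # P = 1 - prod(1 - p_i)
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def and_gate(probs):
    # An attack must succeed through ALL children: P = prod(p_i)
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical tree: root = OR(password guessing, AND(phishing, weak 2FA))
p_root = or_gate([0.05, and_gate([0.2, 0.1])])
```

    The vague-set approach replaces the exact inputs above with interval-valued memberships when the elementary-event data are incomplete.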

  12. Security threat assessment of an Internet security system using attack tree and vague sets.

    PubMed

    Chang, Kuei-Hu

    2014-01-01

    Security threat assessment of the Internet security system has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of the bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, when the malfunction data of the system's elementary events are incomplete, the traditional approach for calculating reliability is no longer applicable. Moreover, it does not consider the failure probability of the bottom events suffered in the attack, which may bias conclusions. To effectively solve this problem, this paper proposes a novel technique integrating attack trees and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system security threat assessment is adopted in this paper. The result of the proposed method is compared with those of existing security threat assessment approaches.

  13. Asymptotic properties of a bold random walk

    NASA Astrophysics Data System (ADS)

    Serva, Maurizio

    2014-08-01

    In a recent paper we proposed a non-Markovian random walk model with memory of the maximum distance ever reached from the starting point (home). The behavior of the walker is different from the simple symmetric random walk only when she is at this maximum distance, where, having the choice to move either farther or closer, she decides with different probabilities. If the probability of a forward step is higher than the probability of a backward step, the walker is bold and her behavior turns out to be superdiffusive; otherwise she is timorous and her behavior turns out to be subdiffusive. The scaling behavior varies continuously from subdiffusive (timorous) to superdiffusive (bold) according to a single parameter γ ∈ ℝ. We investigate here the asymptotic properties of the bold case in the nonballistic region γ ∈ [0, 1/2], a problem which was left partially unsolved previously. The exact results proved in this paper require new probabilistic tools which rely on the construction of appropriate martingales of the random walk and its hitting times.
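
    The walk rule described above is easy to simulate. In this sketch the forward-step probability at the maximum is a generic parameter q (q > 1/2 bold, q < 1/2 timorous); the paper's exact mapping between q and γ is not reproduced here:

```python
import random

def bold_walk(steps, q, seed=0):
    # Symmetric +/-1 walk on the integers, except at the running maximum
    # distance from home, where a step away from home is taken with
    # probability q. q > 1/2 gives the "bold" walker, q < 1/2 the
    # "timorous" one (q is a stand-in for the paper's parameterization).
    rng = random.Random(seed)
    x, m = 0, 0  # position, maximum distance ever reached
    for _ in range(steps):
        if abs(x) == m:
            step = 1 if rng.random() < q else -1
            step *= 1 if x >= 0 else -1  # "forward" means away from home
        else:
            step = rng.choice((1, -1))
        x += step
        m = max(m, abs(x))
    return x, m
```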

  14. Bond Dilution Effects on the Spin-1 Blume-Capel Model on the Bethe Lattice

    NASA Astrophysics Data System (ADS)

    Albayrak, Erhan

    2017-09-01

    The bond dilution effects are investigated for the spin-1 Blume-Capel model on the Bethe lattice by using the exact recursion relations. The bilinear interaction parameter is either turned on ferromagnetically with probability p or turned off with probability 1 - p between the nearest-neighbor spins. The thermal variations of the order-parameters are studied in detail to obtain the phase diagrams on the possible planes spanned by the temperature (T), probability (p) and crystal field (D) for the coordination numbers q = 3, 4, and 6. The lines of the second-order phase transitions, Tc-lines, combined with the first-order ones, Tt-lines, at the tricritical points (TCP) are always found for any p and q on the (T, D)-planes. It is also found that the model gives only Tc-lines, Tc-lines combined with the Tt-lines at the TCP’s and only Tt-lines with the consecutively decreasing values of D on the (T, p)-planes for all q.

  15. On the performance of energy detection-based CR with SC diversity over IG channel

    NASA Astrophysics Data System (ADS)

    Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka

    2017-12-01

    Cognitive radio (CR) is a viable 5G technology to address the scarcity of spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of spectrum sensing based on the energy detection technique in CR networks over an inverse Gaussian channel with selection combining diversity is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, such as a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
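
    The core of an energy detector can be sketched in a few lines: sum the energy of N received samples and declare detection when it exceeds a threshold. The Monte Carlo estimate below is for the false-alarm probability under unit-variance Gaussian noise only; the inverse Gaussian fading and selection combining from the paper are deliberately omitted:

```python
import random

def energy_detector_pfa(n_samples, threshold, trials=20000, seed=1):
    # Monte Carlo estimate of the false-alarm probability: under H0 the
    # received samples are zero-mean unit-variance Gaussian noise, the
    # test statistic is the summed energy (chi-square with n_samples
    # degrees of freedom), and an alarm is raised above the threshold.
    rng = random.Random(seed)
    alarms = 0
    for _ in range(trials):
        energy = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n_samples))
        if energy > threshold:
            alarms += 1
    return alarms / trials
```

    Sweeping the threshold against a second simulation with a signal present would trace out the (complementary) ROC curve discussed in the abstract.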

  16. Dynamic properties of molecular motors in burnt-bridge models

    NASA Astrophysics Data System (ADS)

    Artyomov, Maxim N.; Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B.

    2007-08-01

    Dynamic properties of molecular motors that fuel their motion by actively interacting with underlying molecular tracks are studied theoretically via discrete-state stochastic 'burnt-bridge' models. The transport of the particles is viewed as an effective diffusion along one-dimensional lattices with periodically distributed weak links. When an unbiased random walker passes the weak link it can be destroyed ('burned') with probability p, providing a bias in the motion of the molecular motor. We present a theoretical approach that allows one to calculate exactly all dynamic properties of motor proteins, such as velocity and dispersion, under general conditions. It is found that dispersion is a decreasing function of the concentration of bridges, while the dependence of dispersion on the burning probability is more complex. Our calculations also show a gap in dispersion for very low concentrations of weak links or for very low burning probabilities which indicates a dynamic phase transition between unbiased and biased diffusion regimes. Theoretical findings are supported by Monte Carlo computer simulations.
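
    The burnt-bridge mechanism itself is compact enough to simulate directly. A minimal sketch, assuming weak links every `spacing` sites that burn with probability `p_burn` when crossed and can never be recrossed (the paper's exact velocity and dispersion formulas are not reproduced here):

```python
import random

def burnt_bridge_walk(steps, spacing, p_burn, seed=0):
    # Unbiased +/-1 walker on a 1D lattice with weak links every
    # `spacing` sites; crossing a weak link destroys ("burns") it with
    # probability p_burn, after which it blocks any recrossing, which
    # biases the motion away from the burnt region.
    rng = random.Random(seed)
    x = 0
    burnt = set()  # destroyed links; link i sits between sites i and i+1
    for _ in range(steps):
        step = rng.choice((1, -1))
        link = x if step == 1 else x - 1
        if link in burnt:
            continue  # a burnt bridge blocks the move
        x += step
        if link % spacing == 0 and rng.random() < p_burn:
            burnt.add(link)
    return x
```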

  17. Pólya number and first return of bursty random walk: Rigorous solutions

    NASA Astrophysics Data System (ADS)

    Wan, J.; Xu, X. P.

    2012-03-01

    The recurrence properties of random walks can be characterized by the Pólya number, i.e., the probability that the walker has returned to the origin at least once. In this paper, we investigate the Pólya number and first return for the bursty random walk on a line, in which the walk has different step sizes and moving probabilities. Using the concept of the Catalan number, we obtain, for the first time, exact results for the first return probability, the average first return time and the Pólya number. We show that the Pólya number displays two different functional behaviors when the walk deviates from the recurrent point. By utilizing the Lagrange inversion formula, we interpret our findings by transferring the Pólya number to the closed-form solutions of an inverse function. We also calculate the Pólya number using another approach, which corroborates our results and conclusions. Finally, we consider the recurrence properties and the Pólya number of two variations of the bursty random walk model.
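
    The Catalan-number route to first-return probabilities is easiest to see on the classic unit-step symmetric walk (not the bursty variant of the paper): the probability of a first return to the origin at step 2n is C(n-1)/2^(2n-1), where C(k) is the k-th Catalan number, and the Pólya number is the sum over all n:

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def first_return_prob(n):
    # Probability that a simple symmetric walk on the integers first
    # returns to the origin at step 2n, written via the Catalan number:
    # f_{2n} = C_{n-1} / 2^(2n-1). E.g. f_2 = 1/2, f_4 = 1/8, f_6 = 1/16.
    return catalan(n - 1) / 2 ** (2 * n - 1)

# Partial sums of f_{2n} approach 1: the one-dimensional walk is
# recurrent, i.e. its Pólya number equals 1.
polya_partial = sum(first_return_prob(n) for n in range(1, 200))
```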

  18. Symptoms of major depression in people with spinal cord injury: implications for screening.

    PubMed

    Bombardier, Charles H; Richards, J Scott; Krause, James S; Tulsky, David; Tate, Denise G

    2004-11-01

    To provide psychometric data on a self-report measure of major depressive disorder (MDD) and to determine whether somatic symptoms are nonspecific or count toward the diagnosis. Survey. Data from the National Spinal Cord Injury Statistical Center representing 16 Model Spinal Cord Injury Systems. Eight hundred forty-nine people with spinal cord injury who completed a standardized follow-up evaluation 1 year after injury. Not applicable. The Patient Health Questionnaire-9 (PHQ-9), a measure of MDD as defined by the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition. We computed descriptive statistics on rates of depressive symptoms and probable MDD, evaluated internal consistency and construct validity, and analyzed the accuracy of individual items as predictors of MDD. Exactly 11.4% of participants met criteria for probable MDD. Probable MDD was associated with poorer subjective health, lower satisfaction with life, and more difficulty in daily role functioning. Probable MDD was not related to most demographic or injury-related variables. Both somatic and psychologic symptoms predicted probable MDD. The PHQ-9 has promise as a tool with which to identify probable MDD in people with SCI. Somatic symptoms should be counted toward the diagnosis and should alert health care providers to the likelihood of MDD. More efficient screening is only one of the quality improvement efforts needed to enhance management of MDD.
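
    For readers unfamiliar with the instrument, a commonly used PHQ-9 diagnostic algorithm can be sketched as follows. This is a reading of the standard scoring rule, not necessarily the exact criteria applied in the study: each of the nine items is rated 0-3, a symptom counts when rated at least 2 ("more than half the days"; the suicidal-ideation item counts at 1 or more), and probable MDD requires at least five symptoms including one of the two cardinal symptoms:

```python
def phq9_probable_mdd(item_scores):
    # item_scores: nine ratings 0-3 (items 1-2 are the cardinal symptoms
    # depressed mood and anhedonia; item 9 is suicidal ideation).
    # A sketch of the common PHQ-9 MDD algorithm; the paper's exact
    # criteria may differ.
    assert len(item_scores) == 9
    present = [s >= 2 for s in item_scores[:8]] + [item_scores[8] >= 1]
    cardinal = present[0] or present[1]
    return cardinal and sum(present) >= 5
```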

  19. Outstanding performance of configuration interaction singles and doubles using exact exchange Kohn-Sham orbitals in real-space numerical grid method

    NASA Astrophysics Data System (ADS)

    Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn

    2016-12-01

    To assess the performance of multi-configuration methods using exact exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals compared with Hartree-Fock (HF), such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with a few dominant configurations than the latter, even with many more configurations. The systematic control of basis set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.

  20. Rainfall-runoff response informed by exact solutions of Boussinesq equation on hillslopes

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Porporato, A. M.

    2017-12-01

    The Boussinesq equation offers a powerful approach for understanding the flow dynamics of unconfined aquifers. Though this nonlinear equation allows for a concise representation of both soil and geomorphological controls on groundwater flow, it has only been solved exactly for a limited number of initial and boundary conditions. These solutions do not include source/sink terms (evapotranspiration, recharge, and seepage to bedrock) and are typically limited to horizontal aquifers. Here we present a class of exact solutions that are general to sloping aquifers and a time-varying source/sink term. By incorporating the source/sink term, they may describe aquifers with time-varying recharge over seasonal or weekly time scales, as well as a loss of water from seepage to the bedrock interface, which is a common feature in hillslopes. These new solutions shed light on the hysteretic relationship between streamflow and groundwater and on the behavior of hydrograph recession curves, thus providing a robust basis for deriving runoff curves for the partitioning of rainfall into infiltration and runoff.

  1. Nodal surfaces and interdimensional degeneracies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loos, Pierre-François, E-mail: pf.loos@anu.edu.au; Bressanini, Dario, E-mail: dario.bressanini@uninsubria.it

    2015-06-07

    The aim of this paper is to shed light on the topology and properties of the nodes (i.e., the zeros of the wave function) in electronic systems. Using the “electrons on a sphere” model, we study the nodes of two-, three-, and four-electron systems in various ferromagnetic configurations (sp, p{sup 2}, sd, pd, p{sup 3}, sp{sup 2}, and sp{sup 3}). In some particular cases (sp, p{sup 2}, sd, pd, and p{sup 3}), we rigorously prove that the non-interacting wave function has the same nodes as the exact (yet unknown) wave function. The number of atomic and molecular systems for which the exact nodes are known analytically is very limited and we show here that this peculiar feature can be attributed to interdimensional degeneracies. Although we have not been able to prove it rigorously, we conjecture that the nodes of the non-interacting wave function for the sp{sup 3} configuration are exact.

  2. Construction of exact constants of motion and effective models for many-body localized systems

    NASA Astrophysics Data System (ADS)

    Goihl, M.; Gluza, M.; Krumnow, C.; Eisert, J.

    2018-04-01

    One of the defining features of many-body localization is the presence of many quasilocal conserved quantities. These constants of motion constitute a cornerstone to an intuitive understanding of much of the phenomenology of many-body localized systems arising from effective Hamiltonians. They may be seen as local magnetization operators smeared out by a quasilocal unitary. However, accurately identifying such constants of motion remains a challenging problem. Current numerical constructions often capture the conserved operators only approximately, thus restricting a conclusive understanding of many-body localization. In this work, we use methods from the theory of quantum many-body systems out of equilibrium to establish an alternative approach for finding a complete set of exact constants of motion which are in addition guaranteed to represent Pauli-z operators. By this we are able to construct and investigate the proposed effective Hamiltonian using exact diagonalization. Hence, our work provides an important tool expected to further boost inquiries into the breakdown of transport due to quenched disorder.

  3. Exact solutions for an oscillator with anti-symmetric quadratic nonlinearity

    NASA Astrophysics Data System (ADS)

    Beléndez, A.; Martínez, F. J.; Beléndez, T.; Pascual, C.; Alvarez, M. L.; Gimeno, E.; Arribas, E.

    2018-04-01

    Closed-form exact solutions for an oscillator with anti-symmetric quadratic nonlinearity are derived from the first integral of the nonlinear differential equation governing the behaviour of this oscillator. The mathematical model is an ordinary second order differential equation in which the sign of the quadratic nonlinear term changes. Two parameters characterize this oscillator: the coefficient of the linear term and the coefficient of the quadratic term. Not only the common case in which both coefficients are positive but also all possible combinations of positive and negative signs of these coefficients which provide periodic motions are considered, giving rise to four different cases. Three different periods and solutions are obtained, since the same result is valid in two of these cases. An interesting feature is that oscillatory motions whose equilibrium points are not at x = 0 are also considered. The periods are given in terms of an incomplete or complete elliptic integral of the first kind, and the exact solutions are expressed as functions including Jacobi elliptic cosine or sine functions.
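
    A numerical cross-check of such closed-form periods is straightforward from the first integral. For the case of two positive coefficients, the equation ẍ + λx + εx|x| = 0 has the even potential V(x) = λx²/2 + ε|x|x²/3, so the period follows by quadrature (a sketch; the paper's elliptic-integral expressions and the other sign combinations are not reproduced):

```python
import math

def period(amplitude, lam=1.0, eps=1.0, n=4000):
    # Period of x'' + lam*x + eps*x*|x| = 0 (both coefficients positive)
    # from energy conservation: T = 4 * int_0^A dx / sqrt(2*(V(A) - V(x)))
    # with V(x) = lam*x^2/2 + eps*|x|*x^2/3. The substitution
    # x = A*sin(theta) removes the endpoint singularity at x = A.
    V = lambda x: lam * x * x / 2 + eps * abs(x) * x * x / 3
    A, h = amplitude, (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        th = (k + 0.5) * h
        x = A * math.sin(th)
        total += A * math.cos(th) / math.sqrt(2 * (V(A) - V(x)))
    return 4 * total * h
```

    With ε = 0 this reduces to the harmonic period 2π/√λ, which provides a sanity check of the quadrature.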

  4. Numerical analysis of the accuracy of bivariate quantile distributions utilizing copulas compared to the GUM supplement 2 for oil pressure balance uncertainties

    NASA Astrophysics Data System (ADS)

    Ramnath, Vishal

    2017-11-01

    In the field of pressure metrology the effective area is Ae = A0 (1 + λP), where A0 is the zero-pressure area and λ is the distortion coefficient, and the conventional practice is to construct univariate probability density functions (PDFs) for A0 and λ. As a result, analytical generalized non-Gaussian bivariate joint PDFs have not featured prominently in pressure metrology. Recently, extended lambda distribution based quantile functions have been successfully utilized for summarizing univariate arbitrary PDF distributions of gas pressure balances. Motivated by this development, we investigate the feasibility and utility of extending and applying quantile functions to systems which naturally exhibit bivariate PDFs. Our approach is to utilize the GUM Supplement 1 methodology to solve and generate Monte Carlo based multivariate uncertainty data for an oil-based pressure balance laboratory standard that is used to generate known high pressures, and which is in turn cross-floated against another pressure balance transfer standard in order to deduce the transfer standard's respective area. We then numerically analyse the uncertainty data by formulating and constructing an approximate bivariate quantile distribution that directly couples A0 and λ in order to compare and contrast its accuracy to an exact GUM Supplement 2 based uncertainty quantification analysis.
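
    The GUM Supplement 1 step of this workflow can be sketched in a few lines: draw correlated (A0, λ) pairs and propagate them through Ae = A0(1 + λP). The numbers in the test are illustrative only, and the Cholesky-style construction below assumes bivariate Gaussian inputs rather than the paper's copula/quantile distributions:

```python
import random, math

def effective_area_mc(a0_mean, a0_sd, lam_mean, lam_sd, rho, pressure,
                      trials=50000, seed=2):
    # GUM Supplement 1 style Monte Carlo for Ae = A0 * (1 + lambda * P),
    # drawing (A0, lambda) from a bivariate Gaussian with correlation rho
    # via a 2x2 Cholesky factor. Returns the sample mean and an
    # equal-tailed 95 % coverage interval.
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        a0 = a0_mean + a0_sd * z1
        lam = lam_mean + lam_sd * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        samples.append(a0 * (1 + lam * pressure))
    samples.sort()
    mean = sum(samples) / trials
    lo, hi = samples[int(0.025 * trials)], samples[int(0.975 * trials)]
    return mean, (lo, hi)
```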

  5. Effects of a Brief Empowerment Program for Families of Persons with Mental Illness in South Korea: A Pilot Study.

    PubMed

    Hyun, Myung-Sun; Nam, Kyoung A; Kim, Hyunlye

    2018-05-30

    Families of persons with mental illness (PMIs) are considered important resources for PMIs rather than as contributors to their mental illness. However, these families experience not only the burden of caregiving but also social stigma and discrimination in various aspects of their lives, and their psychosocial needs tend to be overlooked. This was a pilot study to explore the effects of a brief empowerment program on the empowerment and quality of life of families of PMIs in South Korea. A repeated-measures design with a control group and pre/post-follow-up testing was used. We enrolled 18 participants (experimental group = 9, control group = 9). The experimental group participated in an empowerment program consisting of four sessions over 4 weeks. Data were collected before and after the program, and again 4 weeks later. The χ²-test, Fisher's exact probability test, t-test, and repeated-measures analysis of covariance were used, as appropriate, to analyze data. The program significantly increased empowerment (F = 4.66, p = .020) and quality of life (F = 5.83, p = .009) among participants in the experimental group over time. Its therapeutic features, such as sharing their experiences, discussion, and presentations, can be applied to create effective psychosocial interventions for families of PMIs.
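
    Of the tests listed, Fisher's exact probability test is the one worth spelling out, since it is the natural choice for a 2×2 table from such a small sample (n = 18). A stdlib sketch of the two-sided test, summing the hypergeometric probabilities of all tables with the same margins that are no more likely than the observed one:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    # Two-sided Fisher exact probability for the 2x2 table [[a, b], [c, d]].
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of a table with cell (1,1) = x.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)
```

    On Fisher's classic tea-tasting table [[3, 1], [1, 3]] this gives p = 34/70 ≈ 0.486.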

  6. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on the two dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH) which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, including the signed minimal distance from each OAR (organs at risk) voxel to the target boundary and its opening angle with respect to the origin of voxel coordinate, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with the rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: The consistent result has been found between these two DVHs for each cancer and the average of relative point-wise differences is about 5% within the clinical acceptable extent. Conclusion: According to the result of this study, our method can be used to predict the clinical acceptable DVH and has ability to evaluate the quality and consistency of the treatment planning.
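
    The "conditional probability of the dose given the features" idea reduces, in its simplest kernel form, to a Nadaraya-Watson estimate built from a Gaussian kernel density. The sketch below uses a single geometric feature as a stand-in for the paper's two-feature 2D KDE and predicts a conditional mean dose rather than a full DVH:

```python
import math

def kde_conditional_mean(train_xy, x_query, bandwidth=1.0):
    # Gaussian-kernel estimate of E[dose | feature = x_query] from
    # training pairs (feature, dose): each training point contributes
    # with a weight that decays with its feature-space distance to the
    # query (a 1-feature stand-in for the paper's 2D KDE).
    num = den = 0.0
    for x, y in train_xy:
        w = math.exp(-0.5 * ((x - x_query) / bandwidth) ** 2)
        num += w * y
        den += w
    return num / den
```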

  7. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
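
    The place where the correction enters is easy to show on the Kaplan-Meier estimator itself: a weighted version where, under IPCW, each subject's weight is the inverse of their estimated probability of remaining uncensored (the censoring model that produces those weights is not fitted here). A minimal sketch:

```python
def km_survival(times, events, weights=None):
    # Kaplan-Meier estimator with optional subject weights. With unit
    # weights this is the classic estimator; plugging in inverse
    # probability of censoring weights gives the IPCW-corrected curve.
    # events[i] is 1 for an observed event, 0 for a censored time.
    n = len(times)
    w = weights or [1.0] * n
    data = sorted(zip(times, events, w))
    surv, curve = 1.0, []
    at_risk = sum(w)
    i = 0
    while i < n:
        t = data[i][0]
        d = r = 0.0  # weighted deaths and removals at time t
        while i < n and data[i][0] == t:
            if data[i][1]:
                d += data[i][2]
            r += data[i][2]
            i += 1
        if at_risk > 0 and d > 0:
            surv *= 1.0 - d / at_risk
        curve.append((t, surv))
        at_risk -= r
    return curve
```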

  8. Improving deep convolutional neural networks with mixed maxout units

    PubMed Central

    Liu, Fu-xian; Li, Long-yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
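
    The mixout computation at a single spatial location can be sketched from the description above. This is a reading of the abstract, not the paper's reference code: softmax ("exponential") probabilities over the k candidate feature maps give an expected value, and a Bernoulli draw balances it against the plain maxout maximum:

```python
import math, random

def mixout(feature_maps, p_max=0.5, rng=None):
    # feature_maps: the k values produced by different convolutional
    # transformations of the same input at one location. Returns the
    # maxout maximum with probability p_max, otherwise the
    # softmax-weighted expected value (hypothetical parameterization).
    rng = rng or random.Random(3)
    mx = max(feature_maps)
    exps = [math.exp(v - mx) for v in feature_maps]  # stabilized softmax
    total = sum(exps)
    expected = sum(v * e / total for v, e in zip(feature_maps, exps))
    return mx if rng.random() < p_max else expected
```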

  9. Improved Measures of Integrated Information

    PubMed Central

    Tegmark, Max

    2016-01-01

    Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally unfeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented where they are each characterized by their choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options) and choice of measure for comparing probability distributions (7 options). When requiring the Φ-measures to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands. PMID:27870846

  10. First passage properties of a generalized Pólya urn

    NASA Astrophysics Data System (ADS)

    Kearney, Michael J.; Martin, Richard J.

    2016-12-01

    A generalized two-component Pólya urn process, parameterized by a variable α, is studied in terms of the likelihood that due to fluctuations the initially smaller population in a scenario of competing population growth eventually becomes the larger, or is the larger after a certain passage of time. By casting the problem as an inhomogeneous directed random walk we quantify this role-reversal phenomenon through the first passage probability that equality in size is first reached at a given time, and the related exit probability that equality in size is reached no later than a given time. Using an embedding technique, exact results are obtained which complement existing results and provide new insights into behavioural changes (akin to phase transitions) which occur at defined values of α.
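
    The role-reversal event is easy to probe by simulation. In this sketch the growth probability of each colour is taken proportional to count^α (α = 1 recovering the classic Pólya urn); this parameterization is an assumption for illustration and may differ from the paper's model:

```python
import random

def first_equality_time(n_big, n_small, alpha, max_steps=10000, seed=4):
    # Generalized two-colour urn: at each step one ball is added to
    # colour i with probability proportional to (count_i)**alpha.
    # Returns the first step at which the two populations are equal
    # (the first-passage event studied in the paper), or None if
    # equality is not reached within max_steps.
    rng = random.Random(seed)
    a, b = n_big, n_small
    for t in range(1, max_steps + 1):
        pa = a ** alpha / (a ** alpha + b ** alpha)
        if rng.random() < pa:
            a += 1
        else:
            b += 1
        if a == b:
            return t
    return None
```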

  11. Vibration-translation energy transfer in anharmonic diatomic molecules. 1: A critical evaluation of the semiclassical approximation

    NASA Technical Reports Server (NTRS)

    Mckenzie, R. L.

    1974-01-01

    The semiclassical approximation is applied to anharmonic diatomic oscillators in excited initial states. Multistate numerical solutions giving the vibrational transition probabilities for collinear collisions with an inert atom are compared with equivalent, exact quantum-mechanical calculations. Several symmetrization methods are shown to correlate accurately the predictions of both theories for all initial states, transitions, and molecular types tested, but only if coupling of the oscillator motion and the classical trajectory of the incident particle is considered. In anharmonic heteronuclear molecules, the customary semiclassical method of computing the classical trajectory independently leads to transition probabilities with anomalous low-energy resonances. Proper accounting of the effects of oscillator compression and recoil on the incident particle trajectory removes the anomalies and restores the applicability of the semiclassical approximation.

  12. Approximate and exact numerical integration of the gas dynamic equations

    NASA Technical Reports Server (NTRS)

    Lewis, T. S.; Sirovich, L.

    1979-01-01

    A highly accurate approximation and a rapidly convergent numerical procedure are developed for two dimensional steady supersonic flow over an airfoil. Examples are given for a symmetric airfoil over a range of Mach numbers. Several interesting features are found in the calculation of the tail shock and the flow behind the airfoil.

  13. Geometrical Simplification of the Dipole-Dipole Interaction Formula

    ERIC Educational Resources Information Center

    Kocbach, Ladislav; Lubbad, Suhail

    2010-01-01

    Many students meet dipole-dipole potential energy quite early on when they are taught electrostatics or magnetostatics and it is also a very popular formula, featured in encyclopedias. We show that by a simple rewriting of the formula it becomes apparent that, for example, by reorienting the two dipoles, their attraction can become exactly twice…
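
    The factor-of-two reorientation alluded to above (presumably comparing head-to-tail alignment along the separation axis with side-by-side antiparallel alignment) can be checked numerically from the standard formula U ∝ [p₁·p₂ − 3(p₁·r̂)(p₂·r̂)]/r³, with the 1/(4πε₀) prefactor set to 1 here for simplicity:

```python
import math

def dipole_energy(p1, p2, r):
    # Dipole-dipole interaction energy with the 1/(4*pi*eps0) prefactor
    # set to 1: U = [p1.p2 - 3*(p1.rhat)*(p2.rhat)] / |r|^3.
    rmag = math.sqrt(sum(c * c for c in r))
    rhat = [c / rmag for c in r]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return (dot(p1, p2) - 3 * dot(p1, rhat) * dot(p2, rhat)) / rmag ** 3

# Head-to-tail dipoles along the separation axis (attractive)...
u_head_tail = dipole_energy([1, 0, 0], [1, 0, 0], [2, 0, 0])
# ...versus side-by-side antiparallel (attractive) and parallel (repulsive).
u_side_anti = dipole_energy([0, 1, 0], [0, -1, 0], [2, 0, 0])
u_side = dipole_energy([0, 1, 0], [0, 1, 0], [2, 0, 0])
```

    The head-to-tail attraction comes out exactly twice the side-by-side antiparallel attraction.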

  14. The Role of Pedagogical Variables in Intercultural Development: A Study of Faculty-Led Programs

    ERIC Educational Resources Information Center

    Spenader, Allison J.; Retka, Peggy

    2015-01-01

    Study abroad is often regarded as an important curricular component for supporting intercultural development among college students. While creating rich cross-cultural experiences for students is of primary concern, it remains unclear exactly which programmatic features of study abroad influence intercultural growth in a positive way. Consensus…

  15. The Five Marks of the Mental

    PubMed Central

    Pernu, Tuomas K.

    2017-01-01

    The mental realm seems different to the physical realm; the mental is thought to be dependent on, yet distinct from the physical. But how, exactly, are the two realms supposed to be different, and what, exactly, creates the seemingly insurmountable juxtaposition between the mental and the physical? This review identifies and discusses five marks of the mental, features that set characteristically mental phenomena apart from the characteristically physical phenomena. These five marks (intentionality, consciousness, free will, teleology, and normativity) are not presented as a set of features that define mentality. Rather, each of them is something we seem to associate with phenomena we consider mental, and each of them seems to be in tension with the physical view of reality in its own particular way. It is thus suggested how there is no single mind-body problem, but a set of distinct but interconnected problems. Each of these separate problems is analyzed, and their differences, similarities and connections are identified. This provides a useful basis for future theoretical work on psychology and philosophy of mind, that until now has too often suffered from unclarities, inadequacies, and conflations. PMID:28736537

  16. A correlative optical microscopy and scanning electron microscopy approach to locating nanoparticles in brain tumors.

    PubMed

    Kempen, Paul J; Kircher, Moritz F; de la Zerda, Adam; Zavaleta, Cristina L; Jokerst, Jesse V; Mellinghoff, Ingo K; Gambhir, Sanjiv S; Sinclair, Robert

    2015-01-01

    The growing use of nanoparticles in biomedical applications, including cancer diagnosis and treatment, demands the capability to exactly locate them within complex biological systems. In this work a correlative optical and scanning electron microscopy technique was developed to locate and observe multi-modal gold core nanoparticle accumulation in brain tumor models. Entire brain sections from mice containing orthotopic brain tumors injected intravenously with nanoparticles were imaged using both optical microscopy to identify the brain tumor, and scanning electron microscopy to identify the individual nanoparticles. Gold-based nanoparticles were readily identified in the scanning electron microscope using backscattered electron imaging as bright spots against a darker background. This information was then correlated to determine the exact location of the nanoparticles within the brain tissue. The nanoparticles were located only in areas that contained tumor cells, and not in the surrounding healthy brain tissue. This correlative technique provides a powerful method to relate the macro- and micro-scale features visible in light microscopy with the nanoscale features resolvable in scanning electron microscopy. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. A Bayesian pick-the-winner design in a randomized phase II clinical trial.

    PubMed

    Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E

    2017-10-24

    Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with the Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as the probability that the response rate in one arm is higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability identifies the winner correctly more often than the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has a higher power to determine a clear winner. In application to two studies, the approach performs a statistical comparison of the two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that combines Bayesian posterior probability, the Simon two-stage design, and randomization in a single setting. It gives objective comparisons between the arms to determine the winner.
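
    The winner-picking rule in this record reduces to evaluating P(p_A > p_B) under independent Beta posteriors on the two response rates. A minimal sketch, using Monte Carlo sampling rather than the authors' exact computation, with hypothetical response counts:

```python
import random

def prob_a_beats_b(succ_a, n_a, succ_b, n_b, draws=200_000, seed=42):
    """Monte Carlo estimate of P(p_A > p_B) under independent
    Beta(1 + successes, 1 + failures) posteriors (uniform priors)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        pa = rng.betavariate(1 + succ_a, 1 + n_a - succ_a)
        pb = rng.betavariate(1 + succ_b, 1 + n_b - succ_b)
        if pa > pb:
            wins += 1
    return wins / draws

# Hypothetical trial: arm A 12/30 responders, arm B 7/30 responders
p_winner = prob_a_beats_b(12, 30, 7, 30)   # roughly 0.9 for these counts
```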

  18. 10 CFR 100.10 - Factors to be considered when evaluating sites.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... reactor incorporates unique or unusual features having a significant bearing on the probability or consequences of accidental release of radioactive materials; (4) The safety features that are to be engineered... radioactive fission products. In addition, the site location and the engineered features included as...

  19. 10 CFR 100.10 - Factors to be considered when evaluating sites.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reactor incorporates unique or unusual features having a significant bearing on the probability or consequences of accidental release of radioactive materials; (4) The safety features that are to be engineered... radioactive fission products. In addition, the site location and the engineered features included as...

  20. 10 CFR 100.10 - Factors to be considered when evaluating sites.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reactor incorporates unique or unusual features having a significant bearing on the probability or consequences of accidental release of radioactive materials; (4) The safety features that are to be engineered... radioactive fission products. In addition, the site location and the engineered features included as...

  1. Development of a Nonlinear Probability of Collision Tool for the Earth Observing System

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2006-01-01

    The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. The radius of this sphere is termed the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm will be discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields similar answers to existing high relative velocity linear relative trajectory algorithms. 
The comparison with the existing high velocity/linear theory will also be used to determine at what relative velocity the analysis should use the new nonlinear theory in place of the existing linear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure 1 shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to Covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper will describe the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observation System constellation will be presented.
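
    The integral described in this record can be sanity-checked with a toy Monte Carlo version of the spherically symmetric covariance case used for validation. This sketch assumes a single combined hard-body sphere and a static (not swept) relative-position distribution; it is an illustration, not the operational EOS tool:

```python
import math
import random

def collision_probability(miss_distance, hbr, sigma, n=200_000, seed=1):
    """Monte Carlo estimate of the probability that the relative position
    (3-D Gaussian, spherically symmetric with standard deviation `sigma`,
    centred at the nominal miss distance) falls inside the hard-body
    radius `hbr`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(miss_distance, sigma)
        y = rng.gauss(0.0, sigma)
        z = rng.gauss(0.0, sigma)
        if math.sqrt(x * x + y * y + z * z) < hbr:
            hits += 1
    return hits / n

# Probability grows with the hard-body radius, as expected
p = collision_probability(miss_distance=0.0, hbr=0.5, sigma=1.0)
```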

  2. Unsupervised Spatio-Temporal Data Mining Framework for Burned Area Mapping

    NASA Technical Reports Server (NTRS)

    Kumar, Vipin (Inventor); Boriah, Shyam (Inventor); Mithal, Varun (Inventor); Khandelwal, Ankush (Inventor)

    2016-01-01

    A method reduces processing time required to identify locations burned by fire by receiving a feature value for each pixel in an image, each pixel representing a sub-area of a location. Pixels are then grouped based on similarities of the feature values to form candidate burn events. For each candidate burn event, a probability that the candidate burn event is a true burn event is determined based on at least one further feature value for each pixel in the candidate burn event. Candidate burn events that have a probability below a threshold are removed from further consideration as burn events to produce a set of remaining candidate burn events.
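
    The two stages the patent abstract describes can be sketched schematically (all helper names here are hypothetical, not from the patent): group burn-like pixels into candidate events via connected components, then keep only events whose estimated probability clears a threshold:

```python
from collections import deque

def candidate_burn_events(grid, is_burn_like):
    """Group adjacent pixels whose feature values look burn-like into
    candidate events (4-connected components found by BFS)."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    events = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or not is_burn_like(grid[r][c]):
                continue
            event, queue = [], deque([(r, c)])
            seen.add((r, c))
            while queue:
                i, j = queue.popleft()
                event.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and (ni, nj) not in seen
                            and is_burn_like(grid[ni][nj])):
                        seen.add((ni, nj))
                        queue.append((ni, nj))
            events.append(event)
    return events

def keep_probable(events, event_prob, threshold=0.5):
    """Drop candidate events whose estimated probability of being a
    true burn event falls below the threshold."""
    return [e for e in events if event_prob(e) >= threshold]
```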

  3. Should I Stay or Should I Go? A Habitat-Dependent Dispersal Kernel Improves Prediction of Movement

    PubMed Central

    Vinatier, Fabrice; Lescourret, Françoise; Duyck, Pierre-François; Martin, Olivier; Senoussi, Rachid; Tixier, Philippe

    2011-01-01

    The analysis of animal movement within different landscapes may increase our understanding of how landscape features affect the perceptual range of animals. Perceptual range is linked to movement probability of an animal via a dispersal kernel, the latter generally considered spatially invariant although it may in fact vary across space. We hypothesize that spatial plasticity of an animal's dispersal kernel could greatly modify its distribution in time and space. After radio tracking the movements of walking insects (Cosmopolites sordidus) in banana plantations, we considered the movements of individuals as states of a Markov chain whose transition probabilities depended on the habitat characteristics of current and target locations. Combining a likelihood procedure and pattern-oriented modelling, we tested the hypothesis that dispersal kernel depended on habitat features. Our results were consistent with the concept that animal dispersal kernel depends on habitat features. Recognizing the plasticity of animal movement probabilities will provide insight into landscape-level ecological processes. PMID:21765890
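
    One way to sketch a habitat-dependent dispersal kernel of the kind described here (the structure and parameter names are illustrative, not the authors' fitted model): transition probabilities combine an exponential distance kernel, whose scale depends on the habitat of the current cell, with the attractiveness of each target cell:

```python
import math

def transition_probs(current, targets, habitat, alpha=1.0):
    """Normalized transition probabilities from `current` to each target
    location: exponential dispersal kernel with a habitat-dependent scale,
    weighted by the target cell's attractiveness."""
    scale = habitat[current]["kernel_scale"]   # habitat-dependent kernel
    weights = []
    for loc in targets:
        d = math.dist(habitat[current]["xy"], habitat[loc]["xy"])
        weights.append(habitat[loc]["attract"] ** alpha * math.exp(-d / scale))
    total = sum(weights)
    return [w / total for w in weights]

# Toy landscape: nearer target B should receive more probability than C
habitat = {
    "A": {"xy": (0, 0), "kernel_scale": 1.0, "attract": 1.0},
    "B": {"xy": (1, 0), "kernel_scale": 1.0, "attract": 1.0},
    "C": {"xy": (3, 0), "kernel_scale": 1.0, "attract": 1.0},
}
probs = transition_probs("A", ["B", "C"], habitat)
```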

  4. Should I stay or should I go? A habitat-dependent dispersal kernel improves prediction of movement.

    PubMed

    Vinatier, Fabrice; Lescourret, Françoise; Duyck, Pierre-François; Martin, Olivier; Senoussi, Rachid; Tixier, Philippe

    2011-01-01

    The analysis of animal movement within different landscapes may increase our understanding of how landscape features affect the perceptual range of animals. Perceptual range is linked to movement probability of an animal via a dispersal kernel, the latter generally considered spatially invariant although it may in fact vary across space. We hypothesize that spatial plasticity of an animal's dispersal kernel could greatly modify its distribution in time and space. After radio tracking the movements of walking insects (Cosmopolites sordidus) in banana plantations, we considered the movements of individuals as states of a Markov chain whose transition probabilities depended on the habitat characteristics of current and target locations. Combining a likelihood procedure and pattern-oriented modelling, we tested the hypothesis that dispersal kernel depended on habitat features. Our results were consistent with the concept that animal dispersal kernel depends on habitat features. Recognizing the plasticity of animal movement probabilities will provide insight into landscape-level ecological processes.

  5. Learn-as-you-go acceleration of cosmological parameter estimates

    NASA Astrophysics Data System (ADS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
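
    The fall-back logic in step (2) can be illustrated with a deliberately simple nearest-neighbour emulator; the actual Cosmo++ implementation models and propagates the emulation error far more carefully than this sketch:

```python
import math

class LearnAsYouGo:
    """Minimal sketch of learn-as-you-go emulation: approximate an
    expensive function by nearest-neighbour lookup, falling back to the
    exact call when the nearest training point is too far away; every
    exact call grows the training set."""

    def __init__(self, exact_fn, trust_radius):
        self.exact_fn = exact_fn
        self.trust_radius = trust_radius
        self.points = []            # list of (theta, value) pairs

    def __call__(self, theta):
        if self.points:
            d, val = min((math.dist(theta, p), v) for p, v in self.points)
            if d < self.trust_radius:
                return val          # emulated (approximate) value
        val = self.exact_fn(theta)  # exact but slow evaluation
        self.points.append((theta, val))
        return val

# Usage: wrap an expensive log-likelihood; the repeat call is emulated
calls = {"n": 0}
def slow_loglike(theta):
    calls["n"] += 1
    return -sum(t * t for t in theta)   # stand-in for an expensive call

emu = LearnAsYouGo(slow_loglike, trust_radius=0.5)
emu((1.0, 2.0))
emu((1.0, 2.0))   # served from the training set, no exact call
```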

  6. Brownian motion in time-dependent logarithmic potential: Exact results for dynamics and first-passage properties.

    PubMed

    Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor

    2015-09-21

    The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to the absorbing boundary at the origin of coordinates. Such a model can represent the kinetics of diffusion-controlled reactions of charged molecules or the escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish (1) the regular regime (power-law decay), (2) the marginal regime (power law times a slow function of time), and (3) the regime of enhanced absorption (decay faster than a power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.

  7. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. © 2015, The International Biometric Society.
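
    The classical Good-Turing estimate of the discovery probability mentioned in this record, i.e. the probability that the next observation is a previously unseen species, is n1/n, where n1 counts the species observed exactly once among n observations:

```python
from collections import Counter

def good_turing_new_species_prob(sample):
    """Good-Turing estimate of the probability that the next draw is a
    species not yet observed: n1 / n, where n1 is the number of species
    seen exactly once in the sample of size n."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

# 'c' is the only singleton among five observations
p_new = good_turing_new_species_prob(list("aabbc"))   # -> 0.2
```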

  8. Learn-as-you-go acceleration of cosmological parameter estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C., E-mail: g.aslanyan@auckland.ac.nz, E-mail: r.easther@auckland.ac.nz, E-mail: lpri691@aucklanduni.ac.nz

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.

  9. Convexity of the entanglement entropy of SU(2N)-symmetric fermions with attractive interactions.

    PubMed

    Drut, Joaquín E; Porter, William J

    2015-02-06

    The positivity of the probability measure of attractively interacting systems of 2N-component fermions enables the derivation of an exact convexity property for the ground-state energy of such systems. Using analogous arguments, applied to path-integral expressions for the entanglement entropy derived recently, we prove nonperturbative analytic relations for the Rényi entropies of those systems. These relations are valid for all subsystem sizes, particle numbers, and dimensions, and in arbitrary external trapping potentials.

  10. Ox Mountain Sanitary Landfill Apanolio Canyon Expansion Site, San Mateo County, California. Volume 2. Appendix

    DTIC Science & Technology

    1989-04-01

    old-growth forest located between Sonoma County and the Oregon border. The exact northern limit of the small southern population is not known...meadow habitat on the inland side of sand dunes at Pt. Reyes (Marin County) and Bodega Bay (Sonoma County). Historically, the silverspot also probably...and Sonoma County (6.5 mi. northeast of Penngrove). Collection dates ranged from 27 January to 30 July. Most of the species of Hydrochara are similar

  11. Solvable multistate model of Landau-Zener transitions in cavity QED

    DOE PAGES

    Sinitsyn, Nikolai; Li, Fuxiang

    2016-06-29

    We consider the model of a single optical cavity mode interacting with two-level systems (spins) driven by a linearly time-dependent field. When this field passes through values at which spin energy level splittings become comparable to spin coupling to the optical mode, a cascade of Landau-Zener (LZ) transitions leads to co-flips of spins in exchange for photons of the cavity. We derive exact transition probabilities between different diabatic states induced by such a sweep of the field.

  12. On the mixing time in the Wang-Landau algorithm

    NASA Astrophysics Data System (ADS)

    Fadeeva, Marina; Shchur, Lev

    2018-01-01

    We present preliminary results of the investigation of the properties of the Markov random walk in the energy space generated by the Wang-Landau probability. We build the transition matrix in the energy space (TMES) using the exact density of states for the one-dimensional and two-dimensional Ising models. The spectral gap of the TMES is inversely proportional to the mixing time of the Markov chain. We estimate numerically the dependence of the mixing time on the lattice size, and extract the mixing exponent.
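
    The link between spectral gap and mixing time that this record exploits can be checked on a tiny chain: for the two-state transition matrix [[1-p, p], [q, 1-q]] the second eigenvalue is 1-p-q, so the total-variation distance to stationarity decays like (1-p-q)^t and the mixing time scales as 1/gap with gap = p+q. A small sketch (generic Markov chains, not the Wang-Landau TMES itself):

```python
def total_variation_mixing(P, pi, start, eps=0.01, max_steps=10_000):
    """Number of steps until the distribution of a chain started at
    state `start` is within eps total-variation distance of the
    stationary distribution pi. The spectral gap of P sets the rate:
    t_mix ~ 1/gap."""
    n = len(P)
    dist = [1.0 if i == start else 0.0 for i in range(n)]
    for t in range(1, max_steps + 1):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
        tv = 0.5 * sum(abs(dist[j] - pi[j]) for j in range(n))
        if tv < eps:
            return t
    return max_steps

# Larger gap (0.6 vs 0.2) means faster mixing
fast = total_variation_mixing([[0.7, 0.3], [0.3, 0.7]], [0.5, 0.5], 0)
slow = total_variation_mixing([[0.9, 0.1], [0.1, 0.9]], [0.5, 0.5], 0)
```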

  13. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.

  14. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  15. Analytical theory of mesoscopic Bose-Einstein condensation in an ideal gas

    NASA Astrophysics Data System (ADS)

    Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.

    2010-03-01

    We find the universal structure and scaling of the Bose-Einstein condensation (BEC) statistics and thermodynamics (Gibbs free energy, average energy, heat capacity) for a mesoscopic canonical-ensemble ideal gas in a trap with an arbitrary number of atoms, any volume, and any temperature, including the whole critical region. We identify a universal constraint-cutoff mechanism that makes BEC fluctuations strongly non-Gaussian and is responsible for all unusual critical phenomena of the BEC phase transition in the ideal gas. The main result is an analytical solution to the problem of critical phenomena. It is derived by, first, calculating analytically the universal probability distribution of the noncondensate occupation, or a Landau function, and then using it for the analytical calculation of the universal functions for the particular physical quantities via the exact formulas which express the constraint-cutoff mechanism. We find asymptotics of that analytical solution as well as its simple analytical approximations which describe the universal structure of the critical region in terms of the parabolic cylinder or confluent hypergeometric functions. The obtained results for the order parameter, all higher-order moments of BEC fluctuations, and thermodynamic quantities perfectly match the known asymptotics outside the critical region for both low and high temperature limits. We suggest two- and three-level trap models of BEC and find their exact solutions in terms of the cutoff negative binomial distribution (which tends to the cutoff gamma distribution in the continuous limit) and the confluent hypergeometric distribution, respectively. Also, we present an exactly solvable cutoff Gaussian model of BEC in a degenerate interacting gas. All these exact solutions confirm the universality and constraint-cutoff origin of the strongly non-Gaussian BEC statistics. 
We introduce a regular refinement scheme for the condensate statistics approximations on the basis of the infrared universality of higher-order cumulants and the method of superposition and show how to model BEC statistics in the actual traps. In particular, we find that the three-level trap model with matching the first four or five cumulants is enough to yield remarkably accurate results for all interesting quantities in the whole critical region. We derive an exact multinomial expansion for the noncondensate occupation probability distribution and find its high-temperature asymptotics (Poisson distribution) and corrections to it. Finally, we demonstrate that the critical exponents and a few known terms of the Taylor expansion of the universal functions, which were calculated previously from fitting the finite-size simulations within the phenomenological renormalization-group theory, can be easily obtained from the presented full analytical solutions for the mesoscopic BEC as certain approximations in the close vicinity of the critical point.

  16. [Migraine in SLE: role of antiphospholipid antibodies and Raynaud's phenomenon].

    PubMed

    Annese, Virginia; Tomietto, Paola; Venturini, Paolo; D'Agostini, Serena; Ferraccioli, Gianfranco

    2006-01-01

    To determine the role of antiphospholipid antibodies (aPL) and of Raynaud's phenomenon (RP) in the development of migraine in patients with systemic lupus erythematosus (SLE). 50 unselected SLE patients and 20 rheumatoid arthritis (RA) controls underwent an interview to define the presence of migraine according to the guidelines of the International Headache Society (1988). Serological tests for aPL were performed in all patients. SLE patients were divided according to positivity for RP and/or aPL into 4 subsets: R-/aPL-, R-/aPL+, R+/aPL- and R+/aPL+. Data were analysed using Fisher's exact test, the Chi-square test and the Mann-Whitney U test. SLE and RA patients were similar in demographic and clinical features; aPL positivity was found in a greater proportion of SLE patients versus RA controls (68% vs 25%, p=0.0036). 31 of the 50 lupus patients (62%) and 7 of the 20 RA controls (35%) suffered from migraine (OR=3, CI:1-8.9). Among SLE and RA patients, migraine was associated with aPL positivity (p=0.027 and p=0.019). Analysing the combined effect of aPL and RP on migraine, in R+/aPL+ patients we detected a higher frequency of migraine (85.7%) with respect to the patients negative for these two features (27%, p=0.0051, OR=16, CI:2.2-118) and to the patients positive only for aPL (65%, p=0.0031, OR=6.2, CI:1.2-32). Migraine in SLE and RA is associated with aPL positivity. The simultaneous presence of RP increases the probability of having migraine 2.5-fold, suggesting that cerebral vasospasm might be more common in patients with peripheral vasospasm, given the presence of aPL.
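
    For reference, the two-sided Fisher exact test used in studies like this one can be computed directly from the hypergeometric distribution. A self-contained sketch (the example counts are illustrative, not the study's data):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table
    [[a, b], [c, d]]: sum of hypergeometric probabilities of all tables
    with the same margins that are no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def hyper(x):
        # Probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # Small tolerance guards against float round-off on ties
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs * (1 + 1e-9))

# Hypothetical table [[3, 1], [1, 3]]
p = fisher_exact_two_sided(3, 1, 1, 3)   # ~= 0.4857
```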

  17. Predicting Space Weather Effects on Close Approach Events

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Newman, Lauri K.; Besser, Rebecca L.; Pachura, Daniel A.

    2015-01-01

    The NASA Robotic Conjunction Assessment Risk Analysis (CARA) team sends ephemeris data to the Joint Space Operations Center (JSpOC) for conjunction assessment screening against the JSpOC high accuracy catalog and then assesses risk posed to protected assets from predicted close approaches. Since most spacecraft supported by the CARA team are located in LEO orbits, atmospheric drag is the primary source of state estimate uncertainty. Drag magnitude and uncertainty are directly governed by atmospheric density and thus space weather. At present the actual effect of space weather on atmospheric density cannot be accurately predicted because most atmospheric density models are empirical in nature, which do not perform well in prediction. The Jacchia-Bowman-HASDM 2009 (JBH09) atmospheric density model used at the JSpOC employs a solar storm active compensation feature that predicts storm sizes and arrival times and thus the resulting neutral density alterations. With this feature, estimation errors can occur in either direction (i.e., over- or under-estimation of density and thus drag). Although the exact effect of a solar storm on atmospheric drag cannot be determined, one can explore the effects of JBH09 model error on conjuncting objects' trajectories to determine if a conjunction is likely to become riskier, less risky, or pass unaffected. The CARA team has constructed a Space Weather Trade-Space tool that systematically alters the drag situation for the conjuncting objects and recalculates the probability of collision for each case to determine the range of possible effects on the collision risk. In addition to a review of the theory and the particulars of the tool, the different types of observed output will be explained, along with statistics of their frequency.

  18. The extended Einstein-Maxwell-aether-axion model: Exact solutions for axionically controlled pp-wave aether modes

    NASA Astrophysics Data System (ADS)

    Balakin, Alexander B.

    2018-03-01

    The extended Einstein-Maxwell-aether-axion model describes internal interactions inside a system which contains gravitational and electromagnetic fields, the dynamic unit vector field describing the velocity of an aether, and the pseudoscalar field associated with the axionic dark matter. The specific feature of this model is that the axion field controls the dynamics of the aether through the guiding functions incorporated into Jacobson’s constitutive tensor. Depending on the state of the axion field, these guiding functions can control and switch on or switch off the influence of acceleration, shear, vorticity and expansion of the aether flow on the state of the physical system as a whole. We obtain new exact solutions which possess the pp-wave symmetry, and refer to them as pp-wave aether modes, in contrast to pure pp-waves, which cannot propagate in this field conglomerate. These exact solutions describe a specific dynamic state of the pseudoscalar field, which corresponds to one of the minima of the axion potential and switches off the influence of shear and expansion of the aether flow; the model does not impose restrictions on Jacobson’s coupling constants and on the axion mass. Properties of these new exact solutions are discussed.

  19. Exact reconstruction with directional wavelets on the sphere

    NASA Astrophysics Data System (ADS)

    Wiaux, Y.; McEwen, J. D.; Vandergheynst, P.; Blanc, O.

    2008-08-01

    A new formalism is derived for the analysis and exact reconstruction of band-limited signals on the sphere with directional wavelets. It represents an evolution of the wavelet formalism previously developed by Antoine & Vandergheynst and Wiaux et al. The translations of the wavelets at any point on the sphere and their proper rotations are still defined through the continuous three-dimensional rotations. The dilations of the wavelets are directly defined in harmonic space through a new kernel dilation, which is a modification of an existing harmonic dilation. A family of factorized steerable functions with compact harmonic support which are suitable for this kernel dilation are first identified. A scale-discretized wavelet formalism is then derived, relying on this dilation. The discrete nature of the analysis scales allows the exact reconstruction of band-limited signals. A corresponding exact multi-resolution algorithm is finally described and an implementation is tested. The formalism is of interest notably for the denoising or the deconvolution of signals on the sphere with a sparse expansion in wavelets. In astrophysics, it finds a particular application for the identification of localized directional features in the cosmic microwave background data, such as the imprint of topological defects, in particular, cosmic strings, and for their reconstruction after separation from the other signal components.

  20. A computational model for estimating recruitment of primary afferent fibers by intraneural stimulation in the dorsal root ganglia

    NASA Astrophysics Data System (ADS)

    Bourbeau, D. J.; Hokanson, J. A.; Rubin, J. E.; Weber, D. J.

    2011-10-01

    Primary afferent microstimulation has been proposed as a method for activating cutaneous and muscle afferent fibers to restore tactile and proprioceptive feedback after limb loss or peripheral neuropathy. Large populations of primary afferent fibers can be accessed directly by implanting microelectrode arrays in the dorsal root ganglia (DRG), which provide a compact and stable target for stimulating a diverse group of sensory fibers. To gain insight into factors affecting the number and types of primary afferents activated, we developed a computational model that simulates the recruitment of fibers in the feline L7 DRG. The model comprises two parts. The first part is a single-fiber model used to describe the current-distance relation and was based on the McIntyre-Richardson-Grill model for excitability. The second part uses the results of the single-fiber model and published data on fiber size distributions to predict the probability of recruiting a given number of fibers as a function of stimulus intensity. The range of intensities over which exactly one fiber was recruited was approximately 0.5-5 µA (0.1-1 nC per phase); the stimulus intensity at which the probability of recruiting exactly one fiber was maximized was 2.3 µA. However, at 2.3 µA, it was also possible to recruit up to three fibers, albeit with a lower probability. Stimulation amplitudes up to 6 µA were tested with the population model, which showed that as the amplitude increased, the number of fibers recruited increased exponentially. The distribution of threshold amplitudes predicted by the model was similar to that previously reported by in vivo experimentation. Finally, the model suggested that medium diameter fibers (7.3-11.5 µm) may be recruited with much greater probability than large diameter fibers (12.8-16 µm).
This model may be used to efficiently test a range of stimulation parameters and nerve morphologies to complement results from electrophysiology experiments and to aid in the design of microelectrode arrays for neural interfaces.
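    The population part of the model can be caricatured as follows, assuming independent fibers with identical threshold statistics. The linear threshold ramp (0.5-6 µA) and the fiber count are illustrative stand-ins for the paper's MRG-based current-distance relation and measured fiber-size distribution.

```python
import math

def p_fire(amplitude, lo=0.5, hi=6.0):
    """Hypothetical per-fiber recruitment probability: a linear ramp between
    an assumed minimum (lo) and maximum (hi) threshold amplitude in uA."""
    return min(1.0, max(0.0, (amplitude - lo) / (hi - lo)))

def p_exactly_k(n_fibers, amplitude, k):
    """Binomial probability that exactly k of n_fibers are recruited,
    treating fibers as independent and identically distributed."""
    p = p_fire(amplitude)
    return math.comb(n_fibers, k) * p**k * (1 - p)**(n_fibers - k)

# Scan stimulus amplitudes for the one maximising P(exactly one fiber).
amps = [a / 10 for a in range(5, 61)]
best_amp = max(amps, key=lambda a: p_exactly_k(50, a, 1))
```

P(exactly one) peaks where the per-fiber probability is about 1/n, which is why a narrow low-amplitude window favours single-fiber recruitment.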

  1. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
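    A minimal sketch of the two-stage idea, assuming normally distributed inputs and using a simple standard-error perturbation of the estimated mean in the outer stage (the paper's exact resampling scheme may differ):

```python
import math
import random
import statistics

def two_stage_mc(sample, f, n_outer=200, n_inner=200, seed=1):
    """Two-stage Monte Carlo: the outer loop resamples plausible
    input-distribution parameters given the finite sample; the inner loop
    propagates draws through the nonlinear measurement equation f."""
    rng = random.Random(seed)
    n = len(sample)
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)
    outputs = []
    for _ in range(n_outer):
        # Stage 1: perturb the estimated mean by its standard error
        # (a simple normal approximation for illustration).
        mu = rng.gauss(xbar, s / math.sqrt(n))
        for _ in range(n_inner):
            # Stage 2: ordinary propagation-of-distributions draw.
            outputs.append(f(rng.gauss(mu, s)))
    return outputs
```

Treating the sample estimates as exact would collapse stage 1 to a single point and understate the spread of the outputs.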

  2. Nonlinear normal modes in electrodynamic systems: A nonperturbative approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudrin, A. V., E-mail: kud@rf.unn.ru; Kudrina, O. A.; Petrov, E. Yu.

    2016-06-15

    We consider electromagnetic nonlinear normal modes in cylindrical cavity resonators filled with a nonlinear nondispersive medium. The key feature of the analysis is that exact analytic solutions of the nonlinear field equations are employed to study the mode properties in detail. Based on such a nonperturbative approach, we rigorously prove that the total energy of free nonlinear oscillations in a distributed conservative system, such as that considered in our work, can exactly coincide with the sum of energies of the normal modes of the system. This fact implies that the energy orthogonality property, which has so far been known to hold only for linear oscillations and fields, can also be observed in a nonlinear oscillatory system.

  3. Exact extraction method for road rutting laser lines

    NASA Astrophysics Data System (ADS)

    Hong, Zhiming

    2018-02-01

    This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and administration, presents the shortcomings of existing rutting detection methods, and proposes a new rutting line-laser extraction method based on the peak intensity characteristic and peak continuity. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, based on peak intensity calculation over the whole road image, is obtained to determine the seed point of the rutting laser line. Starting from the seed point, the light-points of the rutting line-laser are extracted based on the features of peak continuity, providing exact basic data for subsequent calculation of pavement rutting depths.

  4. EXACT RELATIVISTIC NEWTONIAN REPRESENTATION OF GRAVITATIONAL STATIC SPACETIME GEOMETRIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Shubhrangshu; Sarkar, Tamal; Bhadra, Arunava, E-mail: sghosh@jcbose.ac.in, E-mail: ta.sa.nbu@hotmail.com, E-mail: aru_bhadra@yahoo.com

    2016-09-01

    We construct a self-consistent relativistic Newtonian analogue corresponding to gravitational static spherically symmetric spacetime geometries, starting directly from a generalized scalar relativistic gravitational action in a Newtonian framework, which gives geodesic equations of motion identical to those of the parent metric. Consequently, the derived velocity-dependent relativistic scalar potential, which is a relativistic generalization of the Newtonian gravitational potential, exactly reproduces the relativistic gravitational features corresponding to any static spherically symmetric spacetime geometry in its entirety, including all the experimentally tested gravitational effects in the weak field up to the present. This relativistic analogous potential is expected to be quite useful in studying a wide range of astrophysical phenomena, especially in strong field gravity.

  5. Generic features of the wealth distribution in ideal-gas-like markets.

    PubMed

    Mohanty, P K

    2006-07-01

    We provide an exact solution to the ideal-gas-like models studied in econophysics to understand the microscopic origin of the Pareto law. In this class of models, the key ingredient necessary for a self-organized scale-free steady-state distribution is the trading or collision rule, whereby agents or particles save a definite fraction of their wealth or energy and invest the rest for trading. Using a Gibbs ensemble approach we obtain the exact distribution of wealth in this model. Moreover we show that in this model (a) good savers are always rich and (b) every agent, poor or rich, invests the same amount for trading. Nonlinear trading rules could alter the generic scenario observed here.
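    The trading rule described above (each agent saves a fixed fraction of its wealth and the pair randomly re-divides the rest) can be simulated in a few lines; the saving fraction, population size, and trade count below are illustrative.

```python
import random

def trade(w, lam, rng):
    """One pairwise trade in the ideal-gas-like market: each agent keeps a
    saved fraction lam of its wealth and the remainder of the pair's wealth
    is randomly re-divided. Total wealth is conserved exactly."""
    i, j = rng.sample(range(len(w)), 2)
    pot = (1 - lam) * (w[i] + w[j])
    eps = rng.random()
    w[i] = lam * w[i] + eps * pot
    w[j] = lam * w[j] + (1 - eps) * pot

# Relax an initially equal-wealth population toward its steady state.
rng = random.Random(0)
wealth = [1.0] * 200
for _ in range(20000):
    trade(wealth, lam=0.5, rng=rng)
```

With lam = 0 this reduces to the pure ideal-gas exchange (exponential steady state); a positive saving fraction reshapes the distribution, as the abstract discusses.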

  6. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non-Boolean structure and non-positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a variant interpretation of wave functions based on photo detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  7. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non-Boolean structure and non-positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a variant interpretation of wave functions based on photo detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  8. Self-limiting Atypical Antipsychotics-induced Edema: Clinical Cases and Systematic Review

    PubMed Central

    Umar, Musa Usman; Abdullahi, Aminu Taura

    2016-01-01

    A number of atypical antipsychotics have been associated with peripheral edema. The exact cause is not known. We report two cases of olanzapine-induced edema and a brief review of atypical antipsychotic-induced edema, possible risk factors, etiology, and clinical features. Recommendations are given on different methods of managing this side effect. PMID:27335511

  9. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    ERIC Educational Resources Information Center

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  10. Self-limiting Atypical Antipsychotics-induced Edema: Clinical Cases and Systematic Review.

    PubMed

    Umar, Musa Usman; Abdullahi, Aminu Taura

    2016-01-01

    A number of atypical antipsychotics have been associated with peripheral edema. The exact cause is not known. We report two cases of olanzapine-induced edema and a brief review of atypical antipsychotic-induced edema, possible risk factors, etiology, and clinical features. Recommendations are given on different methods of managing this side effect.

  11. Operational foreshock forecasting: Fifteen years after

    NASA Astrophysics Data System (ADS)

    Ogata, Y.

    2010-12-01

    We are concerned with operational forecasting of the probability that events are foreshocks of a forthcoming earthquake that is significantly larger (the mainshock). Specifically, we define foreshocks as preshocks substantially smaller than the mainshock, by a magnitude gap of 0.5 or larger. The probability gain of a foreshock forecast is extremely high compared to long-term forecasts by renewal processes or various alarm-based intermediate-term forecasts, because of a large event's low occurrence rate in a short period and a narrow target region. It is therefore desirable to establish operational foreshock probability forecasting as seismologists have done for aftershocks. When a series of earthquakes occurs in a region, we attempt to discriminate foreshocks from a swarm or mainshock-aftershock sequence. Namely, after real-time identification of an earthquake cluster using methods such as the single-link algorithm, the probability is calculated by applying statistical features that discriminate foreshocks from other types of clusters, considering the events' stronger proximity in time and space and their tendency toward chronologically increasing magnitudes. These features were modeled for probability forecasting, and the coefficients of the model were estimated in Ogata et al. (1996) for the JMA hypocenter data (M≧4, 1926-1993). Fifteen years have now passed since the publication of that work, so we are able to present the performance and validation of the forecasts (1994-2009) using the same model. Taking isolated events into consideration, the probability of the first event in a potential cluster being a foreshock varies in a range between 0+% and 10+% depending on location. This conditional forecasting performs significantly better than the unconditional (average) foreshock probability of 3.7% throughout the Japan region.
    Furthermore, when additional events occur in a cluster, the forecast probabilities range more widely, from nearly 0% to about 40%, depending on the discrimination features among the events in the cluster. This conditional forecasting again performs significantly better than the unconditional foreshock probability of 7.3%, the average probability for plural events in earthquake clusters. Indeed, the frequency ratios of the actual foreshocks are consistent with the forecasted probabilities. Reference: Ogata, Y., Utsu, T. and Katsura, K. (1996). Statistical discrimination of foreshocks from other earthquake clusters, Geophys. J. Int. 127, 17-30.
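    Schematically, such a conditional forecast raises or lowers a regional base foreshock rate according to a cluster-discrimination score. The logistic form and the score below are generic stand-ins, not the fitted Ogata et al. (1996) model.

```python
import math

def foreshock_probability(base_rate, feature_score):
    """Shift the log-odds of the regional base foreshock rate by a cluster
    discrimination score summarising spatio-temporal proximity and magnitude
    ordering (hypothetical logistic update; coefficients not reproduced)."""
    logit = math.log(base_rate / (1.0 - base_rate)) + feature_score
    return 1.0 / (1.0 + math.exp(-logit))
```

A score of zero leaves the base rate unchanged; positive scores (tight clustering, increasing magnitudes) raise the probability toward the upper end of the reported 0-40% range.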

  12. Light scattering by hexagonal ice crystals with distributed inclusions

    NASA Astrophysics Data System (ADS)

    Panetta, R. Lee; Zhang, Jia-Ning; Bi, Lei; Yang, Ping; Tang, Guanlin

    2016-07-01

    Inclusions of air bubbles or soot particles have significant effects on the single-scattering properties of ice crystals, effects that in turn have significant impacts on the radiation budget of an atmosphere containing the crystals. This study investigates some of the single-scattering effects in the case of hexagonal ice crystals, including effects on the backscattering depolarization ratio, a quantity of practical importance in the interpretation of lidar observations. One distinguishing feature of the study is an investigation of scattering properties at a visible wavelength for a crystal with size parameter (x) above 100, a size regime where one expects some agreement between exact methods and geometrical optics methods. This expectation is generally borne out in a test comparison of how the sensitivity of scattering properties to the distribution of a given volume fraction of included air is represented using (i) an approximate Monte Carlo Ray Tracing (MCRT) method and (ii) a numerically exact pseudo-spectral time-domain (PSTD) method. Another distinguishing feature of the study is a close examination, using the numerically exact Invariant-Imbedding T-Matrix (II-TM) method, of how some optical properties of importance to satellite remote sensing vary as the volume fraction of inclusions and size of crystal are varied. Although such an investigation of properties in the x>100 regime faces serious computational burdens that force a large number of idealizations and simplifications in the study, the results nevertheless provide an intriguing glimpse of what is evidently a quite complex sensitivity of optical scattering properties to inclusions of air or soot as volume fraction and size parameter are varied.

  13. A new probably autosomal recessive cardiomelic dysplasia with mesoaxial hexadactyly

    PubMed Central

    Martínez, R Martínez Y; Corona-Rivera, E; Jiménez-Martínez, M; Ocampo-Campos, R; García-Maravilla, S; Cantú, J M

    1981-01-01

    A distinct probably autosomal recessive syndrome was ascertained in a 17-year-old boy and his deceased sister. The main features were cardiac dysplasia, peculiar facies, central bilateral (mesoaxial) hexadactyly, synmetacarpalia, short stature, ocular torticollis, and delayed puberty. PMID:7241534

  14. Introducing the Qplex: a novel arena for quantum theory

    NASA Astrophysics Data System (ADS)

    Appleby, Marcus; Fuchs, Christopher A.; Stacey, Blake C.; Zhu, Huangjun

    2017-07-01

    We reconstruct quantum theory starting from the premise that, as Asher Peres remarked, "Unperformed experiments have no results." The tools of quantum information theory, and in particular the symmetric informationally complete (SIC) measurements, provide a concise expression of how exactly Peres's dictum holds true. That expression is a constraint on how the probability distributions for outcomes of different, hypothetical and mutually exclusive experiments ought to mesh together, a type of constraint not foreseen in classical thinking. Taking this as our foundational principle, we show how to reconstruct the formalism of quantum theory in finite-dimensional Hilbert spaces. The central variety of mathematical entity in our reconstruction is the qplex, a very particular type of subset of a probability simplex. Along the way, by closely studying the symmetry properties of qplexes, we derive a condition for the existence of a d-dimensional SIC.

  15. Self-diffusion in periodic porous media: a comparison of numerical simulation and eigenvalue methods.

    PubMed

    Schwartz, L M; Bergman, D J; Dunn, K J; Mitra, P P

    1996-01-01

    Random walk computer simulations are an important tool in understanding magnetic resonance measurements in porous media. In this paper we focus on the description of pulsed field gradient spin echo (PGSE) experiments that measure the probability, P(R,t), that a diffusing water molecule will travel a distance R in a time t. Because PGSE simulations are often limited by statistical considerations, we will see that valuable insight can be gained by working with simple periodic geometries and comparing simulation data to the results of exact eigenvalue expansions. In this connection, our attention will be focused on (1) the wavevector, k, and time dependent magnetization, M(k, t); and (2) the normalized probability, Ps(delta R, t), that a diffusing particle will return to within delta R of the origin after time t.
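    The return probability Ps(δR, t) can be estimated for free diffusion by a simple lattice random walk; the periodic pore geometry and the eigenvalue expansion of the paper are omitted in this sketch.

```python
import random

def return_probability(n_walkers, n_steps, delta_r, rng):
    """Monte Carlo estimate of the probability that a 3-D lattice random
    walker lies within delta_r of its starting point after n_steps
    (a free-diffusion stand-in for Ps(delta R, t); no pore walls)."""
    returned = 0
    for _ in range(n_walkers):
        x = y = z = 0
        for _ in range(n_steps):
            axis = rng.randrange(3)
            step = rng.choice((-1, 1))
            if axis == 0:
                x += step
            elif axis == 1:
                y += step
            else:
                z += step
        if x * x + y * y + z * z <= delta_r * delta_r:
            returned += 1
    return returned / n_walkers
```

In a periodic pore the same estimate would be compared against the exact eigenvalue expansion, which is the comparison the paper carries out.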

  16. Measurement Model Nonlinearity in Estimation of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Majji, Manoranjan; Junkins, J. L.; Turner, J. D.

    2012-06-01

    The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and the geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.

  17. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high-dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to the choice of the new transition probabilities. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive 'learning' algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
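    The likelihood-ratio reweighting at the heart of importance sampling can be illustrated on a toy Gaussian-tail problem (not the neutron-scatter model of the dissertation): sample from a distribution shifted into the rare region, and weight each hit by the ratio of true to simulated densities.

```python
import math
import random

def rare_tail_is(a, n, rng):
    """Importance-sampling estimate of P(Z > a) for a standard normal Z.
    Draws come from N(a, 1), centred on the rare region; each sample that
    lands in the event is weighted by the likelihood ratio
    N(0,1)/N(a,1) evaluated there, exp(-a*z + a^2/2), keeping the
    estimator unbiased."""
    total = 0.0
    for _ in range(n):
        z = rng.gauss(a, 1.0)
        if z > a:
            total += math.exp(-a * z + 0.5 * a * a)
    return total / n

est = rare_tail_is(4.0, 20000, random.Random(0))
```

Naive sampling would need on the order of 10^5 draws per hit for a = 4; the tilted sampler places half its draws in the event and lets the weights carry the small probability.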

  18. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  19. Application of psychometric theory to the measurement of voice quality using rating scales.

    PubMed

    Shrivastav, Rahul; Sapienza, Christine M; Nandur, Vuday

    2005-04-01

    Rating scales are commonly used to study voice quality. However, recent research has demonstrated that perceptual measures of voice quality obtained using rating scales suffer from poor interjudge agreement and reliability, especially in the mid-range of the scale. These findings, along with those obtained using multidimensional scaling (MDS), have been interpreted to show that listeners perceive voice quality in an idiosyncratic manner. Based on psychometric theory, the present research explored an alternative explanation for the poor interlistener agreement observed in previous research. This approach suggests that poor agreement between listeners may result, in part, from measurement errors related to a variety of factors rather than true differences in the perception of voice quality. In this study, 10 listeners rated breathiness for 27 vowel stimuli using a 5-point rating scale. Each stimulus was presented to the listeners 10 times in random order. Interlistener agreement and reliability were calculated from these ratings. Agreement and reliability were observed to improve when multiple ratings of each stimulus from each listener were averaged and when standardized scores were used instead of absolute ratings. The probability of exact agreement was found to be approximately .9 when using averaged ratings and standardized scores. In contrast, the probability of exact agreement was only .4 when a single rating from each listener was used to measure agreement. These findings support the hypothesis that poor agreement reported in past research partly arises from errors in measurement rather than individual differences in the perception of voice quality.
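    The effect of averaging repeated ratings on exact agreement can be reproduced with a simple measurement-error simulation; the Gaussian noise model and its parameters are hypothetical, chosen only to illustrate the psychometric argument.

```python
import random

def exact_agreement(ratings_a, ratings_b):
    """Fraction of stimuli on which two listeners give the same scale point."""
    return sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)

def rate(true_scores, noise_sd, n_reps, rng):
    """Simulated 5-point ratings: each rating is the true score plus Gaussian
    measurement error, averaged over n_reps presentations, then rounded and
    clamped to the 1-5 scale."""
    out = []
    for t in true_scores:
        avg = sum(rng.gauss(t, noise_sd) for _ in range(n_reps)) / n_reps
        out.append(min(5, max(1, round(avg))))
    return out

rng = random.Random(0)
truth = [rng.uniform(1, 5) for _ in range(27)]
single = exact_agreement(rate(truth, 0.8, 1, rng), rate(truth, 0.8, 1, rng))
averaged = exact_agreement(rate(truth, 0.8, 10, rng), rate(truth, 0.8, 10, rng))
```

Averaging shrinks the error term by a factor of sqrt(n_reps), so two simulated listeners land on the same scale point far more often, mirroring the jump from roughly .4 to .9 agreement reported above.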

  20. Inferring relationships between pairs of individuals from locus heterozygosities

    PubMed Central

    Presciuttini, Silvano; Toni, Chiara; Tempestini, Elena; Verdiani, Simonetta; Casarino, Lucia; Spinetti, Isabella; Stefano, Francesco De; Domenici, Ranieri; Bailey-Wilson, Joan E

    2002-01-01

    Background The traditional exact method for inferring relationships between individuals from genetic data is not easily applicable in all situations that may be encountered in several fields of applied genetics. This study describes an approach that gives affordable results and is easily applicable; it is based on the probabilities that two individuals share 0, 1 or both alleles at a locus identical by state. Results We show that these probabilities (zi) depend on locus heterozygosity (H), and are scarcely affected by variation of the distribution of allele frequencies. This allows us to obtain empirical curves relating zi's to H for a series of common relationships, so that the likelihood ratio of a pair of relationships between any two individuals, given their genotypes at a locus, is a function of a single parameter, H. Application to large samples of mother-child and full-sib pairs shows that the statistical power of this method to infer the correct relationship is not much lower than the exact method. Analysis of a large database of STR data proves that locus heterozygosity does not vary significantly among Caucasian populations, apart from special cases, so that the likelihood ratio of the more common relationships between pairs of individuals may be obtained by looking at tabulated zi values. Conclusions A simple method is provided, which may be used by any scientist with the help of a calculator or a spreadsheet to compute the likelihood ratios of common alternative relationships between pairs of individuals. PMID:12441003
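    Given sharing probabilities z0, z1, z2 under two competing relationships, the likelihood ratio is a per-locus product. The z values below are illustrative placeholders, not the paper's H-dependent empirical curves.

```python
def likelihood_ratio(shared_counts, z_rel, z_alt):
    """Likelihood ratio for relationship 'rel' versus 'alt', given the number
    of alleles (0, 1, or 2) shared identical by state at each locus.
    z_rel[k] and z_alt[k] are the probabilities of sharing k alleles under
    each hypothesis (the paper derives these from locus heterozygosity H)."""
    lr = 1.0
    for k in shared_counts:
        lr *= z_rel[k] / z_alt[k]
    return lr

# Illustrative sharing probabilities for full sibs vs unrelated pairs.
z_sib = (0.05, 0.45, 0.50)
z_unrel = (0.20, 0.50, 0.30)
lr = likelihood_ratio([2, 2, 1, 2, 1], z_sib, z_unrel)
```

With tabulated z values per heterozygosity class, this product is exactly the spreadsheet-level calculation the conclusions describe.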

  1. Exact sampling hardness of Ising spin models

    NASA Astrophysics Data System (ADS)

    Fefferman, B.; Foss-Feig, M.; Gorshkov, A. V.

    2017-09-01

    We study the complexity of classically sampling from the output distribution of an Ising spin model, which can be implemented naturally in a variety of atomic, molecular, and optical systems. In particular, we construct a specific example of an Ising Hamiltonian that, after time evolution starting from a trivial initial state, produces a particular output configuration with probability very nearly proportional to the square of the permanent of a matrix with arbitrary integer entries. In a similar spirit to boson sampling, the ability to sample classically from the probability distribution induced by time evolution under this Hamiltonian would imply unlikely complexity theoretic consequences, suggesting that the dynamics of such a spin model cannot be efficiently simulated with a classical computer. Physical Ising spin systems capable of achieving problem-size instances (i.e., qubit numbers) large enough so that classical sampling of the output distribution is classically difficult in practice may be achievable in the near future. Unlike boson sampling, our current results only imply hardness of exact classical sampling, leaving open the important question of whether a much stronger approximate-sampling hardness result holds in this context. The latter is most likely necessary to enable a convincing experimental demonstration of quantum supremacy. As referenced in a recent paper [A. Bouland, L. Mancinska, and X. Zhang, in Proceedings of the 31st Conference on Computational Complexity (CCC 2016), Leibniz International Proceedings in Informatics (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, 2016)], our result completes the sampling hardness classification of two-qubit commuting Hamiltonians.
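    The matrix permanent that sets the output probability in this construction can be computed exactly, though only in exponential time, with Ryser's inclusion-exclusion formula:

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix by Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^{|S|} * prod_i sum_{j in S} a[i][j], in O(2^n * n^2) time."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

The exponential cost of even the best known exact permanent algorithms is precisely what makes classical sampling from the induced distribution implausible.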

  2. Survival probability of a truncated radial oscillator subject to periodic kicks

    NASA Astrophysics Data System (ADS)

    Tanabe, Seiichi; Watanabe, Shinichi; Saif, Farhan; Matsuzawa, Michio

    2002-03-01

    Classical and quantum survival probabilities are compared for a truncated radial oscillator undergoing impulsive interactions with periodic laser pulses represented here as kicks. The system is truncated in the sense that the harmonic potential is made valid only within a finite range; the rest of the space is treated as a perfect absorber. Exploring extended values of the parameters of this model [Phys. Rev. A 63, 052721 (2001)], we supplement discussions on classical and quantum features near resonances. The classical system proves to be quasi-integrable and preserves phase-space area despite the momentum transferred by the kicks, exhibiting simple yet rich phase-space features. A geometrical argument reveals quantum-classical correspondence in the locations of minima in the paired survival probabilities while the ``ionization'' rates differ due to quantum tunneling.

  3. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-03-23

Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. Here we analyze the common features of all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information about the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that minimize the sum of the cheating probabilities of both parties are found to be trivial, and thus practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  4. Radio-nuclide mixture identification using medium energy resolution detectors

    DOEpatents

    Nelson, Karl Einar

    2013-09-17

    According to one embodiment, a method for identifying radio-nuclides includes receiving spectral data, extracting a feature set from the spectral data comparable to a plurality of templates in a template library, and using a branch and bound method to determine a probable template match based on the feature set and templates in the template library. In another embodiment, a device for identifying unknown radio-nuclides includes a processor, a multi-channel analyzer, and a memory operatively coupled to the processor, the memory having computer readable code stored thereon. The computer readable code is configured, when executed by the processor, to receive spectral data, to extract a feature set from the spectral data comparable to a plurality of templates in a template library, and to use a branch and bound method to determine a probable template match based on the feature set and templates in the template library.

  5. A Bayesian model averaging method for improving SMT phrase table

    NASA Astrophysics Data System (ADS)

    Duan, Nan

    2013-03-01

Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features rather than exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. Our approach can be developed independently and integrated into the current SMT pipeline directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decoding.

  6. Deep learning of support vector machines with class probability output networks.

    PubMed

    Kim, Sangwook; Yu, Zhibin; Kil, Rhee Man; Lee, Minho

    2015-04-01

    Deep learning methods endeavor to learn features automatically at multiple levels and allow systems to learn complex functions mapping from the input space to the output space for the given data. The ability to learn powerful features automatically is increasingly important as the volume of data and range of applications of machine learning methods continues to grow. This paper proposes a new deep architecture that uses support vector machines (SVMs) with class probability output networks (CPONs) to provide better generalization power for pattern classification problems. As a result, deep features are extracted without additional feature engineering steps, using multiple layers of the SVM classifiers with CPONs. The proposed structure closely approaches the ideal Bayes classifier as the number of layers increases. Using a simulation of classification problems, the effectiveness of the proposed method is demonstrated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. V1494 Aql: Eclipsing Fast Nova with an Unusual Orbital Light Curve

    NASA Astrophysics Data System (ADS)

    Kato, Taichi; Ishioka, Ryoko; Uemura, Makoto; Starkey, Donn R.; Krajci, Tom

    2004-03-01

    We present the time-resolved photometry of V1494 Aql (Nova Aql 1999 No. 2) between 2001 November and 2003 June. The object is confirmed to be an eclipsing nova with a period of 0.1346138(2)d. The eclipses were present in all observed epochs. The orbital light curve shows a rather unusual profile, consisting of a bump-like feature at phase 0.6-0.7 and a dip-like feature at phase 0.2-0.4. These features were probably persistently present in all available observations between 2001 and 2003. A period analysis outside of the eclipses has confirmed that these variations have a period common to the orbital period, and are unlikely to be interpreted as superhumps. We suspect that the structure (probably in the accretion disk) fixed in the binary rotational frame is somehow responsible for this feature.

  8. An information measure for class discrimination [in remote sensing of crop observation]

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
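The measure itself is easy to state: for a two-class mixture p(x) = π f₁(x) + (1−π) f₂(x), the Fisher information for the mixing proportion is I(π) = ∫ (f₁ − f₂)² / p dx, and 1/(n I(π)) lower-bounds the variance of any unbiased estimate of π from n observations. A hedged numerical sketch (Gaussian class densities are used only for illustration; the point of the measure is that no such form needs to be assumed):

```python
import numpy as np

def fisher_info_mixing(f1, f2, pi, x):
    """I(pi) = integral of (f1 - f2)^2 / (pi*f1 + (1-pi)*f2) dx,
    evaluated by the trapezoidal rule on the grid x."""
    p = pi * f1 + (1.0 - pi) * f2
    integrand = np.where(p > 0, (f1 - f2) ** 2 / np.where(p > 0, p, 1.0), 0.0)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))

gauss = lambda x, m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi)
x = np.linspace(-25.0, 25.0, 100001)

# Essentially disjoint classes: I(pi) -> 1/(pi*(1-pi)), i.e. 4 at pi = 0.5.
I_separated = fisher_info_mixing(gauss(x, -10.0), gauss(x, 10.0), 0.5, x)
# Heavily overlapping classes carry much less information about pi.
I_overlap = fisher_info_mixing(gauss(x, -0.5), gauss(x, 0.5), 0.5, x)
```

The separated case saturates the 1/(π(1−π)) ceiling, while the overlapping case falls well below it, which is exactly the sense in which the measure quantifies feature separability.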

  9. Comparison of naïve Bayes and logistic regression for computer-aided diagnosis of breast masses using ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Cary, Theodore W.; Cwanger, Alyssa; Venkatesh, Santosh S.; Conant, Emily F.; Sehgal, Chandra M.

    2012-03-01

This study compares the performance of two proven but very different machine learners, Naïve Bayes and logistic regression, for differentiating malignant and benign breast masses using ultrasound imaging. Ultrasound images of 266 masses were analyzed quantitatively for shape, echogenicity, margin characteristics, and texture features. These features along with patient age, race, and mammographic BI-RADS category were used to train Naïve Bayes and logistic regression classifiers to diagnose lesions as malignant or benign. ROC analysis was performed using all of the features and using only a subset that maximized information gain. Performance was determined by the area under the ROC curve, Az, obtained from leave-one-out cross validation. Naïve Bayes showed significant variation (Az 0.733 +/- 0.035 to 0.840 +/- 0.029, P < 0.002) with the choice of features, but the performance of logistic regression was relatively unchanged under feature selection (Az 0.839 +/- 0.029 to 0.859 +/- 0.028, P = 0.605). Out of 34 features, a subset of 6 gave the highest information gain: brightness difference, margin sharpness, depth-to-width, mammographic BI-RADS, age, and race. The probabilities of malignancy determined by Naïve Bayes and logistic regression after feature selection showed significant correlation (R^2 = 0.87, P < 0.0001). The diagnostic performance of Naïve Bayes and logistic regression can be comparable, but logistic regression is more robust. Since probability of malignancy cannot be measured directly, high correlation between the probabilities derived from two basic but dissimilar models increases confidence in the predictive power of machine learning models for characterizing solid breast masses on ultrasound.

  10. CT imaging of malignant metastatic hemangiopericytoma of the parotid gland with histopathological correlation

    PubMed Central

    Khoo, James B.; Sittampalam, Kesavan; Chee, Soo K.

    2008-01-01

We report an extremely rare case of malignant hemangiopericytoma (HPC) of the parotid gland and its metastatic spread to lung, liver, and skeletal muscle. Computed tomography (CT) imaging, histopathological and immunohistochemical methods were employed to study the features of malignant HPC and its metastases. CT imaging was helpful to determine the exact location, involvement of adjacent structures and vascularity, as well as evaluating pulmonary, hepatic, peritoneal, and muscular metastases. Immunohistochemical and histopathological features of the primary tumor as well as the metastases were consistent with the diagnosis of malignant HPC. PMID:18940737

  11. Apollo 15 clastic materials and their relationship to local geologic features

    NASA Technical Reports Server (NTRS)

    Fruchter, J. S.; Stoeser, J. W.; Lindstrom, M. M.; Goles, G. G.

    1973-01-01

    Ninety sub-samples of Apollo 15 materials have been analyzed by instrumental neutron activation analysis techniques for as many as 21 elements. Soil and soil breccia compositions show considerable variation from station to station although at any given station the soils and soil breccias were compositionally very similar to one another. Mixing model calculations show that the station-to-station variations can be related to important local geologic features. These features include the Apennine Front, Hadley Rille and the ray from the craters Aristillus or Autolycus. Compositional similarities between soils and soil breccias at the Apollo 15 site indicate that the breccias and soils are related in some fundamental way, although the exact nature of this relationship is not yet fully understood.
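The "mixing model calculations" referred to are, in essence, a constrained least-squares problem: express each soil composition as a proportional mixture of candidate end-member compositions. A hedged sketch (synthetic end-members, not the Apollo 15 data), enforcing the proportions-sum-to-one constraint softly by augmenting the system with a heavily weighted extra equation:

```python
import numpy as np

def mixing_proportions(endmembers, sample, weight=1e3):
    """Least-squares mixing proportions f with sum(f) = 1 enforced
    via a heavily weighted extra row. endmembers is an
    (n_elements x n_components) matrix of end-member compositions."""
    E = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    c = np.append(sample, weight)
    f, *_ = np.linalg.lstsq(E, c, rcond=None)
    return f

# synthetic example: 3 end-members, 6 element abundances each
rng = np.random.default_rng(42)
E = rng.uniform(1.0, 100.0, size=(6, 3))
f_true = np.array([0.5, 0.3, 0.2])
c = E @ f_true                      # a perfect mixture of the end-members
f_est = mixing_proportions(E, c)
```

Real applications would add non-negativity constraints and measurement-error weighting; this sketch only shows the core linear-algebra step behind relating station-to-station variations to mixtures of local geologic sources.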

  12. Sorted Index Numbers for Privacy Preserving Face Recognition

    NASA Astrophysics Data System (ADS)

    Wang, Yongjin; Hatzinakos, Dimitrios

    2009-12-01

    This paper presents a novel approach for changeable and privacy preserving face recognition. We first introduce a new method of biometric matching using the sorted index numbers (SINs) of feature vectors. Since it is impossible to recover any of the exact values of the original features, the transformation from original features to the SIN vectors is noninvertible. To address the irrevocable nature of biometric signals whilst obtaining stronger privacy protection, a random projection-based method is employed in conjunction with the SIN approach to generate changeable and privacy preserving biometric templates. The effectiveness of the proposed method is demonstrated on a large generic data set, which contains images from several well-known face databases. Extensive experimentation shows that the proposed solution may improve the recognition accuracy.
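The SIN transform itself is simply the sorting permutation of the feature vector, which makes both its noninvertibility and a matching rule easy to illustrate. A minimal sketch (the rank-agreement score below is an illustrative assumption, not the paper's exact matcher):

```python
import numpy as np

def sin_transform(features):
    """Sorted index numbers: the permutation that sorts the feature
    vector ascending. Only the ordering survives; exact values do not."""
    return np.argsort(features, kind="stable")

def sin_match_score(a, b):
    """Fraction of positions where the two SIN vectors agree."""
    return float(np.mean(sin_transform(a) == sin_transform(b)))

sin_a = sin_transform([0.3, 0.1, 0.5])   # the ordering 1, 0, 2
# noninvertibility: a very different vector yields the same SIN,
# so the original feature values cannot be recovered
sin_b = sin_transform([40, 2, 900])
```

Because infinitely many feature vectors share one SIN vector, storing only the SINs (optionally after a random projection, as in the paper) protects the underlying biometric values.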

  13. Exact and approximate solutions for transient squeezing flow

    NASA Astrophysics Data System (ADS)

    Lang, Ji; Santhanam, Sridhar; Wu, Qianhong

    2017-10-01

    In this paper, we report two novel theoretical approaches to examine a fast-developing flow in a thin fluid gap, which is widely observed in industrial applications and biological systems. The problem is featured by a very small Reynolds number and Strouhal number, making the fluid convective acceleration negligible, while its local acceleration is not. We have developed an exact solution for this problem which shows that the flow starts with an inviscid limit when the viscous effect has no time to appear and is followed by a subsequent developing flow, in which the viscous effect continues to penetrate into the entire fluid gap. An approximate solution is also developed using a boundary layer integral method. This solution precisely captures the general behavior of the transient fluid flow process and agrees very well with the exact solution. We also performed numerical simulation using Ansys-CFX. Excellent agreement between the analytical and the numerical solutions is obtained, indicating the validity of the analytical approaches. The study presented herein fills the gap in the literature and will have a broad impact on industrial and biomedical applications.

  14. Discontinuous functional for linear-response time-dependent density-functional theory: The exact-exchange kernel and approximate forms

    NASA Astrophysics Data System (ADS)

    Hellgren, Maria; Gross, E. K. U.

    2013-11-01

We present a detailed study of the exact-exchange (EXX) kernel of time-dependent density-functional theory with an emphasis on its discontinuity at integer particle numbers. It was recently found that this exact property leads to sharp peaks and step features in the kernel that diverge in the dissociation limit of diatomic systems [Hellgren and Gross, Phys. Rev. A 85, 022514 (2012)]. To further analyze the discontinuity of the kernel, we here make use of two different approximations to the EXX kernel: the Petersilka-Gossmann-Gross (PGG) approximation and a common energy denominator approximation (CEDA). It is demonstrated that whereas the PGG approximation neglects the discontinuity, the CEDA includes it explicitly. By studying model molecular systems it is shown that the so-called field-counteracting effect in the density-functional description of molecular chains can be viewed in terms of the discontinuity of the static kernel. The role of the frequency dependence is also investigated, highlighting its importance for long-range charge-transfer excitations as well as inner-shell excitations.

  15. Probability-based classifications for spatially characterizing the water temperatures and discharge rates of hot springs in the Tatun Volcanic Region, Taiwan.

    PubMed

    Jang, Cheng-Shin

    2015-05-01

    Accurately classifying the spatial features of the water temperatures and discharge rates of hot springs is crucial for environmental resources use and management. This study spatially characterized classifications of the water temperatures and discharge rates of hot springs in the Tatun Volcanic Region of Northern Taiwan by using indicator kriging (IK). The water temperatures and discharge rates of the springs were first assigned to high, moderate, and low categories according to the two thresholds of the proposed spring classification criteria. IK was then used to model the occurrence probabilities of the water temperatures and discharge rates of the springs and probabilistically determine their categories. Finally, nine combinations were acquired from the probability-based classifications for the spatial features of the water temperatures and discharge rates of the springs. Moreover, various combinations of spring water features were examined according to seven subzones of spring use in the study region. The research results reveal that probability-based classifications using IK provide practicable insights related to propagating the uncertainty of classifications according to the spatial features of the water temperatures and discharge rates of the springs. The springs in the Beitou (BT), Xingyi Road (XYR), Zhongshanlou (ZSL), and Lengshuikeng (LSK) subzones are suitable for supplying tourism hotels with a sufficient quantity of spring water because they have high or moderate discharge rates. Furthermore, natural hot springs in riverbeds and valleys should be developed in the Dingbeitou (DBT), ZSL, Xiayoukeng (XYK), and Macao (MC) subzones because of low discharge rates and low or moderate water temperatures.

  16. Joint probabilities and quantum cognition

    NASA Astrophysics Data System (ADS)

    de Barros, J. Acacio

    2012-12-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
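A standard minimal example of such contextual variables: three ±1-valued variables whose pairwise correlations are all −1. Each correlation forces that pair to disagree with probability one, yet no single outcome of three ±1 values can make all three pairs disagree, so no joint distribution exists. The exhaustive check is a few lines of Python (an illustrative example, not the paper's neural-oscillator construction):

```python
from itertools import product

# Outcomes (x, y, z) in {-1, +1}^3 with all three pairs unequal.
# E[XY] = E[YZ] = E[XZ] = -1 would require the joint distribution
# to be supported entirely on such outcomes.
feasible = [(x, y, z) for x, y, z in product([-1, 1], repeat=3)
            if x != y and y != z and x != z]
# feasible is empty: the required support does not exist,
# so no joint probability distribution reproduces these correlations
```

This is the same structural obstruction (pairwise-consistent marginals with no global joint) that the paper realizes with coupled neural oscillators.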

  17. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state according to a probability that is proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
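For context, the baseline such algorithms improve on is Gillespie's direct method: sample an exponential waiting time from the total propensity, then fire a reaction chosen with probability proportional to its propensity. A hedged sketch for the simplest possible network, the single decay reaction A → ∅ (the paper's composition-rejection machinery is not reproduced here):

```python
import math
import random

def ssa_decay(n0, c, t_end, rng):
    """Direct-method SSA for the single reaction A -> 0 with rate
    constant c; returns the copy number of A at time t_end."""
    t, n = 0.0, n0
    while n > 0:
        a_total = c * n                          # total propensity
        tau = -math.log(rng.random()) / a_total  # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        n -= 1              # the only reaction: one A molecule decays
    return n

rng = random.Random(0)
runs = [ssa_decay(100, 1.0, 1.0, rng) for _ in range(2000)]
mean_n = sum(runs) / len(runs)   # should approach 100 * exp(-1)
```

With many reaction channels, this direct search over propensities costs O(M) per event; the paper's contribution is precisely removing that dependence on the number of reactions.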

  18. Delay chemical master equation: direct and closed-form solutions

    PubMed Central

    Leier, Andre; Marquez-Lago, Tatiana T.

    2015-01-01

    The stochastic simulation algorithm (SSA) describes the time evolution of a discrete nonlinear Markov process. This stochastic process has a probability density function that is the solution of a differential equation, commonly known as the chemical master equation (CME) or forward-Kolmogorov equation. In the same way that the CME gives rise to the SSA, and trajectories of the latter are exact with respect to the former, trajectories obtained from a delay SSA are exact representations of the underlying delay CME (DCME). However, in contrast to the CME, no closed-form solutions have so far been derived for any kind of DCME. In this paper, we describe for the first time direct and closed solutions of the DCME for simple reaction schemes, such as a single-delayed unimolecular reaction as well as chemical reactions for transcription and translation with delayed mRNA maturation. We also discuss the conditions that have to be met such that such solutions can be derived. PMID:26345616

  19. Delay chemical master equation: direct and closed-form solutions.

    PubMed

    Leier, Andre; Marquez-Lago, Tatiana T

    2015-07-08

    The stochastic simulation algorithm (SSA) describes the time evolution of a discrete nonlinear Markov process. This stochastic process has a probability density function that is the solution of a differential equation, commonly known as the chemical master equation (CME) or forward-Kolmogorov equation. In the same way that the CME gives rise to the SSA, and trajectories of the latter are exact with respect to the former, trajectories obtained from a delay SSA are exact representations of the underlying delay CME (DCME). However, in contrast to the CME, no closed-form solutions have so far been derived for any kind of DCME. In this paper, we describe for the first time direct and closed solutions of the DCME for simple reaction schemes, such as a single-delayed unimolecular reaction as well as chemical reactions for transcription and translation with delayed mRNA maturation. We also discuss the conditions that have to be met such that such solutions can be derived.

  20. A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.

    PubMed

    Consiglio, J D; Shan, G; Wilding, G E

    2014-01-01

    Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.

  1. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function, where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF, valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of the bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
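The local transformation in question is the pointwise quadratic map Φ = g + f_NL (g² − ⟨g²⟩) applied to a Gaussian field g, which is what makes an exact joint PDF tractable. A hedged Monte Carlo sketch of the induced skewness (the amplitude below is a toy value for illustration, not a physical f_NL); at leading order the skewness is ≈ 6 f_NL:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal(1_000_000)   # Gaussian field samples, <g^2> = 1

f_nl = 0.01                          # toy amplitude, illustration only
phi = g + f_nl * (g**2 - 1.0)        # local-type non-Gaussian field

skewness = np.mean((phi - phi.mean()) ** 3) / np.std(phi) ** 3
# leading order: skewness ~ 6 * f_nl
```

The exact PDF of Φ follows from the change of variables of the Gaussian density under this quadratic map, which is the starting point of the paper's multi-variate construction.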

  2. Efficient cooperative compressive spectrum sensing by identifying multi-candidate and exploiting deterministic matrix

    NASA Astrophysics Data System (ADS)

    Li, Jia; Wang, Qiang; Yan, Wenjie; Shen, Yi

    2015-12-01

Cooperative spectrum sensing exploits spatial diversity to improve the detection of occupied channels in cognitive radio networks (CRNs). Cooperative compressive spectrum sensing (CCSS), utilizing the sparsity of channel occupancy, further improves efficiency by reducing the number of reports without degrading detection performance. In this paper, we first propose multi-candidate orthogonal matrix matching pursuit (MOMMP) algorithms to efficiently and effectively detect occupied channels at the fusion center (FC), where multi-candidate identification and orthogonal projection are utilized to respectively reduce the number of required iterations and improve the probability of exact identification. Second, two common but different approaches, based on a threshold and on a Gaussian distribution, are introduced to realize the multi-candidate identification. Moreover, to improve the detection accuracy and energy efficiency, we propose a matrix construction based on shrinkage and gradient descent (MCSGD) algorithm to provide a deterministic filter coefficient matrix of low t-average coherence. Finally, several numerical simulations validate that our proposals provide satisfactory performance with a higher probability of detection, a lower probability of false alarm and less detection time.
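The greedy core of such recovery algorithms, orthogonal matching pursuit, is compact: at each iteration pick the dictionary column most correlated with the residual, then re-fit all selected columns by least squares. The sketch below is plain single-candidate OMP (the paper's MOMMP adds multi-candidate identification per iteration, not reproduced here); the orthonormal dictionary is an assumption chosen so that exact recovery is guaranteed:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the column of A
    most correlated with the residual, then re-fit by least squares."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# orthonormal dictionary: QR factor of a random square matrix
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x_true = np.zeros(16)
x_true[[2, 7, 11]] = [3.0, -2.0, 1.5]   # 3-sparse occupancy vector
y = Q @ x_true                          # compressed measurements
x_hat = omp(Q, y, 3)
```

With an orthonormal dictionary the residual correlations equal the remaining true coefficients, so the support is recovered exactly; overcomplete dictionaries need coherence conditions, which is where constructions like the paper's MCSGD matrix come in.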

  3. Hurdles and sorting by inversions: combinatorial, statistical, and experimental results.

    PubMed

    Swenson, Krister M; Lin, Yu; Rajan, Vaibhav; Moret, Bernard M E

    2009-10-01

As data about genomic architecture accumulates, genomic rearrangements have attracted increasing attention. One of the main rearrangement mechanisms, inversions (also called reversals), was characterized by Hannenhalli and Pevzner, and this characterization was in turn extended by various authors. The characterization relies on the concepts of breakpoints, cycles, and obstructions colorfully named hurdles and fortresses. In this paper, we study the probability of generating a hurdle in the process of sorting a permutation if one does not take special precautions to avoid them (as in a randomized algorithm, for instance). To do this we revisit and extend the work of Caprara and of Bergeron by providing simple and exact characterizations of the probability of encountering a hurdle in a random permutation. Using similar methods we provide the first asymptotically tight analysis of the probability that a fortress exists in a random permutation. Finally, we study other aspects of hurdles, both analytically and through experiments: when are they created in a sequence of sorting inversions, how much later are they detected, and how much work may need to be undone to return to a sorting sequence?

  4. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^{-β} with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^{1/α} (1 < q < 2), we show that for 0 < α < 1 the response functions f_{α,β}(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
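In the frequency domain the Havriliak-Negami pattern is trivial to evaluate, and its classic special cases fall out immediately: β = 1 gives the Cole-Cole form, α = 1 the Cole-Davidson form, and α = β = 1 the Debye relaxation 1/(1 + iωτ0). A quick numerical check:

```python
import numpy as np

def havriliak_negami(omega, tau0, alpha, beta):
    """HN frequency-domain pattern 1 / (1 + (i*omega*tau0)^alpha)^beta."""
    return 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta

w = np.logspace(-3, 3, 61)
debye = havriliak_negami(w, 1.0, 1.0, 1.0)          # alpha = beta = 1
cole_cole = havriliak_negami(w, 1.0, 0.7, 1.0)      # beta = 1
cole_davidson = havriliak_negami(w, 1.0, 1.0, 0.4)  # alpha = 1
```

The hard part addressed by the paper is inverting this pattern to the time domain, where the response and relaxation functions become finite sums of generalized hypergeometric functions rather than elementary expressions.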

  5. Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference.

    PubMed

    Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing

    2016-01-01

    In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications.
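The structure of such a result can be sanity-checked by simulation. The hedged sketch below replaces the κ-μ shadowed branches with the simplest special case, i.i.d. unit-mean Rayleigh fading and no interference, where the MRC-combined SNR is a sum of L exponentials and the outage probability has the textbook closed form P_out = 1 − e^(−x) Σ_{k<L} x^k/k!:

```python
import math
import random

def outage_mrc_rayleigh(L, x_th, n_trials, rng):
    """Monte Carlo outage probability of L-branch MRC over i.i.d.
    unit-mean Rayleigh fading: combined SNR = sum of L exponentials."""
    hits = 0
    for _ in range(n_trials):
        snr = sum(-math.log(rng.random()) for _ in range(L))
        hits += snr < x_th
    return hits / n_trials

def outage_closed_form(L, x_th):
    """Gamma(L, 1) CDF: 1 - exp(-x) * sum_{k=0}^{L-1} x^k / k!"""
    return 1.0 - math.exp(-x_th) * sum(x_th ** k / math.factorial(k)
                                       for k in range(L))

rng = random.Random(7)
p_sim = outage_mrc_rayleigh(3, 1.0, 200_000, rng)
p_exact = outage_closed_form(3, 1.0)
```

The paper's contribution is the analogous closed form when the branches are κ-μ shadowed and the SINR denominator includes Rayleigh-faded interferers, which requires the power-series PDF and IG-MGF machinery described in the abstract.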

  6. Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference

    PubMed Central

    Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing

    2016-01-01

    In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications. PMID:27851817

  7. Non-renewal statistics for electron transport in a molecular junction with electron-vibration interaction

    NASA Astrophysics Data System (ADS)

    Kosov, Daniel S.

    2017-09-01

    Quantum transport of electrons through a molecule is a series of individual electron tunneling events separated by stochastic waiting time intervals. We study the emergence of temporal correlations between successive waiting times for the electron transport in a vibrating molecular junction. Using the master equation approach, we compute the joint probability distribution for waiting times of two successive tunneling events. We show that the probability distribution is completely reset after each tunneling event if molecular vibrations are thermally equilibrated. If we treat vibrational dynamics exactly without imposing the equilibration constraint, the statistics of electron tunneling events become non-renewal. Non-renewal statistics between two waiting times τ1 and τ2 means that the density matrix of the molecule is not fully renewed after time τ1 and the probability of observing waiting time τ2 for the second electron transfer depends on the previous electron waiting time τ1. The strong electron-vibration coupling is required for the emergence of the non-renewal statistics. We show that in the Franck-Condon blockade regime, extremely rare tunneling events become positively correlated.
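The renewal baseline that the paper contrasts against is easy to simulate. For a single level with Markovian in/out rates and no vibrational memory, the waiting time between successive out-tunneling events is a sum of two independent exponentials, so successive waiting times are uncorrelated. A hedged sketch (toy rates, not the paper's vibrating-junction model):

```python
import math
import random

def waiting_times(gamma_in, gamma_out, n_events, rng):
    """Waiting times between successive out-tunneling events for a
    single resonant level: empty -> occupied at rate gamma_in, then
    occupied -> empty at rate gamma_out (Markovian, renewal)."""
    times = []
    for _ in range(n_events):
        t = -math.log(rng.random()) / gamma_in
        t += -math.log(rng.random()) / gamma_out
        times.append(t)
    return times

rng = random.Random(3)
w = waiting_times(1.0, 2.0, 100_000, rng)
m = sum(w) / len(w)                       # mean = 1/gamma_in + 1/gamma_out
cov = sum((a - m) * (b - m) for a, b in zip(w[:-1], w[1:])) / (len(w) - 1)
var = sum((a - m) ** 2 for a in w) / len(w)
corr = cov / var
# corr is statistically indistinguishable from zero: renewal statistics
```

In the paper's model, strong electron-vibration coupling leaves the molecular density matrix unrelaxed between events, and the analogous correlation becomes nonzero (positive in the Franck-Condon blockade regime).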

  8. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, decision-threshold generation, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate an optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to the existing approaches. PMID:25722717

  9. Role of natural and cultural features in residents' perceptions of rural character

    Treesearch

    Dori Pynnonen; Dennis Propst; Christine Vogt; Maureen McDonough

    2006-01-01

    Rural landscapes are rapidly changing as more families migrate in from cities and suburbs, yet there have been few systematic attempts to have residents describe exactly what rural character means to them. As part of a USDA Forest Service research program examining landscape change (Potts et al. 2004), this study focused on the landscape and residents of six...

  10. Flash Platform Examination

    DTIC Science & Technology

    2011-03-01

    than would be performed in software” [108]. Tinic Uro, one of the Flash Player’s engineers, further clarifies exactly what Flash Player 10 hardware... www.adobe.com/products/flashplayer/features/ (Access date: 28 Sep 2009). [109] Uro, T. What Does GPU Acceleration Mean? (online), http... [133] Shorten, A. (2009), Design to Development: Flash Catalyst to Flash Builder, In Proceedings of Adobe Max 2009, Los Angeles, CA.

  11. Stars and (furry) black holes in Lorentz breaking massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comelli, D.; Nesti, F.; Pilo, L.

    We study the exact spherically symmetric solutions in a class of Lorentz-breaking massive gravity theories, using the effective-theory approach where the graviton mass is generated by the interaction with a suitable set of Stueckelberg fields. We find explicitly the exact black-hole solutions which generalize the familiar Schwarzschild one and show a nonanalytic hair in the form of a powerlike term r^γ. For realistic self-gravitating bodies, we find interesting features linked to the effective violation of the Gauss law: (i) the total gravitational mass appearing in the standard 1/r term gets a multiplicative renormalization proportional to the area of the body itself; (ii) the magnitude of the powerlike hairy correction is also linked to the size of the body. The novel features can be ascribed to the presence of the Goldstone fluid turned on by matter inside the body, whose equation of state approaches that of dark energy near the center. The Goldstone fluid also changes the matter equilibrium pressure, leading to an upper limit for the graviton mass, m ≲ 10^(-28)–10^(-29) eV, derived from the largest stable gravitational bound states in the Universe.

  12. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  13. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    PubMed

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near-neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as this was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sublinear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
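    The pigeonhole argument behind multi-index hashing can be sketched in a few lines: if a database code lies within Hamming distance r of the query, then after splitting both codes into m disjoint substrings, at least one substring pair differs by at most ⌊r/m⌋. The following Python sketch uses toy parameters (8-bit codes, m = 2 tables) chosen for illustration; it is not the authors' implementation.

```python
from itertools import combinations

def substrings(code, m, sub_bits):
    # split an integer binary code into m disjoint sub_bits-wide substrings
    mask = (1 << sub_bits) - 1
    return [(code >> (i * sub_bits)) & mask for i in range(m)]

def build_tables(codes, m, sub_bits):
    # one hash table per substring position
    tables = [dict() for _ in range(m)]
    for idx, c in enumerate(codes):
        for i, s in enumerate(substrings(c, m, sub_bits)):
            tables[i].setdefault(s, []).append(idx)
    return tables

def neighbors_within(s, radius, sub_bits):
    # all substring values within Hamming distance `radius` of s
    out = [s]
    for r in range(1, radius + 1):
        for bits in combinations(range(sub_bits), r):
            flip = 0
            for b in bits:
                flip |= 1 << b
            out.append(s ^ flip)
    return out

def search(query, codes, tables, m, sub_bits, r):
    # pigeonhole: a code within distance r of the query must match one
    # of the query's substrings within distance r // m
    cand = set()
    for i, s in enumerate(substrings(query, m, sub_bits)):
        for v in neighbors_within(s, r // m, sub_bits):
            cand.update(tables[i].get(v, []))
    # verify candidates against the full codes (exact, not approximate)
    return [idx for idx in cand
            if bin(codes[idx] ^ query).count('1') <= r]

codes = [0b00000000, 0b00000011, 0b11110000]
tables = build_tables(codes, m=2, sub_bits=4)
matches = search(0b00000000, codes, tables, m=2, sub_bits=4, r=2)
```

    Because each table is probed only in a small Hamming ball of substring values, far fewer candidates are verified than in a linear scan, while the final verification step keeps the search exact.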

  14. Controlling rogue waves in inhomogeneous Bose-Einstein condensates.

    PubMed

    Loomba, Shally; Kaur, Harleen; Gupta, Rama; Kumar, C N; Raju, Thokala Soloman

    2014-05-01

    We present the exact rogue wave solutions of the quasi-one-dimensional inhomogeneous Gross-Pitaevskii equation by using similarity transformation. Then, by employing the exact analytical solutions we have studied the controllable behavior of rogue waves in the Bose-Einstein condensates context for the experimentally relevant systems. Additionally, we have also investigated the nonlinear tunneling of rogue waves through a conventional hyperbolic barrier and periodic barrier. We have found that, for the conventional nonlinearity barrier case, rogue waves are localized in space and time and get amplified near the barrier, while for the dispersion barrier case rogue waves are localized in space and propagating in time and their amplitude is reduced at the barrier location. In the case of the periodic barrier, the interesting dynamical features of rogue waves are obtained and analyzed analytically.

  15. Aging and coarsening in isolated quantum systems after a quench: Exact results for the quantum O(N) model with N → ∞.

    PubMed

    Maraga, Anna; Chiocchetta, Alessio; Mitra, Aditi; Gambassi, Andrea

    2015-10-01

    The nonequilibrium dynamics of an isolated quantum system after a sudden quench to a dynamical critical point is expected to be characterized by scaling and universal exponents due to the absence of time scales. We explore these features for a quench of the parameters of a Hamiltonian with O(N) symmetry, starting from a ground state in the disordered phase. In the limit of infinite N, the exponents and scaling forms of the relevant two-time correlation functions can be calculated exactly. Our analytical predictions are confirmed by the numerical solution of the corresponding equations. Moreover, we find that the same scaling functions, yet with different exponents, also describe the coarsening dynamics for quenches below the dynamical critical point.

  16. Mechanics of gravitational spreading of steep-sided ridges («sackung»)

    USGS Publications Warehouse

    Savage, W.Z.; Varnes, D.J.

    1987-01-01

    Large-scale gravitational spreading of steep-sided ridges, characterized by linear fissures, trenches, and uphill-facing scarps high on the sides and tops of ridges, is known worldwide. Such spreading, termed sackung, is commonly attributed to pervasive plastic deformation of a rock mass and is here analyzed as such. Beginning with a previously developed exact elastic solution for gravity-induced stresses in a symmetric ridge, stresses calculated from the exact solution are used in the Coulomb failure criterion to determine the extent of ridge failure under self-weight. Finally, when the regions of failure are established, a plastic flow solution is applied to predict the location of, and sense of movement on, uphill-facing scarps near ridge crests and other features common in sackung. © 1987 International Association of Engineering Geology.

  17. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both appearance features from the CBCTs and context features from the initial probability maps to train the first layer of a random forest classifier that can select discriminative features for segmentation. Based on the first layer of the trained classifier, the probability maps are updated and employed to train the next layer of the random forest classifier. By iteratively training subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated against manually labeled ground truth. The average Dice ratios for mandible and maxilla obtained by the authors’ method were 0.94 and 0.91, respectively, significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
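    The auto-context idea at the core of this method — retraining each layer on appearance features concatenated with the previous layer's probability output — can be sketched with scikit-learn on synthetic data. The arrays below are toy stand-ins, not the authors' CBCT pipeline: `X` plays the role of per-voxel appearance features and `prior` mimics the majority-voting probability map.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# toy stand-ins: rows are voxels, columns are appearance features;
# `prior` mimics the majority-voting probability map
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
prior = rng.uniform(size=(200, 1))

prob = prior
classifiers = []
for layer in range(3):                         # sequence of context layers
    feats = np.hstack([X, prob])               # appearance + context features
    clf = RandomForestClassifier(n_estimators=50, random_state=layer)
    clf.fit(feats, y)
    prob = clf.predict_proba(feats)[:, [1]]    # updated probability map
    classifiers.append(clf)
```

    At test time the same chain would be applied in order, each classifier consuming the probability map produced by its predecessor.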

  18. Heart sounds analysis using probability assessment.

    PubMed

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
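    The front end described above (amplitude envelopes of the signal restricted to several frequency bands) can be sketched with a standard band-pass filter followed by a Hilbert-transform envelope. The filter design, sampling rate, and test signal here are illustrative assumptions, not the authors' exact processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(x, fs, lo, hi, order=4):
    # amplitude envelope of x restricted to the band [lo, hi] Hz
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return np.abs(hilbert(filtfilt(b, a, x)))

fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
# synthetic stand-in for a heart-sound recording: 50 Hz and 300 Hz tones
x = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 300 * t)

# the five analysis bands quoted in the abstract
bands = [(15, 90), (55, 150), (100, 250), (200, 450), (400, 800)]
envs = [band_envelope(x, fs, lo, hi) for lo, hi in bands]
```

    Averaging such envelopes around detected S1/S2 positions would yield the averaged shapes from which the statistical and symmetry features are extracted.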

  19. Image processing and machine learning for fully automated probabilistic evaluation of medical images.

    PubMed

    Sajn, Luka; Kukar, Matjaž

    2011-12-01

    The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction, and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level that represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is the improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on the cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  20. Ferrofluid patterns in a radial magnetic field: linear stability, nonlinear dynamics, and exact solutions.

    PubMed

    Oliveira, Rafael M; Miranda, José A; Leandro, Eduardo S G

    2008-01-01

    The response of a ferrofluid droplet to a radial magnetic field is investigated, when the droplet is confined in a Hele-Shaw cell. We study how the stability properties of the interface and the shape of the emerging patterns react to the action of the magnetic field. At early linear stages, it is found that the radial field is destabilizing and determines the growth of fingering structures at the interface. In the weakly nonlinear regime, we have verified that the magnetic field favors the formation of peaked patterned structures that tend to become sharper and sharper as the magnitude of the magnetic effects is increased. A more detailed account of the pattern morphology is provided by the determination of nontrivial exact stationary solutions for the problem with finite surface tension. These solutions are obtained analytically and reveal the development of interesting polygon-shaped and starfishlike patterns. For sufficiently large applied fields or magnetic susceptibilities, pinch-off phenomena are detected, tending to occur near the fingertips. We have found that the morphological features obtained from the exact solutions are consistent with our linear and weakly nonlinear predictions. By contrasting the exact solutions for ferrofluids under radial field with those obtained for rotating Hele-Shaw flows with ordinary nonmagnetic fluids, we deduce that they coincide in the limit of very small susceptibilities.

  1. Addition by subtraction in coupled-cluster theory: a reconsideration of the CC and CI interface and the nCC hierarchy.

    PubMed

    Bartlett, Rodney J; Musiał, Monika

    2006-11-28

    The nCC hierarchy of coupled-cluster approximations, where n guarantees exactness for n electrons and all products of n electrons, is derived and applied to several illustrative problems. The condition of exactness for n=2 defines nCCSD=2CC, with nCCSDT=3CC and nCCSDTQ=4CC being exact for three and four electrons. To achieve this, the minimum number of diagrams is evaluated, which is less than in the corresponding CC model. For all practical purposes, nCC is also the proper definition of a size-extensive CI. 2CC is also an orbitally invariant coupled electron pair approximation. The numerical results of nCC are close to those for the full CC variant, and in some cases are closer to the full CI reference result. As 2CC is exact for separated electron pairs, it is the natural zeroth-order approximation for the correlation problem in molecules, with other effects introduced as these units start to interact. The nCC hierarchy of approximations has all the attractive features of CC, including its size extensivity, orbital invariance, and orbital insensitivity, but in a conceptually appealing form suited to bond breaking, while being computationally less demanding. Excited states from the equation of motion (EOM-2CC) are also reported, which show results frequently approaching those of EOM-CCSDT.

  2. Exactly Solvable Models for Topological Phases of Matter

    NASA Astrophysics Data System (ADS)

    Tarantino, Nicolas Alessandro

    Topological systems are characterized by some collection of features which remain unchanged under deformations of the Hamiltonian that leave the band gap open. The earliest examples of these were free fermion systems, allowing us to study the band structure to determine if a candidate material supports topological features. However, we can also ask the reverse question: given a band gap, what topological features can be engineered? This classification problem proved to have numerous answers depending on which extra assumptions we allow, producing many candidate phases. While free fermion topological features could be classified by their band structures (culminating in the 10-fold way), strongly interacting systems defied this approach, and so classification outstripped the construction of even the most elementary Hamiltonians, leaving us with a number of phases which could exist, but do not have a single strongly interacting representative. The purpose of this thesis is to resolve this in certain cases by constructing commuting projector models (CPMs), a class of exactly solvable models, for two types of topological phases, known as symmetry enriched topological (SET) order and fermionic symmetry protected topological (SPT) phases, respectively. After introducing the background and history of commuting projector models, we will move on to the details of how these Hamiltonians are built. In the first case, we construct a CPM for a SET, showing how to encode the necessary group cohomology data into a lattice model. In the second, we construct a CPM for a fermionic SPT, and find that we must include a combinatorial representation of a spin structure to make the model consistent. While these two projects were independent, they are linked thematically by a technique known as decoration, where extra data is encoded onto simple models to generate exotic phases.

  3. Craters Near Nilokeras Scopulus

    NASA Image and Video Library

    2015-03-04

    This image from NASA's Mars Reconnaissance Orbiter of craters near Nilokeras Scopulus shows two pits partially filled with lumpy material, probably trapped dust that blew in from the atmosphere. The pits themselves resemble impact craters, but they are part of a chain of similar features aligned with nearby faults, so they could be collapse features instead. Note also the tracks left by rolling boulders at the bottom of the craters. Nilokeras Scopulus is the name of the cliff, about 756 kilometers long, in the northern hemisphere of Mars where these craters are located. It was named for an albedo (brightness) feature mapped by astronomer E. M. Antoniadi in 1930. http://photojournal.jpl.nasa.gov/catalog/PIA19304

  4. Daniel Goodman’s empirical approach to Bayesian statistics

    USGS Publications Warehouse

    Gerrodette, Tim; Ward, Eric; Taylor, Rebecca L.; Schwarz, Lisa K.; Eguchi, Tomoharu; Wade, Paul; Himes Boor, Gina

    2016-01-01

    Bayesian statistics, in contrast to classical statistics, uses probability to represent uncertainty about the state of knowledge. Bayesian statistics has often been associated with the idea that knowledge is subjective and that a probability distribution represents a personal degree of belief. Dr. Daniel Goodman considered this viewpoint problematic for issues of public policy. He sought to ground his Bayesian approach in data, and advocated the construction of a prior as an empirical histogram of “similar” cases. In this way, the posterior distribution that results from a Bayesian analysis combined comparable previous data with case-specific current data, using Bayes’ formula. Goodman championed such a data-based approach, but he acknowledged that it was difficult in practice. If based on a true representation of our knowledge and uncertainty, Goodman argued that risk assessment and decision-making could be an exact science, despite the uncertainties. In his view, Bayesian statistics is a critical component of this science because a Bayesian analysis produces the probabilities of future outcomes. Indeed, Goodman maintained that the Bayesian machinery, following the rules of conditional probability, offered the best legitimate inference from available data. We give an example of an informative prior in a recent study of Steller sea lion spatial use patterns in Alaska.
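    Goodman's recipe — an empirical histogram of "similar" previous cases as the prior, combined with case-specific data via Bayes' formula — can be sketched numerically. The survival rates and counts below are invented purely for illustration.

```python
import numpy as np
from scipy.stats import binom

# survival rates from "similar" previous cases form an empirical
# histogram prior (Goodman's data-based construction); values invented
previous_rates = np.array([0.60, 0.65, 0.70, 0.70, 0.75, 0.80])
hist, edges = np.histogram(previous_rates, bins=10, range=(0.0, 1.0))

grid = np.linspace(0.005, 0.995, 100)            # discretized parameter axis
prior = hist[np.digitize(grid, edges) - 1].astype(float)
prior /= prior.sum()

# case-specific current data: 14 survivors out of 20 animals
likelihood = binom.pmf(14, 20, grid)

# Bayes' formula on the grid: posterior ∝ prior × likelihood
posterior = prior * likelihood
posterior /= posterior.sum()
post_mean = float((grid * posterior).sum())
```

    The posterior combines the comparable previous data (through the histogram prior) with the current observations, exactly in the spirit of the data-based approach described above.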

  5. Primitive material surviving in chondrites - Mineral grains

    NASA Astrophysics Data System (ADS)

    Steele, Ian M.

    Besides chondrules and various kinds of polymineralic inclusions, carbonaceous chondrites commonly contain, embedded in their matrices, isolated grains of mafic silicates and metallic iron. Most of the silicate grains probably originated in chondrules, but some appear to predate chondrule formation and may have formed as individual grains in the solar nebula. If that was the case, their compositions suggest some departure from equilibrium condensation from a gas of solar composition. Metal-grain compositions are broadly suggestive of nebular formation, but the exact nature of the conditions in which they were formed remains problematical.

  6. Exact solutions for the selection-mutation equilibrium in the Crow-Kimura evolutionary model.

    PubMed

    Semenov, Yuri S; Novozhilov, Artem S

    2015-08-01

    We reformulate the eigenvalue problem for the selection-mutation equilibrium distribution in the case of a haploid asexually reproduced population in the form of an equation for an unknown probability generating function of this distribution. The special form of this equation in the infinite sequence limit allows us to obtain analytically the steady state distributions for a number of particular cases of the fitness landscape. The general approach is illustrated by examples; theoretical findings are compared with numerical calculations. Copyright © 2015. Published by Elsevier Inc.

  7. A binomial stochastic kinetic approach to the Michaelis-Menten mechanism

    NASA Astrophysics Data System (ADS)

    Lente, Gábor

    2013-05-01

    This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables of calculating probability values using binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
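    The principle described — solve the deterministic rate equations, then use the result as the guiding success probability of a binomial distribution — can be sketched as follows. The rate constants and molecule counts are hypothetical, and the simple substrate-depletion rate law stands in for the full Michaelis-Menten scheme.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.stats import binom

# deterministic Michaelis-Menten rate equation for substrate S:
#   dS/dt = -k2 * E0 * S / (Km + S)       (hypothetical constants)
k2, E0, Km, S0, n0 = 1.0, 0.1, 2.0, 5.0, 50

def rate(S, t):
    return -k2 * E0 * S / (Km + S)

t = np.linspace(0, 200, 201)
S = odeint(rate, S0, t).ravel()

# guiding variable: deterministic fraction of substrate converted,
# used as the success probability of a binomial distribution over
# the n0 initial substrate molecules
p_conv = 1 - S / S0
prob_k_products = binom.pmf(np.arange(n0 + 1)[:, None], n0, p_conv)
```

    Each column of `prob_k_products` is then an approximate probability distribution for the number of product molecules at the corresponding time point, obtained without any matrix exponentiation of the master equation.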

  8. Binary data corruption due to a Brownian agent

    NASA Astrophysics Data System (ADS)

    Newman, T. J.; Triampo, Wannapong

    1999-05-01

    We introduce a model of binary data corruption induced by a Brownian agent (active random walker) on a d-dimensional lattice. A continuum formulation allows the exact calculation of several quantities related to the density of corrupted bits ρ, for example, the mean of ρ and the density-density correlation function. Excellent agreement is found with the results from numerical simulations. We also calculate the probability distribution of ρ in d=1, which is found to be log normal, indicating that the system is governed by extreme fluctuations.
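    A minimal Monte Carlo sketch of a simplified variant of this model — a random walker on a one-dimensional ring that flips every bit it visits — shows how a density of corrupted bits builds up. The lattice size and step count are arbitrary choices, and the flip-on-every-visit rule is a simplification of the general corruption dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

def corrupted_density(L=200, steps=5000):
    # Brownian agent on a 1-d ring that flips every bit it visits
    bits = np.zeros(L, dtype=int)
    pos = L // 2
    for _ in range(steps):
        bits[pos] ^= 1                      # corrupt (toggle) the visited bit
        pos = (pos + rng.choice((-1, 1))) % L
    return bits.mean()                      # density of corrupted bits rho

rhos = [corrupted_density() for _ in range(20)]
```

    Collecting many such samples of ρ would allow an empirical check of the broad, extreme-fluctuation-dominated distribution found analytically in d=1.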

  9. A Stochastic Super-Exponential Growth Model for Population Dynamics

    NASA Astrophysics Data System (ADS)

    Avila, P.; Rekker, A.

    2010-11-01

    A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
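    Under the stated assumptions (growth law dN/dt = r N^p with p > 1 and multiplicative Stratonovich white noise on the growth rate), sample paths can be generated with the stochastic Heun predictor-corrector scheme, which converges to the Stratonovich solution. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def drift(x, r=1.0, p=1.5):
    # endogenous positive feedback: dN/dt grows like N**p with p > 1
    return r * x**p

def trajectory(n0=1.0, r=1.0, p=1.5, sigma=0.2, dt=1e-3, t_end=0.5):
    # multiplicative Gaussian white noise on the growth rate, integrated
    # with the Heun scheme (Stratonovich-consistent)
    n = n0
    for _ in range(int(t_end / dt)):
        dw = rng.normal(0.0, np.sqrt(dt))
        pred = n + drift(n, r, p) * dt + sigma * n**p * dw
        n += 0.5 * (drift(n, r, p) + drift(pred, r, p)) * dt \
             + 0.5 * sigma * (n**p + pred**p) * dw
    return n

finals = np.array([trajectory() for _ in range(50)])
```

    The empirical distribution of `finals` over many runs could be compared against the exact conditional probability density derived in the paper.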

  10. An analytically soluble problem in fully nonlinear statistical gravitational lensing

    NASA Technical Reports Server (NTRS)

    Schneider, P.

    1987-01-01

    The amplification probability distribution p(I)dI for a point source behind a random star field which acts as the deflector exhibits an I^(-3) behavior for large amplification, as can be shown from the universality of the lens equation near critical lines. In this paper it is shown that the amplitude of the I^(-3) tail can be derived exactly for arbitrary mass distribution of the stars, surface mass density of stars and smoothly distributed matter, and large-scale shear. This is then compared with the corresponding linear result.

  11. Performance of unbalanced QPSK in the presence of noisy reference and crosstalk

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1979-01-01

    The problem of transmitting two telemetry data streams having different rates and different powers using unbalanced quadriphase shift keying (UQPSK) signaling is considered. It is noted that the presence of a noisy carrier phase reference causes a degradation in detection performance in coherent communications systems and that imperfect carrier synchronization not only attenuates the main demodulated signal voltage in UQPSK but also produces interchannel interference (crosstalk) which degrades the performance still further. Exact analytical expressions for symbol error probability of UQPSK in the presence of noise phase reference are derived.

  12. Few-Photon Model of the Optical Emission of Semiconductor Quantum Dots

    NASA Astrophysics Data System (ADS)

    Richter, Marten; Carmele, Alexander; Sitek, Anna; Knorr, Andreas

    2009-08-01

    The Jaynes-Cummings model provides a well established theoretical framework for single electron two level systems in a radiation field. Similar exactly solvable models for semiconductor light emitters such as quantum dots dominated by many particle interactions are not known. We access these systems by a generalized cluster expansion, the photon-probability cluster expansion: a reliable approach for few-photon dynamics in many body electron systems. As a first application, we discuss vacuum Rabi oscillations and show that their amplitude determines the number of electrons in the quantum dot.

  13. The Defense Policy of the Soviet Union

    DTIC Science & Technology

    1989-08-01

    aims, the probable methods of waging armed combat, the tasks to be performed by the Armed Forces, and the measures required for the all-around social... organ that exercises ultimate decisional authority on all issues of consequence in the Soviet Union. This small body, whose exact size varies slightly...

  14. Processing and Probability Analysis of Pulsed Terahertz NDE of Corrosion under Shuttle Tile Data

    NASA Technical Reports Server (NTRS)

    Anastasi, Robert F.; Madaras, Eric I.; Seebo, Jeffrey P.; Ely, Thomas M.

    2009-01-01

    This paper examines data processing and probability analysis of pulsed terahertz NDE scans of corrosion defects under a Shuttle tile. Pulsed terahertz data collected from an aluminum plate with fabricated corrosion defects and covered with a Shuttle tile is presented. The corrosion defects imaged were fabricated by electrochemically etching areas of various diameter and depth in the plate. In this work, the aluminum plate echo signal is located in the terahertz time-of-flight data and a threshold is applied to produce a binary image of sample features. Feature location and area are examined and identified as corrosion through comparison with the known defect layout. The results are tabulated with hit, miss, or false call information for a probability of detection analysis that is used to identify an optimal processing threshold.
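    The processing step described — threshold the time-of-flight data to a binary image, then measure feature locations and areas for comparison with the known defect layout — can be sketched with scipy.ndimage on a synthetic amplitude map. The defect geometry, noise level, and threshold below are hypothetical.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# hypothetical stand-in for a terahertz time-of-flight amplitude map:
# background noise plus two circular "corrosion" indications
img = rng.normal(0.0, 0.05, size=(100, 100))
yy, xx = np.mgrid[:100, :100]
img[(yy - 30) ** 2 + (xx - 30) ** 2 < 36] += 1.0    # defect 1, radius ~6
img[(yy - 70) ** 2 + (xx - 60) ** 2 < 100] += 1.0   # defect 2, radius ~10

# threshold to a binary image, then label connected features
binary = img > 0.5
labels, n_features = ndimage.label(binary)
areas = ndimage.sum(binary, labels, index=range(1, n_features + 1))
centers = ndimage.center_of_mass(binary, labels, range(1, n_features + 1))
```

    Matching the labeled feature centers and areas against the fabricated defect layout yields the hit/miss/false-call table that feeds the probability of detection analysis, and sweeping the threshold identifies its optimal value.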

  15. A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics

    DTIC Science & Technology

    2007-05-01

    findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of... likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion... lowered the fused classifier’s performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each

  16. S-Wave Normal Mode Propagation in Aluminum Cylinders

    USGS Publications Warehouse

    Lee, Myung W.; Waite, William F.

    2010-01-01

    Large amplitude waveform features have been identified in pulse-transmission shear-wave measurements through cylinders that are long relative to the acoustic wavelength. The arrival times and amplitudes of these features do not follow the predicted behavior of well-known bar waves, but instead they appear to propagate with group velocities that increase as the waveform feature's dominant frequency increases. To identify these anomalous features, the wave equation is solved in a cylindrical coordinate system using an infinitely long cylinder with a free surface boundary condition. The solution indicates that large amplitude normal-mode propagations exist. Using the high-frequency approximation of the Bessel function, an approximate dispersion relation is derived. The predicted amplitude and group velocities using the approximate dispersion relation qualitatively agree with measured values at high frequencies, but the exact dispersion relation should be used to analyze normal modes for full ranges of frequency of interest, particularly at lower frequencies.

  17. Baraitser and Winter syndrome with growth hormone deficiency.

    PubMed

    Chentli, Farida; Zellagui, Hadjer

    2014-01-01

    Baraitser-Winter syndrome (BWS), first reported in 1988, is apparently due to genetic abnormalities that are still not well defined, although many gene abnormalities have already been discovered, and de novo missense changes in the cytoplasmic actin-encoding genes ACTB and ACTG1 were identified recently. The syndrome combines facial and cerebral malformations. Facial malformations totally or partially present in the same patient are iris coloboma, bilateral ptosis, hypertelorism, broad nasal bridge, and prominent epicanthic folds. The various brain malformations are probably responsible for growth and mental retardation. To the best of our knowledge, the syndrome is very rare, as few cases have been reported so far. Our aim was to describe a child with a phenotype that looks like BWS together with proven partial growth hormone (GH) deficiency, which had not been reported before. A 7-year-old girl of consanguineous parents was referred for short stature and mental retardation. Clinical examination showed dwarfism and delayed mental development. Other clinical features included strabismus, epicanthic folds, broad nasal bridge, and brain anomalies such as lissencephaly, bilateral hygroma, and cerebral atrophy. Hormonal assessment showed partial GH deficiency without other endocrine disorders. Our case looks exactly like BWS; however, apart from facial and cerebral abnormalities, there is a partial GH deficiency that can explain the harmonious short stature. This case seems worth reporting, as it adds GH deficiency to this very rare syndrome.

  18. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard, whether exact algorithms or approximate methods are used. For very complex networks, however, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available: logic sampling (the first stochastic method proposed for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these simulation methods, and then propose an improved importance sampling algorithm, the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and the evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. A performance comparison with well-known methods such as the junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
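    The core idea, a linear-Gaussian proposal whose parameters are adaptively refit from previously weighted samples, can be sketched in a few lines. The toy model below (X ~ N(0,1), Y | X ~ N(sin X, 0.5^2)) and all parameter names are illustrative assumptions, not the authors' LGIS implementation or their test networks:

```python
# Hedged sketch of the linear-Gaussian importance-sampling idea: a continuous
# child Y with a mildly nonlinear conditional p(y|x) = N(sin x, 0.5^2) is
# sampled from a linear-Gaussian proposal q(y|x) = N(a*x + b, s^2), whose
# parameters are refit from each batch of weighted samples.
import numpy as np

rng = np.random.default_rng(0)

def log_normal_pdf(v, mean, sd):
    # log density of N(mean, sd^2) evaluated at v
    return -0.5 * np.log(2 * np.pi * sd**2) - (v - mean) ** 2 / (2 * sd**2)

true_sd = 0.5                     # Y | X ~ N(sin X, true_sd^2); X ~ N(0, 1)
a, b, s = 0.0, 0.0, 2.0           # crude initial proposal q(y|x) = N(a*x + b, s^2)

for sweep in range(3):            # adaptive sweeps: sample, weight, refit
    x = rng.normal(0.0, 1.0, 20000)
    y = rng.normal(a * x + b, s)  # draw Y from the proposal, not the true conditional
    logw = log_normal_pdf(y, np.sin(x), true_sd) - log_normal_pdf(y, a * x + b, s)
    w = np.exp(logw - logw.max())
    w /= w.sum()                  # self-normalized importance weights
    # Weighted least-squares refit of the proposal's linear coefficients.
    design = np.column_stack([x, np.ones_like(x)])
    sqrt_w = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(design * sqrt_w, y * sqrt_w[:, 0], rcond=None)
    a, b = coef
    resid = y - (a * x + b)
    s = max(float(np.sqrt(np.sum(w * resid**2))), 0.3)  # floor keeps the proposal wide

est = float(np.sum(w * y))        # self-normalized estimate of E[Y]; analytically 0
print(round(est, 3))
```

    Self-normalized importance weighting keeps the estimate of E[Y] consistent even when the linear-Gaussian proposal only crudely matches the nonlinear conditional; adapting the proposal from the previous samples mainly reduces the variance of the weights.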

  19. Evolution of wave patterns and temperature field in shock-tube flow

    NASA Astrophysics Data System (ADS)

    Kiverin, A. D.; Yakovenko, I. S.

    2018-05-01

    The paper is devoted to the numerical analysis of wave patterns behind a shock wave propagating in a tube filled with a gaseous mixture. It is shown that the flow inside the boundary layer behind the shock wave is unstable, and that the instability develops in a way that fully corresponds to the solution obtained for the boundary layer over a flat plate. Vortical perturbations inside the boundary layer determine the nonuniformity of the temperature field. In turn, precisely these nonuniformities define how ignition kernels arise in the combustible mixture after the reflected shock interacts with the boundary layer. In particular, the temperature nonuniformity determines the spatial limits of the probable ignition kernel positions relative to the end wall and side walls of the tube. In the case of low-intensity incident shocks, ignition can start no farther than the point of first interaction between the reflected shock wave and the roller vortices formed during boundary layer development. The proposed physical mechanisms are formulated in general terms and can be used to interpret experimental data in any system with a delayed exothermal reaction start. It is also shown that contact-surface thickening occurs due to its interaction with Tollmien-Schlichting waves. This conclusion is important for understanding the features of ignition in shock tubes operating in the over-tailored regime.

  20. Ali, Cunich: Halley's Churches: Halley and the London Queen Anne Churches

    NASA Astrophysics Data System (ADS)

    Ali, Jason R.; Cunich, Peter

    2005-04-01

    Edmond Halley's enormous contribution to science has received much attention. New research adds an intriguing chapter to his story, concerning his hitherto unexplored association with the baroque architectural visionary Nicholas Hawksmoor and some important Temple-inspired churches built in London in the early 1700s. We argue that Christchurch Spitalfields and St Anne's Limehouse, both begun in the summer of 1714, were aligned exactly eastwards using ``corrected'' magnetic-compass bearings, and that Halley influenced or aided Hawksmoor. By this time the two men had probably known each other for 30 years and had recently worked together on the Clarendon Building in Oxford. Despite more than 1500 years of Chinese and about 500 years of Western compass technology at the time, these probably represent the first constructions planned using a modern ``scientific'' technique. The research also throws light on Halley's disputed religious position.
