Sample records for random finite set

  1. A Random Finite Set Approach to Space Junk Tracking and Identification

    DTIC Science & Technology

    2014-09-03

    Final report covering 31 Jan 2013 – 29 Apr 2014. A Random Finite Set Approach to Space Junk Tracking and Identification, by Ba-Ngu Vo and Ba-Tuong Vo. Contract number FA2386-13...

  2. Multisource passive acoustic tracking: an application of random finite set data fusion

    NASA Astrophysics Data System (ADS)

    Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung

    2010-04-01

    Multisource passive acoustic tracking is useful in animal bio-behavioral study by replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes that are capable of producing multiple direction-of-arrival (DOA) estimates, such data needs to be combined into meaningful estimates. The random finite set provides a probabilistic mathematical model that is suitable for analysis and for the synthesis of optimal estimation algorithms. The proposed algorithm has been verified using a simulation and a controlled test experiment.

  3. Finite-time stability of neutral-type neural networks with random time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Saravanan, S.; Zhu, Quanxin

    2017-11-01

    This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays, where the randomly time-varying delays are characterised by a Bernoulli stochastic variable. We construct a suitable Lyapunov-Krasovskii functional and establish a set of sufficient conditions, in the form of linear matrix inequalities, that guarantee the finite-time stability of the system concerned. The conditions are derived by employing Jensen's inequality, the free-weighting matrix method and Wirtinger's double integral inequality, and the results can be extended to the analysis and design of neutral-type neural networks with random time-varying delays. Two numerical examples are addressed to show the effectiveness of the developed techniques.

  4. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method is demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.
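
The eigenvalue-orthogonalization step described above (transforming correlated discretized random-field variables to a reduced set of uncorrelated ones) can be sketched in a few lines of numpy. The exponential correlation model, mesh size, and 99% variance cutoff below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical discretized random field: exponential correlation on a 1-D mesh
x = np.linspace(0.0, 1.0, 50)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # correlation matrix

# Eigenvalue orthogonalization: C = V diag(lam) V^T, modes sorted by variance
lam, V = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

# A reduced set of uncorrelated variables often suffices: keep the modes
# that capture 99% of the total variance (cutoff is an illustrative choice)
k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1

# Correlated field samples from k uncorrelated standard normals z
z = np.random.default_rng(0).standard_normal((k, 1000))
b = V[:, :k] @ (np.sqrt(lam[:k])[:, None] * z)
print(k, b.shape)
```

The same truncated expansion is what makes the second-moment analysis cheap: moments are propagated through only k uncorrelated variables instead of the full mesh.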

  5. Simulation of Voltage SET Operation in Phase-Change Random Access Memories with Heater Addition and Ring-Type Contactor for Low-Power Consumption by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Li, Yi-Jin

    2010-06-01

    A three-dimensional finite element model for phase change random access memory is established to simulate the electric, thermal and phase state distributions during the SET operation. The model is applied to simulate the SET behaviors of the heater addition structure (HS) and the ring-type contact in the bottom electrode (RIB) structure. The simulation results indicate that a small bottom electrode contactor (BEC) is beneficial for heat efficiency and reliability in the HS cell, and that a bottom electrode contactor of size Fx = 80 nm is a good choice for the RIB cell. It is also shown that a SET pulse time of 100 ns is appropriate for low power consumption and fast operation.

  6. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Simulation of SET Operation in Phase-Change Random Access Memories with Heater Addition and Ring-Type Contactor for Low-Power Consumption by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Feng, Song-Lin

    2009-11-01

    A three-dimensional finite element model for phase change random access memory (PCRAM) is established for comprehensive electrical and thermal analysis during the SET operation. The SET behaviours of the heater addition structure (HS) and the ring-type contact in bottom electrode (RIB) structure are compared with each other. There are two ways to reduce the RESET current: applying a high-resistivity interfacial layer, or building a new device structure. The simulation results indicate that the SET current varies little between these power-reduction approaches. This study takes both the RESET and SET operation currents into consideration, showing that the RIB structure PCRAM cell is suitable for future high-density devices, due to its high heat efficiency in the RESET operation.

  7. From Large Deviations to Semidistances of Transport and Mixing: Coherence Analysis for Finite Lagrangian Data

    NASA Astrophysics Data System (ADS)

    Koltai, Péter; Renger, D. R. Michiel

    2018-06-01

    One way to analyze complicated non-autonomous flows is through trying to understand their transport behavior. In a quantitative, set-oriented approach to transport and mixing, finite time coherent sets play an important role. These are time-parametrized families of sets with unlikely transport to and from their surroundings under small or vanishing random perturbations of the dynamics. Here we propose, as a measure of transport and mixing for purely advective (i.e., deterministic) flows, (semi)distances that arise under vanishing perturbations in the sense of large deviations. Analogously, for given finite Lagrangian trajectory data we derive a discrete-time-and-space semidistance that comes from the "best" approximation of the randomly perturbed process conditioned on this limited information of the deterministic flow. It can be computed as shortest path in a graph with time-dependent weights. Furthermore, we argue that coherent sets are regions of maximal farness in terms of transport and mixing, and hence they occur as extremal regions on a spanning structure of the state space under this semidistance—in fact, under any distance measure arising from the physical notion of transport. Based on this notion, we develop a tool to analyze the state space (or the finite trajectory data at hand) and identify coherent regions. We validate our approach on idealized prototypical examples and well-studied standard cases.

  8. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode-dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of an anti-windup control scheme, the actuator saturation risks can be mitigated. Moreover, the derived conditions help to optimize the estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of the proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Determination of Nonlinear Stiffness Coefficients for Finite Element Models with Application to the Random Vibration Problem

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.

    1999-01-01

    In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of a MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.

  10. Time series, correlation matrices and random matrix models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinayak; Seligman, Thomas H.

    2014-01-08

    In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role to describe a null hypothesis or a minimum information hypothesis for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. By consequence random correlation matrices have a random component, and corresponding ensembles are used. In the latter we use random matrices to describe a high temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.

  11. Not all (possibly) “random” sequences are created equal

    PubMed Central

    Pincus, Steve; Kalman, Rudolf E.

    1997-01-01

    The need to assess the randomness of a single sequence, especially a finite sequence, is ubiquitous, yet is unaddressed by axiomatic probability theory. Here, we assess randomness via approximate entropy (ApEn), a computable measure of sequential irregularity, applicable to single sequences of both (even very short) finite and infinite length. We indicate the novelty and facility of the multidimensional viewpoint taken by ApEn, in contrast to classical measures. Furthermore and notably, for finite length, finite state sequences, one can identify maximally irregular sequences, and then apply ApEn to quantify the extent to which given sequences differ from maximal irregularity, via a set of deficit (defm) functions. The utility of these defm functions, which we show allows one to considerably refine the notions of probabilistic independence and normality, is featured in several studies, including (i) digits of e, π, √2, and √3, both in base 2 and in base 10, and (ii) sequences given by fractional parts of multiples of irrationals. We prove companion analytic results, which also feature in a discussion of the role and validity of the almost sure properties from axiomatic probability theory insofar as they apply to specified sequences and sets of sequences (in the physical world). We conclude by relating the present results and perspective to both previous and subsequent studies. PMID:11038612
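
Approximate entropy as described here is directly computable. A minimal sketch of ApEn(m, r), comparing length-m and length-(m+1) template matches under a Chebyshev tolerance with self-matches included; the parameter choices m=2, r=0.2 and the two test sequences are illustrative, not from the paper:

```python
import numpy as np

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a finite sequence u."""
    u = np.asarray(u, dtype=float)

    def phi(m):
        n = len(u) - m + 1
        # all overlapping length-m templates
        X = np.array([u[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        # fraction of templates within tolerance r (self-matches included)
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
regular = np.tile([0.0, 1.0], 50)   # perfectly periodic: ApEn near 0
irregular = rng.random(100)         # i.i.d. uniform: markedly higher ApEn
print(approx_entropy(regular), approx_entropy(irregular))
```

A maximally irregular sequence attains the largest ApEn for its length and state count; the deficit functions of the paper measure how far a given sequence falls short of that maximum.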

  12. Stochastic dynamics of time correlation in complex systems with discrete time

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail

    2000-11-01

    In this paper we present a concept for the description of random processes in complex systems with discrete time. It involves describing the kinetics of discrete processes by means of a chain of finite-difference non-Markov equations for time correlation functions (TCFs). We introduce the dynamic (time-dependent) information Shannon entropy Si(t), where i=0,1,2,3,..., as an information measure of the stochastic dynamics of time correlation (i=0) and time memory (i=1,2,3,...). The set of functions Si(t) constitutes a quantitative measure of time correlation disorder (i=0) and time memory disorder (i=1,2,3,...) in a complex system. The theory starts from a careful analysis of time correlation involving the dynamics of a set of vectors of various chaotic states. We examine in detail two stochastic processes involving the creation and annihilation of time correlation (or time memory). We analyse the vectors' dynamics employing finite-difference equations for random variables and the evolution operator describing their natural motion. The existence of a TCF leads to the construction of a set of projection operators by use of the scalar product operation. Harnessing an infinite set of orthogonal dynamic random variables, obtained by a Gram-Schmidt orthogonalization procedure, leads to an infinite chain of finite-difference non-Markov kinetic equations for discrete TCFs and memory functions (MFs). The solution of these equations yields recurrence relations between the TCFs and MFs of senior and junior orders. This offers new opportunities for detecting the frequency power spectra of the entropy function Si(t) for time correlation (i=0) and time memory (i=1,2,3,...), and opens considerable scope for the study of the stochastic dynamics of discrete random processes in complex systems. Application of this technique to the stochastic dynamics of RR intervals from human ECGs shows convincing evidence of non-Markovian phenomena associated with peculiarities in short- and long-range scaling. The method may be of use in distinguishing healthy from pathologic data sets based on differences in these non-Markovian properties.

  13. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then, by constructing approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  14. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
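
The precomputation described above can be sketched as a backward pass over one cost matrix per Markov mode. The two-mode system matrices and transition probabilities below are invented for illustration, and the recursion is one standard form of the coupled Riccati-like difference equations, not necessarily the paper's exact formulation:

```python
import numpy as np

# Hypothetical two-mode jump-linear system (all data illustrative)
A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.2], [0.0, 0.9]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.5]])]
Q = [np.eye(2), 2 * np.eye(2)]          # state costs per mode
R = [np.eye(1), np.eye(1)]              # control costs per mode
Pr = np.array([[0.9, 0.1], [0.3, 0.7]]) # mode transition probabilities
T = 50                                   # horizon length

# Coupled Riccati-like backward recursion: one cost-to-go matrix P_i per mode,
# coupled through the expected next-step cost Pbar_i = sum_j Pr[i,j] P_j
P = [Q[i].copy() for i in range(2)]
for _ in range(T):
    Pbar = [sum(Pr[i, j] * P[j] for j in range(2)) for i in range(2)]
    K = [np.linalg.solve(R[i] + B[i].T @ Pbar[i] @ B[i],
                         B[i].T @ Pbar[i] @ A[i]) for i in range(2)]
    P = [Q[i] + A[i].T @ Pbar[i] @ (A[i] - B[i] @ K[i]) for i in range(2)]

print([k.round(3) for k in K])   # mode-dependent feedback gains u = -K_i x
```

As the horizon grows, constant gains emerging from this recursion are exactly the candidates whose existence and stabilizing properties the paper characterizes.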

  15. Correlation of finite element free vibration predictions using random vibration test data. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Chambers, Jeffrey A.

    1994-01-01

    Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross orthogonality check. In most situations response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
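
The cross orthogonality check itself reduces to a single matrix product between mass-normalized mode-shape sets. A minimal sketch with invented mass and stiffness matrices standing in for a reduced FEM, and noisy copies of the analytical modes standing in for test-acquired shapes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 3   # illustrative: 8 measurement DOFs, 3 retained modes

# Invented lumped mass matrix and symmetric positive-definite stiffness
M = np.diag(rng.uniform(1.0, 2.0, n))
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)

# Mass-normalized analytical mode shapes from the generalized eigenproblem,
# so that Phi^T M Phi = I
Minv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
w, v = np.linalg.eigh(Minv_half @ K @ Minv_half)
phi_fem = Minv_half @ v[:, :m]

# "Test" shapes: analytical shapes plus small simulated measurement noise
phi_test = phi_fem + 0.02 * rng.standard_normal((n, m))

# Cross orthogonality matrix: near-identity indicates good test/analysis
# correlation (diagonal near 1, off-diagonal near 0)
C = phi_test.T @ M @ phi_fem
print(np.diag(C).round(3))
```

Whether the test columns come from a modal survey or from the normalized static deformations the thesis proposes, the acceptance criterion reads off the same matrix C.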

  16. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
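
The "false impression of bias" point can be illustrated with a small simulation: conditional on the realized sample size, the averages deviate from the true mean in opposite directions, even for a simple rule. The two-stage stopping rule and all numbers below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical two-stage group sequential design: collect n observations,
# stop if their mean exceeds 0, otherwise collect n more
rng = np.random.default_rng(7)
mu, n, reps = 0.0, 20, 20000

x1 = rng.standard_normal((reps, n)) + mu   # stage-1 data
x2 = rng.standard_normal((reps, n)) + mu   # stage-2 data (used if we continue)
stop_early = x1.mean(axis=1) > 0.0

# Ordinary sample average of all data actually collected
xbar = np.where(stop_early, x1.mean(axis=1),
                (x1.mean(axis=1) + x2.mean(axis=1)) / 2)

cond_early = xbar[stop_early].mean()    # average conditional on N = n
cond_late = xbar[~stop_early].mean()    # average conditional on N = 2n
print(cond_early, cond_late, xbar.mean())
```

Conditioning on early stopping selects high stage-1 means (so the conditional average sits above mu), while conditioning on continuation selects low ones; neither conditional average by itself reflects the estimator's marginal behavior.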

  17. Finite Birth-and-Death Models in Randomly Changing Environments.

    DTIC Science & Technology

    1982-02-01

    Technical report NPS55-82-007, Naval Postgraduate School, Monterey, California, February 1982. Finite Birth-and-Death Models in Randomly Changing Environments, by D. P. Gaver, P. A. Jacobs, and G. Latouche.

  18. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob sub(M) on the set of all finite and infinite words over A. We can identify a k letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable and the distribution function of M is defined as usual: F(x) := Prob sub(M) { w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much simpler method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.

  19. Universality in chaos: Lyapunov spectrum and random matrix theory.

    PubMed

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
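
Finite-time Lyapunov spectra of matrix products are commonly computed with repeated QR re-orthonormalization, which keeps the growing directions numerically separated. A minimal sketch for a product of i.i.d. Gaussian matrices; the dimension, step count, and normalization are illustrative choices, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 6, 2000

# Lyapunov spectrum of a product of i.i.d. random Gaussian matrices,
# accumulated stably via QR: after each step, Q carries the orthonormal
# frame and log|diag(R)| the per-direction stretching rates
Q = np.eye(n)
lyap = np.zeros(n)
for _ in range(steps):
    M = rng.standard_normal((n, n)) / np.sqrt(n)   # one random factor
    Q, R = np.linalg.qr(M @ Q)
    lyap += np.log(np.abs(np.diag(R)))
lyap = np.sort(lyap / steps)[::-1]                 # finite-time exponents
print(lyap.round(3))
```

Statistics of the level spacings of such a spectrum are what one would compare against random-matrix-theory predictions.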

  20. Universality in chaos: Lyapunov spectrum and random matrix theory

    NASA Astrophysics Data System (ADS)

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.

  1. Quantum Adiabatic Optimization and Combinatorial Landscapes

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Knysh, S.; Morris, R. D.

    2003-01-01

    In this paper we analyze the performance of the Quantum Adiabatic Evolution (QAE) algorithm on a variant of the Satisfiability problem for an ensemble of random graphs parametrized by the ratio of clauses to variables, gamma = M / N. We introduce a set of macroscopic parameters (landscapes) and put forward an ansatz of universality for random bit flips. We then formulate the problem of finding the smallest eigenvalue and the excitation gap as a statistical mechanics problem. We use the so-called annealing approximation with a refinement that a finite set of macroscopic variables (versus only energy) is used, and are able to show the existence of a dynamic threshold gamma = gamma_d, beyond which QAE should take an exponentially long time to find a solution. We compare the results for extended and simplified sets of landscapes and provide numerical evidence in support of our universality ansatz.

  2. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.
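
A sparse symmetric random matrix of the kind used as a test bed above can be generated and diagonalized directly. The size, mean degree, and Gaussian edge weights below are illustrative choices within the class the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(42)
N, c = 2000, 4.0   # N nodes, mean degree c kept finite as N grows

# Sparse symmetric random matrix on an Erdos-Renyi graph: each edge is
# present independently with probability c/N and carries an i.i.d.
# Gaussian weight; symmetrize so the spectrum is real
upper = np.triu(rng.random((N, N)) < c / N, k=1)
J = upper * rng.standard_normal((N, N))
J = J + J.T

eig = np.linalg.eigvalsh(J)
print(eig.min().round(2), eig.max().round(2))
```

At mean degree c > 1 the graph has a giant component plus many finite clusters; the finite clusters contribute a dense set of localized eigenvalues that the paper's method separates from the giant-component part of this spectrum.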

  3. Harnessing the Bethe free energy

    PubMed Central

    Bapst, Victor

    2016-01-01

    A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzkala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178

  4. Critical spreading dynamics of parity conserving annihilating random walks with power-law branching

    NASA Astrophysics Data System (ADS)

    Laise, T.; dos Anjos, F. C.; Argolo, C.; Lyra, M. L.

    2018-09-01

    We investigate the critical spreading of the parity-conserving annihilating random walks model with Lévy-like branching. The random walks are considered to perform normal diffusion with probability p on the sites of a one-dimensional lattice, annihilating in pairs by contact. With probability 1 - p, each particle can also produce two offspring which are placed at a distance r from the original site following a power-law Lévy-like distribution P(r) ∝ 1/r^α. We perform numerical simulations starting from a single particle. A finite-time scaling analysis is employed to locate the critical diffusion probability p_c below which a finite density of particles develops in the long-time limit. Further, we estimate the spreading dynamical exponents related to the increase of the average number of particles at the critical point and its respective fluctuations. The critical exponents deviate from those of the counterpart model with short-range branching for small values of α. The numerical data suggest that continuously varying spreading exponents set in while the branching process still results in a diffusive-like spreading.

  5. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that, renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that gluing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has a continuous positive density on R_+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  6. Probabilistic finite elements for fracture mechanics

    NASA Technical Reports Server (NTRS)

    Besterfield, Glen

    1988-01-01

    The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic distributions, such as expectation, covariance and correlation stress intensity factors, are calculated for random load, random material and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.

  7. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach for solving the flow problem on a coarse grid while obtaining a velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation of the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated with the proposed sparse representation method.
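One of the two sampling strategies named above, proper orthogonal decomposition, reduces in its simplest form to an SVD of a snapshot matrix. The following is a minimal, self-contained sketch of that idea on synthetic data; the snapshot matrix, energy tolerance, and sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: each column is a (fine-scale) solution
# sampled at one parameter value from the training set.
n_dofs, n_samples = 200, 40
snapshots = rng.standard_normal((n_dofs, n_samples)).cumsum(axis=0)

# Proper orthogonal decomposition: SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep the smallest basis capturing 99.9% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]          # reduced, parameter-independent basis

# Any new solution is approximated in this low-dimensional space.
coeffs = basis.T @ snapshots[:, 0]
reconstruction = basis @ coeffs
```

In the full method the reduced basis replaces the parameter-dependent GMsFE basis, so the expensive basis construction moves entirely offline.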

  8. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach for solving the flow problem on a coarse grid while obtaining a velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation of the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated with the proposed sparse representation method.

  9. Measuring order in disordered systems and disorder in ordered systems: Random matrix theory for isotropic and nematic liquid crystals and its perspective on pseudo-nematic domains

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Stratt, Richard M.

    2018-05-01

    Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.

  10. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with exponential decay and infinite support, while the level-set method, which is a front-tracking technique, generates a sharp function with finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation. As a consequence, the fire front acquires a random character, too, so a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface-particle displacement. In fact, when the level-set method is developed for tracking a front interface with random motion, the resulting averaged process turns out to be governed by an evolution equation of reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterizing role as in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as flank and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and also the fire overcoming a firebreak zone, a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.

  11. A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time.

    PubMed

    Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D

    2017-11-01

    This work presents a data-driven method to simulate, in real-time, the biomechanical behavior of the breast tissues in some image-guided interventions such as biopsies or radiotherapy dose delivery as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations. Then, they were used to predict in real-time the deformation of the breast tissues during the compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models in the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real-time (<0.2 s). Copyright © 2017 Elsevier Ltd. All rights reserved.
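As a rough illustration of the surrogate-modeling idea (not the authors' breast model or data), the sketch below trains an extremely randomized trees regressor on a synthetic displacement field and evaluates the mean Euclidean prediction error; the input layout, displacement formulas, and sizes are invented for the example.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)

# Hypothetical stand-in for the FE training data: inputs are the plate
# compression level and a node's rest position; outputs are the node's
# displaced coordinates under a toy deformation field.
X = rng.uniform(0.0, 1.0, size=(2000, 3))      # (compression, x, y)
y = np.column_stack([
    X[:, 1] * (1.0 - 0.3 * X[:, 0]),           # squeezed in x
    X[:, 2] * (1.0 + 0.1 * X[:, 0]),           # bulging in y
])

# Offline training on FE-simulated deformations.
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, y)

# Online use: prediction is cheap enough for real-time deployment.
X_test = rng.uniform(0.0, 1.0, size=(200, 3))
y_true = np.column_stack([
    X_test[:, 1] * (1.0 - 0.3 * X_test[:, 0]),
    X_test[:, 2] * (1.0 + 0.1 * X_test[:, 0]),
])
err = np.linalg.norm(model.predict(X_test) - y_true, axis=1).mean()
```

The mean Euclidean node error plays the same role as the paper's sub-2-mm validation metric, just on invented units and geometry.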

  12. A comparison of error bounds for a nonlinear tracking system with detection probability Pd < 1.

    PubMed

    Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin

    2012-12-14

Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is derived within the framework of finite set statistics. The others, the information reduction factor (IRF) posterior Cramér-Rao lower bound (PCRLB) and the enumeration method (ENUM) PCRLB, are derived within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, while it is tighter than the IRF PCRLB, when the target exists from the beginning to the end. When the disappearance of existing targets and the appearance of new targets are considered, the RFS bound becomes tighter over time than both the IRF PCRLB and the ENUM PCRLB, by incorporating the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds.

  13. A Comparison of Error Bounds for a Nonlinear Tracking System with Detection Probability Pd < 1

    PubMed Central

    Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin

    2012-01-01

Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is derived within the framework of finite set statistics. The others, the information reduction factor (IRF) posterior Cramér-Rao lower bound (PCRLB) and the enumeration method (ENUM) PCRLB, are derived within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, while it is tighter than the IRF PCRLB, when the target exists from the beginning to the end. When the disappearance of existing targets and the appearance of new targets are considered, the RFS bound becomes tighter over time than both the IRF PCRLB and the ENUM PCRLB, by incorporating the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds. PMID:23242274

  14. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

Proposed Supplement 1 to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate the parameters of the input distributions. We illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach is compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We investigate performance in terms of the coverage probabilities of the derived confidence intervals.
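The two-stage idea can be sketched as follows: an outer loop accounts for the finite-sample uncertainty in the input-distribution parameters (here via a simple bootstrap, an assumption of this sketch rather than the paper's exact scheme), and an inner loop performs the ordinary GUM-style propagation of distributions. The measurement equation and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def measurand(a, b):
    # Hypothetical non-linear measurement equation.
    return a * np.exp(b)

# Stage 0: small observed samples of the two inputs.
a_obs = rng.normal(1.0, 0.05, size=8)
b_obs = rng.normal(0.2, 0.02, size=8)

outer, inner = 200, 500
means = np.empty(outer)
for i in range(outer):
    # Stage 1: resample plausible input-distribution parameters
    # (here by bootstrapping the small observed samples).
    a_star = rng.choice(a_obs, size=a_obs.size, replace=True)
    b_star = rng.choice(b_obs, size=b_obs.size, replace=True)
    # Stage 2: ordinary propagation of distributions with those parameters.
    a = rng.normal(a_star.mean(), a_star.std(ddof=1), size=inner)
    b = rng.normal(b_star.mean(), b_star.std(ddof=1), size=inner)
    means[i] = measurand(a, b).mean()

# The spread of `means` reflects the extra uncertainty contributed by the
# finite sample sizes, which a single-stage Monte Carlo would ignore.
lo, hi = np.percentile(means, [2.5, 97.5])
```

A single-stage run would collapse the outer loop and report only the inner-loop spread, understating the interval width.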

  15. Complexity transitions in global algorithms for sparse linear systems over finite fields

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.

    2002-09-01

We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations over a finite Galois field with q elements. Using tools from statistical mechanics, we are able to identify phase transitions in the structure of the solution space and to connect them to changes in the performance of a global algorithm, namely Gaussian elimination. Crossing the phase boundaries produces a dramatic increase in the memory and CPU requirements of the algorithm. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
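Gaussian elimination over GF(q) for prime q differs from the real-valued case only in that division becomes multiplication by a modular inverse. A minimal sketch (the system sizes and the choice q = 7 are arbitrary illustrations, not the paper's instances):

```python
import numpy as np

def solve_gf(A, b, q):
    """Gaussian elimination for A x = b over GF(q), q prime.
    Returns one solution (free variables set to 0) or None if inconsistent."""
    A = np.array(A, dtype=np.int64) % q
    b = np.array(b, dtype=np.int64) % q
    n_rows, n_cols = A.shape
    M = np.hstack([A, b.reshape(-1, 1)])
    pivots, r = [], 0
    for c in range(n_cols):
        rows = [i for i in range(r, n_rows) if M[i, c]]
        if not rows:
            continue                           # no pivot in this column
        M[[r, rows[0]]] = M[[rows[0], r]]      # swap pivot row up
        inv = pow(int(M[r, c]), q - 2, q)      # Fermat inverse, q prime
        M[r] = (M[r] * inv) % q
        for i in range(n_rows):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % q
        pivots.append(c)
        r += 1
        if r == n_rows:
            break
    if any(M[i, -1] for i in range(r, n_rows)):
        return None                            # row 0 = nonzero: inconsistent
    x = np.zeros(n_cols, dtype=np.int64)
    for i, c in enumerate(pivots):
        x[c] = M[i, -1]
    return x

# A random system over GF(7) with a planted solution.
rng = np.random.default_rng(3)
A = rng.integers(0, 7, size=(5, 5))
x_true = rng.integers(0, 7, size=5)
b = (A @ x_true) % 7
x = solve_gf(A, b, 7)
```

The memory blow-up the abstract describes comes from fill-in: sparse rows become dense as elimination proceeds, which is what the phase transitions track.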

  16. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
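For the unconstrained continuous case, the straight-line relationship mentioned above can be recovered by Lagrange multipliers; a sketch in our own notation (the maximizer is a generalized Gaussian):

```latex
% Maximize h(f) = -\int f \ln f \, dx subject to
% \int |x|^p f(x)\,dx = \mu = \|X\|_p^p.
% Stationarity of the Lagrangian gives the generalized Gaussian
f^{*}(x) = \frac{1}{2\,(p\mu)^{1/p}\,\Gamma(1+1/p)}
           \exp\!\left(-\frac{|x|^{p}}{p\mu}\right),
\qquad
h_{\max} = \ln\|X\|_{p} + \frac{1}{p}\ln(pe)
           + \ln\!\bigl(2\,\Gamma(1+1/p)\bigr).
```

The expression for h_max is linear, with unit slope, in the logarithm of the L_p norm, as the abstract states; for p = 2 it reduces to the familiar Gaussian value (1/2) ln(2*pi*e*sigma^2).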

  17. Positivity, discontinuity, finite resources, and nonzero error for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de; Nötzel, J., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de

    2014-12-15

This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers in the affirmative the recent question whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. Having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion, and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.

  18. Bagging Voronoi classifiers for clustering spatial functional data

    NASA Astrophysics Data System (ADS)

    Secchi, Piercesare; Vantini, Simone; Vitelli, Valeria

    2013-06-01

    We propose a bagging strategy based on random Voronoi tessellations for the exploration of geo-referenced functional data, suitable for different purposes (e.g., classification, regression, dimensional reduction, …). Urged by an application to environmental data contained in the Surface Solar Energy database, we focus in particular on the problem of clustering functional data indexed by the sites of a spatial finite lattice. We thus illustrate our strategy by implementing a specific algorithm whose rationale is to (i) replace the original data set with a reduced one, composed by local representatives of neighborhoods covering the entire investigated area; (ii) analyze the local representatives; (iii) repeat the previous analysis many times for different reduced data sets associated to randomly generated different sets of neighborhoods, thus obtaining many different weak formulations of the analysis; (iv) finally, bag together the weak analyses to obtain a conclusive strong analysis. Through an extensive simulation study, we show that this new procedure - which does not require an explicit model for spatial dependence - is statistically and computationally efficient.
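Steps (i)-(iv) above can be sketched on toy lattice data as follows; the "analysis" applied to the local representatives is a deliberately crude energy split standing in for a proper clustering method, and all sizes, seeds, and the latent two-group structure are illustrative inventions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy functional data on a lattice: one curve per site, two latent groups.
n_side, n_t = 20, 30
t = np.linspace(0, 1, n_t)
sites = np.array([(i, j) for i in range(n_side) for j in range(n_side)], float)
labels_true = (sites[:, 0] < n_side / 2).astype(int)
curves = np.sin(2 * np.pi * t) * (1 + labels_true[:, None]) \
         + 0.3 * rng.standard_normal((n_side**2, n_t))

def one_weak_clustering(n_centers=40):
    # (i) random Voronoi tessellation: assign sites to nearest random center,
    # and replace each neighborhood by its local representative (mean curve).
    centers = sites[rng.choice(len(sites), n_centers, replace=False)]
    cell = np.argmin(((sites[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    reps = np.stack([curves[cell == k].mean(0) for k in range(n_centers)])
    # (ii) analyze the representatives (crude two-group split by energy).
    energy = (reps ** 2).mean(1)
    rep_label = (energy > np.median(energy)).astype(int)
    return rep_label[cell]          # propagate labels back to the sites

# (iii)-(iv) repeat over random tessellations and bag by majority vote.
votes = np.mean([one_weak_clustering() for _ in range(25)], axis=0)
labels_hat = (votes > 0.5).astype(int)
accuracy = max(np.mean(labels_hat == labels_true),
               np.mean(labels_hat != labels_true))
```

Each weak clustering is cheap and noisy; the vote across tessellations is what makes the final assignment stable, without any explicit spatial-dependence model.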

  19. Simulation of wave propagation in three-dimensional random media

    NASA Astrophysics Data System (ADS)

    Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1995-04-01

    Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of

  20. Permeability of three-dimensional rock masses containing geomechanically-grown anisotropic fracture networks

    NASA Astrophysics Data System (ADS)

    Thomas, R. N.; Ebigbo, A.; Paluszny, A.; Zimmerman, R. W.

    2016-12-01

The macroscopic permeability of 3D anisotropic geomechanically-generated fractured rock masses is investigated. The explicitly computed permeabilities are compared to the predictions of classical inclusion-based effective medium theories, and to the permeability of networks of randomly oriented and stochastically generated fractures. Stochastically generated fracture networks lack features that arise from fracture interaction, such as non-planarity and termination of fractures upon intersection. Recent discrete fracture network studies include heuristic rules that introduce these features to some extent. In this work, fractures grow and extend under tension from a finite set of initial flaws. The finite element method is used to compute displacements, and modal stress intensity factors are computed around each fracture tip using the interaction integral accumulated over a set of virtual discs. Fracture apertures emerge as a result of simulations that honour the constraints of stress equilibrium and mass conservation. The macroscopic permeabilities are explicitly calculated by solving the local cubic law in the fractures, on an element-by-element basis, coupled to Darcy's law in the matrix. The permeabilities are then compared to the estimates given by the symmetric and asymmetric versions of the self-consistent approximation, which, for randomly fractured volumes, were previously demonstrated to be the most accurate of the inclusion-based effective medium methods (Ebigbo et al., Transport in Porous Media, 2016). The permeabilities of several dozen geomechanical networks are computed as a function of density and in situ stresses. For anisotropic networks, we find that the asymmetric and symmetric self-consistent methods overestimate the effective permeability in the direction of the dominant fracture set.
Effective permeabilities that are more strongly dependent on the connectivity of two or more fracture sets are more accurately captured by the effective medium models.

  1. Extended observability of linear time-invariant systems under recurrent loss of output data

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok; Halevi, Yoram

    1989-01-01

    Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
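The rank test behind such an extended observability condition can be sketched numerically: stack C A^k only for the time steps at which outputs actually arrived, and check the rank of the result. The system below (a double integrator with position-only output) is an invented example, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete-time LTI system: x[k+1] = A x[k], y[k] = C x[k].
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # double integrator (illustrative choice)
C = np.array([[1.0, 0.0]])   # position-only output

def observable_from(sample_times):
    """Rank test using only the outputs received at `sample_times`,
    which need not be consecutive."""
    O = np.vstack([C @ np.linalg.matrix_power(A, int(k))
                   for k in sample_times])
    return np.linalg.matrix_rank(O) == A.shape[0]

# Classical observability: consecutive outputs at k = 0, 1.
ok_classical = observable_from([0, 1])

# Extended observability: a randomly selected nonconsecutive subset of
# outputs, mimicking recurrent data loss.
ks = sorted(rng.choice(np.arange(2, 50), size=3, replace=False))
ok_sparse = observable_from(ks)
```

For this system any two distinct sample times already suffice, which illustrates why randomly thinned output sequences can retain full-state observability.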

  2. Track-before-detect labeled multi-bernoulli particle filter with label switching

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, Angel F.

    2016-10-01

This paper presents a multitarget tracking particle filter (PF) for general track-before-detect measurement models. The PF is presented in the random finite set framework and uses a labelled multi-Bernoulli approximation. We also present a label-switching improvement algorithm based on Markov chain Monte Carlo that is expected to increase filter performance if targets remain in close proximity for a sufficiently long time. The PF is tested in two challenging numerical examples.

  3. Managing numerical errors in random sequential adsorption

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Nowak, Aleksandra

    2016-09-01

The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. Of particular interest is providing hints on simulation setup to achieve a desired level of accuracy. The analysis is based on the properties of saturated random packings of disks on continuous and flat surfaces of different sizes.
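A minimal random sequential adsorption simulation of the kind studied here might look as follows; the surface size, disk radius, and number of attempts are illustrative choices. Both the finite surface L and the finite number of attempts bias the estimated packing fraction below the saturation value, which is exactly the effect the paper quantifies.

```python
import numpy as np

def rsa_disks(L=20.0, r=0.5, attempts=20000, seed=6):
    """Random sequential adsorption of equal disks of radius r on an
    L x L surface with periodic boundaries; returns the placed centers
    and the resulting packing fraction."""
    rng = np.random.default_rng(seed)
    placed = np.empty((0, 2))
    for _ in range(attempts):
        p = rng.uniform(0, L, size=2)
        if placed.size:
            d = np.abs(placed - p)
            d = np.minimum(d, L - d)            # periodic minimum-image distances
            if np.min(np.hypot(d[:, 0], d[:, 1])) < 2 * r:
                continue                        # overlap: reject the attempt
        placed = np.vstack([placed, p])
    return placed, np.pi * r**2 * len(placed) / L**2

disks, phi = rsa_disks()
```

Running with ever more attempts and larger L drives phi toward the 2D saturation value (about 0.547 for disks); stopping early, as here, underestimates it.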

  4. Two Universality Classes for the Many-Body Localization Transition

    NASA Astrophysics Data System (ADS)

    Khemani, Vedika; Sheng, D. N.; Huse, David A.

    2017-08-01

    We provide a systematic comparison of the many-body localization (MBL) transition in spin chains with nonrandom quasiperiodic versus random fields. We find evidence suggesting that these belong to two separate universality classes: the first dominated by "intrinsic" intrasample randomness, and the second dominated by external intersample quenched randomness. We show that the effects of intersample quenched randomness are strongly growing, but not yet dominant, at the system sizes probed by exact-diagonalization studies on random models. Thus, the observed finite-size critical scaling collapses in such studies appear to be in a preasymptotic regime near the nonrandom universality class, but showing signs of the initial crossover towards the external-randomness-dominated universality class. Our results provide an explanation for why exact-diagonalization studies on random models see an apparent scaling near the transition while also obtaining finite-size scaling exponents that strongly violate Harris-Chayes bounds that apply to disorder-driven transitions. We also show that the MBL phase is more stable for the quasiperiodic model as compared to the random one, and the transition in the quasiperiodic model suffers less from certain finite-size effects.

  5. Influence of the random walk finite step on the first-passage probability

    NASA Astrophysics Data System (ADS)

    Klimenkova, Olga; Menshutin, Anton; Shchur, Lev

    2018-01-01

A well-known connection between the first-passage probability of a random walk and the distribution of the electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length. We measure the first-passage probability of touching an absorbing sphere of radius R in 2D. We find a regular deviation of the first-passage probability from the exact function, which we attribute to the finiteness of the random walk step.
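A sketch of the experiment described above (the step length, radius, and start point are our own choices): simulate fixed-step walks from an interior point until they cross the absorbing circle, and record the absorption angle. In the continuum limit the absorption density is the harmonic measure given by the Poisson kernel; the finite step produces the small systematic deviation the abstract reports.

```python
import numpy as np

def first_passage_angle(step=0.1, R=1.0, r0=(0.5, 0.0), seed=7):
    """Walk with fixed step length from r0 until leaving the circle of
    radius R; return the angle at which the walk is absorbed."""
    rng = np.random.default_rng(seed)
    pos = np.array(r0, float)
    while np.hypot(pos[0], pos[1]) < R:
        theta = rng.uniform(0, 2 * np.pi)       # isotropic fixed-length step
        pos += step * np.array([np.cos(theta), np.sin(theta)])
    return np.arctan2(pos[1], pos[0])

# Starting off-center, absorption should concentrate on the nearer arc,
# as the Poisson kernel predicts.
angles = np.array([first_passage_angle(seed=s) for s in range(400)])
frac_near = np.mean(np.abs(angles) < np.pi / 2)
```

Comparing the empirical angle histogram against the Poisson kernel for several step lengths exposes the finite-step deviation.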

  6. Finite-size scaling in the system of coupled oscillators with heterogeneity in coupling strength

    NASA Astrophysics Data System (ADS)

    Hong, Hyunsuk

    2017-07-01

We consider a mean-field model of coupled phase oscillators with random heterogeneity in the coupling strength. The system that we investigate here is a minimal model that contains randomness in the diverse values of the coupling strength, and it reduces to the original Kuramoto model [Y. Kuramoto, Prog. Theor. Phys. Suppl. 79, 223 (1984), 10.1143/PTPS.79.223] when the coupling heterogeneity disappears. According to one recent paper [H. Hong, H. Chaté, L.-H. Tang, and H. Park, Phys. Rev. E 92, 022122 (2015), 10.1103/PhysRevE.92.022122], when the natural frequency of the oscillator in the system is "deterministically" chosen, with no randomness in it, the system is found to exhibit the finite-size scaling exponent ν̄ = 5/4. Also, the critical exponent for the dynamic fluctuation of the order parameter is found to be γ = 1/4, which is different from the critical exponents for the Kuramoto model with the natural frequencies randomly chosen. Originally, the unusual finite-size scaling behavior of the Kuramoto model was reported by Hong et al. [H. Hong, H. Chaté, H. Park, and L.-H. Tang, Phys. Rev. Lett. 99, 184101 (2007), 10.1103/PhysRevLett.99.184101], where the scaling behavior is found to be characterized by the unusual exponent ν̄ = 5/2. On the other hand, if the randomness in the natural frequency is removed, the finite-size scaling behavior is characterized by a different exponent, ν̄ = 5/4 [H. Hong, H. Chaté, L.-H. Tang, and H. Park, Phys. Rev. E 92, 022122 (2015), 10.1103/PhysRevE.92.022122]. These findings aroused our curiosity and led us to explore the effects of the randomness on the finite-size scaling behavior. In this paper, we pay particular attention to the finite-size scaling and dynamic fluctuation when the randomness in the coupling strength is considered.

  7. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer-level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner-level inductive algorithm to correct the extensions that violate the transitivity property. The inner-level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets with up to 50 items, the new algorithms create close-to-representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement over existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
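To make the bias problem concrete, here is a naive sampler (not the paper's bias-corrected algorithm): draw a random reflexive relation and force transitivity with the Warshall closure. The closure step is exactly the kind of correction that skews the resulting distribution away from uniform, which is what the paper's three correction algorithms address.

```python
import itertools
import random

def random_quasi_order(n, p=0.3, seed=None):
    """Naive sampler: random reflexive relation, then transitive closure
    (Warshall). The closure introduces a sampling bias toward relations
    with many pairs; it is NOT a uniform sampler over quasi-orders."""
    rng = random.Random(seed)
    rel = [[i == j or rng.random() < p for j in range(n)] for i in range(n)]
    # Warshall closure: k is the outermost index.
    for k, i, j in itertools.product(range(n), repeat=3):
        rel[i][j] = rel[i][j] or (rel[i][k] and rel[k][j])
    return rel

def is_quasi_order(rel):
    """Check reflexivity and transitivity."""
    n = len(rel)
    reflexive = all(rel[i][i] for i in range(n))
    transitive = all(not (rel[i][k] and rel[k][j]) or rel[i][j]
                     for i, k, j in itertools.product(range(n), repeat=3))
    return reflexive and transitive

q = random_quasi_order(6, seed=42)
```

Every output is a valid quasi-order, but dense relations are heavily over-represented relative to the uniform distribution the simulation studies require.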

  8. Toward a Principled Sampling Theory for Quasi-Orders.

    PubMed

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer-level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner-level inductive algorithm to correct the extensions that violate the transitivity property. The inner-level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets with up to 50 items, the new algorithms create close-to-representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement over existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.

  9. Finite-Difference Modeling of Seismic Wave Scattering in 3D Heterogeneous Media: Generation of Tangential Motion from an Explosion Source

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.

    2015-12-01

    One challenging task in explosion seismology is the development of physical models that explain the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of the Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography is necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained; however, small-scale heterogeneity is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first operates in the spatial domain, essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second method consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method is capable of generating distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is only present in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0-10 Hz). This occurs partially through S-wave refraction, but mostly through P-S and Rg-S wave conversions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
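The second, wavenumber-domain method lends itself to a compact sketch. The 1D fragment below filters white noise with a Gaussian-shaped amplitude spectrum and rescales it to a target standard deviation, roughly in the spirit of Frankel and Clayton (1986). The function name, the 1D restriction, and the exact spectral constant are illustrative assumptions, not the authors' 3D implementation.

```python
import numpy as np

def gaussian_random_medium(n, dx, corr_len, sigma, seed=0):
    """1D velocity perturbation with Gaussian-type autocorrelation, built by
    filtering white noise in the wavenumber domain.  corr_len is the
    correlation length, sigma the target standard deviation of velocity."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    # square root of a Gaussian power spectrum exp(-k^2 a^2 / 4)
    amplitude = np.exp(-(k * corr_len) ** 2 / 8.0)
    noise = np.fft.fft(rng.standard_normal(n))
    field = np.real(np.fft.ifft(noise * amplitude))
    return sigma * field / field.std()        # rescale to the target sigma
```

A von Karman medium would replace the Gaussian amplitude filter with the corresponding power-law spectrum, which adds the Hurst exponent mentioned in the abstract as a third parameter.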

  10. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.

  11. Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs

    NASA Astrophysics Data System (ADS)

    van der Hofstad, Remco; Kliem, Sandra; van Leeuwaarden, Johan S. H.

    2018-04-01

    Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299-2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3, 4), the sequence of clusters ordered in decreasing size and multiplied through by n^{-(τ-2)/(τ-1)} converges as n → ∞ to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237-269, 2001) for the Erdős-Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.

  12. Random-field-induced disordering mechanism in a disordered ferromagnet: Between the Imry-Ma and the standard disordering mechanism

    NASA Astrophysics Data System (ADS)

    Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe

    2017-12-01

    Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains at dimensions two and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism only requires infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, where the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.

  13. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
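As context for the Bayesian extension, the standard point-estimate KLE/PCA step that the record starts from can be sketched as follows. The matrix Bingham posterior and the Gibbs sampler are not reproduced here, and the Brownian-motion-like test data are merely inspired by the record's illustration.

```python
import numpy as np

def sample_kle(data, n_modes):
    """Point-estimate KLE/PCA: SVD of the centered data matrix.
    The Bayesian procedure in the record replaces this fixed orthonormal
    basis with a matrix-Bingham posterior over orthonormal matrices."""
    mean = data.mean(axis=0)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    basis = Vt[:n_modes].T            # orthonormal KLE modes (columns)
    coeffs = (data - mean) @ basis    # projections onto the modes
    return mean, basis, coeffs

rng = np.random.default_rng(1)
# Brownian-motion-like paths: cumulative sums of white noise
paths = np.cumsum(rng.standard_normal((200, 64)), axis=1)
mean, basis, coeffs = sample_kle(paths, n_modes=4)
```

With far fewer samples than dimensions (say 20 paths of length 64), the leading modes become visibly noisy, which is precisely the sampling error the Bayesian treatment quantifies.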

  14. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.

  15. Knot probabilities in random diagrams

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Chapman, Harrison; Mastin, Matt

    2016-10-01

    We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10 crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data shows a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.

  16. Compressing random microstructures via stochastic Wang tilings.

    PubMed

    Novák, Jan; Kučerová, Anna; Zeman, Jan

    2012-10-01

    This Rapid Communication presents a stochastic Wang tiling-based technique to compress or reconstruct disordered microstructures on the basis of given spatial statistics. Unlike existing approaches based on a single unit cell, it utilizes a finite set of tiles assembled by a stochastic tiling algorithm, thereby allowing long-range orientation orders to be reproduced accurately and in a computationally efficient manner. Although the basic features of the method are demonstrated for a two-dimensional particulate suspension, the present framework is fully extensible to generic multidimensional media.

  17. A fast Karhunen-Loeve transform for a class of random processes

    NASA Technical Reports Server (NTRS)

    Jain, A. K.

    1976-01-01

    It is shown that for a class of finite first-order Markov signals, the Karhunen-Loeve (KL) transform for data compression is a set of periodic sine functions if the boundary values of the signal are fixed or known. These sine functions are shown to be related to the Fourier transform, so that a fast Fourier transform algorithm can be used to implement the KL transform. Extension to two dimensions, with reference to images with a separable covariance function, is shown.
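The claim underlying the fast transform can be checked numerically: with fixed boundary values, the precision (inverse covariance) matrix of a first-order Markov signal is tridiagonal Toeplitz, and sine vectors are its exact eigenvectors, hence the KL basis. A small sketch, where the size and the correlation parameter ρ are arbitrary choices:

```python
import numpy as np

n, rho = 32, 0.95
# Precision matrix of the interior of a first-order Markov (AR(1)) signal
# with known boundary values: symmetric tridiagonal Toeplitz.
Q = ((1 + rho**2) * np.eye(n)
     - rho * (np.eye(n, k=1) + np.eye(n, k=-1)))

# Sine vectors are eigenvectors of any symmetric tridiagonal Toeplitz matrix,
# with eigenvalues (1 + rho^2) - 2*rho*cos(k*pi/(n+1)).
j = np.arange(1, n + 1)
for k in (1, 5, 17):
    v = np.sin(np.pi * k * j / (n + 1))
    lam = (1 + rho**2) - 2 * rho * np.cos(np.pi * k / (n + 1))
    assert np.allclose(Q @ v, lam * v, atol=1e-10)
```

Because the eigenvectors are sines independent of ρ, the transform coefficients can be computed with a fast sine transform (an FFT variant), which is the speedup the record refers to.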

  18. Equivalent Linearization Analysis of Geometrically Nonlinear Random Vibrations Using Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2002-01-01

    Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
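For a single-degree-of-freedom Duffing oscillator under white noise, the force-error minimization idea reduces to a scalar fixed-point iteration on the response variance. The following sketch uses the usual Gaussian closure; the plate problem in the record is of course multi-degree-of-freedom, and all parameter values here are illustrative.

```python
import math

def duffing_rms(omega=1.0, zeta=0.05, eps=0.5, S0=0.01, tol=1e-12):
    """Equivalent linearization of x'' + 2*zeta*omega*x' + omega^2*x + eps*x^3 = w(t)
    under white noise with two-sided PSD S0.  Gaussian closure gives the
    equivalent stiffness k_eq = omega^2 + 3*eps*sigma^2; the linear variance
    formula sigma^2 = pi*S0 / (2*zeta*omega*k_eq) is iterated to a fixed point."""
    var = math.pi * S0 / (2 * zeta * omega * omega**2)   # linear (eps = 0) start
    while True:
        k_eq = omega**2 + 3 * eps * var                  # equivalent stiffness
        new = math.pi * S0 / (2 * zeta * omega * k_eq)
        if abs(new - var) < tol:
            return math.sqrt(new)
        var = new
```

The hardening cubic term raises the equivalent stiffness, so the converged RMS displacement is smaller than the purely linear prediction, the qualitative behavior the root-mean-square comparisons in the record exhibit.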

  19. Exact solution of two interacting run-and-tumble random walkers with finite tumble duration

    NASA Astrophysics Data System (ADS)

    Slowman, A. B.; Evans, M. R.; Blythe, R. A.

    2017-09-01

    We study a model of interacting run-and-tumble random walkers operating under mutual hardcore exclusion on a one-dimensional lattice with periodic boundary conditions. We incorporate a finite, Poisson-distributed, tumble duration so that a particle remains stationary whilst tumbling, thus generalising the persistent random walker model. We present the exact solution for the nonequilibrium stationary state of this system in the case of two random walkers. We find this to be characterised by two lengthscales, one arising from the jamming of approaching particles, and the other from one particle moving when the other is tumbling. The first of these lengthscales vanishes in a scaling limit where the continuous-space dynamics is recovered whilst the second remains finite. Thus the nonequilibrium stationary state reveals a rich structure of attractive, jammed and extended pieces.

  20. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    A quantitative error analysis for the simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.

  1. Bridges for Pedestrians with Random Parameters using the Stochastic Finite Elements Analysis

    NASA Astrophysics Data System (ADS)

    Szafran, J.; Kamiński, M.

    2017-02-01

    The main aim of this paper is to present a Stochastic Finite Element Method analysis with reference to principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. They are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young modulus of structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of traditional FEM is also shown using the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around their mean values enable the determination of the structural response and, thanks to the Least Squares Method, its final probabilistic moments.
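The Response Function Method described above can be illustrated on a scalar toy problem: a handful of deterministic runs around the mean parameter value, a least-squares polynomial response surface, and probabilistic moments read off that surface. The closed-form deflection formula below stands in for a full FEM run, and all numbers are illustrative.

```python
import numpy as np

def deflection(E, q=5e3, L=20.0, I=8e-3):
    """Stand-in for one deterministic FEM run: midspan deflection of a
    simply supported span under uniform load q (illustrative formula)."""
    return 5 * q * L**4 / (384 * E * I)

E_mean = 210e9      # mean Young modulus [Pa]
cov = 0.05          # coefficient of variation of E

# Response Function Method: a few deterministic runs around the mean value...
ratios = np.array([0.9, 0.95, 1.0, 1.05, 1.1])
poly = np.polyfit(ratios, deflection(ratios * E_mean), deg=2)   # least squares

# ...then probabilistic moments from the fitted response surface
rng = np.random.default_rng(0)
E_samples = rng.normal(E_mean, cov * E_mean, 100_000)
d = np.polyval(poly, E_samples / E_mean)
mean_d, std_d = d.mean(), d.std()
```

Fitting in the normalized variable E/E_mean keeps the least-squares problem well conditioned; only five deterministic "FEM runs" are needed for the quadratic surface.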

  2. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
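The finite population Bayesian bootstrap that this work extends can be sketched, in its simplest equal-probability form, as a Polya urn that grows the observed sample into a synthetic population. The handling of stratification, clustering, and unequal selection probabilities that the paper actually contributes is not shown here.

```python
import random

def polya_urn_population(sample, N, seed=None):
    """Finite population Bayesian bootstrap sketch: grow the observed sample
    into a synthetic population of size N with a Polya urn, where each new
    unit copies a unit drawn uniformly from the current urn.  This is the
    equal-probability special case only."""
    rng = random.Random(seed)
    urn = list(sample)
    while len(urn) < N:
        urn.append(rng.choice(urn))
    return urn
```

Repeating the urn scheme many times yields draws of the population from the posterior predictive distribution; each synthetic population can then be analyzed as if it were a simple random sample's parent.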

  3. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608

  4. Adaptive disturbance compensation finite control set optimal control for PMSM systems based on sliding mode extended state observer

    NASA Astrophysics Data System (ADS)

    Wu, Yun-jie; Li, Guo-fei

    2018-01-01

    Based on the sliding mode extended state observer (SMESO) technique, an adaptive disturbance compensation finite control set optimal control (FCS-OC) strategy is proposed for permanent magnet synchronous motor (PMSM) systems driven by a voltage source inverter (VSI). To improve the robustness of the finite control set optimal control strategy, an SMESO is proposed to estimate the output-effect disturbance. The estimated value is fed back to the finite control set optimal controller to implement disturbance compensation. Theoretical analysis indicates that the designed SMESO converges in finite time. The simulation results illustrate that the proposed adaptive disturbance compensation FCS-OC achieves better dynamic response in the presence of disturbance.
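The structure of the scheme, an observer estimating an output-effect disturbance whose estimate is fed into a finite-control-set optimizer, can be sketched on a scalar toy plant. A linear extended state observer stands in for the paper's sliding-mode design, and the control set, gains, and plant model are illustrative assumptions, not the paper's PMSM system.

```python
def simulate(T=2000, dt=1e-3, b=1.0, d_true=0.8):
    """Toy finite-control-set loop on a scalar plant x' = b*u + d.
    A linear extended state observer (stand-in for the sliding-mode ESO)
    estimates the state x and the disturbance d; the controller picks, at
    each step, the member of a finite control set that minimizes the
    one-step-ahead tracking error with disturbance compensation."""
    U = [-2.0, -1.0, 0.0, 1.0, 2.0]       # finite control set (e.g. inverter levels)
    l1, l2 = 60.0, 900.0                  # observer gains: poles at s = -30 (double)
    x = x_hat = d_hat = 0.0
    ref = 1.0
    for _ in range(T):
        # FCS optimal control: predict one step ahead for each candidate input
        u = min(U, key=lambda v: abs(ref - (x_hat + dt * (b * v + d_hat))))
        # plant
        x += dt * (b * u + d_true)
        # extended state observer (Euler discretization)
        e = x - x_hat
        x_hat += dt * (b * u + d_hat + l1 * e)
        d_hat += dt * (l2 * e)
    return x, d_hat
```

The disturbance estimate converges to the true constant disturbance, and the state settles in a small limit cycle around the reference, which is the behavior the compensation is meant to produce.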

  5. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
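The second-moment philosophy can be illustrated on the smallest possible "structure", a single spring u = F/k with random stiffness: first-order perturbation gives the response moments with minimal computation, and Monte Carlo confirms them when the scale of randomness is small, mirroring the applicability condition stated in the abstract. All numbers are illustrative.

```python
import random, math, statistics

F, k_mean, k_std = 1000.0, 5e4, 2.5e3     # load and random stiffness (5% cov)

# Second-moment (first-order perturbation) estimates for u = F / k:
# E[u] ~ F/mu_k, Var[u] ~ (du/dk)^2 * Var[k] with du/dk = -F/k^2 at the mean.
u_mean_pfem = F / k_mean
u_std_pfem = (F / k_mean**2) * k_std

# Monte Carlo check
rng = random.Random(0)
samples = [F / rng.gauss(k_mean, k_std) for _ in range(200_000)]
u_mean_mc = statistics.fmean(samples)
u_std_mc = statistics.stdev(samples)
```

At 5% coefficient of variation the perturbation moments agree with Monte Carlo to well under a percent on the mean; at much larger randomness the agreement degrades, which is the stated limitation of the method.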

  6. Variational approach to probabilistic finite elements

    NASA Astrophysics Data System (ADS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-08-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  7. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1987-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics, viz. second moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  8. Average dynamics of a finite set of coupled phase oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dima, Germán C., E-mail: gdima@df.uba.ar; Mindlin, Gabriel B.

    2014-06-15

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate.
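A minimal numerical experiment in this spirit: integrate a finite set of coupled phase oscillators (a Kuramoto system is used here as a generic stand-in for the driven excitable units of the record) and read off the macroscopic order parameter that the infinitely large mean-field description treats as its state variable. All parameter values are illustrative.

```python
import math, random, cmath

def kuramoto_order(N, K=2.0, sigma=0.1, steps=4000, dt=0.01, seed=0):
    """Euler-integrate N coupled phase oscillators in mean-field (Kuramoto)
    form and return the final order parameter r = |mean(exp(i*theta))|,
    the macroscopic variable of the N -> infinity description."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, sigma) for _ in range(N)]          # natural frequencies
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(N)]    # random initial phases
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / N          # mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / N)
```

Well above the synchronization threshold, both a small set (N = 10) and a larger one (N = 200) lock onto the coherent state the mean-field equations predict, which is the kind of finite-versus-infinite comparison the record describes.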

  9. Average dynamics of a finite set of coupled phase oscillators

    PubMed Central

    Dima, Germán C.; Mindlin, Gabriel B.

    2014-01-01

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate. PMID:24985426

  10. Average dynamics of a finite set of coupled phase oscillators.

    PubMed

    Dima, Germán C; Mindlin, Gabriel B

    2014-06-01

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate.

  11. Analysis of random structure-acoustic interaction problems using coupled boundary element and finite element methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Pates, Carl S., III

    1994-01-01

    A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simplistic model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended for random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.

  12. A probabilistic analysis of electrical equipment vulnerability to carbon fibers

    NASA Technical Reports Server (NTRS)

    Elber, W.

    1980-01-01

    The statistical problems of airborne carbon fibers falling onto electrical circuits were idealized and analyzed. The probability of contact between randomly oriented finite-length fibers and sets of parallel conductors with various spacings and lengths was developed theoretically. The probability of multiple fibers joining to bridge a single gap between conductors, or forming continuous networks, is included. From these theoretical considerations, practical statistical analyses to assess the likelihood of electrical malfunctions were produced. The statistics obtained were confirmed by comparison with results of controlled experiments.
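The basic geometric probability in this setup is easy to reproduce by Monte Carlo: a fiber of length L with uniformly random center and orientation bridges two adjacent parallel conductors a distance d apart when its span across the gap reaches both. The function below is an illustrative reconstruction, not the report's analysis.

```python
import math, random

def bridge_probability(L, d, trials=200_000, seed=0):
    """Monte Carlo estimate of the probability that a randomly placed and
    oriented fiber of length L simultaneously touches two parallel
    conductors a distance d apart (center offset uniform between them)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        theta = rng.uniform(0.0, math.pi)      # fiber orientation
        y0 = rng.uniform(0.0, d)               # fiber center between the conductors
        h = 0.5 * L * abs(math.sin(theta))     # half the fiber's span across the gap
        if y0 - h <= 0.0 and y0 + h >= d:      # touches both conductors
            hits += 1
    return hits / trials
```

For L = 2d the corresponding integral can be done in closed form, giving (2√3 − 2π/3)/π ≈ 0.436, which the simulation reproduces.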

  13. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
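The random finite set machinery behind the labeled multi-Bernoulli filter is easiest to see in its single-target ancestor, the Bernoulli filter, whose measurement update jointly revises a probability of target existence and a spatial density. Below is a minimal single-measurement, gridded sketch, not the article's multi-target algorithm; all parameters are illustrative.

```python
import math

def bernoulli_update(r, grid, s, z, pd=0.9, clutter=0.1, sigma=0.5):
    """One measurement update of a Bernoulli (random finite set) filter on a
    1D grid.  r: prior probability of target existence; s: prior spatial
    density over grid (integrates to 1); z: one received measurement;
    pd: detection probability; clutter: clutter intensity per unit length;
    sigma: measurement noise standard deviation."""
    dx = grid[1] - grid[0]
    g = [math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
         for x in grid]
    I = sum(gi * si for gi, si in zip(g, s)) * dx      # integral of g(z|x) s(x) dx
    delta = pd - pd * I / clutter                      # single-measurement case
    r_post = r * (1 - delta) / (1 - r * delta)         # posterior existence
    s_post = [(1 - pd) * si + pd * gi * si / clutter for gi, si in zip(g, s)]
    norm = sum(s_post) * dx
    return r_post, [v / norm for v in s_post]
```

A detection landing where the spatial density is concentrated raises the existence probability well above its prior, the basic inferential step that the labeled multi-Bernoulli filter performs for many targets at once.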

  14. NESSUS/NASTRAN Interface

    NASA Technical Reports Server (NTRS)

    Millwater, Harry; Riha, David

    1996-01-01

    The NESSUS probabilistic analysis computer program has been developed with a built-in finite element analysis program, NESSUS/FEM. However, the NESSUS/FEM program is specialized for engine structures and may not contain sufficient features for other applications. In addition, users often become well acquainted with a particular finite element code and want to use that code for probabilistic structural analysis. For these reasons, this work was undertaken to develop an interface between NESSUS and NASTRAN such that NASTRAN can be used for the finite element analysis and NESSUS can be used for the probabilistic analysis. In addition, NESSUS was restructured such that other finite element codes could be more easily coupled with NESSUS. NESSUS has been enhanced such that NESSUS will modify the NASTRAN input deck for a given set of random variables, run NASTRAN, and read the NASTRAN results. The coordination between the two codes is handled automatically. The work described here was implemented within NESSUS 6.2, which was delivered to NASA in September 1995. The code runs on Unix machines: Cray, HP, Sun, SGI and IBM. The new capabilities have been implemented such that a user familiar with NESSUS using NESSUS/FEM and NASTRAN can immediately use NESSUS with NASTRAN. In other words, the interface with NASTRAN has been implemented in an analogous manner to the interface with NESSUS/FEM. Only finite element specific input has been changed. This manual is written as an addendum to the existing NESSUS 6.2 manuals. We assume users have access to NESSUS manuals and are familiar with the operation of NESSUS, including probabilistic finite element analysis. Update pages to the NESSUS PFEM manual are contained in Appendix E. The finite element features of the code and the probabilistic analysis capabilities are summarized.

  15. Probabilistic boundary element method

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Raveendra, S. T.

    1989-01-01

    The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM) which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite elements, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible and therefore volume modeling is generally required.

  16. Realistic noise-tolerant randomness amplification using finite number of devices.

    PubMed

    Brandão, Fernando G S L; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-21

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  17. Realistic noise-tolerant randomness amplification using finite number of devices

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-04-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.

  18. Realistic noise-tolerant randomness amplification using finite number of devices

    PubMed Central

    Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna

    2016-01-01

    Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology. PMID:27098302

  19. Phase transition in the countdown problem

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Luque, Bartolo

    2012-07-01

    We present a combinatorial decision problem, inspired by the celebrated quiz show called Countdown, that involves the computation of a given target number T from a set of k randomly chosen integers along with a set of arithmetic operations. We find that the probability of winning the game evidences a threshold phenomenon that can be understood in terms of an algorithmic phase transition as a function of the set size k. Numerical simulations show that such probability sharply transitions from zero to one at some critical value of the control parameter, hence separating the algorithm's parameter space into different phases. We also find that the system is maximally efficient close to the critical point. We derive analytical expressions that match the numerical results for finite size and permit us to extrapolate the behavior in the thermodynamic limit.
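
    A brute-force decision procedure for the game — can the target T be reached from the k integers using +, −, ×, and exact division — makes the combinatorial search space concrete. This is a generic sketch, not the authors' algorithm; the integers-only division rule is one common convention of the quiz show.

```python
def reachable(numbers, target):
    # Depth-first search over game states: repeatedly replace two numbers
    # by the result of one operation (+, -, *, exact /) until the target
    # appears or all states are exhausted. States are memoized as sorted
    # tuples to prune the search.
    seen = set()

    def dfs(state):
        if target in state:
            return True
        if len(state) == 1 or state in seen:
            return False
        seen.add(state)
        for i in range(len(state)):
            for j in range(len(state)):
                if i == j:
                    continue
                a, b = state[i], state[j]
                rest = [state[k] for k in range(len(state)) if k not in (i, j)]
                outcomes = {a + b, a - b, a * b}
                if b != 0 and a % b == 0:      # exact division only
                    outcomes.add(a // b)
                for out in outcomes:
                    if dfs(tuple(sorted(rest + [out]))):
                        return True
        return False

    return dfs(tuple(sorted(numbers)))
```

    For example, `reachable([1, 2, 3], 7)` holds via 1 + 2 × 3, while `reachable([2, 2], 5)` does not; estimating the winning probability as a function of k amounts to averaging this decision over random draws.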

  20. Simulation study on heat conduction of a nanoscale phase-change random access memory cell.

    PubMed

    Kim, Junho; Song, Ki-Bong

    2006-11-01

    We have investigated heat transfer characteristics of a nano-scale phase-change random access memory (PRAM) cell using finite element method (FEM) simulation. Our PRAM cell is based on ternary chalcogenide alloy, Ge2Sb2Te5 (GST), which is used as a recording layer. For contact area of 100 x 100 nm2, simulations of crystallization and amorphization processes were carried out. Physical quantities such as electric conductivity, thermal conductivity, and specific heat were treated as temperature-dependent parameters. Through many simulations, it is concluded that one can reduce set current by decreasing both electric conductivities of amorphous GST and crystalline GST, and in addition to these conditions by decreasing electric conductivity of molten GST one can also reduce reset current significantly.

  1. Finite Element Analysis of the Random Response Suppression of Composite Panels at Elevated Temperatures using Shape Memory Alloy Fibers

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Zhong, Z. W.; Mei, Chuh

    1994-01-01

    A feasibility study on the use of shape memory alloys (SMA) for suppression of the random response of composite panels due to acoustic loads at elevated temperatures is presented. The constitutive relations for a composite lamina with embedded SMA fibers are developed. The finite element governing equations and the solution procedures for a composite plate subjected to combined acoustic and thermal loads are presented. Solutions include: 1) Critical buckling temperature; 2) Flat panel random response; 3) Thermal postbuckling deflection; 4) Random response of a thermally buckled panel. The preliminary results demonstrate that the SMA fibers can completely eliminate the thermal postbuckling deflection and significantly reduce the random response at elevated temperatures.

  2. Random covering of the circle: the configuration-space of the free deposition process

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry

    2003-12-01

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting in selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
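
    The covering configuration is easy to probe numerically: with arcs appended clockwise, the circle is covered exactly when every cyclic gap between consecutive arc starting points is smaller than the rod length s. A minimal Monte Carlo sketch, with arbitrary illustrative parameter values:

```python
import random

def covers_circle(points, s):
    # Arcs of length s appended clockwise cover the unit circle iff every
    # cyclic gap between consecutive starting points is smaller than s.
    pts = sorted(points)
    n = len(pts)
    gaps = [(pts[(i + 1) % n] - pts[i]) % 1.0 for i in range(n)]
    return max(gaps) < s

# Monte Carlo estimate of the covering probability at point density
# n * s = 4 (all values are arbitrary illustrations).
random.seed(1)
n, s, trials = 40, 0.1, 2000
p_cover = sum(
    covers_circle([random.random() for _ in range(n)], s) for _ in range(trials)
) / trials
```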

  3. Angular Random Walk Estimation of a Time-Domain Switching Micromachined Gyroscope

    DTIC Science & Technology

    2016-10-19

    The report covers parametric system identification based on time-domain switching and finite element modeling of the resonator used with the TDSMG. Based on finite element simulations of the employed resonator, it is found that the effect of thermomechanical noise is on par with 10 ps of timing

  4. Finite plateau in spectral gap of polychromatic constrained random networks

    NASA Astrophysics Data System (ADS)

    Avetisov, V.; Gorsky, A.; Nechaev, S.; Valba, O.

    2017-12-01

    We consider critical behavior in the ensemble of polychromatic Erdős-Rényi networks and regular random graphs, where network vertices are painted in different colors. The links can be randomly removed and added to the network subject to the condition of the vertex degree conservation. In these constrained graphs we run the Metropolis procedure, which favors the connected unicolor triads of nodes. Changing the chemical potential, μ, of such triads, we find, over a wide region of μ, the formation of a finite plateau in the number of intercolor links, which exactly matches the finite plateau in the network algebraic connectivity (the value of the first nonvanishing eigenvalue of the Laplacian matrix, λ2). We claim that at the plateau the spontaneously broken Z2 symmetry is restored by the mechanism of modes collectivization in clusters of different colors. The phenomenon of finite plateau formation also holds for polychromatic networks with M ≥ 2 colors. The behavior of polychromatic networks is analyzed via the spectral properties of their adjacency and Laplacian matrices.

  5. Efficient Z gates for quantum computing

    NASA Astrophysics Data System (ADS)

    McKay, David C.; Wood, Christopher J.; Sheldon, Sarah; Chow, Jerry M.; Gambetta, Jay M.

    2017-08-01

    For superconducting qubits, microwave pulses drive rotations around the Bloch sphere. The phase of these drives can be used to generate zero-duration arbitrary virtual Z gates, which, combined with two Xπ/2 gates, can generate any SU(2) gate. Here we show how to best utilize these virtual Z gates to both improve algorithms and correct pulse errors. We perform randomized benchmarking using a Clifford set of Hadamard and Z gates and show that the error per Clifford is reduced versus a set consisting of standard finite-duration X and Y gates. Z gates can correct unitary rotation errors for weakly anharmonic qubits as an alternative to pulse-shaping techniques such as derivative removal by adiabatic gate (DRAG). We investigate leakage and show that a combination of DRAG pulse shaping to minimize leakage and Z gates to correct rotation errors realizes a 13.3 ns Xπ/2 gate characterized by low error [1.95(3) × 10^-4] and low leakage [3.1(6) × 10^-6]. Ultimately leakage is limited by the finite temperature of the qubit, but this limit is two orders of magnitude smaller than pulse errors due to decoherence.
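
    The decomposition the abstract describes — any SU(2) gate from two finite-duration Xπ/2 pulses plus zero-duration virtual Z rotations — can be checked directly with 2 × 2 matrices. A small sketch; exact phase conventions vary between papers, and this uses one common choice:

```python
import numpy as np

def rz(theta):
    # Virtual Z rotation: implemented in software as a frame change,
    # so it takes zero time and is essentially error-free.
    return np.diag([1.0, np.exp(1j * theta)])

# The only finite-duration physical pulse needed: a pi/2 rotation about X.
RX90 = np.array([[1.0, -1.0j], [-1.0j, 1.0]]) / np.sqrt(2.0)

def u3(theta, phi, lam):
    # Any single-qubit gate, up to global phase, from two X_{pi/2} pulses
    # sandwiched between three virtual Z rotations.
    return rz(phi) @ RX90 @ rz(theta) @ RX90 @ rz(lam)

# Example: in this convention, Hadamard = Z(pi/2) . X_{pi/2} . Z(pi/2)
# exactly (not just up to phase).
H = rz(np.pi / 2) @ RX90 @ rz(np.pi / 2)
```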

  6. Continuous family of finite-dimensional representations of a solvable Lie algebra arising from singularities

    PubMed Central

    Yau, Stephen S.-T.

    1983-01-01

    A natural mapping from the set of complex analytic isolated hypersurface singularities to the set of finite-dimensional Lie algebras is first defined. It is proven that the image under this natural mapping is contained in the set of solvable Lie algebras. This approach gives rise to a continuous family of inequivalent finite-dimensional representations of a solvable Lie algebra. PMID:16593401

  7. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  8. Hybrid phase transition into an absorbing state: Percolation and avalanches

    NASA Astrophysics Data System (ADS)

    Lee, Deokjae; Choi, S.; Stippinger, M.; Kertész, J.; Kahng, B.

    2016-04-01

    Interdependent networks are more fragile under random attacks than simplex networks, because interlayer dependencies lead to cascading failures and finally to a sudden collapse. This is a hybrid phase transition (HPT), meaning that at the transition point the order parameter has a jump but there are also critical phenomena related to it. Here we study these phenomena on the Erdős-Rényi and the two-dimensional interdependent networks and show that the hybrid percolation transition exhibits two kinds of critical behaviors: divergence of the fluctuations of the order parameter and power-law size distribution of finite avalanches at a transition point. At the transition point global or "infinite" avalanches occur, while the finite ones have a power-law size distribution; thus the avalanche statistics also has the nature of an HPT. The exponent βm of the order parameter is 1/2 under general conditions, while the value of the exponent γm characterizing the fluctuations of the order parameter depends on the system. The critical behavior of the finite avalanches can be described by another set of exponents, βa and γa. These two critical behaviors are coupled by the scaling law 1 − βm = γa.

  9. Mixing rates and limit theorems for random intermittent maps

    NASA Astrophysics Data System (ADS)

    Bahsoun, Wael; Bose, Christopher

    2016-04-01

    We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps {Tα}, using the full parameter range 0 < α < ∞ in general. We derive a number of results around a common theme that illustrates in detail how the fastest-mixing constituent map (i.e. the one with smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞, we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.

  10. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
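
    Because a finite normal mixture has a strictly increasing CDF, its quantiles can be computed by simple root finding. The sketch below uses bisection; it illustrates the computation generically and is not the paper's specific procedure.

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mixture_quantile(q, weights, mus, sigmas, lo=-50.0, hi=50.0, tol=1e-10):
    # The mixture CDF F(x) = sum_i w_i * Phi((x - mu_i) / sigma_i) is
    # strictly increasing, so the q-quantile is the unique root of
    # F(x) = q, found here by bisection.
    def cdf(x):
        return sum(w * norm_cdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Median of an equal-weight mixture of N(-1, 1) and N(1, 1): zero by symmetry.
median = mixture_quantile(0.5, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```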

  11. Extended self-similarity in the two-dimensional metal-insulator transition

    NASA Astrophysics Data System (ADS)

    Moriconi, L.

    2003-09-01

    We show that extended self-similarity, a scaling phenomenon first observed in classical turbulent flows, holds for a two-dimensional metal-insulator transition that belongs to the universality class of random Dirac fermions. Deviations from multifractality, which in turbulence are due to the dominance of diffusive processes at small scales, appear in the condensed-matter context as a large-scale, finite-size effect related to the imposition of an infrared cutoff in the field theory formulation. We propose a phenomenological interpretation of extended self-similarity in the metal-insulator transition within the framework of the random β-model description of multifractal sets. As a natural step, our discussion is bridged to the analysis of strange attractors, where crossovers between multifractal and nonmultifractal regimes are found and extended self-similarity turns out to be verified as well.

  12. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrarily small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
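
    The RB decay the abstract refers to fits survival probabilities to S(m) = A·p^m + B over circuit length m, then converts the decay parameter p to the error rate r = (d − 1)(1 − p)/d. A toy sketch on synthetic, noise-free data; the parameter values are arbitrary illustrations:

```python
import numpy as np

# Synthetic, noise-free RB data: survival probability S(m) = A p^m + B
# over circuit length m.
A, B, p = 0.45, 0.5, 0.995
m = np.arange(1, 200)
S = A * p**m + B

# For this model the successive differences satisfy
# S(m+1) - S(m) = A (p - 1) p^m, so their ratio recovers p directly.
# (Real, noisy data would instead need a nonlinear least-squares fit.)
diffs = np.diff(S)
p_hat = float(np.mean(diffs[1:] / diffs[:-1]))

# Single-qubit (d = 2) RB error rate: r = (d - 1)(1 - p) / d.
d = 2
r = (d - 1) * (1 - p_hat) / d
```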

  13. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors

    NASA Astrophysics Data System (ADS)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b^2/N = α^2/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.

  14. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.

    PubMed

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.
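
    The Poisson-to-random-matrix transition in banded random matrices can be observed numerically through the consecutive-spacing ratio, whose ensemble mean is approximately 0.386 for Poisson and 0.531 for GOE statistics. A small sketch; the matrix size and bandwidths are arbitrary, and this uses the standard spacing-ratio diagnostic rather than the FRCG machinery itself:

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_spacing_ratio(bandwidth, n=400):
    # Symmetric random matrix with Gaussian entries inside the band
    # |i - j| <= bandwidth and zeros outside.
    m = rng.normal(size=(n, n))
    m = (m + m.T) / 2.0
    i, j = np.indices((n, n))
    m[np.abs(i - j) > bandwidth] = 0.0
    ev = np.linalg.eigvalsh(m)                # ascending eigenvalues
    s = np.diff(ev[n // 4: 3 * n // 4])       # spacings in the spectral bulk
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

# Mean consecutive-spacing ratio: ~0.386 for Poisson, ~0.531 for GOE.
r_narrow = mean_spacing_ratio(bandwidth=1)    # d = b^2/N small -> Poisson-like
r_full = mean_spacing_ratio(bandwidth=400)    # full matrix -> GOE statistics
```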

  15. Microscopic and macroscopic instabilities in finitely strained porous elastomers

    NASA Astrophysics Data System (ADS)

    Michel, J. C.; Lopez-Pamies, O.; Ponte Castañeda, P.; Triantafyllidis, N.

    2007-05-01

    The present work is an in-depth study of the connections between microstructural instabilities and their macroscopic manifestations—as captured through the effective properties—in finitely strained porous elastomers. The powerful second-order homogenization (SOH) technique, initially developed for random media, is used for the first time here to study the onset of failure in periodic porous elastomers and the results are compared to more accurate finite element method (FEM) calculations. The influence of different microgeometries (random and periodic), initial porosity, matrix constitutive law and macroscopic load orientation on the microscopic buckling (for periodic microgeometries) and macroscopic loss of ellipticity (for all microgeometries) is investigated in detail. In addition to the above-described stability-based onset-of-failure mechanisms, constraints on the principal solution are also addressed, thus giving a complete picture of the different possible failure mechanisms present in finitely strained porous elastomers.

  16. Spread of information and infection on finite random networks

    NASA Astrophysics Data System (ADS)

    Isham, Valerie; Kaczmarska, Joanna; Nekovee, Maziar

    2011-04-01

    The modeling of epidemic-like processes on random networks has received considerable attention in recent years. While these processes are inherently stochastic, most previous work has been focused on deterministic models that ignore important fluctuations that may persist even in the infinite network size limit. In a previous paper, for a class of epidemic and rumor processes, we derived approximate models for the full probability distribution of the final size of the epidemic, as opposed to only mean values. In this paper we examine via direct simulations the adequacy of the approximate model to describe stochastic epidemics and rumors on several random network topologies: homogeneous networks, Erdős-Rényi (ER) random graphs, Barabási-Albert scale-free networks, and random geometric graphs. We find that the approximate model is reasonably accurate in predicting the probability of spread. However, the position of the threshold and the conditional mean of the final size for processes near the threshold are not well described by the approximate model even in the case of homogeneous networks. We attribute this failure to the presence of other structural properties beyond degree-degree correlations, and in particular clustering, which are present in any finite network but are not incorporated in the approximate model. In order to test this “hypothesis” we perform additional simulations on a set of ER random graphs where degree-degree correlations and clustering are separately and independently introduced using recently proposed algorithms from the literature. Our results show that even strong degree-degree correlations have only weak effects on the position of the threshold and the conditional mean of the final size. On the other hand, the introduction of clustering greatly affects both the position of the threshold and the conditional mean. Similar analysis for the Barabási-Albert scale-free network confirms the significance of clustering on the dynamics of rumor spread. For this network, though, with its highly skewed degree distribution, the addition of positive correlation had a much stronger effect on the final size distribution than was found for the simple random graph.
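
    The stochastic epidemics discussed above can be simulated directly. The sketch below runs a discrete-time SIR (Reed-Frost-type) epidemic on an Erdős-Rényi graph and estimates the probability of a large outbreak from the final-size distribution. All parameter values are illustrative, and this is not the authors' approximate model.

```python
import random

def er_graph(n, p, rng):
    # Erdős-Rényi G(n, p) as an adjacency list.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def sir_final_size(adj, beta, rng):
    # Discrete-time SIR: each infectious node infects each susceptible
    # neighbour independently with probability beta, then recovers.
    # Returns the final number of ever-infected nodes.
    n = len(adj)
    status = ["S"] * n
    seed = rng.randrange(n)
    status[seed] = "I"
    infectious, recovered = [seed], 0
    while infectious:
        new = []
        for u in infectious:
            for v in adj[u]:
                if status[v] == "S" and rng.random() < beta:
                    status[v] = "I"
                    new.append(v)
            status[u] = "R"
            recovered += 1
        infectious = new
    return recovered

rng = random.Random(3)
n = 200
graph = er_graph(n, 4.0 / (n - 1), rng)       # mean degree ~ 4
sizes = [sir_final_size(graph, 0.5, rng) for _ in range(100)]
frac_large = sum(size > 0.1 * n for size in sizes) / len(sizes)
```

    With mean degree 4 and transmission probability 0.5 the process is well above threshold, so the final-size distribution is bimodal: most runs reach a large fraction of the graph, while the rest die out quickly.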

  17. Hydro-mechanical coupled simulation of hydraulic fracturing using the eXtended Finite Element Method (XFEM)

    NASA Astrophysics Data System (ADS)

    Youn, Dong Joon

    This thesis presents the development and validation of an advanced hydro-mechanical coupled finite element program for analyzing hydraulic fracture propagation within unconventional hydrocarbon formations under various conditions. Realistic modeling of hydraulic fracturing is required to improve the understanding and efficiency of the stimulation technique. Such modeling remains highly challenging, however, due to factors including the complexity of fracture propagation mechanisms, the coupled behavior of fracture displacement and fluid pressure, the interactions between pre-existing natural and initiated hydraulic fractures, and the formation heterogeneity of the target reservoir. In this research, an eXtended Finite Element Method (XFEM) scheme is developed allowing for representation of single or multiple fracture propagations without any need for re-meshing. The coupled flows through the fracture are also considered in the program to account for their influence on stresses and deformations along the hydraulic fracture. A sequential coupling scheme is applied to estimate fracture aperture and fluid pressure with the XFEM. The coupled XFEM program is then used to estimate wellbore bottomhole pressure during fracture propagation, and the pressure variations are analyzed to determine the geometry and performance of the hydraulic fracturing, as in a pressure leak-off test. Finally, material heterogeneity is included in the XFEM program to check the effect of random formation property distributions on the hydraulic fracture geometry. Random field theory is used to create random realizations of the material heterogeneity with consideration of the mean, standard deviation, and property correlation length. These analyses lead to probabilistic information on the response of unconventional reservoirs and offer a more scientific approach to risk management for unconventional reservoir stimulation. The new stochastic approach combining XFEM and random field theory is named the eXtended Random Finite Element Method (XRFEM). All the numerical analysis codes in this thesis are written in Fortran 2003, and these codes are applicable as a series of sub-modules within a suite of finite element codes developed by Smith and Griffiths (2004).

  18. The Transition from Comparison of Finite to the Comparison of Infinite Sets: Teaching Prospective Teachers.

    ERIC Educational Resources Information Center

    Tsamir, Pessia

    1999-01-01

    Describes a course in Cantorian Set Theory relating to prospective secondary mathematics teachers' tendencies to overgeneralize from finite to infinite sets. Indicates that when comparing the number of elements in infinite sets, teachers who took the course were more successful and more consistent in their use of a single method than those who…

  19. Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2000-01-01

    A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.

  20. One-Shot Coherence Dilution.

    PubMed

    Zhao, Qi; Liu, Yunchao; Yuan, Xiao; Chitambar, Eric; Ma, Xiongfeng

    2018-02-16

    Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinite identical copies of states. In the nonasymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This Letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost-the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our result can be applied in the analyses of quantum information processing tasks that exploit coherence as resources, such as quantum key distribution and random number generation.

  1. One-Shot Coherence Dilution

    NASA Astrophysics Data System (ADS)

    Zhao, Qi; Liu, Yunchao; Yuan, Xiao; Chitambar, Eric; Ma, Xiongfeng

    2018-02-01

    Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinite identical copies of states. In the nonasymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This Letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost—the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our result can be applied in the analyses of quantum information processing tasks that exploit coherence as resources, such as quantum key distribution and random number generation.

  2. Arbitrarily small amounts of correlation for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de; Nötzel, J., E-mail: janis.noetzel@tum.de

    2013-11-15

    As our main result, we show that in order to achieve the randomness-assisted message and entanglement transmission capacities of a finite arbitrarily varying quantum channel it is not necessary that sender and receiver share (asymptotically perfect) common randomness. Rather, it is sufficient that they each have access to an unlimited number of uses of one part of a correlated bipartite source. This access might be restricted to an arbitrarily small (nonzero) fraction per channel use, without changing the main result. We investigate the notion of common randomness. It turns out that this is a very costly resource: generically, it cannot be obtained just by local processing of a bipartite source. This result underlines the importance of our main result. Also, the asymptotic equivalence of the maximal- and average-error criteria for classical message transmission over finite arbitrarily varying quantum channels is proven. Finally, we prove a simplified symmetrizability condition for finite arbitrarily varying quantum channels.

  3. A scaling law for random walks on networks

    PubMed Central

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-01-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870

  4. Effect of randomness on multi-frequency aeroelastic responses resolved by Unsteady Adaptive Stochastic Finite Elements

    NASA Astrophysics Data System (ADS)

    Witteveen, Jeroen A. S.; Bijl, Hester

    2009-10-01

    The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.

  5. A scaling law for random walks on networks

    NASA Astrophysics Data System (ADS)

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.

  6. A scaling law for random walks on networks.

    PubMed

    Perkins, Theodore J; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-14

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.
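
    The path-distribution picture above can be explored numerically; the following is a minimal sketch in which a walker steps uniformly at random on a small network until it hits a target node (the 4-node network, the uniform stepping probabilities, and the start/target choice are illustrative assumptions, not taken from the paper):

```python
import random
from collections import Counter

def sample_path_length(adj, start, target, rng, max_steps=100_000):
    """Uniform random walk on `adj` until `target` is hit; returns the path length."""
    node, steps = start, 0
    while node != target:
        node = rng.choice(adj[node])
        steps += 1
        if steps >= max_steps:
            break
    return steps

# Toy undirected 4-node network (a cycle plus the chord 0-2); all choices here
# are assumptions of this sketch.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
rng = random.Random(42)
lengths = [sample_path_length(adj, 0, 2, rng) for _ in range(5000)]
dist = Counter(lengths)
mean_len = sum(lengths) / len(lengths)
print("most common path lengths:", dist.most_common(3))
print(f"mean path length: {mean_len:.2f}")
```

A histogram of `lengths` is an empirical version of the path distribution whose functional form (finite, stretched exponential or power law) the paper classifies from the network structure.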

  7. Finite Optimal Stopping Problems: The Seller's Perspective

    ERIC Educational Resources Information Center

    Hemmati, Mehdi; Smith, J. Cole

    2011-01-01

    We consider a version of an optimal stopping problem, in which a customer is presented with a finite set of items, one by one. The customer is aware of the number of items in the finite set and the minimum and maximum possible value of each item, and must purchase exactly one item. When an item is presented to the customer, she or he observes its…

  8. A Wave Chaotic Study of Quantum Graphs with Microwave Networks

    NASA Astrophysics Data System (ADS)

    Fu, Ziyuan

    Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave chaotic properties. I study quantum graphs with a wave-chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped by multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I investigate the random coupling model itself, which is useful for further research.

  9. Learning dependence from samples.

    PubMed

    Seth, Sohan; Príncipe, José C

    2014-01-01

    Mutual information, conditional mutual information and interaction information have been widely used in the scientific literature as measures of dependence, conditional dependence and mutual dependence. However, these concepts suffer from several computational issues: they are difficult to estimate in the continuous domain, the existing regularised estimators are almost always defined only for real or vector-valued random variables, and these measures address what dependence, conditional dependence and mutual dependence imply in terms of the random variables, but not in terms of finite realisations. In this paper, we address the question of what characteristic, given a set of realisations in an arbitrary metric space, makes them dependent, conditionally dependent or mutually dependent. With this novel understanding, we develop new estimators of association, conditional association and interaction association. Attractive properties of these estimators are that they do not require choosing free parameter(s), they are computationally simpler, and they can be applied to arbitrary metric spaces.

  10. Free Vibration of Uncertain Unsymmetrically Laminated Beams

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Goyal, Vijay K.

    2001-01-01

    Monte Carlo simulation and stochastic FEA are used to predict randomness in the free vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations, which may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on the first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set of possible random ply orientations is generated; this data set is assumed to be symmetric. The stochastic finite element model for free vibrations predicts the relation between the randomness in fundamental natural frequencies and the randomness in ply orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, allowing one to determine how sensitive they are to variations in ply angles. The predicted mean-valued fundamental natural frequency squared and the variance of the present model are in good agreement with Monte Carlo simulation. Results also show that variations of plus or minus 5 degrees in ply angles can affect the free vibration response of unsymmetrically and symmetrically laminated beams.
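
    The Monte Carlo approach described above can be caricatured with a single-degree-of-freedom oscillator whose stiffness is perturbed at random; the 1-DOF model, the nominal values, and the ±5% stiffness scatter standing in for ply-angle uncertainty are all simplifying assumptions of this sketch, not the paper's beam element:

```python
import random
import statistics

def natural_freq_sq(k, m):
    # Squared fundamental natural frequency of a 1-DOF oscillator,
    # a stand-in for the laminated-beam finite element model.
    return k / m

rng = random.Random(0)
m = 1.0
k_nom = 100.0
# Stiffness perturbed by a uniform +/-5% factor (a hypothetical stand-in
# for +/-5 degree ply-angle scatter).
samples = [natural_freq_sq(k_nom * (1 + rng.uniform(-0.05, 0.05)), m)
           for _ in range(20_000)]
mean_w2 = statistics.mean(samples)
var_w2 = statistics.variance(samples)
print(f"mean of squared frequency: {mean_w2:.3f}, variance: {var_w2:.3f}")
```

The sample mean and variance of the squared frequency are the quantities the stochastic finite element model predicts analytically and validates against Monte Carlo simulation.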

  11. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
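
    A biased-coin up-and-down rule in the spirit of the family described above can be sketched as follows; the dose-toxicity curve, the target quantile, and the exact up/down probabilities are hypothetical choices of this sketch, not the authors' specific design:

```python
import random

def biased_coin_updown(tox_probs, target=0.3, n_patients=2000, seed=1):
    """Biased-coin up-and-down rule (a sketch): step down after a toxic
    response; otherwise step up with probability b = target/(1-target).
    tox_probs[d] = P(toxicity at dose level d)."""
    rng = random.Random(seed)
    b = target / (1 - target)
    d, counts = 0, [0] * len(tox_probs)
    for _ in range(n_patients):
        counts[d] += 1
        if rng.random() < tox_probs[d]:
            d = max(0, d - 1)                       # step down after toxicity
        elif rng.random() < b:
            d = min(len(tox_probs) - 1, d + 1)      # step up with probability b
    return counts

# Hypothetical dose-toxicity curve; level 2 has toxicity equal to the target 0.3.
counts = biased_coin_updown([0.05, 0.15, 0.30, 0.55, 0.80])
mode_level = counts.index(max(counts))
print(counts, "assignments centered at level", mode_level)
```

The rule is a Markov chain on the dose levels, so its stationary distribution (and hence the unimodal centering around the target quantile) can be worked out exactly, which is the tractability the abstract emphasizes.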

  12. Stochastic Galerkin methods for the steady-state Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu

    2016-07-01

    We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
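
    The stochastic Galerkin idea, expanding the random solution in a polynomial chaos basis and projecting the equation onto that basis to get one coupled deterministic system, can be illustrated on a scalar toy problem a(ξ)·u(ξ) = f with a = a0 + a1·ξ and ξ ~ N(0,1); the toy equation and parameter values are assumptions of this sketch, far simpler than the paper's Navier–Stokes setting:

```python
import math
import random

def galerkin_solve(a0, a1, f, P):
    """Stochastic Galerkin solution of a(xi)*u(xi) = f with a = a0 + a1*xi,
    xi ~ N(0,1), using probabilists' Hermite polynomials He_0..He_P.
    Uses E[He_i He_j] = i! delta_ij and xi*He_j = He_{j+1} + j*He_{j-1}."""
    n = P + 1
    A = [[0.0] * n for _ in range(n)]      # A[i][j] = E[a(xi) He_i He_j]
    for i in range(n):
        A[i][i] = a0 * math.factorial(i)
        if i + 1 < n:
            A[i + 1][i] = A[i][i + 1] = a1 * math.factorial(i + 1)
    b = [f] + [0.0] * P                     # E[f He_i] = f only for i = 0
    # Gaussian elimination with partial pivoting on the small coupled system.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (b[r] - sum(A[r][c] * u[c] for c in range(r + 1, n))) / A[r][r]
    return u                                # u[0] is the mean of u(xi)

u = galerkin_solve(a0=2.0, a1=0.2, f=1.0, P=6)
rng = random.Random(0)
mc_mean = sum(1.0 / (2.0 + 0.2 * rng.gauss(0, 1))
              for _ in range(200_000)) / 200_000
print(f"Galerkin mean: {u[0]:.5f}, Monte Carlo mean: {mc_mean:.5f}")
```

The single solve of the small coupled system replaces the 200,000 Monte Carlo samples, which is the efficiency argument behind stochastic Galerkin discretizations.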

  13. Linear regression analysis of survival data with missing censoring indicators.

    PubMed

    Wang, Qihua; Dinse, Gregg E

    2011-04-01

    Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
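
    Of the three techniques named above, inverse probability weighting is the easiest to illustrate; the following sketch applies it to plain mean estimation with a known observation probability, which is a deliberately simplified stand-in for the censored-regression setting of the paper:

```python
import random

def ipw_mean(values, observed, p_obs):
    """Inverse-probability-weighted mean: each observed value is weighted
    by 1/P(observed), which corrects for the missing entries."""
    n = len(values)
    return sum(v / p for v, o, p in zip(values, observed, p_obs) if o) / n

rng = random.Random(7)
# Values uniform on [0,1] (true mean 0.5); each is observed with a known
# probability 0.6 (both choices are assumptions of this sketch).
vals = [rng.random() for _ in range(50_000)]
obs = [rng.random() < 0.6 for _ in vals]
est = ipw_mean(vals, obs, [0.6] * len(vals))
print(f"IPW estimate of the mean: {est:.3f}")
```

In the paper's setting the weights come from an estimated probability that the censoring indicator is observed, rather than a known constant as assumed here.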

  14. Stochastic Galerkin methods for the steady-state Navier–Stokes equations

    DOE PAGES

    Sousedík, Bedřich; Elman, Howard C.

    2016-04-12

    We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.

  15. A process of rumour scotching on finite populations.

    PubMed

    de Arruda, Guilherme Ferraz; Lebensztayn, Elcio; Rodrigues, Francisco A; Rodríguez, Pablo Martín

    2015-09-01

    Rumour spreading is a ubiquitous phenomenon in social and technological networks. Traditional models consider that the rumour is propagated by pairwise interactions between spreaders and ignorants. Only spreaders are active and may become stiflers after contacting spreaders or stiflers. Here we propose a competition-like model in which spreaders try to transmit information, while stiflers are also active and try to scotch it. We study the influence of transmission/scotching rates and initial conditions on the qualitative behaviour of the process. An analytical treatment based on the theory of convergence of density-dependent Markov chains is developed to analyse how the final proportion of ignorants behaves asymptotically in a finite homogeneously mixing population. We perform Monte Carlo simulations in random graphs and scale-free networks and verify that the results obtained for homogeneously mixing populations can be approximated for random graphs, but are not suitable for scale-free networks. Furthermore, regarding the process on a heterogeneous mixing population, we obtain a set of differential equations that describes the time evolution of the probability that an individual is in each state. Our model can also be applied for studying systems in which informed agents try to stop the rumour propagation, or for describing related susceptible-infected-recovered systems. In addition, our results can be considered to develop optimal information dissemination strategies and approaches to control rumour propagation.

  16. A process of rumour scotching on finite populations

    PubMed Central

    de Arruda, Guilherme Ferraz; Lebensztayn, Elcio; Rodrigues, Francisco A.; Rodríguez, Pablo Martín

    2015-01-01

    Rumour spreading is a ubiquitous phenomenon in social and technological networks. Traditional models consider that the rumour is propagated by pairwise interactions between spreaders and ignorants. Only spreaders are active and may become stiflers after contacting spreaders or stiflers. Here we propose a competition-like model in which spreaders try to transmit information, while stiflers are also active and try to scotch it. We study the influence of transmission/scotching rates and initial conditions on the qualitative behaviour of the process. An analytical treatment based on the theory of convergence of density-dependent Markov chains is developed to analyse how the final proportion of ignorants behaves asymptotically in a finite homogeneously mixing population. We perform Monte Carlo simulations in random graphs and scale-free networks and verify that the results obtained for homogeneously mixing populations can be approximated for random graphs, but are not suitable for scale-free networks. Furthermore, regarding the process on a heterogeneous mixing population, we obtain a set of differential equations that describes the time evolution of the probability that an individual is in each state. Our model can also be applied for studying systems in which informed agents try to stop the rumour propagation, or for describing related susceptible–infected–recovered systems. In addition, our results can be considered to develop optimal information dissemination strategies and approaches to control rumour propagation. PMID:26473048
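
    The spreader/stifler competition in a finite homogeneously mixing population can be simulated with a Gillespie-style algorithm; the two transition rules and the rate constants below are a plausible sketch of this model class, not the paper's exact transition rates:

```python
import random

def rumour_scotching(n=1000, spreaders=10, stiflers=10, lam=1.0, alpha=1.0, seed=3):
    """Gillespie simulation of a homogeneously mixing spreader/stifler competition:
    spreader + ignorant -> 2 spreaders (rate lam * s * i / n),
    stifler + spreader  -> 2 stiflers  (rate alpha * r * s / n).
    Returns the final proportion of ignorants once no spreaders remain."""
    rng = random.Random(seed)
    i, s, r = n - spreaders - stiflers, spreaders, stiflers
    while s > 0:
        rate_spread = lam * s * i / n
        rate_scotch = alpha * r * s / n
        total = rate_spread + rate_scotch
        if total == 0:
            break
        if rng.random() < rate_spread / total:
            i, s = i - 1, s + 1      # an ignorant hears the rumour
        else:
            s, r = s - 1, r + 1      # a spreader is scotched
    return i / n

frac_ignorant = rumour_scotching()
print(f"final proportion of ignorants: {frac_ignorant:.3f}")
```

Averaging this final proportion over many runs is the Monte Carlo counterpart of the asymptotic result the abstract derives from density-dependent Markov chain theory.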

  17. Physical states and finite-size effects in Kitaev's honeycomb model: Bond disorder, spin excitations, and NMR line shape

    NASA Astrophysics Data System (ADS)

    Zschocke, Fabian; Vojta, Matthias

    2015-07-01

    Kitaev's compass model on the honeycomb lattice realizes a spin liquid whose emergent excitations are dispersive Majorana fermions and static Z2 gauge fluxes. We discuss the proper selection of physical states for finite-size simulations in the Majorana representation, based on a recent paper by F. L. Pedrocchi, S. Chesi, and D. Loss [Phys. Rev. B 84, 165414 (2011), 10.1103/PhysRevB.84.165414]. Certain physical observables acquire large finite-size effects, in particular if the ground state is not fermion-free, which we prove to generally apply to the system in the gapless phase and with periodic boundary conditions. To illustrate our findings, we compute the static and dynamic spin susceptibilities for finite-size systems. Specifically, we consider random-bond disorder (which preserves the solubility of the model), calculate the distribution of local flux gaps, and extract the NMR line shape. We also predict a transition to a random-flux state with increasing disorder.

  18. Bell-Kochen-Specker theorem for any finite dimension n ≥ 3

    NASA Astrophysics Data System (ADS)

    Cabello, Adán; García-Alcaine, Guillermo

    1996-03-01

    The Bell-Kochen-Specker theorem against non-contextual hidden variables can be proved by constructing a finite set of `totally non-colourable' directions, as Kochen and Specker did in a Hilbert space of dimension n = 3. We generalize Kochen and Specker's set to Hilbert spaces of any finite dimension n ≥ 3, in a three-step process that shows the relationship between different kinds of proofs (`continuum', `probabilistic', `state-specific' and `state-independent') of the Bell-Kochen-Specker theorem. At the same time, this construction of a totally non-colourable set of directions in any dimension explicitly solves the question raised by Zimba and Penrose about the existence of such a set for n = 5.

  19. Finite Size Corrections to the Parisi Overlap Function in the GREM

    NASA Astrophysics Data System (ADS)

    Derrida, Bernard; Mottishaw, Peter

    2018-01-01

    We investigate the effects of finite size corrections on the overlap probabilities in the Generalized Random Energy Model in two situations where replica symmetry is broken in the thermodynamic limit. Our calculations do not use replicas, but shed some light on what the replica method should give for finite size corrections. In the gradual freezing situation, which is known to exhibit full replica symmetry breaking, we show that the finite size corrections lead to a modification of the simple relations between the sample averages of the overlaps Y_k between k configurations predicted by replica theory. This can be interpreted as fluctuations in the replica block size with a negative variance. The mechanism is similar to the one we found recently in the random energy model in Derrida and Mottishaw (J Stat Mech 2015(1): P01021, 2015). We also consider a simultaneous freezing situation, which is known to exhibit one step replica symmetry breaking. We show that finite size corrections lead to full replica symmetry breaking and give a more complete derivation of the results presented in Derrida and Mottishaw (Europhys Lett 115(4): 40005, 2016) for the directed polymer on a tree.

  20. Cooperative Solutions in Multi-Person Quadratic Decision Problems: Finite-Horizon and State-Feedback Cost-Cumulant Control Paradigm

    DTIC Science & Technology

    2007-01-01

    This paper develops a cooperative cost-cumulant control regime for the class of multi-person single-objective decision problems characterized by quadratic random costs and a finite-horizon integral quadratic cost associated with a linear stochastic system. The problem formulation is parameterized by the number of cost cumulants.

  1. Bioinspired Concepts: Unified Theory for Complex Biological and Engineering Systems

    DTIC Science & Technology

    2006-01-01

    Data flows of finite size arrive at the system randomly; for such a system, we propose a modified dual scheduling algorithm that stabilizes … We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits …

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
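
    The exponential decay p(m) = A·r^m + B can be recovered from noisy survival probabilities by curve fitting; the sketch below uses a grid search over r with A and B obtained by linear least squares at each candidate r (the data are synthetic, and the specific fitting procedure is an assumption of this sketch, not from the paper):

```python
import random

def rb_curve(m, A, B, r):
    return A * r**m + B

def fit_rb(ms, ps):
    """Fit p(m) = A*r^m + B: grid-search r in (0,1); for each r the model is
    linear in (A, B), so solve that subproblem by least squares."""
    best = None
    for k in range(1, 1000):
        r = k / 1000
        xs = [r**m for m in ms]
        n = len(ms)
        sx, sy = sum(xs), sum(ps)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ps))
        denom = n * sxx - sx * sx
        if abs(denom) < 1e-12:
            continue
        A = (n * sxy - sx * sy) / denom
        B = (sy - A * sx) / n
        sse = sum((rb_curve(m, A, B, r) - p) ** 2 for m, p in zip(ms, ps))
        if best is None or sse < best[0]:
            best = (sse, r, A, B)
    return best[1]

rng = random.Random(0)
ms = list(range(1, 101, 5))
true_r = 0.98
# Synthetic survival probabilities with small finite-sampling noise.
ps = [rb_curve(m, 0.5, 0.5, true_r) + rng.gauss(0, 0.002) for m in ms]
r_hat = fit_rb(ms, ps)
print(f"fitted decay rate: {r_hat:.3f}")
```

The fitted decay rate r_hat is the raw quantity RB experiments report; the paper's point is about what error metric that decay rate does and does not correspond to.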

  3. Rare Event Simulation in Radiation Transport

    NASA Astrophysics Data System (ADS)

    Kollman, Craig

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
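
    The importance-sampling mechanism described above, simulating under altered probabilities and multiplying by the likelihood ratio to keep the estimator unbiased, can be illustrated on a Gaussian tail probability; this is a standard textbook example under assumed parameters, not the dissertation's neutron-transport setting:

```python
import math
import random

def rare_prob_is(a=5.0, n=100_000, seed=1):
    """Importance-sampling estimate of P(Z > a) for Z ~ N(0,1): sample from
    the shifted proposal N(a,1), which lands on the rare region, and reweight
    each hit by the density ratio N(0,1)/N(a,1) = exp(a^2/2 - a*z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(a, 1.0)
        if z > a:
            total += math.exp(a * a / 2 - a * z)
    return total / n

est = rare_prob_is()
exact = 0.5 * math.erfc(5 / math.sqrt(2))   # P(Z > 5) for reference
print(f"IS estimate: {est:.3e}, exact: {exact:.3e}")
```

Plain Monte Carlo would need on the order of a billion samples to see even a handful of such events; the shifted proposal makes every sample informative, which is the variance reduction the dissertation exploits and automates.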

  4. Optimal search strategies of space-time coupled random walkers with finite lifetimes

    NASA Astrophysics Data System (ADS)

    Campos, D.; Abad, E.; Méndez, V.; Yuste, S. B.; Lindenberg, K.

    2015-05-01

    We present a simple paradigm for detection of an immobile target by a space-time coupled random walker with a finite lifetime. The motion of the walker is characterized by linear displacements at a fixed speed and exponentially distributed duration, interrupted by random changes in the direction of motion and resumption of motion in the new direction with the same speed. We call these walkers "mortal creepers." A mortal creeper may die at any time during its motion according to an exponential decay law characterized by a finite mean death rate ω_m. While still alive, the creeper has a finite mean frequency ω of change of the direction of motion. In particular, we consider the efficiency of the target search process, characterized by the probability that the creeper will eventually detect the target. Analytic results confirmed by numerical results show that there is an ω_m-dependent optimal frequency ω = ω_opt that maximizes the probability of eventual target detection. We work primarily in one-dimensional (d = 1) domains and examine the role of initial conditions and of finite domain sizes. Numerical results in d = 2 domains confirm the existence of an optimal frequency of change of direction, thereby suggesting that the observed effects are robust to changes in dimensionality. In the d = 1 case, explicit expressions for the probability of target detection in the long time limit are given. In the case of an infinite domain, we compute the detection probability for arbitrary times and study its early- and late-time behavior. We further consider the survival probability of the target in the presence of many independent creepers beginning their motion at the same location and at the same time. We also consider a version of the standard "target problem" in which many creepers start at random locations at the same time.

  5. Finite-temperature mechanical instability in disordered lattices.

    PubMed

    Zhang, Leyou; Mao, Xiaoming

    2016-02-01

    Mechanical instability takes different forms in various ordered and disordered systems and little is known about how thermal fluctuations affect different classes of mechanical instabilities. We develop an analytic theory involving renormalization of rigidity and coherent potential approximation that can be used to understand finite-temperature mechanical stabilities in various disordered systems. We use this theory to study two disordered lattices: a randomly diluted triangular lattice and a randomly braced square lattice. These two lattices belong to two different universality classes as they approach mechanical instability at T=0. We show that thermal fluctuations stabilize both lattices. In particular, the triangular lattice displays a critical regime in which the shear modulus scales as G ∼ T^(1/2), whereas the square lattice shows G ∼ T^(2/3). We discuss generic scaling laws for finite-T mechanical instabilities and relate them to experimental systems.

  6. Experimental and numerical analysis of the constitutive equation of rubber composites reinforced with random ceramic particle

    NASA Astrophysics Data System (ADS)

    Luo, D. M.; Xie, Y.; Su, X. R.; Zhou, Y. L.

    2018-01-01

    Based on the four classical models of Mooney-Rivlin (M-R), Yeoh, Ogden and Neo-Hookean (N-H) model, a strain energy constitutive equation with large deformation for rubber composites reinforced with random ceramic particles is proposed from the angle of continuum mechanics theory in this paper. By decoupling the interaction between matrix and random particles, the strain energy of each phase is obtained to derive the explicit constitutive equation for rubber composites. The tests results of uni-axial tensile, pure shear and equal bi-axial tensile are simulated by the non-linear finite element method on the ANSYS platform. The results from finite element method are compared with those from experiment, and the material parameters are determined by fitting the results from different test conditions, and the influence of radius of random ceramic particles on the effective mechanical properties are analyzed.

  7. Exploration and Trapping of Mortal Random Walkers

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.; Abad, E.; Lindenberg, Katja

    2013-05-01

    Exploration and trapping properties of random walkers that may evanesce at any time as they walk have seen very little treatment in the literature, and yet a finite lifetime is a frequent occurrence, and its effects on a number of random walk properties may be profound. For instance, whereas the average number of distinct sites visited by an immortal walker grows with time without bound, that of a mortal walker may, depending on dimensionality and rate of evanescence, remain finite or keep growing with the passage of time. This number can in turn be used to calculate other classic quantities such as the survival probability of a target surrounded by diffusing traps. If the traps are immortal, the survival probability will vanish with increasing time. However, if the traps are evanescent, the target may be spared a certain death. We analytically calculate a number of basic and broadly used quantities for evanescent random walkers.
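
    The contrast drawn above between immortal and mortal walkers is easy to see numerically; the following sketch averages the number of distinct sites visited by a 1D lattice walker that evanesces with a fixed probability per step (the lattice model and death probabilities are illustrative assumptions):

```python
import random

def mean_distinct_sites(p_death, n_walkers=5000, seed=2):
    """Monte Carlo average of the number of distinct sites visited by a
    1D nearest-neighbour lattice walker that dies with probability
    p_death at each step (a mortal walker)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        pos, visited = 0, {0}
        while rng.random() >= p_death:      # walker survives this step
            pos += rng.choice((-1, 1))
            visited.add(pos)
        total += len(visited)
    return total / n_walkers

# Longer-lived walkers (smaller p_death) explore more distinct sites.
print(mean_distinct_sites(0.01), mean_distinct_sites(0.1))
```

Sending `p_death` to zero recovers the immortal walker, whose number of distinct sites visited grows without bound; any fixed positive evanescence rate caps the mean at a finite value, which is the dichotomy the abstract describes.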

  8. Empirical scaling of the length of the longest increasing subsequences of random walks

    NASA Astrophysics Data System (ADS)

    Mendonça, J. Ricardo G.

    2017-02-01

    We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ∼ n^θ with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined, with values increasing as the distribution becomes more heavy-tailed, and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and also propose the form for the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ; accordingly, when the step lengths are of finite variance the scaling functions seem to be universal. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
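    The quantity L_n above can be measured with the standard O(n log n) patience-sorting algorithm. The sketch below is ours, not the paper's code; the Gaussian step distribution, walk lengths and replicate counts are illustrative choices for a finite-variance case.

```python
import bisect, math, random

def lis_length(seq):
    """Length of the longest strictly increasing subsequence
    (patience sorting: tails[i] = smallest tail of an i+1-long LIS)."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def walk_positions(n, rng):
    """Positions of an n-step random walk with Gaussian steps."""
    pos, out = 0.0, []
    for _ in range(n):
        pos += rng.gauss(0.0, 1.0)
        out.append(pos)
    return out

rng = random.Random(0)
n1, n2 = 500, 4000
L1 = sum(lis_length(walk_positions(n1, rng)) for _ in range(20)) / 20
L2 = sum(lis_length(walk_positions(n2, rng)) for _ in range(20)) / 20
# Crude effective exponent between the two sizes; for finite-variance steps
# it should sit near 0.6 over this range, consistent with sqrt(n) ln n.
theta_eff = math.log(L2 / L1) / math.log(n2 / n1)
```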

  9. The one-dimensional asymmetric persistent random walk

    NASA Astrophysics Data System (ADS)

    Rossetto, Vincent

    2018-04-01

    Persistent random walks are intermediate transport processes between a uniform rectilinear motion and a Brownian motion. They are formed by successive steps of random finite lengths and directions travelled at a fixed speed. The isotropic and symmetric 1D persistent random walk is governed by the telegrapher’s equation, also called the hyperbolic heat conduction equation. These equations were designed to resolve the paradox of the infinite speed in the heat and diffusion equations. The finiteness of both the speed and the correlation length leads to several classes of random walks: persistent random walks in one dimension can display anomalies that cannot arise for Brownian motion, such as anisotropy and asymmetries. In this work we focus on the case where the mean free path is anisotropic, the only anomaly leading to a physics that is different from the telegrapher’s case. We derive exact expressions for its Green's function, its scattering statistics and the distribution of its first-passage time at the origin. The phenomenology of the latter shows a transition for quantities like the escape probability and the residence time.
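    A persistent random walk of this kind is straightforward to simulate. The sketch below is our toy version of the symmetric 1D case (the flip probability and trial counts are invented); it exhibits the crossover from ballistic motion at short times to diffusive motion at long times.

```python
import random

def persistent_walk(n_steps, p_flip, rng):
    """1D persistent walk at unit speed; the direction flips with
    probability p_flip at each step."""
    pos, direction = 0.0, rng.choice((-1, 1))
    for _ in range(n_steps):
        if rng.random() < p_flip:
            direction = -direction
        pos += direction
    return pos

def mean_square_disp(n_steps, p_flip, trials=4000, seed=2):
    rng = random.Random(seed)
    return sum(persistent_walk(n_steps, p_flip, rng) ** 2
               for _ in range(trials)) / trials

p_flip = 0.1                           # persistence length ~ 10 steps
short = mean_square_disp(5, p_flip)    # t << 1/p_flip: nearly ballistic, ~ t^2
long_ = mean_square_disp(500, p_flip)  # t >> 1/p_flip: diffusive, ~ 2*D*t
```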

  10. Use of adjoint methods in the probabilistic finite element approach to fracture mechanics

    NASA Technical Reports Server (NTRS)

    Liu, Wing Kam; Besterfield, Glen; Lawrence, Mark; Belytschko, Ted

    1988-01-01

    The adjoint method approach to probabilistic finite element methods (PFEM) is presented. When the number of objective functions is small compared to the number of random variables, the adjoint method is far superior to the direct method in evaluating the objective function derivatives with respect to the random variables. The PFEM is extended to probabilistic fracture mechanics (PFM) using an element which has the near crack-tip singular strain field embedded. Since only two objective functions (i.e., mode I and II stress intensity factors) are needed for PFM, the adjoint method is well suited.
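    The efficiency argument can be illustrated on a tiny system. This is a hedged sketch on a two-spring toy problem of our own devising, not the paper's fracture formulation: for K(θ)u = f with objective J = gᵀu, one adjoint solve Kᵀλ = g yields the derivative of J with respect to every random variable via dJ/dθᵢ = -λᵀ(∂K/∂θᵢ)u.

```python
def solve2(K, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(b[0] * K[1][1] - K[0][1] * b[1]) / det,
            (K[0][0] * b[1] - b[0] * K[1][0]) / det]

def stiffness(t1, t2):
    """Two springs in series with stiffnesses t1, t2 (classic assembly)."""
    return [[t1 + t2, -t2], [-t2, t2]]

f, g = [0.0, 1.0], [0.0, 1.0]        # unit end load; objective J = u[1]
t1, t2 = 2.0, 3.0
u = solve2(stiffness(t1, t2), f)     # one forward solve
lam = solve2(stiffness(t1, t2), g)   # one adjoint solve (K symmetric)

# dK/dt1 and dK/dt2 are constant matrices for this assembly:
dK1 = [[1.0, 0.0], [0.0, 0.0]]
dK2 = [[1.0, -1.0], [-1.0, 1.0]]

def adjoint_grad(dK):
    """dJ/dtheta_i = -lam^T (dK/dtheta_i) u."""
    return -sum(lam[i] * dK[i][j] * u[j] for i in range(2) for j in range(2))

grad = [adjoint_grad(dK1), adjoint_grad(dK2)]
```

    Here J = 1/t1 + 1/t2 (the series compliance), so the exact gradients are -1/t1² and -1/t2²; the adjoint formula reproduces them without any extra linear solves per variable.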

  11. Electromagnetic scattering from a layer of finite length, randomly oriented, dielectric, circular cylinders over a rough interface with application to vegetation

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1988-01-01

    A scattering model for defoliated vegetation is developed by treating a layer of defoliated vegetation as a collection of randomly oriented dielectric cylinders of finite length over an irregular ground surface. Both polarized and depolarized backscattering are computed, and their behavior versus the volume fraction, the incidence angle, the frequency, the angular distribution and the cylinder size is illustrated. It is found that both the angular distribution and the cylinder size have significant effects on the backscattered signal. The present theory is compared with measurements from defoliated vegetation.

  12. A heuristic for the distribution of point counts for random curves over a finite field.

    PubMed

    Achter, Jeffrey D; Erman, Daniel; Kedlaya, Kiran S; Wood, Melanie Matchett; Zureick-Brown, David

    2015-04-28

    How many rational points are there on a random algebraic curve of large genus g over a given finite field Fq? We propose a heuristic for this question motivated by a (now proven) conjecture of Mumford on the cohomology of moduli spaces of curves; this heuristic suggests a Poisson distribution with mean q+1+1/(q-1). We prove a weaker version of this statement in which g and q tend to infinity, with q much larger than g.

  13. Decomposition of Fuzzy Soft Sets with Finite Value Spaces

    PubMed Central

    Jun, Young Bae

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342
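    The t-level construction can be sketched concretely. In this hedged toy example the universe, parameters and membership values are invented; only the definitions (a fuzzy soft set as a parameter-indexed family of fuzzy sets, its t-level soft sets, and recovery of memberships from the finite value space) follow the abstract.

```python
universe = ["phone_a", "phone_b", "phone_c"]
fuzzy_soft = {                       # parameter -> fuzzy set over the universe
    "camera": {"phone_a": 0.9, "phone_b": 0.4, "phone_c": 0.7},
    "battery": {"phone_a": 0.3, "phone_b": 0.8, "phone_c": 0.6},
}

def t_level_soft_set(fss, t):
    """Crisp soft set keeping, per parameter, objects with membership >= t."""
    return {e: {x for x, mu in f.items() if mu >= t} for e, f in fss.items()}

def reconstruct(fss, universe):
    """Recover each membership as sup{t : x in the t-level set}, scanning
    the finite value space (the membership values actually taken)."""
    levels = sorted({mu for f in fss.values() for mu in f.values()})
    out = {e: {x: 0.0 for x in universe} for e in fss}
    for t in levels:                          # ascending thresholds
        for e, members in t_level_soft_set(fss, t).items():
            for x in members:
                out[e][x] = t
    return out

high = t_level_soft_set(fuzzy_soft, 0.7)      # e.g. top-rated phones
```

    Because the value space is finite, the t-level sets fully determine the fuzzy soft set, which is the decomposition idea the abstract formalizes.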

  15. Probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress modeled as a random field, are given.
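    The mean/variance propagation idea can be shown on a one-variable example. This is a hedged sketch of a first-order second-moment calculation on a closed-form response, not the PFEM implementation itself; the bar dimensions, load and input statistics are invented.

```python
import math

def mean_var_response(mu_E, var_E, P=1000.0, L=2.0, A=1e-4):
    """First-order perturbation of the axial-bar elongation u = P*L/(E*A):
    u(E) ~ u(mu_E) + u'(mu_E) * (E - mu_E), so Var(u) ~ u'(mu_E)^2 * Var(E)."""
    u_mean = P * L / (mu_E * A)        # response at the mean input
    du_dE = -P * L / (mu_E ** 2 * A)   # sensitivity at the mean
    u_var = du_dE ** 2 * var_E         # first-order variance estimate
    return u_mean, u_var

# Young's modulus with mean 200 GPa and 5% coefficient of variation:
mu_u, var_u = mean_var_response(mu_E=200e9, var_E=(10e9) ** 2)
cov_u = math.sqrt(var_u) / mu_u        # output coefficient of variation
```

    For this (locally linear) response the output coefficient of variation matches the input one, which is the kind of input-to-output statistics the PFEM delivers for full finite element systems.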

  16. Nonlinear probabilistic finite element models of laminated composite shells

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Reddy, J. N.

    1993-01-01

    A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed and results are presented in the form of the mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation from the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data are compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.

  17. Impurity-directed transport within a finite disordered lattice

    NASA Astrophysics Data System (ADS)

    Magnetta, Bradley J.; Ordonez, Gonzalo; Garmon, Savannah

    2018-02-01

    We consider a finite, disordered 1D quantum lattice with a side-attached impurity. We study theoretically the transport of a single electron from the impurity into the lattice, at zero temperature. The transport is dominated by Anderson localization and, in general, the electron motion has a random character due to the lattice disorder. However, we show that by adjusting the impurity energy the electron can attain quasi-periodic motions, oscillating between the impurity and a small region of the lattice. This region corresponds to the spatial extent of a localized state with an energy matched by that of the impurity. By precisely tuning the impurity energy, the electron can be set to oscillate between the impurity and a region far from the impurity, even distances larger than the Anderson localization length. The electron oscillations result from the interference of hybridized states, which have some resemblance to Pendry's necklace states (Pendry, 1987) [21]. The dependence of the electron motion on the impurity energy gives a potential mechanism for selectively routing an electron towards different regions of a 1D disordered lattice.

  18. Development of digital phantoms based on a finite element model to simulate low-attenuation areas in CT imaging for pulmonary emphysema quantification.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2017-09-01

    To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model aims to generate a set of digital phantoms of low-attenuation-area (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (the relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes) between the simulated LAA images and those computed directly on the models' output (considered as the reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated compared to those calculated on the models' output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for assessing the accuracy of indexes for the radiologic quantitation of emphysema.

  19. Studies of Sound Absorption by and Transmission Through Layers of Elastic Noise Control Foams: Finite Element Modeling and Effects of Anisotropy

    NASA Astrophysics Data System (ADS)

    Kang, Yeon June

    In this thesis an elastic-absorption finite element model of isotropic elastic porous noise control materials is first presented as a means of investigating the effects of finite dimension and edge constraints on the sound absorption by, and transmission through, layers of acoustical foams. Methods for coupling foam finite elements with conventional acoustic and structural finite elements are also described. The foam finite element model based on the Biot theory allows for the simultaneous propagation of the three types of waves known to exist in an elastic porous material. Various sets of boundary conditions appropriate for modeling open, membrane-sealed and panel-bonded foam surfaces are formulated and described. Good agreement was achieved when finite element predictions were compared with previously established analytical results for the plane wave absorption coefficient and transmission loss in the case of wave propagation both in foam-filled waveguides and through foam-lined double panel structures of infinite lateral extent. The primary effect of the edge constraints of a foam layer was found to be an acoustical stiffening of the foam. Constraining the ends of the facing panels in foam-lined double panel systems was also found to increase the sound transmission loss significantly in the low frequency range. In addition, a theoretical multi-dimensional model for wave propagation in anisotropic elastic porous materials was developed to study the effect of anisotropy on the sound transmission of foam-lined noise control treatments. The predictions of the theoretical anisotropic model have been compared with experimental measurements for the random incidence sound transmission through a double panel structure lined with polyimide foam. The predictions were made by using the measured and estimated macroscopic physical parameters of polyimide foam samples which were known to be anisotropic. It has been found that the macroscopic physical parameters in the direction normal to the face of the foam layer play the principal role in determining the acoustical behavior of polyimide foam layers, although more satisfactory agreement between experimental measurements and theoretical predictions of transmission loss is obtained when the anisotropic properties are allowed in the model.

  20. A probabilistic model of a porous heat exchanger

    NASA Technical Reports Server (NTRS)

    Agrawal, O. P.; Lin, X. A.

    1995-01-01

    This paper presents a probabilistic one-dimensional finite element model for heat transfer processes in porous heat exchangers. The Galerkin approach is used to develop the finite element matrices. Some of the submatrices are asymmetric due to the presence of the flow term. The Neumann expansion is used to write the temperature distribution as a series of random variables, and the expectation operator is applied to obtain the mean and deviation statistics. To demonstrate the feasibility of the formulation, a one-dimensional model of the heat transfer phenomenon in superfluid flow through a porous medium is considered. Results of this formulation agree well with the Monte-Carlo simulations and the analytical solutions. Although the numerical experiments are confined to parametric random variables, a formulation is presented to account for the random spatial variations.

  1. Influence of material uncertainties on the RLC parameters of wound inductors modeled using the finite element method

    NASA Astrophysics Data System (ADS)

    Lossa, Geoffrey; Deblecker, Olivier; Grève, Zacharie De

    2018-05-01

    In this work, we highlight the influence of material uncertainties (the magnetic permeability and electric conductivity of a Mn-Zn ferrite core, and the electric permittivity of the wire insulation) on the RLC parameters of a wound inductor extracted with the finite element method. To that end, the finite element method is embedded in a Monte Carlo simulation. We show that treating the aforementioned material properties as real random variables leads to significant variations in the distributions of the RLC parameters.
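    The structure of such a study is a Monte Carlo loop around a deterministic solver. In the sketch below a closed-form solenoid inductance stands in for the finite element extraction, and the geometry, permeability statistics and trial count are all illustrative assumptions of ours.

```python
import math, random, statistics

def inductance(mu_r, n_turns=100, area=1e-4, path_len=0.05):
    """Closed-form solenoid inductance L = mu0*mu_r*N^2*A/l, used here as a
    cheap stand-in for a finite element parameter extraction."""
    mu0 = 4e-7 * math.pi
    return mu0 * mu_r * n_turns ** 2 * area / path_len

def mc_inductance(mu_r_mean=2000.0, rel_std=0.2, trials=5000, seed=3):
    """Propagate a Gaussian uncertainty on the relative permeability."""
    rng = random.Random(seed)
    samples = [inductance(rng.gauss(mu_r_mean, rel_std * mu_r_mean))
               for _ in range(trials)]
    return statistics.mean(samples), statistics.stdev(samples)

L_mean, L_std = mc_inductance()   # inductance is linear in mu_r here, so the
                                  # output spread mirrors the input spread
```

    With a finite element solver in place of `inductance`, the same loop produces the empirical distributions of the R, L and C parameters discussed in the abstract.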

  2. Dynamic Loads Generation for Multi-Point Vibration Excitation Problems

    NASA Technical Reports Server (NTRS)

    Shen, Lawrence

    2011-01-01

    A random-force method has been developed to predict dynamic loads produced by rocket-engine random vibrations for new rocket-engine designs. The method develops random forces at multiple excitation points based on random vibration environments scaled from accelerometer data obtained during hot-fire tests of existing rocket engines. This random-force method applies random forces to the model and creates the expected dynamic response in a manner that simulates the way the operating engine applies self-generated random vibration forces (random pressure acting on an area), producing the responses that we measure with accelerometers. This innovation includes the methodology (implementation sequence), the computer code, two methods to generate the random-force vibration spectra, and two methods to reduce some of the inherent conservatism in the dynamic loads. This methodology can be implemented to generate the random-force spectra at excitation nodes without requiring the use of artificial boundary conditions in a finite element model. More accurate random dynamic loads than those predicted by current industry methods can then be generated using the random-force spectra. The scaling method used to develop the initial power spectral density (PSD) environments for deriving the random forces for the rocket-engine case is based on the Barrett Criteria developed at Marshall Space Flight Center in 1963. This approach can be applied in the aerospace, automotive, and other industries to obtain reliable dynamic loads and responses from a finite element model for any structure subject to multipoint random vibration excitations.

  3. Protein Loop Structure Prediction Using Conformational Space Annealing.

    PubMed

    Heo, Seungryong; Lee, Juyong; Joo, Keehyoung; Shin, Hang-Cheol; Lee, Jooyoung

    2017-05-22

    We have developed a protein loop structure prediction method by combining a new energy function, which we call E_PLM (energy for protein loop modeling), with the conformational space annealing (CSA) global optimization algorithm. The energy function includes stereochemistry, dynamic fragment assembly, distance-scaled finite ideal gas reference (DFIRE), and generalized orientation- and distance-dependent terms. For the conformational search of loop structures, we used the CSA algorithm, which has been quite successful in dealing with various hard global optimization problems. We assessed the performance of E_PLM with two widely used loop-decoy sets, Jacobson and RAPPER, and compared the results against the DFIRE potential. The accuracy of model selection from a pool of loop decoys as well as de novo loop modeling starting from randomly generated structures was examined separately. For the selection of a nativelike structure from a decoy set, E_PLM was more accurate than DFIRE in the case of the Jacobson set and had similar accuracy in the case of the RAPPER set. In terms of sampling more nativelike loop structures, E_PLM outperformed E_DFIRE for both decoy sets. This new approach equipped with E_PLM and CSA can serve as the state-of-the-art de novo loop modeling method.

  4. Dynamical transition for a particle in a squared Gaussian potential

    NASA Astrophysics Data System (ADS)

    Touya, C.; Dean, D. S.

    2007-02-01

    We study the problem of a Brownian particle diffusing in finite dimensions in a potential given by ψ = φ²/2, where φ is a Gaussian random field. Exact results for the diffusion constant in the high temperature phase are given in one and two dimensions, and it is shown to vanish in a power-law fashion at the dynamical transition temperature. Our results are confronted with numerical simulations in which the Gaussian field is constructed, in a standard way, as a sum over random Fourier modes. We show that when the number of Fourier modes is finite the low temperature diffusion constant becomes non-zero and has an Arrhenius form. Thus we have a simple model with a fully understood finite-size scaling theory for the dynamical transition. In addition we analyse the nature of the anomalous diffusion in the low temperature regime and show that the anomalous exponent agrees with that predicted by a trap model.
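    The Fourier-mode construction mentioned above can be sketched as follows. We use the common random-phase approximation (fixed amplitudes, random phases), which approaches a Gaussian field as the number of modes grows; the box size, mode count and sampling grid are illustrative choices of ours.

```python
import math, random

def gaussian_field(n_modes, box=2 * math.pi, seed=4):
    """phi(x) as a finite sum of random-phase Fourier modes, normalized so
    that the spatial variance of phi is ~1."""
    rng = random.Random(seed)
    modes = [(math.sqrt(2.0 / n_modes),        # amplitude
              2 * math.pi * k / box,           # wavenumber
              rng.uniform(0, 2 * math.pi))     # random phase
             for k in range(1, n_modes + 1)]
    def phi(x):
        return sum(a * math.cos(q * x + ph) for a, q, ph in modes)
    return phi

phi = gaussian_field(n_modes=50)
xs = [0.01 * i for i in range(6283)]           # ~10 copies of the box
vals = [phi(x) for x in xs]
mean_phi = sum(vals) / len(vals)
var_phi = sum(v * v for v in vals) / len(vals)
psi = [v * v / 2 for v in vals]                # the squared-Gaussian potential
```

    The potential ψ = φ²/2 built this way is non-negative with minima on the zero set of φ, which is the landscape whose trapping properties drive the dynamical transition.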

  5. A framework for analyzing contagion in assortative banking networks

    PubMed Central

    Hurd, Thomas R.; Gleeson, James P.; Melnik, Sergey

    2017-01-01

    We introduce a probabilistic framework that represents stylized banking networks with the aim of predicting the size of contagion events. Most previous work on random financial networks assumes independent connections between banks, whereas our framework explicitly allows for (dis)assortative edge probabilities (i.e., a tendency for small banks to link to large banks). We analyze default cascades triggered by shocking the network and find that the cascade can be understood as an explicit iterated mapping on a set of edge probabilities that converges to a fixed point. We derive a cascade condition, analogous to the basic reproduction number R0 in epidemic modelling, that characterizes whether or not a single initially defaulted bank can trigger a cascade that extends to a finite fraction of the infinite network. This cascade condition is an easily computed measure of the systemic risk inherent in a given banking network topology. We use percolation theory for random networks to derive a formula for the frequency of global cascades. These analytical results are shown to provide limited quantitative agreement with Monte Carlo simulation studies of finite-sized networks. We show that edge-assortativity, the propensity of nodes to connect to similar nodes, can have a strong effect on the level of systemic risk as measured by the cascade condition. However, the effect of assortativity on systemic risk is subtle, and we propose a simple graph theoretic quantity, which we call the graph-assortativity coefficient, that can be used to assess systemic risk. PMID:28231324
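    The fixed-point view of a cascade can be illustrated in the simplest non-assortative setting. The sketch below is our toy version with Poisson degrees and independent edges (so none of the paper's assortativity): iterating u = G1(u), where u is the probability that following an edge does not reach a giant cascade, converges to the fixed point, and a global cascade is possible only above the mean-degree threshold.

```python
import math

def iterate_cascade(mean_degree, tol=1e-12, max_iter=100_000):
    """Fixed point of u = G1(u) with G1(u) = exp(z*(u - 1)) for a
    Poisson(z) degree distribution (configuration-model percolation)."""
    u = 0.0
    for _ in range(max_iter):
        nxt = math.exp(mean_degree * (u - 1.0))
        if abs(nxt - u) < tol:
            return nxt
        u = nxt
    return u

sub = iterate_cascade(0.5)   # below the threshold z = 1: u -> 1, no cascade
sup = iterate_cascade(2.0)   # above the threshold: nontrivial fixed point
frac = 1.0 - sup             # fraction of the network a cascade can reach
```

    The condition "mean excess degree > 1" separating the two regimes plays the role of the R0-like cascade condition in the abstract; the paper generalizes the mapping to assortative edge probabilities.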

  7. Wave chaos in a randomly inhomogeneous waveguide: spectral analysis of the finite-range evolution operator.

    PubMed

    Makarov, D V; Kon'kov, L E; Uleysky, M Yu; Petrov, P S

    2013-01-01

    The problem of sound propagation in a randomly inhomogeneous oceanic waveguide is considered. An underwater sound channel in the Sea of Japan is taken as an example. Our attention is concentrated on the domains of finite-range ray stability in phase space and their influence on wave dynamics. These domains can be found by means of the one-step Poincaré map. To study manifestations of finite-range ray stability, we introduce the finite-range evolution operator (FREO), describing the transformation of a wave field in the course of propagation along a finite segment of a waveguide. Carrying out statistical analysis of the FREO spectrum, we estimate the contribution of regular domains and explore their evanescence with increasing length of the segment. We utilize several methods of spectral analysis: analysis of eigenfunctions by expanding them over modes of the unperturbed waveguide, approximation of level-spacing statistics by means of the Berry-Robnik distribution, and the procedure used by A. Relano and coworkers [Relano et al., Phys. Rev. Lett. 89, 244102 (2002); Relano, Phys. Rev. Lett. 100, 224101 (2008)]. Comparing the results obtained with different methods, we find that the method based on the statistical analysis of FREO eigenfunctions is the most favorable for estimating the contribution of regular domains. It allows one to find directly the waveguide modes whose refraction is regular despite the random inhomogeneity. For example, it is found that near-axial sound propagation in the Sea of Japan preserves stability even over distances of hundreds of kilometers due to the presence of a shearless torus in the classical phase space. Increasing the acoustic wavelength degrades scattering, resulting in recovery of eigenfunction localization near periodic orbits of the one-step Poincaré map.

  8. A Riemann-Hilbert formulation for the finite temperature Hubbard model

    NASA Astrophysics Data System (ADS)

    Cavaglià, Andrea; Cornagliotto, Martina; Mattelliano, Massimo; Tateo, Roberto

    2015-06-01

    Inspired by recent results in the context of AdS/CFT integrability, we reconsider the Thermodynamic Bethe Ansatz equations describing the 1D fermionic Hubbard model at finite temperature. We prove that the infinite set of TBA equations are equivalent to a simple nonlinear Riemann-Hilbert problem for a finite number of unknown functions. The latter can be transformed into a set of three coupled nonlinear integral equations defined over a finite support, which can be easily solved numerically. We discuss the emergence of an exact Bethe Ansatz and the link between the TBA approach and the results by Jüttner, Klümper and Suzuki based on the Quantum Transfer Matrix method. We also comment on the analytic continuation mechanism leading to excited states and on the mirror equations describing the finite-size Hubbard model with twisted boundary conditions.

  9. Empirical performance of the multivariate normal universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2013-09-01

    Universal portfolios generated by the multivariate normal distribution are studied, with emphasis on the case where the variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires a very long implementation time and large computer memory. With the objective of reducing memory and implementation time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.

  10. Local dependence in random graph models: characterization, properties and statistical inference

    PubMed Central

    Schweinberger, Michael; Handcock, Mark S.

    2015-01-01

    Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142

  11. A mapping from the unitary to doubly stochastic matrices and symbols on a finite set

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2008-11-01

    We prove that the mapping from the unitary to doubly stochastic matrices that maps a unitary matrix (u_kl) to the doubly stochastic matrix (|u_kl|²) is a submersion at a generic unitary matrix. The proof uses the framework of operator symbols on a finite set.
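
    The mapping studied in this record is easy to check numerically. The sketch below (not from the paper) draws a Haar-random unitary via the standard QR construction and verifies that the entrywise squared moduli form a doubly stochastic matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # matrix size chosen arbitrarily for illustration

# Haar-distributed random unitary via QR of a complex Gaussian matrix
# (a standard construction, not taken from the paper).
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix for the Haar measure

# The mapping from the record: (u_kl) -> (|u_kl|^2).
d = np.abs(u) ** 2

# Its image is doubly stochastic: every row and column sums to 1,
# because the rows and columns of a unitary matrix have unit norm.
print(np.allclose(d.sum(axis=0), 1.0), np.allclose(d.sum(axis=1), 1.0))  # True True
```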

  12. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although at the price of some loss of estimation efficiency. However, the method offers an alternative approach when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve as a way to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of the state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with an application to an AIDS clinical data set.

  13. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    PubMed

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.

  14. Electric and magnetic microfields inside and outside space-limited configurations of ions and ionic currents

    NASA Astrophysics Data System (ADS)

    Romanovsky, M. Yu; Ebeling, W.; Schimansky-Geier, L.

    2005-01-01

    The problem of electric and magnetic microfields inside and outside finite spherical systems of stochastically moving ions is studied. The first possible field of applications is high-temperature ion clusters created by laser fields [1]. Other possible applications are nearly spherical liquid systems at room temperature containing electrolytes. Looking for biological applications we may also think about a cell, which is a complicated electrolytic system, or even a brain, which is a still more complicated system of electrolytic currents. The essential model assumption is the random character of the motion of the charges. We assume in our basic model that we have a finite, nearly spherical system of randomly moving charges. Even taking into account that this is at best a caricature of any real system, it might be of interest as a limiting case which admits a full theoretical treatment. For symmetry reasons, a random configuration of moving charges cannot generate a macroscopic magnetic field, but there will be microscopic fluctuating magnetic fields. Distributions for electric and magnetic microfields inside and outside such space-limited systems are calculated. Spherical systems of randomly distributed moving charges are investigated. Starting from earlier results for infinitely large systems, which lead to Holtsmark-type distributions, we show that the fluctuations in finite charge distributions are larger in comparison to infinite systems of the same charge density.

  15. Probabilistic Finite Elements (PFEM) structural dynamics and fracture mechanics

    NASA Technical Reports Server (NTRS)

    Liu, Wing-Kam; Belytschko, Ted; Mani, A.; Besterfield, G.

    1989-01-01

    The purpose of this work is to develop computationally efficient methodologies for assessing the effects of randomness in loads, material properties, and other aspects of a problem by a finite element analysis. The resulting group of methods is called probabilistic finite elements (PFEM). The overall objective of this work is to develop methodologies whereby the lifetime of a component can be predicted, accounting for the variability in the material and geometry of the component, the loads, and other aspects of the environment; and the range of response expected in a particular scenario can be presented to the analyst in addition to the response itself. Emphasis has been placed on methods which are not statistical in character; that is, they do not involve Monte Carlo simulations. The reason for this choice of direction is that Monte Carlo simulations of complex nonlinear response require a tremendous amount of computation. The focus of efforts so far has been on nonlinear structural dynamics. However, in the continuation of this project, emphasis will be shifted to probabilistic fracture mechanics so that the effect of randomness in crack geometry and material properties can be studied interactively with the effect of random load and environment.

  16. Skewness and kurtosis analysis for non-Gaussian distributions

    NASA Astrophysics Data System (ADS)

    Celikoglu, Ahmet; Tirnakli, Ugur

    2018-06-01

    In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
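
    The standard (non-excess) kurtosis discussed in this record is the ratio m₄/m₂². A quick sanity check of the estimator on a distribution with a known finite fourth moment (this sketch covers only the standard definition, not q-kurtosis):

```python
import numpy as np

def kurtosis(x):
    """Standard (non-excess) sample kurtosis m4 / m2^2."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d ** 4).mean() / (d ** 2).mean() ** 2

# The uniform distribution has kurtosis 9/5 = 1.8 (a Gaussian has 3);
# a dense uniform grid approximates it well.
grid = np.linspace(0.0, 1.0, 100001)
print(round(kurtosis(grid), 3))  # 1.8
```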

  17. Nature of magnetization and lateral spin-orbit interaction in gated semiconductor nanowires.

    PubMed

    Karlsson, H; Yakimenko, I I; Berggren, K-F

    2018-05-31

    Semiconductor nanowires are interesting candidates for the realization of spintronics devices. In this paper we study electronic states and effects of lateral spin-orbit coupling (LSOC) in a one-dimensional asymmetrically biased nanowire using the Hartree-Fock method with a Dirac interaction. We have shown that spin polarization can be triggered by LSOC at finite source-drain bias, as a result of numerical noise representing a random magnetic field due to wiring or a random background field such as the Earth's magnetic field. The electrons spontaneously arrange into spin rows in the wire due to electron interactions, leading to a finite spin polarization. The direction of polarization is, however, random at zero source-drain bias. We have found that LSOC has an effect on the orientation of spin rows only when a source-drain bias is applied.

  18. Nature of magnetization and lateral spin–orbit interaction in gated semiconductor nanowires

    NASA Astrophysics Data System (ADS)

    Karlsson, H.; Yakimenko, I. I.; Berggren, K.-F.

    2018-05-01

    Semiconductor nanowires are interesting candidates for the realization of spintronics devices. In this paper we study electronic states and effects of lateral spin–orbit coupling (LSOC) in a one-dimensional asymmetrically biased nanowire using the Hartree–Fock method with a Dirac interaction. We have shown that spin polarization can be triggered by LSOC at finite source-drain bias, as a result of numerical noise representing a random magnetic field due to wiring or a random background field such as the Earth's magnetic field. The electrons spontaneously arrange into spin rows in the wire due to electron interactions, leading to a finite spin polarization. The direction of polarization is, however, random at zero source-drain bias. We have found that LSOC has an effect on the orientation of spin rows only when a source-drain bias is applied.

  19. Lack of a thermodynamic finite-temperature spin-glass phase in the two-dimensional randomly coupled ferromagnet

    NASA Astrophysics Data System (ADS)

    Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.

    2018-05-01

    The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.

  20. Random isotropic one-dimensional XY-model

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = ½ XY model (N sites), with random exchange interaction in a transverse random field is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).
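
    The free-fermion solution described in the abstract can be sketched in a few lines. The values of J_A, J_B, Γ_A, Γ_B below and the overall sign convention for the single-particle Hamiltonian are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000

# Bimodal quenched disorder: couplings J_i in {J_A, J_B}, transverse
# fields in {G_A, G_B} (all numerical values assumed for illustration).
JA, JB, GA, GB, p = 1.0, 0.5, 0.8, 0.4, 0.5
J = rng.choice([JA, JB], size=N - 1, p=[p, 1 - p])
G = rng.choice([GA, GB], size=N, p=[p, 1 - p])

# Jordan-Wigner maps the isotropic XY chain to free fermions; one common
# convention gives the tridiagonal N x N single-particle Hamiltonian below.
H = np.diag(-G) + np.diag(-J / 2.0, k=1) + np.diag(-J / 2.0, k=-1)
eps = np.linalg.eigvalsh(H)

# At T = 0 every negative-energy mode is occupied; the transverse
# magnetization per site follows from the occupied fraction
# (the sign depends on the convention chosen above).
n_occ = int(np.sum(eps < 0))
mz = n_occ / N - 0.5
print(n_occ, round(mz, 3))
```

    In the paper this spectrum would be recomputed for many disorder samples and the magnetization averaged over them.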

  1. SPAR data set contents. [finite element structural analysis system

    NASA Technical Reports Server (NTRS)

    Cunningham, S. W.

    1981-01-01

    The contents of the stored data sets of the SPAR (space processing applications rocket) finite element structural analysis system are documented. The data generated by each of the system's processors are stored in a data file organized as a library. Each data set, containing a two-dimensional table or matrix, is identified by a four-word name listed in a table of contents. The creating SPAR processor, number of rows and columns, and definitions of each of the data items are listed for each data set. An example SPAR problem using these data sets is also presented.

  2. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    NASA Astrophysics Data System (ADS)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
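
    The variational idea can be sketched in the simplest setting: a two-point set evolving under the logistic map, with the image matched to the set cyclically so that a periodic orbit is an exact minimizer. The map, initial guess and step sizes are illustrative choices, not from the paper:

```python
import numpy as np

# Logistic map; r = 3.2 has an attracting period-2 orbit.
r = 3.2
def F(x):
    return r * x * (1 - x)

# Distance between the two-point set {x1, x2} and its image under F,
# with the image matched to the set cyclically.
def obj(x):
    return (F(x[0]) - x[1]) ** 2 + (F(x[1]) - x[0]) ** 2

x = np.array([0.7, 0.5])
h, lr = 1e-6, 0.02
for _ in range(20000):
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (obj(x + e) - obj(x - e)) / (2 * h)  # finite-difference gradient
    x -= lr * g

# At a minimum the set maps onto itself: an invariant (periodic) set of F.
print(np.round(np.sort(x), 4))
```

    The paper's method scales this idea to large point sets and adds a Lennard-Jones potential to spread the points; the sketch keeps only the core distance-minimization step.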

  3. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems.

    PubMed

    Junge, Oliver; Kevrekidis, Ioannis G

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.

  4. Relative commutativity degree of some dihedral groups

    NASA Astrophysics Data System (ADS)

    Abdul Hamid, Muhanizah; Mohd Ali, Nor Muhainiah; Sarmin, Nor Haniza; Abd Manaf, Fadila Normahia

    2013-04-01

    The commutativity degree of a finite group G was introduced by Erdős and Turán for symmetric groups, finite groups and finite rings in 1968. The commutativity degree, P(G), is defined as the probability that a random pair of elements in a group commute. The relative commutativity degree of a group G is defined as the probability that an element of a subgroup H and an element of G commute with one another, and is denoted by P(H,G). In this research the relative commutativity degrees of some dihedral groups are determined.
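
    The definition of P(G) can be verified by brute force for small dihedral groups. The sketch below uses the standard presentation of D_n, writing elements as (k, s) = r^k f^s with the relation f r = r⁻¹ f (a common convention, not taken from the paper):

```python
from fractions import Fraction
from itertools import product

def dihedral(n):
    """Elements of D_n written as (k, s) = r^k f^s."""
    return [(k, s) for k in range(n) for s in range(2)]

def mul(a, b, n):
    # From f r = r^-1 f: r^k1 f^s1 r^k2 f^s2 = r^(k1 + (-1)^s1 k2) f^(s1+s2).
    (k1, s1), (k2, s2) = a, b
    return ((k1 + (-1) ** s1 * k2) % n, (s1 + s2) % 2)

def commutativity_degree(n):
    """P(D_n): fraction of ordered pairs (a, b) with ab = ba."""
    G = dihedral(n)
    commuting = sum(mul(a, b, n) == mul(b, a, n) for a, b in product(G, G))
    return Fraction(commuting, len(G) ** 2)

print(commutativity_degree(3), commutativity_degree(4))  # 1/2 5/8
```

    For D_3 ≅ S_3 this recovers the well-known value 1/2, and for D_4 the value 5/8; the relative degree P(H,G) would replace the first factor by elements of a subgroup H.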

  5. Implementation of a finite-amplitude method in a relativistic meson-exchange model

    NASA Astrophysics Data System (ADS)

    Sun, Xuwei; Lu, Dinghui

    2017-08-01

    The finite-amplitude method is a feasible numerical approach to large-scale random phase approximation (RPA) calculations. It avoids the storage and calculation of residual interaction elements as well as the diagonalization of the RPA matrix, which would be prohibitive when the configuration space is huge. In this work we have implemented the finite-amplitude method in a relativistic meson-exchange mean field model with axial symmetry. The direct variation approach makes our FAM scheme capable of being extended to the multipole excitation case.

  6. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.
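
    The contrast between the two probabilistic models can be sketched in one dimension. The thickness statistics and correlation length below are assumed purely for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_samp = 100, 2000
s = np.linspace(0.0, 1.0, n_pts)         # normalized arc-length coordinate
mean_t, sd_t, corr_len = 1.5, 0.2, 0.2   # mm; assumed values

# Random-variable model: a single draw, constant over the whole wall.
rv = mean_t + sd_t * rng.standard_normal((n_samp, 1)) * np.ones((1, n_pts))

# Random-field model: exponential covariance, sampled via Cholesky.
C = sd_t ** 2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_pts))
rf = mean_t + rng.standard_normal((n_samp, n_pts)) @ L.T

# Both models share the pointwise mean and sd, but the field's *minimum*
# thickness (which drives peak wall stress) is systematically lower.
print(rv.min(axis=1).mean() > rf.min(axis=1).mean())  # True
```

    This is the qualitative reason a random-field description can change the computed peak-wall-stress distribution even when marginal statistics are identical.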

  7. Orthogonality preserving infinite dimensional quadratic stochastic operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akın, Hasan; Mukhamedov, Farrukh

    In the present paper, we consider a notion of orthogonality-preserving nonlinear operators. We introduce π-Volterra quadratic operators in finite and infinite dimensional settings. It is proved that any orthogonality-preserving quadratic operator on a finite dimensional simplex is a π-Volterra quadratic operator. In the infinite dimensional setting, we describe all π-Volterra operators in terms of orthogonality-preserving operators.

  8. Finite Set Control Transcription for Optimal Control Applications

    DTIC Science & Technology

    2009-05-01

    Figures: 1.1 The Parameters of x … 2.1 Categories of Optimization Algorithms … Nonlinear Programming (NLP) algorithm, such as SNOPT2 (hereafter called the optimizer). The Finite Set Control Transcription (FSCT) method is essentially a … artificial neural networks, genetic algorithms, or combinations thereof for analysis. Indeed, an actual biological neural network is an example of

  9. On the Use of a Mixed Gaussian/Finite-Element Basis Set for the Calculation of Rydberg States

    NASA Technical Reports Server (NTRS)

    Thuemmel, Helmar T.; Langhoff, Stephen (Technical Monitor)

    1996-01-01

    Configuration-interaction studies are reported for the Rydberg states of the helium atom using mixed Gaussian/finite-element (GTO/FE) one-particle basis sets. Standard Gaussian valence basis sets are employed, like those used extensively in quantum chemistry calculations. It is shown that the term values for high-lying Rydberg states of the helium atom can be obtained accurately (within 1 cm⁻¹), even for a small GTO set, by augmenting the n-particle space with configurations in which orthonormalized interpolation polynomials are singly occupied.

  10. Safety assessment of a shallow foundation using the random finite element method

    NASA Astrophysics Data System (ADS)

    Zaskórski, Łukasz; Puła, Wojciech

    2015-04-01

    A complex structure of soil and its random character are reasons why soil modeling is a cumbersome task. Heterogeneity of soil has to be considered even within a homogeneous layer of soil. Therefore the estimation of shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. The applicable standard (Eurocode 7) does not present any explicit method for evaluating characteristic values of soil parameters; only general guidelines on how these values should be estimated can be found. Hence many approaches to the assessment of characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Several approaches to the estimation of characteristic values of soil properties were compared by evaluating the values of the reliability index β that can be achieved by applying each of them. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for the influence of fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of a foundation in conjunction with the design approaches (DA) defined in Eurocode. RFEM was presented by Griffiths and Fenton (1993). It combines the deterministic finite element method, random field theory and Monte Carlo simulations. Random field theory allows a random character of soil parameters to be considered within a homogeneous layer of soil. For this purpose a soil property is treated as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of a given area. RFEM was applied to estimate which theoretical probability distribution fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The assessed probability distribution was applied to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence the friction angle and the cohesion were defined as random parameters and characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied for the cohesion. Other properties (Young's modulus, Poisson's ratio and unit weight) were assumed to be deterministic values because they have negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.

  11. Finite Topological Spaces as a Pedagogical Tool

    ERIC Educational Resources Information Center

    Helmstutler, Randall D.; Higginbottom, Ryan S.

    2012-01-01

    We propose the use of finite topological spaces as examples in a point-set topology class especially suited to help students transition into abstract mathematics. We describe how carefully chosen examples involving finite spaces may be used to reinforce concepts, highlight pathologies, and develop students' non-Euclidean intuition. We end with a…

  12. Computer-Oriented Calculus Courses Using Finite Differences.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    The so-called discrete approach in calculus instruction involves introducing topics from the calculus of finite differences and finite sums, both for motivation and as useful tools for applications of the calculus. In particular, it provides an ideal setting in which to incorporate computers into calculus courses. This approach has been…

  13. Computational work and time on finite machines.

    NASA Technical Reports Server (NTRS)

    Savage, J. E.

    1972-01-01

    Measures of the computational work and computational delay required by machines to compute functions are given. Exchange inequalities are developed for random access, tape, and drum machines to show that product inequalities between storage and time, number of drum tracks and time, number of bits in an address and time, etc., must be satisfied to compute finite functions on bounded machines.

  14. A finite-time exponent for random Ehrenfest gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moudgalya, Sanjay; Chandra, Sarthak; Jain, Sudhir R., E-mail: srjain@barc.gov.in

    2015-10-15

    We consider the motion of a system of free particles moving on a plane with regular hard polygonal scatterers arranged in a random manner. Calling this the Ehrenfest gas, which is known to have a zero Lyapunov exponent, we propose a finite-time exponent to characterize its dynamics. As the number of sides of the polygon goes to infinity, when the polygon tends to a circle, we recover the usual Lyapunov exponent for the Lorentz gas from the exponent proposed here. To obtain this result, we generalize the reflection law of a beam of rays incident on a polygonal scatterer in a way that the formula for the circular scatterer is recovered in the limit of an infinite number of vertices. Thus, chaos emerges from pseudochaos in an appropriate limit. - Highlights: • We present a finite-time exponent for particles moving in a plane containing polygonal scatterers. • The exponent found recovers the Lyapunov exponent in the limit of the polygon becoming a circle. • Our findings unify pseudointegrable and chaotic scattering via a generalized collision rule. • Stretch and fold : shuffle and cut :: Lyapunov : finite-time exponent :: fluid : granular mixing.
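
    The general notion of a finite-time exponent is easiest to see on a toy map rather than the billiard of the paper. The sketch below uses the logistic map at r = 4 (whose infinite-time Lyapunov exponent is known to be ln 2) as a stand-in, averaging the finite-time exponent over random initial conditions:

```python
import numpy as np

# Finite-time exponent of the logistic map F(x) = 4x(1-x): the average
# of log |F'(x)| along a trajectory of finite length T.
def finite_time_exponent(x0, T):
    x, acc = x0, 0.0
    for _ in range(T):
        acc += np.log(abs(4.0 - 8.0 * x))  # log |F'(x)|
        x = 4.0 * x * (1.0 - x)
    return acc / T

rng = np.random.default_rng(7)
vals = [finite_time_exponent(x0, 2000) for x0 in rng.uniform(0.01, 0.99, 50)]
print(round(float(np.mean(vals)), 3))  # approaches ln 2 ≈ 0.693 as T grows
```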

  15. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.
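
    The stabilized-reconstruction step can be illustrated on a generic finite-dimensional toy problem. The matrix A below is a random stand-in for the stacked source-to-measurement operators, not the Helmholtz forward map, and all sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy finite-dimensional setting: the source lives in a 10-dimensional
# subspace; each "frequency" contributes rows of a linear operator A.
n_coef, n_meas = 10, 40
A = rng.standard_normal((n_meas, n_coef))
src = rng.standard_normal(n_coef)
data = A @ src + 1e-6 * rng.standard_normal(n_meas)  # slightly noisy data

# Stable reconstruction by truncated SVD: discard tiny singular values
# so that measurement noise is not amplified.
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
keep = sv > 1e-8 * sv[0]
rec = Vt[keep].T @ ((U[:, keep].T @ data) / sv[keep])

print(float(np.linalg.norm(rec - src)) < 1e-3)  # True
```

    In the paper the singular value decomposition is that of the actual source-to-measurement operators, and the truncation level is tied to the choice of measurement frequencies.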

  16. Data-driven train set crash dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2017-02-01

    Traditional finite element (FE) methods are arguably expensive in computation/simulation of the train crash. High computational cost limits their direct applications in investigating dynamic behaviours of an entire train set for crashworthiness design and structural optimisation. On the contrary, multi-body modelling is widely used because of its low computational cost with the trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, which is a machine learning approach that extracts useful patterns of force-displacement curves and predicts a force-displacement relation in a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods and the result shows that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation and runs at the same level of efficiency.

  17. Error detection and data smoothing based on local procedures

    NASA Technical Reports Server (NTRS)

    Guerra, V. M.

    1974-01-01

    An algorithm is presented which is able to locate isolated bad points and correct them without contaminating the rest of the good data. This work has been greatly influenced and motivated by what is currently done in the manual loft. It is not within the scope of this work to handle small random errors characteristic of a noisy system, and it is therefore assumed that the bad points are isolated and relatively few when compared with the total number of points. Motivated by the desire to imitate the loftsman, a visual experiment was conducted to determine what is considered smooth data. This criterion is used to determine how much the data should be smoothed and to prove that this method produces such data. The method ultimately converges to a set of points that lies on the polynomial that interpolates the first and last points; however, convergence to such a set is definitely not the purpose of our algorithm. The proof of convergence is necessary to demonstrate that oscillation does not take place and that in a finite number of steps the method produces a set as smooth as desired.

  18. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    PubMed

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  19. Evolutionary Games with Randomly Changing Payoff Matrices

    NASA Astrophysics Data System (ADS)

    Yakushkina, Tatiana; Saakian, David B.; Bratus, Alexander; Hu, Chin-Kun

    2015-06-01

    Evolutionary games are used in various fields stretching from economics to biology. In most of these games a constant payoff matrix is assumed, although some works also consider dynamic payoff matrices. In this article we assume the possibility of switching the system between two regimes with different sets of payoff matrices. Potentially such a model can qualitatively describe the development of bacterial or cancer cells with a mutator gene present. A finite population evolutionary game is studied. The model describes the simplest version of annealed disorder in the payoff matrix and is exactly solvable at the large population limit. We analyze the dynamics of the model, and derive the equations for both the maximum and the variance of the distribution using the Hamilton-Jacobi equation formalism.
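
    The regime-switching idea can be sketched with discretized replicator dynamics. The two payoff matrices and the switching probability below are illustrative assumptions, not values from the article, and the simulation replaces the paper's analytical Hamilton-Jacobi treatment:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two payoff regimes for a 2-strategy game, switched at random each step
# (the simplest annealed disorder in the payoff matrix).
A1 = np.array([[1.0, 3.0], [2.0, 1.0]])
A2 = np.array([[2.0, 1.0], [1.0, 3.0]])

x = np.array([0.5, 0.5])  # strategy frequencies
dt = 0.01
for _ in range(5000):
    A = A1 if rng.random() < 0.5 else A2   # annealed regime switch
    f = A @ x                              # strategy fitnesses
    x = x + dt * x * (f - x @ f)           # discretized replicator step
    x = np.clip(x, 0.0, 1.0)
    x /= x.sum()                           # stay on the simplex

print(np.round(x, 3))
```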

  20. Quantifying ‘Causality’ in Complex Systems: Understanding Transfer Entropy

    PubMed Central

    Abdul Razak, Fatimah; Jensen, Henrik Jeldtoft

    2014-01-01

    ‘Causal’ direction is of great importance when dealing with complex systems. Often big volumes of data in the form of time series are available and it is important to develop methods that can inform about possible causal connections between the different observables. Here we investigate the ability of the Transfer Entropy measure to identify causal relations embedded in emergent coherent correlations. We do this by firstly applying Transfer Entropy to an amended Ising model. In addition we use a simple Random Transition model to test the reliability of Transfer Entropy as a measure of ‘causal’ direction in the presence of stochastic fluctuations. In particular we systematically study the effect of the finite size of data sets. PMID:24955766
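
    As a minimal sketch of the measure being tested (not the paper's amended-Ising setup): Transfer Entropy from Y to X with history length 1 can be estimated by plug-in counts over a binary time series. The driven example below, where x copies y's previous value 90% of the time, is invented for illustration.

```python
import math
from collections import Counter
import random

def transfer_entropy(x, y):
    """Plug-in estimate of T(Y -> X) in bits, history length 1:
    sum over (x_t, x_{t-1}, y_{t-1}) of p * log[ p(x_t|x_{t-1},y_{t-1}) / p(x_t|x_{t-1}) ]."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        # n factors cancel in the probability ratio, leaving pure counts
        te += (c / n) * math.log2((c * singles[x0]) / (pairs_xx[(x1, x0)] * pairs_xy[(x0, y0)]))
    return te

random.seed(2)
y = [random.randint(0, 1) for _ in range(5000)]
# x copies y's previous symbol with probability 0.9 -> information flows Y -> X
x = [0] + [yi if random.random() < 0.9 else 1 - yi for yi in y[:-1]]
te_yx = transfer_entropy(x, y)
te_xy = transfer_entropy(y, x)
```

    The estimator reports a clearly positive flow in the driving direction (about 1 - H(0.9) ≈ 0.53 bits) and near zero in the reverse direction; the paper's point is how finite data size degrades exactly this kind of contrast.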

  1. An uncertainty principle for unimodular quantum groups

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crann, Jason; Université Lille 1 - Sciences et Technologies, UFR de Mathématiques, Laboratoire de Mathématiques Paul Painlevé - UMR CNRS 8524, 59655 Villeneuve d'Ascq Cédex; Kalantar, Mehrdad, E-mail: jason-crann@carleton.ca, E-mail: mkalanta@math.carleton.ca

    2014-08-15

    We present a generalization of Hirschman's entropic uncertainty principle for locally compact Abelian groups to unimodular locally compact quantum groups. As a corollary, we strengthen a well-known uncertainty principle for compact groups, and generalize the relation to compact quantum groups of Kac type. We also establish the complementarity of finite-dimensional quantum group algebras. In the non-unimodular setting, we obtain an uncertainty relation for arbitrary locally compact groups using the relative entropy with respect to the Haar weight as the measure of uncertainty. We also show that when restricted to q-traces of discrete quantum groups, the relative entropy with respect to the Haar weight reduces to the canonical entropy of the random walk generated by the state.

  2. Resolvent-Techniques for Multiple Exercise Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  3. Random walks on combs

    NASA Astrophysics Data System (ADS)

    Durhuus, Bergfinnur; Jonsson, Thordur; Wheater, John F.

    2006-02-01

    We develop techniques to obtain rigorous bounds on the behaviour of random walks on combs. Using these bounds, we calculate exactly the spectral dimension of random combs with infinite teeth at random positions or teeth with random but finite length. We also calculate exactly the spectral dimension of some fixed non-translationally invariant combs. We relate the spectral dimension to the critical exponent of the mass of the two-point function for random walks on random combs, and compute mean displacements as a function of walk duration. We prove that the mean first passage time is generally infinite for combs with anomalous spectral dimension.
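
    As an illustration only (not the paper's rigorous construction): a random comb with finite random teeth can be simulated directly, with a walker stepping uniformly among graph neighbours. The spine length, tooth-length distribution, and walk length below are arbitrary choices for the sketch.

```python
import random

random.seed(3)
L = 200                                             # spine sites 0..L-1 (periodic)
teeth = [random.randint(0, 10) for _ in range(L)]   # random finite tooth lengths

def neighbors(site):
    """Neighbours of (spine position s, height h on the tooth attached at s)."""
    s, h = site
    if h == 0:                                      # on the spine
        nbrs = [((s - 1) % L, 0), ((s + 1) % L, 0)]
        if teeth[s] > 0:
            nbrs.append((s, 1))                     # step onto the tooth
    else:                                           # on a tooth
        nbrs = [(s, h - 1)]
        if h < teeth[s]:
            nbrs.append((s, h + 1))
    return nbrs

def walk(steps):
    pos = (0, 0)
    for _ in range(steps):
        pos = random.choice(neighbors(pos))
    return pos

end = [walk(500) for _ in range(200)]               # final positions of 200 walks
```

    Quantities such as return probabilities over increasing times (whose decay exponent gives the spectral dimension the paper computes exactly) can be accumulated from exactly this kind of simulation.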

  4. Measures with locally finite support and spectrum.

    PubMed

    Meyer, Yves F

    2016-03-22

    The goal of this paper is the construction of measures μ on R^n enjoying three conflicting but fortunately compatible properties: (i) μ is a sum of weighted Dirac masses on a locally finite set, (ii) the Fourier transform μ̂ of μ is also a sum of weighted Dirac masses on a locally finite set, and (iii) μ is not a generalized Dirac comb. We give surprisingly simple examples of such measures. These unexpected patterns strongly differ from quasicrystals, they provide us with unusual Poisson's formulas, and they might give us an unconventional insight into aperiodic order.

  5. Measures with locally finite support and spectrum

    PubMed Central

    Meyer, Yves F.

    2016-01-01

    The goal of this paper is the construction of measures μ on Rn enjoying three conflicting but fortunately compatible properties: (i) μ is a sum of weighted Dirac masses on a locally finite set, (ii) the Fourier transform μ^ of μ is also a sum of weighted Dirac masses on a locally finite set, and (iii) μ is not a generalized Dirac comb. We give surprisingly simple examples of such measures. These unexpected patterns strongly differ from quasicrystals, they provide us with unusual Poisson's formulas, and they might give us an unconventional insight into aperiodic order. PMID:26929358

  6. Solidification of a binary mixture

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1982-01-01

    The time dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time dependent Stefan problem is solved numerically using an implicit finite differencing algorithm with the method of lines. Specifically, the temporal operator is approximated via an implicit finite difference operator, resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration at each time. Since the resulting set of differential equations forms a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.
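
    For illustration only (not the paper's scheme): the method-of-lines idea — discretize one independent variable and integrate the resulting ODE system in the other — can be shown on the plain heat equation u_t = D u_xx with an explicit step, rather than the paper's implicit treatment of the coupled Stefan problem.

```python
import math

# u_t = D * u_xx on [0,1], u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x)
D, nx = 1.0, 41
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / D                      # step satisfying the explicit stability limit
u = [math.sin(math.pi * i * dx) for i in range(nx)]

for _ in range(400):                         # integrate to t = 0.1
    lap = [0.0] * nx
    for i in range(1, nx - 1):               # second-difference "lines" in space
        lap[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
    u = [u[i] + dt * D * lap[i] for i in range(nx)]
```

    The exact solution decays as exp(-pi^2 D t), so the midpoint value at t = 0.1 should sit near exp(-pi^2 * 0.1) ≈ 0.373, which the discrete solution reproduces closely.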

  7. Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.

    2018-05-01

    Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
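
    A minimal sketch of the baseline (uniform) case discussed in the record, with invented sizes: sample n uniform points on the unit square, link pairs within range r at the usual connectivity scaling, and count isolated nodes. The self-similar intensity measures studied in the paper would replace the uniform sampler.

```python
import math
import random

random.seed(4)

def isolated_count(n, r):
    """Number of isolated nodes in one uniform random geometric graph sample."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    iso = 0
    for i, p in enumerate(pts):
        if all(i == j or math.dist(p, q) > r for j, q in enumerate(pts)):
            iso += 1
    return iso

n = 300
r = math.sqrt(math.log(n) / (math.pi * n))   # connectivity-threshold scaling
counts = [isolated_count(n, r) for _ in range(50)]
mean_iso = sum(counts) / len(counts)
```

    At this scaling the interior expectation n * exp(-n * pi * r^2) equals 1; boundary effects push the observed mean somewhat higher, and for large n the count is approximately Poisson in the uniform case — the property the paper shows nonuniformity can break.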

  8. Entropy and long-range memory in random symbolic additive Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2016-06-01

    The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
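
    As an illustration of the quantity being estimated (not the paper's correlation-function method): the conditional entropy of a symbolic sequence given a finite context can be computed by a plug-in estimator over block counts. The i.i.d. fair-coin test sequence below is invented; its conditional entropy is exactly 1 bit.

```python
import math
from collections import Counter
import random

def conditional_entropy(seq, order=1):
    """Plug-in estimate of H(next symbol | previous `order` symbols), in bits."""
    ctx = Counter(tuple(seq[i:i + order]) for i in range(len(seq) - order))
    full = Counter(tuple(seq[i:i + order + 1]) for i in range(len(seq) - order))
    n = len(seq) - order
    h = 0.0
    for block, c in full.items():
        h -= (c / n) * math.log2(c / ctx[block[:-1]])   # p(block) * log p(last | context)
    return h

random.seed(5)
fair = [random.randint(0, 1) for _ in range(20000)]
h = conditional_entropy(fair)
```

    For a memoryless binary sequence the estimate sits just below 1 bit; for correlated chains of the kind the paper studies, increasing `order` reveals the memory as a drop in h.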

  9. Entropy and long-range memory in random symbolic additive Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2016-06-01

    The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.

  10. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to $-1$ as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on $[0,1]$; and (2), on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
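
    A small numerical sketch of the monkey model (parameters invented, alphabet far smaller than the asymptotic regime the paper treats): letter probabilities are taken as the spacings of a random division of the unit interval, word probabilities are enumerated exactly up to a length cutoff, and the rank-probability slope is read off on a log-log scale.

```python
import math
from itertools import product
import random

random.seed(6)
# letter probabilities = spacings from a random division of [0,1] (4 letters)
cuts = sorted(random.random() for _ in range(3))
probs = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
space = 0.25                                   # probability of the word-ending space
letters = [p * (1 - space) for p in probs]

# enumerate all words up to length 6 and compute their exact probabilities
words = []
for length in range(1, 7):
    for w in product(range(4), repeat=length):
        p = space
        for ch in w:
            p *= letters[ch]
        words.append(p)
words.sort(reverse=True)

# log-log slope of probability vs. rank between ranks 10 and 1000
r1, r2 = 10, 1000
slope = (math.log(words[r2 - 1]) - math.log(words[r1 - 1])) / (math.log(r2) - math.log(r1))
```

    With only 4 letters the slope sits somewhat below -1; the paper's first universality result is that it converges to -1 as the alphabet grows, for essentially any spacing distribution.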

  11. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources.

    PubMed

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-10-16

    In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without finite-key analyses are not suitable for practical use. The finite-key security of passive decoy-state (PDS) QKD protocols with two different unstable sources, type-II parametric down-conversion (PDC) and phase-randomized weak coherent pulses (WCPs), is analyzed in our paper. For each PDS QKD protocol we establish an optimization program and obtain the lower bound of the finite-key rate. Under some reasonable values of quantum setup parameters, the lower bounds of finite-key rates are simulated. The simulation results show that different fluctuations affect the key rates differently at different transmission distances. Moreover, the PDS QKD protocol with an unstable PDC source is more robust against both intensity and statistical fluctuations.

  12. Repeated Random Sampling in Year 5

    ERIC Educational Resources Information Center

    Watson, Jane M.; English, Lyn D.

    2016-01-01

    As an extension to an activity introducing Year 5 students to the practice of statistics, the software "TinkerPlots" made it possible to collect repeated random samples from a finite population to informally explore students' capacity to begin reasoning with a distribution of sample statistics. This article provides background for the…

  13. Quantum mechanics over sets

    NASA Astrophysics Data System (ADS)

    Ellerman, David

    2014-03-01

    In models of QM over finite fields (e.g., Schumacher's ``modal quantum theory'' MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets, resulting in quantum mechanics over sets, or QM/sets. This gives a full probability calculus (unlike MQT with only zero-one modalities) that leads to a fulsome theory of QM/sets including ``logical'' models of the double-slit experiment, Bell's Theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations, in contrast to the one function evaluation required in the quantum algorithm. This is quantum speedup, but with all the calculations over Z2, just like classical computing. This shows definitively that the source of quantum speedup is not in the greater power of computing over the complex numbers, and confirms the idea that the source is in superposition.

  14. EXODUS II: A finite element data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  15. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
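
    As a toy version of the common random number (CRN) idea (not the paper's coupled-path CRP construction): for a birth-death model 0 -> X at rate k, X -> 0 at rate gamma*X, a finite-difference sensitivity of E[X(T)] with respect to k is estimated by running Gillespie's SSA with the same seed for nominal and perturbed parameters. The model and numbers are invented for the sketch.

```python
import random

def ssa_birth_death(k, gamma, T, rng):
    """Gillespie SSA for 0 --k--> X, X --gamma*X--> 0; returns X(T), X(0)=0."""
    t, x = 0.0, 0
    while True:
        a1, a2 = k, gamma * x                 # propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)              # time to next reaction
        if t > T:
            return x
        if rng.random() * a0 < a1:
            x += 1                            # birth
        else:
            x -= 1                            # death

def sensitivity_crn(k, dk, gamma, T, n):
    """Finite-difference dE[X(T)]/dk using common random numbers:
    the nominal and perturbed paths of pair i reuse the same seed i."""
    est = []
    for i in range(n):
        xp = ssa_birth_death(k + dk, gamma, T, random.Random(i))
        xm = ssa_birth_death(k, gamma, T, random.Random(i))
        est.append((xp - xm) / dk)
    return sum(est) / n

s = sensitivity_crn(k=10.0, dk=1.0, gamma=1.0, T=5.0, n=1000)
```

    For this linear model E[X(T)] = (k/gamma)(1 - exp(-gamma*T)), so the true sensitivity is (1 - e^-5) ≈ 0.993; sharing seeds correlates the paired paths and shrinks the estimator variance relative to independent-seed finite differences, which is the effect the paper quantifies.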

  16. Noisy bases in Hilbert space: A new class of thermal coherent states and their properties

    NASA Technical Reports Server (NTRS)

    Vourdas, A.; Bishop, R. F.

    1995-01-01

    Coherent mixed states (or thermal coherent states) associated with the displaced harmonic oscillator at finite temperature, are introduced as a 'random' (or 'thermal' or 'noisy') basis in Hilbert space. A resolution of the identity for these states is proved and used to generalize the usual coherent state formalism for the finite temperature case. The Bargmann representation of an operator is introduced and its relation to the P and Q representations is studied. Generalized P and Q representations for the finite temperature case are also considered and several interesting relations among them are derived.

  17. Electroencephalography (EEG) forward modeling via H(div) finite element sources with focal interpolation.

    PubMed

    Pursiainen, S; Vorwerk, J; Wolters, C H

    2016-12-21

    The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.

  18. Simple scheme to implement decoy-state reference-frame-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Zhu, Jianrong; Wang, Qin

    2018-06-01

    We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in Z, X, and Y bases, decoy states are prepared in X and Y bases, and vacuum states are set to no bases. Different from the original decoy-state RFI-QKD scheme whose decoy states are prepared in Z, X and Y bases, in our scheme decoy states are only prepared in X and Y bases, which avoids the redundancy of decoy states in Z basis, saves the random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite pulses in a short time. Numerical simulations show that, considering the finite size effect with reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme exhibits at least comparable or even better performance than that of the original decoy-state RFI-QKD scheme. Especially, in terms of the resistance to the relative rotation of reference frames, our proposed scheme behaves much better than the original scheme, which has great potential to be adopted in current QKD systems.

  19. Kinetics of Cyclic Oxidation and Cracking and Finite Element Analysis of MA956 and Sapphire/MA956 Composite System

    NASA Technical Reports Server (NTRS)

    Lee, Kang N.; Arya, Vinod K.; Halford, Gary R.; Barrett, Charles A.

    1996-01-01

    Sapphire fiber-reinforced MA956 composites hold promise for significant weight savings and increased high-temperature structural capability, as compared to unreinforced MA956. As part of an overall assessment of the high-temperature characteristics of this material system, cyclic oxidation behavior was studied at 1093 C and 1204 C. Initially, both sets of coupons exhibited parabolic oxidation kinetics. Later, monolithic MA956 exhibited spallation and a linear weight loss, whereas the composite showed a linear weight gain without spallation. Weight loss of the monolithic MA956 resulted from the linking of a multiplicity of randomly oriented and closely spaced surface cracks that facilitated ready spallation. By contrast, cracking of the composite's oxide layer was nonintersecting and aligned nominally parallel with the orientation of the subsurface reinforcing fibers. Oxidative lifetime of monolithic MA956 was projected from the observed oxidation kinetics. Linear elastic, finite element continuum, and micromechanics analyses were performed on coupons of the monolithic and composite materials. Results of the analyses qualitatively agreed well with the observed oxide cracking and spallation behavior of both the MA956 and the Sapphire/MA956 composite coupons.

  20. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

    Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. On the extreme value statistics of normal random matrices and 2D Coulomb gases: Universality and finite N corrections

    NASA Astrophysics Data System (ADS)

    Ebrahimi, R.; Zohren, S.

    2018-03-01

    In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite N corrections of the above distributions, which is important for practical applications given the slow convergence. Another interesting aspect of this work is the fact that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
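
    An illustrative numerical check (not from the paper, sizes and seeds arbitrary): for the Gaussian normal (Ginibre) ensemble scaled so the eigenvalues fill the unit disk, the largest eigenvalue modulus concentrates just above 1, approaching its Gumbel limit only slowly in N — the slow convergence that motivates the paper's finite-N corrections.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200

def max_modulus():
    """Largest eigenvalue modulus of one scaled complex Ginibre matrix."""
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    return np.abs(np.linalg.eigvals(G)).max()

samples = np.array([max_modulus() for _ in range(20)])
```

    The sample mean sits slightly above the unit-disk edge at 1, with small sample-to-sample spread at this N.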

  2. The Difference Calculus and the Negative Binomial Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Kimiko o; Shenton, LR

    2007-01-01

    In a previous paper we state the dominant term in the third central moment of the maximum likelihood estimator k̂ of the parameter k in the negative binomial probability function, where the probability generating function is (p + 1 − pt)^(−k). A partial sum of the series Σ 1/(k + x)^3 is involved, where x is a negative binomial random variate. In expectation this sum can only be found numerically using the computer. Here we give a simple definite integral on (0,1) for the generalized case. This means that now we do have a valid expression for √β₁₁(k) and √β₁₁(p). In addition we use the finite difference operator Δ, and E = 1 + Δ, to set up formulas for low order moments. Other examples of the operators are quoted relating to the orthogonal set of polynomials associated with the negative binomial probability function used as a weight function.

  3. Efficiency and formalism of quantum games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.F.; Johnson, Neil F.

    We show that quantum games are more efficient than classical games and provide a saturated upper bound for this efficiency. We also demonstrate that the set of finite classical games is a strict subset of the set of finite quantum games. Our analysis is based on a rigorous formulation of quantum games, from which quantum versions of the minimax theorem and the Nash equilibrium theorem can be deduced.

  4. Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.

    PubMed

    Jiang, Z; Chen, W; Burkhart, C

    2013-11-01

    Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in literature to generate (reconstruct) 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically two-point correlation function and cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other method models the microstructure using stochastic models like a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices the accuracy due to issues in numerical implementations. A hybrid optimization approach of modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
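
    A minimal sketch of the Gaussian-random-field route mentioned in the record (2D rather than 3D, parameters invented): filter white noise with a Gaussian spectral kernel to impose a correlation length, then threshold at a quantile so the binary microstructure hits a target volume fraction exactly.

```python
import numpy as np

rng = np.random.default_rng(8)
n, corr_len, vol_frac = 128, 8.0, 0.3      # grid size, correlation length (px), target fraction

# Gaussian random field: smooth white noise with a Gaussian filter in Fourier space
white = rng.standard_normal((n, n))
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
filt = np.exp(-2 * (np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
field = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))

# threshold at the (1 - vol_frac) quantile -> phase 1 occupies vol_frac of the cell
level = np.quantile(field, 1 - vol_frac)
phase = field > level
```

    This is the fast direct-generation branch the paper contrasts with correlation-matching optimization; the hybrid approach in the record uses such a field as a starting point and then optimizes toward target correlation functions.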

  5. Propagation of finite amplitude sound through turbulence: Modeling with geometrical acoustics and the parabolic approximation

    NASA Astrophysics Data System (ADS)

    Blanc-Benon, Philippe; Lipkens, Bart; Dallois, Laurent; Hamilton, Mark F.; Blackstock, David T.

    2002-01-01

    Sonic boom propagation can be affected by atmospheric turbulence. It has been shown that turbulence affects the perceived loudness of sonic booms, mainly by changing its peak pressure and rise time. The models reported here describe the nonlinear propagation of sound through turbulence. Turbulence is modeled as a set of individual realizations of a random temperature or velocity field. In the first model, linear geometrical acoustics is used to trace rays through each realization of the turbulent field. A nonlinear transport equation is then derived along each eigenray connecting the source and receiver. The transport equation is solved by a Pestorius algorithm. In the second model, the KZK equation is modified to account for the effect of a random temperature field and it is then solved numerically. Results from numerical experiments that simulate the propagation of spark-produced N waves through turbulence are presented. It is observed that turbulence decreases, on average, the peak pressure of the N waves and increases the rise time. Nonlinear distortion is less when turbulence is present than without it. The effects of random vector fields are stronger than those of random temperature fields. The location of the caustics and the deformation of the wave front are also presented. These observations confirm the results from the model experiment in which spark-produced N waves are used to simulate sonic boom propagation through a turbulent atmosphere.

  6. Propagation of finite amplitude sound through turbulence: modeling with geometrical acoustics and the parabolic approximation.

    PubMed

    Blanc-Benon, Philippe; Lipkens, Bart; Dallois, Laurent; Hamilton, Mark F; Blackstock, David T

    2002-01-01

    Sonic boom propagation can be affected by atmospheric turbulence. It has been shown that turbulence affects the perceived loudness of sonic booms, mainly by changing its peak pressure and rise time. The models reported here describe the nonlinear propagation of sound through turbulence. Turbulence is modeled as a set of individual realizations of a random temperature or velocity field. In the first model, linear geometrical acoustics is used to trace rays through each realization of the turbulent field. A nonlinear transport equation is then derived along each eigenray connecting the source and receiver. The transport equation is solved by a Pestorius algorithm. In the second model, the KZK equation is modified to account for the effect of a random temperature field and it is then solved numerically. Results from numerical experiments that simulate the propagation of spark-produced N waves through turbulence are presented. It is observed that turbulence decreases, on average, the peak pressure of the N waves and increases the rise time. Nonlinear distortion is less when turbulence is present than without it. The effects of random vector fields are stronger than those of random temperature fields. The location of the caustics and the deformation of the wave front are also presented. These observations confirm the results from the model experiment in which spark-produced N waves are used to simulate sonic boom propagation through a turbulent atmosphere.

  7. Evaluation of Strip Footing Bearing Capacity Built on the Anthropogenic Embankment by Random Finite Element Method

    NASA Astrophysics Data System (ADS)

    Pieczynska-Kozlowska, Joanna

    2014-05-01

    One of a geotechnical problem in the area of Wroclaw is an anthropogenic embankment layer delaying to the depth of 4-5m, arising as a result of historical incidents. In such a case an assumption of bearing capacity of strip footing might be difficult. The standard solution is to use a deep foundation or foundation soil replacement. However both methods generate significant costs. In the present paper the authors focused their attention on the influence of anthropogenic embankment variability on bearing capacity. Soil parameters were defined on the basis of CPT test and modeled as 2D anisotropic random fields and the assumption of bearing capacity were made according deterministic finite element methods. Many repeated of the different realizations of random fields lead to stable expected value of bearing capacity. The algorithm used to estimate the bearing capacity of strip footing was the random finite element method (e.g. [1]). In traditional approach of bearing capacity the formula proposed by [2] is taken into account. qf = c'Nc + qNq + 0.5γBN- γ (1) where: qf is the ultimate bearing stress, cis the cohesion, qis the overburden load due to foundation embedment, γ is the soil unit weight, Bis the footing width, and Nc, Nq and Nγ are the bearing capacity factors. The method of evaluation the bearing capacity of strip footing based on finite element method incorporate five parameters: Young's modulus (E), Poisson's ratio (ν), dilation angle (ψ), cohesion (c), and friction angle (φ). In the present study E, ν and ψ are held constant while c and φ are randomized. Although the Young's modulus does not affect the bearing capacity it governs the initial elastic response of the soil. Plastic stress redistribution is accomplished using a viscoplastic algorithm merge with an elastic perfectly plastic (Mohr - Coulomb) failure criterion. In this paper a typical finite element mesh was assumed with 8-node elements consist in 50 columns and 20 rows. 
The footing width B occupies 10 elements of 0.1 x 0.1 m each, and the footing is placed at the center of the mesh. Figure 1 shows the mesh used in the probabilistic bearing capacity analysis. Figure 1 - Mesh used in the analyses. REFERENCES: [1] Fenton, G.A., Griffiths, D.V. (2008) Risk Assessment in Geotechnical Engineering, John Wiley & Sons, New York. [2] Terzaghi, K. (1943) Theoretical Soil Mechanics, John Wiley & Sons, New York.
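    The deterministic formula (1) is straightforward to evaluate numerically. The sketch below computes qf using the Prandtl-Reissner expressions for Nc and Nq and the Vesic form of Nγ; the abstract does not specify which Nγ expression is used, so that choice, and all input values, are illustrative assumptions.

```python
import math

def bearing_capacity_factors(phi_deg):
    """Classical bearing capacity factors for friction angle phi (degrees).

    Nq and Nc follow the Prandtl-Reissner expressions; Ngamma uses the
    Vesic form, one of several proposed in the literature (an assumption
    here, since the abstract does not specify it).
    """
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1.0) / math.tan(phi) if phi_deg > 0 else 5.14
    Ngamma = 2.0 * (Nq + 1.0) * math.tan(phi)
    return Nc, Nq, Ngamma

def ultimate_bearing_stress(c, q, gamma, B, phi_deg):
    """Eq. (1): qf = c' Nc + q Nq + 0.5 gamma B Ngamma."""
    Nc, Nq, Ng = bearing_capacity_factors(phi_deg)
    return c * Nc + q * Nq + 0.5 * gamma * B * Ng

# Illustrative inputs: c' = 10 kPa, no embedment, gamma = 18 kN/m3, B = 1 m, phi = 30 deg
qf = ultimate_bearing_stress(c=10.0, q=0.0, gamma=18.0, B=1.0, phi_deg=30.0)
```

    In the random finite element method this deterministic evaluation is replaced by repeated FE analyses over realizations of the random c and φ fields; the formula remains useful as a sanity check on the mean response.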

  8. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the presence of complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR. 
• Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computation complexity.
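    As a minimal illustration of the least-square HDMR idea (not the paper's adaptive multi-element method), the sketch below fits a first-order HDMR, f ≈ f0 + f1(x1) + f2(x2), to a toy two-dimensional function, expanding each component in Legendre polynomials and determining the coefficients by least squares. The test function and truncation degree are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy model: additive parts plus a weak interaction term that a
    # first-order HDMR cannot capture.
    return np.exp(0.5 * x[:, 0]) + x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 1]

deg, n_train = 4, 400
X = rng.uniform(-1.0, 1.0, size=(n_train, 2))
y = f(X)

def design(X):
    # Columns: constant term f0, then Legendre expansions (orthogonal on
    # [-1, 1]) of the univariate components f1(x1) and f2(x2).
    cols = [np.ones(len(X))]
    for d in range(X.shape[1]):
        for k in range(1, deg + 1):
            cols.append(np.polynomial.legendre.Legendre.basis(k)(X[:, d]))
    return np.column_stack(cols)

# Least-square determination of the HDMR coefficients.
coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Out-of-sample relative error; small because the interaction is weak.
Xt = rng.uniform(-1.0, 1.0, size=(500, 2))
rel_err = np.linalg.norm(design(Xt) @ coef - f(Xt)) / np.linalg.norm(f(Xt))
```

    The multi-element version of the paper would additionally split the random domain into subdomains and build one such local approximation per subdomain.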

  9. Random phase approximation and cluster mean field studies of hard core Bose Hubbard model

    NASA Astrophysics Data System (ADS)

    Alavani, Bhargav K.; Gaude, Pallavi P.; Pai, Ramesh V.

    2018-04-01

    We investigate zero temperature and finite temperature properties of the Bose Hubbard Model in the hard core limit using Random Phase Approximation (RPA) and Cluster Mean Field Theory (CMFT). We show that our RPA calculations are able to capture quantum and thermal fluctuations significantly better than CMFT.

  10. On the validation of seismic imaging methods: Finite frequency or ray theory?

    DOE PAGES

    Maceira, Monica; Larmat, Carene; Porritt, Robert W.; ...

    2015-01-23

    We investigate the merits of the more recently developed finite-frequency approach to tomography against the more traditional and approximate ray-theoretical approach for state-of-the-art seismic models developed for western North America. To this end, we employ the spectral element method to assess the agreement between observations on real data and measurements made on synthetic seismograms predicted by the models under consideration. We check for phase delay agreement as well as waveform cross-correlation values. Based on statistical analyses of S-wave phase delay measurements, finite frequency shows an improvement over ray theory. Random sampling using cross-correlation values identifies regions where synthetic seismograms computed with ray-theory and finite-frequency models differ the most. Our study suggests that finite-frequency approaches to seismic imaging exhibit measurable improvement for pronounced low-velocity anomalies such as mantle plumes.

  11. Pion properties at finite isospin chemical potential with isospin symmetry breaking

    NASA Astrophysics Data System (ADS)

    Wu, Zuqing; Ping, Jialun; Zong, Hongshi

    2017-12-01

    Pion properties at finite temperature, finite isospin and baryon chemical potentials are investigated within the SU(2) NJL model. Using the mean field approximation for quarks and the random phase approximation for mesons, we calculate, for the first time, the pion mass, the decay constant and the phase diagram with different masses for the u quark and the d quark, related to QCD corrections. Our results show an asymmetry between μI < 0 and μI > 0 in the phase diagram, and different values for the charged pion mass (or decay constant) and the neutral pion mass (or decay constant) at finite temperature and finite isospin chemical potential. This is caused by isospin symmetry breaking, which originates from the different quark masses. Supported by National Natural Science Foundation of China (11175088, 11475085, 11535005, 11690030) and the Fundamental Research Funds for the Central Universities (020414380074)

  12. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources

    PubMed Central

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-01-01

    In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without finite-key analyses are not suitable for practical use. The finite-key security of passive decoy-state (PDS) QKD protocols with two different unstable sources, a type-II parametric down-conversion (PDC) source and phase-randomized weak coherent pulses (WCPs), is analyzed in our paper. For each PDS QKD protocol we establish an optimization program and obtain a lower bound on the finite-key rate. Under some reasonable values of the quantum setup parameters, the lower bounds of the finite-key rates are simulated. The simulation results show that the effects of the different fluctuations on the key rates differ with transmission distance. Moreover, the PDS QKD protocol with an unstable PDC source is more robust against intensity fluctuations and statistical fluctuations. PMID:26471947

  13. Finite element analysis and genetic algorithm optimization design for the actuator placement on a large adaptive structure

    NASA Astrophysics Data System (ADS)

    Sheng, Lizeng

    The dissertation focuses on one of the major research needs in the area of adaptive/intelligent/smart structures, the development and application of finite element analysis and genetic algorithms for optimal design of large-scale adaptive structures. We first review some basic concepts in finite element method and genetic algorithms, along with the research on smart structures. Then we propose a solution methodology for solving a critical problem in the design of a next generation of large-scale adaptive structures---optimal placements of a large number of actuators to control thermal deformations. After briefly reviewing the three most frequently used general approaches to derive a finite element formulation, the dissertation presents techniques associated with general shell finite element analysis using flat triangular laminated composite elements. The element used here has three nodes and eighteen degrees of freedom and is obtained by combining a triangular membrane element and a triangular plate bending element. The element includes the coupling effect between membrane deformation and bending deformation. The membrane element is derived from the linear strain triangular element using Cook's transformation. The discrete Kirchhoff triangular (DKT) element is used as the plate bending element. For completeness, a complete derivation of the DKT is presented. Geometrically nonlinear finite element formulation is derived for the analysis of adaptive structures under the combined thermal and electrical loads. Next, we solve the optimization problems of placing a large number of piezoelectric actuators to control thermal distortions in a large mirror in the presence of four different thermal loads. We then extend this to a multi-objective optimization problem of determining only one set of piezoelectric actuator locations that can be used to control the deformation in the same mirror under the action of any one of the four thermal loads. 
A series of genetic algorithms, GA Versions 1, 2 and 3, were developed to find the optimal locations of piezoelectric actuators from on the order of 10^21 to 10^56 candidate placements. Introducing a variable-population approach, we improve the flexibility of the selection operation in genetic algorithms. Incorporating mutation and hill climbing into micro-genetic algorithms, we are able to develop a more efficient genetic algorithm. Through extensive numerical experiments, we find that the design search space for the optimal placements of a large number of actuators is highly multi-modal and that the most distinctive property of genetic algorithms is their robustness: they give results that are random but with only slight variability. The genetic algorithms can be used to obtain an adequate solution using a limited number of evaluations. To get the highest-quality solution, multiple runs with different random seeds are necessary. The investigation time can be significantly reduced using very coarse-grained parallel computing. Overall, the methodology of using finite element analysis and genetic algorithm optimization provides a robust solution approach for the challenging problem of optimally placing a large number of actuators in the design of the next generation of adaptive structures.
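    The mutation-plus-elitism micro-genetic idea described above can be sketched on a toy placement problem: choose k actuator sites out of n candidates that best cancel a prescribed distortion in a least-squares sense. The influence matrix, micro-population size and all parameters below are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sites, n_nodes, k = 20, 40, 5              # candidate sites, mesh nodes, actuators
infl = rng.normal(size=(n_nodes, n_sites))   # hypothetical actuator influence matrix
target = rng.normal(size=n_nodes)            # thermal distortion to be cancelled

def fitness(sites):
    """Residual distortion norm after the best least-squares actuation
    with the chosen sites (lower is better)."""
    A = infl[:, sites]
    coeffs = np.linalg.lstsq(A, target, rcond=None)[0]
    return np.linalg.norm(target - A @ coeffs)

def mutate(sites):
    """Swap one chosen site for a random unused one."""
    s = list(sites)
    unused = [i for i in range(n_sites) if i not in s]
    s[rng.integers(k)] = unused[rng.integers(len(unused))]
    return sorted(s)

# Micro-population: one elite individual plus 7 mutants per generation;
# keeping the elite makes the search a greedy hill climb over placements.
best = sorted(rng.choice(n_sites, k, replace=False).tolist())
f0 = fitness(best)
for _ in range(200):
    pop = [best] + [mutate(best) for _ in range(7)]
    best = min(pop, key=fitness)
f_best = fitness(best)
```

    A full GA would add crossover and a variable population; the elitist loop above is the "mutation plus hill climbing" core in its simplest form.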

  14. Hierarchy of Certain Types of DNA Splicing Systems

    NASA Astrophysics Data System (ADS)

    Yusof, Yuhani; Sarmin, Nor Haniza; Goode, T. Elizabeth; Mahmud, Mazri; Heng, Fong Wan

    A Head splicing system (H-system) consists of a finite set of strings (words) written over a finite alphabet, along with a finite set of rules that act on the strings by iterated cutting and pasting to create a splicing language. An interpretation aligned with Tom Head's original idea is one in which the strings represent double-stranded deoxyribonucleic acid (dsDNA) and the rules represent the cutting and pasting actions of restriction enzymes and ligase, respectively. A new way of writing the rule sets is adopted so as to make the biological interpretation transparent. This approach is used in a formal language-theoretic analysis of the hierarchy of certain classes of splicing systems, namely simple, semi-simple and semi-null splicing systems. The relations between such systems and their associated languages are given as theorems, corollaries and counterexamples.

  15. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
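    To illustrate how sampling-based parameter estimation handles finite-size statistical fluctuations (generically, not with this paper's specific estimator), the sketch below upper-bounds an error probability from Bernoulli-sampled data using Hoeffding's inequality, which applies to binomially distributed counts:

```python
import math

def hoeffding_upper_bound(k_err, m_sampled, eps=1e-10):
    """Upper-bound the underlying error probability from m Bernoulli samples.

    Hoeffding's inequality gives P(p > p_hat + t) <= exp(-2 m t^2), so with
    failure probability eps we may take t = sqrt(ln(1/eps) / (2 m)).
    """
    p_hat = k_err / m_sampled
    t = math.sqrt(math.log(1.0 / eps) / (2.0 * m_sampled))
    return min(1.0, p_hat + t)

# The statistical penalty t shrinks as the sample grows, so the bound
# tightens toward the observed rate of 0.5%:
b1 = hoeffding_upper_bound(50, 10_000)     # small sample, loose bound
b2 = hoeffding_upper_bound(500, 100_000)   # larger sample, tighter bound
```

    In a finite-key analysis, bounds of this kind feed directly into the key-rate formula; tighter estimation (fewer estimated parameters, as the paper proposes) translates into a higher key rate at fixed data size.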

  16. All the noncontextuality inequalities for arbitrary prepare-and-measure experiments with respect to any fixed set of operational equivalences

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.; Wolfe, Elie

    2018-06-01

    Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.

  17. The modelling of the flow-induced vibrations of periodic flat and axial-symmetric structures with a wave-based method

    NASA Astrophysics Data System (ADS)

    Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.

    2018-06-01

    The stochastic response of periodic flat and axial-symmetric structures, subjected to random and spatially-correlated loads, is analysed here through an approach based on the combination of a wave finite element method and a transfer matrix method. Despite its lower computational cost, the present approach retains the accuracy of classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies without increasing the computation time. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.

  18. Full Wave Analysis of RF Signal Attenuation in a Lossy Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2004-12-06

    We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.

  19. Complex networks: Effect of subtle changes in nature of randomness

    NASA Astrophysics Data System (ADS)

    Goswami, Sanchari; Biswas, Soham; Sen, Parongama

    2011-03-01

    In two different classes of network models, namely, the Watts Strogatz type and the Euclidean type, subtle changes have been introduced in the randomness. In the Watts Strogatz type network, rewiring has been done in different ways and although the qualitative results remain the same, finite differences in the exponents are observed. In the Euclidean type networks, where at least one finite phase transition occurs, two models differing in a similar way have been considered. The results show a possible shift in one of the phase transition points but no change in the values of the exponents. The WS and Euclidean type models are equivalent for extreme values of the parameters; we compare their behaviour for intermediate values.

  20. Mean-Potential Law in Evolutionary Games

    NASA Astrophysics Data System (ADS)

    Nałecz-Jawecki, Paweł; Miekisz, Jacek

    2018-01-01

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for the computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for the evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
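    The role of a potential-like quantity in fixation problems can be illustrated with the standard closed-form fixation probability for a one-dimensional birth-death chain with two absorbing states (a textbook construction, not the Letter's specific formalism):

```python
import numpy as np

def fixation_probability(gamma, i0=1):
    """Fixation probability at state N for a birth-death chain on {0, ..., N}
    with absorbing ends, where gamma[k-1] = T^-(k) / T^+(k) for the interior
    states k = 1, ..., N-1 and the walk starts from i0 mutants.

    rho_i0 = (1 + sum_{j<i0} prod_{k<=j} gamma_k) / (1 + sum_{j<N} prod_{k<=j} gamma_k)
    """
    prods = np.cumprod(gamma)
    return (1.0 + prods[:i0 - 1].sum()) / (1.0 + prods.sum())

# Moran process with constant relative fitness r: gamma_k = 1/r for every k,
# which recovers the classical result (1 - 1/r) / (1 - r^-N).
N, r = 20, 1.1
rho = fixation_probability(np.full(N - 1, 1.0 / r))
rho_exact = (1.0 - 1.0 / r) / (1.0 - r ** -N)
```

    The running products of backward-to-forward rate ratios play the role of an exponentiated potential: fixation is governed by the accumulated "height" between the absorbing states.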

  1. On the genealogy of branching random walks and of directed polymers

    NASA Astrophysics Data System (ADS)

    Derrida, Bernard; Mottishaw, Peter

    2016-08-01

    It is well known that the mean-field theory of directed polymers in a random medium exhibits replica symmetry breaking with a distribution of overlaps which consists of two delta functions. Here we show that the leading finite-size correction to this distribution of overlaps has a universal character which can be computed explicitly. Our results can also be interpreted as genealogical properties of branching Brownian motion or of branching random walks.

  2. Numerical Analysis of Solids at Failure

    DTIC Science & Technology

    2011-08-20

    failure analyses include the formulation of invariant finite elements for thin Kirchhoff rods, and preliminary initial studies of growth in...analysis of the failure of other structural/mechanical systems, including the finite element modeling of thin Kirchhoff rods and the constitutive...algorithm based on the connectivity graph of the underlying finite element mesh. In this setting, the discontinuities are defined by fronts propagating

  3. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models-using a constant underlying global model and a Gaussian correlation function-yield comparable results.
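    A minimal kriging predictor of the kind compared in the paper, with a constant underlying global model and a Gaussian correlation function, can be sketched as follows. The correlation parameter theta is fixed by assumption here, whereas in practice it would be estimated from the data (e.g. by maximum likelihood); the test function is also an illustrative assumption.

```python
import numpy as np

def kriging_fit(X, y, theta=30.0, nugget=1e-6):
    """Kriging with a constant global model beta and Gaussian correlation
    R_ij = exp(-theta * ||x_i - x_j||^2); a small nugget stabilizes R."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    R = np.exp(-theta * d2) + nugget * np.eye(len(X))
    Rinv = np.linalg.inv(R)
    ones = np.ones(len(X))
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)   # GLS estimate of the trend
    return X, Rinv @ (y - beta * ones), beta, theta

def kriging_predict(model, Xnew):
    X, w, beta, theta = model
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return beta + np.exp(-theta * d2) @ w

rng = np.random.default_rng(0)
X = rng.uniform(size=(15, 2))                 # sampled "computer experiments"
y = np.sin(4.0 * X[:, 0]) + X[:, 1] ** 2      # deterministic response
model = kriging_fit(X, y)
yhat = kriging_predict(model, X)              # nearly interpolates the training data
```

    Unlike a second-order response surface, this predictor passes (up to the nugget) through every sample point, which is why kriging is attractive for deterministic computer analyses.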

  4. Opinion dynamics in a group-based society

    NASA Astrophysics Data System (ADS)

    Gargiulo, F.; Huet, S.

    2010-09-01

    Many models have been proposed to analyze the evolution of opinion structure due to the interaction of individuals in their social environment. Such models analyze the spreading of ideas both in completely interacting backgrounds and on social networks, where each person has a finite set of interlocutors. In this paper we analyze the reciprocal feedback between the opinions of the individuals and the structure of the interpersonal relationships at the level of community structures. For this purpose we define a group-based random network and we study how this structure co-evolves with opinion dynamics processes. We observe that the adaptive network structure affects the opinion dynamics process, helping consensus formation. The results also show interesting behaviors with regard to the size distribution of the groups and their correlation with the opinion structure.

  5. Finite deformation of incompressible fiber-reinforced elastomers: A computational micromechanics approach

    NASA Astrophysics Data System (ADS)

    Moraleda, Joaquín; Segurado, Javier; LLorca, Javier

    2009-09-01

    The in-plane finite deformation of incompressible fiber-reinforced elastomers was studied using computational micromechanics. Composite microstructure was made up of a random and homogeneous dispersion of aligned rigid fibers within a hyperelastic matrix. Different matrices (Neo-Hookean and Gent), fibers (monodisperse or polydisperse, circular or elliptical section) and reinforcement volume fractions (10-40%) were analyzed through the finite element simulation of a representative volume element of the microstructure. A successive remeshing strategy was employed when necessary to reach the large deformation regime in which the evolution of the microstructure influences the effective properties. The simulations provided for the first time "quasi-exact" results of the in-plane finite deformation for this class of composites, which were used to assess the accuracy of the available homogenization estimates for incompressible hyperelastic composites.

  6. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    PubMed Central

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656
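    The clustering-based sample-size inflation can be illustrated with the classical design-effect formula DEFF = 1 + (m - 1) * ICC, optionally shrunk by a finite-population correction when the number of clusters is limited. This is a generic textbook adjustment, sketched here as an illustration rather than the paper's own non-parametric procedure:

```python
import math

def inflate_sample_size(n_srs, icc, cluster_size, n_clusters_pop=None):
    """Inflate a simple-random-sample size for a two-stage cluster design.

    DEFF = 1 + (m - 1) * ICC accounts for within-cluster correlation; the
    optional finite-population correction n / (1 + n_clusters / N_clusters)
    is an illustrative adjustment for a finite number of clusters.
    """
    deff = 1.0 + (cluster_size - 1.0) * icc
    n = n_srs * deff
    if n_clusters_pop is not None:
        n_clusters_needed = n / cluster_size
        n /= 1.0 + n_clusters_needed / n_clusters_pop
    return math.ceil(n)

# 100 subjects under SRS, ICC = 0.1, clusters of 10:
n_inflated = inflate_sample_size(100, 0.1, 10)
# Same design when only 50 clusters (e.g. villages) exist in the population:
n_finite = inflate_sample_size(100, 0.1, 10, n_clusters_pop=50)
```

    In an LQAS context the inflated n then enters the usual binary classification rule, so the decision thresholds keep their nominal error rates despite the clustering.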

  7. Extending cluster lot quality assurance sampling designs for surveillance programs.

    PubMed

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Two-point correlation function for Dirichlet L-functions

    NASA Astrophysics Data System (ADS)

    Bogomolny, E.; Keating, J. P.

    2013-03-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.

  9. Estimation of population mean in the presence of measurement error and non response under stratified random sampling

    PubMed Central

    Shabbir, Javid

    2018-01-01

    In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519

  10. Localization on Quantum Graphs with Random Vertex Couplings

    NASA Astrophysics Data System (ADS)

    Klopp, Frédéric; Pankrashkin, Konstantin

    2008-05-01

    We consider Schrödinger operators on a class of periodic quantum graphs with randomly distributed Kirchhoff coupling constants at all vertices. We obtain necessary conditions for localization on quantum graphs in terms of finite volume criteria for some energy-dependent discrete Hamiltonians. These conditions hold in the strong disorder limit and at the spectral edges.

  11. Quantum random walks on congested lattices and the effect of dephasing.

    PubMed

    Motes, Keith R; Gilchrist, Alexei; Rohde, Peter P

    2016-01-27

    We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker's direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices.

  12. Singularity computations. [finite element methods for elastoplastic flow

    NASA Technical Reports Server (NTRS)

    Swedlow, J. L.

    1978-01-01

    Direct descriptions of the structure of a singularity would describe the radial and angular distributions of the field quantities as explicitly as practicable along with some measure of the intensity of the singularity. This paper discusses such an approach based on recent development of numerical methods for elastoplastic flow. Attention is restricted to problems where one variable or set of variables is finite at the origin of the singularity but a second set is not.

  13. Rectifiability of Line Defects in Liquid Crystals with Variable Degree of Orientation

    NASA Astrophysics Data System (ADS)

    Alper, Onur

    2018-04-01

    In [2], Hardt, Lin and the author proved that the defect set of minimizers of the modified Ericksen energy for nematic liquid crystals consists locally of a finite union of isolated points and Hölder continuous curves with finitely many crossings. In this article, we show that each Hölder continuous curve in the defect set is of finite length. Hence, locally, the defect set is rectifiable. For the most part, the proof closely follows the work of De Lellis et al. (Rectifiability and upper Minkowski bounds for singularities of harmonic Q-valued maps, arXiv:1612.01813, 2016) on harmonic Q-valued maps. The blow-up analysis in Alper et al. (Calc Var Partial Differ Equ 56(5):128, 2017) allows us to simplify the covering arguments in [11] and locally estimate the length of line defects in a geometric fashion.

  14. Criticality in finite dynamical networks

    NASA Astrophysics Data System (ADS)

    Rohlf, Thimo; Gulbahce, Natali; Teuscher, Christof

    2007-03-01

    It has been shown analytically and experimentally that both random Boolean and random threshold networks show a transition from ordered to chaotic dynamics at a critical average connectivity Kc in the thermodynamic limit [1]. By looking at the statistical distributions of damage spreading (damage sizes), we go beyond this extensively studied mean-field approximation. We study the scaling properties of damage size distributions as a function of system size N and initial perturbation size d(t=0). We present numerical evidence that another characteristic point, Kd, exists for finite system sizes, where the expectation value of damage spreading in the network is independent of the system size N. Further, the probability of obtaining critical networks is investigated for a given system size N and average connectivity K. Our results suggest that, for finite-size dynamical networks, the phase space structure is very complex and may not exhibit a sharp order-disorder transition. Finally, we discuss the implications of our findings for evolutionary processes and learning applied to networks which solve specific computational tasks. [1] Derrida, B. and Pomeau, Y. (1986), Europhys. Lett., 1, 45-49
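    Damage spreading of the kind measured above can be simulated by evolving two copies of a random threshold network that differ in a single initial bit and recording their normalized Hamming distance. Network size, connectivity and the update convention below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_threshold_network(N, K):
    """Each node receives K random inputs with +/-1 weights; states update
    synchronously as s_i <- sign(sum_j w_ij s_j), with sign(0) -> -1
    (one common convention)."""
    inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
    weights = rng.choice([-1, 1], size=(N, K))
    def step(s):
        h = (weights * s[inputs]).sum(axis=1)
        return np.where(h > 0, 1, -1)
    return step

def damage(N=200, K=2, T=100):
    """Normalized Hamming distance after T steps, starting from a one-bit
    perturbation of a random initial state."""
    step = random_threshold_network(N, K)
    s1 = rng.choice([-1, 1], size=N)
    s2 = s1.copy()
    s2[0] *= -1                       # flip a single bit
    for _ in range(T):
        s1, s2 = step(s1), step(s2)
    return float(np.mean(s1 != s2))

d = damage()
```

    Averaging such runs over many network realizations and connectivities K, and recording the full damage-size distribution rather than only its mean, gives the finite-size statistics the abstract describes.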

  15. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  16. KC-135 aero-optical turbulent boundary layer/shear layer experiment revisited

    NASA Technical Reports Server (NTRS)

    Craig, J.; Allen, C.

    1987-01-01

    The aero-optical effects associated with propagating a laser beam through both an aircraft turbulent boundary layer and artificially generated shear layers are examined. The data compare observed optical performance with that inferred from aerodynamic measurements of unsteady density and correlation lengths within the same random flow fields. Using optical instrumentation with temporal resolution of tens of microseconds through a finite aperture, optical performance degradation was determined and contrasted with the infinite-aperture, time-averaged aerodynamic measurement. In addition, the optical data were artificially clipped for comparison with theoretical scaling calculations. Optical instrumentation consisted of a custom Q-switched Nd:YAG double-pulsed laser and a holographic camera which recorded the random flow field in a double-pass, double-pulse mode. Aerodynamic parameters were measured using hot-film anemometer probes and a five-hole pressure probe. Each technique is described with its associated theoretical basis for comparison. The effects of finite aperture and of the spatial and temporal frequencies of the random flow are considered.

  17. Random walk numerical simulation for hopping transport at finite carrier concentrations: diffusion coefficient and transport energy concept.

    PubMed

    Gonzalez-Vazquez, J P; Anta, Juan A; Bisquert, Juan

    2009-11-28

    The random walk numerical simulation (RWNS) method is used to compute diffusion coefficients for hopping transport in a fully disordered medium at finite carrier concentrations. We use Miller-Abrahams jumping rates and an exponential distribution of energies to compute the hopping times in the random walk simulation. The computed diffusion coefficient shows an exponential dependence on the Fermi level and Arrhenius behavior with respect to temperature. This result indicates that there is a well-defined transport level implicit in the system dynamics. To establish the origin of this transport level, we construct histograms to monitor the energies of the most visited sites. In addition, we construct "corrected" histograms where backward moves are removed. Since these moves do not contribute to transport, these histograms provide a better estimate of the effective transport level energy. This concept is thoroughly discussed in connection with the Fermi-level dependence of the diffusion coefficient and the regime of interest for the functioning of dye-sensitised solar cells.
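    As a rough illustration of the RWNS idea (not the authors' implementation), a single-carrier continuous-time walk with Miller-Abrahams rates on a 1D ring of exponentially distributed site energies might look like this; all names and the 1D geometry are assumptions for the sketch:

```python
import math
import random

def miller_abrahams_walk(n_sites, n_steps, kT, seed):
    """Single-carrier continuous-time random walk on a 1D ring of sites
    with exponentially distributed energies, using Miller-Abrahams rates
    (uphill hops penalized by exp(-dE/kT), downhill hops at rate 1).
    Returns net displacement and elapsed time."""
    rng = random.Random(seed)
    energies = [rng.expovariate(1.0) for _ in range(n_sites)]

    def rate(i, j):
        dE = energies[j] - energies[i]
        return math.exp(-dE / kT) if dE > 0 else 1.0

    pos, x, t = 0, 0, 0.0
    for _ in range(n_steps):
        left, right = (pos - 1) % n_sites, (pos + 1) % n_sites
        r_l, r_r = rate(pos, left), rate(pos, right)
        total = r_l + r_r
        t += rng.expovariate(total)      # Gillespie-style waiting time
        if rng.random() < r_r / total:
            pos, x = right, x + 1
        else:
            pos, x = left, x - 1
    return x, t
```

    An estimate of the diffusion coefficient follows from averaging x²/(2t) over many seeds; the abstract's simulations additionally handle finite carrier concentration (site exclusion), which this single-carrier toy omits.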

  18. Survival behavior in the cyclic Lotka-Volterra model with a randomly switching reaction rate

    NASA Astrophysics Data System (ADS)

    West, Robert; Mobilia, Mauro; Rucklidge, Alastair M.

    2018-02-01

    We study the influence of a randomly switching reproduction-predation rate on the survival behavior of the nonspatial cyclic Lotka-Volterra model, also known as the zero-sum rock-paper-scissors game, used to metaphorically describe the cyclic competition between three species. In large and finite populations, demographic fluctuations (internal noise) drive two species to extinction in a finite time, while the species with the smallest reproduction-predation rate is the most likely to be the surviving one (law of the weakest). Here we model environmental (external) noise by assuming that the reproduction-predation rate of the strongest species (the fastest to reproduce and predate) in a given static environment randomly switches between two values corresponding to more and less favorable external conditions. We study the joint effect of environmental and demographic noise on the species survival probabilities and on the mean extinction time. In particular, we investigate whether the survival probabilities follow the law of the weakest and analyze their dependence on the external noise intensity and switching rate. Remarkably, when, on average, there is a finite number of switches prior to extinction, the survival probability of the predator of the species whose reaction rate switches typically varies nonmonotonically with the external noise intensity (with optimal survival about a critical noise strength). We also outline the relationship with the case where all reaction rates switch on markedly different time scales.
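    A minimal Gillespie-type sketch of this class of model (all names and parameter values are illustrative; the paper's analysis is far more detailed): species 0 preys on 1, 1 on 2, and 2 on 0, while species 0's reaction rate switches between two environmental states.

```python
import random

def cyclic_lv_survivor(n_total, rates, switch_rate, seed):
    """Zero-sum rock-paper-scissors with a randomly switching rate.
    Species i converts an individual of species (i+1) % 3 into its own
    kind; species 0's rate jumps between rates[0] and rates[1] at rate
    switch_rate. Returns the index of the surviving species."""
    rng = random.Random(seed)
    pops = [n_total // 3] * 3
    pops[0] += n_total - sum(pops)       # put any remainder on species 0
    env = 0
    while sum(1 for p in pops if p > 0) > 1:
        prop = [rates[env] * pops[0] * pops[1],   # 0 preys on 1
                1.0 * pops[1] * pops[2],          # 1 preys on 2
                1.0 * pops[2] * pops[0]]          # 2 preys on 0
        total = sum(prop) + switch_rate
        u = rng.random() * total
        if u < switch_rate:
            env = 1 - env                 # environmental switch
            continue
        u -= switch_rate
        for i in range(3):
            if u < prop[i]:
                pops[i] += 1
                pops[(i + 1) % 3] -= 1
                break
            u -= prop[i]
    return max(range(3), key=lambda i: pops[i])
```

    Repeating this over many seeds estimates the survival probabilities whose dependence on switching rate and noise intensity the abstract investigates.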

  19. Simulating Fragmentation and Fluid-Induced Fracture in Disordered Media Using Random Finite-Element Meshes

    DOE PAGES

    Bishop, Joseph E.; Martinez, Mario J.; Newell, Pania

    2016-11-08

    Fracture and fragmentation are extremely nonlinear multiscale processes in which microscale damage mechanisms emerge at the macroscale as new fracture surfaces. Numerous numerical methods have been developed for simulating fracture initiation, propagation, and coalescence. In this paper, we present a computational approach for modeling pervasive fracture in quasi-brittle materials based on random close-packed Voronoi tessellations. Each Voronoi cell is formulated as a polyhedral finite element containing an arbitrary number of vertices and faces. Fracture surfaces are allowed to nucleate only at the intercell faces. Cohesive softening tractions are applied to new fracture surfaces in order to model the energy dissipated during fracture growth. The randomly seeded Voronoi cells provide a regularized discrete random network for representing fracture surfaces. The potential crack paths within the random network are viewed as instances of realizable crack paths within the continuum material. Mesh convergence of fracture simulations is viewed in a weak, or distributional, sense. The explicit facet representation of fractures within this approach is advantageous for modeling contact on new fracture surfaces and fluid flow within the evolving fracture network. Finally, applications of interest include fracture and fragmentation in quasi-brittle materials and geomechanical applications such as hydraulic fracturing, engineered geothermal systems, compressed-air energy storage, and carbon sequestration.

  1. Digital-Analog Hybrid Scheme and Its Application to Chaotic Random Number Generators

    NASA Astrophysics Data System (ADS)

    Yuan, Zeshi; Li, Hongtao; Miao, Yunchi; Hu, Wen; Zhu, Xiaohua

    2017-12-01

    Practical random number generation (RNG) circuits are typically built with analog devices or digital approaches. Digital techniques, which use field-programmable gate arrays (FPGAs), graphics processing units (GPUs), etc., usually outperform analog methods as they are programmable, efficient and robust. However, digital realizations suffer from the effect of finite precision. Accordingly, the generated random numbers (RNs) are actually periodic rather than truly random. To tackle this limitation, in this paper we propose a novel digital-analog hybrid scheme that employs a digital unit as the main body and minimal analog devices to generate physical RNs. Moreover, the possibility of realizing the proposed scheme with only one memory element is discussed. Without loss of generality, we use a capacitor and a memristor along with an FPGA to construct the proposed hybrid system, and a chaotic true random number generator (TRNG) circuit is realized, producing physical RNs at Gbit/s-scale throughput. These RNs successfully pass all the tests in the NIST SP800-22 package, confirming the significance of the scheme in practical applications. In addition, the use of this new scheme is not restricted to RNGs; it also provides a strategy to mitigate the effect of finite precision in other digital systems.
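    The finite-precision limitation the authors address is easy to demonstrate: any chaotic map iterated in b-bit fixed point has at most 2^b states, so its orbit must eventually repeat. A small sketch (illustrative only; the paper's TRNG adds analog elements precisely to escape this bound):

```python
def fixed_point_logistic_period(bits, x0):
    """Iterate the logistic map x -> 4x(1-x) in 'bits'-bit fixed point
    (x represented as an integer in [0, 2**bits)) and count the distinct
    states visited before the orbit first revisits a state."""
    scale = 1 << bits
    x = x0 % scale
    seen = {}
    step = 0
    while x not in seen:
        seen[x] = step
        x = (4 * x * (scale - x)) // scale % scale  # fixed-point 4x(1-x)
        step += 1
    return step
```

    However large `bits` is made, the returned count is bounded by 2**bits, whereas an ideal analog chaotic system never repeats; this bounded periodicity is exactly what the hybrid digital-analog scheme removes.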

  2. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specular- and diffusely-reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average of the solution functions, which are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely, the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average partial heat fluxes for the generalized problem with an internal radiation source are obtained and represented graphically.

  3. Fold pattern formation in 3D

    NASA Astrophysics Data System (ADS)

    Schmid, Daniel W.; Dabrowski, Marcin; Krotkiewski, Marcin

    2010-05-01

    The vast majority of studies concerned with folding focus on 2D and assume that the resulting fold structures are cylindrically extended in the out-of-plane direction. This simplification is often justified because fold aspect ratios (length/width) are quite large. However, folds always exhibit finite aspect ratios and it is unclear what controls this (cf. Fletcher 1995). Surprisingly little is known about fold pattern formation in 3D for different in-plane loading conditions. Even more complicated is the pattern formation when several folding events are superposed. Consider the example of a plane strain pure shear superposed by the same kind of deformation rotated by 90 degrees. The textbook prediction for this event is the formation of an egg carton structure; relevant analogue models either agree and produce type 1 interference patterns or contradict and produce type 2. In order to map out 3D fold pattern formation we have performed a systematic parameter space investigation using BILAMIN, our efficient unstructured mesh finite element Stokes solver. BILAMIN is capable of solving problems with more than half a billion unknowns. This allows us to study fold patterns that emerge in randomly (red noise) perturbed layers. We classify the resulting structures with differential geometry tools. Our results show that there is a relationship between fold aspect ratio and in-plane loading conditions. We propose that this finding can be used to determine the complete parameter set potentially contained in the geometry of three-dimensional folds: the mechanical properties of natural rocks, the maximum strain, and the relative strength of the in-plane far-field load components. Furthermore, we show how folds in 3D amplify and that there is a second deformation mode, besides continuous amplification, where compression leads to a lateral rearrangement of blocks of folds. Finally, we demonstrate that the textbook prediction of egg carton shaped dome and basin structures resulting from folding instabilities in constriction is largely oversimplified: the fold patterns resulting in this setting are curved, elongated folds with random orientation. Reference: Fletcher, R. C. 1995. 3-Dimensional Folding and Necking of a Power-Law Layer - Are Folds Cylindrical, and, If So, Do We Understand Why? Tectonophysics 147(1-4), 65-83.

  4. Critical scaling of the mutual information in two-dimensional disordered Ising models

    NASA Astrophysics Data System (ADS)

    Sriluckshmy, P. V.; Mandal, Ipsita

    2018-04-01

    Rényi mutual information, computed from second Rényi entropies, can identify classical phase transitions from their finite-size scaling at critical points. We apply this technique to examine the presence or absence of finite temperature phase transitions in various two-dimensional models on a square lattice, which are extensions of the conventional Ising model by adding a quenched disorder. When the quenched disorder causes the nearest neighbor bonds to be both ferromagnetic and antiferromagnetic, (a) a spin glass phase exists only at zero temperature, and (b) a ferromagnetic phase exists at a finite temperature when the antiferromagnetic bond distributions are sufficiently dilute. Furthermore, finite temperature paramagnetic-ferromagnetic transitions can also occur when the disordered bonds involve only ferromagnetic couplings of random strengths. In our numerical simulations, the 'zero temperature only' phase transitions are identified when there is no consistent finite-size scaling of the Rényi mutual information curves, while for finite temperature critical points, the curves identify the critical temperature Tc by their crossings at Tc and 2Tc.

  5. Probabilistic fracture finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Lua, Y. J.

    1991-01-01

    Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles of mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handling problems with uncertainties. As the PFEM provides a powerful computational tool to determine the first and second moments of random parameters, the second moment reliability method can easily be combined with the PFEM to obtain measures of the reliability of a structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed in experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.

  7. Finite-time scaling at the Anderson transition for vibrations in solids

    NASA Astrophysics Data System (ADS)

    Beltukov, Y. M.; Skipetrov, S. E.

    2017-11-01

    A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by the product of a sparse random matrix, with real, independent, Gaussian-distributed nonzero entries, and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν = 1.57 ± 0.02, in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
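    The Wishart-like construction M = A·Aᵀ described above guarantees a symmetric positive-semidefinite dynamical matrix, hence real, nonnegative squared vibration frequencies. A small pure-Python sketch (matrix size, sparsity, and names are illustrative, not the paper's setup):

```python
import random

def dynamical_matrix(n, density, seed):
    """M = A A^T with A a sparse random matrix whose nonzero entries are
    independent standard Gaussians; M is symmetric positive semidefinite
    by construction."""
    rng = random.Random(seed)
    a = [[rng.gauss(0.0, 1.0) if rng.random() < density else 0.0
          for _ in range(n)] for _ in range(n)]
    return [[sum(a[i][k] * a[j][k] for k in range(n))
             for j in range(n)] for i in range(n)]

def quad_form(m, x):
    """x^T M x; nonnegative for any x when M = A A^T."""
    n = len(x)
    return sum(x[i] * m[i][j] * x[j] for i in range(n) for j in range(n))
```

    Since x·M·x equals the squared norm of Aᵀx, every quadratic form is nonnegative, which is the physical requirement that all vibrational eigenfrequencies be real.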

  8. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.

  9. Mean-Potential Law in Evolutionary Games.

    PubMed

    Nałęcz-Jawecki, Paweł; Miękisz, Jacek

    2018-01-12

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
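    As a toy instance of the two-absorbing-state dynamics the Letter addresses, the neutral Moran process has fixation probability exactly i0/N, which a short Monte Carlo run reproduces. This is a sketch of the setting, not the authors' potential-function method:

```python
import random

def moran_fixation_prob(n, i0, trials, seed):
    """Monte Carlo fixation probability of i0 neutral mutants in a Moran
    process of size n; the exact answer is i0/n. Each event draws one
    reproducing and one dying individual uniformly at random."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = i0
        while 0 < i < n:
            # +1 if a mutant reproduces, -1 if a mutant dies
            i += (rng.random() < i / n) - (rng.random() < i / n)
        fixed += (i == n)
    return fixed / trials
```

    Adding frequency-dependent fitness to the birth step turns this into the evolutionary-game setting where criteria such as the 1/3 law apply.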

  10. Stabilised finite-element methods for solving the level set equation with mass conservation

    NASA Astrophysics Data System (ADS)

    Kabirou Touré, Mamadou; Fahsi, Adil; Soulaïmani, Azzeddine

    2016-01-01

    Finite-element methods are studied for solving moving interface flow problems using the level set approach and a stabilised variational formulation proposed in Touré and Soulaïmani (2012; 2016, to appear), coupled with a level set correction method. The level set correction is intended to enhance satisfaction of the mass conservation property. The stabilised variational formulation constrains the level set function to remain close to the signed distance function, while the correction step enforces the mass balance. The eXtended finite-element method (XFEM) is used to take into account the discontinuities of the properties within an element. XFEM is applied to solve the Navier-Stokes equations for two-phase flows. The numerical methods are evaluated on several test cases such as time-reversed vortex flow, rigid-body rotation of Zalesak's disc, sloshing flow in a tank, a dam-break over a bed, and a rising bubble subjected to buoyancy. The numerical results show the importance of satisfying global mass conservation to accurately capture the interface position.

  11. Stochastic Control and Numerical Methods with Applications to Communications. Game Theoretic/Subsolution to Importance Sampling for Rare Event Simulation

    DTIC Science & Technology

    2008-11-01

    support to the value of the approach. 9. Scheduling and Control of Mobile Communications Networks with Randomly Time Varying Channels by Stability ... biological systems. Many examples arise in communications and queueing, due to the finite speed of signal transmission, the nonnegligible time required ... without delays, the system state takes values in a subset of some finite-dimensional Euclidean space, and the control is a functional of the current

  12. Computer Generated Pictorial Stores Management Displays for Fighter Aircraft.

    DTIC Science & Technology

    1983-05-01

    questionnaire rating-scale data. KRISHNAIAH FINITE INTERSECTION TESTS (FITs) - A set of tests conducted after significant MANOVA results are found to ... the Social Sciences (SPSS) (Reference 2). To further examine significant performance differences, the Krishnaiah Finite Intersection Test (FIT), a ... New York: McGraw-Hill Book Company, 1975. 3. C. M. Cox, P. R. Krishnaiah, J. C. Lee, J. M. Reising, and F. J. Schuurman, A Study on Finite Intersection

  13. Synchronization of Finite State Shared Resources

    DTIC Science & Technology

    1976-03-01

    AFOSR-TR report: Synchronization of Finite State Shared Resources. Edward A. Schneider, Department of Computer Science. Abstract: the problem of synchronizing a set of operations defined on a shared resource

  14. Occurrence and Nonoccurrence of Random Sequences: Comment on Hahn and Warren (2009)

    ERIC Educational Resources Information Center

    Sun, Yanlong; Tweney, Ryan D.; Wang, Hongbin

    2010-01-01

    On the basis of the statistical concept of waiting time and on computer simulations of the "probabilities of nonoccurrence" (p. 457) for random sequences, Hahn and Warren (2009) proposed that given people's experience of a finite data stream from the environment, the gambler's fallacy is not as gross an error as it might seem. We deal with two…

  15. Modeling and Predicting the Stress Relaxation of Composites with Short and Randomly Oriented Fibers

    PubMed Central

    Obaid, Numaira; Sain, Mohini

    2017-01-01

    The addition of short fibers has been experimentally observed to slow the stress relaxation of viscoelastic polymers, producing a change in the relaxation time constant. Our recent study attributed this effect of fibers on stress relaxation behavior to the interfacial shear stress transfer at the fiber-matrix interface. This model explained the effect of fiber addition on stress relaxation without the need to postulate structural changes at the interface. In our previous study, we developed an analytical model for the effect of fully aligned short fibers, and the model predictions were successfully compared to finite element simulations. However, in most industrial applications of short-fiber composites, fibers are not aligned, and hence it is necessary to examine the time dependence of viscoelastic polymers containing randomly oriented short fibers. In this study, we propose an analytical model to predict the stress relaxation behavior of short-fiber composites where the fibers are randomly oriented. The model predictions were compared to results obtained from Monte Carlo finite element simulations, and good agreement between the two was observed. The analytical model provides an excellent tool to accurately predict the stress relaxation behavior of randomly oriented short-fiber composites. PMID:29053601

  16. Random element method for numerical modeling of diffusional processes

    NASA Technical Reports Server (NTRS)

    Ghoniem, A. F.; Oppenheim, A. K.

    1982-01-01

    The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions except for a statistical error introduced by using a finite number of elements; the error can be reduced by increasing the number of elements or by ensemble averaging over a number of solutions.
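    The grid-free random-walk idea can be sketched for 1D diffusion from a point source: each element takes Gaussian steps of variance 2DΔt, and the empirical variance of positions should track the exact spread 2Dt. This is an illustrative sketch under those assumptions, not the original algorithm:

```python
import random

def random_walk_diffusion(n_particles, d_coef, dt, n_steps, seed):
    """Grid-free random-walk solution of u_t = D u_xx for a point source
    at x = 0: each element takes Gaussian steps of variance 2*D*dt, so
    the empirical variance of positions should approach 2*D*t."""
    rng = random.Random(seed)
    sigma = (2.0 * d_coef * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [x + rng.gauss(0.0, sigma) for x in xs]
    mean = sum(xs) / n_particles
    return sum((x - mean) ** 2 for x in xs) / n_particles
```

    With D = 0.5, dt = 0.01 and 100 steps the exact spread is 2·D·t = 1.0; the statistical error decays like 1/sqrt(n_particles), matching the abstract's remark that the error is reduced by increasing the number of elements.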

  17. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is, with overwhelming probability, equal to zero. These problems often have high-dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
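    The change-of-measure idea is easy to illustrate outside radiation transport: to estimate P(Z > a) for standard normal Z, sample from the tilted law N(a, 1) and reweight each hit by the likelihood ratio. This is a textbook sketch of importance sampling, not the dissertation's algorithm:

```python
import math
import random

def is_tail_prob(a, n, seed):
    """Importance-sampling estimate of P(Z > a) for Z ~ N(0,1): sample
    from the shifted law N(a,1) and reweight each sample exceeding a by
    the likelihood ratio phi(x)/phi(x-a) = exp(-a*x + a*a/2), which
    keeps the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)
        if x > a:
            total += math.exp(-a * x + a * a / 2.0)
    return total / n
```

    For a = 4 the true value is about 3.17e-5; naive sampling would need on the order of 10^7 draws to see even a handful of hits, while under the shifted law roughly half of all samples land in the region of interest, dramatically reducing the variance.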

  18. Anderson transition in a three-dimensional kicked rotor

    NASA Astrophysics Data System (ADS)

    Wang, Jiao; García-García, Antonio M.

    2009-03-01

    We investigate Anderson localization in a three-dimensional (3D) kicked rotor. By a finite-size scaling analysis we identify a mobility edge for a certain value of the kicking strength k = kc. For k > kc dynamical localization does not occur, all eigenstates are delocalized and the spectral correlations are well described by Wigner-Dyson statistics. This can be understood by mapping the kicked rotor problem onto a 3D Anderson model (AM) where a band of metallic states exists for sufficiently weak disorder. Around the critical region k ≈ kc we carry out a detailed study of the level statistics and quantum diffusion. In agreement with the predictions of the one-parameter scaling theory (OPT) and with previous numerical simulations, the number variance is linear, level repulsion is still observed, and quantum diffusion is anomalous with ⟨p²⟩ ∝ t^(2/3). We note that in the 3D kicked rotor the dynamics is not random but deterministic. In order to estimate the differences between these two situations we have studied a 3D kicked rotor in which the kinetic term of the associated evolution matrix is random. A detailed numerical comparison shows that the differences between the two cases are relatively small. However, in the deterministic case only a small set of irrational periods was used. A qualitative analysis of a much larger set suggests that deviations between the random and the deterministic kicked rotor can be important for certain choices of periods. Heuristically, it is expected that localization effects will be weaker in a nonrandom potential, since destructive interference will be less effective at arresting quantum diffusion. However, we have found that certain choices of irrational periods enhance Anderson localization effects.

  19. Advances and trends in structures and dynamics; Proceedings of the Symposium, Washington, DC, October 22-25, 1984

    NASA Technical Reports Server (NTRS)

    Noor, A. K. (Editor); Hayduk, R. J. (Editor)

    1985-01-01

    Among the topics discussed are developments in structural engineering hardware and software, computation for fracture mechanics, trends in numerical analysis and parallel algorithms, mechanics of materials, advances in finite element methods, composite materials and structures, determinations of random motion and dynamic response, optimization theory, automotive tire modeling methods and contact problems, the damping and control of aircraft structures, and advanced structural applications. Specific topics covered include structural design expert systems, the evaluation of finite element system architectures, systolic arrays for finite element analyses, nonlinear finite element computations, hierarchical boundary elements, adaptive substructuring techniques in elastoplastic finite element analyses, automatic tracking of crack propagation, a theory of rate-dependent plasticity, the torsional stability of nonlinear eccentric structures, a computation method for fluid-structure interaction, the seismic analysis of three-dimensional soil-structure interaction, a stress analysis for a composite sandwich panel, toughness criterion identification for unidirectional composite laminates, the modeling of submerged cable dynamics, and damping synthesis for flexible spacecraft structures.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vourdas, A.

    The finite set of subsystems of a finite quantum system with variables in Z(n) is studied as a Heyting algebra. The physical meaning of the logical connectives is discussed. It is shown that disjunction of subsystems is a more general concept than superposition. Consequently, the quantum probabilities related to commuting projectors in the subsystems are incompatible with associativity of the join in the Heyting algebra, unless the variables belong to the same chain. This leads to contextuality, which in the present formalism has as contexts the chains in the Heyting algebra. Logical Bell inequalities, which contain “Heyting factors,” are discussed. The formalism is also applied to the infinite set of all finite quantum systems, which is appropriately enlarged in order to become a complete Heyting algebra.

  1. A class of generalized Ginzburg-Landau equations with random switching

    NASA Astrophysics Data System (ADS)

    Wu, Zheng; Yin, George; Lei, Dongxia

    2018-09-01

    This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have a higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined, leading to a reduction of complexity.

  2. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
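The idea of sampling independent spectral degrees of freedom under a prescribed power spectral density can be sketched with generic 1D spectral synthesis. This is an illustration of the decomposition-into-independent-modes idea, not the authors' algorithm; the spectral slope and grid are assumed:

```python
import numpy as np

def random_stress_profile(n=1024, dx=1.0, slope=-2.0, seed=0):
    """Draw a random real-valued profile with power spectrum ~ k**slope.

    Each Fourier mode is an independent complex Gaussian degree of freedom,
    in the spirit of sampling a Boltzmann distribution over decoupled modes.
    The slope value is an assumed illustration, not the report's formula.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (slope / 2.0)          # sqrt of the PSD; drop k = 0
    phases = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    stress = np.fft.irfft(amp * phases, n=n)  # real field, zero mean
    return stress
```

Because the k=0 amplitude is zeroed, the sampled profile has zero mean by construction; a finite fault strength would correspond to truncating or reweighting the spectrum, as the report's analysis requires.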

  3. Empirical likelihood inference in randomized clinical trials.

    PubMed

    Zhang, Biao

    2017-01-01

    In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, and is at least as efficient as the existing efficient adjusted estimators [1] for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods, along with some results on the analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
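The precision gain from baseline covariate adjustment that motivates the paper can be seen in a toy simulation. The sketch below compares the unadjusted difference in means with an ordinary ANCOVA-type adjusted estimator (not the paper's empirical likelihood estimator); the data-generating process and all parameters are assumed for illustration:

```python
import numpy as np

def simulate(n=200, reps=500, seed=1):
    """Compare unadjusted and covariate-adjusted treatment-effect estimators.

    Assumed toy data-generating process: y = 2*x + tau*z + noise with
    true effect tau = 1 and randomized binary assignment z.
    Returns the Monte Carlo variances of the two estimators.
    """
    rng = np.random.default_rng(seed)
    tau = 1.0
    unadj, adj = [], []
    for _ in range(reps):
        x = rng.normal(size=n)
        z = rng.integers(0, 2, size=n)              # randomized assignment
        y = 2.0 * x + tau * z + rng.normal(size=n)
        unadj.append(y[z == 1].mean() - y[z == 0].mean())
        # ANCOVA: regress y on [1, z, x]; the z-coefficient is the estimate
        X = np.column_stack([np.ones(n), z, x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        adj.append(beta[1])
    return float(np.var(unadj)), float(np.var(adj))
```

Both estimators are unbiased under randomization, but adjusting for the prognostic covariate removes its contribution to the outcome variance, so the adjusted estimator's sampling variance is markedly smaller.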

  4. Experimental evidence of phase coherence of magnetohydrodynamic turbulence in the solar wind: GEOTAIL satellite data.

    PubMed

    Koga, D; Chian, A C-L; Hada, T; Rempel, E L

    2008-02-13

    Magnetohydrodynamic (MHD) turbulence is commonly observed in the solar wind. Nonlinear interactions among MHD waves are likely to produce finite correlation of the wave phases. For discussions of various transport processes of energetic particles, it is fundamentally important to determine whether the wave phases are randomly distributed (as assumed in the quasi-linear theory) or have a finite coherence. Using a method based on the surrogate data technique, we analysed the GEOTAIL magnetic field data to evaluate the phase coherence in MHD turbulence in the Earth's foreshock region. The results demonstrate the existence of finite phase correlation, indicating that nonlinear wave-wave interactions are in progress.

  5. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
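For a handful of spins the steepest-ascent learning dynamics can be written down exactly, since the log-likelihood gradient is the gap between data and model moments. The sketch below uses exact enumeration instead of Gibbs sampling and plain (unrectified) gradient ascent, so it illustrates the baseline the paper improves on rather than the authors' rectified algorithm; the target moments and learning rate are assumed:

```python
import numpy as np
from itertools import product

def fit_ising(target_m, target_c, lr=0.1, steps=2000):
    """Fit fields h and couplings J of a pairwise Ising model
    P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i = ±1,
    by plain gradient ascent on the log-likelihood (moment matching).
    Exact enumeration keeps it tractable for a handful of spins.
    """
    n = len(target_m)
    states = np.array(list(product([-1.0, 1.0], repeat=n)))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    h = np.zeros(n)
    J = np.zeros(len(pairs))
    for _ in range(steps):
        E = states @ h + sum(J[k] * states[:, i] * states[:, j]
                             for k, (i, j) in enumerate(pairs))
        p = np.exp(E - E.max())
        p /= p.sum()
        m = p @ states                                  # model magnetizations
        c = np.array([p @ (states[:, i] * states[:, j]) for i, j in pairs])
        h += lr * (target_m - m)                        # log-likelihood gradient
        J += lr * (target_c - c)
    return h, J, m, c
```

The inhomogeneous curvature the paper discusses is exactly the Fisher information (the covariance of the sufficient statistics), which makes this naive update slow for large, strongly coupled populations.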

  6. Conservative properties of finite difference schemes for incompressible flow

    NASA Technical Reports Server (NTRS)

    Morinishi, Youhei

    1995-01-01

    The purpose of this research is to construct accurate finite difference schemes for incompressible unsteady flow simulations such as LES (large-eddy simulation) or DNS (direct numerical simulation). In this report, conservation properties of the continuity, momentum, and kinetic energy equations for incompressible flow are specified as analytical requirements for a proper set of discretized equations. Existing finite difference schemes in staggered grid systems are checked for satisfaction of the requirements. Proper higher order accurate finite difference schemes in a staggered grid system are then proposed. Plane channel flow is simulated using the proposed fourth order accurate finite difference scheme and the results compared with those of the second order accurate Harlow and Welch algorithm.
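The kind of discrete conservation requirement checked in the report can be illustrated in one dimension: writing the convective term in divergence form on a periodic grid makes the discrete momentum sum telescope, so it is conserved exactly. This is a minimal sketch on a collocated grid with unit spacing and forward Euler, all assumed for illustration rather than the report's staggered-grid schemes:

```python
import numpy as np

def flux_divergence(u):
    """Second-order central difference of the Burgers flux f = u**2 / 2,
    written in divergence (conservative) form on a periodic grid (dx = 1)."""
    f = 0.5 * u ** 2
    return 0.5 * (np.roll(f, -1) - np.roll(f, 1))

# du/dt = -d(u^2/2)/dx: the periodic divergence-form flux telescopes, so the
# discrete momentum sum(u) is conserved to roundoff, independent of the time
# integrator. Discrete kinetic-energy conservation, the harder requirement
# studied in the report, needs the skew-symmetric form as well.
u = np.sin(2 * np.pi * np.arange(64) / 64) + 0.1
m0 = u.sum()
for _ in range(100):
    u = u - 0.01 * flux_divergence(u)
m1 = u.sum()
```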

  7. Specialized Finite Set Statistics (FISST)-Based Estimation Methods to Enhance Space Situational Awareness in Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO)

    DTIC Science & Technology

    2016-08-17

    Air Force Research Laboratory, Space Vehicles Directorate (AFRL/RVSV), 3550 Aberdeen Ave SE, Kirtland AFB, NM 87117-5776. Report no. AFRL-RV-PS-TR-2016-0114.

  8. Renormalizable Electrodynamics of Scalar and Vector Mesons. Part II

    DOE R&D Accomplishments Database

    Salam, Abdus; Delbourgo, Robert

    1964-01-01

    The "gauge" technique" for solving theories introduced in an earlier paper is applied to scalar and vector electrodynamics. It is shown that for scalar electrodynamics, there is no {lambda}φ*2φ2 infinity in the theory, while with conventional subtractions vector electrodynamics is completely finite. The essential ideas of the gauge technique are explained in section 3, and a preliminary set of rules for finite computation in vector electrodynamics is set out in Eqs. (7.28) - (7.34).

  9. Quantum random walks on congested lattices and the effect of dephasing

    PubMed Central

    Motes, Keith R.; Gilchrist, Alexei; Rohde, Peter P.

    2016-01-01

    We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker’s direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices. PMID:26812924
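The quantum-classical contrast the authors exploit can be reproduced with a standard discrete-time Hadamard walk on a line (without the congestion or dephasing the paper adds on top): the quantum walker spreads ballistically, with variance growing like t², versus the classical walker's linear t. A minimal sketch with assumed parameters:

```python
import numpy as np

def hadamard_walk(steps=50):
    """Discrete-time Hadamard walk on the line.

    Amplitudes are indexed by (position, coin); coin 0 shifts left and
    coin 1 shifts right after each Hadamard coin flip. The symmetric
    initial coin state (|0> + i|1>)/sqrt(2) gives a symmetric spread.
    Returns the positions and the final position distribution.
    """
    n = 2 * steps + 1
    amp = np.zeros((n, 2), dtype=complex)
    amp[steps, 0] = 1 / np.sqrt(2)
    amp[steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ H.T               # coin flip at every site
        new = np.zeros_like(amp)
        new[:-1, 0] = amp[1:, 0]      # coin-0 component moves left
        new[1:, 1] = amp[:-1, 1]      # coin-1 component moves right
        amp = new
    prob = (np.abs(amp) ** 2).sum(axis=1)
    return np.arange(-steps, steps + 1), prob
```

After t steps the variance comes out near (1 - 1/√2)·t², far above a classical walker's variance of t; applying a dephasing channel after each step, as the paper does, interpolates smoothly between the two regimes.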

  10. The Quantized Geometry of Visual Space: The Coherent Computation of Depth, Form, and Lightness. Revised Version.

    DTIC Science & Technology

    1982-08-01

    Binocular rivalry, reflectance rivalry, Fechner's paradox, decrease of threshold contrast with increased number of cycles in a grating pattern, hysteresis, adaptation level tuning, Weber law modulation, shift of sensitivity with background luminance, and the finite capacity of visual short term memory are discussed in terms of a small set of ...

  11. Evaluation of Resuspension from Propeller Wash in DoD Harbors

    DTIC Science & Technology

    2016-09-01

    ... Model (1984) and the Finite Analytic Navier-Stokes Solver (FANS) model (Chen et al., 2003) were set up to simulate and evaluate flow velocities and the resuspension potential of propeller wash by a tugboat, and the FANS model for a DDG.

  12. Quantum spectral curve for arbitrary state/operator in AdS5/CFT4

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay; Kazakov, Vladimir; Leurent, Sébastien; Volin, Dmytro

    2015-09-01

    We give a derivation of the quantum spectral curve (QSC) — a finite set of Riemann-Hilbert equations for the exact spectrum of planar N=4 SYM theory proposed in our recent paper Phys. Rev. Lett. 112 (2014). We also generalize this construction to all local single trace operators of the theory, in contrast to the TBA-like approaches worked out only for a limited class of states. We reveal a rich algebraic and analytic structure of the QSC in terms of a so-called Q-system — a finite set of Baxter-like Q-functions. This new point of view on the finite-size spectral problem is shown to be completely compatible, though in a far from trivial way, with the already known exact equations (analytic Y-system/TBA, or FiNLIE). We use the knowledge of this underlying Q-system to demonstrate how the classical finite gap solutions and the asymptotic Bethe ansatz emerge from our formalism in appropriate limits.

  13. Minimal measures for Euler-Lagrange flows on finite covering spaces

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Xia, Zhihong

    2016-12-01

    In this paper we study the minimal measures for positive definite Lagrangian systems on compact manifolds. We are particularly interested in manifolds with more complicated fundamental groups. Mather’s theory classifies the minimal or action-minimizing measures according to the first (co-)homology group of a given manifold. We extend Mather’s notion of minimal measures to a larger class for compact manifolds with non-commutative fundamental groups, and use finite coverings to study the structure of these extended minimal measures. We also define action-minimizers and minimal measures in the homotopical sense. Our program is to study the structure of homotopical minimal measures by considering Mather’s minimal measures on finite covering spaces. Our goal is to show that, in general, manifolds with a non-commutative fundamental group have a richer set of minimal measures, hence a richer dynamical structure. As an example, we study the geodesic flow on surfaces of higher genus. Indeed, by going to the finite covering spaces, the set of minimal measures is much larger and more interesting.

  14. Definition of NASTRAN sets by use of parametric geometry

    NASA Technical Reports Server (NTRS)

    Baughn, Terry V.; Tiv, Mehran

    1989-01-01

    Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easily identify a group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point ID's from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.

  15. Optimizing Nanoscale Quantitative Optical Imaging of Subfield Scattering Targets

    PubMed Central

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui; Sohn, Martin; Silver, Richard M.

    2016-01-01

    The full 3-D scattered field above finite sets of features has been shown to contain a continuum of spatial frequency information, and with novel optical microscopy techniques and electromagnetic modeling, deep-subwavelength geometrical parameters can be determined. Similarly, by using simulations, scattering geometries and experimental conditions can be established to tailor scattered fields that yield lower parametric uncertainties while decreasing the number of measurements and the area of such finite sets of features. Such optimized conditions are reported through quantitative optical imaging in 193 nm scatterfield microscopy using feature sets up to four times smaller in area than state-of-the-art critical dimension targets. PMID:27805660

  16. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.

  17. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects, which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel contribution of this paper is a detailed analysis of the existing state-of-the-art likelihood functions and of a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5 m Raven-class telescope and a twenty-degree field-of-view high-frame-rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi (Astro-H) satellite, approximately 3 days after loss of communication and potential breakup, is examined.

  18. Exact Derivation of a Finite-Size Scaling Law and Corrections to Scaling in the Geometric Galton-Watson Process

    PubMed Central

    Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc

    2016-01-01

    The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities at the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
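The critical Galton-Watson setup studied here is easy to simulate: with geometric offspring on {0, 1, 2, ...} and success probability p, the mean offspring number is (1-p)/p, so p = 1/2 is the critical point. A Monte Carlo sketch (the population cap and trial count are assumed conveniences, not part of the paper's exact analysis):

```python
import numpy as np

def survival_probability(p, generations=60, trials=2000, cap=1000, seed=0):
    """Monte Carlo estimate of the probability that a Galton-Watson process
    with geometric offspring on {0, 1, 2, ...} (success probability p)
    is still alive after `generations` generations.

    Mean offspring is (1 - p) / p, so p = 1/2 is critical. Populations
    exceeding `cap` are declared survivors to keep supercritical runs
    cheap (an assumed shortcut, accurate enough for this illustration).
    """
    rng = np.random.default_rng(seed)
    alive = 0
    for _ in range(trials):
        z = 1
        for _ in range(generations):
            if z == 0 or z > cap:
                break
            # rng.geometric has support {1, 2, ...}; subtract 1 per draw
            z = int(rng.geometric(p, size=z).sum()) - z
        if z > 0:
            alive += 1
    return alive / trials
```

For this offspring law the extinction probability solves q = f(q) with f(s) = p/(1-(1-p)s), giving q = p/(1-p) in the supercritical regime; e.g. for p = 0.3 survival is about 1 - 3/7 ≈ 0.57, while for p = 0.7 extinction is certain.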

  19. Influence of stochastic geometric imperfections on the load-carrying behaviour of thin-walled structures using constrained random fields

    NASA Astrophysics Data System (ADS)

    Lauterbach, S.; Fina, M.; Wagner, W.

    2018-04-01

    Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations are essential, including geometrical imperfections. These are implemented into a stand-alone program code with a stochastic approach to compute random fields as geometric imperfections that are applied to nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. For this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape, and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures for engineering devices are more complex and combined structures, these are intensively discussed with the focus on constrained random fields for e.g. flange-web-intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
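A minimal version of the Karhunen-Loève discretization used for such random fields, in 1D with an assumed exponential covariance exp(-|x-y|/l_c), can be sketched as follows; the correlation length l_c directly controls how fast the eigenvalues decay, and hence how many modes carry significant variance:

```python
import numpy as np

def kl_random_field(n=200, length=1.0, l_c=0.2, n_modes=20, seed=0):
    """Sample a 1D Gaussian random field on [0, length] via a truncated
    Karhunen-Loeve expansion of the exponential covariance
    C(x, y) = exp(-|x - y| / l_c).

    The covariance kernel is discretized on the grid, the weighted
    eigenproblem solved, and the field assembled as
    sum_k sqrt(lambda_k) * xi_k * phi_k(x) with iid standard normals xi_k.
    """
    x = np.linspace(0.0, length, n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)
    w = x[1] - x[0]                          # uniform quadrature weight
    evals, evecs = np.linalg.eigh(w * C)     # discrete KL eigenproblem
    idx = np.argsort(evals)[::-1][:n_modes]  # keep the dominant modes
    evals, evecs = evals[idx], evecs[:, idx]
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=n_modes)
    field = (evecs / np.sqrt(w)) @ (np.sqrt(np.maximum(evals, 0.0)) * xi)
    return x, field, evals
```

Such a sample, interpolated onto the nodes of a finite element mesh, is what the paper applies as a geometric imperfection; constrained regions (e.g. flange-web intersections or supports) correspond to conditioning or zeroing the field there.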

  20. Some Minorants and Majorants of Random Walks and Levy Processes

    NASA Astrophysics Data System (ADS)

    Abramson, Joshua Simon

    This thesis consists of four chapters, all relating to some sort of minorant or majorant of random walks or Levy processes. In Chapter 1 we provide an overview of recent work on descriptions and properties of the convex minorant of random walks and Levy processes as detailed in Chapter 2, [72] and [73]. This work rejuvenated the field of minorants, and led to the work in all the subsequent chapters. The results surveyed include point process descriptions of the convex minorant of random walks and Levy processes on a fixed finite interval, up to an independent exponential time, and in the infinite horizon case. These descriptions follow from the invariance of these processes under an adequate path transformation. In the case of Brownian motion, we note how further special properties of this process, including time-inversion, imply a sequential description for the convex minorant of the Brownian meander. This chapter is based on [3], which was co-written with Jim Pitman, Nathan Ross and Geronimo Uribe Bravo. Chapter 1 serves as a long introduction to Chapter 2, in which we offer a unified approach to the theory of concave majorants of random walks. The reasons for the switch from convex minorants to concave majorants are discussed in Section 1.1, but the results are all equivalent. This unified theory is arrived at by providing a path transformation for a walk of finite length that leaves the law of the walk unchanged whilst providing complete information about the concave majorant - the path transformation is different from the one discussed in Chapter 1, but this is necessary to deal with a more general case than the standard one as done in Section 2.6. The path transformation of Chapter 1, which is discussed in detail in Section 2.8, is more relevant to the limiting results for Levy processes that are of interest in Chapter 1. 
Our results lead to a description of a walk of random geometric length as a Poisson point process of excursions away from its concave majorant, which is then used to find a complete description of the concave majorant of a walk of infinite length. In the case where subsets of increments may have the same arithmetic mean (the more general case mentioned above), we investigate three nested compositions that naturally arise from our construction of the concave majorant. This chapter is based on [4], which was co-written with Jim Pitman. In Chapter 3, we study the Lipschitz minorant of a Levy process. For alpha > 0, the alpha-Lipschitz minorant of a function f : R→R is the greatest function m : R→R such that m ≤ f and |m(s) - m(t)| ≤ alpha |s - t| for all s, t ∈ R, should such a function exist. If X = (X_t)_{t∈R} is a real-valued Levy process that is not pure linear drift with slope +/-alpha, then the sample paths of X have an alpha-Lipschitz minorant almost surely if and only if |E[X_1]| < alpha. Denoting the minorant by M, we investigate properties of the random closed set Z := {t ∈ R : M_t = X_t ∧ X_{t-}}, which, since it is regenerative and stationary, has the distribution of the closed range of some subordinator "made stationary" in a suitable sense. We give conditions for the contact set Z to be countable or to have zero Lebesgue measure, and we obtain formulas that characterize the Levy measure of the associated subordinator. We study the limit of Z as alpha → infinity and find for the so-called abrupt Levy processes introduced by Vigon that this limit is the set of local infima of X. When X is a Brownian motion with drift beta such that |beta| < alpha, we calculate explicitly the densities of various random variables related to the minorant. This chapter is based on [2], which was co-written with Steven N. Evans. 
Finally, in Chapter 4 we study the structure of the shocks for the inviscid Burgers equation in dimension 1 when the initial velocity is given by Levy noise, or equivalently when the initial potential psi_0 is a two-sided Levy process. This shock structure turns out to give rise to a parabolic minorant of the Levy process; see Section 4.2 for details. The main results are that when psi_0 is abrupt in the sense of Vigon, or has bounded variation with limsup_{h↓0} h^{-2} psi_0(h) = infinity, the set of points with zero velocity is regenerative, and that in the latter case this set is equal to the set of Lagrangian regular points, which is non-empty. When psi_0 is abrupt the shock structure is discrete, and when psi_0 is eroded there are no rarefaction intervals. This chapter is based on [1].
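The concave majorant of a finite-length walk that is central to Chapter 2 can be computed directly as the upper convex hull of the points (k, S_k). A sketch using Andrew's monotone-chain construction (a generic assumed algorithm, not the path transformation of the thesis):

```python
import numpy as np

def concave_majorant(walk):
    """Least concave majorant of a walk (S_0, ..., S_n) at integer times,
    computed as the upper convex hull of the points (k, S_k)."""
    pts = list(enumerate(walk))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    # interpolate the hull vertices back onto integer times
    hx = np.array([h[0] for h in hull], dtype=float)
    hy = np.array([h[1] for h in hull], dtype=float)
    return np.interp(np.arange(len(walk), dtype=float), hx, hy)
```

The faces of this hull are exactly the segments of the concave majorant whose slopes and lengths the thesis describes via Poisson point processes of excursions.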

  1. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: McDonnell-Douglas Helicopter Company achievements

    NASA Technical Reports Server (NTRS)

    Toossi, Mostafa; Weisenburger, Richard; Hashemi-Kia, Mostafa

    1993-01-01

    This paper presents a summary of some of the work performed by McDonnell Douglas Helicopter Company under NASA Langley-sponsored rotorcraft structural dynamics program known as DAMVIBS (Design Analysis Methods for VIBrationS). A set of guidelines which is applicable to dynamic modeling, analysis, testing, and correlation of both helicopter airframes and a large variety of structural finite element models is presented. Utilization of these guidelines and the key features of their applications to vibration modeling of helicopter airframes are discussed. Correlation studies with the test data, together with the development and applications of a set of efficient finite element model checkout procedures, are demonstrated on a large helicopter airframe finite element model. Finally, the lessons learned and the benefits resulting from this program are summarized.

  2. Sufficient Condition for Finite-Time Singularity in a High-Symmetry Euler Flow

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, A.; Ng, C. S.

    1997-11-01

    The possibility of a finite-time singularity (FTS) with a smooth initial condition is considered in a high-symmetry Euler flow (the Kida flow). It has been shown recently [C. S. Ng and A. Bhattacharjee, Phys. Rev. E 54, 1530 (1996)] that there must be a FTS if the fourth-order pressure derivative (p_xxxx) is always positive within a finite range X on the x-axis around the origin. This sufficient condition is now extended to the case when the range X is itself time-dependent. It is shown that a FTS must still exist even when X → 0 if the p_xxxx value at the origin is growing faster than X^-2. It is tested statistically that p_xxxx at the origin is most probably positive for a Kida flow with random Fourier amplitudes and that it is generally growing as energy cascades to Fourier modes with higher wavenumbers k. The condition that p_xxxx grows faster than X^-2 is found to be satisfied when the spectral index ν of the energy spectrum E(k) ∝ k^-ν of the random flow is less than 3.

  3. Min and max are the only continuous ampersand-, V-operations for finite logics

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik

    1992-01-01

    Experts usually express their degrees of belief in their statements by the words of a natural language (like 'maybe', 'perhaps', etc.). If an expert system contains the degrees of belief t(A) and t(B) that correspond to the statements A and B, and a user asks this expert system whether 'A&B' is true, then it is necessary to come up with a reasonable estimate for the degree of belief of A&B. The operation that processes t(A) and t(B) into such an estimate t(A&B) is called an &-operation. Many different &-operations have been proposed. Which of them should one choose? This can (in principle) be done by interviewing experts and eliciting an &-operation from them, but such a process is very time-consuming and therefore not always possible. So, usually, to choose an &-operation, the finite set of actually possible degrees of belief is extended to an infinite set (e.g., to an interval (0,1)), an operation is defined there, and then this operation is restricted to the finite set. In this paper, only the original finite set is considered. It is shown that the reasonable assumption that an &-operation is continuous (i.e., that a gradual change in t(A) and t(B) must lead to a gradual change in t(A&B)) uniquely determines min as the &-operation. Likewise, max is the only continuous V-operation. These results are in good accordance with the experimental analysis of 'and' and 'or' in human beliefs.

  4. Nonlinear effects associated with fast magnetosonic waves and turbulent magnetic amplification in laboratory and astrophysical plasmas

    NASA Astrophysics Data System (ADS)

    Tiwary, PremPyari; Sharma, Swati; Sharma, Prachi; Singh, Ram Kishor; Uma, R.; Sharma, R. P.

    2016-12-01

    This paper presents the spatio-temporal evolution of the magnetic field due to the nonlinear coupling between a fast magnetosonic wave (FMSW) and a low frequency slow Alfvén wave (SAW). The dynamical equations of the finite frequency FMSW and SAW in the presence of the ponderomotive force of the FMSW (pump wave) are presented. Numerical simulation has been carried out for the nonlinear coupled equations of the finite frequency FMSW and SAW. A systematic scan of the nonlinear behavior/evolution of the pump FMSW has been done for one of the sets of parameters chosen in this paper, using the coupled dynamical equations. Filamentation of the fast magnetosonic wave has been considered to be responsible for the magnetic turbulence during laser plasma interaction. The results show that the formation and growth of localized structures depend on the background magnetic field, but the order of amplification is not affected by the magnitude of the background magnetic field. We show the relevance of our model for two different sets of parameters used in laboratory and astrophysical phenomena. One set of parameters pertains to experimental observations in the study of fast ignition of laser fusion, and hence to turbulent structures in a stellar environment. The other set corresponds to the study of magnetic field amplification in the clumpy medium surrounding the supernova remnant Cassiopeia A. The results indicate considerable randomness in the spatial structure of the magnetic field profile in both cases and give a sufficient indication of turbulence. The turbulent spectra have been studied and a break point has been found around k, which is consistent with the observations in both cases. The nonlinear wave-wave interaction presented in this paper may be important in understanding turbulence in the laboratory as well as in astrophysical phenomena.

  5. The random fractional matching problem

    NASA Astrophysics Data System (ADS)

    Lucibello, Carlo; Malatesta, Enrico M.; Parisi, Giorgio; Sicuro, Gabriele

    2018-05-01

    We consider two formulations of the random-link fractional matching problem, a relaxed version of the more standard random-link (integer) matching problem. In one formulation we allow each node to be linked to itself in the optimal matching configuration; in the other such a link is forbidden. Both problems have the same asymptotic average optimal cost as the random-link matching problem on the complete graph. Using a replica approach and previous results of Wästlund (2010 Acta Mathematica 204 91–150), we analytically derive the finite-size corrections to the asymptotic optimal cost. We compare our results with numerical simulations and discuss the main differences between random-link fractional matching problems and the random-link matching problem.
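    The relaxation discussed above can be illustrated on a small instance. The sketch below (not the paper's replica computation; instance size and weights are invented) sets up the fractional matching linear program on K_6 with random link costs and checks that its optimum never exceeds the brute-force integer perfect-matching optimum.

```python
# Hypothetical illustration: random-link fractional matching as an LP
# relaxation, compared against the integer perfect-matching optimum.
import itertools
import numpy as np
from scipy.optimize import linprog  # assumes SciPy >= 1.6 for "highs"

rng = np.random.default_rng(0)
n = 6                                    # small complete graph K_n
edges = list(itertools.combinations(range(n), 2))
cost = rng.exponential(size=len(edges))  # i.i.d. random link weights

# LP: minimize sum c_e x_e  s.t. for every node, incident weight sums to 1.
A_eq = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    A_eq[i, k] = A_eq[j, k] = 1.0
res = linprog(cost, A_eq=A_eq, b_eq=np.ones(n), bounds=(0, None), method="highs")
lp_cost = res.fun

# Integer optimum by brute force over the 15 perfect matchings of K_6.
def matchings(nodes):
    if not nodes:
        yield []
        return
    i, rest = nodes[0], nodes[1:]
    for j in rest:
        for m in matchings([v for v in rest if v != j]):
            yield [(i, j)] + m

edge_cost = dict(zip(edges, cost))
int_cost = min(sum(edge_cost[e] for e in m) for m in matchings(list(range(n))))

print(lp_cost, int_cost)   # the relaxation is never more expensive
```

    On odd-cycle-supported instances the fractional optimum can be strictly below the integer one (half-integral solutions), which is what makes the finite-size corrections of the two problems differ.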

  6. Statistical mechanics of a single particle in a multiscale random potential: Parisi landscapes in finite-dimensional Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Bouchaud, Jean-Philippe

    2008-08-01

    We construct an N-dimensional Gaussian landscape with multiscale, translation-invariant, logarithmic correlations and investigate the statistical mechanics of a single particle in this environment. In the limit of high dimension N → ∞, the free energy of the system and the overlap function are calculated exactly using the replica trick and Parisi's hierarchical ansatz. In the thermodynamic limit, we recover the most general version of Derrida's generalized random energy model (GREM). The low-temperature behaviour depends essentially on the spectrum of length scales involved in the construction of the landscape. If the latter consists of K discrete values, the system is characterized by a K-step replica symmetry breaking solution. We argue that our construction is in fact valid in any finite spatial dimension N ≥ 1. We discuss the implications of our results for the singularity spectrum describing the multifractality of the associated Boltzmann-Gibbs measure. Finally, we discuss several generalizations and open problems, such as the dynamics in such a landscape and the construction of a generalized multifractal random walk.

  7. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  8. Two-particle problem in comblike structures

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Cassi, Davide; Cattivelli, Luca; Sartori, Fabio

    2016-05-01

    Encounters between walkers performing random motion on an appropriate structure can describe a wide variety of natural phenomena, ranging from pharmacokinetics to foraging. On homogeneous structures the asymptotic encounter probability between two walkers is (qualitatively) independent of whether both walkers are moving or one is kept fixed. On infinite comblike structures this is no longer the case, and here we examine in depth the mechanisms underlying the emergence of a finite probability that two random walkers will never meet, even though a single random walker is certain to visit every site. In particular, we introduce an analytical approach to address this problem and even more general ones, such as the case of two walkers with different diffusivities, particles walking on a finite comb and on arbitrary bundled structures, possibly in the presence of loops. Our investigations are both analytical and numerical and highlight that, in general, the outcome of a reaction involving two reactants on a comblike architecture can differ strongly according to whether both reactants are moving (no matter their relative diffusivities) or only one is, and according to the density of shortcuts among the branches.
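    The two-walkers-versus-one distinction above can be probed with a minimal Monte Carlo sketch. The comb below (periodic backbone of length L with teeth of length M) and all parameters are invented for illustration; it estimates the probability of an encounter within a finite horizon for both walkers moving versus one held fixed.

```python
# Hedged sketch: encounter probability of two walkers on a small finite comb,
# both moving vs. one fixed. Parameters are illustrative, not the paper's.
import random

random.seed(1)
L, M, T, TRIALS = 10, 5, 2000, 300

def neighbors(site):
    x, y = site
    if y == 0:  # backbone site: left, right, and up the local tooth
        return [((x - 1) % L, 0), ((x + 1) % L, 0), (x, 1)]
    nbrs = [(x, y - 1)]
    if y < M:
        nbrs.append((x, y + 1))
    return nbrs

def meet_within(both_move):
    # Starts chosen with equal parity: the comb is bipartite, so two
    # simultaneously moving walkers on opposite colours could never meet.
    a, b = (0, 0), (4, 0)
    for _ in range(T):
        a = random.choice(neighbors(a))
        if both_move:
            b = random.choice(neighbors(b))
        if a == b:
            return True
    return False

p_both = sum(meet_within(True) for _ in range(TRIALS)) / TRIALS
p_one = sum(meet_within(False) for _ in range(TRIALS)) / TRIALS
print(p_both, p_one)
```

    On an infinite comb the two estimates separate qualitatively; on this finite comb both walkers eventually meet, so the contrast shows up only through the finite horizon T.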

  9. Random Walks in a One-Dimensional Lévy Random Environment

    NASA Astrophysics Data System (ADS)

    Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena

    2016-04-01

    We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.

  10. Random Sequence for Optimal Low-Power Laser Generated Ultrasound

    NASA Astrophysics Data System (ADS)

    Vangi, D.; Virga, A.; Gulino, M. S.

    2017-08-01

    Low-power laser-generated ultrasound has lately been gaining importance in the research world, thanks to the possibility of investigating the structural integrity of a mechanical component through a non-contact, Non-Destructive Testing (NDT) procedure. The ultrasound signals are, however, very low in amplitude, making it necessary to apply pre-processing and post-processing operations to detect them. The cross-correlation technique is used in this work, meaning that a random signal must be used as the laser input. For this purpose, a highly random and simple-to-create code called the T sequence, capable of enhancing ultrasound detectability and not previously available in the state of the art, is introduced. Several important parameters that characterize the T sequence can influence the process: the number of pulses N_pulses, the pulse duration δ, and the distance between pulses d_pulses. A Finite Element (FE) model of a 3 mm steel disk has been initially developed to study the longitudinal ultrasound generation mechanism and the obtainable outputs. Subsequent experimental tests have shown that the T sequence is highly flexible for ultrasound detection purposes, it being optimal to use high N_pulses and δ but low d_pulses. Apart from describing the phenomena that arise in the low-power laser generation process, the results of this study are also important for setting up an effective NDT procedure based on this technology.
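    The cross-correlation idea behind the detection scheme can be sketched in a few lines. A generic random bipolar code stands in for the paper's T sequence (whose exact construction is not reproduced here); the delay, amplitudes, and noise level are invented.

```python
# Illustrative sketch: recovering a weak, delayed echo of a random excitation
# code by cross-correlation, which compresses the distributed pulse energy
# into a single peak at the true delay. Not the authors' exact T sequence.
import numpy as np

rng = np.random.default_rng(42)
n_code = 256
code = rng.choice([-1.0, 1.0], n_code)     # random bipolar excitation sequence

true_delay = 90
received = np.zeros(n_code + 200)
received[true_delay:true_delay + n_code] += 0.05 * code   # weak echo
received += 0.02 * rng.standard_normal(received.size)     # measurement noise

xcorr = np.correlate(received, code, mode="valid")
est_delay = int(np.argmax(xcorr))
print(est_delay)   # 90
```

    The correlation peak scales with the full code length while noise and sidelobes grow only like its square root, which is why a longer random sequence improves detectability of the low-amplitude ultrasound.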

  11. Effect of Finite Computational Domain on Turbulence Scaling Law in Both Physical and Spectral Spaces

    NASA Technical Reports Server (NTRS)

    Hou, Thomas Y.; Wu, Xiao-Hui; Chen, Shiyi; Zhou, Ye

    1998-01-01

    The well-known translation between the power law of energy spectrum and that of the correlation function or the second order structure function has been widely used in analyzing random data. Here, we show that the translation is valid only in proper scaling regimes. The regimes of valid translation are different for the correlation function and the structure function. Indeed, they do not overlap. Furthermore, in practice, the power laws exist only for a finite range of scales. We show that this finite range makes the translation inexact even in the proper scaling regime. The error depends on the scaling exponent. The current findings are applicable to data analysis in fluid turbulence and other stochastic systems.

  12. Transport properties of bilayer graphene due to charged impurity scattering: Temperature-dependent screening and substrate effects

    NASA Astrophysics Data System (ADS)

    Linh, Dang Khanh; Khanh, Nguyen Quoc

    2018-03-01

    We calculate the zero-temperature conductivity of bilayer graphene (BLG) impacted by Coulomb impurity scattering using four different screening models: unscreened, Thomas-Fermi (TF), overscreened and random phase approximation (RPA). We also calculate the conductivity and thermal conductance of BLG using TF, zero- and finite-temperature RPA screening functions. We find large differences between the results of the models and show that TF and finite-temperature RPA give similar results for diffusion thermopower Sd. Using the finite-temperature RPA, we calculate temperature and density dependence of Sd in BLG on SiO2, HfO2 substrates and suspended BLG for different values of interlayer distance c and distance between the first layer and the substrate d.

  13. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used; the latter is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example; both classes show high sensitivity but markedly different levels of specificity. For the procalcitonin example, the approach identifies three latent classes of diagnostic accuracy; here, sensitivities and specificities differ considerably, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
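    The latent-class idea can be sketched with a minimal two-component bivariate Gaussian mixture fitted by EM. The data below are synthetic (logit-sensitivity/logit-specificity centres are invented), and the hand-rolled EM is a rough stand-in for the paper's model, not its actual estimation procedure.

```python
# Minimal EM for a two-component bivariate Gaussian mixture, sketching the
# componentwise idea: each latent class gets its own (sens, spec) centre.
import numpy as np

rng = np.random.default_rng(3)

# Two invented latent classes of studies in (logit-sens, logit-spec) space.
c1 = rng.multivariate_normal([2.0, 2.5], 0.05 * np.eye(2), size=60)
c2 = rng.multivariate_normal([2.0, 0.5], 0.05 * np.eye(2), size=60)
X = np.vstack([c1, c2])

def em_2gauss(X, n_iter=100):
    n, d = X.shape
    mu = X[[0, -1]].copy()               # crude init: one point from each end
    cov = np.array([np.cov(X.T)] * 2)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        resp = np.empty((n, 2))          # E-step: component responsibilities
        for k in range(2):
            diff = X - mu[k]
            quad = np.einsum("ni,ij,nj->n", diff, np.linalg.inv(cov[k]), diff)
            norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov[k])))
            resp[:, k] = w[k] * norm * np.exp(-0.5 * quad)
        resp /= resp.sum(axis=1, keepdims=True)
        for k in range(2):               # M-step: weighted moments
            r = resp[:, k]
            mu[k] = r @ X / r.sum()
            diff = X - mu[k]
            cov[k] = (r[:, None] * diff).T @ diff / r.sum() + 1e-6 * np.eye(d)
        w = resp.mean(axis=0)
    return mu, w

mu, w = em_2gauss(X)
mu = mu[np.argsort(mu[:, 1])]            # order components by specificity axis
print(mu)
```

    Each recovered component mean would then anchor its own sROC curve, which is how a mixture accommodates classes with similar sensitivity but different specificity.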

  14. Coevolutionary dynamics in large, but finite populations

    NASA Astrophysics Data System (ADS)

    Traulsen, Arne; Claussen, Jens Christian; Hauert, Christoph

    2006-07-01

    Coevolving and competing species or game-theoretic strategies exhibit rich and complex dynamics for which a general theoretical framework based on finite populations is still lacking. Recently, an explicit mean-field description in the form of a Fokker-Planck equation was derived for frequency-dependent selection with two strategies in finite populations based on microscopic processes [A. Traulsen, J. C. Claussen, and C. Hauert, Phys. Rev. Lett. 95, 238701 (2005)]. Here we generalize this approach in a twofold way: First, we extend the framework to an arbitrary number of strategies and second, we allow for mutations in the evolutionary process. The deterministic limit of infinite population size of the frequency-dependent Moran process yields the adjusted replicator-mutator equation, which describes the combined effect of selection and mutation. For finite populations, we provide an extension taking random drift into account. In the limit of neutral selection, i.e., whenever the process is determined by random drift and mutations, the stationary strategy distribution is derived. This distribution forms the background for the coevolutionary process. In particular, a critical mutation rate uc is obtained separating two scenarios: above uc the population predominantly consists of a mixture of strategies whereas below uc the population tends to be in homogeneous states. For one of the fundamental problems in evolutionary biology, the evolution of cooperation under Darwinian selection, we demonstrate that the analytical framework provides excellent approximations to individual based simulations even for rather small population sizes. This approach complements simulation results and provides a deeper, systematic understanding of coevolutionary dynamics.
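    One concrete limit from the abstract is easy to check numerically: under neutral selection the Moran process reduces to pure random drift, and the fixation probability of i mutants in a population of size N is i/N. The sketch below (population size and trial count are arbitrary choices) verifies this by Monte Carlo.

```python
# Sketch of the neutral-drift limit of the Moran process: birth and death
# individuals drawn uniformly, so the mutant count performs an unbiased walk
# until absorption; fixation probability from i0 mutants should be i0/N.
import random

random.seed(7)
N, i0, TRIALS = 10, 5, 4000

def fix_neutral(i):
    while 0 < i < N:
        birth_a = random.random() < i / N   # reproducing individual is type A
        death_a = random.random() < i / N   # dying individual is type A
        i += int(birth_a) - int(death_a)
    return i == N

p_fix = sum(fix_neutral(i0) for _ in range(TRIALS)) / TRIALS
print(p_fix)   # close to i0 / N = 0.5
```

    Frequency-dependent selection would replace the uniform birth draw with a payoff-weighted one; the neutral case is the background stationary-drift regime the abstract builds on.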

  15. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution of the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^{-1/2} when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
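    The record-dynamics character of this process is easy to see in simulation: a new cut erases every older cut to its right, so the surviving cuts are exactly the suffix minima of the uniform draws, and their count (distributed via the unsigned Stirling numbers of the first kind) grows logarithmically in time. A minimal sketch, with invented sizes:

```python
# Hedged simulation of the modified fragmentation: cut at a uniform point and
# merge everything to the right of the cut into a single regenerated fragment.
import random

random.seed(11)

def n_fragments(t):
    cuts = []                    # surviving cut positions, kept sorted
    for _ in range(t):
        u = random.random()
        # the new cut erases every older cut lying to its right
        cuts = [c for c in cuts if c < u] + [u]
    return len(cuts) + 1         # k cuts -> k + 1 fragments

trials = 300
mean_10 = sum(n_fragments(10) for _ in range(trials)) / trials
mean_1000 = sum(n_fragments(1000) for _ in range(trials)) / trials
print(mean_10, mean_1000)        # ~ log(t)-like growth of the fragment count
```

    The mean cut count after t steps is the harmonic number H_t ≈ log(t) + 0.577, the classic record-statistics result underlying the slow 'quake' dynamics.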

  16. Some functional limit theorems for compound Cox processes

    NASA Astrophysics Data System (ADS)

    Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.

    2016-06-01

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  17. Random Resistor Network Model of Minimal Conductivity in Graphene

    NASA Astrophysics Data System (ADS)

    Cheianov, Vadim V.; Fal'Ko, Vladimir I.; Altshuler, Boris L.; Aleiner, Igor L.

    2007-10-01

    Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.

  18. Some functional limit theorems for compound Cox processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korolev, Victor Yu.; Institute of Informatics Problems FRC CSC RAS; Chertok, A. V.

    2016-06-08

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  19. COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_C(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  20. Optimal Estimation of Clock Values and Trends from Finite Data

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    2005-01-01

    We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary dth increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.

  1. A random rule model of surface growth

    NASA Astrophysics Data System (ADS)

    Mello, Bernardo A.

    2015-02-01

    Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model of Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, rather than random, substrate scan. The randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
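    A rough sketch of the idea: an etching-type update applied in a sequential scan, with the randomness moved into the per-site rule choice (here simply "apply the etching step or not", which is an invented stand-in; the paper's actual rule set may differ). The interface width grows before eventually saturating, as in KPZ-class growth.

```python
# Hedged sketch of sequential-scan growth with a random per-site rule choice,
# loosely inspired by the etching model; parameters and rule are illustrative.
import math
import random

random.seed(5)
Lsize = 200
h = [0] * Lsize

def etch(h, i):
    # Etching-type rule: raise lower neighbours to h[i], then grow site i.
    for j in ((i - 1) % Lsize, (i + 1) % Lsize):
        if h[j] < h[i]:
            h[j] = h[i]
    h[i] += 1

def width(h):
    m = sum(h) / len(h)
    return math.sqrt(sum((x - m) ** 2 for x in h) / len(h))

w_early = 0.0
for t in range(400):
    for i in range(Lsize):           # sequential scan; randomness in the rule
        if random.random() < 0.5:
            etch(h, i)
    if t == 10:
        w_early = width(h)
w_late = width(h)
print(w_early, w_late)               # interface width grows with time
```

    Scanning sites in order lets each sweep stream through memory, which is the cache-friendliness the abstract refers to; the growth exponents are then extracted from how width(h) scales with time and system size.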

  2. Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

    An elastic-plastic algorithm based on the von Mises yield and associative flow criteria is implemented in MHOST, a mixed-iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of the elastic-plastic mixed-iterative analysis is appropriate.

  3. Full Wave Analysis of RF Signal Attenuation in a Lossy Rough Surface Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2005-10-31

    We present a computational study of signal propagation and attenuation of a 200 MHz planar loop antenna in a cave environment. The cave is modeled as a straight tunnel with lossy random rough walls. To simulate a broad frequency band, the full-wave Maxwell equations are solved directly in the time domain via a high-order vector finite element discretization using the massively parallel CEM code EMSolve. The numerical technique is first verified against theoretical results for a planar loop antenna in a smooth lossy cave. The simulation is then performed for a series of random rough surface meshes in order to generate statistical data for the propagation and attenuation properties of the antenna in a cave environment. Results for the mean and variance of the power spectral density of the electric field are presented and discussed.

  4. Scattering of Gaussian Beams by Disordered Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.

    2016-01-01

    A frequently observed characteristic of electromagnetic scattering by a disordered particulate medium is the absence of pronounced speckles in angular patterns of the scattered light. It is known that such diffuse speckle-free scattering patterns can be caused by averaging over randomly changing particle positions and/or over a finite spectral range. To get further insight into the possible physical causes of the absence of speckles, we use the numerically exact superposition T-matrix solver of the Maxwell equations and analyze the scattering of plane-wave and Gaussian beams by representative multi-sphere groups. We show that phase and amplitude variations across an incident Gaussian beam do not serve to extinguish the pronounced speckle pattern typical of plane-wave illumination of a fixed multi-particle group. Averaging over random particle positions and/or over a finite spectral range is still required to generate the classical diffuse speckle-free regime.

  5. Electromagnetic wave extinction within a forested canopy

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1989-01-01

    A forested canopy is modeled by a collection of randomly oriented finite-length cylinders shaded by randomly oriented and distributed disk- or needle-shaped leaves. For a plane wave exciting the forested canopy, the extinction coefficient is formulated in terms of the extinction cross sections (ECSs) in the local frame of each forest component and the Eulerian angles of orientation (used to describe the orientation of each component). The ECSs in the local frame for the finite-length cylinders used to model the branches are obtained by using the forward-scattering theorem. ECSs in the local frame for the disk- and needle-shaped leaves are obtained by the summation of the absorption and scattering cross-sections. The behavior of the extinction coefficients with the incidence angle is investigated numerically for both deciduous and coniferous forest. The dependencies of the extinction coefficients on the orientation of the leaves are illustrated numerically.

  6. Entropy of finite random binary sequences with weak long-range correlations.

    PubMed

    Melnik, S S; Usatenko, O V

    2014-11-01

    We study the N-step binary stationary ergodic Markov chain and analyze its differential entropy. Supposing that the correlations are weak, we express the conditional probability function of the chain through the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses two-point correlators instead of block probabilities, it is possible to calculate the entropy of strings at much longer distances than with standard methods. A fluctuation contribution to the entropy due to the finiteness of random chains is examined. This contribution can be of the same order as the regular part even at relatively short subsequence lengths. A self-similar structure of the entropy with respect to decimation transformations is revealed for some specific forms of the pair correlation function. An application of the theory to the DNA sequence of the R3 chromosome of Drosophila melanogaster is presented.
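    For a one-step binary Markov chain the entropy rate is fixed by the pair statistics alone, which is the spirit of estimating entropy from two-point correlators rather than long-block probabilities. A minimal sketch (flip probability and chain length are invented; this is not the paper's weak-correlation expansion):

```python
# Sketch: the entropy rate of a one-step binary Markov chain equals the
# binary entropy of its flip probability, recoverable from pair statistics.
import math
import random

random.seed(2)
q = 0.3                      # flip probability between consecutive symbols
n = 200_000

def H2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

h_exact = H2(q)              # exact entropy rate, bits per symbol

seq = [0]
for _ in range(n - 1):
    seq.append(seq[-1] ^ (random.random() < q))

# Estimate the flip rate (a two-point statistic) and plug it back in.
flips = sum(a != b for a, b in zip(seq, seq[1:])) / (n - 1)
h_est = H2(flips)
print(h_exact, h_est)
```

    For longer-range weak correlations the paper's functional replaces this single flip rate with the full pair correlator, avoiding the exponential cost of estimating block probabilities.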

  7. Entropy of finite random binary sequences with weak long-range correlations

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2014-11-01

    We study the N-step binary stationary ergodic Markov chain and analyze its differential entropy. Supposing that the correlations are weak, we express the conditional probability function of the chain through the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses two-point correlators instead of block probabilities, it is possible to calculate the entropy of strings at much longer distances than with standard methods. A fluctuation contribution to the entropy due to the finiteness of random chains is examined. This contribution can be of the same order as the regular part even at relatively short subsequence lengths. A self-similar structure of the entropy with respect to decimation transformations is revealed for some specific forms of the pair correlation function. An application of the theory to the DNA sequence of the R3 chromosome of Drosophila melanogaster is presented.

  8. The Benard problem: A comparison of finite difference and spectral collocation eigenvalue solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.
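    The accuracy gap reported above is easy to reproduce on a toy problem. The sketch below builds the standard Chebyshev collocation differentiation matrix (Trefethen's construction) and differentiates a smooth function, comparing against 2nd-order finite differences on a uniform grid of the same size; the test function and N are arbitrary choices, not the paper's eigenvalue problem.

```python
# Spectral (Chebyshev collocation) vs. 2nd-order finite difference
# differentiation of exp(x) on [-1, 1], illustrating spectral accuracy.
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on x_j = cos(j*pi/N), j = 0..N
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))      # "negative sum trick" for the diagonal
    return D, x

N = 16
D, x = cheb(N)
spec_err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))   # since (e^x)' = e^x

# 2nd-order centered finite differences on a uniform grid of the same size
xu = np.linspace(-1, 1, N + 1)
fd = np.gradient(np.exp(xu), xu, edge_order=2)
fd_err = np.max(np.abs(fd - np.exp(xu)))

print(spec_err, fd_err)   # spectral error is many orders of magnitude smaller
```

    For a smooth function the collocation error decays faster than any power of 1/N, whereas the finite difference error decays only like 1/N², which is the mechanism behind the seven-orders-of-magnitude gap quoted in the abstract.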

  9. Evidence for a Finite-Temperature Insulator.

    PubMed

    Ovadia, M; Kalok, D; Tamir, I; Mitra, S; Sacépé, B; Shahar, D

    2015-08-27

    In superconductors the zero-resistance current flow is protected from dissipation at finite temperatures (T) by virtue of the short-circuit condition maintained by the electrons that remain in the condensed state. The recently suggested finite-T insulator and the "superinsulating" phase are different because any residual mechanism of conduction will eventually become dominant as the finite-T insulator sets in. If the residual conduction is small, it may be possible to observe the transition to these intriguing states. We show that the conductivity of the high magnetic-field insulator terminating superconductivity in amorphous indium-oxide exhibits an abrupt drop and seems to approach zero conductance at T < 0.04 K. We discuss our results in the light of theories that lead to a finite-T insulator.

  10. A probabilistic Hu-Washizu variational principle

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Besterfield, G. H.

    1987-01-01

    A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.

  11. Prediction of response of aircraft panels subjected to acoustic and thermal loads

    NASA Technical Reports Server (NTRS)

    Mei, Chuh

    1992-01-01

    The primary effort of this research project has been focused on the development of analytical methods for the prediction of random response of structural panels subjected to combined and intense acoustic and thermal loads. The accomplishments on various acoustic fatigue research activities are described first, then followed by publications and theses. Topics covered include: transverse shear deformation; finite element models of vibrating composite laminates; large deflection vibration modeling; finite element analysis of thermal buckling; and prediction of three dimensional duct using boundary element method.

  12. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

    This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that can represent the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, the convection velocity coefficient, the stress concentration factor, structural damping, and the thicknesses of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the new redesign is less than the endurance limit of the material, the damage due to high-cycle fatigue is negligible.

  13. A Small and Slim Coaxial Probe for Single Rice Grain Moisture Sensing

    PubMed Central

    You, Kok Yeow; Mun, Hou Kit; You, Li Ling; Salleh, Jamaliah; Abbas, Zulkifly

    2013-01-01

    Moisture detection of single rice grains using a slim and small open-ended coaxial probe is presented. The coaxial probe is suitable for the nondestructive measurement of moisture values in rice grains ranging from 9.5% to 26%. Empirical polynomial models are developed to predict the gravimetric moisture content of rice based on reflection coefficients measured using a vector network analyzer. The relationship between the reflection coefficient and relative permittivity was also established using a regression method and expressed in a polynomial model, whose coefficients were obtained by fitting data from Finite Element-based simulation. In addition, the designed single-rice-grain sample holder and experimental set-up are described. The measurement of single rice grains in this study is more precise than measurement of conventional bulk rice grains, as the random air gaps present in bulk rice grains are excluded. PMID:23493127
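The empirical polynomial calibration described above can be sketched as follows. The data points below are hypothetical placeholders (not the paper's measurements), and the degree-2 fit is an illustrative choice:

```python
import numpy as np

# Hypothetical calibration data (NOT the paper's measurements):
# reflection-coefficient magnitudes and gravimetric moisture contents (%).
gamma = np.array([0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60])
moisture = np.array([26.0, 22.1, 18.9, 16.2, 13.8, 11.5, 9.5])

# Fit a degree-2 polynomial model m(Gamma), mirroring the empirical approach.
coeffs = np.polyfit(gamma, moisture, deg=2)
model = np.poly1d(coeffs)

# Predict moisture for a new reflection-coefficient reading.
print(round(float(model(0.42)), 2))
```

In practice the model order and coefficients would be chosen by fitting measured reflection data against oven-drying reference moisture values.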

  14. Additive mixed effect model for recurrent gap time data.

    PubMed

    Ding, Jieli; Sun, Liuquan

    2017-04-01

    Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinical study on chronic granulomatous disease is provided.

  15. Information sharing and sorting in a community

    NASA Astrophysics Data System (ADS)

    Bhattacherjee, Biplab; Manna, S. S.; Mukherjee, Animesh

    2013-06-01

    We present the results of a detailed numerical study of a model for the sharing and sorting of information in a community consisting of a large number of agents. The information gathering takes place in a sequence of mutual bipartite interactions where randomly selected pairs of agents communicate with each other to enhance their knowledge and sort out the common information. Although our model is less restricted than the well-established naming game, the numerical results strongly indicate that the whole set of exponents characterizing this model is different from that of the naming game, and that the exponents assume nontrivial values. Finally, it appears that, in analogy to the emergence of clusters in the phenomenon of percolation, one can define clusters of agents here having the same information. We have studied in detail the growth of the largest cluster in this article and performed its finite-size scaling analysis.
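A minimal sketch of this kind of pairwise information pooling, with assumed parameters (50 agents, union-of-sets interactions) rather than the paper's exact update rule:

```python
import random
from collections import Counter

# Assumed toy dynamics: each agent starts with one private piece of
# information; randomly chosen pairs interact and pool (union) what they know.
random.seed(1)
N = 50
knowledge = [{i} for i in range(N)]   # agent i initially knows only item i

for _ in range(5000):                 # sequence of random bipartite interactions
    a, b = random.sample(range(N), 2)
    pooled = knowledge[a] | knowledge[b]
    knowledge[a], knowledge[b] = pooled, set(pooled)

# A cluster is a maximal group of agents holding identical information sets;
# after many interactions the largest cluster spans the whole community.
sizes = Counter(frozenset(k) for k in knowledge)
print(max(sizes.values()))
```

Tracking the largest-cluster size as a function of interaction count (instead of running to saturation) is what enables the finite-size scaling analysis mentioned above.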

  16. Multi-objective robust design of energy-absorbing components using coupled process-performance simulations

    NASA Astrophysics Data System (ADS)

    Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud

    2014-02-01

    The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.

  17. Improved Life Prediction of Turbine Engine Components Using a Finite Element Based Software Called Zencrack

    DTIC Science & Technology

    2003-09-01

    Fragments recovered from the report's table of contents and figure list: "…application" (5-42); "5.10 Different materials within crack-block…" (5-30); "Figure 5-29 - Application of required user edge node sets…". From the abstract: "…applications. Users have at their disposal all of the capabilities within these finite element programs and may, if desired, include any number of …"

  18. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  19. Optimum element density studies for finite-element thermal analysis of hypersonic aircraft structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy; Muramoto, Kyle M.

    1990-01-01

    Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of large, hypersonic aircraft structures of the space shuttle orbiter type. These criteria are based on rigorous studies of solution accuracy using finite element models with different element densities set up for one cell of the orbiter wing. Also, a method for optimizing the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled to examine thermal analysis solution accuracy and the extent of the computation CPU time requirements. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and that the computation CPU time was at an acceptable level. The studies suggest that modeling large, hypersonic aircraft structures with high-density elements for transient thermal analysis is feasible if a CPU optimization technique is used.

  20. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
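The smoothing effect of a random phase distribution on a Fourier spectrum can be illustrated with a toy computation; this sketch uses a uniform-amplitude page and fully random phases rather than the proposed segment-displaced mask:

```python
import numpy as np

# Illustrative comparison (not the paper's optimized mask): Fourier spectrum of
# a uniform-amplitude data page with and without a random phase mask.
rng = np.random.default_rng(0)
N = 64
page = np.ones((N, N))                    # uniform page amplitude

flat_spec = np.abs(np.fft.fft2(page))     # no mask: energy piles up at DC
masked = page * np.exp(1j * 2 * np.pi * rng.random((N, N)))
rand_spec = np.abs(np.fft.fft2(masked))   # random phases spread the spectrum

# Peak-to-mean ratio measures spectral concentration; the random phase mask
# lowers it dramatically, i.e. the intensity distribution is much smoother.
ratio_flat = flat_spec.max() / flat_spec.mean()
ratio_rand = rand_spec.max() / rand_spec.mean()
print(ratio_flat > 10 * ratio_rand)  # prints True
```

The paper's contribution is the further step of randomizing segment positions and sizes, which suppresses the spectral spikes that a fixed-segment random mask still produces.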

  1. Ensembles of physical states and random quantum circuits on graphs

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-11-01

    In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma et al. [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular, here we focus on proxies of quantum entanglement such as purity and α-Renyi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, the choice of probability measure over the local unitaries, and the circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated with the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property of physical states (that is, it holds on average and the fluctuations around the average vanish for large systems). The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law arises, and is typical, when the evolution time scales like O(L).

  2. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.

  3. The cyclic and fractal seismic series preceding an mb 4.8 earthquake on 1980 February 14 near the Virgin Islands

    USGS Publications Warehouse

    Varnes, D.J.; Bufe, C.G.

    1996-01-01

    Seismic activity in the 10 months preceding the 1980 February 14, mb 4.8 earthquake in the Virgin Islands, reported on by Frankel in 1982, consisted of four principal cycles. Each cycle began with a relatively large event or series of closely spaced events, and the duration of the cycles progressively shortened by a factor of about 3/4. Had this regular shortening of the cycles been recognized prior to the earthquake, the time of the next episode of seismicity (the main shock) might have been closely estimated 41 days in advance. That this event could be much larger than the previous events is indicated from time-to-failure analysis of the accelerating rise in released seismic energy, using a non-linear time- and slip-predictable foreshock model. Examination of the timing of all events in the sequence shows an even higher degree of order. Rates of seismicity, measured by consecutive interevent times, when plotted on an iteration diagram of a rate versus the succeeding rate, form a triangular circulating trajectory. The trajectory becomes an ascending helix if extended in a third dimension, time. This construction reveals additional and precise relations among the time intervals between times of relatively high or relatively low rates of seismic activity, including period halving and doubling. The set of 666 time intervals between all possible pairs of the 37 recorded events appears to be a fractal; the set of time points that define the intervals has a finite, non-integer correlation dimension of 0.70. In contrast, the average correlation dimension of 50 random sequences of 37 events is significantly higher, close to 1.0. In a similar analysis, the set of distances between pairs of epicentres has a fractal correlation dimension of 1.52. Well-defined cycles, numerous precise ratios among time intervals, and a non-random temporal fractal dimension suggest that the seismic series is not a random process, but rather the product of a deterministic dynamic system.

  4. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space that relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and, by sampling from the posterior of the parameters, avoids both under- and over-fitting along all directions of the parameter space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
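The core idea, gradient ascent on the log-likelihood until model averages match data averages, can be sketched in the simplest maximum-entropy setting: independent ±1 spins with fields only, no couplings. The target averages and learning rate here are illustrative, not from the work:

```python
import numpy as np

# Minimal maximum-entropy learning sketch (independent spins, no couplings).
# For p(s) proportional to exp(h . s), the model averages are <s_i> = tanh(h_i),
# and the log-likelihood gradient is (data averages - model averages).
target_means = np.array([0.3, -0.5, 0.1])   # "experimental" averages of s_i

h = np.zeros(3)                             # fields (model parameters)
for _ in range(200):
    model_means = np.tanh(h)                # exact model averages
    h += 0.5 * (target_means - model_means) # steepest ascent on log-likelihood

print(np.allclose(np.tanh(h), target_means, atol=1e-6))  # prints True
```

In the pairwise Ising case treated in the work, the model averages are no longer available in closed form and must be estimated by Gibbs sampling, which is exactly where the stochastic stationary distribution discussed above enters.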

  5. Emergence of distributed coordination in the Kolkata Paise Restaurant problem with finite information

    NASA Astrophysics Data System (ADS)

    Ghosh, Diptesh; Chakrabarti, Anindya S.

    2017-10-01

    In this paper, we study a large-scale distributed coordination problem and propose efficient adaptive strategies to solve the problem. The basic problem is to allocate a finite number of resources to individual agents in the absence of a central planner such that there is as little congestion as possible and the fraction of unutilized resources is reduced as far as possible. In the absence of a central planner and global information, agents can employ adaptive strategies that use only finite knowledge about their competitors. In this paper, we show that a combination of finite information sets and reinforcement learning can increase the utilization fraction of resources substantially.
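As a baseline for comparison, the zero-information random strategy can be simulated directly; with N agents choosing uniformly among N resources, the expected utilization fraction is 1 - 1/e ≈ 0.63, which is the figure adaptive strategies are designed to beat:

```python
import random

# Kolkata Paise Restaurant baseline: N agents pick among N resources uniformly
# at random, with no information, no learning, no coordination.
random.seed(0)
N = 10_000
choices = [random.randrange(N) for _ in range(N)]

# A resource is utilized if at least one agent chose it.
utilized = len(set(choices)) / N
print(round(utilized, 2))   # near 1 - 1/e = 0.632...
```

Any strategy whose long-run utilization fraction stays meaningfully above this baseline is exploiting information about past congestion, which is the effect the paper quantifies.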

  6. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
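The bimodal U shape of the occupation-time distribution can be seen in a Monte Carlo sketch of the non-ergodic regime, here a two-site walk with Pareto sojourn times of exponent α = 0.5 (all parameters are illustrative, not from the paper):

```python
import random

# Non-ergodic CTRW sketch: a walker alternates between two sites with
# heavy-tailed sojourn times of infinite mean (Pareto, alpha = 0.5). The
# occupation fraction of site 0 is then broadly distributed with a bimodal
# "U" shape (arcsine-law-like) instead of concentrating at 1/2.
random.seed(2)
ALPHA, T, RUNS = 0.5, 1e6, 2000

def occupation_fraction():
    t, t0, site = 0.0, 0.0, 0
    while t < T:
        tau = random.random() ** (-1.0 / ALPHA)   # Pareto sojourn, infinite mean
        if site == 0:
            t0 += min(tau, T - t)                 # time on site 0, clipped at T
        t += tau
        site ^= 1                                  # hop to the other site
    return t0 / T

fracs = [occupation_fraction() for _ in range(RUNS)]
edges = sum(f < 0.1 or f > 0.9 for f in fracs) / RUNS
middle = sum(0.45 < f < 0.55 for f in fracs) / RUNS
print(edges > middle)   # U shape: probability mass piles up near 0 and 1
```

For a finite-mean waiting time the same experiment concentrates the occupation fraction sharply around the ergodic value, which is the contrast the paper formalizes.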

  7. Stability and Existence Results for Quasimonotone Quasivariational Inequalities in Finite Dimensional Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellani, Marco; Giuli, Massimiliano, E-mail: massimiliano.giuli@univaq.it

    2016-02-15

    We study pseudomonotone and quasimonotone quasivariational inequalities in a finite dimensional space. In particular we focus our attention on the closedness of some solution maps associated to a parametric quasivariational inequality. From this study we derive two results on the existence of solutions of the quasivariational inequality. On the one hand, assuming the pseudomonotonicity of the operator, we get the nonemptiness of the set of the classical solutions. On the other hand, we show that the quasimonotonicity of the operator implies the nonemptiness of the set of nonzero solutions. An application to traffic networks is also considered.

  8. Convergence to equilibrium under a random Hamiltonian.

    PubMed

    Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  9. Convergence to equilibrium under a random Hamiltonian

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  10. Lévy walks

    NASA Astrophysics Data System (ADS)

    Zaburdaev, V.; Denisov, S.; Klafter, J.

    2015-04-01

    Random walk is a fundamental concept with applications ranging from quantum physics to econometrics. Remarkably, one specific model of random walks appears to be ubiquitous across many fields as a tool to analyze transport phenomena in which the dispersal process is faster than dictated by Brownian diffusion. The Lévy-walk model combines two key features, the ability to generate anomalously fast diffusion and a finite velocity of a random walker. Recent results in optics, Hamiltonian chaos, cold atom dynamics, biophysics, and behavioral science demonstrate that this particular type of random walk provides significant insight into complex transport phenomena. This review gives a self-consistent introduction to Lévy walks, surveys their existing applications, including latest advances, and outlines further perspectives.
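The defining feature of the model, heavy-tailed flight durations coupled to a finite walker velocity, can be sketched in one dimension; the exponent and speed below are illustrative:

```python
import random

# 1D Levy walk sketch: flight durations are power-law distributed, but the
# walker moves at a fixed finite speed v, so a flight of duration tau covers
# exactly v * tau -- unlike Levy flights, there are no instantaneous jumps.
random.seed(3)
V, ALPHA = 1.0, 1.5

def levy_walk(n_flights):
    x, t, path = 0.0, 0.0, []
    for _ in range(n_flights):
        tau = random.random() ** (-1.0 / ALPHA)    # power-law flight duration
        x += random.choice((-1.0, 1.0)) * V * tau  # finite velocity: |dx| = v*tau
        t += tau
        path.append((t, x))
    return path

path = levy_walk(1000)
# Finite-velocity constraint: the walker never outruns the cone |x| <= v*t.
print(all(abs(x) <= V * t + 1e-9 for t, x in path))  # prints True
```

It is this velocity constraint that makes the mean-squared displacement grow faster than Brownian diffusion yet remain finite at all times, the regime the review is concerned with.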

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, L.; Zaslawsky, M.

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes.

  12. Solid/FEM integration at SNLA

    NASA Technical Reports Server (NTRS)

    Chavez, Patrick F.

    1987-01-01

    The effort at Sandia National Laboratories on the methodologies and techniques being used to generate strictly hexahedral finite element meshes from a solid model is described. The functionality of the modeler is used to decompose the solid into a set of nonintersecting, meshable finite element primitives. The description of the decomposition is exported, via a Boundary Representation format, to the meshing program, which uses the information for complete finite element model specification. Particular features of the program are discussed in some detail, along with future plans for development, which include automation of the decomposition using artificial intelligence techniques.

  13. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  14. Symbolic Dynamics, Flower Automata and Infinite Traces

    NASA Astrophysics Data System (ADS)

    Foryś, Wit; Oprocha, Piotr; Bakalarski, Slawomir

    Considering a finite alphabet as a set of allowed instructions, we can identify finite words with basic actions or programs. Hence infinite paths on a flower automaton can represent the order in which these programs are executed, and a flower shift related to it represents the list of instructions to be executed at some mid-point of the computation.

  15. Nonlinear Control Systems

    DTIC Science & Technology

    2007-03-01

    Fragments recovered from the report's reference list: "Finite-dimensional regulators for a class of infinite dimensional systems," Systems and Control Letters, 3 (1983), 7-12; "…semiglobal stabilizability by encoded state feedback," to appear in Systems and Control Letters; C. De Persis, A. Isidori, "Global stabilization of …". From the abstract: "…nonequilibrium setting, for both finite and infinite dimensional control systems. Our objectives for distributed parameter systems included …"

  16. On the statistical mechanics of the 2D stochastic Euler equation

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2011-12-01

    The dynamics of vortices and large scale structures is qualitatively very different in two dimensional flows compared to their three dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow, such as the formation of large scale coherent structures, the existence of meta-stable states and random abrupt changes in the topology of the flow. In this paper we study stochastic dynamics of the finite dimensional approximation of the 2D Euler flow based on the Lie algebra su(N), which preserves all integrals of motion. In particular, we exploit the rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures, explore their properties and also study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large scale structures, meta-stability and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.

  17. A multi-assets artificial stock market with zero-intelligence traders

    NASA Astrophysics Data System (ADS)

    Ponta, L.; Raberto, M.; Cincotti, S.

    2011-01-01

    In this paper, a multi-assets artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distribution of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends, as the only consequence of random restrictions on the allocation universe of zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks, and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of multi-assets stock markets.

  18. Sufficient condition for finite-time singularity and tendency towards self-similarity in a high-symmetry flow

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    A highly symmetric Euler flow, first proposed by Kida (1985) and recently simulated by Boratav and Pelz (1994), is considered. It is found that the fourth-order spatial derivative of the pressure (p_xxxx) at the origin is most probably positive. It is demonstrated that if p_xxxx grows fast enough, there must be a finite-time singularity (FTS). For a random energy spectrum E(k) ∝ k^(-v), a FTS can occur if the spectral index v < 3. Furthermore, a positive p_xxxx has the dynamical consequence of reducing the third derivative of the velocity u_xxx at the origin. Since the expectation value of u_xxx is zero for a random distribution of energy, an ever decreasing u_xxx means that the Kida flow has an intrinsic tendency to deviate from a random state. By assuming that u_xxx reaches the minimum value for a given spectral profile, the velocity and pressure are found to have locally self-similar forms similar in shape to those found in numerical simulations. Such a quasi-self-similar solution relaxes the requirement for a FTS to v < 6. A special self-similar solution that satisfies Kelvin's circulation theorem and exhibits a FTS is found for v = 2.

  19. A new phase of disordered phonons modelled by random matrices

    NASA Astrophysics Data System (ADS)

    Schmittner, Sebastian; Zirnbauer, Martin

    2015-03-01

    Starting from the clean harmonic crystal and not invoking two-level systems, we propose a model for phonons in a disordered solid. In this model the strength of mass and spring constant disorder can be increased separately. Both types of disorder are modelled by random matrices that couple the degrees of freedom locally. Treated in coherent potential approximation (CPA), the speed of sound decreases with increasing disorder until it reaches zero at finite disorder strength. There, a critical transition to a strong disorder phase occurs. In this novel phase, we find the density of states at zero energy in three dimensions to be finite, leading to a linear temperature dependence of the heat capacity, as observed experimentally for vitreous systems. For any disorder strength, our model is stable, i.e. masses and spring constants are positive, and there are no runaway dynamics. This is ensured by using appropriate probability distributions, inspired by Wishart ensembles, for the random matrices. The CPA self-consistency equations are derived in a very accessible way using planar diagrams. The talk focuses on the model and the results. The first author acknowledges financial support by the Deutsche Telekom Stiftung.

  20. Unraveling spurious properties of interaction networks with tailored random networks.

    PubMed

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With the clustering coefficient and average shortest path length, we observe that unweighted interaction networks, derived by thresholding the values of interdependence, possess non-trivial topologies compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures, known for their complex spatial and temporal dynamics, we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.
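The basic pitfall the authors address can be reproduced in a few lines: truly independent finite-length series still yield above-threshold sample correlations, hence spurious edges. The series length and threshold here are illustrative choices:

```python
import numpy as np

# Spurious-edge demonstration: independent finite-length time series have
# nonzero sample correlations, so thresholding creates edges between nodes
# that have no genuine interaction.
rng = np.random.default_rng(7)
n_nodes, n_samples, threshold = 30, 100, 0.2

series = rng.standard_normal((n_nodes, n_samples))  # truly independent processes
corr = np.corrcoef(series)                          # sample correlation matrix
adj = (np.abs(corr) > threshold).astype(int)        # threshold -> "network"
np.fill_diagonal(adj, 0)

spurious_edges = adj.sum() // 2
print(spurious_edges > 0)   # finite data induces edges despite independence
```

Tailored random networks, as proposed in the paper, are built to carry exactly this kind of finite-sample edge structure, so that genuine interdependence can be judged against it rather than against an Erdös-Rényi null model.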

  2. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
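
    For the simpler discrete-time Markov chain case (not the semi-Markov setting of this paper), the Shannon entropy rate has the closed form h = -Σ_i π_i Σ_j P_ij log P_ij, where π is the stationary distribution; a minimal sketch:

    ```python
    import numpy as np

    def entropy_rate(P):
        """Shannon entropy rate (nats/step) of a stationary Markov chain
        with transition matrix P: h = -sum_i pi_i sum_j P_ij log P_ij."""
        # Stationary distribution: left eigenvector of P with eigenvalue 1.
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        pi = pi / pi.sum()
        with np.errstate(divide="ignore", invalid="ignore"):
            logs = np.where(P > 0, np.log(P), 0.0)
        return float(-(pi[:, None] * P * logs).sum())

    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    print(entropy_rate(P))
    ```

    For an i.i.d. fair coin (all entries 0.5) this reduces to log 2, and for a deterministic chain (a permutation matrix) it is zero.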

  3. Observer-based robust finite time H∞ sliding mode control for Markovian switching systems with mode-dependent time-varying delay and incomplete transition rate.

    PubMed

    Gao, Lijun; Jiang, Xiaoxiao; Wang, Dandan

    2016-03-01

    This paper investigates the problem of robust finite-time H∞ sliding mode control for a class of Markovian switching systems subject to mode-dependent time-varying delay, partly unknown transition rates and unmeasurable state. The main difficulty is that a sliding mode surface cannot be designed directly from the unknown transition rates and the unmeasurable state. To overcome this obstacle, the set of modes is first divided into two subsets, one with known and one with unknown transition rates, based on which a state observer is established. A robust finite-time sliding mode controller is also designed to cope with the effect of the partially unknown transition rates. It is shown that reachability, finite-time stability, finite-time boundedness and finite-time H∞ state-feedback stabilization of the sliding mode dynamics can be ensured despite the unknown transition rates. Finally, simulation results verify the effectiveness of the proposed robust finite-time control scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Determining Definitions for Comparing Cardinalities

    ERIC Educational Resources Information Center

    Shipman, B. A.

    2012-01-01

    Through a series of six guided classroom discoveries, students create, via targeted questions, a definition for deciding when two sets have the same cardinality. The program begins by developing basic facts about cardinalities of finite sets. Extending two of these facts to infinite sets yields two statements on comparing infinite cardinalities…

  5. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data

    PubMed Central

    2010-01-01

    Background In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also makes it possible to avoid any product of convolutions of the pattern distributions in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. 
Conclusions Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We end up with a discussion on our method and on its potential improvements. PMID:20205909
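
    As a toy illustration of the Markov chain embedding idea (simplified here to an i.i.d. source and one short pattern — not the paper's algorithms): a KMP-style automaton is embedded in a chain over (state, count) pairs, giving the exact count distribution in one sequence, and the total over independent sequences follows by convolution.

    ```python
    import numpy as np

    def pattern_count_dist(pattern, probs, length):
        """Exact distribution of overlapping occurrences of `pattern` in an
        i.i.d. random sequence of given length (letter probabilities `probs`),
        computed by embedding a KMP automaton in a Markov chain."""
        m = len(pattern)

        def nxt(s, a):
            # Longest prefix of `pattern` that is a suffix of pattern[:s] + a.
            t = pattern[:s] + a
            while t and not pattern.startswith(t):
                t = t[1:]
            return len(t)

        t = pattern[1:]
        while t and not pattern.startswith(t):
            t = t[1:]
        border = len(t)  # state to resume from after a match (overlaps allowed)

        dp = np.zeros((m, length + 1))   # dp[state, count] = probability
        dp[0, 0] = 1.0
        for _ in range(length):
            new = np.zeros_like(dp)
            for s in range(m):
                for a, p in probs.items():
                    s2 = nxt(s, a)
                    if s2 == m:                  # occurrence completed
                        new[border, 1:] += p * dp[s, :-1]
                    else:
                        new[s2, :] += p * dp[s, :]
            dp = new
        return dp.sum(axis=0)

    # Occurrences of "AB" in 3 i.i.d. letters over {A, B}, uniform:
    single = pattern_count_dist("AB", {"A": 0.5, "B": 0.5}, 3)

    # Total count over a set of 4 independent sequences: convolution power.
    total = np.array([1.0])
    for _ in range(4):
        total = np.convolve(total, single)
    print(single, total)
    ```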

  7. Finite element analysis of thrust angle contact ball slewing bearing

    NASA Astrophysics Data System (ADS)

    Deng, Biao; Guo, Yuan; Zhang, An; Tang, Shengjin

    2017-12-01

    Because a large, heavily loaded slewing bearing no longer follows the rigid-ring hypothesis under load, a solid finite element model of a thrust angular contact ball bearing was established using the finite element analysis software ANSYS. The boundary conditions of the model were set according to the actual operating conditions of the slewing bearing, the internal stress state of the bearing was obtained by solution, and the results were compared with numerical results based on the rigid-ring assumption. The comparison shows that the finite element method predicts more loaded balls and somewhat lower maximum contact stresses between ball and raceway, because it treats the ring as an elastic body: under external load, the rings of heavily loaded slewing bearings deform structurally in the radial plane. The finite element results are therefore closer to the actual engineering behavior of the slewing bearing.

  8. Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph; Kashefi, Elham

    2012-02-01

    Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's inputs, outputs and computation remain private. Recently we proposed a universal unconditionally secure BQC scheme, based on the conceptual framework of the measurement-based quantum computing model, where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Here we present a refinement of the scheme which vastly expands the class of quantum circuits which can be directly implemented as a blind computation, by introducing a new class of resource states which we term dotted-complete graph states and expanding the set of single qubit states the client is required to prepare. These two modifications significantly simplify the overall protocol and remove the previously present restriction that only nearest-neighbor circuits could be implemented as blind computations directly. As an added benefit, the refined protocol admits a substantially more intuitive and simplified verification mechanism, allowing the correctness of a blind computation to be verified with arbitrarily small probability of error.

  9. Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Luo, Li-Shi

    2007-01-01

    In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know to exactly what order moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher dimensional models are suggested.
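
    The linear dependencies introduced by a finite velocity set can be checked directly by numerical rank computation; a sketch for the standard D2Q9 set, using monomial moments up to fourth order:

    ```python
    import numpy as np
    from itertools import product

    # D2Q9 discrete velocity set of the lattice Boltzmann method.
    velocities = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
                  (1, 1), (-1, 1), (-1, -1), (1, -1)]

    # Rows: monomial moments c_x^a c_y^b evaluated on the 9 velocities.
    max_order = 4
    rows, labels = [], []
    for a, b in product(range(max_order + 1), repeat=2):
        if a + b <= max_order:
            rows.append([cx**a * cy**b for cx, cy in velocities])
            labels.append((a, b))

    M = np.array(rows, dtype=float)
    rank = np.linalg.matrix_rank(M)
    print(len(labels), rank)
    ```

    Fifteen monomials of order at most four yield only rank 9: with components in {-1, 0, 1}, c_x^3 = c_x, so moments like (3, 0) coincide with (1, 0), exactly the kind of dependency absent from the continuous theory.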

  10. Modeling for Ultrasonic Health Monitoring of Foams with Embedded Sensors

    NASA Technical Reports Server (NTRS)

    Wang, L.; Rokhlin, S. I.; Rokhlin, Stanislav, I.

    2005-01-01

    In this report analytical and numerical methods are proposed to estimate the effective elastic properties of regular and random open-cell foams. The methods are based on the principle of minimum energy and on structural beam models. The analytical solutions are obtained using symbolic processing software. The microstructure of the random foam is simulated using Voronoi tessellation together with a rate-dependent random close-packing algorithm. The statistics of the geometrical properties of random foams corresponding to different packing fractions have been studied. The effects of the packing fraction on elastic properties of the foams have been investigated by decomposing the compliance into bending and axial compliance components. It is shown that the bending compliance increases and the axial compliance decreases when the packing fraction increases. Keywords: Foam; Elastic properties; Finite element; Randomness

  11. On the explicit construction of Parisi landscapes in finite dimensional Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Fyodorov, Y. V.; Bouchaud, J.-P.

    2007-12-01

    An N-dimensional Gaussian landscape with multiscale translation-invariant logarithmic correlations has been constructed, and the statistical mechanics of a single particle in this environment has been investigated. In the limit of high dimension, N → ∞, the free energy of the system in the thermodynamic limit coincides with the most general version of Derrida's generalized random energy model. The low-temperature behavior depends essentially on the spectrum of length scales involved in the construction of the landscape. The construction is argued to be valid in any finite spatial dimension N ≥ 1.

  12. Rapid magnetic reconnection caused by finite amplitude fluctuations

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Lamkin, S. L.

    1985-01-01

    The nonlinear dynamics of the magnetohydrodynamic sheet pinch have been investigated as an unforced initial value problem for large scale Reynolds numbers up to 1000. Reconnection is triggered by adding to the sheet pinch a small but finite level of broadband random perturbations. Effects of turbulence in the solutions include the production of reconnected magnetic islands at rates that are insensitive to resistivity at early times. This is explained by noting that electric field fluctuations near the X point produce irregularities in the vector potential, sometimes taking the form of 'magnetic bubbles', which allow rapid change of field topology.

  13. Epidemic Threshold in Structured Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    EguíLuz, VíCtor M.; Klemm, Konstantin

    2002-08-01

    We analyze the spreading of viruses in scale-free networks with high clustering and degree correlations, as found in the Internet graph. For the susceptible-infected-susceptible model of epidemics the prevalence undergoes a phase transition at a finite threshold of the transmission probability. Comparing with the absence of a finite threshold in networks with purely random wiring, our result suggests that high clustering (modularity) and degree correlations protect scale-free networks against the spreading of viruses. We introduce and verify a quantitative description of the epidemic threshold based on the connectivity of the neighborhoods of the hubs.

  14. Generalized epidemic process on modular networks.

    PubMed

    Chung, Kihong; Baek, Yongjoo; Kim, Daniel; Ha, Meesoon; Jeong, Hawoong

    2014-05-01

    Social reinforcement and modular structure are two salient features observed in the spreading of behavior through social contacts. In order to investigate the interplay between these two features, we study the generalized epidemic process on modular networks with equal-sized finite communities and adjustable modularity. Using the analytical approach originally applied to clique-based random networks, we show that the system exhibits a bond-percolation type continuous phase transition for weak social reinforcement, whereas a discontinuous phase transition occurs for sufficiently strong social reinforcement. Our findings are numerically verified using the finite-size scaling analysis and the crossings of the bimodality coefficient.

  15. Bi-stability resistant to fluctuations

    NASA Astrophysics Data System (ADS)

    Caruel, M.; Truskinovsky, L.

    2017-12-01

    We study a simple micro-mechanical device that does not lose its snap-through behavior in an environment dominated by fluctuations. The main idea is to have several degrees of freedom that can cooperatively resist the de-synchronizing effect of random perturbations. As an inspiration we use the power stroke machinery of skeletal muscles, which ensures at sub-micron scales and finite temperatures a swift recovery of an abruptly applied slack. In addition to hypersensitive response at finite temperatures, our prototypical Brownian snap spring also exhibits criticality at special values of parameters which is another potentially interesting property for micro-scale engineering applications.

  16. An analytic treatment of gravitational microlensing for sources of finite size at large optical depths

    NASA Technical Reports Server (NTRS)

    Deguchi, Shuji; Watson, William D.

    1988-01-01

    Statistical methods are developed for gravitational lensing in order to obtain analytic expressions for the average surface brightness that include the effects of microlensing by stellar (or other compact) masses within the lensing galaxy. The primary advance here is in utilizing a Markoff technique to obtain expressions that are valid for sources of finite size when the surface density of mass in the lensing galaxy is large. The finite size of the source is probably the key consideration for the occurrence of microlensing by individual stars. For the intensity from a particular location, the parameter which governs the importance of microlensing is determined. Statistical methods are also formulated to assess the time variation of the surface brightness due to the random motion of the masses that cause the microlensing.

  17. Nilpotent symmetries in supergroup field cosmology

    NASA Astrophysics Data System (ADS)

    Upadhyay, Sudhaker

    2015-06-01

    In this paper, we study the gauge invariance of the third quantized supergroup field cosmology which is a model for multiverse. Further, we propose both the infinitesimal (usual) as well as the finite superfield-dependent BRST symmetry transformations which leave the effective theory invariant. The effects of finite superfield-dependent BRST transformations on the path integral (so-called void functional in the case of third quantization) are implemented. Within the finite superfield-dependent BRST formulation, the finite superfield-dependent BRST transformations with specific parameter switch the void functional from one gauge to another. We establish this result for the most general gauge with the help of explicit calculations which holds for all possible sets of gauge choices at both the classical and the quantum levels.

  18. Quenched dynamics of classical isolated systems: the spherical spin model with two-body random interactions or the Neumann integrable model

    NASA Astrophysics Data System (ADS)

    Cugliandolo, Leticia F.; Lozano, Gustavo S.; Nessi, Nicolás; Picco, Marco; Tartaglia, Alessandro

    2018-06-01

    We study the Hamiltonian dynamics of the spherical spin model with fully-connected two-body random interactions. In the statistical physics framework, the potential energy is of the so-called p = 2 kind, closely linked to the scalar field theory. Most importantly for our setting, the energy conserving dynamics are equivalent to the ones of the Neumann integrable model. We take initial conditions from the Boltzmann equilibrium measure at a temperature that can be above or below the static phase transition, typical of a disordered (paramagnetic) or of an ordered (disguised ferromagnetic) equilibrium phase. We subsequently evolve the configurations with Newton dynamics dictated by a different Hamiltonian, obtained from an instantaneous global rescaling of the elements in the interaction random matrix. In the limit of infinitely many degrees of freedom, N → ∞, we identify three dynamical phases depending on the parameters that characterise the initial state and the final Hamiltonian. We next set the analysis of the system with a finite number of degrees of freedom in terms of N non-linearly coupled modes. We argue that in the large-N limit the modes decouple at long times. We evaluate the mode temperatures and we relate them to the frequency-dependent effective temperature measured with the fluctuation-dissipation relation in the frequency domain, similarly to what was recently proposed for quantum integrable cases. Finally, we analyse the N − 1 integrals of motion, notably their scaling with N, and we use them to show that the system is out of equilibrium in all phases, even for parameters that show an apparent Gibbs–Boltzmann behaviour of the global observables. We elaborate on the role played by these constants of motion after the quench and we briefly discuss the possible description of the asymptotic dynamics in terms of a generalised Gibbs ensemble.

  19. Commentary on Steinley and Brusco (2011): Recommendations and Cautions

    ERIC Educational Resources Information Center

    McLachlan, Geoffrey J.

    2011-01-01

    I discuss the recommendations and cautions in Steinley and Brusco's (2011) article on the use of finite mixture models to cluster a data set. In their article, much use is made of comparison with the "K"-means procedure. As noted by researchers for over 30 years, the "K"-means procedure can be viewed as a special case of finite mixture modeling in which…

  20. Material nonlinear analysis via mixed-iterative finite element method

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1992-01-01

    The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.

  1. Second-order numerical solution of time-dependent, first-order hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Shah, Patricia L.; Hardin, Jay

    1995-01-01

    A finite difference scheme is developed to find an approximate solution of two similar hyperbolic equations, namely a first-order plane wave and spherical wave problem. Finite difference approximations are made for both the space and time derivatives. The result is a conditionally stable equation yielding an exact solution when the Courant number is set to one.
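
    A standard example of this behavior (a generic first-order upwind sketch, not necessarily the paper's exact scheme): for the plane-wave advection equation u_t + c u_x = 0, the upwind update reduces to an exact grid shift when the Courant number equals one.

    ```python
    import numpy as np

    def upwind_advect(u0, c, dx, dt, steps):
        """First-order upwind scheme for u_t + c u_x = 0 (c > 0, periodic).
        Stable for Courant number nu = c*dt/dx <= 1; exact when nu == 1."""
        nu = c * dt / dx
        u = u0.copy()
        for _ in range(steps):
            u = u - nu * (u - np.roll(u, 1))
        return u

    n = 100
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u0 = np.exp(-200 * (x - 0.3) ** 2)    # Gaussian pulse
    dx = x[1] - x[0]
    c = 1.0
    dt = dx / c                            # Courant number exactly 1
    u = upwind_advect(u0, c, dx, dt, steps=25)

    # With nu = 1 the update is a pure shift, so the solution is exact.
    exact = np.roll(u0, 25)
    print(np.max(np.abs(u - exact)))
    ```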

  2. Anatomically Realistic Three-Dimensional Meshes of the Pelvic Floor & Anal Canal for Finite Element Analysis

    PubMed Central

    Noakes, Kimberley F.; Bissett, Ian P.; Pullan, Andrew J.; Cheng, Leo K.

    2014-01-01

    Three anatomically realistic meshes, suitable for finite element analysis, of the pelvic floor and anal canal regions have been developed to provide a framework with which to examine the mechanics, via finite element analysis of normal function within the pelvic floor. Two cadaver-based meshes were produced using the Visible Human Project (male and female) cryosection data sets, and a third mesh was produced based on MR image data from a live subject. The Visible Man (VM) mesh included 10 different pelvic structures while the Visible Woman and MRI meshes contained 14 and 13 structures respectively. Each image set was digitized and then finite element meshes were created using an iterative fitting procedure with smoothing constraints calculated from ‘L’-curves. These weights produced accurate geometric meshes of each pelvic structure with average Root Mean Square (RMS) fitting errors of less than 1.15 mm. The Visible Human cadaveric data provided high resolution images, however, the cadaveric meshes lacked the normal dynamic form of living tissue and suffered from artifacts related to postmortem changes. The lower resolution MRI mesh was able to accurately portray structure of the living subject and paves the way for dynamic, functional modeling. PMID:18317929

  3. Finite-difference modeling of the electroseismic logging in a fluid-saturated porous formation

    NASA Astrophysics Data System (ADS)

    Guan, Wei; Hu, Hengshan

    2008-05-01

    In a fluid-saturated porous medium, an electromagnetic (EM) wavefield induces an acoustic wavefield due to the electrokinetic effect. A potential geophysical application of this effect is electroseismic (ES) logging, in which the converted acoustic wavefield is received in a fluid-filled borehole to evaluate the parameters of the porous formation around the borehole. In this paper, a finite-difference scheme is proposed to model the ES logging responses to a vertical low frequency electric dipole along the borehole axis. The EM field excited by the electric dipole is calculated separately by finite-difference first, and is considered as a distributed exciting source term in a set of extended Biot's equations for the converted acoustic wavefield in the formation. This set of equations is solved by a modified finite-difference time-domain (FDTD) algorithm that allows for the calculation of dynamic permeability so that it is not restricted to low-frequency poroelastic wave problems. The perfectly matched layer (PML) technique without splitting the fields is applied to truncate the computational region. The simulated ES logging waveforms approximately agree with those obtained by the analytical method. The FDTD algorithm applies also to acoustic logging simulation in porous formations.
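
    The leapfrog time-stepping pattern underlying such simulations can be sketched in one dimension (illustrative water-like parameters; a simple elastic fluid with fixed-pressure boundaries, not the poroelastic equations or the PML of the paper):

    ```python
    import numpy as np

    # Minimal 1D staggered-grid FDTD for linear acoustics:
    #   dp/dt = -K dv/dx,   dv/dt = -(1/rho) dp/dx
    n, steps = 400, 300
    rho, K = 1000.0, 2.25e9          # density, bulk modulus (water-like)
    c = np.sqrt(K / rho)
    dx = 1.0
    dt = 0.5 * dx / c                # CFL-limited time step

    p = np.zeros(n)                  # pressure at integer grid points
    v = np.zeros(n - 1)              # velocity at half grid points
    src = n // 2
    for it in range(steps):
        v -= dt / (rho * dx) * np.diff(p)
        p[1:-1] -= dt * K / dx * np.diff(v)
        # Ricker-like source wavelet injected into the pressure field
        t = (it - 40) / 10.0
        p[src] += (1 - 2 * t**2) * np.exp(-t**2)

    print(np.max(np.abs(p)))
    ```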

  4. Flat bases of invariant polynomials and P-matrices of E{sub 7} and E{sub 8}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamini, Vittorino

    2010-02-15

    Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C{sup {infinity}} functions can be expressed as C{sup {infinity}} functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except the two largest groups E{sub 7} and E{sub 8}. In this paper the flat basic sets of invariant homogeneous polynomials of E{sub 7} and E{sub 8} and the corresponding P-matrices are determined explicitly. Using the results reported here, one can easily determine the P-matrices corresponding to any other integrity basis of E{sub 7} or E{sub 8}. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E{sub 7} and E{sub 8} relative to a flat basis or to any other integrity basis. The results obtained here may be employed concretely to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E{sub 7} and E{sub 8} or one of the Lie groups E{sub 7} and E{sub 8} in their adjoint representations.

  5. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

  6. Infinity Computer and Calculus

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.

    2007-09-01

    Traditional computers work with finite numbers. Situations where the usage of infinite or infinitesimal quantities is required are studied mainly theoretically. In this survey talk, a new computational methodology (not related to nonstandard analysis) is described. It is based on the principle 'The part is less than the whole' applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols as particular cases of a unique framework. The new methodology allows us to introduce the Infinity Computer, which works with all these numbers (its simulator is presented during the lecture). The new computational paradigm both makes it possible to execute computations of a new type and simplifies fields of mathematics where infinity and/or infinitesimals are encountered. Numerous examples of the usage of the introduced computational tools are given during the lecture.

  7. Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling

    NASA Astrophysics Data System (ADS)

    Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo

    2017-06-01

    To address the low efficiency and poor mesh quality of gear meshing in current mainstream finite element software, a universal three-dimensional gear model is established and the rules of element and node arrangement are explored. In this paper, a parameterization-based method for generating finite element models of universal gears is proposed. A Visual Basic program performs the finite element meshing, assigns material properties, and sets the boundary and load conditions and other pre-processing steps. The dynamic meshing analysis of the gears is carried out with the proposed method and compared with calculated values to verify its correctness. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new approach for FEM pre-processing.

  8. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), for many different D, C, and S. These problem variants include decision and optimization problems for formulas, quantified formulas, and stochastically-quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S), and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2, and let S be a finite set of finite-arity relations on Σ_k such that all finite-arity relations on Σ_k can be represented as finite existentially-quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤_logn^bw-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete; letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard; letting k = 2, there exists ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend the earlier results of (i) Papadimitriou [Pa85] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund, and Shor [CF+93, CF+94], by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations; most previous results were obtained only in the context of Boolean domains. The results also constitute a significant step towards obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S): a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].

  9. Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.

    PubMed

    Forti, Mauro; Nistri, Paolo; Quincampoix, Marc

    2006-11-01

    This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which enable the implementation of an exact penalty method. A new method is exploited to address convergence of trajectories, based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length and, as a consequence, converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.

  10. A simple level set method for solving Stefan problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Merriman, B.; Osher, S.

    1997-07-15

    This paper discusses an implicit finite difference scheme for solving the heat equation together with a simple level set method for capturing the interface between the solid and liquid phases; the two are combined to solve Stefan problems.

  11. Listing triangles in expected linear time on a class of power law graphs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann

    Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis of the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
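
    Cohen's bucketing scheme described above can be sketched in a few lines of Python. This is an illustrative reimplementation under simple assumptions (an undirected simple graph given as an edge list), not the code analyzed in the record:

```python
from collections import defaultdict
from itertools import combinations

def cohen_triangles(edges):
    """List triangles via Cohen's bucket algorithm: each edge is stored at
    its endpoint of lower degree (ties broken by node id), and each node
    tests pairs of edges in its bucket for the edge closing a triangle."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    bucket = defaultdict(list)
    for u, v in edges:
        # rank endpoints by (degree, id) so that ties break consistently
        owner, other = (u, v) if (deg[u], u) < (deg[v], v) else (v, u)
        bucket[owner].append(other)
    triangles = set()
    for node, nbrs in bucket.items():
        for a, b in combinations(nbrs, 2):
            if b in adj[a]:  # third edge present: triangle found
                triangles.add(tuple(sorted((node, a, b))))
    return triangles

# the complete graph K4 contains exactly 4 triangles
print(len(cohen_triangles([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])))  # 4
```

    The expected-linear-time analysis concerns exactly the inner loop over pairs in each bucket: bounding the expected bucket sizes bounds the total work.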

  12. Random pure states: Quantifying bipartite entanglement beyond the linear statistics.

    PubMed

    Vivo, Pierpaolo; Pato, Mauricio P; Oshanin, Gleb

    2016-05-01

    We analyze the properties of entangled random pure states of a quantum system partitioned into two smaller subsystems of dimensions N and M. Framing the problem in terms of random matrices with a fixed-trace constraint, we establish, for arbitrary N ≤ M, a general relation between the n-point densities and the cross moments of the eigenvalues of the reduced density matrix, i.e., the so-called Schmidt eigenvalues, and the analogous functionals of the eigenvalues of the Wishart-Laguerre ensemble of random matrix theory. This allows us to derive explicit expressions for two-level densities, and also an exact expression for the variance of the von Neumann entropy at finite N, M. Then, we focus on the moments E[K^a] of the Schmidt number K, the reciprocal of the purity. This is a random variable supported on [1, N], which quantifies the number of degrees of freedom effectively contributing to the entanglement. We derive a wealth of analytical results for E[K^a] for N = 2 and 3 and arbitrary M, and also for square N = M systems by spotting for the latter a connection with the probability P(x_min^GUE ≥ √(2N)ξ) that the smallest eigenvalue x_min^GUE of an N×N matrix belonging to the Gaussian unitary ensemble is larger than √(2N)ξ. As a by-product, we present an exact asymptotic expansion for P(x_min^GUE ≥ √(2N)ξ) for finite N as ξ→∞. Our results are corroborated by numerical simulations whenever possible, with excellent agreement.
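
    The quantities discussed here are easy to probe numerically. The sketch below (an illustration under standard conventions, not the authors' method) samples a random bipartite pure state, computes the Schmidt eigenvalues of the reduced density matrix, and evaluates the Schmidt number K as the reciprocal purity, which always lies in [1, N]:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 5  # subsystem dimensions, N <= M

# random pure state: normalized complex Gaussian N*M vector (uniform on the sphere)
psi = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
psi /= np.linalg.norm(psi)

rho = psi @ psi.conj().T                 # reduced density matrix of the N-dim part
schmidt_eigs = np.linalg.eigvalsh(rho)   # Schmidt eigenvalues, summing to 1
K = 1.0 / np.trace(rho @ rho).real       # Schmidt number = reciprocal purity

print(np.isclose(schmidt_eigs.sum(), 1.0), 1.0 <= K <= N)  # True True
```

    Averaging K (or its powers) over many such draws gives Monte Carlo estimates of the moments E[K^a] studied in the record.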

  13. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation, which maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.

  14. Analytic Regularity and Polynomial Approximation of Parametric and Stochastic Elliptic PDEs

    DTIC Science & Technology

    2010-05-31

    Todor: Finite elements for elliptic problems with stochastic coefficients Comp. Meth. Appl. Mech. Engg. 194 (2005) 205-228. [14] R. Ghanem and P. Spanos...for elliptic partial differential equations with random input data SIAM J. Num. Anal. 46(2008), 2411-2442. [20] R. Todor, Robust eigenvalue computation...for smoothing operators, SIAM J. Num. Anal. 44(2006), 865-878. [21] Ch. Schwab and R.A. Todor, Karhunen-Loève Approximation of Random Fields by

  15. Coordinated Search for a Random Walk Target Motion

    NASA Astrophysics Data System (ADS)

    El-Hadidy, Mohamed Abd Allah; Abou-Gabal, Hamdy M.

    This paper presents the cooperation between two searchers starting at the origin to find a random walk moving target on the real line. No information about the target's position is available at any time. In addition to finding the conditions under which the expected value of the first meeting time between one of the searchers and the target is finite, we show the existence of an optimal search strategy which minimizes this first meeting time. The effectiveness of this model is illustrated using a numerical example.

  16. Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers

    NASA Astrophysics Data System (ADS)

    Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos

    2017-01-01

    We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γCPA and energy ECPA, for which a CPA occurs, are expressed in terms of the eigenmodes of the isolated cavity—thus carrying over the information about the chaotic nature of the target—and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.

  17. Pretest 3-D finite element modeling of the wedge pillar portion of the WIPP (Waste Isolation Pilot Plant) Geomechanical Evaluation (Room G) in situ experiment. [Waste Isolation Pilot Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preece, D.S.

    Pretest 3-D finite element calculations have been performed on the wedge pillar portion of the WIPP Geomechanical Evaluation Experiment. The wedge pillar separates two drifts that intersect at an angle of 7.5°. The purpose of the experiment is to provide data on the creep behavior of the wedge and progressive failure at the tip. The first set of calculations utilized a symmetry plane on the center-line of the wedge, which allowed treatment of the entire configuration by modeling half of the geometry. Two 3-D calculations in this first set were performed with different drift widths to study the influence of drift size on closure and maximum stress. A cross-section perpendicular to the wedge was also analyzed with 2-D finite element models and the results compared to the 3-D results. In another set of 3-D calculations both drifts were modeled, but with less distance between the drifts and the outer boundaries. Results of these calculations are compared with results from the other calculations to better understand the influence of boundary conditions.

  18. Modular Extensions of Unitary Braided Fusion Categories and 2+1D Topological/SPT Orders with Symmetries

    NASA Astrophysics Data System (ADS)

    Lan, Tian; Kong, Liang; Wen, Xiao-Gang

    2017-04-01

    A finite bosonic or fermionic symmetry can be described uniquely by a symmetric fusion category E. In this work, we propose that 2+1D topological/SPT orders with a fixed finite symmetry E are classified, up to E_8 quantum Hall states, by the unitary modular tensor categories C over E and the modular extensions of each C. In the case C = E, we prove that the set M_ext(E) of all modular extensions of E has a natural structure of a finite abelian group. We also prove that the set M_ext(C) of all modular extensions of C, if not empty, is equipped with a natural M_ext(E)-action that is free and transitive; namely, the set M_ext(C) is an M_ext(E)-torsor. As special cases, we explain in detail how the group M_ext(E) recovers the well-known group-cohomology classification of the 2+1D bosonic SPT orders and Kitaev's 16-fold way. We also discuss briefly the behavior of the group M_ext(E) under symmetry-breaking processes and its relation to Witt groups.

  19. Bistatic scattering from a three-dimensional object above a two-dimensional randomly rough surface modeled with the parallel FDTD approach.

    PubMed

    Guo, L-X; Li, J; Zeng, H

    2009-11-01

    We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus scattering and azimuthal angles are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.

  20. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
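
    As a hedged toy version of the inference problem described above (two Poisson rates instead of the three in the record, and invented parameter values), one can place a flat grid prior on the coin bias p and compute its posterior from counts drawn from a two-component Poisson mixture:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
a, b, p_true = 30.0, 10.0, 0.7   # "bright"/"dark" photon rates and coin bias (made up)

# each shot emits Poisson(a) photons with probability p_true, else Poisson(b)
states = rng.random(2000) < p_true
counts = rng.poisson(np.where(states, a, b))

# grid posterior over the coin bias p under a flat prior
p_grid = np.linspace(0.0, 1.0, 501)
log_like = np.sum(
    np.log(p_grid[:, None] * poisson.pmf(counts, a)
           + (1.0 - p_grid[:, None]) * poisson.pmf(counts, b)),
    axis=1,
)
post = np.exp(log_like - log_like.max())
post /= post.sum()
p_bayes = float(np.sum(p_grid * post))   # Bayes (posterior-mean) estimate of p
```

    With rates this well separated the posterior mean lands close to p_true; the record's point is that such estimators also come with principled, propagatable error bars.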

  1. The Grand Tour via Geodesic Interpolation of 2-frames

    NASA Technical Reports Server (NTRS)

    Asimov, Daniel; Buja, Andreas

    1994-01-01

    Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
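
    The uniformly distributed planes mentioned above are straightforward to sample: orthonormalizing a Gaussian n×2 matrix gives a uniformly random 2-frame whose columns serve as the horizontal and vertical projection vectors. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def random_2frame(n, rng):
    """Uniformly random pair of orthonormal projection vectors in R^n:
    the Q factor of a Gaussian matrix is Haar-distributed."""
    q, _ = np.linalg.qr(rng.normal(size=(n, 2)))
    return q  # n x 2 with orthonormal columns

rng = np.random.default_rng(42)
frame = random_2frame(5, rng)
data = rng.normal(size=(100, 5))   # 100 data points in 5-space
screen = data @ frame              # their 2-D projection for one tour view
print(np.allclose(frame.T @ frame, np.eye(2)))  # True
```

    A grand tour then interpolates geodesically between successive random frames rather than jumping between them.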

  2. Causal mediation analysis with multiple mediators in the presence of treatment noncompliance.

    PubMed

    Park, Soojin; Kürüm, Esra

    2018-05-20

    Randomized experiments are often complicated because of treatment noncompliance. This challenge prevents researchers from identifying the mediated portion of the intention-to-treat (ITT) effect, which is the effect of the assigned treatment that is attributed to a mediator. One solution suggests identifying the mediated ITT effect on the basis of the average causal mediation effect among compliers when there is a single mediator. However, considering the complex nature of the mediating mechanisms, it is natural to assume that there are multiple variables that mediate through the causal path. Motivated by an empirical analysis of a data set collected in a randomized interventional study, we develop a method to estimate the mediated portion of the ITT effect when both multiple dependent mediators and treatment noncompliance exist. This enables researchers to make an informed decision on how to strengthen the intervention effect by identifying relevant mediators despite treatment noncompliance. We propose a nonparametric estimation procedure and provide a sensitivity analysis for key assumptions. We conduct a Monte Carlo simulation study to assess the finite sample performance of the proposed approach. The proposed method is illustrated by an empirical analysis of JOBS II data, in which a job training intervention was used to prevent mental health deterioration among unemployed individuals. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Finite-time and fixed-time synchronization analysis of inertial memristive neural networks with time-varying delays.

    PubMed

    Wei, Ruoyu; Cao, Jinde; Alsaedi, Ahmed

    2018-02-01

    This paper investigates the finite-time synchronization and fixed-time synchronization problems of inertial memristive neural networks with time-varying delays. By utilizing the Filippov discontinuous theory and Lyapunov stability theory, several sufficient conditions are derived to ensure finite-time synchronization of inertial memristive neural networks. Then, to make the settling time independent of the initial condition, we consider fixed-time synchronization. A novel criterion guaranteeing the fixed-time synchronization of inertial memristive neural networks is derived. Finally, three examples are provided to demonstrate the effectiveness of our main results.

  4. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
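
    The Kennard-Stone procedure named above can be sketched compactly (a simple O(n²)-memory version for illustration; the paper's implementation details may differ):

```python
import numpy as np

def kennard_stone(X, n_train):
    """Pick n_train diverse training rows of X: seed with the two most
    distant points, then repeatedly add the candidate whose distance to
    its nearest already-selected point is largest."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(D), D.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # distance from each candidate to its nearest already-selected point
        dmin = D[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(dmin))]
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

X = np.arange(10, dtype=float).reshape(-1, 1)  # ten points on a line
print(kennard_stone(X, 4))  # [0, 9, 4, 2]
```

    By construction the held-out points are interpolated by the training set, which is one reason rational splits can yield flattering test-set statistics without changing true external predictivity.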

  5. Cantorian Set Theory and Teaching Prospective Teachers

    ERIC Educational Resources Information Center

    Narli, Serkan; Baser, Nes'e

    2008-01-01

    Infinity has contradictions arising from its nature. Since mind is actually adapted to finite realities attained by behaviors in space and time, when one starts to deal with real infinity, contradictions will arise. In particular, Cantorian Set Theory for it involves the notion of "equivalence of a set to one of its proper subsets,"…

  7. Cavity master equation for the continuous time dynamics of discrete-spin models.

    PubMed

    Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  9. Response of moderately thick laminated cross-ply composite shells subjected to random excitation

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaak; Cederbaum, Gabriel; Librescu, Liviu

    1989-01-01

    This study deals with the dynamic response of transverse shear deformable laminated shells subjected to random excitation. The analysis encompasses the following problems: (1) the dynamic response of circular cylindrical shells of finite length excited by an axisymmetric uniform ring loading, stationary in time, and (2) the response of spherical and cylindrical panels subjected to stationary random loadings with uniform spatial distribution. The associated equations governing the structural theory of shells are derived upon discarding the classical Love-Kirchhoff (L-K) assumptions. In this sense, the theory is formulated in the framework of the first-order transverse shear deformation theory (FSDT).

  10. Numerical approach for finite volume three-body interaction

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Gasparian, Vladimir

    2018-01-01

    In the present work, we study a numerical approach to the one-dimensional finite volume three-body interaction. The method is demonstrated by considering a toy model of three spinless particles interacting with pair-wise δ-function potentials. The numerical results are compared with the exact solutions for three interacting spinless bosons when the strengths of the short-range interactions are set equal for all pairs.

  11. Symbolic Dynamics and Grammatical Complexity

    NASA Astrophysics Data System (ADS)

    Hao, Bai-Lin; Zheng, Wei-Mou

    The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results

  12. The accuracy of the Gaussian-and-finite-element-Coulomb (GFC) method for the calculation of Coulomb integrals.

    PubMed

    Przybytek, Michal; Helgaker, Trygve

    2013-08-07

    We analyze the accuracy of the Coulomb energy calculated using the Gaussian-and-finite-element-Coulomb (GFC) method. In this approach, the electrostatic potential associated with the molecular electronic density is obtained by solving the Poisson equation and then used to calculate matrix elements of the Coulomb operator. The molecular electrostatic potential is expanded in a mixed Gaussian-finite-element (GF) basis set consisting of Gaussian functions of s symmetry centered on the nuclei (with exponents obtained from a full optimization of the atomic potentials generated by the atomic densities from symmetry-averaged restricted open-shell Hartree-Fock theory) and shape functions defined on uniform finite elements. The quality of the GF basis is controlled by means of a small set of parameters; for a given width of the finite elements d, the highest accuracy is achieved at smallest computational cost when tricubic (n = 3) elements are used in combination with two (γ_H = 2) and eight (γ_1st = 8) Gaussians on hydrogen and first-row atoms, respectively, with exponents greater than a given threshold (α_min^G = 0.5). The error in the calculated Coulomb energy divided by the number of atoms in the system depends on the system type but is independent of the system size or the orbital basis set, vanishing approximately like d^4 with decreasing d. If the boundary conditions for the Poisson equation are calculated in an approximate way, the GFC method may lose its variational character when the finite elements are too small; with larger elements, it is less sensitive to inaccuracies in the boundary values. As it is possible to obtain accurate boundary conditions in linear time, the overall scaling of the GFC method for large systems is governed by another computational step, namely, the generation of the three-center overlap integrals with three Gaussian orbitals. The most unfavorable (nearly quadratic) scaling is observed for compact, truly three-dimensional systems; however, this scaling can be reduced to linear by introducing more effective techniques for recognizing significant three-center overlap distributions.

  13. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
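
    A hedged sketch of the update rule described above (toy parameters; random-order sweeps over a small periodic lattice stand in for fully asynchronous updates):

```python
import numpy as np

def sweep(state, theta, rng, p_mix=0.1):
    """One pass of threshold dynamics on an n x n periodic lattice: a cell
    adopts iff the active fraction of its 4 neighbors meets its threshold;
    after each update, with probability p_mix two randomly chosen
    individuals (state and threshold together) are exchanged (mixing)."""
    n = state.shape[0]
    for c in rng.permutation(n * n):
        i, k = divmod(int(c), n)
        frac = (state[(i - 1) % n, k] + state[(i + 1) % n, k]
                + state[i, (k - 1) % n] + state[i, (k + 1) % n]) / 4.0
        state[i, k] = 1 if frac >= theta[i, k] else 0
        if rng.random() < p_mix:
            a, b = (int(x) for x in rng.integers(0, n * n, size=2))
            i1, k1 = divmod(a, n)
            i2, k2 = divmod(b, n)
            state[i1, k1], state[i2, k2] = state[i2, k2], state[i1, k1]
            theta[i1, k1], theta[i2, k2] = theta[i2, k2], theta[i1, k1]

rng = np.random.default_rng(7)
n = 20
state = rng.integers(0, 2, size=(n, n))     # random initial behaviors
theta = rng.uniform(0.0, 1.0, size=(n, n))  # fixed personal thresholds
for _ in range(30):
    sweep(state, theta, rng)
print(state.mean())  # fraction of adopters after the sweeps
```

    Varying p_mix interpolates between the pure lattice model and the fast-mixing mean-field limit that the paper treats analytically.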

  14. Calculating the Malliavin derivative of some stochastic mechanics problems

    PubMed Central

    Hauseux, Paul; Hale, Jack S.

    2017-01-01

    The Malliavin calculus is an extension of the classical calculus of variations from deterministic functions to stochastic processes. In this paper we aim to show in a practical and didactic way how to calculate the Malliavin derivative, the derivative of the expectation of a quantity of interest of a model with respect to its underlying stochastic parameters, for four problems found in mechanics. The non-intrusive approach uses the Malliavin Weight Sampling (MWS) method in conjunction with a standard Monte Carlo method. The models are expressed as ODEs or PDEs and discretised using the finite difference or finite element methods. Specifically, we consider stochastic extensions of: a 1D Kelvin-Voigt viscoelastic model discretised with finite differences; a 1D linear elastic bar; a hyperelastic bar undergoing buckling; and incompressible Navier-Stokes flow around a cylinder, all discretised with finite elements. A further contribution of this paper is an extension of the MWS method to the more difficult case of non-Gaussian random variables and the calculation of second-order derivatives. We provide open-source code for the numerical examples in this paper. PMID:29261776
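As a minimal illustration of Malliavin weight sampling (not the authors' code, and far simpler than their ODE/PDE examples), the derivative of an expectation with respect to the mean of a Gaussian input can be estimated by attaching the weight (X − μ)/σ² to ordinary Monte Carlo samples:

```python
import random

def mws_derivative(f, mu, sigma, n=200000, seed=7):
    """Estimate d/dmu E[f(X)] for X ~ N(mu, sigma^2) by weighting each
    Monte Carlo sample with the Malliavin (likelihood-ratio) weight
    (X - mu)/sigma^2, instead of differentiating f."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        total += f(x) * (x - mu) / sigma**2   # weighted sample
    return total / n

# E[X^2] = mu^2 + sigma^2, so d/dmu E[X^2] = 2*mu; with mu = 1 expect ~2
est = mws_derivative(lambda x: x * x, mu=1.0, sigma=1.0)
```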

  15. [Study on the effect of vertebrae semi-dislocation on the stress distribution in facet joint and intervertebral disc of patients with cervical syndrome based on the three dimensional finite element model].

    PubMed

    Zhang, Ming-cai; Lü, Si-zhe; Cheng, Ying-wu; Gu, Li-xu; Zhan, Hong-sheng; Shi, Yin-yu; Wang, Xiang; Huang, Shi-rong

    2011-02-01

    To study the effect of vertebra semi-dislocation on the stress distribution in the facet joints and intervertebral discs of patients with cervical syndrome using a three dimensional finite element model. A patient with cervical spondylosis (male, 28 years old), diagnosed with cervical vertebra semi-dislocation by dynamic and static palpation and X-ray, was randomly chosen and scanned from C1 to C7 by CT at 0.75 mm slice thickness. Based on the CT data, software was used to construct a three dimensional finite element model of cervical vertebra semi-dislocation (C4-C6). Based on the model, virtual manipulation was used to correct the vertebra semi-dislocation, and the stress distribution was analyzed. The finite element analysis showed that the stress distribution of the C5-6 facet joint and intervertebral disc changed after virtual manipulation. Vertebra semi-dislocation leads to an abnormal stress distribution in the facet joint and intervertebral disc.

  16. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

    The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically, Gaussian distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
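The work pdf referred to here is built on the standard two-point-measurement scheme; as a reference point, the generic form for a sudden quench from H_i to H_f (before the paper's averaging over GUE Hamiltonians) reads:

```latex
% Two-point measurement scheme: measure E_n^i in the initial state,
% quench instantaneously from H_i to H_f, then measure E_m^f.
P(w) \;=\; \sum_{n,m} p_n \,\bigl|\langle m_f \mid n_i \rangle\bigr|^2 \,
           \delta\!\bigl(w - (E_m^f - E_n^i)\bigr),
\qquad
p_n \;=\; \frac{e^{-\beta E_n^i}}{Z_i}.
```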

  17. A New Source Biasing Approach in ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevill, Aaron M; Mosher, Scott W

    2012-01-01

    The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. 
A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
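The 'unfair game' and its w̄ correction can be illustrated with a hypothetical one-dimensional source; the geometry, bias density, and tally below are invented for the sketch. Biasing plus cell rejection leaves the mean started weight different from one, and dividing tallies by the estimated w̄ restores the correct answer.

```python
import math
import random

def biased_rejection_tally(n=200000, seed=3):
    """Hypothetical setup: bounding region [0, 1], cell [0, 0.5], bias
    density b(x) = 2 - 2x, tally f(x) = x. Weight is set to 1/b(x) without
    accounting for the rejection step, so the raw per-history tally is off
    by the mean history weight w_bar."""
    rng = random.Random(seed)
    wsum = tally = 0.0
    for _ in range(n):
        while True:
            x = 1.0 - math.sqrt(1.0 - rng.random())  # sample x ~ b(x) by inversion
            if x < 0.5:        # cell rejection: resample until inside the cell
                break
        w = 1.0 / (2.0 - 2.0 * x)   # analog weight, ignoring the rejection
        wsum += w
        tally += w * x
    w_bar = wsum / n                # estimated mean history weight (here 2/3)
    return tally / n, (tally / n) / w_bar

raw, corrected = biased_rejection_tally()
# true source-averaged position over the cell [0, 0.5] is 0.25;
# the raw tally converges to 1/6 instead
```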

  18. NASTRAN computer system level 12.1

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1971-01-01

    Program uses finite element displacement method for solving linear response of large, three-dimensional structures subject to static, dynamic, thermal, and random loadings. Program adapts to computers of different manufacture, permits updating and extension, allows interchange of output and input information between users, and is extensively documented.

  19. Numerical simulation of rarefied gas flow through a slit

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong

    1990-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are for hard vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, three kinds of collision sampling techniques, the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method, are used.

  20. Binary tree eigen solver in finite element analysis

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.

    1993-01-01

    This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of a number of associative operations on an arbitrary set having N elements takes on the order of O(log2 N) steps, compared to (N-1) steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, which is a high-level language developed with the transputers to address parallel programming constructs and to provide the communications between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of the least-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
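Recursive doubling itself is easy to sketch: an associative reduction over N values finishes in ceil(log2 N) parallel rounds (simulated sequentially here) instead of N − 1 sequential steps.

```python
def tree_reduce(values, op):
    """Recursive-doubling reduction: combine adjacent pairs in rounds, so an
    associative op over N values needs ceil(log2 N) rounds; each round's
    combinations are independent and could run in parallel (here they are
    simulated sequentially)."""
    level, rounds = list(values), 0
    while len(level) > 1:
        nxt = [op(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd element passes through unchanged
            nxt.append(level[-1])
        level, rounds = nxt, rounds + 1
    return level[0], rounds

total, rounds = tree_reduce(range(1, 33), lambda a, b: a + b)
# 32 values: sum 528 in 5 rounds rather than 31 sequential steps
```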

  1. Analysis of turbulent free-jet hydrogen-air diffusion flames with finite chemical reaction rates

    NASA Technical Reports Server (NTRS)

    Sislian, J. P.; Glass, I. I.; Evans, J. S.

    1979-01-01

    A numerical analysis is presented of the nonequilibrium flow field resulting from the turbulent mixing and combustion of an axisymmetric hydrogen jet in a supersonic parallel ambient air stream. The effective turbulent transport properties are determined by means of a two-equation model of turbulence. The finite-rate chemistry model considers eight elementary reactions among six chemical species: H, O, H2O, OH, O2 and H2. The governing set of nonlinear partial differential equations was solved by using an implicit finite-difference procedure. Radial distributions were obtained at two downstream locations for some important variables affecting the flow development, such as the turbulent kinetic energy and its dissipation rate. The results show that these variables attain their peak values on the axis of symmetry. The computed distribution of velocity, temperature, and mass fractions of the chemical species gives a complete description of the flow field. The numerical predictions were compared with two sets of experimental data. Good qualitative agreement was obtained.

  2. Numerical analysis for finite-range multitype stochastic contact financial market dynamic systems

    NASA Astrophysics Data System (ADS)

    Yang, Ge; Wang, Jun; Fang, Wen

    2015-04-01

    In an attempt to reproduce and study the dynamics of financial markets, a random agent-based financial price model is developed and investigated by the finite-range multitype contact dynamic system, in which the interaction and dispersal of different types of investment attitudes in a stock market are imitated by viruses spreading. With different parameters of birth rates and finite-range, the normalized return series are simulated by the Monte Carlo simulation method and studied numerically by power-law distribution analysis and autocorrelation analysis. To better understand the nonlinear dynamics of the return series, a q-order autocorrelation function and a multi-autocorrelation function are also defined in this work. The comparisons of statistical behaviors of return series from the agent-based model and the daily historical market returns of Shanghai Composite Index and Shenzhen Component Index indicate that the proposed model is a reasonable qualitative explanation for the price formation process of stock market systems.

  3. Realistic finite temperature simulations of magnetic systems using quantum statistics

    NASA Astrophysics Data System (ADS)

    Bergqvist, Lars; Bergman, Anders

    2018-01-01

    We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to classical (Boltzmann) statistics normally used in these kinds of simulations, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co random alloys and the ferrimagnetic system GdFe3.
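The key difference between the two statistics can be seen directly from the magnon occupation numbers; writing x = ħω/k_BT, the Bose-Einstein occupation approaches the classical limit 1/x at high temperature but is strongly suppressed at low temperature:

```python
import math

def bose_einstein(x):
    """Quantum magnon occupation n(x), with x = hbar*omega / (kB*T)."""
    return 1.0 / math.expm1(x)

def classical(x):
    """Classical (Boltzmann/Rayleigh-Jeans) limit, kB*T / (hbar*omega)."""
    return 1.0 / x

# high temperature (x -> 0): the two statistics agree to within ~x/2
rel_diff_hot = abs(bose_einstein(0.01) - classical(0.01)) / classical(0.01)
# low temperature (x large): classical statistics grossly overpopulate the mode
ratio_cold = classical(5.0) / bose_einstein(5.0)
```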

  4. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, and not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.

  5. A unique set of micromechanics equations for high temperature metal matrix composites

    NASA Technical Reports Server (NTRS)

    Hopkins, D. A.; Chamis, C. C.

    1985-01-01

    A unique set of micromechanics equations is presented for high temperature metal matrix composites. The set includes expressions to predict mechanical properties, thermal properties and constituent microstresses for the unidirectional fiber reinforced ply. The equations are derived based on a mechanics of materials formulation assuming a square array unit cell model of a single fiber, surrounding matrix and an interphase to account for the chemical reaction which commonly occurs between fiber and matrix. A three-dimensional finite element analysis was used to perform a preliminary validation of the equations. Excellent agreement between properties predicted using the micromechanics equations and properties simulated by the finite element analyses is demonstrated. Implementation of the micromechanics equations as part of an integrated computational capability for nonlinear structural analysis of high temperature multilayered fiber composites is illustrated.

  6. Embedded random matrix ensembles from nuclear structure and their recent applications

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.; Chavda, N. D.

    Embedded random matrix ensembles generated by random interactions (of low body rank and usually two-body) in the presence of a one-body mean field, introduced in nuclear structure physics, are now established to be indispensable in describing statistical properties of a large number of isolated finite quantum many-particle systems. Lie algebra symmetries of the interactions, as identified from the nuclear shell model and the interacting boson model, led to the introduction of a variety of embedded ensembles (EEs). These ensembles, with a mean field and a chaos-generating two-body interaction, generate, in three different stages, delocalization of wave functions in the Fock space of the mean-field basis states. The last stage corresponds to what one may call thermalization, and complex nuclei, as seen from many shell model calculations, lie in this region. Besides briefly describing them, their recent applications to nuclear structure are presented and they are (i) nuclear level densities with interactions; (ii) orbit occupancies; (iii) neutrinoless double beta decay nuclear transition matrix elements as transition strengths. In addition, their applications are also presented briefly that go beyond nuclear structure and they are (i) fidelity, decoherence, entanglement and thermalization in isolated finite quantum systems with interactions; (ii) quantum transport in disordered networks connected by many-body interactions with centrosymmetry; (iii) semicircle to Gaussian transition in eigenvalue densities with k-body random interactions and its relation to the Sachdev-Ye-Kitaev (SYK) model for Majorana fermions.

  7. The role of fanatics in consensus formation

    NASA Astrophysics Data System (ADS)

    Gündüç, Semra

    2015-08-01

    A model of opinion dynamics with two types of agents as social actors is presented, using the Ising thermodynamic model as the dynamics template. The agents are considered as opportunists, which live at sites and interact with their neighbors, or fanatics/missionaries, which move from site to site randomly in pursuit of converting agents of the opposite opinion with the help of opportunists. Here, the moving agents act as an external influence on the opportunists to convert them to the opposite opinion. It is shown by numerical simulations that such dynamics of opinion formation may explain some details of consensus formation even when one of the opinions is held by a minority. Regardless of the distribution of opinion, societies of different sizes exhibit different opinion formation behavior and time scales. In order to understand the general behavior, the scaling relations obtained by comparing opinion formation processes observed in societies with varying population and number of randomly moving agents are studied. For the proposed model two types of scaling relations are observed. In fixed-size societies, increasing the number of randomly moving agents gives a scaling relation for the time scale of the opinion formation process. The second type of scaling relation is due to the size-dependent information propagation in finite but large systems, namely finite-size scaling.

  8. Solvable continuous-time random walk model of the motion of tracer particles through porous media.

    PubMed

    Fouxon, Itzhak; Holzner, Markus

    2016-08-01

    We consider the continuous-time random walk (CTRW) model of tracer motion in porous medium flows based on the experimentally determined distributions of pore velocity and pore size reported by Holzner et al. [M. Holzner et al., Phys. Rev. E 92, 013015 (2015)PLEEE81539-375510.1103/PhysRevE.92.013015]. The particle's passing through one channel is modeled as one step of the walk. The step (channel) length is random and the walker's velocity at consecutive steps of the walk is conserved with finite probability, mimicking that at the turning point there could be no abrupt change of velocity. We provide the Laplace transform of the characteristic function of the walker's position and reductions for different cases of independence of the CTRW's step duration τ, length l, and velocity v. We solve our model with independent l and v. The model incorporates different forms of the tail of the probability density of small velocities that vary with the model parameter α. Depending on that parameter, all types of anomalous diffusion can hold, from super- to subdiffusion. In a finite interval of α, ballistic behavior with logarithmic corrections holds, which was observed in a previously introduced CTRW model with independent l and τ. Universality of tracer diffusion in the porous medium is considered.
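A toy one-dimensional version of this walk can be simulated channel by channel; the parameter names and the small-velocity law |v| = u^(1/α) (which gives a speed density proportional to v^(α−1)) are illustrative reconstructions, not the paper's exact specification.

```python
import random

def ctrw_trajectory(n_steps=1000, p_keep=0.5, alpha=2.0, seed=11):
    """Sketch of the channel-by-channel CTRW: each step has a random length,
    the signed velocity is kept from the previous step with probability
    p_keep (otherwise redrawn, as at a turning point), and the step
    duration is tau = length / |v|."""
    rng = random.Random(seed)

    def draw_velocity():
        # |v| = u**(1/alpha), u uniform in (0, 1]: tunable small-velocity tail
        return rng.choice([-1.0, 1.0]) * (1.0 - rng.random()) ** (1.0 / alpha)

    x = t = 0.0
    v = draw_velocity()
    times, positions = [0.0], [0.0]
    for _ in range(n_steps):
        if rng.random() > p_keep:       # velocity memory lost at the junction
            v = draw_velocity()
        length = rng.random()           # random channel (step) length
        t += length / abs(v)            # step duration tau = length / |v|
        x += length if v > 0 else -length
        times.append(t)
        positions.append(x)
    return times, positions

times, positions = ctrw_trajectory()
```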

  9. The SPAR thermal analyzer: Present and future

    NASA Astrophysics Data System (ADS)

    Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.

    The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.

  10. The SPAR thermal analyzer: Present and future

    NASA Technical Reports Server (NTRS)

    Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.

    1982-01-01

    The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.

  11. Noise-Driven Phenotypic Heterogeneity with Finite Correlation Time in Clonal Populations.

    PubMed

    Lee, UnJin; Skinner, John J; Reinitz, John; Rosner, Marsha Rich; Kim, Eun-Jin

    2015-01-01

    There has been increasing awareness in the wider biological community of the key role of clonal phenotypic heterogeneity in phenomena such as cellular bet-hedging and decision making, as in the case of the phage-λ lysis/lysogeny and B. subtilis competence/vegetative pathways. Here, we report on the effect of stochasticity in growth rate, cellular memory/intermittency, and its relation to phenotypic heterogeneity. We first present a linear stochastic differential model with finite auto-correlation time, where a randomly fluctuating growth rate with a negative average is shown to result in exponential growth for sufficiently large fluctuations in growth rate. We then present a non-linear stochastic self-regulation model where the loss of coherent self-regulation and an increase in noise can induce a shift from bounded to unbounded growth. An important consequence of these models is that while the average change in phenotype may not differ for various parameter sets, the variance of the resulting distributions may considerably change. This demonstrates the necessity of understanding the influence of variance and heterogeneity within seemingly identical clonal populations, while providing a mechanism for varying functional consequences of such heterogeneity. Our results highlight the importance of a paradigm shift from a deterministic to a probabilistic view of clonality in understanding selection as an optimization problem on noise-driven processes, resulting in a wide range of biological implications, from robustness to environmental stress to the development of drug resistance.
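The first model's mechanism, where a fluctuating growth rate with a negative mean can still produce growth of the mean while the typical (median) trajectory decays, can be sketched with an Ornstein-Uhlenbeck rate; all parameter values below are illustrative, not the paper's.

```python
import math
import random

def simulate_growth(mu=-0.5, s=1.0, tau_c=1.0, t_end=2.0, dt=0.01,
                    n_traj=5000, seed=5):
    """Toy version of the linear model: d(ln x)/dt = gamma(t), with gamma an
    Ornstein-Uhlenbeck process of mean mu < 0, stationary std s and
    correlation time tau_c. Returns ln x(t_end) for each trajectory."""
    rng = random.Random(seed)
    steps = int(round(t_end / dt))
    decay = math.exp(-dt / tau_c)
    kick = s * math.sqrt(1.0 - decay * decay)
    logs = []
    for _ in range(n_traj):
        gamma = rng.gauss(mu, s)    # start from the stationary distribution
        logx = 0.0
        for _ in range(steps):
            logx += gamma * dt
            gamma = mu + (gamma - mu) * decay + kick * rng.gauss(0.0, 1.0)  # exact OU step
        logs.append(logx)
    return logs

logs = simulate_growth()
mean_log = sum(logs) / len(logs)          # typical log-growth: mu * t_end < 0
xs = sorted(math.exp(v) for v in logs)
median_x, mean_x = xs[len(xs) // 2], sum(xs) / len(xs)
# the median population decays while the skewed mean is pulled up by rare
# trajectories with persistently positive growth rate
```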

  12. A Parallel Fast Sweeping Method for the Eikonal Equation

    NASA Astrophysics Data System (ADS)

    Baker, B.

    2017-12-01

    Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods however are only realized by successive computation of predicted travel times in, potentially, strongly heterogeneous media. To this end this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver will use the iterative fast sweeping method because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite-difference stencil of Nobel et al., 2014 is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite-difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver will also address scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
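A serial first-order fast sweeping solver for the 2D eikonal equation |∇T| = 1, the baseline that such parallel schemes accelerate, can be written compactly; the grid size, sweep count, and point-source setup below are illustrative (2D has four sweep orderings, versus the eight in 3D mentioned above).

```python
import math

def fast_sweep(n=101, h=0.01, n_sweep_sets=8):
    """First-order Godunov fast sweeping for |grad T| = 1 on an n x n grid,
    unit speed, point source at the grid center. Each iteration performs the
    four fixed Gauss-Seidel sweep orderings."""
    big = 1e10
    T = [[big] * n for _ in range(n)]
    c = n // 2
    T[c][c] = 0.0                   # point source
    fwd, bwd = range(n), range(n - 1, -1, -1)
    for _ in range(n_sweep_sets):
        for rows, cols in [(fwd, fwd), (bwd, fwd), (fwd, bwd), (bwd, bwd)]:
            for i in rows:
                for j in cols:
                    if i == c and j == c:
                        continue
                    a = min(T[i - 1][j] if i > 0 else big,
                            T[i + 1][j] if i < n - 1 else big)
                    b = min(T[i][j - 1] if j > 0 else big,
                            T[i][j + 1] if j < n - 1 else big)
                    if abs(a - b) >= h:     # one-sided Godunov update
                        t_new = min(a, b) + h
                    else:                   # two-sided update
                        t_new = 0.5 * (a + b + math.sqrt(2.0 * h * h - (a - b) ** 2))
                    if t_new < T[i][j]:
                        T[i][j] = t_new
    return T, c

T, c = fast_sweep()
# along a grid axis the first-order solution is essentially exact;
# along the diagonal it mildly overestimates the true distance
```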

  13. Are randomly grown graphs really random?

    PubMed

    Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H

    2001-10-01

    We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
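The growth model is simple to reproduce with a union-find structure; the graph size, delta values, and seed below are illustrative choices for the sketch.

```python
import random

def grown_graph_giant_frac(t=20000, delta=0.25, seed=2):
    """Grow a graph per the model: each step adds a vertex, then with
    probability delta joins two uniformly random existing vertices by an
    edge. Returns the largest-component fraction via union-find."""
    rng = random.Random(seed)
    parent = list(range(t))
    size = [1] * t

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    n = 1                                   # current number of vertices
    for _ in range(1, t):
        n += 1
        if rng.random() < delta:
            a, b = find(rng.randrange(n)), find(rng.randrange(n))
            if a != b:                      # union by size
                if size[a] < size[b]:
                    a, b = b, a
                parent[b] = a
                size[a] += size[b]
    return max(size[find(v)] for v in range(t)) / t

# well above the delta = 1/8 transition a giant component is present;
# well below it, all components stay small
frac_super = grown_graph_giant_frac(delta=1.0)
frac_sub = grown_graph_giant_frac(delta=0.05)
```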

  14. Managing distance and covariate information with point-based clustering.

    PubMed

    Whigham, Peter A; de Graaf, Brandon; Srivastava, Rashmi; Glue, Paul

    2016-09-01

    Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley's K and applied to the problem of clustering with deliberate self-harm (DSH), is presented. Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data was derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). Study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95% CI, suggesting that social contagion may be present for this urban cohort. Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. 
Accounting for covariate measures that exhibit spatial clustering, such as deprivation, is crucial when assessing point-based clustering.
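A bare-bones Monte Carlo check of Ripley's K, without the edge corrections, covariates, or Minkowski metric used in the study, already recovers the complete-spatial-randomness baseline K(r) ≈ πr²; the point count and radius below are illustrative.

```python
import random

def ripley_k(points, r, area=1.0):
    """Plain (uncorrected) Ripley's K estimate:
    K(r) = (A / n^2) * number of ordered pairs within distance r."""
    n = len(points)
    count = 0
    r2 = r * r
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                count += 1
    return area * count / (n * n)

rng = random.Random(4)
csr = [(rng.random(), rng.random()) for _ in range(1500)]  # CSR in unit square
k_hat = ripley_k(csr, 0.05)
# under CSR, K(0.05) is close to pi * 0.05**2 ~ 0.00785
# (uncorrected edge effects bias it slightly low)
```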

  15. A parallel finite element simulator for ion transport through three-dimensional ion channel systems.

    PubMed

    Tu, Bin; Chen, Minxin; Xie, Yan; Zhang, Linbo; Eisenberg, Bob; Lu, Benzhuo

    2013-09-15

    A parallel finite element simulator, ichannel, is developed for ion transport through three-dimensional ion channel systems that consist of protein and membrane. The coordinates of heavy atoms of the protein are taken from the Protein Data Bank and the membrane is represented as a slab. The simulator contains two components: a parallel adaptive finite element solver for a set of Poisson-Nernst-Planck (PNP) equations that describe the electrodiffusion process of ion transport, and a mesh generation tool chain for ion channel systems, which is an essential component for the finite element computations. The finite element method has advantages in modeling irregular geometries and complex boundary conditions. We have built a tool chain to get the surface and volume mesh for ion channel systems, which consists of a set of mesh generation tools. The adaptive finite element solver in our simulator is implemented using the parallel adaptive finite element package Parallel Hierarchical Grid (PHG) developed by one of the authors, which provides the capability of doing large scale parallel computations with high parallel efficiency and the flexibility of choosing high order elements to achieve high order accuracy. The simulator is applied to a real transmembrane protein, the gramicidin A (gA) channel protein, to calculate the electrostatic potential, ion concentrations and I-V curve, with which both primitive and transformed PNP equations are studied and their numerical performances are compared. To further validate the method, we also apply the simulator to two other ion channel systems, the voltage dependent anion channel (VDAC) and α-Hemolysin (α-HL). The simulation results agree well with Brownian dynamics (BD) simulation results and experimental results. Moreover, because ionic finite size effects can be included in PNP model now, we also perform simulations using a size-modified PNP (SMPNP) model on VDAC and α-HL. 
It is shown that the size effects in SMPNP can effectively lead to reduced current in the channel, and the results are closer to BD simulation results. Copyright © 2013 Wiley Periodicals, Inc.

  16. Spectral Ambiguity of Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
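For reference, the non-overlapped Allan variance of fractional-frequency data is AVAR = ½⟨(ȳ_{k+1} − ȳ_k)²⟩ over adjacent block averages; a minimal implementation, checked against white frequency noise (for which AVAR scales as 1/m with the averaging factor m):

```python
import random

def allan_variance(y, m=1):
    """Non-overlapped Allan variance of fractional-frequency samples y at
    averaging factor m: average y over adjacent blocks of m samples, then
    take half the mean squared first difference of the block averages."""
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(blocks[k + 1] - blocks[k]) ** 2 for k in range(len(blocks) - 1)]
    return 0.5 * sum(diffs) / len(diffs)

rng = random.Random(9)
white = [rng.gauss(0.0, 1.0) for _ in range(100000)]
# white frequency noise with unit variance: AVAR(m) ~ 1/m
a1 = allan_variance(white, 1)
a10 = allan_variance(white, 10)
```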

  17. Remote sensing of earth terrain

    NASA Technical Reports Server (NTRS)

    Kong, Jin Au; Yueh, Herng-Aung; Shin, Robert T.

    1991-01-01

    Abstracts from 46 refereed journal and conference papers are presented for research on remote sensing of earth terrain. The topics covered related to remote sensing include the following: mathematical models, vegetation cover, sea ice, finite difference theory, electromagnetic waves, polarimetry, neural networks, random media, synthetic aperture radar, electromagnetic bias, and others.

  18. Finite-mode analysis by means of intensity information in fractional optical systems.

    PubMed

    Alieva, Tatiana; Bastiaans, Martin J

    2002-03-01

    It is shown how a coherent optical signal that contains only a finite number of Hermite-Gauss modes can be reconstructed from the knowledge of its Radon-Wigner transform, associated with the intensity distribution in a fractional-Fourier-transform optical system, at only two transversal points. The proposed method can be generalized to any fractional system whose generator transform has a complete orthogonal set of eigenfunctions.

  19. Comparison of Computational-Model and Experimental-Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  1. Bin packing problem solution through a deterministic weighted finite automaton

    NASA Astrophysics Data System (ADS)

    Zavala-Díaz, J. C.; Pérez-Ortega, J.; Martínez-Rebollar, A.; Almanza-Ortega, N. N.; Hidalgo-Reyes, M.

    2016-06-01

    This article presents a solution of the one-dimensional bin packing problem by means of a deterministic weighted finite automaton. The construction of the automaton and its application to three instances are presented: one synthetic data set and two benchmarks, N1C1W1_A.BPP from data set Set_1 and BPP13.BPP from hard28. The optimal solution is obtained for the synthetic data. For the first benchmark the solution obtained uses one container more than the ideal number of containers, and for the second the solution uses two containers more than the ideal (approximately 2.5%). The runtime in all three cases was less than one second.
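    The automaton construction itself is not reproduced in this abstract; as a point of comparison, a classical baseline for the same one-dimensional problem is the first-fit-decreasing heuristic, sketched here:

    ```python
    def first_fit_decreasing(items, capacity):
        """Place each item (largest first) into the first bin with room,
        opening a new bin when none fits."""
        bins = []
        for item in sorted(items, reverse=True):
            for b in bins:
                if sum(b) + item <= capacity:
                    b.append(item)
                    break
            else:
                bins.append([item])
        return bins
    ```

    On small instances this heuristic often reaches the ideal container count, but in general it can exceed the optimum, which is what benchmark suites such as hard28 are designed to expose.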

  2. Localization in finite vibroimpact chains: Discrete breathers and multibreathers.

    PubMed

    Grinberg, Itay; Gendelman, Oleg V

    2016-09-01

    We explore the dynamics of strongly localized periodic solutions (discrete solitons or discrete breathers) in a finite one-dimensional chain of oscillators. Localization patterns with both single and multiple localization sites (breathers and multibreathers) are considered. The model involves a parabolic on-site potential with rigid constraints (the displacement domain of each particle is finite) and a linear nearest-neighbor coupling. When a particle approaches the constraint, it undergoes an inelastic impact according to Newton's impact model. The rigid nonideal impact constraints are the only source of nonlinearity and damping in the system. We demonstrate that this vibro-impact model allows derivation of exact analytic solutions for the breathers and multibreathers with an arbitrary set of localization sites, both in conservative and in forced-damped settings. Periodic boundary conditions are considered; exact solutions for other types of boundary conditions are also available. The local character of the nonlinearity permits explicit derivation of a monodromy matrix for the breather solutions. Consequently, the stability of the derived breather and multibreather solutions can be efficiently studied in the framework of simple methods of linear algebra, and with rather moderate computational efforts. We show that the finiteness of the chain fragment and possible proximity of the localization sites strongly affect both the existence and the stability patterns of these localized solutions.

  3. A finite element method for solving the shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Comblen, Richard; Legrand, Sébastien; Deleersnijder, Eric; Legat, Vincent

    Within the framework of ocean general circulation modeling, the present paper describes an efficient way to discretize partial differential equations on curved surfaces by means of the finite element method on triangular meshes. Our approach benefits from the inherent flexibility of the finite element method. The key idea consists in a dialog between a local coordinate system defined for each element in which integration takes place, and a nodal coordinate system in which all local contributions related to a vectorial degree of freedom are assembled. Since each element of the mesh and each degree of freedom are treated in the same way, the so-called pole singularity issue is fully circumvented. Applied to the shallow water equations expressed in primitive variables, this new approach has been validated against the standard test set defined by [Williamson, D.L., Drake, J.B., Hack, J.J., Jakob, R., Swarztrauber, P.N., 1992. A standard test set for numerical approximations to the shallow water equations in spherical geometry. Journal of Computational Physics 102, 211-224]. Optimal rates of convergence for the P1NC-P1 finite element pair are obtained, for both global and local quantities of interest. Finally, the approach can be extended to three-dimensional thin-layer flows in a straightforward manner.

  4. Consistent Initial Conditions for the DNS of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.; Blaisdell, G. A.

    1996-01-01

    Relationships between diverse thermodynamic quantities appropriate to weakly compressible turbulence are derived. It is shown that for turbulence of a finite turbulent Mach number there is a finite amount of compressibility. A methodology for generating initial conditions for the fluctuating pressure, density and dilatational velocity is given which is consistent with finite Mach number effects. Use of these initial conditions gives rise to a smooth development of the flow, in contrast to cases in which these fields are specified arbitrarily or set to zero. Comparisons of the effect of different types of initial conditions are made using direct numerical simulation of decaying isotropic turbulence.

  5. Performance of low-rank QR approximation of the finite element Biot-Savart law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D A; Fasenfest, B J

    2006-01-12

    We are concerned with the computation of magnetic fields from known electric currents in the finite element setting. In finite element eddy current simulations it is necessary to prescribe the magnetic field (or potential, depending upon the formulation) on the conductor boundary. In situations where the magnetic field is due to a distributed current density, the Biot-Savart law can be used, eliminating the need to mesh the nonconducting regions. Computation of the Biot-Savart law can be significantly accelerated using a low-rank QR approximation. We review the low-rank QR method and report performance on selected problems.
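    The compressibility that the low-rank QR method exploits comes from the smoothness of the far-field kernel. The sketch below demonstrates it with a truncated SVD on a 1/r kernel between well-separated point clusters (an SVD is used for brevity; a column-pivoted QR as in the paper yields a comparable rank at lower cost, and all geometry here is invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # field points and a well-separated cluster of source points
    targets = rng.random((200, 3))
    sources = rng.random((150, 3)) + np.array([10.0, 0.0, 0.0])

    # dense kernel matrix K_ij = 1/|t_i - s_j|, the smooth far-field
    # interaction that makes Biot-Savart-type matrices compressible
    diff = targets[:, None, :] - sources[None, :, :]
    K = 1.0 / np.linalg.norm(diff, axis=2)

    # truncated low-rank factorization
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    rank = int(np.sum(s > 1e-6 * s[0]))
    K_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    rel_err = np.linalg.norm(K - K_lr) / np.linalg.norm(K)
    ```

    The numerical rank is far below the matrix dimensions while the approximation error stays at the truncation tolerance, which is the property the accelerated Biot-Savart evaluation relies on.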

  6. Random close packing of disks and spheres in confined geometries

    NASA Astrophysics Data System (ADS)

    Desmond, Kenneth W.; Weeks, Eric R.

    2009-11-01

    Studies of random close packing of spheres have advanced our knowledge about the structure of systems such as liquids, glasses, emulsions, granular media, and amorphous solids. In confined geometries, the structural properties of random-packed systems will change. To understand these changes, we study random close packing in finite-sized confined systems, in both two and three dimensions. Each packing consists of a 50-50 binary mixture with particle size ratio of 1.4. The presence of confining walls significantly lowers the overall maximum area fraction (or volume fraction in three dimensions). A simple model is presented, which quantifies the reduction in packing due to wall-induced structure. This wall-induced structure decays rapidly away from the wall, with characteristic length scales comparable to the small particle diameter.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoban, Matty J.; Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD; Wallman, Joel J.

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings, each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation in which each party chooses between two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.
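    In its standard two-party, binary form, the Popescu-Rohrlich box referenced above outputs uniformly random bits constrained by a XOR b = x AND y, which drives the CHSH combination to its algebraic maximum of 4. The sketch below is a generic illustration of that base case, not the paper's many-party generalization:

    ```python
    import itertools
    import random

    def pr_box(x, y, rng=random.Random(0)):
        """Popescu-Rohrlich box: a is uniformly random, and a XOR b = x AND y."""
        a = rng.randrange(2)
        return a, a ^ (x & y)

    def chsh(box, n=500):
        """CHSH value E(0,0) + E(0,1) + E(1,0) - E(1,1), E = <(-1)^(a XOR b)>."""
        s = 0.0
        for x, y in itertools.product((0, 1), repeat=2):
            e = sum((-1) ** (a ^ b) for a, b in (box(x, y) for _ in range(n))) / n
            s += -e if (x, y) == (1, 1) else e
        return s
    ```

    Local deterministic strategies reach at most 2, quantum mechanics 2√2, and the PR box 4, despite its marginal outputs being uniformly random.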

  8. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors choose a treatment on the basis of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The system encompasses three parts: a preprocessing module, a finite element mechanical analysis module, and a postprocessing module. The preprocessing module includes parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and their positions, and input and transmission of model parameters. The finite element mechanical analysis module includes mesh generation, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting and batch processing operation. The postprocessing module includes extraction and display of batch processing results, image generation for batch processing, operation of the optimization program and display of the optimal result. The system implements the whole workflow from input of fracture parameters to output of the optimal fixation plan according to the specific patient's real fracture parameters and the optimization rules, which demonstrates the effectiveness of the system. The system also has a friendly interface and simple operation, and its functions can be improved quickly by modifying individual modules.

  9. Matching a Distribution by Matching Quantiles Estimation

    PubMed Central

    Sgouropoulos, Nikolaos; Yao, Qiwei; Yastremiz, Claudia

    2015-01-01

    Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on ordinary least-squares estimation (OLS) is proposed to compute MQE. MQE can be easily modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching within a certain range of quantiles to match a part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimation, both with and without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application in selecting a counterparty representative portfolio with a real dataset is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO. PMID:26692592
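    A minimal sketch of the iterative OLS step described above, on a toy data set with equal sample sizes (the data-generating process is invented for illustration; the paper's version adds the LASSO variant and formal convergence results): sort the target sample once, then alternate between ordering the candidate rows by the current fitted values and regressing the target quantiles on the reordered rows.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 500, 3
    X = rng.normal(size=(n, p))                                    # candidate variables
    Y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)  # target sample

    Ys = np.sort(Y)                                 # target quantiles
    w = np.ones(p)                                  # initial weights
    for _ in range(100):
        order = np.argsort(X @ w)                   # order rows by current fit
        w_new, *_ = np.linalg.lstsq(X[order], Ys, rcond=None)  # OLS on quantile pairs
        if np.allclose(w_new, w, atol=1e-12):
            break
        w = w_new

    match_mse = np.mean((np.sort(X @ w) - Ys) ** 2)  # quantile discrepancy
    ```

    Each iteration cannot increase the quantile discrepancy, so the procedure settles on weights whose linear combination closely matches the target distribution.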

  10. Damage Identification in Beam Structure using Spatial Continuous Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Janeliukstis, R.; Rucevskis, S.; Wesolowski, M.; Kovalovs, A.; Chate, A.

    2015-11-01

    In this paper the applicability of the spatial continuous wavelet transform (CWT) technique for damage identification in a beam structure is analyzed through the application of different types of wavelet functions and scaling factors. The proposed method uses exclusively mode shape data from the damaged structure. To examine the limitations of the method and to ascertain its sensitivity to noisy experimental data, several sets of simulated data are analyzed. Simulated test cases include numerical mode shapes corrupted by different levels of random noise as well as mode shapes with different numbers of measurement points used for the wavelet transform. A broad comparison of the ability of different wavelet functions to detect and locate damage in the beam structure is given. The effectiveness and robustness of the proposed algorithms are demonstrated experimentally on two aluminum beams containing single mill-cut damage. The modal frequencies and the corresponding mode shapes are obtained via finite element models for the numerical simulations and by using a scanning laser vibrometer with a PZT actuator as the vibration excitation source for the experimental study.
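    The core of such a damage index can be sketched with a fine-scale Mexican-hat transform of a mode shape carrying a hypothetical localized perturbation (the damage position, size and all parameters here are invented for illustration):

    ```python
    import numpy as np

    def ricker(n, a):
        """Mexican-hat (Ricker) wavelet sampled at n points, scale a."""
        t = np.arange(n) - (n - 1) / 2.0
        amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
        return amp * (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

    x = np.linspace(0.0, 1.0, 401)
    mode = np.sin(np.pi * x)            # first bending mode of a simply supported beam
    # hypothetical local damage: a small bump-like perturbation near x = 0.6
    mode += 0.05 * np.exp(-((x - 0.6) / 0.01) ** 2)

    a = 4                               # fine scale responds to local features
    coef = np.convolve(mode, ricker(8 * a + 1, a), mode='same')
    interior = np.abs(coef[50:-50])     # discard boundary-distorted samples
    peak = int(np.argmax(interior)) + 50  # index of the suspected damage site
    ```

    The coefficient magnitude peaks at the perturbed site (index 240, i.e. x = 0.6) even though the perturbation is barely visible against the smooth mode shape itself.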

  11. Enhanced photon indistinguishability in pulse-driven quantum emitters

    NASA Astrophysics Data System (ADS)

    Fotso, Herbert F.

    2017-04-01

    Photon indistinguishability is an essential ingredient for the realization of scalable quantum networks. For quantum bits in the solid state, this is hindered by spectral diffusion, the uncontrolled random drift of the emission/absorption spectrum as a result of fluctuations in the emitter's environment. We study optical properties of a quantum emitter in the solid state when it is driven by a periodic sequence of optical pulses with finite detuning with respect to the emitter. We find that a pulse sequence can effectively mitigate spectral diffusion and enhance photon indistinguishability. The bulk of the emission occurs at a set target frequency; photon indistinguishability is enhanced and is restored to its optimal value after every even pulse. Also, for moderate values of the sequence period and of the detuning, both the emission spectrum and the absorption spectrum have lineshapes with little dependence on the detuning. We describe the solution and the evolution of the emission/absorption spectrum as a function of time.

  12. Jump state estimation with multiple sensors with packet dropping and delaying channels

    NASA Astrophysics Data System (ADS)

    Dolz, Daniel; Peñarrocha, Ignacio; Sanchis, Roberto

    2016-03-01

    This work addresses the design of a state observer for systems whose outputs are measured through a communication network. The measurements from each sensor node are assumed to arrive randomly, scarcely and with a time-varying delay. The proposed model of the plant and the network measurement scenarios cover the cases of multiple sensors, out-of-sequence measurements, buffered measurements on a single packet and multirate sensor measurements. A jump observer is proposed that selects a different gain depending on the number of periods elapsed between successfully received measurements and on the available data. A finite set of gains is pre-calculated offline with a tractable optimisation problem, where the complexity of the observer implementation is a design parameter. The computational cost of the observer implementation is much lower than in the Kalman filter, whilst the performance is similar. Several examples illustrate the observer design for different measurement scenarios and observer complexity and show the achievable performance.

  13. A Survey of Recent Advances in Particle Filters and Remaining Challenges for Multitarget Tracking

    PubMed Central

    Wang, Xuedong; Sun, Shudong; Corchado, Juan M.

    2017-01-01

    We review some advances of the particle filtering (PF) algorithm that have been achieved in the last decade in the context of target tracking, with regard to either a single target or multiple targets in the presence of false or missing data. The first part of our review is on remarkable achievements that have been made for the single-target PF from several aspects including importance proposal, computing efficiency, particle degeneracy/impoverishment and constrained/multi-modal systems. The second part of our review is on analyzing the intractable challenges raised within the general multitarget (multi-sensor) tracking due to random target birth and termination, false alarm, misdetection, measurement-to-track (M2T) uncertainty and track uncertainty. The mainstream multitarget PF approaches consist of two main classes: one based on M2T association approaches, and the other association-free, such as the finite set statistics-based PF. In either case, significant challenges remain due to unknown tracking scenarios and integrated tracking management. PMID:29168772
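    As a baseline for the single-target case reviewed above, a bootstrap particle filter for a scalar random-walk target fits in a few lines (a generic textbook sketch with invented noise levels, not code from the survey):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    T, N = 50, 2000
    # scalar random-walk target observed in Gaussian noise
    truth = np.cumsum(rng.normal(0.0, 0.5, T))
    obs = truth + rng.normal(0.0, 1.0, T)

    particles = rng.normal(0.0, 5.0, N)              # diffuse prior
    estimates = []
    for z in obs:
        particles += rng.normal(0.0, 0.5, N)         # propagate with prior dynamics
        w = np.exp(-0.5 * (z - particles) ** 2)      # Gaussian likelihood weights
        w /= w.sum()
        estimates.append(w @ particles)              # posterior-mean estimate
        particles = particles[rng.choice(N, N, p=w)] # multinomial resampling
    estimates = np.array(estimates)
    ```

    With the correct motion and measurement models, the filtered estimate tracks the truth noticeably better than the raw measurements do.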

  14. A general method for generating bathymetric data for hydrodynamic computer models

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric data base using linear or cubic shape functions used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
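    The linear-shape-function interpolation step can be sketched as follows: the depths at the three vertices of the bounding triangle are blended with barycentric coordinates (a minimal version of the linear case; the cubic case adds more nodes per element):

    ```python
    import numpy as np

    def depth_in_triangle(p, verts, depths):
        """Interpolate depth at point p inside a triangle using the linear
        (barycentric) shape functions of the finite-element method."""
        (x1, y1), (x2, y2), (x3, y3) = verts
        T = np.array([[x1 - x3, x2 - x3], [y1 - y3, y2 - y3]])
        l1, l2 = np.linalg.solve(T, np.asarray(p, float) - np.array([x3, y3]))
        l3 = 1.0 - l1 - l2                     # barycentric weights sum to one
        return l1 * depths[0] + l2 * depths[1] + l3 * depths[2]
    ```

    The weights reproduce the vertex depths exactly at the vertices and vary linearly in between, so the interpolated field is continuous across element edges.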

  15. Robust estimation of the proportion of treatment effect explained by surrogate marker information.

    PubMed

    Parast, Layla; McDermott, Mary M; Tian, Lu

    2016-05-10

    In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
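    The quantity being estimated can be illustrated with a crude stratified analogue on synthetic data (this is not the authors' nonparametric procedure; the data-generating process and the decile stratification are invented for illustration): when the outcome is driven entirely through the surrogate, the residual effect at matched surrogate values is near zero and the estimated proportion explained is near one.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 4000
    g = rng.integers(0, 2, n)                    # randomized treatment indicator
    s = 1.0 * g + rng.normal(0.0, 1.0, n)        # surrogate marker, shifted by treatment
    y = 2.0 * s + rng.normal(0.0, 1.0, n)        # outcome, fully mediated by s

    delta = y[g == 1].mean() - y[g == 0].mean()  # total treatment effect on outcome
    # residual treatment effect at matched surrogate values (decile strata of s)
    edges = np.quantile(s, np.linspace(0.0, 1.0, 11))
    resid = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (s >= lo) & (s < hi)
        m1, m0 = m & (g == 1), m & (g == 0)
        if m1.any() and m0.any():
            resid += m.mean() * (y[m1].mean() - y[m0].mean())
    pte = 1.0 - resid / delta                    # proportion of effect explained
    ```

    Replacing the coarse strata with kernel smoothing, and adding the perturbation-resampling variance estimate, is where the paper's nonparametric machinery comes in.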

  16. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
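    A minimal relative of such sequential algorithms is cyclic projection onto violated half-spaces for the linear feasibility problem Ax ≤ b (this is the simple Agmon-Motzkin-style scheme, not ART3 itself, which handles interval constraints with a cleverer control):

    ```python
    import numpy as np

    def cyclic_projection(A, b, eps=1e-9, max_sweeps=1000):
        """Cycle through the constraints a_i . x <= b_i, orthogonally projecting
        onto each violated half-space, until a full sweep finds no violation."""
        A, b = np.asarray(A, float), np.asarray(b, float)
        x = np.zeros(A.shape[1])
        for _ in range(max_sweeps):
            clean = True
            for a_i, b_i in zip(A, b):
                r = a_i @ x - b_i                # positive residual = violation
                if r > eps:
                    x -= r * a_i / (a_i @ a_i)   # project onto the half-space
                    clean = False
            if clean:
                break
        return x
    ```

    The cyclic control visible here (each sweep revisits every constraint in order) is exactly what the paper's transformation relaxes while preserving finite convergence.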

  17. A quasi-Lagrangian finite element method for the Navier-Stokes equations in a time-dependent domain

    NASA Astrophysics Data System (ADS)

    Lozovskiy, Alexander; Olshanskii, Maxim A.; Vassilevski, Yuri V.

    2018-05-01

    The paper develops a finite element method for the Navier-Stokes equations of incompressible viscous fluid in a time-dependent domain. The method builds on a quasi-Lagrangian formulation of the problem. The paper provides stability and convergence analysis of the fully discrete (finite-difference in time and finite-element in space) method. The analysis does not assume any CFL time-step restriction; it rather needs mild conditions of the form $\Delta t \le C$, where $C$ depends only on problem data, and $h^{2m_u+2} \le c\,\Delta t$, where $m_u$ is the polynomial degree of the velocity finite element space. Both conditions result from a numerical treatment of practically important non-homogeneous boundary conditions. The theoretically predicted convergence rate is confirmed by a set of numerical experiments. Further, we apply the method to simulate a flow in a simplified model of the left ventricle of a human heart, where the ventricle wall dynamics is reconstructed from a sequence of contrast-enhanced Computed Tomography images.

  18. Global synchronization in finite time for fractional-order neural networks with discontinuous activations and time delays.

    PubMed

    Peng, Xiao; Wu, Huaiqin; Song, Ka; Shi, Jiaxin

    2017-10-01

    This paper is concerned with the global Mittag-Leffler synchronization and the synchronization in finite time for fractional-order neural networks (FNNs) with discontinuous activations and time delays. Firstly, the properties with respect to Mittag-Leffler convergence and convergence in finite time, which play a critical role in the investigation of the global synchronization of FNNs, are developed, respectively. Secondly, a novel state-feedback controller, which includes time delays and discontinuous factors, is designed to realize the synchronization goal. By applying the fractional differential inclusion theory, inequality analysis techniques and the proposed convergence properties, sufficient conditions to achieve the global Mittag-Leffler synchronization and the synchronization in finite time are addressed in terms of linear matrix inequalities (LMIs). In addition, the upper bound of the settling time of the finite-time synchronization is explicitly evaluated. Finally, two examples are given to demonstrate the validity of the proposed design method and theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Current algebras, measures quasi-invariant under diffeomorphism groups, and infinite quantum systems with accumulation points

    NASA Astrophysics Data System (ADS)

    Sakuraba, Takao

    The approach to quantum physics via current algebra and unitary representations of the diffeomorphism group is established. This thesis studies possible infinite Bose gas systems using this approach. Systems of locally finite configurations and systems of configurations with accumulation points are considered, with the main emphasis on the latter. In Chapter 2, canonical quantization, quantization via current algebra and unitary representations of the diffeomorphism group are reviewed. In Chapter 3, a new definition of the space of configurations is proposed and an axiom for general configuration spaces is abstracted. Various subsets of the configuration space, including those specifying the number of points in a Borel set and those specifying the number of accumulation points in a Borel set are proved to be measurable using this axiom. In Chapter 4, known results on the space of locally finite configurations and Poisson measure are reviewed in the light of the approach developed in Chapter 3, including the approach to current algebra in the Poisson space by Albeverio, Kondratiev, and Rockner. Goldin and Moschella considered unitary representations of the group of diffeomorphisms of the line based on self-similar random processes, which may describe infinite quantum gas systems with clusters. In Chapter 5, the Goldin-Moschella theory is developed further. Their construction of measures quasi-invariant under diffeomorphisms is reviewed, and a rigorous proof of their conjectures is given. It is proved that their measures with distinct correlation parameters are mutually singular. A quasi-invariant measure constructed by Ismagilov on the space of configurations with accumulation points on the circle is proved to be singular with respect to the Goldin-Moschella measures. Finally a generalization of the Goldin-Moschella measures to the higher-dimensional case is studied, where the notion of covariance matrix and the notion of condition number play important roles. 
A rigorous construction of measures quasi-invariant under the group of diffeomorphisms of d-dimensional space stabilizing a point is given.

  20. Joint Target Detection and Tracking Filter for Chilbolton Advanced Meteorological Radar Data Processing

    NASA Astrophysics Data System (ADS)

    Pak, A.; Correa, J.; Adams, M.; Clark, D.; Delande, E.; Houssineau, J.; Franco, J.; Frueh, C.

    2016-09-01

    Recently, the growing number of inactive Resident Space Objects (RSOs), or space debris, has provoked increased interest in the field of Space Situational Awareness (SSA) and various investigations of new methods for orbital object tracking. In comparison with conventional tracking scenarios, state estimation of an orbiting object entails additional challenges, such as orbit determination and orbital state and covariance propagation in the presence of highly nonlinear system dynamics. The sensors which are available for detecting and tracking space debris are prone to multiple clutter measurements. Added to this problem, is the fact that it is unknown whether or not a space debris type target is present within such sensor measurements. Under these circumstances, traditional single-target filtering solutions such as Kalman Filters fail to produce useful trajectory estimates. The recent Random Finite Set (RFS) based Finite Set Statistical (FISST) framework has yielded filters which are more appropriate for such situations. The RFS based Joint Target Detection and Tracking (JoTT) filter, also known as the Bernoulli filter, is a single target, multiple measurements filter capable of dealing with cluttered and time-varying backgrounds as well as modeling target appearance and disappearance in the scene. Therefore, this paper presents the application of the Gaussian mixture-based JoTT filter for processing measurements from Chilbolton Advanced Meteorological Radar (CAMRa) which contain both defunct and operational satellites. The CAMRa is a fully-steerable radar located in southern England, which was recently modified to be used as a tracking asset in the European Space Agency SSA program. The experiments conducted show promising results regarding the capability of such filters in processing cluttered radar data. The work carried out in this paper was funded by the USAF Grant No. 
FA9550-15-1-0069, the Chilean Conicyt Fondecyt grant number 1150930, an EU Erasmus Mundus MSc Scholarship, and the Defense Science and Technology Laboratory (DSTL), U.K.

  1. Transmission and Andreev reflection in one-dimensional chain with randomly doped superconducting grains

    NASA Astrophysics Data System (ADS)

    Hu, Dong-Sheng; Xiong, Shi-Jie

    2002-11-01

    We investigate the transport properties and Andreev reflection in one-dimensional (1D) systems with randomly doped superconducting grains. The superconducting grains are described by the Bogoliubov-de Gennes Hamiltonian, and the conductance is calculated by using the transfer matrix method and the Landauer-Büttiker formula. It is found that although the quasiparticle states are localized due to the randomness and the low dimensionality, the conductance is still kept finite in the thermodynamical limit due to the Andreev reflection. We also investigate the effect of correlation of disorder in such systems, and the results show the delocalization of quasiparticle states and suppression of Andreev reflection in a wide energy window.
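    The transfer-matrix step can be illustrated for a plain 1-D tight-binding chain between perfect leads (a scalar sketch with invented parameters; the Bogoliubov-de Gennes case of the paper doubles the matrix dimension to couple electron and hole components):

    ```python
    import numpy as np

    def transmission(eps, E=0.0, t=1.0):
        """Transmission through a tight-binding chain with on-site energies eps,
        computed by multiplying 2x2 transfer matrices and matching plane waves."""
        M = np.eye(2, dtype=complex)
        for e in eps:
            M = np.array([[(E - e) / t, -1.0], [1.0, 0.0]]) @ M
        k = np.arccos(E / (2.0 * t))             # wavevector in the perfect leads
        L = len(eps)
        a = M @ np.array([np.exp(-1j * k), 1.0])           # reflected-wave column
        c = M @ np.array([np.exp(1j * k), 1.0])            # incident-wave column
        d = np.array([np.exp(1j * k * (L + 1)), np.exp(1j * k * L)])  # transmitted
        r, tau = np.linalg.solve(np.column_stack([a, -d]), -c)
        return abs(tau) ** 2
    ```

    A clean chain transmits perfectly, while strong random site energies localize the states and suppress the transmission, which is the normal-state behavior against which the Andreev-reflection result is contrasted.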

  2. The GPRIME approach to finite element modeling

    NASA Technical Reports Server (NTRS)

    Wallace, D. R.; Mckee, J. H.; Hurwitz, M. M.

    1983-01-01

    GPRIME, an interactive modeling system, runs on the CDC 6000 computers and the DEC VAX 11/780 minicomputer. This system includes three components: (1) GPRIME, a user-friendly geometric language and a processor to translate that language into geometric entities; (2) GGEN, an interactive data generator for 2-D models; and (3) SOLIDGEN, a 3-D solid modeling program. Each component has a user interface with an extensive command set. All of these programs make use of a comprehensive B-spline mathematics subroutine library, which can be used for a wide variety of interpolation problems and other geometric calculations. Many other user aids, such as automatic saving of the geometric and finite element data bases and hidden line removal, are available. This interactive finite element modeling capability can produce a complete finite element model, producing an output file of grid and element data.

  3. A combinatorial approach to the design of vaccines.

    PubMed

    Martínez, Luis; Milanič, Martin; Legarreta, Leire; Medvedev, Paul; Malaina, Iker; de la Fuente, Ildefonso M

    2015-05-01

    We present two new problems of combinatorial optimization and discuss their applications to the computational design of vaccines. In the shortest λ-superstring problem, given a family S1,...,S(k) of strings over a finite alphabet, a set T of "target" strings over that alphabet, and an integer λ, the task is to find a string of minimum length that contains, for each i, at least λ of the target strings occurring as substrings of S(i). In the shortest λ-cover superstring problem, given a collection X1,...,X(n) of finite sets of strings over a finite alphabet and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ elements of X(i) as substrings. The two problems are polynomially equivalent, and the shortest λ-cover superstring problem is a common generalization of two well-known combinatorial optimization problems, the shortest common superstring problem and the set cover problem. We present two approaches to obtain exact or approximate solutions to the shortest λ-superstring and λ-cover superstring problems: one based on integer programming and one based on a hill-climbing algorithm. An application is given to the computational design of vaccines, and the algorithms are applied to experimental data taken from patients infected by H5N1 and HIV-1.
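
    Neither the integer program nor the paper's hill-climbing search is reproduced here; as a toy sketch of the special case where every target must appear (λ = |T|), the classic greedy-merge heuristic for the shortest common superstring can be written as follows (all names are illustrative):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Greedy-merge heuristic: repeatedly merge the pair with maximum overlap."""
    strings = list(dict.fromkeys(strings))  # drop duplicates, keep order
    # Drop strings already contained in another target.
    strings = [s for s in strings
               if not any(s != t and s in t for t in strings)]
    if not strings:
        return ""
    while len(strings) > 1:
        best = (-1, 0, 1)  # (overlap length, i, j)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)]
        strings.append(merged)
    return strings[0]
```

    The greedy heuristic is not optimal in general, but it illustrates the overlap-merging step that any superstring method, including a hill-climbing search over merge orders, builds on.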

  4. Probability distribution of the entanglement across a cut at an infinite-randomness fixed point

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Majumdar, Satya N.; Huse, David A.

    2017-03-01

    We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ∼ L^(-ψ(k)), where k ≡ S/ln(L/L0), the large deviation function ψ(k) is found explicitly, and L0 is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model which we calculate for large L via a mapping to Majorana fermions.

  5. Stability and dynamical properties of material flow systems on random networks

    NASA Astrophysics Data System (ADS)

    Anand, K.; Galla, T.

    2009-04-01

    The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.

  6. Movement patterns of Tenebrio beetles demonstrate empirically that correlated-random-walks have similitude with a Lévy walk.

    PubMed

    Reynolds, Andy M; Leprêtre, Lisa; Bohan, David A

    2013-11-07

    Correlated random walks are the dominant conceptual framework for modelling and interpreting organism movement patterns. Recent years have witnessed a stream of high profile publications reporting that many organisms perform Lévy walks; movement patterns that seemingly stand apart from the correlated random walk paradigm because they are discrete and scale-free rather than continuous and scale-finite. Our new study of the movement patterns of Tenebrio molitor beetles in unchanging, featureless arenas provides the first empirical support for a remarkable and deep theoretical synthesis that unites correlated random walks and Lévy walks. It demonstrates that the two models are complementary rather than competing descriptions of movement pattern data and shows that correlated random walks are a part of the Lévy walk family. It follows from this that vast numbers of Lévy walkers could be hiding in plain sight.

  7. Hopping transport through an array of Luttinger liquid stubs

    NASA Astrophysics Data System (ADS)

    Chudnovskiy, A. L.

    2004-01-01

    We consider thermally activated transport across an array of parallel one-dimensional quantum wires of finite length (quantum stubs). The disorder enters as a random tunneling between the nearest-neighbor stubs as well as a random shift of the bottom of the energy band in each stub. Whereas one-particle wave functions are localized across the array, the plasmons are delocalized, which affects the variable-range hopping. A perturbative analytical expression for the low-temperature resistance across the array is obtained for a particular choice of plasmon dispersion.

  8. HiVy automated translation of stateflow designs for model checking verification

    NASA Technical Reports Server (NTRS)

    Pingree, Paula

    2003-01-01

    The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as an intermediate format for the tool set.

  9. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

    A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection were included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that in hand generated pictures, the algorithm worked well on following the straight lines, but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.
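
    The paper's minimal-automaton inference algorithm is not reproduced here; as a simpler, hedged sketch of the underlying idea, a trie is itself an acyclic deterministic automaton that accepts exactly a given finite set of strings (merging equivalent suffix states would then yield the minimal acceptor the article describes). The class below is illustrative:

```python
class TrieDFA:
    """Acyclic automaton (a trie) accepting exactly a finite set of strings."""

    def __init__(self, words):
        self.next = [{}]       # transition table: state -> {symbol: state}
        self.accept = [False]  # acceptance flag per state
        for w in words:
            s = 0
            for ch in w:
                if ch not in self.next[s]:
                    self.next.append({})
                    self.accept.append(False)
                    self.next[s][ch] = len(self.next) - 1
                s = self.next[s][ch]
            self.accept[s] = True

    def accepts(self, w):
        """Run the automaton on w; reject on any missing transition."""
        s = 0
        for ch in w:
            if ch not in self.next[s]:
                return False
            s = self.next[s][ch]
        return self.accept[s]
```

    The trie is deterministic by construction and has a rigorously defined relationship to the input set: it accepts a string if and only if that string was inserted.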

  10. The Ablowitz–Ladik system on a finite set of integers

    NASA Astrophysics Data System (ADS)

    Xia, Baoqiang

    2018-07-01

    We show how to solve initial-boundary value problems for integrable nonlinear differential–difference equations on a finite set of integers. The method we employ is the discrete analogue of the unified transform (Fokas method). The implementation of this method to the Ablowitz–Ladik system yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem, which has a jump matrix with explicit dependence involving certain functions referred to as spectral functions. Some of these functions are defined in terms of the initial value, while the remaining spectral functions are defined in terms of two sets of boundary values. These spectral functions are not independent but satisfy an algebraic relation called the global relation. We analyze the global relation to characterize the unknown boundary values in terms of the given initial and boundary values. We also discuss the linearizable boundary conditions.

  11. Modal Substructuring of Geometrically Nonlinear Finite Element Models with Interface Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.

    Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large scale model. In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced order model (NLROM) is non-intrusive in that it does not require any modification to the commercial FEA code, but computes the NLROM from the results of several nonlinear static analyses. The NLROMs are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees-of-freedom (DOF), and the nonlinear normal modes (NNMs) of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861 DOF nonlinear finite element model down to only 23 DOF, while still accurately reproducing the first three NNMs of the full order model.

  12. Modal Substructuring of Geometrically Nonlinear Finite Element Models with Interface Reduction

    DOE PAGES

    Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.

    2017-03-29

    Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large scale model. In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced order model (NLROM) is non-intrusive in that it does not require any modification to the commercial FEA code, but computes the NLROM from the results of several nonlinear static analyses. The NLROMs are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees-of-freedom (DOF), and the nonlinear normal modes (NNMs) of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861 DOF nonlinear finite element model down to only 23 DOF, while still accurately reproducing the first three NNMs of the full order model.

  13. The Development of a Finite Volume Method for Modeling Sound in Coastal Ocean Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Wen; Yang, Zhaoqing; Copping, Andrea E.

    With the rapid growth of marine renewable energy and offshore wind energy, there have been concerns that the noise generated from construction and operation of the devices may interfere with marine animals' communication. In this research, an underwater sound model is developed to simulate sound propagation generated by marine-hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite volume and finite difference methods are developed to solve the 3D Helmholtz equation of sound propagation in the coastal environment. For the finite volume method, the grid system consists of triangular grids in the horizontal plane and sigma layers in the vertical dimension. A 3D sparse matrix solver with complex coefficients is formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method is applied to efficiently solve the matrix system iteratively with MPI parallelization on a high-performance cluster. The sound model is then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities in a range-dependent setting, such as offshore wind energy platform constructions and tidal stream turbines. As a proof of concept, initial validation of the finite difference solver is presented for two coastal wedge problems. Validation of the finite volume method will be reported separately.

  14. [Progression on finite element modeling method in scoliosis].

    PubMed

    Fan, Ning; Zang, Lei; Hai, Yong; Du, Peng; Yuan, Shuo

    2018-04-25

    Scoliosis is a complex three-dimensional spinal malformation with a complicated pathogenesis, often associated with complications such as thoracic deformity and shoulder imbalance. Because specimens and animal models are difficult to obtain, biomechanical study of scoliosis has been limited. In recent years, along with the development of computer technology, software and imaging, the technology for establishing a finite element model of the human spine has matured, providing strong support for research on the pathogenesis of scoliosis, the design and application of braces, and the selection of surgical methods. The finite element modeling method is gradually becoming an important tool in the biomechanical study of scoliosis. Establishing a high-quality finite element model is the basis of analysis and future study. However, the finite element modeling process can be complex, and modeling methods vary greatly. Choosing the appropriate modeling method according to research objectives has become researchers' primary task. In this paper, the authors review the national and international literature of recent years and summarize finite element modeling methods in scoliosis, including data acquisition, establishment of the geometric model, material properties, parameter settings, validation of the finite element model, and so on. Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.

  15. Air Vehicles Division Computational Structural Analysis Facilities Policy and Guidelines for Users

    DTIC Science & Technology

    2005-05-01

    34 Thermal " as appropriate and the tolerance set to "default". b) Create the model geometry. c) Create the finite elements. d) Create the...linear, non-linear, dynamic, thermal , acoustic analysis. The modelling of composite materials, creep, fatigue and plasticity are also covered...perform professional, high quality finite element analysis (FEA). FE analysts from many tasks within AVD are using the facilities to conduct FEA with

  16. Dispersion-relation-preserving finite difference schemes for computational acoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1993-01-01

    Time-marching dispersion-relation-preserving (DRP) schemes can be constructed by optimizing the finite difference approximations of the space and time derivatives in wave number and frequency space. A set of radiation and outflow boundary conditions compatible with the DRP schemes is constructed, and a sequence of numerical simulations is conducted to test the effectiveness of the DRP schemes and the radiation and outflow boundary conditions. Close agreement with the exact solutions is obtained.
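
    The DRP construction chooses stencil coefficients so that the numerical (modified) wavenumber tracks the exact one over a wide band. As a minimal illustration of the quantity being optimized (using the standard 2nd-, 4th- and 6th-order central-difference coefficients, not Tam and Webb's optimized DRP coefficients), one can compare how the modified wavenumber k̄Δx = 2 Σ_j a_j sin(j kΔx) approaches the exact kΔx:

```python
import math

def modified_wavenumber(coeffs, kh):
    """Numerical wavenumber k̄Δx of an antisymmetric central difference with
    coefficients a_j multiplying (f_{i+j} - f_{i-j})/Δx."""
    return 2.0 * sum(a * math.sin(j * kh) for j, a in enumerate(coeffs, start=1))

# Standard central-difference coefficients (2nd, 4th, 6th order).
second = [1 / 2]
fourth = [2 / 3, -1 / 12]
sixth = [3 / 4, -3 / 20, 1 / 60]

kh = 0.5  # a moderately resolved wave, k Δx = 0.5
errs = [abs(modified_wavenumber(c, kh) - kh) for c in (second, fourth, sixth)]
```

    A DRP scheme instead minimizes this wavenumber error in an integral sense over a band of kΔx, trading formal order of accuracy for a wider well-resolved range.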

  17. A comparison of two finite element models of tidal hydrodynamics using a North Sea data set

    USGS Publications Warehouse

    Walters, R.A.; Werner, F.E.

    1989-01-01

    Using the region of the English Channel and the southern bight of the North Sea, we systematically compare the results of two independent finite element models of tidal hydrodynamics. The model intercomparison provides a means for increasing our understanding of the relevant physical processes in the region in question as well as a means for the evaluation of certain algorithmic procedures of the two models. ?? 1989.

  18. Elastic Behavior of a Rubber Layer Bonded between Two Rigid Spheres.

    DTIC Science & Technology

    1988-05-01

    Keywords: Cracking, Composites, Compressibility, Deformation, Dilatancy, Elasticity, Elastomers, Failure, Fracture, Particle Reinforcement, Rubber, Stress Analysis. Finite element methods (FEM) have been employed to calculate the stresses and deformations set up by compression or extension of the layer, without invoking the condition of incompressibility.

  19. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE), and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed with a finite difference scheme. Implementation of the proposed approach is illustrated by two examples. In the first example, a stochastic ordinary differential equation is considered; this example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA are also demonstrated. The second example investigates flow through a microchannel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, are investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
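
    The mixed Adams-Bashforth/Crank-Nicolson (IMEX) time stepping used above can be sketched on a scalar toy problem rather than the stochastic NSE: take u' = a u + b u, step the "convective" a-term explicitly with second-order Adams-Bashforth and the "diffusive" b-term implicitly with Crank-Nicolson, and observe second-order convergence. The problem and all names below are illustrative assumptions, not the paper's equations:

```python
import math

def ab2_cn_step(u_prev, u_curr, a, b, dt):
    """One IMEX step for u' = a*u + b*u:
    (1 - dt*b/2) u^{n+1} = u^n + dt*a*(1.5*u^n - 0.5*u^{n-1}) + dt*b/2 * u^n."""
    rhs = u_curr + dt * a * (1.5 * u_curr - 0.5 * u_prev) + 0.5 * dt * b * u_curr
    return rhs / (1.0 - 0.5 * dt * b)

def integrate(a, b, T, n):
    """March u(0) = 1 to time T in n steps; returns the approximation of u(T)."""
    dt = T / n
    u_prev = 1.0
    # Bootstrap the two-step AB2 with one exact step (keeps 2nd-order accuracy).
    u_curr = math.exp((a + b) * dt)
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, ab2_cn_step(u_prev, u_curr, a, b, dt)
    return u_curr

a, b, T = 1.0, -2.0, 1.0
exact = math.exp((a + b) * T)
e1 = abs(integrate(a, b, T, 100) - exact)
e2 = abs(integrate(a, b, T, 200) - exact)  # halving dt: error drops ~4x
```

    Treating the stiff diffusive term implicitly keeps the step size limited only by the explicit convective term, which is the usual motivation for this IMEX pairing.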

  20. Testing Spatial Symmetry Using Contingency Tables Based on Nearest Neighbor Relations

    PubMed Central

    Ceyhan, Elvan

    2014-01-01

    We consider two types of spatial symmetry, namely, symmetry in the mixed or shared nearest neighbor (NN) structures. We use Pielou's and Dixon's symmetry tests which are defined using contingency tables based on the NN relationships between the data points. We generalize these tests to multiple classes and demonstrate that both the asymptotic and exact versions of Pielou's first type of symmetry test are extremely conservative in rejecting symmetry in the mixed NN structure and hence should be avoided, or only the Monte Carlo randomized version should be used. Under random labelling (RL), we derive the asymptotic distribution for Dixon's symmetry test and also observe that the usual independence test seems to be appropriate for Pielou's second type of test. Moreover, we apply variants of Fisher's exact test on the shared NN contingency table for Pielou's second test and determine the most appropriate version for our setting. We also consider pairwise and one-versus-rest type tests in post hoc analysis after a significant overall symmetry test. We investigate the asymptotic properties of the tests, prove their consistency under appropriate null hypotheses, and investigate their finite-sample performance through extensive Monte Carlo simulations. The methods are illustrated on a real-life ecological data set. PMID:24605061
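
    Pielou's and Dixon's NN-specific constructions are not reproduced here; as a hedged, generic sketch of a Monte Carlo randomized symmetry test on a contingency table, one can use a Bowker-type statistic with a binomial randomization null (every name and the null model below are assumptions for illustration):

```python
import random

def symmetry_statistic(table):
    """Bowker-type statistic: sum over i<j of (n_ij - n_ji)^2 / (n_ij + n_ji)."""
    k = len(table)
    s = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            d = table[i][j] + table[j][i]
            if d > 0:
                s += (table[i][j] - table[j][i]) ** 2 / d
    return s

def monte_carlo_pvalue(table, n_rep=2000, seed=1):
    """Null: each discordant (i,j) pair lands in cell (i,j) or (j,i) with prob 1/2."""
    rng = random.Random(seed)
    obs = symmetry_statistic(table)
    k = len(table)
    hits = 0
    for _ in range(n_rep):
        sim = [[0] * k for _ in range(k)]
        for i in range(k):
            for j in range(i + 1, k):
                for _ in range(table[i][j] + table[j][i]):
                    if rng.random() < 0.5:
                        sim[i][j] += 1
                    else:
                        sim[j][i] += 1
        if symmetry_statistic(sim) >= obs:
            hits += 1
    return (hits + 1) / (n_rep + 1)  # add-one Monte Carlo p-value
```

    A perfectly symmetric table yields a statistic of zero and a p-value of one, while a strongly asymmetric table is rejected at conventional levels.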

  1. Finite-Size Scaling Analysis of Binary Stochastic Processes and Universality Classes of Information Cascade Phase Transition

    NASA Astrophysics Data System (ADS)

    Mori, Shintaro; Hisakado, Masato

    2015-05-01

    We propose a finite-size scaling analysis method for binary stochastic processes X(t) ∈ {0,1} based on the second-moment correlation length ξ of the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with critical exponents β = 1 and ν∥ = 2.
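
    A minimal simulation sketch of model (i) and its autocorrelation is below. The abstract does not specify the mixture weight, so the parameter `mix` (probability of drawing the independent Bernoulli(q) variable) is an assumption, as are all function names:

```python
import random

def simulate(q, r, mix, T, seed=0):
    """Binary sequence: with prob `mix` draw an independent Bernoulli(q) bit,
    otherwise draw 1 with probability z = mean of the last r bits (analog rule)."""
    rng = random.Random(seed)
    x = [rng.random() < q for _ in range(r)]  # warm-up window
    for _ in range(T):
        if rng.random() < mix:
            bit = rng.random() < q
        else:
            z = sum(x[-r:]) / r
            bit = rng.random() < z
        x.append(bit)
    return [int(b) for b in x[r:]]

def autocorr(x, t):
    """Sample autocovariance C(t) of a 0/1 sequence at lag t."""
    n = len(x) - t
    m = sum(x) / len(x)
    return sum((x[i] - m) * (x[i + t] - m) for i in range(n)) / n
```

    For finite r one expects C(t) to decay exponentially, so the second-moment correlation length ξ estimated from such runs is finite, which is the quantity the proposed scaling analysis feeds on.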

  2. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.

  3. Finite-size scaling of clique percolation on two-dimensional Moore lattices

    NASA Astrophysics Data System (ADS)

    Dong, Jia-Qi; Shen, Zhou; Zhang, Yongwen; Huang, Zi-Gang; Huang, Liang; Chen, Xiaosong

    2018-05-01

    Clique percolation has attracted much attention due to its significance in understanding topological overlap among communities and dynamical instability of structured systems. Rich critical behavior has been observed in clique percolation on Erdős-Rényi (ER) random graphs, but few works have discussed clique percolation on finite dimensional systems. In this paper, we have defined a series of characteristic events, i.e., the historically largest size jumps of the clusters, in the percolating process of adding bonds and developed a new finite-size scaling scheme based on the interval of the characteristic events. Through the finite-size scaling analysis, we have found, interestingly, that, in contrast to the clique percolation on an ER graph where the critical exponents are parameter dependent, the two-dimensional (2D) clique percolation simply shares the same critical exponents with traditional site or bond percolation, independent of the clique percolation parameters. This has been corroborated by bridging two special types of clique percolation to site percolation on 2D lattices. Mechanisms for the difference of the critical behaviors between clique percolation on ER graphs and on 2D lattices are also discussed.
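
    The "characteristic event" idea above, tracking the historically largest jump of the biggest cluster while bonds are added, can be sketched with ordinary bond percolation and a union-find structure (a hedged simplification: plain bond percolation on a periodic square lattice, not clique percolation; all names are illustrative):

```python
import random

def largest_jump(L, seed=0):
    """Add the bonds of an L x L periodic square lattice in random order and
    track the largest single-step jump in the size of the biggest cluster."""
    rng = random.Random(seed)
    n = L * L
    parent = list(range(n))
    size = [1] * n

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    bonds = []
    for x in range(L):
        for y in range(L):
            i = x * L + y
            bonds.append((i, ((x + 1) % L) * L + y))  # right neighbour
            bonds.append((i, x * L + (y + 1) % L))    # down neighbour
    rng.shuffle(bonds)

    biggest, jump, jump_at = 1, 0, 0
    for step, (a, b) in enumerate(bonds, 1):
        ra, rb = find(a), find(b)
        if ra != rb:
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
            # record the historically largest jump and when it occurred
            if size[ra] - biggest > jump:
                jump, jump_at = size[ra] - biggest, step
            if size[ra] > biggest:
                biggest = size[ra]
    return biggest, jump, jump_at
```

    Repeating this over many random bond orders gives the distribution of the characteristic-event interval, which is the quantity the finite-size scaling scheme in the paper is built on.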

  4. Advances and trends in the development of computational models for tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Tanner, J. A.

    1985-01-01

    Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.

  5. Local existence of solutions to the Euler-Poisson system, including densities without compact support

    NASA Astrophysics Data System (ADS)

    Brauer, Uwe; Karp, Lavi

    2018-01-01

    Local existence and well-posedness for a class of solutions of the Euler-Poisson system are shown. These solutions have a density ρ which either falls off at infinity or has compact support. The solutions have finite mass and finite energy functional and include the static spherical solutions for γ = 6/5. The result is achieved by using weighted Sobolev spaces of fractional order and a new nonlinear estimate which makes it possible to estimate the physical density by the regularised nonlinear matter variable. Gamblin has also studied this setting, but using very different functional spaces. However, we believe that the functional setting we use is more appropriate to describe a physically isolated body and more suitable for studying the Newtonian limit.

  6. Source-Independent Quantum Random Number Generation

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bits/s.

  7. Systematic network coding for two-hop lossy transmissions

    NASA Astrophysics Data System (ADS)

    Li, Ye; Blostein, Steven; Chan, Wai-Yip

    2015-12-01

    In this paper, we consider network transmissions over a single or multiple parallel two-hop lossy paths. These scenarios occur in applications such as sensor networks or WiFi offloading. Random linear network coding (RLNC), where previously received packets are re-encoded at intermediate nodes and forwarded, is known to be a capacity-achieving approach for these networks. However, a major drawback of RLNC is its high encoding and decoding complexity. In this work, a systematic network coding method is proposed. We show through both analysis and simulation that the proposed method achieves higher end-to-end rate as well as lower computational cost than RLNC for finite field sizes and finite-sized packet transmissions.
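
    The paper's systematic scheme is not reproduced here, but the RLNC baseline it improves on can be sketched over GF(2): coded packets are random XOR combinations of the sources, and the receiver decodes by Gauss-Jordan elimination once the coefficient vectors reach full rank. All names below are illustrative:

```python
import random

def rlnc_encode(packets, n_coded, rng):
    """Each coded packet is a random GF(2) linear combination (bitwise XOR)
    of the k source packets; its coefficient vector travels in the header."""
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero combination
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gauss-Jordan elimination over GF(2); returns the k source packets,
    or None if the received coefficient vectors do not have full rank."""
    rows = [(c[:], p) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # rank deficient: wait for more coded packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        pc, pp = rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], pc)], rows[r][1] ^ pp)
    return [rows[col][1] for col in range(k)]
```

    The systematic idea is to send the k source packets uncoded first and only then random combinations, so that with few losses most packets need no elimination at all, which is where the complexity saving over plain RLNC comes from.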

  8. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

    We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  9. Who wins? Study of long-run trader survival in an artificial stock market

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; M. Focardi, Sergio; Marchesi, Michele; Raberto, Marco

    2003-06-01

    We introduce a multi-asset artificial financial market with a finite amount of cash and a finite number of stocks. The background trading is characterized by a random trading strategy constrained by the finiteness of resources and by market volatility. Stock price processes exhibit volatility clustering, fat-tailed distribution of returns and reversion to the mean. Three active trading strategies have been introduced and studied in two different market conditions: a steady market and a growing market with asset inflation. We show that the profitability of each strategy depends both on the periodicity of portfolio reallocation and on the market condition. The best performing strategy is the one that exploits the mean reversion characteristic of asset price processes.

  10. Scattering theory of efficient quantum transport across finite networks

    NASA Astrophysics Data System (ADS)

    Walschaers, Mattia; Mulet, Roberto; Buchleitner, Andreas

    2017-11-01

    We present a scattering theory for the efficient transmission of an excitation across a finite network with designed disorder. We show that the presence of randomly positioned network sites allows significant acceleration of the excitation transfer processes as compared to a dimer structure, but only if the disordered Hamiltonians are constrained to be centrosymmetric and exhibit a dominant doublet in their spectrum. We identify the cause of this efficiency enhancement to be the constructive interplay between disorder-induced fluctuations of the dominant doublet’s splitting and the coupling strength between the input and output sites to the scattering channels. We find that the characteristic strength of these fluctuations together with the channel coupling fully control the transfer efficiency.

  11. On the long range propagation of sound over irregular terrain

    NASA Technical Reports Server (NTRS)

    Howe, M. S.

    1984-01-01

    The theory of sound propagation over randomly irregular, nominally plane terrain of finite impedance is discussed. The analysis is an extension of the theory of coherent scatter originally proposed by Biot for an irregular rigid surface. It combines Biot's approach, wherein the surface irregularities are modeled by a homogeneous distribution of hemispherical bosses, with more conventional analyses in which the ground is modeled as a smooth plane of finite impedance. At sufficiently low frequencies the interaction of the surface irregularities with the nearfield of a ground-based source leads to the production of surface waves, which are effective in penetrating the ground shadow zone predicted for a smooth surface of the same impedance.

  12. The Role of Second Phase Hard Particles on Hole Stretchability of two AA6xxx Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiaohua; Sun, Xin; Golovashchenko, Sergey F.

The hole stretchability of two aluminum alloys (AA6111 and AA6022) is studied using a two-stage integrated finite element framework in which the edge geometry and edge damage from the hole-piercing process are carried into the subsequent hole-expansion process. Experimentally, AA6022 has been found to have higher hole expansion ratios than AA6111, an observation nicely captured by the finite element simulations. The main cause of the difference has been identified as the volume fraction of the randomly distributed second-phase hard particles, which plays a critical role in determining the fracture strains of the materials.

  13. Finite GUE Distribution with Cut-Off at a Shock

    NASA Astrophysics Data System (ADS)

    Ferrari, P. L.

    2018-03-01

We consider the totally asymmetric simple exclusion process with initial conditions generating a shock. The fluctuations of particle positions are asymptotically governed by the randomness around the two characteristic lines joining at the shock. Unlike in previous papers, we describe the correlation in space-time without employing the mapping to last passage percolation, which already fails to exist for the partially asymmetric model. We then consider a special case, where the asymptotic distribution is a cut-off of the distribution of the largest eigenvalue of a finite GUE matrix. Finally, we discuss the strength of the probabilistic and physically motivated approach and compare it with the mathematical difficulties of a direct computation.

  14. An adaptive approach to the physical annealing strategy for simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2013-02-01

A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on a previous finding about the search characteristics of threshold algorithms, namely, the primary role of relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, which is analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. Subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.
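    For orientation, finite-time SA on a random traveling salesman instance can be sketched with conventional geometric cooling and 2-opt moves. This is a minimal baseline, not the adaptive temperature-prediction scheme of the paper, and all parameter values are illustrative:

```python
import math
import random

def tour_length(cities, tour):
    # Length of the closed tour visiting the cities in the given order.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing(cities, t_start=1.0, t_end=1e-3, cooling=0.995, seed=0):
    # Conventional SA: geometric cooling schedule, 2-opt neighborhood moves.
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    rng.shuffle(tour)
    start_len = tour_length(cities, tour)
    best, best_len = tour[:], start_len
    t = t_start
    while t > t_end:
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        delta = tour_length(cities, cand) - tour_length(cities, tour)
        # Metropolis acceptance: always accept improvements, sometimes worsenings.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(cities, tour) < best_len:
                best, best_len = tour[:], tour_length(cities, tour)
        t *= cooling
    return best, best_len, start_len

rng_c = random.Random(1)
cities = [(rng_c.random(), rng_c.random()) for _ in range(20)]
best, best_len, start_len = simulated_annealing(cities)
```

    The adaptive method described above would replace the fixed `t_start` with an effective temperature predicted from the system's relaxation behavior.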

  15. Eshelby's problem of a spherical inclusion eccentrically embedded in a finite spherical body

    PubMed Central

    He, Q.-C.

    2017-01-01

    Resorting to the superposition principle, the solution of Eshelby's problem of a spherical inclusion located eccentrically inside a finite spherical domain is obtained in two steps: (i) the solution to the problem of a spherical inclusion in an infinite space; (ii) the solution to the auxiliary problem of the corresponding finite spherical domain subjected to appropriate boundary conditions. Moreover, a set of functions called the sectional and harmonic deviators are proposed and developed to work out the auxiliary solution in a series form, including the displacement and Eshelby tensor fields. The analytical solutions are explicitly obtained and illustrated when the geometric and physical parameters and the boundary condition are specified. PMID:28293141

  16. Randomized shortest-path problems: two related models.

    PubMed

    Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh

    2009-08-01

This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost; nothing special so far, as a standard shortest-path algorithm would do. Suppose, however, that you want to avoid purely deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability into the routing policy (i.e., the routing policy is randomized). This problem, which we call the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions for computing the optimal randomized policy, the one minimizing the expected routing cost, are derived. Iterating these necessary conditions, reminiscent of Bellman's value-iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism, allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
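    A minimal numeric sketch of the sum-over-paths idea on a toy four-node graph. The notation (elementwise weights w_ij = p_ref(i,j) exp(-theta c_ij), backward variables z solving a linear system) follows the usual RSP formalism, but the graph, costs, and parameter values here are invented for illustration:

```python
import numpy as np

def rsp_policy(C, P_ref, dest, theta):
    # Edge weights w_ij = p_ref(i,j) * exp(-theta * c_ij); the backward
    # variables z solve the linear system (I - W) z = e_dest, and the
    # optimal randomized policy follows by normalization.
    W = P_ref * np.exp(-theta * C)
    W[dest, :] = 0.0                       # destination is absorbing
    n = C.shape[0]
    e = np.zeros(n)
    e[dest] = 1.0
    z = np.linalg.solve(np.eye(n) - W, e)
    P = W * z[None, :] / z[:, None]        # randomized transition probabilities
    P[dest, dest] = 1.0
    return P, -np.log(z) / theta           # policy and soft minimal costs

# Toy graph: 0->1 (cost 1), 0->2 (cost 4), 1->2 (cost 1), 1->3 (cost 1),
# 2->3 (cost 1); destination node 3; least-cost path 0->1->3 has cost 2.
C = np.zeros((4, 4))
P_ref = np.zeros((4, 4))
C[0, 1], C[0, 2], C[1, 2], C[1, 3], C[2, 3] = 1, 4, 1, 1, 1
P_ref[0, 1] = P_ref[0, 2] = 0.5
P_ref[1, 2] = P_ref[1, 3] = 0.5
P_ref[2, 3] = 1.0
P, cost = rsp_policy(C, P_ref, dest=3, theta=20.0)
```

    As theta grows, the randomized policy concentrates on the least-cost path; as theta shrinks, it relaxes toward the reference random walk, trading cost for exploration.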

  17. Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices

    NASA Astrophysics Data System (ADS)

    Vishnepolsky, Rachel

A random walk on a discrete group satisfies a local limit theorem with power-law exponent α if the return probabilities follow the asymptotic law P{return to starting point after n steps} ~ C ρ^n n^(-α). A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power-law exponent. Given two groups that obey universal local limit theorems, it is not known whether their Cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the Cartesian product of a nonamenable group whose Cayley graph is a tree and the integer lattice. As corollaries, we derive large deviations estimates and a central limit theorem.
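    For intuition (a sanity check in the simplest setting, not part of the thesis): on the integer lattice Z the simple random walk has ρ = 1 and α = 1/2, which can be estimated from exact return probabilities:

```python
import math

def return_prob(n):
    # Exact probability that the simple random walk on Z is back at the
    # origin after 2n steps: C(2n, n) / 4^n.
    return math.comb(2 * n, n) / 4 ** n

# p(n) ~ C * rho^n * n^(-alpha) with rho = 1 on Z; estimate alpha from the
# log-log slope between two sample sizes.
n1, n2 = 500, 1000
alpha_est = -(math.log(return_prob(n2)) - math.log(return_prob(n1))) \
            / (math.log(n2) - math.log(n1))
```

    The estimate lands very close to 1/2, the universal exponent for the one-dimensional lattice.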

  18. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
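    A minimal numeric sketch of the equivalent-ensemble idea. The segment count, tolerance, and test signals below are illustrative, not the specific variance tests of the paper:

```python
import numpy as np

def record_mean_spread(x, n_records):
    # Segment one long time history into equal, finite sample records (the
    # "equivalent ensemble") and measure how much the record averages
    # scatter relative to the overall fluctuation level.  For a weakly
    # stationary process this spread is of the order of the sampling error.
    m = len(x) // n_records
    rec = np.asarray(x[: m * n_records]).reshape(n_records, m)
    return rec.mean(axis=1).std() / rec.std()

rng = np.random.default_rng(0)
stationary = rng.standard_normal(20_000)               # weakly stationary signal
trending = stationary + np.linspace(0.0, 5.0, 20_000)  # drifting mean

spread_ok = record_mean_spread(stationary, 20)
spread_bad = record_mean_spread(trending, 20)
```

    The stationary record gives a spread near 1/sqrt(m), while the drifting record's segment averages clearly fail the time-invariance check.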

  19. Modal Parameter Identification and Numerical Simulation for Self-anchored Suspension Bridges Based on Ambient Vibration

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Sun, Li Guo

    2018-06-01

This paper takes the Nanjing-Hangzhou high-speed overbridge, a self-anchored suspension bridge, as its research target and identifies the dynamic characteristic parameters of the bridge by applying the peak-picking method to velocity response data collected under ambient excitation by 7 vibration pickup sensors set on the bridge deck. A three-dimensional finite element model of the full bridge is set up in ABAQUS and amended on the basis of the identified modal parameters and the suspender forces measured by the PDV100 laser vibrometer. The study shows that the modal parameters can be identified well from the bridge vibration velocities collected at the 7 survey points. The identified modal parameters and measured suspender forces can serve as the basis for amending the finite element model of the suspension bridge. The amended model truthfully reflects the structural physical features and can serve as the benchmark model for long-term health monitoring and condition assessment of the bridge.
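    The peak-picking step can be sketched on synthetic data. The sampling rate, mode frequencies, and noise level below are invented for illustration; this is not the bridge data:

```python
import numpy as np

# Synthetic "ambient vibration" record with two modes plus noise
# (hypothetical values: 2.5 Hz and 7.0 Hz modes, 200 Hz sampling).
fs = 200.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 2.5 * t) + 0.5 * np.sin(2 * np.pi * 7.0 * t)
     + 0.3 * rng.standard_normal(t.size))

# Segment-averaged power spectral density (Welch-style averaging).
seg = 2048
segs = x[: (x.size // seg) * seg].reshape(-1, seg)
win = np.hanning(seg)
psd = (np.abs(np.fft.rfft(segs * win, axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(seg, 1 / fs)

# Peak picking: prominent local maxima of the averaged PSD are taken as
# modal-frequency estimates.
peaks = [i for i in range(1, psd.size - 1)
         if psd[i] > psd[i - 1] and psd[i] > psd[i + 1]
         and psd[i] > 0.1 * psd.max()]
modal_freqs = freqs[peaks]
```

    Both modal frequencies are recovered to within the spectral resolution fs/seg of about 0.1 Hz.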

  20. Automatic differentiation evaluated as a tool for rotorcraft design and optimization

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Where the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; this method produces derivatives to machine accuracy at a cost comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact to within machine accuracy and, unlike those obtained with finite-differencing techniques, do not depend on the selection of a step size.
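    The chain-rule mechanics behind AD can be sketched with a tiny forward-mode dual-number class. This is a generic illustration, not ADIFOR's FORTRAN source-transformation implementation, and the function f is a made-up stand-in for an analysis code:

```python
class Dual:
    """Forward-mode AD value: carries f(x) and f'(x) together, so every
    arithmetic operation applies the chain rule systematically."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._wrap(o)
        # Product rule: (fg)' = f'g + fg'.
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    # Stand-in analysis code: f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2.
    return x * x * x + 2 * x

x0 = 1.5
ad_deriv = f(Dual(x0, 1.0)).der          # exact to machine accuracy
h = 1e-6
fd_deriv = (f(x0 + h) - f(x0)) / h       # depends on the step size h
```

    At x0 = 1.5 the dual-number derivative equals 8.75 exactly, while the finite-difference value carries a truncation error that depends on h.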

  1. Numerical simulation of temperature field in K9 glass irradiated by ultraviolet pulse laser

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Fang, Xiaodong

    2015-10-01

Optical components of photoelectric systems are easily damaged by irradiation from high-power pulse lasers, so the effect of high-power pulse laser irradiation on K9 glass was investigated. A thermodynamic model of K9 glass irradiated by an ultraviolet pulse laser was established using the finite element software ANSYS. The article analyzes the key steps in simulating ultraviolet pulse laser damage of K9 glass with ANSYS: building the finite element model, meshing, applying the pulse laser load, setting the initial and boundary conditions, and setting the thermophysical parameters of the material. A finite element method (FEM) model was established and a numerical analysis performed to calculate the temperature field in K9 glass irradiated by an ultraviolet pulse laser. The simulation results showed that the temperature of the irradiated area exceeded the melting point of K9 glass even when the incident laser energy was low. Thermal damage therefore dominates the damage mechanism of K9 glass, and the melting phenomenon should be distinctly observable.

  2. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    NASA Astrophysics Data System (ADS)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  3. Stabilization of a locally minimal forest

    NASA Astrophysics Data System (ADS)

    Ivanov, A. O.; Mel'nikova, A. E.; Tuzhilin, A. A.

    2014-03-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles.

  4. Exact finite volume expectation values of \\overline{Ψ}Ψ in the massive Thirring model from light-cone lattice correlators

    NASA Astrophysics Data System (ADS)

    Hegedűs, Árpád

    2018-03-01

    In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator \\overline{Ψ}Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators being located at neighboring sites of the lattice. The operator \\overline{Ψ}Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can be computed also from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.

  5. Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam

    NASA Astrophysics Data System (ADS)

    Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa

    2017-08-01

In this article, a wavelet-based spectral finite element (WSFE) model is formulated for time-domain and wave-domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to the conventional FFT-based spectral finite element (SFE) model, except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of the Daubechies wavelet basis functions helps to rule out the problems of the SFE model due to its periodicity assumption, especially during the inverse transformation back to the time domain. The high accuracy of the WSFE model is then evaluated by comparing its results with conventional finite element and SFE results. The effects of moving beam speed and axial tensile force on the vibration and wave characteristics, and on the static and dynamic stabilities, of the moving beam are investigated.

  6. Computing Finite-Time Lyapunov Exponents with Optimally Time Dependent Reduction

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Farazmand, Mohammad; Sapsis, Themis; Haller, George

    2016-11-01

We present a method to compute Finite-Time Lyapunov Exponents (FTLE) of a dynamical system using the Optimally Time-Dependent (OTD) reduction recently introduced by H. Babaee and T. P. Sapsis. The OTD modes are a set of finite-dimensional, time-dependent, orthonormal basis functions {u_i(x, t)}, i = 1, ..., N, that capture the directions associated with transient instabilities. The evolution equation of the OTD modes is derived from a minimization principle that optimally approximates the most unstable directions over finite times. To compute the FTLE, we evolve a single OTD mode along with the nonlinear dynamics. We approximate the FTLE from the reduced system obtained by projecting the instantaneous linearized dynamics onto the OTD mode. This results in a significant reduction in the computational cost compared to conventional methods for computing FTLE. We demonstrate the efficiency of our method for the double-gyre and ABC flows. ARO project 66710-EG-YIP.
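    For comparison, the conventional flow-map route to the FTLE (the costly baseline the OTD reduction is designed to avoid) can be sketched for the double-gyre test flow. The flow parameters are the standard ones for this benchmark; the integration step counts and perturbation size are illustrative:

```python
import numpy as np

# Standard time-dependent double-gyre test flow on [0, 2] x [0, 1].
A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def velocity(t, p):
    x, y = p
    st = np.sin(om * t)
    f = eps * st * x ** 2 + (1 - 2 * eps * st) * x
    dfdx = 2 * eps * st * x + (1 - 2 * eps * st)
    return np.array([-np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y),
                      np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx])

def flow_map(p0, t0, T, steps=200):
    # RK4 integration of a tracer from t0 to t0 + T.
    p = np.array(p0, dtype=float)
    dt, t = T / steps, t0
    for _ in range(steps):
        k1 = velocity(t, p)
        k2 = velocity(t + dt / 2, p + dt / 2 * k1)
        k3 = velocity(t + dt / 2, p + dt / 2 * k2)
        k4 = velocity(t + dt, p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return p

def ftle(p0, t0=0.0, T=10.0, d=1e-4):
    # Conventional route: finite-difference flow-map Jacobian, then the
    # largest eigenvalue of the Cauchy-Green tensor.
    J = np.empty((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = d
        J[:, j] = (flow_map(np.array(p0) + dp, t0, T)
                   - flow_map(np.array(p0) - dp, t0, T)) / (2 * d)
    C = J.T @ J
    return np.log(np.linalg.eigvalsh(C).max()) / (2 * abs(T))

val = ftle([1.0, 0.5])
```

    Each FTLE evaluation here costs several full trajectory integrations per grid point; the OTD approach replaces this with a single mode evolved alongside the nonlinear dynamics.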

  7. A Markov model for the temporal dynamics of balanced random networks of finite size

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type.
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644

  8. Time-evolution of grain size distributions in random nucleation and growth crystallization processes

    NASA Astrophysics Data System (ADS)

    Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.

    2010-02-01

We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and its moments in terms of fundamental and measurable parameters of the model.

  9. On the fluctuations of sums of independent random variables.

    PubMed

    Feller, W

    1969-07-01

If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)^2 = E(S(n)^2). The occasional maxima of the ratios S(n)/s(n) are surprisingly large, and the problem is to estimate the extent of their probable fluctuations. Specifically, let S*(n) = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S*(n) ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ± ε a(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.
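    A quick numeric illustration of how large the occasional maxima of S(n)/s(n) get (a simulation sketch only; the step distribution and bounds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 100_000
x = rng.choice([-1.0, 1.0], size=n_steps)   # zero mean, unit variance steps
cum = np.cumsum(x)                          # cumulative sums S(n)
n = np.arange(1, n_steps + 1)
ratio = np.abs(cum) / np.sqrt(n)            # |S(n)| / s(n), since s(n)^2 = n
peak = ratio.max()
# The law-of-the-iterated-logarithm envelope at the final time:
envelope = np.sqrt(2 * np.log(np.log(n_steps)))
```

    In a typical run the running maximum is several times larger than the typical ratio of order 1, yet it stays on the scale of the sqrt(2 log log n) envelope, which is about 2.2 at n = 10^5.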

  10. Defining fitness in an uncertain world.

    PubMed

    Crewe, Paul; Gratwick, Richard; Grafen, Alan

    2018-04-01

The recently elucidated definition of fitness employed by Fisher in his fundamental theorem of natural selection is combined with reproductive values as appropriately defined in the context of both random environments and continuing fluctuations in the distribution over classes in a class-structured population. We obtain astonishingly simple results, generalisations of the Price Equation and the fundamental theorem, that show natural selection acting only through the arithmetic expectation of fitness over all uncertainties, in contrast to previous studies with fluctuating demography, in which natural selection looks rather complicated. Furthermore, our setting permits each class to have its characteristic ploidy, thus covering haploidy, diploidy and haplodiploidy at the same time; and allows arbitrary classes, including continuous variables such as condition. The simplicity is achieved by focussing just on the effects of natural selection on genotype frequencies: while other causes are present in the model, and the effect of natural selection is assessed in their presence, these causes will have their own further effects on genotype frequencies that are not assessed here. Also, Fisher's uses of reproductive value are shown to have two ambivalences, and a new axiomatic foundation for reproductive value is endorsed. The results continue the formal darwinism project, and extend support for the individual-as-maximising-agent analogy to finite populations with random environments and fluctuating class-distributions. The model may also lead to improved ways to measure fitness in real populations.

  11. Accelerating String Set Matching in FPGA Hardware for Bioinformatics Research

    PubMed Central

    Dandass, Yoginder S; Burgess, Shane C; Lawrence, Mark; Bridges, Susan M

    2008-01-01

Background: This paper describes techniques for accelerating the performance of the string set matching problem with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences). Results: This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM as opposed to FPGA logic resources for FSM implementation also enables rapid reconfiguration of the FPGA without the place and routing delays associated with complex digital designs. Conclusion: Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation. PMID:18412963
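    The bit-split FPGA design builds on the classic Aho-Corasick automaton. A plain (not bit-split) software version of that FSM can be sketched as follows, using made-up short peptide fragments rather than data from the paper:

```python
from collections import deque

def build_aho_corasick(patterns):
    # Goto/fail/output tables of the classic Aho-Corasick FSM (the software
    # analogue of the hardware state machines described above).
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    # Breadth-first computation of failure links (root's children keep fail 0).
    q = deque(goto[0].values())
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
            q.append(t)
    return goto, fail, out

def search(text, tables):
    # Run the FSM over the text, reporting (position, pattern) matches.
    goto, fail, out = tables
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits

# Hypothetical peptide fragments matched against a short sequence.
tables = build_aho_corasick(["MKT", "KTA", "TAG"])
hits = search("MKTAG", tables)
# hits -> [(0, 'MKT'), (1, 'KTA'), (2, 'TAG')]
```

    All three overlapping fragments are reported in a single left-to-right pass, which is the property the bit-split hardware FSMs exploit in parallel.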

  12. Active control of the forced and transient response of a finite beam. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Post, John T.

    1990-01-01

Structural vibrations from a point force are modelled on a finite beam. This research explores the theoretical limit on controlling beam vibrations by utilizing another point source as an active controller. Three different types of excitation are considered: harmonic, random, and transient. For harmonic excitation, control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam. Control over a subregion may be obtained even between resonant frequencies, at the cost of increasing the vibration outside of the control region. For random excitation, integrating the expected value of the squared displacement over the required interval is shown to yield the identical cost function as obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation, whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. The form of the controller is specified as either one or two delayed pulses, thus constraining the controller to be causal. The best possible control is examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses.

  13. A study of the response of nonlinear springs

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Knott, T. W.; Johnson, E. R.

    1991-01-01

The various phases of developing a methodology for studying the response of a spring-reinforced arch subjected to a point load are discussed. The arch is simply supported at its ends, with both the spring and the point load assumed to be at midspan. The spring is present to offset the typical snap-through behavior normally associated with arches, and to provide a structure that responds with constant resistance over a finite displacement. The phases discussed consist of the following: (1) development of the closed-form solution for the shallow arch case; (2) development of a finite difference analysis to study (shallow) arches; and (3) development of a finite element analysis for studying more general shallow and nonshallow arches. The two numerical analyses rely on a continuation scheme to move the solution past limit points and onto bifurcated paths, both characteristics being common to the arch problem. An eigenvalue method is used for the continuation scheme. The finite difference analysis is based on a mixed formulation (force and displacement variables) of the governing equations. The governing equations for the mixed formulation are in first-order form, making the finite difference implementation convenient. However, the mixed formulation is not well suited to the eigenvalue continuation scheme; this provided the motivation for the displacement-based finite element analysis. Both the finite difference and the finite element analyses are compared with the closed-form shallow arch solution. Agreement is excellent, except for the potential problems with the finite difference analysis and the continuation scheme. Agreement between the finite element analysis and another investigator's numerical analysis for deep arches is also good.

  14. Annual Review of Research Under the Joint Services Electronics Program.

    DTIC Science & Technology

    1983-12-01

Total Number of Professionals: PI 2, RA 2 (1/2 time). 6. Summary: Our research into the theory of nonlinear control systems and applications to...known that all linear time-invariant controllable systems can be transformed to Brunovsky canonical form by a transformation consisting only of...estimating the impulse response (= transfer matrix) of a discrete-time linear system x(t+1) = Fx(t) + Gu(t), y(t) = Hx(t) from a finite set of finite

  15. On certain families of rational functions arising in dynamics

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.

    1979-01-01

    It is noted that linear systems, depending on parameters, can occur in diverse situations including families of rational solutions to the Korteweg-de Vries equation or to the finite Toda lattice. The inverse scattering method used by Moser (1975) to obtain canonical coordinates for the finite homogeneous Toda lattice can be used for the synthesis of RC networks. It is concluded that the multivariable RC setting is ideal for the analysis of the periodic Toda lattice.

  16. Transverse spin correlations of the random transverse-field Ising model

    NASA Astrophysics Data System (ADS)

    Iglói, Ferenc; Kovács, István A.

    2018-03-01

    The critical behavior of the random transverse-field Ising model in finite-dimensional lattices is governed by infinite-disorder fixed points, several properties of which have already been calculated by the use of the strong-disorder renormalization-group (SDRG) method. Here we extend these studies and calculate the connected transverse-spin correlation function by a numerical implementation of the SDRG method in d = 1, 2, and 3 dimensions. At the critical point an algebraic decay of the form ~ r^(-eta_t) is found, with a decay exponent of approximately eta_t ≈ 2 + 2d. In d = 1 the results are related to dimer-dimer correlations in the random antiferromagnetic XX chain and have been tested by numerical calculations using free-fermionic techniques.
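
    The SDRG decimation itself is simple to sketch in d = 1. The rules below are the standard ones for the random transverse-field Ising chain; this is an illustrative open-chain implementation, not the authors' code, and it only tracks the effective couplings (not the correlation functions computed in the paper).

```python
import numpy as np

def sdrg_chain(J, h):
    """Strong-disorder RG for an open random transverse-field Ising chain.
    h[i] is the transverse field on site i, J[i] the bond between sites
    i and i+1.  Repeatedly decimate the strongest coupling:
      - strongest is a field h[i]: site i is frozen; its two bonds are
        replaced by an effective bond J[i-1] * J[i] / h[i]
      - strongest is a bond J[i]: sites i and i+1 merge into a cluster
        with effective field h[i] * h[i+1] / J[i]
    Returns the last effective field remaining."""
    J, h = list(J), list(h)
    while len(h) > 1:
        i_h = int(np.argmax(h))
        i_J = int(np.argmax(J))
        if h[i_h] >= J[i_J]:
            i = i_h                      # field decimation
            if 0 < i < len(h) - 1:
                J[i - 1] = J[i - 1] * J[i] / h[i]
                del J[i]
            elif i == 0:                 # edge site: bond simply drops out
                del J[0]
            else:
                del J[-1]
            del h[i]
        else:
            i = i_J                      # bond decimation: merge the pair
            h[i] = h[i] * h[i + 1] / J[i]
            del h[i + 1]
            del J[i]
    return h[0]
```

    Because the decimated coupling is always the current maximum, the generated effective couplings only ever decrease, which is what drives the flow to the infinite-disorder fixed point.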

  17. Local Neighbourhoods for First-Passage Percolation on the Configuration Model

    NASA Astrophysics Data System (ADS)

    Dereich, Steffen; Ortgiese, Marcel

    2018-04-01

    We consider first-passage percolation on the configuration model. Once the network has been generated, each edge is assigned an i.i.d. weight modeling the passage time of a message along that edge. Then two vertices, a sender and a recipient, are chosen independently and uniformly at random, and all edges along the geodesic connecting them are coloured red (in the case that both vertices lie in the same component). In this article we prove local limit theorems for the coloured graph around the recipient in the spirit of Benjamini and Schramm. We consider the explosive regime, in which the random distances are of finite order, and the Malthusian regime, in which the random distances are of logarithmic order.
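
    A minimal sketch of the construction, assuming exponential edge weights and stub matching for the configuration model (all function names are illustrative, and self-loops are simply discarded):

```python
import heapq
import random

def configuration_model(degrees, rng):
    # stub matching: vertex v contributes degrees[v] half-edges, which
    # are paired uniformly at random; self-loops dropped for simplicity
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    pairs = zip(stubs[0::2], stubs[1::2])
    return [(u, v) for u, v in pairs if u != v]

def first_passage(degrees, seed=1):
    rng = random.Random(seed)
    adj = {v: [] for v in range(len(degrees))}
    for u, v in configuration_model(degrees, rng):
        w = rng.expovariate(1.0)      # i.i.d. exponential passage time
        adj[u].append((v, w))
        adj[v].append((u, w))
    sender = rng.randrange(len(degrees))
    # Dijkstra from the sender: first-passage times plus predecessor tree
    dist, pred, pq = {sender: 0.0}, {}, [(0.0, sender)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                pred[v] = u
                heapq.heappush(pq, (d + w, v))
    # recipient chosen uniformly among the sender's component
    recipient = rng.choice([v for v in dist if v != sender])
    path = [recipient]                # geodesic: the "red" edges
    while path[-1] != sender:
        path.append(pred[path[-1]])
    return dist[recipient], path[::-1]
```

    The predecessor chain recovered at the end is exactly the set of edges that would be coloured red in the paper's construction.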

  18. Numerical analysis for finite-range multitype stochastic contact financial market dynamic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ge; Wang, Jun; Fang, Wen, E-mail: fangwen@bjtu.edu.cn

    In an attempt to reproduce and study the dynamics of financial markets, a random agent-based financial price model is developed and investigated by the finite-range multitype contact dynamic system, in which the interaction and dispersal of different types of investment attitudes in a stock market are imitated by the spreading of viruses. With different parameters of birth rates and finite range, the normalized return series are simulated by the Monte Carlo method and studied numerically by power-law distribution analysis and autocorrelation analysis. To better understand the nonlinear dynamics of the return series, a q-order autocorrelation function and a multi-autocorrelation function are also defined in this work. Comparisons of the statistical behaviors of return series from the agent-based model with the daily historical returns of the Shanghai Composite Index and Shenzhen Component Index indicate that the proposed model is a reasonable qualitative explanation for the price formation process of stock market systems.
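
    The paper defines its own q-order autocorrelation function; the abstract does not give the formula, so the sketch below assumes one common form, the normalized autocorrelation of |r_t|^q, purely as an illustration of the idea:

```python
import numpy as np

def q_autocorrelation(r, q, lags):
    """Normalized autocorrelation of |r_t|**q at the given lags.
    By construction the value at lag 0 is exactly 1."""
    x = np.abs(np.asarray(r, dtype=float)) ** q
    x = x - x.mean()                      # centre the transformed series
    var = np.mean(x * x)
    return np.array([np.mean(x[: len(x) - tau] * x[tau:]) / var
                     for tau in lags])
```

    For q = 2 this measures volatility clustering: an i.i.d. return series gives values near zero at positive lags, while real market returns typically show a slow decay.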

  19. Finite-temperature behavior of a classical spin-orbit-coupled model for YbMgGaO4 with and without bond disorder

    NASA Astrophysics Data System (ADS)

    Parker, Edward; Balents, Leon

    2018-05-01

    We present the results of finite-temperature classical Monte Carlo simulations of a strongly spin-orbit-coupled nearest-neighbor triangular-lattice model for the candidate U(1) quantum spin liquid YbMgGaO4 at large system sizes. We find a single continuous finite-temperature stripe-ordering transition with slowly diverging heat capacity that completely breaks the sixfold ground-state degeneracy, despite the absence of a known conformal field theory describing such a transition. We also simulate the effect of random-bond disorder in the model, and find that even weak bond disorder destroys the transition by fragmenting the system into very large domains, possibly explaining the lack of observed ordering in the real material. The Imry-Ma argument only partially explains this fragility to disorder, and we extend the argument with a physical explanation for the preservation of our system's time-reversal symmetry even under a disorder model that preserves the same symmetry.
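
    Finite-temperature classical Monte Carlo of the kind described can be sketched with single-spin-flip Metropolis updates. The code below substitutes a plain ferromagnetic Ising model on a periodic triangular lattice for the paper's spin-orbit-coupled Hamiltonian, purely to show the sampling loop; the substitution is deliberate and the model is not the one studied in the paper.

```python
import numpy as np

def metropolis_ising_triangular(L, T, sweeps, seed=0):
    """Single-spin-flip Metropolis for a ferromagnetic nearest-neighbour
    Ising model (J = 1) on an L x L periodic triangular lattice.
    Returns the magnetization per site after the given number of sweeps."""
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)        # ordered initial configuration
    # six triangular-lattice neighbours in skew coordinates
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    for _ in range(sweeps):
        for _ in range(L * L):            # one sweep = L*L attempted flips
            i, j = rng.integers(0, L, size=2)
            field = sum(s[(i + di) % L, (j + dj) % L] for di, dj in nbrs)
            dE = 2.0 * s[i, j] * field    # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
    return s.mean()
```

    Well below the ordering temperature the seeded run stays magnetized; production studies would of course add equilibration, measurement averaging, and the actual anisotropic couplings.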

  20. Vibration study of a vehicle suspension assembly with the finite element method

    NASA Astrophysics Data System (ADS)

    Cătălin Marinescu, Gabriel; Castravete, Ştefan-Cristian; Dumitru, Nicolae

    2017-10-01

    The present work outlines a methodology for analysing various vibration effects on the suspension components of a vehicle. A McPherson-type suspension from an existing vehicle was modelled using CAD software. Using the CAD model as input, a finite element model of the suspension assembly was developed. The Abaqus finite element software was used to pre-process, solve, and post-process the results. Geometric nonlinearities are included in the model, as are severe sources of nonlinearity such as friction and contact. The McPherson spring is modelled as a linear spring. The analysis includes several steps: preload, modal analysis, reduction of the model to 200 generalized coordinates, a deterministic external excitation, and a random excitation representing different types of roads. The vibration data used as input for the simulation were previously obtained by experimental means. The mathematical expressions used for the simulation are also presented in the paper.
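
    The modal-analysis step mentioned above amounts to the undamped generalized eigenproblem K phi = omega^2 M phi. A minimal sketch on a hypothetical two-mass spring chain (unit masses and stiffnesses, fixed-free; not the suspension model):

```python
import numpy as np

def natural_frequencies(K, M):
    """Undamped modal analysis: solve K phi = omega**2 M phi by reducing
    to a symmetric standard eigenproblem via the Cholesky factor of M."""
    Lc = np.linalg.cholesky(M)            # M = Lc @ Lc.T
    Linv = np.linalg.inv(Lc)
    A = Linv @ K @ Linv.T                 # symmetric reduced matrix
    lam = np.linalg.eigvalsh(A)           # lam = omega**2, ascending
    return np.sqrt(np.clip(lam, 0.0, None))

# hypothetical 2-DOF chain: unit masses, unit springs, fixed-free
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
M = np.eye(2)
omega = natural_frequencies(K, M)
```

    In a model-reduction workflow, keeping the eigenvectors of the lowest modes as a basis is what cuts the full model down to a few hundred generalized coordinates.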
