Sample records for discrete random variable

  1. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables that approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such an understanding is useful in evaluating the performance of data compression schemes.
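
    A minimal numerical sketch of the straight-line relationship stated above, assuming (as in the standard maximum-entropy argument) that the entropy maximizer for a fixed L_p norm of an unconstrained continuous variable is the generalized normal distribution; the values of p and of the norm below are illustrative and not taken from the report.

```python
# Sketch: for an unconstrained continuous random variable with a fixed L_p norm
# s = (E|X|^p)^(1/p), the entropy maximizer is assumed to be the generalized
# normal density proportional to exp(-|x|^p / (p s^p)). Its differential
# entropy then differs from log(s) by a constant depending only on p.
import numpy as np
from scipy.stats import gennorm

p = 4.0                                     # illustrative order
for s in [0.5, 1.0, 2.0, 4.0]:              # illustrative L_p norms
    alpha = s * p ** (1.0 / p)              # gennorm scale giving E|X|^p = s**p
    h = gennorm(p, scale=alpha).entropy()   # differential entropy in nats
    print(f"log s = {np.log(s):+.3f}   max entropy = {h:.3f}")
# The difference h - log(s) is the same for every s: a straight line of slope 1.
```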

  2. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained; their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also derived.

  3. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  4. Mutual Information between Discrete Variables with Many Categories using Recursive Adaptive Partitioning

    PubMed Central

    Seok, Junhee; Seon Kang, Yeong

    2015-01-01

    Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated from their joint probabilities estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which can be easily found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method that provides stable estimates of the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study for diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
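
    For context, a minimal sketch of the conventional plug-in calculation that the abstract contrasts with (mutual information from empirical joint frequencies); this is not the proposed recursive adaptive partitioning method, and the data are simulated.

```python
# Sketch of the conventional plug-in estimate: mutual information from
# empirical joint frequencies of two discrete (categorical) variables.
import numpy as np

def plugin_mutual_information(x, y):
    """I(X;Y) in nats from paired category labels x, y (1-D arrays)."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)              # contingency table of counts
    pxy = joint / joint.sum()                    # empirical joint probabilities
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0) for empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.integers(0, 50, size=200)                # many categories, few samples:
print(plugin_mutual_information(x, x))           # the regime where this estimator degrades
```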

  5. A comparison of three random effects approaches to analyze repeated bounded outcome scores with an application in a stroke revalidation study.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2008-12-30

    Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted to a finite interval, often occur in practice. Examples are compliance measures, quality of life measures, etc. In this paper we examine three related random effects approaches to analyze longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic-transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable, which, after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We also consider the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and a longitudinal project, which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.

  6. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
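
    The article works the example in Excel; the following is a hedged Python sketch of the same chain of ideas (mean and variance of a discrete random variable, the binomial as a sum of independent Bernoulli variables, and the normal approximation suggested by the central limit theorem), with illustrative parameter values.

```python
# Sketch (in Python rather than the article's Excel): mean and variance of a
# discrete random variable, the binomial as a sum of independent Bernoullis,
# and the normal approximation suggested by the central limit theorem.
import numpy as np

p, n = 0.3, 50
values = np.array([0, 1])                    # one Bernoulli(p) trial
probs = np.array([1 - p, p])
mean = np.sum(values * probs)                # E[X] = p
var = np.sum((values - mean) ** 2 * probs)   # Var[X] = p(1 - p)
print("binomial mean, variance:", n * mean, n * var)   # mean/variance of the sum

rng = np.random.default_rng(1)
sums = rng.binomial(n, p, size=100_000)      # each draw is a sum of n Bernoullis
print("simulated mean, variance:", sums.mean(), sums.var())
# For large n the histogram of `sums` is close to Normal(np, np(1-p)) (CLT).
```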

  7. Non-equilibrium Green's functions study of discrete dopants variability on an ultra-scaled FinFET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valin, R., E-mail: r.valinferreiro@swansea.ac.uk; Martinez, A., E-mail: a.e.Martinez@swansea.ac.uk; Barker, J. R., E-mail: john.barker@glasgow.ac.uk

    In this paper, we study the effect of random discrete dopants on the performance of a 6.6 nm channel length silicon FinFET. The discrete dopants have been distributed randomly in the source/drain region of the device. Due to the small dimensions of the FinFET, a quantum transport formalism based on the non-equilibrium Green's functions has been deployed. The transfer characteristics for several devices that differ in location and number of dopants have been calculated. Our results demonstrate that discrete dopants modify the effective channel length and the height of the source/drain barrier, consequently changing the channel control of the charge. This effect becomes more significant at high drain bias. As a consequence, there is a strong effect on the variability of the on-current, off-current, sub-threshold slope, and threshold voltage. Finally, we have also calculated the mean and standard deviation of these parameters to quantify their variability. The obtained results show that the variability at high drain bias is 1.75 times larger than at low drain bias. However, the variability of the on-current, off-current, and sub-threshold slope remains independent of the drain bias. In addition, we have found that a large source-to-drain tunnelling current occurs at low gate bias.

  8. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    USGS Publications Warehouse

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
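
    A minimal sketch of the three-step procedure as described (jitter the bounded counts to a continuous variable, logit-transform between the bounds, fit a linear quantile regression, then repeat and average), using statsmodels' QuantReg on simulated data; the covariate, bounds, and quantile level are illustrative, and this is not the authors' code.

```python
# Minimal sketch of the described procedure on simulated data: jitter bounded
# counts to a continuous variable, logit-transform between the bounds, fit a
# linear quantile regression, and average the coefficients over jitterings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, lower, upper = 300, 0.0, 3.0              # counts bounded in [0, 3]
x = rng.normal(size=n)                       # an illustrative covariate
y = rng.integers(0, 4, size=n).astype(float)

tau, n_jitter, coefs = 0.5, 20, []
for _ in range(n_jitter):
    yj = y + rng.uniform(size=n)                          # step 1: jitter to (0, 4)
    z = np.log((yj - lower) / (upper + 1.0 - yj))         # step 2: logit between the bounds
    fit = sm.QuantReg(z, sm.add_constant(x)).fit(q=tau)   # step 3: linear quantile regression
    coefs.append(fit.params)
beta = np.mean(coefs, axis=0)                             # average over jitterings

z_hat = beta[0] + beta[1] * 0.5                           # predicted logit at x = 0.5
q_hat = (np.exp(z_hat) * (upper + 1.0) + lower) / (1.0 + np.exp(z_hat))
print("estimated conditional median count (continuous scale):", q_hat)
```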

  9. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  10. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form c x^k (1 - x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  11. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
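
    A small illustration, not taken from the reports, of why the beta-density case yields linear MMSE estimates: with a Beta(a, b) prior on the rate and a binomially distributed jump count, the posterior mean (the MMSE estimate) is linear in the observed count by conjugacy. The numbers below are illustrative.

```python
# Illustration (not the papers' recursive filter): with rate ~ Beta(a, b) and
# k jumps observed in n Bernoulli opportunities, the posterior mean is
# (a + k) / (a + b + n), i.e. linear in the observation k.
a, b, n = 2.0, 5.0, 20
for k in range(0, 21, 5):
    mmse = (a + k) / (a + b + n)      # posterior mean = MMSE estimate of the rate
    print(f"k = {k:2d}   E[rate | k] = {mmse:.3f}")
```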

  12. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory

    PubMed Central

    Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank

    2016-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
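
    For context, a hedged sketch of the standard "noisy recall plus random guessing" mixture referred to above, written for circular report errors with a von Mises recall component and a uniform guessing component; the parameter values are illustrative and this is not the authors' stimulus-specific model.

```python
# Sketch of the standard "noisy recall + random guessing" mixture referenced
# above (not the authors' stimulus-specific model). Errors are treated as
# angles on the circle; g is the guess rate and kappa the recall precision.
import numpy as np
from scipy.stats import vonmises

def mixture_density(err, g, kappa):
    """err in radians on (-pi, pi]; returns p(err) under the mixture model."""
    recall = vonmises.pdf(err, kappa)        # noisy recall centred on the target
    guess = 1.0 / (2.0 * np.pi)              # uniform random guessing
    return (1.0 - g) * recall + g * guess

errs = np.linspace(-np.pi, np.pi, 5)
print(mixture_density(errs, g=0.3, kappa=8.0))
```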

  13. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory.

    PubMed

    Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank

    2017-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method are demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM-based computer programs.
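
    A minimal sketch of the orthogonalization step described above: correlated random variables are transformed into uncorrelated ones through an eigendecomposition of their covariance matrix, and only the leading modes need be retained for a second-moment analysis; the covariance values are illustrative.

```python
# Sketch of eigenvalue orthogonalization: map correlated random variables to
# uncorrelated modes via the eigendecomposition of their covariance matrix,
# keeping only the leading modes. Covariance values are illustrative.
import numpy as np

cov = np.array([[1.0, 0.8, 0.3],
                [0.8, 1.0, 0.5],
                [0.3, 0.5, 1.0]])
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenpairs of the covariance
order = np.argsort(eigvals)[::-1]                 # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(3)
z = rng.standard_normal((10_000, 2))              # reduced set: 2 uncorrelated modes
x = z * np.sqrt(eigvals[:2]) @ eigvecs[:, :2].T   # reconstructed correlated samples
print(np.cov(x, rowvar=False))                    # approximates the leading part of cov
```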

  15. Impact of random discrete dopant in extension induced fluctuation in gate-source/drain underlap FinFET

    NASA Astrophysics Data System (ADS)

    Wang, Yijiao; Huang, Peng; Xin, Zheng; Zeng, Lang; Liu, Xiaoyan; Du, Gang; Kang, Jinfeng

    2014-01-01

    In this work, three-dimensional technology computer-aided design (TCAD) simulations are performed to investigate the impact of random discrete dopants (RDD), including extension-induced fluctuation, in a 14 nm silicon-on-insulator (SOI) gate-source/drain (G-S/D) underlap fin field effect transistor (FinFET). To fully understand the RDD impact in the extension, the RDD effect is evaluated in the channel and extension separately and together. The statistical variability of FinFET performance parameters, including threshold voltage (Vth), subthreshold slope (SS), drain induced barrier lowering (DIBL), drive current (Ion), and leakage current (Ioff), is analyzed. The results indicate that RDD in the extension can lead to substantial variability, especially for SS, DIBL, and Ion, and should be taken into account together with that in the channel to get an accurate estimate of random dopant fluctuation (RDF). Meanwhile, a higher doping concentration in the extension region is suggested from the perspective of overall variability control.

  16. Bounds for the price of discrete arithmetic Asian options

    NASA Astrophysics Data System (ADS)

    Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.

    2006-01-01

    In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). We are able to create a unifying framework for European-style discrete arithmetic Asian options through these bounds, that generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate an advice of the appropriate choice of the bounds given the parameters, investigate the effect of different conditioning variables and compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.

  17. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075
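
    A hedged sketch of the weak-approximation idea mentioned above, replacing Gaussian increments in a plain Euler-Maruyama step with two-point discrete random variables of matching mean and variance; this is not the paper's Langevin splitting or MLMC implementation, and the SDE and parameters are illustrative.

```python
# Sketch: Euler steps for dX = -X dt + sigma dW with the Gaussian increments
# replaced by two-point increments +/- sqrt(dt), which match the Gaussian mean
# and variance. Weak errors (errors in expectations) are of the same order.
import numpy as np

rng = np.random.default_rng(4)
sigma, T, n_steps, n_paths = 0.5, 1.0, 100, 50_000
dt = T / n_steps

def euler_final(discrete):
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        if discrete:
            dw = np.sqrt(dt) * rng.choice([-1.0, 1.0], size=n_paths)
        else:
            dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x - x * dt + sigma * dw
    return x

print("E[X_T^2], Gaussian increments: ", np.mean(euler_final(False) ** 2))
print("E[X_T^2], two-point increments:", np.mean(euler_final(True) ** 2))
```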

  18. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation.

    PubMed

    Müller, Eike H; Scheichl, Rob; Shardlow, Tony

    2015-04-08

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.

  19. Exact Markov chains versus diffusion theory for haploid random mating.

    PubMed

    Tyvand, Peder A; Thorvaldsen, Steinar

    2010-05-01

    Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
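
    A minimal sketch of the exact discrete Wright-Fisher chain for haploid random mating without selection or mutation, in which the allele count of the next generation is a binomial draw from the current allele frequency; the population size and run length are illustrative.

```python
# Sketch of the exact discrete Wright-Fisher chain (haploid random mating, no
# selection or mutation): the allele count in the next generation is a binomial
# draw based on the current allele frequency. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
N, generations, replicates = 100, 200, 2_000   # 2N gene copies per generation
count = np.full(replicates, N)                 # start at allele frequency 1/2

for _ in range(generations):
    freq = count / (2 * N)
    count = rng.binomial(2 * N, freq)          # exact Markov-chain transition

print("fraction of replicates fixed or lost:",
      np.mean((count == 0) | (count == 2 * N)))
```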

  20. Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.

    2008-11-06

    This paper presents an approximation to the nonlinear least-squares estimation problem of discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate that the real observation arrives on time or is delayed and, hence, the available measurement to estimate the signal is not up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are available, a filtering algorithm based on linear approximations of the real observations is proposed.

  1. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks.

    PubMed

    Adalsteinsson, David; McMillen, David; Elston, Timothy C

    2004-03-08

    Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
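
    A minimal sketch of the Gillespie algorithm used for the discrete species (not BioNetS code): a single mRNA species with constant production and first-order degradation, with illustrative rate constants.

```python
# Minimal Gillespie (stochastic simulation algorithm) sketch for one mRNA
# species with constant production rate k and first-order degradation gamma.
import numpy as np

rng = np.random.default_rng(6)
k, gamma = 10.0, 1.0             # illustrative rate constants
t, t_end, m = 0.0, 50.0, 0       # time and mRNA copy number

while t < t_end:
    rates = np.array([k, gamma * m])        # propensities: production, degradation
    total = rates.sum()
    t += rng.exponential(1.0 / total)       # waiting time to the next reaction
    if rng.random() < rates[0] / total:     # choose which reaction fires
        m += 1
    else:
        m -= 1

print("mRNA copy number at t =", t_end, ":", m)   # fluctuates around k / gamma
```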

  2. Failure of self-consistency in the discrete resource model of visual working memory.

    PubMed

    Bays, Paul M

    2018-06-03

    The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.

  3. Mum, why do you keep on growing? Impacts of environmental variability on optimal growth and reproduction allocation strategies of annual plants.

    PubMed

    De Lara, Michel

    2006-05-01

    In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild its vegetative body completely (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the type of environmental variability: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint at the effect of different mathematical assumptions.

  4. A survival tree method for the analysis of discrete event times in clinical and epidemiological studies.

    PubMed

    Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard

    2016-02-28

    Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
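
    A hedged sketch of the equivalence the method rests on: discrete survival data can be expanded into person-period binary records, so that any tree (or regression) method for binary outcomes estimates the discrete hazard; the column names and toy data below are illustrative.

```python
# Sketch of the person-period expansion that turns discrete survival data into
# binary-outcome data, enabling binary tree methods to estimate discrete hazards.
import pandas as pd

subjects = pd.DataFrame({
    "id": [1, 2, 3],
    "time": [2, 3, 1],           # discrete event or censoring time (e.g. quarter)
    "event": [1, 0, 1],          # 1 = event observed, 0 = censored
    "x": [0.4, -1.2, 0.7],       # a covariate
})

rows = []
for _, s in subjects.iterrows():
    for t in range(1, int(s["time"]) + 1):
        rows.append({"id": int(s["id"]), "period": t, "x": s["x"],
                     "y": int(s["event"] == 1 and t == s["time"])})
expanded = pd.DataFrame(rows)    # y asks: did the event occur in this period?
print(expanded)
# A binary-outcome tree fit to y with predictors (period, x) estimates the
# discrete hazard h(t | x), which is what the proposed survival trees exploit.
```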

  5. Encoding dependence in Bayesian causal networks

    USDA-ARS?s Scientific Manuscript database

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...

  6. Regularization of the big bang singularity with random perturbations

    NASA Astrophysics Data System (ADS)

    Belbruno, Edward; Xue, BingKan

    2018-03-01

    We show how to regularize the big bang singularity in the presence of random perturbations modeled by Brownian motion using stochastic methods. We prove that the physical variables in a contracting universe dominated by a scalar field can be continuously and uniquely extended through the big bang as a function of time to an expanding universe only for a discrete set of values of the equation of state satisfying special co-prime number conditions. This result significantly generalizes a previous result (Xue and Belbruno 2014 Class. Quantum Grav. 31 165002) that did not model random perturbations. This result implies that the extension from a contracting to an expanding universe for the discrete set of co-prime equation of state is robust, which is a surprising result. Implications for a purely expanding universe are discussed, such as a non-smooth, randomly varying scale factor near the big bang.

  7. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertainty modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of rigid parts is expressed as uniform random variables, while the uncertainty in the material properties of flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.

  8. Stochastic dynamics of time correlation in complex systems with discrete time

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail

    2000-11-01

    In this paper we present a framework for describing random processes in complex systems with discrete time. It describes the kinetics of discrete processes by means of a chain of finite-difference non-Markov equations for time correlation functions (TCFs). We introduce the dynamic (time-dependent) information Shannon entropy S_i(t), where i = 0, 1, 2, 3, ..., as an information measure of the stochastic dynamics of time correlation (i = 0) and time memory (i = 1, 2, 3, ...). The set of functions S_i(t) constitutes a quantitative measure of time correlation disorder (i = 0) and time memory disorder (i = 1, 2, 3, ...) in a complex system. The theory starts from a careful analysis of time correlations involving the dynamics of a set of vectors of various chaotic states. We examine in detail two stochastic processes involving the creation and annihilation of time correlation (or time memory). The dynamics of the vectors is analysed using finite-difference equations for random variables and the evolution operator describing their natural motion. The existence of a TCF leads to the construction of a set of projection operators through the scalar product operation. Applying a Gram-Schmidt orthogonalization procedure to the infinite set of orthogonal dynamic random variables yields an infinite chain of finite-difference non-Markov kinetic equations for discrete TCFs and memory functions (MFs). Solving these equations gives recurrence relations between the TCFs and MFs of senior and junior orders. This offers new opportunities for detecting the frequency spectra of the power of the entropy function S_i(t) for time correlation (i = 0) and time memory (i = 1, 2, 3, ...). The results obtained open considerable scope for studying the stochastic dynamics of discrete random processes in complex systems. Application of this technique to the stochastic dynamics of RR intervals from human ECGs shows convincing evidence of non-Markovian phenomena associated with peculiarities in short- and long-range scaling. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these non-Markovian properties.

  9. Dynamical Localization for Discrete Anderson Dirac Operators

    NASA Astrophysics Data System (ADS)

    Prado, Roberto A.; de Oliveira, César R.; Carvalho, Silas L.

    2017-04-01

    We establish dynamical localization for random Dirac operators on the d-dimensional lattice, with d ∈ {1, 2, 3}, in the three usual regimes: large disorder, band edge, and 1D. These operators are discrete versions of the continuous Dirac operators and consist of the sum of a discrete free Dirac operator and a random potential. The potential is a diagonal matrix formed by different scalar potentials, which are sequences of independent and identically distributed random variables according to an absolutely continuous probability measure with bounded density and compact support. We prove the exponential decay of fractional moments of the Green function for such models in each of the above regimes, i.e., (i) throughout the spectrum at large disorder, (ii) for energies near the band edges at arbitrary disorder, and (iii) in dimension one, for all energies in the spectrum and arbitrary disorder. Dynamical localization in these regimes follows from the fractional moments method. The result in the one-dimensional regime contrasts with one previously obtained for the 1D Dirac model with Bernoulli potential.

  10. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.

  11. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
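
    A simplified sketch of the numerical idea, not the paper's full method: the exact max-convolution costs O(k^2), while a p-norm relaxation computed through an ordinary convolution (FFT-based for O(k log k)) approximates it, since for large p the p-norm of the terms approaches their maximum. Vectors and the choice of p are illustrative.

```python
# Exact max-convolution (O(k^2)) versus a p-norm relaxation computed by an
# ordinary convolution; an FFT-based convolution would make the relaxation
# O(k log k). For large p the relaxation approaches the exact result.
import numpy as np

u = np.array([0.1, 0.5, 0.2, 0.2])
v = np.array([0.3, 0.3, 0.4])

exact = np.array([max(u[j] * v[m - j]
                      for j in range(len(u)) if 0 <= m - j < len(v))
                  for m in range(len(u) + len(v) - 1)])       # O(k^2)

p = 32.0
approx = np.convolve(u ** p, v ** p) ** (1.0 / p)             # p-norm relaxation

print(np.round(exact, 4))
print(np.round(approx, 4))    # close to the exact max-convolution
```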

  12. Probability Distributions of Minkowski Distances between Discrete Random Variables.

    ERIC Educational Resources Information Center

    Schroger, Erich; And Others

    1993-01-01

    Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. Shows how to compute the probability function, the expectation, and the variance for Minkowski distances and for the special cases of city-block distance and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)

  13. Phenomenological picture of fluctuations in branching random walks

    NASA Astrophysics Data System (ADS)

    Mueller, A. H.; Munier, S.

    2014-10-01

    We propose a picture of the fluctuations in branching random walks, which leads to predictions for the distribution of a random variable that characterizes the position of the bulk of the particles. We also interpret the 1/√t correction to the average position of the rightmost particle of a branching random walk for large times t ≫ 1, computed by Ebert and Van Saarloos, as fluctuations on top of the mean-field approximation of this process with a Brunet-Derrida cutoff at the tip that simulates discreteness. Our analytical formulas compare successfully to numerical simulations of a particular model of a branching random walk.

  14. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.

  15. Spatial autocorrelation in growth of undisturbed natural pine stands across Georgia

    Treesearch

    Raymond L. Czaplewski; Robin M. Reich; William A. Bechtold

    1994-01-01

    Moran's I statistic measures the spatial autocorrelation in a random variable measured at discrete locations in space. Permutation procedures test the null hypothesis that the observed Moran's I value is no greater than that expected by chance. The spatial autocorrelation of gross basal area increment is analyzed for undisturbed, naturally regenerated stands...
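
    A minimal sketch of Moran's I for a variable measured at discrete locations, together with a simple permutation test of the null hypothesis of no spatial autocorrelation; the weight matrix and data are illustrative and unrelated to the pine-stand analysis.

```python
# Moran's I = (n / sum(w)) * sum_ij w_ij z_i z_j / sum_i z_i^2, with z = x - mean(x),
# plus a permutation test against the null of no spatial autocorrelation.
import numpy as np

def morans_i(x, w):
    n = len(x)
    z = x - x.mean()
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(7)
coords = rng.uniform(0, 10, size=(30, 2))             # illustrative plot locations
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
w = ((d > 0) & (d < 3.0)).astype(float)               # neighbours within 3 units
x = coords[:, 0] + rng.normal(scale=0.5, size=30)     # spatially structured variable

obs = morans_i(x, w)
perms = np.array([morans_i(rng.permutation(x), w) for _ in range(999)])
print("Moran's I =", round(obs, 3), " permutation p =", np.mean(perms >= obs))
```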

  16. A Random Forest Approach to Predict the Spatial Distribution ...

    EPA Pesticide Factsheets

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC) (transport and fate proxy) was a strong predictor of TCS contamination causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated wi

  17. The effects of demand uncertainty on strategic gaming in the merit-order electricity pool market

    NASA Astrophysics Data System (ADS)

    Frem, Bassam

    In a merit-order electricity pool market, generating companies (Gencos) game with their offered incremental cost to meet the electricity demand and earn bigger market shares and higher profits. However when the demand is treated as a random variable instead of as a known constant, these Genco gaming strategies become more complex. After a brief introduction of electricity markets and gaming, the effects of demand uncertainty on strategic gaming are studied in two parts: (1) Demand modelled as a discrete random variable (2) Demand modelled as a continuous random variable. In the first part, we proposed an algorithm, the discrete stochastic strategy (DSS) algorithm that generates a strategic set of offers from the perspective of the Gencos' profits. The DSS offers were tested and compared to the deterministic Nash equilibrium (NE) offers based on the predicted demand. This comparison, based on the expected Genco profits, showed the DSS to be a better strategy in a probabilistic sense than the deterministic NE. In the second part, we presented three gaming strategies: (1) Deterministic NE (2) No-Risk (3) Risk-Taking. The strategies were then tested and their profit performances were compared using two assessment tools: (a) Expected value and standard deviation (b) Inverse cumulative distribution. We concluded that despite yielding higher profit performance under the right conjectures, Risk-Taking strategies are very sensitive to incorrect conjectures on the competitors' gaming decisions. As such, despite its lower profit performance, the No-Risk strategy was deemed preferable.

  18. The Semigeostrophic Equations Discretized in Reference and Dual Variables

    NASA Astrophysics Data System (ADS)

    Cullen, Mike; Gangbo, Wilfrid; Pisante, Giovanni

    2007-08-01

    We study the evolution of a system of n particles {(x_i, v_i)}_{i=1}^n in R^{2d}. That system is a conservative system with a Hamiltonian of the form H[μ] = W_2^2(μ, ν^n), where W_2 is the Wasserstein distance and μ is a discrete measure concentrated on the set {(x_i, v_i)}_{i=1}^n. Typically, μ(0) is a discrete measure approximating an initial L^∞ density and can be chosen randomly. When d = 1, our results prove convergence of the discrete system to a variant of the semigeostrophic equations. We obtain that the limiting densities are absolutely continuous with respect to the Lebesgue measure. When {ν^n}_{n=1}^∞ converges to a measure concentrated on a special d-dimensional set, we obtain the Vlasov-Monge-Ampère (VMA) system. When d = 1, the VMA system coincides with the standard Vlasov-Poisson system.

  19. The partition function of the Bures ensemble as the τ-function of BKP and DKP hierarchies: continuous and discrete

    NASA Astrophysics Data System (ADS)

    Hu, Xing-Biao; Li, Shi-Hao

    2017-07-01

    The relationship between matrix integrals and integrable systems was revealed more than 20 years ago. As is known, matrix integrals over a Gaussian ensemble used in random matrix theory could act as the τ-function of several hierarchies of integrable systems. In this article, we will show that the time-dependent partition function of the Bures ensemble, whose measure has many interesting geometric properties, could act as the τ-function of BKP and DKP hierarchies. In addition, if discrete time variables are introduced, then this partition function could act as the τ-function of discrete BKP and DKP hierarchies. In particular, there are some links between the partition function of the Bures ensemble and Toda-type equations.

  20. Fast and Accurate Multivariate Gaussian Modeling of Protein Families: Predicting Residue Contacts and Protein-Interaction Partners

    PubMed Central

    Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061

  1. Measurement of discrete vertical in-shoe stress with piezoelectric transducers.

    PubMed

    Gross, T S; Bunch, R P

    1988-05-01

    The purpose of this investigation was to design and validate a system suitable for non-invasive measurement of discrete in-shoe vertical plantar stress during dynamic activities. Eight transducers were constructed, with small piezoelectric ceramic squares (4.83 x 4.83 x 1.3 mm) used to generate a charge output proportional to vertical plantar stress. The mechanical properties of the transducers included 2.3% linearity and 3.7% hysteresis for stresses up to 2000 kPa and loading times up to 200 ms. System design efficacy was analysed by means of a multiple day, multiple trial data collection. With the transducers placed beneath plantar landmarks, the footstrike of one subject was recorded ten times on each of five days while running at 3.58 m/s on a treadmill. Within-day and between-day proportional error (PE) was used to estimate the error contained in the mean peak stress during foot contact. Within-day PE focused on trial to trial variability associated with the subject and equipment, and averaged 3.1% (range 2.5-4.0%) across transducer location. Between-day PE provided a cumulative estimate of subject, transducer placement, and random equipment variability, but excluded trial to trial variability. It ranged from 4.9 to 15.8%, with a mean of 9.9%. Peak stress, impulse, and sequence of loading data were examined to identify discrete foot function patterns and highlight the value of discrete stress analysis.

  2. A unified approach for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-09-01

    Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.

  3. Risk management for sulfur dioxide abatement under multiple uncertainties

    NASA Astrophysics Data System (ADS)

    Dai, C.; Sun, W.; Tan, Q.; Liu, Y.; Lu, W. T.; Guo, H. C.

    2016-03-01

    In this study, interval-parameter programming, two-stage stochastic programming (TSP), and conditional value-at-risk (CVaR) were incorporated into a general optimization framework, leading to an interval-parameter CVaR-based two-stage programming (ICTP) method. The ICTP method had several advantages: (i) its objective function simultaneously took expected cost and risk cost into consideration, and also used discrete random variables and discrete intervals to reflect uncertain properties; (ii) it quantitatively evaluated the right tail of distributions of random variables which could better calculate the risk of violated environmental standards; (iii) it was useful for helping decision makers to analyze the trade-offs between cost and risk; and (iv) it was effective to penalize the second-stage costs, as well as to capture the notion of risk in stochastic programming. The developed model was applied to sulfur dioxide abatement in an air quality management system. The results indicated that the ICTP method could be used for generating a series of air quality management schemes under different risk-aversion levels, for identifying desired air quality management strategies for decision makers, and for considering a proper balance between system economy and environmental quality.
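
    A small sketch of the CVaR ingredient of the framework: for a discrete set of cost scenarios with given probabilities, CVaR at level alpha is the expected cost in the worst (1 - alpha) tail; the scenario costs and probabilities below are illustrative.

```python
# CVaR of a discrete cost distribution via the Rockafellar-Uryasev formula:
# CVaR_alpha = VaR + E[(cost - VaR)^+] / (1 - alpha). Scenario data illustrative.
import numpy as np

costs = np.array([100.0, 120.0, 150.0, 200.0, 400.0])   # abatement-cost scenarios
probs = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
alpha = 0.90

order = np.argsort(costs)
c, p = costs[order], probs[order]
cum = np.cumsum(p)
var = c[np.searchsorted(cum, alpha)]                     # value-at-risk (alpha-quantile)
cvar = var + np.dot(p, np.maximum(c - var, 0.0)) / (1.0 - alpha)
print("VaR =", var, " CVaR =", cvar)                     # here: VaR = 200, CVaR = 300
```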

  4. Design and simulation of stratified probability digital receiver with application to the multipath communication

    NASA Technical Reports Server (NTRS)

    Deal, J. H.

    1975-01-01

    One approach to the problem of simplifying complex nonlinear filtering algorithms is through using stratified probability approximations where the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
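
    A hedged sketch of the general idea of a stratified (discrete-mass) approximation to a continuous density, not the report's receiver algorithm: a standard normal is partitioned into equal-probability strata and a point mass is placed at each stratum's conditional mean.

```python
# Stratified discrete-mass approximation of a continuous density: partition a
# standard normal into equal-probability strata and place a point mass at the
# conditional mean of each stratum, E[X | a < X < b] = (pdf(a) - pdf(b)) / (cdf(b) - cdf(a)).
import numpy as np
from scipy.stats import norm

n_strata = 8
edges = norm.ppf(np.linspace(0, 1, n_strata + 1))        # stratum boundaries (incl. +/- inf)
masses = np.full(n_strata, 1.0 / n_strata)               # equal probabilities per stratum
points = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) / (1.0 / n_strata)

print("mass points:", np.round(points, 3))
print("mean and variance of the discrete approximation:",
      np.dot(masses, points), np.dot(masses, points ** 2))
```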

  5. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the discrete fractional random transform. The order of the discrete fractional random transform (DFrRT) and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm is of high security and good compression performance.
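
    A highly simplified sketch of the compression stage follows: the image is transformed by a two-dimensional DCT and the spectrum is cut to a low-frequency block, after which a chaotic sequence scrambles the retained coefficients. In this sketch a one-dimensional logistic map stands in for the hyper-chaotic system and a plain permutation stands in for the discrete fractional random transform, so it only illustrates the overall pipeline, not the paper's algorithm.

        import numpy as np
        from scipy.fft import dctn

        def logistic_sequence(n, x0=0.3737, r=3.99):
            # Chaotic logistic-map key stream (stand-in for the hyper-chaotic system)
            out = np.empty(n)
            for i in range(n):
                x0 = r * x0 * (1.0 - x0)
                out[i] = x0
            return out

        def compress_and_scramble(img, keep=0.25, key=0.3737):
            spec = dctn(img, norm='ortho')                  # DCT spectrum
            h = int(np.sqrt(keep) * img.shape[0])           # spectrum cutting:
            w = int(np.sqrt(keep) * img.shape[1])           # keep low-frequency block
            cut = spec[:h, :w]
            perm = np.argsort(logistic_sequence(cut.size, x0=key))
            return cut.ravel()[perm].reshape(cut.shape), perm

        img = np.random.default_rng(0).random((64, 64))
        enc, perm = compress_and_scramble(img)
        print(img.shape, '->', enc.shape)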

  6. Condensation with two constraints and disorder

    NASA Astrophysics Data System (ADS)

    Barré, J.; Mangeolle, L.

    2018-04-01

    We consider a set of positive random variables obeying two additive constraints, a linear and a quadratic one; these constraints mimic the conservation laws of a dynamical system. In the simplest setting, without disorder, it is known that such a system may undergo a ‘condensation’ transition, whereby one random variable becomes much larger than the others; this transition has been related to the spontaneous appearance of nonlinear localized excitations, called breathers, in certain nonlinear chains. Motivated by the study of breathers in a disordered discrete nonlinear Schrödinger equation, we study different instances of this problem in the presence of quenched disorder. Unless the disorder is too strong, the phase diagram looks like the one without disorder, with a transition separating a fluid phase, where all variables have the same order of magnitude, and a condensed phase, where one variable is much larger than the others. We then show that the condensed phase exhibits various degrees of ‘intermediate symmetry breaking’: the site hosting the condensate is chosen neither uniformly at random, nor is it fixed by the disorder realization. Throughout the article, our heuristic arguments are complemented with direct Monte Carlo simulations.

  7. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
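
    The baseline Lindley failure-time model admits a simple mixture representation that is convenient for simulation: with probability θ/(1+θ) draw from Exp(θ), otherwise from Gamma(2, θ). The sketch below uses that standard fact only; the compound Poisson frailty construction and the likelihood procedures of the paper are not reproduced.

        import numpy as np

        def sample_lindley(theta, size, seed=None):
            # Lindley(theta) density: theta^2/(1+theta) * (1+x) * exp(-theta*x),
            # i.e. a mixture of Exp(theta) and Gamma(shape=2, rate=theta).
            rng = np.random.default_rng(seed)
            use_exp = rng.random(size) < theta / (1.0 + theta)
            return np.where(use_exp,
                            rng.exponential(1.0 / theta, size),
                            rng.gamma(2.0, 1.0 / theta, size))

        x = sample_lindley(1.5, 100_000, seed=0)
        print(x.mean())   # close to (theta + 2) / (theta * (theta + 1)) = 0.9333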

  8. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for the discrete and continuous cases, using a discrete random variable and a probability density function (PDF), respectively. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
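
    For reference, the Rényi entropy of order q for a discrete distribution p is H_q(p) = log(Σ_i p_i^q)/(1 − q), which tends to the Shannon entropy as q → 1. The short sketch below only evaluates this definition for an arbitrary example distribution; the speed-gradient dynamics are not reproduced.

        import numpy as np

        def renyi_entropy(p, q):
            # H_q(p) = log(sum p_i^q) / (1 - q); Shannon entropy in the limit q -> 1.
            p = np.asarray(p, float)
            if np.isclose(q, 1.0):
                return float(-np.sum(p * np.log(p)))        # assumes p_i > 0
            return float(np.log(np.sum(p ** q)) / (1.0 - q))

        p = [0.5, 0.25, 0.125, 0.125]
        print([round(renyi_entropy(p, q), 4) for q in (0.5, 1.0, 2.0)])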

  9. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for the discrete and continuous cases, using a discrete random variable and a probability density function (PDF), respectively. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  10. Field comparison of analytical results from discrete-depth ground water samplers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Delfino, T.A.; Gallinatti, J.D.

    1995-07-01

    Discrete-depth ground water samplers are used during environmental screening investigations to collect ground water samples in lieu of installing and sampling monitoring wells. Two of the most commonly used samplers are the BAT Enviroprobe and the QED HydroPunch I, which rely on differing sample collection mechanics. Although these devices have been on the market for several years, it was unknown what, if any, effect the differences would have on analytical results for ground water samples containing low to moderate concentrations of chlorinated volatile organic compounds (VOCs). This study investigated whether the discrete-depth ground water sampler used introduces statistically significant differences in analytical results. The goal was to provide a technical basis for allowing the two devices to be used interchangeably during screening investigations. Because this study was based on field samples, it included several sources of potential variability. It was necessary to separate differences due to sampler type from variability due to sampling location, sample handling, and laboratory analytical error. To statistically evaluate these sources of variability, the experiment was arranged in a nested design. Sixteen ground water samples were collected from eight random locations within a 15-foot by 15-foot grid. The grid was located in an area where shallow ground water was believed to be uniformly affected by VOCs. The data were evaluated using analysis of variance.

  11. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Treesearch

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  12. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313

  13. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.

  14. Discrete Gust Model for Launch Vehicle Assessments

    NASA Technical Reports Server (NTRS)

    Leahy, Frank B.

    2008-01-01

    Analysis of spacecraft vehicle responses to atmospheric wind gusts during flight is important in the establishment of vehicle design structural requirements and operational capability. Typically, wind gust models can be either a spectral type determined by a random process having a wide range of wavelengths, or a discrete type having a single gust of predetermined magnitude and shape. Classical discrete models used by NASA during the Apollo and Space Shuttle Programs included a 9 m/sec quasi-square-wave gust with variable wavelength from 60 to 300 m. A later study derived a discrete gust from a military specification (MIL-SPEC) document that used a "1-cosine" shape. The MIL-SPEC document contains a curve of non-dimensional gust magnitude as a function of non-dimensional gust half-wavelength based on the Dryden spectral model, but fails to list the equation necessary to reproduce the curve. Therefore, previous studies could only estimate a value of gust magnitude from the curve, or attempt to fit a function to it. This paper presents the development of the MIL-SPEC curve, and provides the necessary information to calculate discrete gust magnitudes as a function of both gust half-wavelength and the desired probability level of exceeding a specified gust magnitude.
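
    The "1-cosine" shape referred to above has the standard form u(x) = (U/2)[1 − cos(πx/H)] for 0 ≤ x ≤ 2H, where U is the gust magnitude and H the gust half-wavelength. The sketch below evaluates that shape only, with illustrative numbers; the probability-of-exceedance magnitude curve developed in the paper is not reproduced.

        import numpy as np

        def one_minus_cosine_gust(x, magnitude, half_wavelength):
            # u(x) = (U/2) * (1 - cos(pi * x / H)) on 0 <= x <= 2H, zero elsewhere;
            # the gust peaks at x = H with value U.
            x = np.asarray(x, float)
            u = 0.5 * magnitude * (1.0 - np.cos(np.pi * x / half_wavelength))
            return np.where((x >= 0.0) & (x <= 2.0 * half_wavelength), u, 0.0)

        x = np.linspace(0.0, 240.0, 7)                       # penetration distance, m
        print(one_minus_cosine_gust(x, magnitude=9.0, half_wavelength=120.0))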

  15. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    USGS Publications Warehouse

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    We present a Bayesian random-effects model to assess resource selection by modeling the probability of use of land units characterized by discrete and continuous measures. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection among models with heterogeneity included indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.

  16. Variability of a "force signature" during windmill softball pitching and relationship between discrete force variables and pitch velocity.

    PubMed

    Nimphius, Sophia; McGuigan, Michael R; Suchomel, Timothy J; Newton, Robert U

    2016-06-01

    This study assessed the reliability of discrete ground reaction force (GRF) variables over multiple pitching trials, investigated the relationships between discrete GRF variables and pitch velocity (PV), and assessed the variability of the "force signature", or continuous force-time curve, during the pitching motion of windmill softball pitchers. The intraclass correlation coefficient (ICC) for all discrete variables was high (0.86-0.99) while the coefficient of variation (CV) was low (1.4-5.2%). Two discrete variables were significantly correlated with PV: the second vertical peak force (r(5)=0.81, p=0.03) and the time between peak forces (r(5)=-0.79; p=0.03). High ICCs and low CVs support the reliability of discrete GRF and PV variables over multiple trials, and the significant correlations indicate that both the ability to produce force and the timing of that force production are related to PV. The mean of all pitchers' curve-average standard deviation of their continuous force-time curves demonstrated low variability (CV=4.4%), indicating a repeatable and identifiable "force signature" pattern during this motion. As such, the continuous force-time curve, in addition to discrete GRF variables, should be examined in future research as a potential method to monitor or explain changes in pitching performance. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. THE DISTRIBUTION OF ROUNDS FIRED IN STOCHASTIC DUELS

    DTIC Science & Technology

    This paper continues the development of the theory of Stochastic Duels to include the distribution of the number of rounds fired. Most generally...the duel between two contestants who fire at each other with constant kill probabilities per round is considered. The time between rounds fired may be...at the beginning of the duel may be limited and is a discrete random variable. Besides the distribution of rounds fired, its first two moments and

  18. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    NASA Astrophysics Data System (ADS)

    Žukovič, Milan; Hristopulos, Dionissios T.

    2009-02-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
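
    The first step of the method, mapping a continuous variable to multilevel "spins" with respect to predefined thresholds, can be sketched directly. The snippet below assigns each datum the index of the class determined by how many thresholds it exceeds (a Potts-like label); the interacting-spin conditional simulation itself is not reproduced, and the threshold values are hypothetical.

        import numpy as np

        def discretize_to_spins(values, thresholds):
            # Class label = number of thresholds the value exceeds (0 .. Nc).
            return np.searchsorted(np.sort(np.asarray(thresholds, float)),
                                   np.asarray(values, float))

        z = np.array([12.0, 47.5, 63.1, 88.4, 29.9])                   # e.g. elevations
        print(discretize_to_spins(z, thresholds=[25.0, 50.0, 75.0]))   # -> [0 1 2 3 1]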

  19. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  20. Digital high speed programmable convolver

    NASA Astrophysics Data System (ADS)

    Rearick, T. C.

    1984-12-01

    A circuit module for rapidly calculating a discrete numerical convolution is described. A convolution such as finding the sum of the products of a 16-bit constant and a 16-bit variable is performed by a module which is programmable so that the constant may be changed for a new problem. In addition, the module may be programmed to find the sum of the products of 4- and 8-bit constants and variables. RAMs (Random Access Memories) are loaded with partial products of the selected constant and all possible variables. Then, when the actual variable is loaded, it acts as an address to find the correct partial product in the particular RAM. The partial products from all of the RAMs are shifted to the appropriate numerical power position (if necessary) and then added in adder elements.
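
    A software analogue of the RAM lookup scheme is sketched below: partial products of the programmed constant are precomputed for every possible chunk of the variable, each incoming chunk acts as an address into that table, and the results are shifted to their numerical weight and summed. The word and chunk sizes are illustrative assumptions, not the module's actual organization.

        def build_partial_product_table(constant, chunk_bits=4):
            # One entry per possible chunk value: the 'RAM contents'.
            return [chunk * constant for chunk in range(1 << chunk_bits)]

        def multiply_via_table(variable, table, word_bits=16, chunk_bits=4):
            # Slice the variable into chunks; each chunk addresses the table, and the
            # partial products are shifted to their weight and accumulated.
            acc = 0
            for shift in range(0, word_bits, chunk_bits):
                chunk = (variable >> shift) & ((1 << chunk_bits) - 1)
                acc += table[chunk] << shift
            return acc

        table = build_partial_product_table(3217)
        print(multiply_via_table(45081, table), 45081 * 3217)    # identical results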

  1. A Random Forest approach to predict the spatial distribution of sediment pollution in an estuarine system

    PubMed Central

    Kreakie, Betty J.; Cantwell, Mark G.; Nacci, Diane

    2017-01-01

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC) (transport and fate proxy) was a strong predictor of TCS contamination causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated with independent test samples. This decision-support tool performed well at the sub-estuary extent and provided the means to identify areas of concern and prioritize bay-wide sampling. PMID:28738089
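
    The model-then-discretize workflow can be sketched generically with scikit-learn: fit a Random Forest regressor to continuous contamination values, then cut the continuous predictions into low/medium/high classes at quantile thresholds. The data below are synthetic placeholders, not the Narragansett Bay data, and the predictors merely stand in for variables such as TOC, sand fraction, and CSO discharge.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(42)
        X = rng.random((300, 3))                                    # placeholder predictors
        y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=300)    # placeholder 'TCS'

        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        pred = rf.predict(X)

        # Discretize continuous predictions into three contamination levels at terciles
        low, high = np.quantile(pred, [1 / 3, 2 / 3])
        classes = np.digitize(pred, [low, high])        # 0 = low, 1 = medium, 2 = high
        print(np.bincount(classes))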

  2. Verifying and Validating Simulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  3. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

  4. Random discrete linear canonical transform.

    PubMed

    Wei, Deyun; Wang, Ruikui; Li, Yuan-Min

    2016-12-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some fantastic features of its own. It has a greater degree of randomness because of the randomization in terms of both eigenvectors and eigenvalues. Numerical simulations demonstrate that the RDLCT has an important feature that the magnitude and phase of its output are both random. As an important application of the RDLCT, it can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.

  5. Inference for the Bivariate and Multivariate Hidden Truncated Pareto(type II) and Pareto(type IV) Distribution and Some Measures of Divergence Related to Incompatibility of Probability Distribution

    ERIC Educational Resources Information Center

    Ghosh, Indranil

    2011-01-01

    Consider a discrete bivariate random variable (X, Y) with possible values x[subscript 1], x[subscript 2],..., x[subscript I] for X and y[subscript 1], y[subscript 2],..., y[subscript J] for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and of Y for given values of X are available. We…

  6. First-Principles Modeling Of Electromagnetic Scattering By Discrete and Discretely Heterogeneous Random Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2016-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.

  7. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media.

    PubMed

    Mishchenko, Michael I; Dlugach, Janna M; Yurkin, Maxim A; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R Lee; Travis, Larry D; Yang, Ping; Zakharova, Nadezhda T

    2016-05-16

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ , or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.

  8. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    PubMed Central

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2018-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies. PMID:29657355

  9. Bounds for Asian basket options

    NASA Astrophysics Data System (ADS)

    Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle

    2008-09-01

    In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.

  10. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  11. An interactive approach based on a discrete differential evolution algorithm for a class of integer bilevel programming problems

    NASA Astrophysics Data System (ADS)

    Li, Hong; Zhang, Li; Jiao, Yong-Chang

    2016-07-01

    This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued (continuous) decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and then a smoothing technique is applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem with only inequality constraints is handled by using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.

  12. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinite second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the use of techniques of classical mathematical programming. In order to solve the location problems of this nature, we first develop a technique of fuzzy-random simulation to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than the one using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, genetic algorithm, and tabu search.

  13. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.

  14. High Productivity Computing Systems Analysis and Performance

    DTIC Science & Technology

    2005-07-01

    One of the HPCchallenge codes, RandomAccess, is derived from the HPCS discrete math benchmarks that we released and reports performance in Global Updates per second (GUP/s). The related benchmark kernels span discrete math, graph analysis, linear solvers, and signal processing.

  15. Variable Weight Fractional Collisions for Multiple Species Mixtures

    DTIC Science & Technology

    2017-08-28

    Variable weights for dynamic range (briefing excerpt): a velocity distribution function (VDF) discretized from many particles yields Vlasov dynamics, but the collision integral remains a problem; particle methods reduce the VDF to a set of delta functions with collisions between discrete velocities, which poorly resolves the distribution tail (critical to inelastic collisions); variable weights permit extra degrees of freedom.

  16. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by jointly analyzing spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.

  17. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
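
    The flavor of the approach, updating all N frequency bins directly from irregularly spaced samples with O(N) work per sample and no interpolation, can be sketched as below. This is a generic incremental nonuniform-time Fourier sum, not the authors' exact RFT recursion, and the signal and frequency grid are illustrative.

        import numpy as np

        def incremental_nonuniform_dft(times, samples, freqs):
            # Each new sample (t, x) updates every frequency bin with one complex
            # multiply-add, so a single update costs O(N) and needs no resampling.
            spectrum = np.zeros(len(freqs), dtype=complex)
            for t, x in zip(times, samples):
                spectrum += x * np.exp(-2j * np.pi * freqs * t)
            return spectrum

        rng = np.random.default_rng(1)
        t = np.cumsum(rng.uniform(0.7, 1.1, 256))            # irregular beat-like times
        x = np.sin(2 * np.pi * 0.25 * t)                     # 0.25 Hz oscillation
        f = np.linspace(0.0, 0.5, 64)
        print(f[np.argmax(np.abs(incremental_nonuniform_dft(t, x, f)))])   # near 0.25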

  18. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

    Several variable selection procedures are available for continuous time-to-event data. However, if time is measured discretely and many ties therefore occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge- and lasso-type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and in an application to the birth of the first child.

  19. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher-frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement. Differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  20. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
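
    A rough sketch of the split-on-demand idea follows: an interval is split at a random point whenever it holds more search points than a threshold, and the nonempty leaf intervals are then assigned integer codes. The split rule and termination details here follow the abstract loosely and are assumptions, not the authors' exact SoD procedure.

        import numpy as np

        def split_on_demand(points, lo, hi, threshold, seed=None):
            rng = np.random.default_rng(seed)
            work, leaves = [(lo, hi)], []
            while work:
                a, b = work.pop()
                inside = points[(points >= a) & (points < b)]
                if len(inside) > threshold and b - a > 1e-12:
                    cut = rng.uniform(a, b)                  # random split point
                    work += [(a, cut), (cut, b)]
                elif len(inside) > 0:
                    leaves.append((a, b))                    # keep nonempty intervals
            leaves.sort()
            return leaves, {iv: code for code, iv in enumerate(leaves)}

        pts = np.random.default_rng(3).random(50)
        leaves, codes = split_on_demand(pts, 0.0, 1.0, threshold=10, seed=7)
        print(len(leaves), leaves[:2])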

  1. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
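
    The data-generating process under discussion, a Brownian motion observed with additive normal noise at irregular times, is straightforward to simulate. The sketch below shows that process only, with arbitrary parameter values, and does not implement the likelihood machinery of the paper.

        import numpy as np

        def observe_bm_with_noise(times, sigma_bm, sigma_obs, seed=None):
            # Latent Brownian path: independent N(0, sigma_bm^2 * dt) increments;
            # each observation adds independent N(0, sigma_obs^2) measurement error.
            rng = np.random.default_rng(seed)
            dt = np.diff(times)
            latent = np.concatenate([[0.0],
                                     np.cumsum(rng.normal(0.0, sigma_bm * np.sqrt(dt)))])
            return latent + rng.normal(0.0, sigma_obs, size=len(times))

        t = np.sort(np.random.default_rng(5).uniform(0.0, 10.0, 50))   # irregular fix times
        print(np.round(observe_bm_with_noise(t, 1.0, 0.2, seed=9)[:5], 3))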

  2. H∞ filtering for discrete-time systems subject to stochastic missing measurements: a decomposition approach

    NASA Astrophysics Data System (ADS)

    Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang

    2014-07-01

    This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variables. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with less conservatism, while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.

  3. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    DOE PAGES

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; ...

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  4. Analytical instrumentation infrastructure for combinatorial and high-throughput development of formulated discrete and gradient polymeric sensor materials arrays

    NASA Astrophysics Data System (ADS)

    Potyrailo, Radislav A.; Hassib, Lamyaa

    2005-06-01

    Multicomponent polymer-based formulations of optical sensor materials are difficult and time consuming to optimize using conventional approaches. To address these challenges, our long-term goal is to determine relationships between sensor formulation and sensor response parameters using new scientific methodologies. As the first step, we have designed and implemented an automated analytical instrumentation infrastructure for combinatorial and high-throughput development of polymeric sensor materials for optical sensors. Our approach is based on the fabrication and performance screening of discrete and gradient sensor arrays. Simultaneous formation of multiple sensor coatings into discrete 4×6, 6×8, and 8×12 element arrays (3–15 μL volume per element) and their screening provides not only a well-recognized acceleration in the screening rate, but also considerably reduces or even eliminates sources of variability that randomly affect sensor response during conventional one-at-a-time sensor coating evaluation. The application of gradient sensor arrays provides additional capabilities for rapidly finding optimal formulation parameters.

  5. Robustness of quantum key distribution with discrete and continuous variables to channel noise

    NASA Astrophysics Data System (ADS)

    Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.

    2017-06-01

    We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce a model of such noise based on coupling of the signal to a thermal reservoir, typical for continuous-variable quantum key distribution, and extend it to the discrete-variable case. We then compare the bounds on the tolerable channel noise between these two kinds of protocols using the same noise parametrization, assuming an otherwise perfect implementation. The results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, for strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The single-photon production probability that a practical photon source would have to achieve in order to demonstrate such superiority is feasible thanks to the recent rapid development in this field.

  6. Effect of source tampering in the security of quantum cryptography

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Xu, Feihu; Jiang, Mu-Sheng; Ma, Xiang-Chun; Lo, Hoi-Kwong; Liang, Lin-Mei

    2015-08-01

    The security of the source has become an increasingly important issue in quantum cryptography. Within the framework of measurement-device-independent quantum key distribution (MDI-QKD), the source becomes the only region exploitable by a potential eavesdropper (Eve). Phase randomization is a cornerstone assumption in most discrete-variable (DV) quantum communication protocols (e.g., QKD, quantum coin tossing, weak-coherent-state blind quantum computing, and so on), and the violation of such an assumption is thus fatal to the security of those protocols. In this paper, we show a simple quantum hacking strategy, using commercial and homemade pulsed lasers, that allows Eve to actively tamper with the source and violate this assumption without leaving a trace afterwards. Furthermore, our attack may also be valid for continuous-variable (CV) QKD, the other main class of QKD protocol, since, besides the phase randomization assumption, other parameters that directly determine the security of CV-QKD (e.g., intensity) could also be changed.

  7. Dense image registration through MRFs and efficient linear programming.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos

    2008-12-01

    In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables, we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations of the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized, resulting in a fully discrete model. In order to account for large deformations and produce results at a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using primal-dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potential of our approach.

  8. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    NASA Astrophysics Data System (ADS)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherently heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach known as sequential Gaussian simulation. The method utilizes the variogram, which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
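
    A minimal sketch of generating a spatially correlated strength field is given below. It assumes a hypothetical exponential variogram and grid, and it uses dense Cholesky simulation rather than full sequential Gaussian simulation; the point is only to illustrate how a variogram-controlled field differs from a spatially constant random variable.

    ```python
    import numpy as np

    def correlated_field(coords, mean, std, corr_length, rng):
        """Draw one realization of a Gaussian random field with an
        exponential covariance C(h) = std^2 * exp(-h / corr_length)."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        cov = std**2 * np.exp(-d / corr_length)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))  # jitter for stability
        return mean + L @ rng.standard_normal(len(coords))

    rng = np.random.default_rng(1)
    # hypothetical 20 x 20 grid of block centroids (metres)
    xs, ys = np.meshgrid(np.arange(20) * 10.0, np.arange(20) * 10.0)
    coords = np.column_stack([xs.ravel(), ys.ravel()])
    # e.g. a geological strength index ~ N(55, 8^2) with a 50 m correlation length
    gsi = correlated_field(coords, mean=55.0, std=8.0, corr_length=50.0, rng=rng)
    ```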

  9. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  10. Continuous and difficult discrete cognitive tasks promote improved stability in older adults.

    PubMed

    Lajoie, Yves; Jehu, Deborah A; Richer, Natalie; Chan, Alan

    2017-06-01

    Directing attention away from postural control and onto a cognitive task affords the emergence of automatic control processes. Perhaps the continuous withdrawal of attention from the postural task facilitates an automatization of posture as opposed to only intermittent withdrawal; however, this is unknown in the aging population. Twenty older adults (69.9 ± 3.5 years) stood with feet together on a force platform for 60 s while performing randomly assigned discrete and continuous cognitive tasks. Participants were instructed to stand comfortably with their arms by their sides while verbally responding to the auditory stimuli as fast as possible during the discrete tasks, or mentally performing the continuous cognitive tasks. Participants also performed single-task standing. Results demonstrate significant reductions in sway amplitude and sway variability for the difficult discrete task as well as the continuous tasks relative to single-task standing. The continuous cognitive tasks also prompted greater frequency of sway in the anterior-posterior direction compared to single-task standing and the discrete tasks, and greater velocity in both directions compared to single-task standing, which could suggest ankle stiffening. No differences were shown for the simple discrete condition compared to single-task standing, perhaps due to the simplicity of the task. Therefore, we propose that the level of difficulty of the task, the specific neuropsychological process engaged during the cognitive task, and the type of task (discrete vs. continuous) influence postural control in older adults. Dual-tasking is a common activity of daily living; this work provides insight into the age-related changes in postural stability and attention demand. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  12. Record statistics of a strongly correlated time series: random walks and Lévy flights

    NASA Astrophysics Data System (ADS)

    Godrèche, Claude; Majumdar, Satya N.; Schehr, Grégory

    2017-08-01

    We review recent advances on the record statistics of strongly correlated time series, whose entries denote the positions of a random walk or a Lévy flight on a line. After a brief survey of the theory of records for independent and identically distributed random variables, we focus on random walks. During the last few years, it was indeed realized that random walks are a very useful ‘laboratory’ to test the effects of correlations on the record statistics. We start with the simple one-dimensional random walk with symmetric jumps (both continuous and discrete) and discuss in detail the statistics of the number of records, as well as of the ages of the records, i.e. the lapses of time between two successive record breaking events. Then we review the results that were obtained for a wide variety of random walk models, including random walks with a linear drift, continuous time random walks, constrained random walks (like the random walk bridge) and the case of multiple independent random walkers. Finally, we discuss further observables related to records, like the record increments, as well as some questions raised by physical applications of record statistics, like the effects of measurement error and noise.
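
    For illustration, a short Monte Carlo sketch (parameters are arbitrary) counts the records of a one-dimensional random walk with Gaussian jumps; for symmetric continuous jump distributions the mean number of records after n steps grows like 2√(n/π), which the simulation can be checked against.

    ```python
    import numpy as np

    def count_records(walk):
        """Number of records (new running maxima) in a walk; the starting
        point is conventionally counted as the first record."""
        running_max = np.maximum.accumulate(walk)
        return 1 + np.count_nonzero(walk[1:] > running_max[:-1])

    rng = np.random.default_rng(42)
    n_steps, n_samples = 1000, 5000
    records = []
    for _ in range(n_samples):
        walk = np.concatenate(([0.0], np.cumsum(rng.normal(size=n_steps))))
        records.append(count_records(walk))
    print(np.mean(records), np.sqrt(4 * n_steps / np.pi))  # should be close
    ```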

  13. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.

  14. 'Extremotaxis': computing with a bacterial-inspired algorithm.

    PubMed

    Nicolau, Dan V; Burrage, Kevin; Nicolau, Dan V; Maini, Philip K

    2008-01-01

    We present a general-purpose optimization algorithm inspired by "run-and-tumble", the biased random walk chemotactic swimming strategy used by the bacterium Escherichia coli to locate regions of high nutrient concentration. The method uses particles (corresponding to bacteria) that swim through the variable space (corresponding to the attractant concentration profile). By constantly performing temporal comparisons, the particles drift towards the minimum or maximum of the function of interest. We illustrate the use of our method with four examples. We also present a discrete version of the algorithm. The new algorithm is expected to be useful in combinatorial optimization problems involving many variables, where the functional landscape is apparently stochastic and has local minima, but preserves some derivative structure at intermediate scales.
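
    A hedged sketch of a run-and-tumble-style minimizer in the spirit described above is given below; the function, parameter names, and update rule are illustrative simplifications, not the authors' exact algorithm. Each particle keeps its heading while the objective improves ("run") and picks a random new heading when it worsens ("tumble").

    ```python
    import numpy as np

    def run_and_tumble_minimize(f, dim, n_particles=30, n_steps=500, step=0.05, rng=None):
        """Illustrative bacterial-style minimizer based on temporal comparisons."""
        rng = rng or np.random.default_rng()

        def random_headings(n):
            v = rng.normal(size=(n, dim))
            return v / np.linalg.norm(v, axis=1, keepdims=True)

        x = rng.uniform(-5, 5, size=(n_particles, dim))
        heading = random_headings(n_particles)
        fx = np.apply_along_axis(f, 1, x)
        best_x, best_f = x[np.argmin(fx)].copy(), fx.min()
        for _ in range(n_steps):
            x_new = x + step * heading
            fx_new = np.apply_along_axis(f, 1, x_new)
            worse = fx_new >= fx                           # temporal comparison
            heading[worse] = random_headings(worse.sum())  # tumble to a random direction
            x, fx = x_new, fx_new
            if fx.min() < best_f:
                best_x, best_f = x[np.argmin(fx)].copy(), fx.min()
        return best_x, best_f

    # e.g. minimize the Rosenbrock function in 2-D
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    x_best, f_best = run_and_tumble_minimize(rosen, dim=2, rng=np.random.default_rng(0))
    ```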

  15. An energy-stable method for solving the incompressible Navier-Stokes equations with non-slip boundary condition

    NASA Astrophysics Data System (ADS)

    Lee, Byungjoon; Min, Chohong

    2018-05-01

    We introduce a stable method for solving the incompressible Navier-Stokes equations with variable density and viscosity. Our method is stable in the sense that it does not increase the total energy of the dynamics, that is, the sum of kinetic energy and potential energy. Instead of velocity, a new state variable is taken so that the kinetic energy is formulated by the L2 norm of the new variable. The Navier-Stokes equations are rephrased with respect to the new variable, and a stable time discretization for the rephrased equations is presented. Taking into consideration the incompressibility on the Marker-And-Cell (MAC) grid, we present a modified Lax-Friedrichs method that is L2 stable. Utilizing discrete integration by parts on the MAC grid and the modified Lax-Friedrichs method, the time discretization is extended to a full discretization. An explicit CFL condition for the stability of the full discretization is given and mathematically proved.

  16. Structural Equations and Path Analysis for Discrete Data.

    ERIC Educational Resources Information Center

    Winship, Christopher; Mare, Robert D.

    1983-01-01

    Presented is an approach to causal models in which some or all variables are discretely measured, showing that path analytic methods permit quantification of causal relationships among variables with the same flexibility and power of interpretation as is feasible in models including only continuous variables. Examples are provided. (Author/IS)

  17. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  18. Avoiding and Correcting Bias in Score-Based Latent Variable Regression with Discrete Manifest Items

    ERIC Educational Resources Information Center

    Lu, Irene R. R.; Thomas, D. Roland

    2008-01-01

    This article considers models involving a single structural equation with latent explanatory and/or latent dependent variables where discrete items are used to measure the latent variables. Our primary focus is the use of scores as proxies for the latent variables and carrying out ordinary least squares (OLS) regression on such scores to estimate…

  19. Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking

    NASA Astrophysics Data System (ADS)

    Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.

    2008-12-01

    Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment The authors are grateful to SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
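
    A hedged sketch of the time-domain idea is given below, under stated assumptions: per-segment residence times are drawn from inverse Gaussian (advection-dispersion first-passage) distributions, the pathway and all segment parameters are hypothetical, and sorption, decay, and transient velocities are omitted. Summed segment times give arrival times, which are kernel-smoothed into a breakthrough curve.

    ```python
    import numpy as np
    from scipy.stats import invgauss, gaussian_kde

    rng = np.random.default_rng(3)

    # hypothetical pathway: per-segment length L (m), velocity v (m/yr),
    # longitudinal dispersion coefficient D (m^2/yr)
    segments = [dict(L=50.0, v=2.0, D=5.0),
                dict(L=120.0, v=0.5, D=8.0),
                dict(L=30.0, v=1.0, D=2.0)]

    def sample_segment_time(L, v, D, size, rng):
        """Advective-dispersive residence time on one segment: inverse
        Gaussian with mean L/v and shape parameter L^2/(2D)."""
        mean, lam = L / v, L**2 / (2 * D)
        return invgauss.rvs(mu=mean / lam, scale=lam, size=size, random_state=rng)

    n_particles = 10_000
    arrival = sum(sample_segment_time(**s, size=n_particles, rng=rng) for s in segments)

    # kernel-smoothed breakthrough (relative mass discharge versus time)
    t_grid = np.linspace(0.0, arrival.max(), 500)
    breakthrough = gaussian_kde(arrival)(t_grid)
    ```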

  20. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
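
    The complex-variable formulation referred to above rests on the complex-step derivative approximation; a minimal sketch follows, using a generic test function rather than the flow solver's residual (the function and step size are illustrative).

    ```python
    import numpy as np

    def complex_step_derivative(f, x, h=1e-30):
        """df/dx ≈ Im(f(x + i*h)) / h, accurate to machine precision with no
        subtractive cancellation, provided f propagates complex arguments."""
        return np.imag(f(x + 1j * h)) / h

    # classic test function from the complex-step literature
    f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)
    print(complex_step_derivative(f, 1.5))
    ```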

  2. Fractional Programming for Communication Systems—Part II: Uplink Scheduling via Matching

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

    This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application to continuous optimization problems. In this Part II of the paper, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike the continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multi-cell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Further, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP; but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and optimization results.

  3. Role of conviction in nonequilibrium models of opinion formation

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno; Anteneodo, Celia

    2012-12-01

    We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).

  4. Neural-network-based state feedback control of a nonlinear discrete-time system in nonstrict feedback form.

    PubMed

    Jagannathan, Sarangapani; He, Pingan

    2008-12-01

    In this paper, a suite of adaptive neural network (NN) controllers is designed to deliver a desired tracking performance for the control of an unknown, second-order, nonlinear discrete-time system expressed in nonstrict feedback form. In the first approach, two feedforward NNs are employed in the controller with tracking error as the feedback variable whereas in the adaptive critic NN architecture, three feedforward NNs are used. In the adaptive critic architecture, two action NNs produce virtual and actual control inputs, respectively, whereas the third critic NN approximates certain strategic utility function and its output is employed for tuning action NN weights in order to attain the near-optimal control action. Both the NN control methods present a well-defined controller design and the noncausal problem in discrete-time backstepping design is avoided via NN approximation. A comparison between the controller methodologies is highlighted. The stability analysis of the closed-loop control schemes is demonstrated. The NN controller schemes do not require an offline learning phase and the NN weights can be initialized at zero or random. Results show that the performance of the proposed controller schemes is highly satisfactory while meeting the closed-loop stability.

  5. Theory and generation of conditional, scalable sub-Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2016-03-01

    Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one and two-dimensional synthetic test cases.

  6. The discrete hungry Lotka Volterra system and a new algorithm for computing matrix eigenvalues

    NASA Astrophysics Data System (ADS)

    Fukuda, Akiko; Ishiwata, Emiko; Iwasaki, Masashi; Nakamura, Yoshimasa

    2009-01-01

    The discrete hungry Lotka-Volterra (dhLV) system is a generalization of the discrete Lotka-Volterra (dLV) system, which stands for a prey-predator model in mathematical biology. In this paper, we show that (1) some invariants exist which are expressed in terms of dhLV variables and are independent of the discrete time, and (2) a dhLV variable converges to some positive constant or zero as the discrete time becomes sufficiently large. A certain characteristic polynomial is then factorized with the help of the dhLV system. The asymptotic behaviour of the dhLV system enables us to design an algorithm for computing complex eigenvalues of a certain band matrix.

  7. Numerical Schemes for Dynamically Orthogonal Equations of Stochastic Fluid and Ocean Flows

    DTIC Science & Technology

    2011-11-03

    stages of the simulation (see §5.1). Also, because the pdf is discrete, we calculate the moments using the biased estimator C_{Y_i Y_j} ≈ (1/q) Σ_r Y_{r,i} Y_{r,j} … independent random variables. For problems that require large p (e.g. non-Gaussian) and large s (e.g. large ocean or fluid simulations), the number of … Sc = ν̂/K̂ is the Schmidt number, the ratio of kinematic viscosity ν̂ to molecular diffusivity K̂ for the density field; ĝ′ = ĝ(ρ̂_max − ρ̂_min …

  8. Some applications of uncertainty relations in quantum information

    NASA Astrophysics Data System (ADS)

    Majumdar, A. S.; Pramanik, T.

    2016-08-01

    We discuss some applications of various versions of uncertainty relations for both discrete and continuous variables in the context of quantum information theory. The Heisenberg uncertainty relation enables demonstration of the Einstein, Podolsky and Rosen (EPR) paradox. Entropic uncertainty relations (EURs) are used to reveal quantum steering for non-Gaussian continuous variable states. EURs for discrete variables are studied in the context of quantum memory where fine-graining yields the optimum lower bound of uncertainty. The fine-grained uncertainty relation is used to obtain connections between uncertainty and the nonlocality of retrieval games for bipartite and tripartite systems. The Robertson-Schrödinger (RS) uncertainty relation is applied for distinguishing pure and mixed states of discrete variables.

  9. Discretization of 3d gravity in different polarizations

    NASA Astrophysics Data System (ADS)

    Dupuis, Maïté; Freidel, Laurent; Girelli, Florian

    2017-10-01

    We study the discretization of three-dimensional gravity with Λ =0 following the loop quantum gravity framework. In the process, we realize that different choices of polarization are possible. This allows us to introduce a new discretization based on the triad as opposed to the connection as in the standard loop quantum gravity framework. We also identify the classical nontrivial symmetries of discrete gravity, namely the Drinfeld double, given in terms of momentum maps. Another choice of polarization is given by the Chern-Simons formulation of gravity. Our framework also provides a new discretization scheme of Chern-Simons, which keeps track of the link between the continuum variables and the discrete ones. We show how the Poisson bracket we recover between the Chern-Simons holonomies allows us to recover the Goldman bracket. There is also a transparent link between the discrete Chern-Simons formulation and the discretization of gravity based on the connection (loop gravity) or triad variables (dual loop gravity).

  10. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables.

    PubMed

    Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter

    2011-04-13

    The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
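
    A short sketch of the recommended analysis follows, using hypothetical data restricted to the outcomes {0, 1, 2, 3}: the Welch-adjusted t test plus a Welch-Satterthwaite confidence interval for the difference between the means.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # hypothetical event counts per patient in two groups
    group_a = rng.choice([0, 1, 2, 3], size=60, p=[0.40, 0.35, 0.15, 0.10])
    group_b = rng.choice([0, 1, 2, 3], size=60, p=[0.25, 0.35, 0.25, 0.15])

    # Welch U test: t test without the equal-variance assumption
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

    # Welch confidence interval for the difference between the means
    diff = group_a.mean() - group_b.mean()
    va, vb = group_a.var(ddof=1) / len(group_a), group_b.var(ddof=1) / len(group_b)
    se = np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(group_a) - 1) + vb**2 / (len(group_b) - 1))
    ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se
    print(t_stat, p_value, ci)
    ```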

  11. Continuous operation of four-state continuous-variable quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2016-10-01

    We report on the development of a continuous-variable quantum key distribution (CV-QKD) system that is based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source whose wavelength is 1550 nm and repetition rate is 10 MHz. The CV-QKD system can continuously generate secret keys that are secure against the entangling cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]: Alice randomly sends one of four states {|±α⟩, |±iα⟩}, and Bob randomly performs x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.

  12. Random Telegraph Signal Amplitudes in Sub 100 nm (Decanano) MOSFETs: A 3D 'Atomistic' Simulation Study

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Balasubramaniam, R.; Brown, A. R.; Davies, J. H.; Saini, Subhash

    2000-01-01

    In this paper we use 3D simulations to study the amplitudes of random telegraph signals (RTS) associated with the trapping of a single carrier in interface states in the channel of sub 100 nm (decanano) MOSFETs. Both simulations using continuous doping charge and random discrete dopants in the active region of the MOSFETs are presented. We have studied the dependence of the RTS amplitudes on the position of the trapped charge in the channel and on the device design parameters. We have observed a significant increase in the maximum RTS amplitude when discrete random dopants are employed in the simulations.

  13. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete time analogy of the KPP-equation—thus translating Bramson's work on the KPP-equation into a discrete time case. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.

  14. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with a measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  15. A Stochastic Dynamic Programming Model With Fuzzy Storage States Applied to Reservoir Operation Optimization

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas

    2004-08-01

    Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. The discretization of reservoir storage volume, an important state variable, has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by considering it as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers: instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP with a much finer discretization. It is believed that this advantage of the fuzzy SDP model is due to the smooth transitions between storage intervals, which benefit from soft boundaries.
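
    An illustrative sketch of the kind of fuzzy arithmetic involved is given below, assuming triangular membership functions (the class, values, and units are hypothetical, not the authors' implementation): adding two triangular fuzzy numbers adds their supports and peaks point-wise.

    ```python
    from dataclasses import dataclass

    @dataclass
    class TriangularFuzzyNumber:
        """Triangular fuzzy number (left, peak, right), e.g. in million m^3."""
        left: float
        peak: float
        right: float

        def __add__(self, other):
            # standard fuzzy arithmetic: alpha-cut intervals add end-point-wise
            return TriangularFuzzyNumber(self.left + other.left,
                                         self.peak + other.peak,
                                         self.right + other.right)

        def membership(self, x):
            if self.left < x <= self.peak:
                return (x - self.left) / (self.peak - self.left)
            if self.peak < x < self.right:
                return (self.right - x) / (self.right - self.peak)
            return 1.0 if x == self.peak else 0.0

    # e.g. fuzzy end-of-period storage = fuzzy initial storage + fuzzy net inflow
    storage = TriangularFuzzyNumber(80.0, 100.0, 120.0)
    net_inflow = TriangularFuzzyNumber(-10.0, 5.0, 20.0)
    print(storage + net_inflow)  # TriangularFuzzyNumber(left=70.0, peak=105.0, right=140.0)
    ```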

  16. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  17. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817

  18. Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mainemer, C. I.

    1978-01-01

    The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.

  19. Stability analysis for discrete-time stochastic memristive neural networks with both leakage and probabilistic delays.

    PubMed

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E

    2018-06-01

    This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays as well as probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at certain time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Multicomponent Supramolecular Systems: Self-Organization in Coordination-Driven Self-Assembly

    PubMed Central

    Zheng, Yao-Rong; Yang, Hai-Bo; Ghosh, Koushik; Zhao, Liang; Stang, Peter J.

    2009-01-01

    The self-organization of multicomponent supramolecular systems involving a variety of two-dimensional (2-D) polygons and three-dimensional (3-D) cages is presented. Nine self-organizing systems, SS1–SS9, have been studied, each involving the simultaneous mixing of organoplatinum acceptors and pyridyl donors of varying geometry and their selective self-assembly into three to four specific 2-D (rectangular, triangular, and rhomboid) and/or 3-D (triangular prism and distorted and nondistorted trigonal bipyramidal) supramolecules. The formation of these discrete structures is characterized using NMR spectroscopy and electrospray ionization mass spectrometry (ESI-MS). In all cases, the self-organization process is directed by (1) the geometric information encoded within the molecular subunits and (2) a thermodynamically driven dynamic self-correction process. The result is the selective self-assembly of multiple discrete products from a randomly formed complex. The influence of key experimental variables – temperature and solvent – on the self-correction process and the fidelity of the resulting self-organization systems is also described. PMID:19544512

  1. A priori discretization quality metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns firstly depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units, etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justification and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition, even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics that consider the spatial pattern changes of model input data, and (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold modification. The metrics provide, for the first time, a quantification of the routing-relevant information loss due to discretization, based on the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes upon input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scale is greater than the input data resolution. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss monotonically increases as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest rather than just the watershed outlet should be considered. Moreover, subbasin and HRU discretization decisions should not be considered independently, since the subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).

  2. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    PubMed

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  3. Dual Formulations of Mixed Finite Element Methods with Applications

    PubMed Central

    Gillette, Andrew; Bajaj, Chandrajit

    2011-01-01

    Mixed finite element methods solve a PDE using two or more variables. The theory of Discrete Exterior Calculus explains why the degrees of freedom associated to the different variables should be stored on both primal and dual domain meshes with a discrete Hodge star used to transfer information between the meshes. We show through analysis and examples that the choice of discrete Hodge star is essential to the numerical stability of the method. Additionally, we define interpolation functions and discrete Hodge stars on dual meshes which can be used to create previously unconsidered mixed methods. Examples from magnetostatics and Darcy flow are examined in detail. PMID:21984841

  4. Continuous-time quantum random walks require discrete space

    NASA Astrophysics Data System (ADS)

    Manouchehri, K.; Wang, J. B.

    2007-11-01

    Quantum random walks are shown to have non-intuitive dynamics which makes them an attractive area of study for devising quantum algorithms for long-standing open problems as well as those arising in the field of quantum computing. In the case of continuous-time quantum random walks, such peculiar dynamics can arise from simple evolution operators closely resembling the quantum free-wave propagator. We investigate the divergence of quantum walk dynamics from the free-wave evolution and show that, in order for continuous-time quantum walks to display their characteristic propagation, the state space must be discrete. This behavior rules out many continuous quantum systems as possible candidates for implementing continuous-time quantum random walks.
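
    A minimal sketch of a continuous-time quantum walk on a discrete state space follows, using a cycle graph with illustrative parameters: the walker's state evolves under U(t) = exp(−iAt), with A the adjacency matrix, and the site probabilities are the squared amplitudes.

    ```python
    import numpy as np
    from scipy.linalg import expm

    n = 21                                    # sites on a cycle graph
    adjacency = np.zeros((n, n))
    for j in range(n):
        adjacency[j, (j + 1) % n] = adjacency[(j + 1) % n, j] = 1.0

    psi0 = np.zeros(n, dtype=complex)
    psi0[n // 2] = 1.0                        # walker starts on the middle site

    t = 5.0
    psi_t = expm(-1j * t * adjacency) @ psi0  # U(t) = exp(-i A t)
    probabilities = np.abs(psi_t) ** 2        # ballistic spreading, unlike a classical walk
    assert np.isclose(probabilities.sum(), 1.0)
    ```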

  5. Environmental diversity as a surrogate for species representation.

    PubMed

    Beier, Paul; de Albuquerque, Fábio Suzart

    2015-10-01

    Because many species have not been described and most species ranges have not been mapped, conservation planners often use surrogates for conservation planning, but evidence for surrogate effectiveness is weak. Surrogates are well-mapped features such as soil types, landforms, occurrences of an easily observed taxon (discrete surrogates), and well-mapped environmental conditions (continuous surrogate). In the context of reserve selection, the idea is that a set of sites selected to span diversity in the surrogate will efficiently represent most species. Environmental diversity (ED) is a rarely used surrogate that selects sites to efficiently span multivariate ordination space. Because it selects across continuous environmental space, ED should perform better than discrete surrogates (which necessarily ignore within-bin and between-bin heterogeneity). Despite this theoretical advantage, ED appears to have performed poorly in previous tests of its ability to identify 50 × 50 km cells that represented vertebrates in Western Europe. Using an improved implementation of ED, we retested ED on Western European birds, mammals, reptiles, amphibians, and combined terrestrial vertebrates. We also tested ED on data sets for plants of Zimbabwe, birds of Spain, and birds of Arizona (United States). Sites selected using ED represented European mammals no better than randomly selected cells, but they represented species in the other 7 data sets with 20% to 84% effectiveness. This far exceeds the performance in previous tests of ED, and exceeds the performance of most discrete surrogates. We believe ED performed poorly in previous tests because those tests considered only a few candidate explanatory variables and used suboptimal forms of ED's selection algorithm. We suggest future work on ED focus on analyses at finer grain sizes more relevant to conservation decisions, explore the effect of selecting the explanatory variables most associated with species turnover, and investigate whether nonclimate abiotic variables can provide useful surrogates in an ED framework. © 2015 Society for Conservation Biology.

  6. Discrete gravity on random tensor network and holographic Rényi entropy

    NASA Astrophysics Data System (ADS)

    Han, Muxin; Huang, Shilin

    2017-11-01

    In this paper we apply discrete gravity and Regge calculus to tensor networks and the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence. We construct the boundary many-body quantum state |Ψ〉 using random tensor networks as the holographic mapping, applied to the Wheeler-DeWitt wave function of bulk Euclidean discrete gravity in 3 dimensions. The entanglement Rényi entropy of |Ψ〉 is shown to holographically relate to the on-shell action of Einstein gravity on a branch-cover bulk manifold. The resulting Rényi entropy S_n of |Ψ〉 approximates with high precision the Rényi entropy of the ground state in a 2-dimensional conformal field theory (CFT). In particular, it reproduces the correct n dependence. Our results develop the framework of realizing the AdS3/CFT2 correspondence on random tensor networks, and provide a new proposal to approximate the CFT ground state.

  7. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  8. The theory of planned behaviour and discrete food choices: a systematic review and meta-analysis.

    PubMed

    McDermott, Máirtín S; Oliver, Madalyn; Svenson, Alexander; Simnadis, Thomas; Beck, Eleanor J; Coltman, Tim; Iverson, Don; Caputi, Peter; Sharma, Rajeev

    2015-12-30

    The combination of economic and social costs associated with non-communicable diseases provides a compelling argument for developing strategies that can influence modifiable risk factors, such as discrete food choices. Models of behaviour, such as the Theory of Planned Behaviour (TPB), provide conceptual order that allows program designers and policy makers to identify the substantive elements that drive behaviour and design effective interventions. The primary aim of the current review was to examine the association between TPB variables and discrete food choice behaviours. A systematic literature search was conducted to identify relevant studies. Calculation of the pooled mean effect size (r(+)) was conducted using inverse-variance weighted, random effects meta-analysis. Heterogeneity across studies was assessed using the Q- and I(2)-statistics. Meta-regression was used to test the impact of moderator variables: type of food choice behaviour; participants' age and gender. A total of 42 journal articles and four unpublished dissertations met the inclusion criteria. TPB variables were found to have medium to large associations with both intention and behaviour. Attitudes had the strongest association with intention (r(+) = 0.54), followed by perceived behavioural control (PBC, r(+) = 0.42) and subjective norm (SN, r(+) = 0.37). The association between intention and behaviour was r(+) = 0.45, and between PBC and behaviour it was r(+) = 0.27. Moderator analyses revealed the complex nature of dietary behaviour and the factors that underpin individual food choices. Significantly higher PBC-behaviour associations were found for choosing health compromising compared to health promoting foods. Significantly higher intention-behaviour and PBC-behaviour associations were found for choosing health promoting foods compared to avoiding health compromising foods. Participant characteristics were also found to moderate associations within the model. Higher intention-behaviour associations were found for older, compared to younger, age groups. The variability in the association of the TPB with different food choice behaviours uncovered by the moderator analyses strongly suggests that researchers should carefully consider the nature of the behaviour being exhibited prior to selecting a theory.
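
    For readers unfamiliar with the pooling step, the sketch below shows an inverse-variance weighted, random-effects combination of correlation coefficients (a DerSimonian-Laird estimate via Fisher's z) together with the Q- and I(2)-statistics; the r values and sample sizes are illustrative, not the studies from this review.

    ```python
    import numpy as np

    # Illustrative study-level correlations and sample sizes (not real data).
    r = np.array([0.54, 0.48, 0.60, 0.35, 0.42])
    n = np.array([120, 85, 200, 150, 95])

    z = np.arctanh(r)                 # Fisher's z transform
    v = 1.0 / (n - 3)                 # within-study variance of z
    w = 1.0 / v                       # fixed-effect (inverse-variance) weights

    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)              # heterogeneity statistic
    df = len(r) - 1
    I2 = max(0.0, (Q - df) / Q) * 100               # percent heterogeneity
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_re = 1.0 / (v + tau2)           # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    r_pooled = np.tanh(z_re)          # back-transform to a correlation

    print(f"pooled r(+) = {r_pooled:.3f}, Q = {Q:.2f}, I2 = {I2:.1f}%")
    ```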

  9. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  10. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.

    PubMed

    Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J

    2017-12-01

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis-that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  11. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.

  12. Monotonic entropy growth for a nonlinear model of random exchanges

    NASA Astrophysics Data System (ADS)

    Apenko, S. M.

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific “coarse graining” of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
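
    A small Monte Carlo sketch of this kind of random-exchange dynamics (the uniform redistribution rule and the histogram entropy estimator are illustrative choices, not details taken from the paper): starting from a sharply peaked energy distribution, the estimated entropy of the single-agent distribution increases from sweep to sweep.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    energy = np.ones(N)                      # start far from equilibrium

    def entropy_estimate(x, bins=60):
        """Histogram-based differential entropy estimate."""
        p, edges = np.histogram(x, bins=bins, density=True)
        widths = np.diff(edges)
        mask = p > 0
        return -np.sum(p[mask] * np.log(p[mask]) * widths[mask])

    for sweep in range(6):
        # one sweep = N/2 random binary collisions with uniform redistribution
        idx = rng.permutation(N)
        a, b = idx[: N // 2], idx[N // 2 :]
        total = energy[a] + energy[b]
        frac = rng.random(N // 2)
        energy[a], energy[b] = frac * total, (1 - frac) * total
        print(f"sweep {sweep + 1}: entropy estimate = {entropy_estimate(energy):.3f}")
    ```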

  13. Unconditional security proof of long-distance continuous-variable quantum key distribution with discrete modulation.

    PubMed

    Leverrier, Anthony; Grangier, Philippe

    2009-05-08

    We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.

  14. Mapping of uncertainty relations between continuous and discrete time

    NASA Astrophysics Data System (ADS)

    Chiuchiú, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  15. Mapping of uncertainty relations between continuous and discrete time.

    PubMed

    Chiuchiù, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  16. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  17. Discrete-time bidirectional associative memory neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liang, J.; Cao, J.; Ho, D. W. C.

    2005-02-01

    Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent and some of them are delay-independent; both are less conservative than those reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.

  18. Counting and classifying attractors in high dimensional dynamical systems.

    PubMed

    Bagley, R J; Glass, L

    1996-12-07

    Randomly connected Boolean networks have been used as mathematical models of neural, genetic, and immune systems. A key quantity of such networks is the number of basins of attraction in the state space. The number of basins of attraction changes as a function of the size of the network, its connectivity and its transition rules. In discrete networks, a simple count of the number of attractors does not reveal the combinatorial structure of the attractors. These points are illustrated in a reexamination of dynamics in a class of random Boolean networks considered previously by Kauffman. We also consider comparisons between dynamics in discrete networks and continuous analogues. A continuous analogue of a discrete network may have a different number of attractors for many different reasons. Some attractors in discrete networks may be associated with unstable dynamics, and several different attractors in a discrete network may be associated with a single attractor in the continuous case. Special problems in determining attractors in continuous systems arise when there is aperiodic dynamics associated with quasiperiodicity or deterministic chaos.

  19. On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models

    NASA Astrophysics Data System (ADS)

    Khorunzhiy, O.

    2008-08-01

    Regarding the adjacency matrices of n-vertex graphs and the related graph Laplacian, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.

  20. Controllability of discrete bilinear systems with bounded control.

    NASA Technical Reports Server (NTRS)

    Tarn, T. J.; Elliott, D. L.; Goka, T.

    1973-01-01

    The subject of this paper is the controllability of time-invariant discrete-time bilinear systems. Bilinear systems are classified into two categories; homogeneous and inhomogeneous. Sufficient conditions which ensure the global controllability of discrete-time bilinear systems are obtained by localized analysis in control variables.

  1. Clustering and variable selection in the presence of mixed variable types and missing data.

    PubMed

    Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D

    2018-05-17

    We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.

  2. A simulation study of capacity utilization to predict future capacity for manufacturing system sustainability

    NASA Astrophysics Data System (ADS)

    Rimo, Tan Hauw Sen; Chai Tin, Ong

    2017-12-01

    Capacity utilization (CU) measurement is an important task in a manufacturing system, especially in a make-to-order (MTO) manufacturing system with product customization, for predicting the capacity to meet future demand. A stochastic discrete-event simulation is developed using ARENA software to determine CU and the capacity gap (CG) in a short-run production function. This study focused on machinery breakdown and product defective rate as random variables in the simulation. The study found that the manufacturing system ran at 68.01% CU with a 31.99% CG. Machinery breakdown and product defective rate are shown to have a direct relationship with CU. Reducing the product defective rate to zero defects improves CU to 73.56% and decreases the CG to 26.44%, while eliminating machinery breakdowns improves CU to 93.99% and decreases the CG to 6.01%. This study helps operations staff examine CU using “what-if” analysis in order to meet future demand in a more practical and easier way through simulation. Further study is recommended to include other random variables that affect CU, to bring the simulation closer to the real-life situation for better decisions.
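
    A minimal plain-Python sketch of the same idea (this is not the ARENA model; the shift length, breakdown, repair, and defect parameters are illustrative assumptions): capacity utilization is estimated by simulating a shift in which machinery breakdowns and product defects occur at random.

    ```python
    import random

    random.seed(1)

    SHIFT_MIN = 480          # available minutes per shift
    CYCLE_MIN = 2.0          # minutes to produce one unit
    MTBF_MIN = 120.0         # mean time between machine breakdowns
    REPAIR_MIN = 30.0        # minutes per repair
    DEFECT_RATE = 0.05       # probability a finished unit is defective

    def simulate_shift():
        t, good_units = 0.0, 0
        next_failure = random.expovariate(1.0 / MTBF_MIN)
        while t + CYCLE_MIN <= SHIFT_MIN:
            if t >= next_failure:                    # breakdown: lose repair time
                t += REPAIR_MIN
                next_failure = t + random.expovariate(1.0 / MTBF_MIN)
                continue
            t += CYCLE_MIN
            if random.random() > DEFECT_RATE:        # keep only good output
                good_units += 1
        return good_units

    theoretical = SHIFT_MIN / CYCLE_MIN
    runs = [simulate_shift() for _ in range(1000)]
    cu = sum(runs) / len(runs) / theoretical
    print(f"capacity utilization ~ {cu:.1%}, capacity gap ~ {1 - cu:.1%}")
    ```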

  3. Discrete-time BAM neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  4. Improved robustness and performance of discrete time sliding mode control systems.

    PubMed

    Chakrabarty, Sohom; Bartoszewicz, Andrzej

    2016-11-01

    This paper presents a theoretical analysis along with simulations to show that increased robustness can be achieved for discrete time sliding mode control systems by choosing the sliding variable, or the output, to be of relative degree two instead of relative degree one. In other words, this choice reduces the ultimate bound of the sliding variable compared to the ultimate bound for standard discrete time sliding mode control systems. It is also found that for such a selection of a relative-degree-two output of the discrete-time system, the reduced order system during sliding becomes finite time stable in the absence of disturbance. With disturbance, it becomes finite time ultimately bounded. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Uncertainty relation for the discrete Fourier transform.

    PubMed

    Massar, Serge; Spindel, Philippe

    2008-05-16

    We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^{iφ} VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite-dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.
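
    A small numerical check of the commutation relation above, using the standard clock and shift matrices in dimension d (an illustrative choice; these are the usual finite-dimensional analogues of position and momentum), for which UV = e^{iφ}VU with φ = 2π/d.

    ```python
    import numpy as np

    d = 5
    omega = np.exp(2j * np.pi / d)

    # Clock matrix U (diagonal powers of omega) and shift matrix V (cyclic shift).
    U = np.diag(omega ** np.arange(d))
    V = np.roll(np.eye(d), 1, axis=0)         # V|k> = |k+1 mod d>

    # Weyl commutation relation  U V = e^{i 2*pi/d} V U.
    print("UV = e^{i phi} VU :", np.allclose(U @ V, omega * (V @ U)))

    # The discrete Fourier transform matrix relates the two unbiased bases.
    F = np.array([[omega ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
    print("F U F^+ is the inverse shift:",
          np.allclose(F @ U @ F.conj().T, np.roll(np.eye(d), -1, axis=0)))
    ```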

  6. Measurement time and statistics for a noise thermometer with a synthetic-noise reference

    NASA Astrophysics Data System (ADS)

    White, D. R.; Benz, S. P.; Labenski, J. R.; Nam, S. W.; Qu, J. F.; Rogalla, H.; Tew, W. L.

    2008-08-01

    This paper describes methods for reducing the statistical uncertainty in measurements made by noise thermometers using digital cross-correlators and, in particular, for thermometers using pseudo-random noise for the reference signal. First, a discrete-frequency expression for the correlation bandwidth for conventional noise thermometers is derived. It is shown how an alternative frequency-domain computation can be used to eliminate the spectral response of the correlator and increase the correlation bandwidth. The corresponding expressions for the uncertainty in the measurement of pseudo-random noise in the presence of uncorrelated thermal noise are then derived. The measurement uncertainty in this case is less than that for true thermal-noise measurements. For pseudo-random sources generating a frequency comb, an additional small reduction in uncertainty is possible, but at the cost of increasing the thermometer's sensitivity to non-linearity errors. A procedure is described for allocating integration times to further reduce the total uncertainty in temperature measurements. Finally, an important systematic error arising from the calculation of ratios of statistical variables is described.

  7. On the design of Henon and logistic map-based random number generator

    NASA Astrophysics Data System (ADS)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) method is one approach to generating the key sequence. The randomness sources of TRNGs are divided into three main groups, i.e. electrical-noise based, jitter based and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed, which is then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as the harvester method to obtain the series of random bits. Without any post-processing, the proposed design generated a random bit sequence with high entropy and passed all NIST 800.22 statistical tests.
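
    A minimal sketch of the design idea in Python rather than LabVIEW (map parameters, seeds, and the comparator rule are illustrative assumptions): a 2D Henon map and a 1D logistic map are iterated together, and a comparator harvests one bit per iteration. A practical TRNG would also need a physical entropy source and the NIST statistical tests mentioned above.

    ```python
    def chaotic_bits(n_bits, x=0.1, y=0.3, z=0.4, a=1.4, b=0.3, r=3.99):
        """Harvest bits by comparing a Henon-map coordinate with a logistic-map state."""
        bits = []
        for _ in range(n_bits):
            x, y = 1 - a * x * x + y, b * x     # Henon map (2D chaotic system)
            z = r * z * (1 - z)                 # logistic map (1D chaotic system)
            # Comparator: rescale x (roughly in [-1.5, 1.5]) to [0, 1] and compare with z.
            bits.append(1 if (x + 1.5) / 3.0 > z else 0)
        return bits

    stream = chaotic_bits(10_000)
    print("fraction of ones:", sum(stream) / len(stream))
    ```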

  8. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
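
    A compact genetic-algorithm sketch in the spirit described above (the fitness function, encoding, and parameters are illustrative, not the paper's truss or actuator problems), showing reproduction, crossover, and mutation over a mix of continuous and discrete design variables.

    ```python
    import random

    random.seed(0)

    SECTIONS = [1, 2, 4, 8]          # discrete catalogue for the third design variable

    def fitness(ind):
        x1, x2, s = ind
        # Toy objective to be maximized, standing in for e.g. a fundamental frequency.
        return -((x1 - 3.0) ** 2 + (x2 - 7.0) ** 2) - abs(s - 4)

    def random_individual():
        return [random.uniform(0, 10), random.uniform(0, 10), random.choice(SECTIONS)]

    def crossover(p1, p2):
        cut = random.randint(1, 2)               # single-point crossover
        return p1[:cut] + p2[cut:]

    def mutate(ind, rate=0.2):
        out = ind[:]
        if random.random() < rate:               # perturb a continuous gene
            i = random.randrange(2)
            out[i] = min(10.0, max(0.0, out[i] + random.gauss(0, 0.5)))
        if random.random() < rate:               # resample the discrete gene
            out[2] = random.choice(SECTIONS)
        return out

    pop = [random_individual() for _ in range(40)]
    for _ in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:20]                       # reproduction: keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(20)]
        pop = parents + children

    best = max(pop, key=fitness)
    print("best design:", best, "fitness:", round(fitness(best), 3))
    ```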

  9. Forest structure estimation and pattern exploration from discrete return lidar in subalpine forests of the Central Rockies

    Treesearch

    K. R. Sherrill; M. A. Lefsky; J. B. Bradford; M. G. Ryan

    2008-01-01

    This study evaluates the relative ability of simple light detection and ranging (lidar) indices (i.e., mean and maximum heights) and statistically derived canonical correlation analysis (CCA) variables attained from discrete-return lidar to estimate forest structure and forest biomass variables for three temperate subalpine forest sites. Both lidar and CCA explanatory...

  10. Forest structure estimation and pattern exploration from discrete-return lidar in subalpine forests of the central Rockies

    Treesearch

    K.R. Sherrill; M.A. Lefsky; J.B. Bradford; M.G. Ryan

    2008-01-01

    This study evaluates the relative ability of simple light detection and ranging (lidar) indices (i.e., mean and maximum heights) and statistically derived canonical correlation analysis (CCA) variables attained from discrete-return lidar to estimate forest structure and forest biomass variables for three temperate subalpine forest sites. Both lidar and CCA explanatory...

  11. Discrete Choice Modeling (DCM): An Exciting Marketing Research Survey Method for Educational Researchers.

    ERIC Educational Resources Information Center

    Berdie, Doug R.

    Discrete Choice Modeling (DCM), a research technique that has become more popular in recent marketing research, is described. DCM is a method that forces people to look at the combination of relevant variables within each choice domain and, with each option fully defined in terms of the values of those variables, make a choice among options. DCM…

  12. Variable-length analog of Stavskaya process: A new example of misleading simulation

    NASA Astrophysics Data System (ADS)

    Ramos, A. D.; Silva, F. S. G.; Sousa, C. S.; Toom, A.

    2017-05-01

    This article presents a new example intended to showcase the limitations of computer simulations in the study of random processes with local interaction. For this purpose, we examine a new version of the well-known Stavskaya process, which is a discrete-time analog of the well-known contact processes. Like the bulk of random processes studied until now, the Stavskaya process is constant-length, that is, its components do not appear or disappear in the course of its functioning. The process which we study here, called Variable Stavskaya (VS), is similar to Stavskaya: it is discrete-time; its states are bi-infinite sequences whose terms take only two values (denoted here as "minus" and "plus"); and the measure concentrated in the configuration "all pluses" is invariant. However, it is variable-length, which means that its components, also called particles, may appear and disappear under its action. The operator VS is a composition of the following two operators. The first operator, called "birth," depends on a real parameter β; it creates a new component in the state "plus" between every two neighboring components with probability β, independently of what happens at other places. The second operator, called "murder," depends on a real parameter α and acts in the following way: whenever a plus is the left neighbor of a minus, this plus disappears (as if murdered by the minus which is its right neighbor) with probability α, independently of what happens to other particles. We prove for any α < 1, any β > 0, and any initial measure μ that the sequence μ(VS)^t (the result of t iterative applications of VS to μ) tends to the measure δ_⊕ (concentrated in "all pluses") as t → ∞. Such behavior is often called ergodic. However, the Monte Carlo simulations and mean-field approximations which we performed behaved as if μ(VS)^t tended to δ_⊕ much more slowly for some α, β, μ than for others. Based on these numerical results, we conjecture that VS has phases, but not in the same simple sense as the classical Stavskaya process.
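
    A Monte Carlo sketch of the variable-length dynamics described above, run on a finite ring rather than a bi-infinite sequence (a simulation assumption). Since only pluses are born and only pluses can be murdered, the minuses persist but are progressively diluted, consistent with convergence toward "all pluses".

    ```python
    import random

    random.seed(2)

    def step(config, alpha, beta):
        """One application of VS (birth followed by murder) on a ring; True = plus, False = minus."""
        born = []
        for s in config:                          # birth operator
            born.append(s)
            if random.random() < beta:
                born.append(True)
        kept, n = [], len(born)
        for i, s in enumerate(born):              # murder operator
            right_is_minus = not born[(i + 1) % n]
            if s and right_is_minus and random.random() < alpha:
                continue
            kept.append(s)
        return kept

    config = [False] * 50                         # start from "all minuses"
    for t in range(1, 26):
        config = step(config, alpha=0.6, beta=0.3)
        if t % 5 == 0:
            minus_density = config.count(False) / len(config)
            print(f"t = {t:2d}: length = {len(config):6d}, minus density = {minus_density:.4f}")
    ```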

  13. Modelling heat transfer during flow through a random packed bed of spheres

    NASA Astrophysics Data System (ADS)

    Burström, Per E. C.; Frishfelds, Vilnis; Ljung, Anna-Lena; Lundström, T. Staffan; Marjavaara, B. Daniel

    2018-04-01

    Heat transfer in a random packed bed of monosized iron ore pellets is modelled with both a discrete three-dimensional system of spheres and a continuous Computational Fluid Dynamics (CFD) model. Results show a good agreement between the two models for average values over a cross section of the bed for an even temperature profile at the inlet. The advantage of the discrete model is that it captures local effects such as decreased heat transfer in sections with low speed. The disadvantage is that it is computationally heavy for larger systems of pellets. If averaged values are sufficient, the CFD model is an attractive alternative that is easy to couple to the physics up- and downstream of the packed bed. The good agreement between the discrete and continuous models furthermore indicates that the discrete model may be used also for non-Stokian flow in the transitional region between laminar and turbulent flow, as turbulent effects show little influence on the overall heat transfer rates in the continuous model.

  14. Group delay spread analysis of coupled-multicore fibers: A comparison between weak and tight bending conditions

    NASA Astrophysics Data System (ADS)

    Fujisawa, Takeshi; Saitoh, Kunimasa

    2017-06-01

    The group delay spread of a coupled three-core fiber is investigated based on coupled-wave theory. The differences between supermode and discrete core mode models are thoroughly investigated to reveal the applicability of both models under specific fiber bending conditions. Macrobending with random twisting is taken into account for random modal mixing in the fiber. It is found that for weakly bent conditions, both the supermode and discrete core mode models are applicable. On the other hand, for strongly bent conditions, the discrete core mode model should be used to account for the increased differential modal group delay for the fiber without twisting and with short correlation length, which was experimentally observed recently. Results presented in this paper indicate that the discrete core mode model is superior to the supermode model for the analysis of coupled-multicore fibers under various bending conditions. Also, for estimating the GDS of a coupled-multicore fiber, it is critically important to take the fiber bending condition into account.

  15. Dynamical Localization for Discrete and Continuous Random Schrödinger Operators

    NASA Astrophysics Data System (ADS)

    Germinet, F.; De Bièvre, S.

    We show for a large class of random Schrödinger operators H_o on ℓ²(Z^ν) and on L²(R^ν) that dynamical localization holds, i.e. that, with probability one, for a suitable energy interval I and for q a positive real, sup_t ‖ |X|^q e^{-i H_o t} P_I(H_o) ψ ‖ < ∞. Here ψ is a function of sufficiently rapid decrease, and P_I(H_o) is the spectral projector of H_o corresponding to the interval I. The result is obtained through the control of the decay of the eigenfunctions of H_o and covers, in the discrete case, the Anderson tight-binding model with Bernoulli potential (dimension ν = 1) or singular potential (ν > 1), and in the continuous case Anderson as well as random Landau Hamiltonians.

  16. Discrete Huygens’ modeling for the characterization of a sound absorbing medium

    NASA Astrophysics Data System (ADS)

    Chai, L.; Kagawa, Y.

    2007-07-01

    Based on the equivalence between wave propagation in electrical transmission lines and in acoustic tubes, the authors proposed the use of transmission-line matrix modeling (TLM) as a time-domain solution method for the sound field. TLM, known in the electromagnetic engineering community, is equivalent to discrete Huygens' modeling. The wave propagation is simulated by tracing the sequences of the transmission and scattering of impulses. The theory and the demonstrated examples are presented in the references, in which a sound absorbing field was preliminarily considered to be a medium with simple acoustic resistance independent of frequency and the angle of incidence for the absorbing layer placed on the room wall surface. The present work is concerned with the time-domain response for the characterization of sound absorbing materials. A lossy component with variable propagation velocity is introduced for sound absorbing materials to facilitate the energy consumption. The frequency characteristics of the absorption coefficient are also considered for normal, oblique and random incidence. Some numerical demonstrations show that the present modeling provides a reasonable model of homogeneous sound-absorbing materials in the time domain.

  17. A Statistical Test of Walrasian Equilibrium by Means of Complex Networks Theory

    NASA Astrophysics Data System (ADS)

    Bargigli, Leonardo; Viaggiu, Stefano; Lionetto, Andrea

    2016-10-01

    We represent an exchange economy in terms of statistical ensembles for complex networks by introducing the concept of market configuration. This is defined as a sequence of nonnegative discrete random variables {w_{ij}} describing the flow of a given commodity from agent i to agent j. This sequence can be arranged in a nonnegative matrix W which we can regard as the representation of a weighted and directed network or digraph G. Our main result consists in showing that general equilibrium theory imposes highly restrictive conditions upon market configurations, which are in most cases not fulfilled by real markets. An explicit example with reference to the e-MID interbank credit market is provided.

  18. Interpreting Significant Discrete-Time Periods in Survival Analysis.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Denson, Kathleen B.

    Discrete-time survival analysis is a new method for educational researchers to employ when looking at the timing of certain educational events. Previous continuous-time methods do not allow for the flexibility inherent in a discrete-time method. Because both time-invariant and time-varying predictor variables can now be used, the interaction of…

  19. Development and Application of Methods for Estimating Operating Characteristics of Discrete Test Item Responses without Assuming any Mathematical Form.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    In latent trait theory the latent space, or space of the hypothetical construct, is usually represented by some unidimensional or multi-dimensional continuum of real numbers. Like the latent space, the item response can either be treated as a discrete variable or as a continuous variable. Latent trait theory relates the item response to the latent…

  20. Sentient Structures: Optimising Sensor Layouts for Direct Measurement of Discrete Variables

    DTIC Science & Technology

    2008-11-01

    Report to the US Air Force (contract FA48690714045); author: Donald Price. ...optimal sensor placements is an important requirement for the development of sentient structures. An optimal sensor layout is attained when a limited

  1. Boundaries, kinetic properties, and final domain structure of plane discrete uniform Poisson-Voronoi tessellations with von Neumann neighborhoods.

    PubMed

    Korobov, A

    2009-03-01

    Discrete random tessellations appear not infrequently in describing nucleation and growth transformations. Generally, several non-Euclidean metrics are possible in this case. Previously [A. Korobov, Phys. Rev. B 76, 085430 (2007)] continual analogs of such tessellations have been studied. Here one of the simplest discrete varieties of the Kolmogorov-Johnson-Mehl-Avrami model, namely, the model with von Neumann neighborhoods, has been examined per se, i.e., without continualization. The tessellation is uniform in the sense that domain boundaries consist of tiles. Similarities and distinctions between discrete and continual models are discussed.
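
    A small sketch of such a discrete tessellation (periodic boundaries and the tie-breaking rule are simulation assumptions): seeds nucleate simultaneously on a square grid and every domain grows one tile per step through von Neumann (4-cell) neighborhoods, so the final domain boundaries consist of tiles.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    L, n_seeds = 60, 15
    label = np.full((L, L), -1, dtype=int)        # -1 marks untransformed tiles
    seeds = rng.choice(L * L, size=n_seeds, replace=False)
    label[np.unravel_index(seeds, (L, L))] = np.arange(n_seeds)

    while (label == -1).any():
        new = label.copy()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # von Neumann moves
            shifted = np.roll(label, (dr, dc), axis=(0, 1))  # periodic boundaries
            grow = (new == -1) & (shifted >= 0)
            new[grow] = shifted[grow]             # ties go to the first direction processed
        label = new

    sizes = np.bincount(label.ravel(), minlength=n_seeds)
    print("domain sizes:", sorted(sizes.tolist(), reverse=True))
    ```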

  2. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequency from two types of devices. In this paper, we first make convex relaxation for discrete variables, then reformulate the non-convex structure into a convex optimization problem together with pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.

  3. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
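
    The complex-variable approach mentioned above can be illustrated with the standard complex-step derivative trick (a generic sketch, not the paper's adjoint implementation): for an analytic function, the imaginary part of f(x + ih) equals h f'(x) up to O(h^3), so derivatives are obtained without subtractive cancellation even for extremely small steps.

    ```python
    import numpy as np

    def f(x):
        # Illustrative scalar function standing in for a discrete residual or cost.
        return np.exp(x) * np.sin(x) / (1.0 + x * x)

    x0, h = 1.3, 1e-30
    d_complex = np.imag(f(x0 + 1j * h)) / h               # complex-step derivative
    d_central = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6      # central finite difference

    print(f"complex-step derivative  : {d_complex:.15f}")
    print(f"central finite difference: {d_central:.15f}")
    ```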

  4. Anomalous transport in disordered fracture networks: Spatial Markov model for dispersion with variable injection modes

    NASA Astrophysics Data System (ADS)

    Kang, Peter K.; Dentz, Marco; Le Borgne, Tanguy; Lee, Seunghak; Juanes, Ruben

    2017-08-01

    We investigate tracer transport on random discrete fracture networks that are characterized by the statistics of the fracture geometry and hydraulic conductivity. While it is well known that tracer transport through fractured media can be anomalous and particle injection modes can have a major impact on dispersion, the incorporation of injection modes into effective transport modeling has remained an open issue. The fundamental reason behind this challenge is that, even if the Eulerian fluid velocity is steady, the Lagrangian velocity distribution experienced by tracer particles evolves with time from its initial distribution, which is dictated by the injection mode, to a stationary velocity distribution. We quantify this evolution by a Markov model for particle velocities that are equidistantly sampled along trajectories. This stochastic approach allows for the systematic incorporation of the initial velocity distribution and quantifies the interplay between velocity distribution and spatial and temporal correlation. The proposed spatial Markov model is characterized by the initial velocity distribution, which is determined by the particle injection mode, the stationary Lagrangian velocity distribution, which is derived from the Eulerian velocity distribution, and the spatial velocity correlation length, which is related to the characteristic fracture length. This effective model leads to a time-domain random walk for the evolution of particle positions and velocities, whose joint distribution follows a Boltzmann equation. Finally, we demonstrate that the proposed model can successfully predict anomalous transport through discrete fracture networks with different levels of heterogeneity and arbitrary tracer injection modes.
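
    A minimal sketch of such a spatial Markov / time-domain random walk (the velocity classes, transition matrix, and injection weights are illustrative assumptions, not values from the paper): velocities are resampled from a Markov chain after every fixed distance increment, and flux-weighted versus uniform injection yields different arrival-time statistics.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    v_class = np.array([0.1, 1.0, 10.0])        # velocity classes
    P = np.array([[0.80, 0.15, 0.05],           # row-stochastic transition matrix
                  [0.15, 0.70, 0.15],
                  [0.05, 0.15, 0.80]])
    dx, n_steps, n_particles = 1.0, 100, 2000

    def arrival_times(p_init):
        """Time to cover n_steps*dx for particles injected with class weights p_init."""
        times = np.zeros(n_particles)
        state = rng.choice(3, size=n_particles, p=p_init)
        for _ in range(n_steps):
            times += dx / v_class[state]         # time-domain random walk increment
            # advance each particle's velocity class by one Markov step (vectorized)
            u = rng.random(n_particles)
            cum = np.cumsum(P[state], axis=1)
            state = (u[:, None] < cum).argmax(axis=1)
        return times

    t_flux = arrival_times(v_class / v_class.sum())   # flux-weighted injection
    t_unif = arrival_times(np.full(3, 1.0 / 3.0))     # uniform ("resident") injection
    print("median arrival time, flux-weighted:", np.median(t_flux))
    print("median arrival time, uniform      :", np.median(t_unif))
    ```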

  5. Functional entropy variables: A new methodology for deriving thermodynamically consistent algorithms for complex fluids, with particular reference to the isothermal Navier–Stokes–Korteweg equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ju, E-mail: jliu@ices.utexas.edu; Gomez, Hector; Evans, John A.

    2013-09-01

    We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.

  6. Affective norms of 875 Spanish words for five discrete emotional categories and two emotional dimensions.

    PubMed

    Hinojosa, J A; Martínez-García, N; Villalba-García, C; Fernández-Folgueiras, U; Sánchez-Carmona, A; Pozo, M A; Montoro, P R

    2016-03-01

    In the present study, we introduce affective norms for a new set of Spanish words, the Madrid Affective Database for Spanish (MADS), that were scored on two emotional dimensions (valence and arousal) and on five discrete emotional categories (happiness, anger, sadness, fear, and disgust), as well as on concreteness, by 660 Spanish native speakers. Measures of several objective psycholinguistic variables--grammatical class, word frequency, number of letters, and number of syllables--for the words are also included. We observed high split-half reliabilities for every emotional variable and a strong quadratic relationship between valence and arousal. Additional analyses revealed several associations between the affective dimensions and discrete emotions, as well as with some psycholinguistic variables. This new corpus complements and extends prior databases in Spanish and allows for designing new experiments investigating the influence of affective content in language processing under both dimensional and discrete theoretical conceptions of emotion. These norms can be downloaded as supplemental materials for this article from www.dropbox.com/s/o6dpw3irk6utfhy/Hinojosa%20et%20al_Supplementary%20materials.xlsx?dl=0 .

  7. Scattering in discrete random media with implications to propagation through rain. Ph.D. Thesis, George Washington Univ., Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J., Jr.

    1977-01-01

    The multiple scattering effects on wave propagation through a volume of discrete scatterers were investigated. The mean field and intensity for a distribution of scatterers was developed using a discrete random media formulation, and second order series expansions for the mean field and total intensity derived for one-dimensional and three-dimensional configurations. The volume distribution results were shown to proceed directly from the one-dimensional results. The multiple scattering intensity expansion was compared to the classical single scattering intensity and the classical result was found to represent only the first three terms in the total intensity expansion. The Foldy approximation to the mean field was applied to develop the coherent intensity, and was found to exactly represent all coherent terms of the total intensity.

  8. Contingency and statistical laws in replicate microbial closed ecosystems.

    PubMed

    Hekstra, Doeke R; Leibler, Stanislas

    2012-05-25

    Contingency, the persistent influence of past random events, pervades biology. To what extent, then, is each course of ecological or evolutionary dynamics unique, and to what extent are these dynamics subject to a common statistical structure? Addressing this question requires replicate measurements to search for emergent statistical laws. We establish a readily replicated microbial closed ecosystem (CES), sustaining its three species for years. We precisely measure the local population density of each species in many CES replicates, started from the same initial conditions and kept under constant light and temperature. The covariation among replicates of the three species densities acquires a stable structure, which could be decomposed into discrete eigenvectors, or "ecomodes." The largest ecomode dominates population density fluctuations around the replicate-average dynamics. These fluctuations follow simple power laws consistent with a geometric random walk. Thus, variability in ecological dynamics can be studied with CES replicates and described by simple statistical laws. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Improved Results for Route Planning in Stochastic Transportation Networks

    NASA Technical Reports Server (NTRS)

    Boyan, Justin; Mitzenmacher, Michael

    2000-01-01

    In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
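
    The role of memoryless arrivals can be illustrated with a one-stop sketch (the lines, rates, ride times, and onward expectations are illustrative assumptions, and this is not Datar and Ranade's or the authors' full algorithm): if the traveller commits to boarding whichever line in a subset S arrives first, the expected total time has a simple closed form, and the best subset can be found by enumeration.

    ```python
    from itertools import combinations

    # For exponential (memoryless) arrivals with rates lam_i, waiting for the first
    # bus in a subset S takes expected time 1/sum(lam_i), and line i arrives first
    # with probability lam_i/sum(lam_i), so
    #   E[S] = (1 + sum_i lam_i * (ride_i + onward_i)) / sum_i lam_i.
    lines = {
        "A": (1 / 10, 12.0, 5.0),    # (arrival rate per minute, ride time, E[time onward])
        "B": (1 / 5, 20.0, 0.0),
        "C": (1 / 15, 8.0, 6.0),
    }

    def expected_time(subset):
        lam = sum(lines[k][0] for k in subset)
        num = 1 + sum(lines[k][0] * (lines[k][1] + lines[k][2]) for k in subset)
        return num / lam

    best = min((expected_time(s), s)
               for r in range(1, len(lines) + 1)
               for s in combinations(lines, r))
    print(f"best subset to wait for: {best[1]}, expected time = {best[0]:.1f} min")
    ```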

  10. Nonautonomous discrete bright soliton solutions and interaction management for the Ablowitz-Ladik equation.

    PubMed

    Yu, Fajun

    2015-03-01

    We present the nonautonomous discrete bright soliton solutions and their interactions in the discrete Ablowitz-Ladik (DAL) equation with variable coefficients, which possesses complicated wave propagation in time and differs from the usual bright soliton waves. The differential-difference similarity transformation allows us to relate the discrete bright soliton solutions of the inhomogeneous DAL equation to the solutions of the homogeneous DAL equation. Propagation and interaction behaviors of the nonautonomous discrete solitons are analyzed through the one- and two-soliton solutions. We study the discrete snaking behaviors, parabolic behaviors, and interaction behaviors of the discrete solitons. In addition, the interaction management with free functions and dynamic behaviors of these solutions is investigated analytically, which have certain applications in electrical and optical systems.

  11. Digital double random amplitude image encryption method based on the symmetry property of the parametric discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Bekkouche, Toufik; Bouguezel, Saad

    2018-03-01

    We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
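
    A toy illustration of the symmetry property the method relies on (this is not the authors' cipher; the masks and packing are illustrative): the discrete Fourier transform of a real-valued sequence is conjugate-symmetric, so a length-N real signal is fully described by N real transform degrees of freedom, which is what allows an encrypted image to remain real-valued.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.random(8)                        # a real-valued "image row"
    X = np.fft.fft(x)
    # Conjugate symmetry of the DFT of a real sequence: X[k] = conj(X[N-k]).
    print("conjugate symmetry:", np.allclose(X[1:], np.conj(X[1:][::-1])))

    # Toy real-to-real scrambling under that constraint: random amplitude masks
    # before and after the transform, keeping only the independent half-spectrum.
    mask1 = rng.random(8) + 0.5
    packed = np.fft.rfft(x * mask1)          # N/2+1 values <-> N real degrees of freedom
    mask2 = rng.random(packed.size) + 0.5
    cipher = packed * mask2
    recovered = np.fft.irfft(cipher / mask2) / mask1
    print("decryption recovers the input:", np.allclose(recovered, x))
    ```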

  12. Demands, skill discretion, decision authority and social climate at work as determinants of major depression in a 3-year follow-up study.

    PubMed

    Fandiño-Losada, Andrés; Forsell, Yvonne; Lundberg, Ingvar

    2013-07-01

    The psychosocial work environment may be a determinant of the development and course of depressive disorders, but the literature shows inconsistent findings. Thus, the aim of this study is to determine longitudinal effects of the job demands-control-support model (JDCSM) variables on the occurrence of major depression among working men and women from the general population. The sample comprised 4,710 working women and men living in Stockholm, who answered the same questionnaire twice, 3 years apart, who were not depressed during the first wave and had the same job in both waves. The questionnaire included JDCSM variables (demands, skill discretion, decision authority and social climate) and other co-variables (income, education, occupational group, social support, help and small children at home, living with an adult and depressive symptoms at time 1; and negative life events at time 2). Multiple logistic regressions were run to calculate odds ratios of having major depression at time 2, after adjustment for other JDCSM variables and co-variables. Among women, inadequate work social climate was the only significant risk indicator for major depression. Surprisingly, among men, high job demands and low skill discretion appeared as protective factors against major depression. The results showed a strong relationship between inadequate social climate and major depression among women, while there were no certain effects for the remaining exposure variables. Among men, few cases of major depression hampered well-founded conclusions regarding our findings of low job demands and high skill discretion as related to major depression.

  13. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE PAGES

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    2017-10-13

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis, that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
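
    A small sketch of the kind of information signature involved (restricted to two binary inputs for brevity, with entropies in bits): different discrete functions, such as XOR and AND, carry very different amounts of information about their output in each single input, which is the sort of distinction the classification above exploits.

    ```python
    from collections import Counter
    from itertools import product

    import numpy as np

    def entropy(counts):
        """Shannon entropy (bits) of a distribution given by raw counts."""
        p = np.asarray(list(counts), dtype=float)
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def info_profile(f):
        """H(Z), I(X;Z), I(Y;Z) for Z = f(X, Y) with uniform binary inputs."""
        states = [(x, y, f(x, y)) for x, y in product((0, 1), repeat=2)]
        Hz = entropy(Counter(z for _, _, z in states).values())
        Hxz = entropy(Counter((x, z) for x, _, z in states).values())
        Hyz = entropy(Counter((y, z) for _, y, z in states).values())
        Hx = Hy = 1.0                            # uniform binary inputs
        return Hz, Hx + Hz - Hxz, Hy + Hz - Hyz

    for name, f in [("XOR", lambda x, y: x ^ y), ("AND", lambda x, y: x & y)]:
        Hz, Ixz, Iyz = info_profile(f)
        print(f"{name}: H(Z) = {Hz:.2f}  I(X;Z) = {Ixz:.2f}  I(Y;Z) = {Iyz:.2f}")
    ```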

  15. Evaluating the Effectiveness of Two Commonly Used Discrete Trial Procedures for Teaching Receptive Discrimination to Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Gutierrez, Anibal, Jr.; Hale, Melissa N.; O'Brien, Heather A.; Fischer, Aaron J.; Durocher, Jennifer S.; Alessandri, Michael

    2009-01-01

    Discrete trial teaching procedures have been demonstrated to be effective in teaching a variety of important skills for children with autism spectrum disorders (ASD). Although all discrete trial programs are based in the principles of applied behavior analysis, some variability exists between programs with regards to the precise teaching…

  16. Security of a discretely signaled continuous variable quantum key distribution protocol for high rate systems.

    PubMed

    Zhang, Zheshen; Voss, Paul L

    2009-07-06

    We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.

  17. Modeling of Electromagnetic Scattering by Discrete and Discretely Heterogeneous Random Media by Using Numerically Exact Solutions of the Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2017-01-01

    In this paper, we discuss some aspects of numerical modeling of electromagnetic scattering by discrete random medium by using numerically exact solutions of the macroscopic Maxwell equations. Typical examples of such media are clouds of interstellar dust, clouds of interplanetary dust in the Solar system, dusty atmospheres of comets, particulate planetary rings, clouds in planetary atmospheres, aerosol particles with numerous inclusions and so on. Our study is based on the results of extensive computations of different characteristics of electromagnetic scattering obtained by using the superposition T-matrix method which represents a direct computer solver of the macroscopic Maxwell equations for an arbitrary multisphere configuration. As a result, in particular, we clarify the range of applicability of the low-density theories of radiative transfer and coherent backscattering as well as of widely used effective-medium approximations.

  18. Comment on "Route from discreteness to the continuum for the Tsallis q-entropy"

    NASA Astrophysics Data System (ADS)

    Ou, Congjie; Abe, Sumiyoshi

    2018-06-01

    Several years ago, it was discussed that nonlogarithmic entropies, such as the Tsallis q-entropy, cannot be applied to systems with continuous variables. Now, in their recent paper [Phys. Rev. E 97, 012104 (2018), 10.1103/PhysRevE.97.012104], Oikonomou and Bagci have modified the form of the q-entropy for discrete variables in such a way that its continuum limit exists. Here, it is shown that this modification violates the expandability property of entropy, and their work is actually supporting evidence for the absence of the q-entropy for systems with continuous variables.

  19. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

  20. Quantum circuit dynamics via path integrals: Is there a classical action for discrete-time paths?

    NASA Astrophysics Data System (ADS)

    Penney, Mark D.; Enshan Koh, Dax; Spekkens, Robert W.

    2017-07-01

    It is straightforward to compute the transition amplitudes of a quantum circuit using the sum-over-paths methodology when the gates in the circuit are balanced, where a balanced gate is one for which all non-zero transition amplitudes are of equal magnitude. Here we consider the question of whether, for such circuits, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action, as they are for continuous-time paths. We show how to do so for certain kinds of quantum circuits, namely, Clifford circuits where the elementary systems are continuous-variable systems or discrete systems of odd-prime dimension. These types of circuit are distinguished by having phase-space representations that serve to define their classical counterparts. For discrete systems, the phase-space coordinates are also discrete variables. We show that for each gate in the generating set, one can associate a symplectomorphism on the phase-space and to each of these one can associate a generating function, defined on two copies of the configuration space. For discrete systems, the latter association is achieved using tools from algebraic geometry. Finally, we show that if the action functional for a discrete-time path through a sequence of gates is defined using the sum of the corresponding generating functions, then it yields the correct relative phases for the path-sum expression. These results are likely to be relevant for quantizing physical theories where time is fundamentally discrete, characterizing the classical limit of discrete-time quantum dynamics, and proving complexity results for quantum circuits.

  1. Discrete structures in continuum descriptions of defective crystals.

    PubMed

    Parry, G P

    2016-04-28

    I discuss various mathematical constructions that combine together to provide a natural setting for discrete and continuum geometric models of defective crystals. In particular, I provide a quite general list of 'plastic strain variables', which quantifies inelastic behaviour, and exhibit rigorous connections between discrete and continuous mathematical structures associated with crystalline materials that have a correspondingly general constitutive specification. © 2016 The Author(s).

  2. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  3. Biomechanical symmetry in elite rugby union players during dynamic tasks: an investigation using discrete and continuous data analysis techniques.

    PubMed

    Marshall, Brendan; Franklyn-Miller, Andrew; Moran, Kieran; King, Enda; Richter, Chris; Gore, Shane; Strike, Siobhán; Falvey, Éanna

    2015-01-01

    While measures of asymmetry may provide a means of identifying individuals predisposed to injury, normative asymmetry values for challenging sport specific movements in elite athletes are currently lacking in the literature. In addition, previous studies have typically investigated symmetry using discrete point analyses alone. This study examined biomechanical symmetry in elite rugby union players using both discrete point and continuous data analysis techniques. Twenty elite injury free international rugby union players (mean ± SD: age 20.4 ± 1.0 years; height 1.86 ± 0.08 m; mass 98.4 ± 9.9 kg) underwent biomechanical assessment. A single leg drop landing, a single leg hurdle hop, and a running cut were analysed. Peak joint angles and moments were examined in the discrete point analysis while analysis of characterising phases (ACP) techniques were used to examine the continuous data. Dominant side was compared to non-dominant side using dependent t-tests for normally distributed data or Wilcoxon signed-rank test for non-normally distributed data. The significance level was set at α = 0.05. The majority of variables were found to be symmetrical with a total of 57/60 variables displaying symmetry in the discrete point analysis and 55/60 in the ACP. The five variables that were found to be asymmetrical were hip abductor moment in the drop landing (p = 0.02), pelvis lift/drop in the drop landing (p = 0.04) and hurdle hop (p = 0.02), ankle internal rotation moment in the cut (p = 0.04) and ankle dorsiflexion angle also in the cut (p = 0.01). The ACP identified two additional asymmetries not identified in the discrete point analysis. Elite injury free rugby union players tended to exhibit bi-lateral symmetry across a range of biomechanical variables in a drop landing, hurdle hop and cut. This study provides useful normative values for inter-limb symmetry in these movement tests. When examining symmetry it is recommended to incorporate continuous data analysis techniques rather than a discrete point analysis alone; a discrete point analysis was unable to detect two of the five asymmetries identified.

  4. New preconditioning strategy for Jacobian-free solvers for variably saturated flows with Richards’ equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil

    2016-04-29

    We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empiric quality control of intermediate iterates, complexity of the employed discretization method and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We will show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves convergence of the existing Jacobian-free solvers 3-20 times. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than a few widely used Jacobian-free solvers.

  5. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.

  6. Practical secure quantum communications

    NASA Astrophysics Data System (ADS)

    Diamanti, Eleni

    2015-05-01

    We review recent advances in the field of quantum cryptography, focusing in particular on practical implementations of two central protocols for quantum network applications, namely key distribution and coin flipping. The former allows two parties to share secret messages with information-theoretic security, even in the presence of a malicious eavesdropper in the communication channel, which is impossible with classical resources alone. The latter enables two distrustful parties to agree on a random bit, again with information-theoretic security, and with a cheating probability lower than the one that can be reached in a classical scenario. Our implementations rely on continuous-variable technology for quantum key distribution and on a plug and play discrete-variable system for coin flipping, and necessitate a rigorous security analysis adapted to the experimental schemes and their imperfections. In both cases, we demonstrate the protocols with provable security over record long distances in optical fibers and assess the performance of our systems as well as their limitations. The reported advances offer a powerful toolbox for practical applications of secure communications within future quantum networks.

  7. Electrolytic plating apparatus for discrete microsized particles

    DOEpatents

    Mayer, Anton

    1976-11-30

    Method and apparatus are disclosed for electrolytically producing very uniform coatings of a desired material on discrete microsized particles. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with a powered cathode for a time sufficient for such to occur.

  8. Electroless plating apparatus for discrete microsized particles

    DOEpatents

    Mayer, Anton

    1978-01-01

    Method and apparatus are disclosed for producing very uniform coatings of a desired material on discrete microsized particles by electroless techniques. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with each other for a time sufficient for such to occur.

  9. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
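
    As background for the models named above, a minimal sketch in Python (with hypothetical simulated data) of the standard way a discrete-time hazard model is fit: expand each subject into person-period records and run a logistic regression of the event indicator on interval dummies plus treatment. This illustrates the baseline model only, not the robust marginal estimator proposed in the paper.

```python
# Illustrative discrete-time hazard model as logistic regression on person-period data.
# Data layout, hazards, and the treatment effect are assumptions, not from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
treat = rng.integers(0, 2, n)                     # randomized treatment arm
base_hazard = np.array([0.15, 0.20, 0.25, 0.30, 0.35])   # hazard at visits 1..5

rows = []
for i in range(n):
    for t, h in enumerate(base_hazard, start=1):
        p = h * (0.6 if treat[i] else 1.0)        # treatment lowers the discrete hazard
        event = rng.random() < p
        rows.append((t, treat[i], int(event)))
        if event:
            break                                  # no person-period rows after failure

rows = np.array(rows)
period_dummies = (rows[:, [0]] == np.arange(1, 6)).astype(float)  # one column per interval
X = np.column_stack([period_dummies, rows[:, 1]])                 # interval effects + treatment
y = rows[:, 2]

fit = sm.Logit(y, X).fit(disp=False)
print(fit.params[-1])        # log-odds ratio for treatment on the discrete hazard
```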

  10. Generic emergence of power law distributions and Lévy-Stable intermittent fluctuations in discrete logistic systems

    NASA Astrophysics Data System (ADS)

    Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin

    1998-08-01

    The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form $w_i(t+1) = \lambda(t)\, w_i(t) + a\,\bar{w}(t) - b\, w_i(t)\,\bar{w}(t)$ is studied by computer simulations. The variables $w_i$, $i = 1, \ldots, N$, are the individual system components and $\bar{w}(t) = (1/N)\sum_i w_i(t)$ is their average. The parameters $a$ and $b$ are constants, while $\lambda(t)$ is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution $P(w,t)$ of the system components $w_i$ turns out to fulfill a Pareto power law $P(w,t) \sim w^{-1-\alpha}$. The time evolution of $\bar{w}(t)$ presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index $\alpha$, showing an intricate relation between the distribution of the $w_i$'s at a given time and the temporal fluctuations of their average.
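
    For concreteness, a short Python sketch of the discrete logistic dynamics quoted above, under the assumption (one common variant of such models) that the multiplicative factor is drawn independently for each component at every step; the distribution and the values of a, b and N are illustrative only.

```python
# Illustrative simulation of w_i(t+1) = lambda(t)*w_i(t) + a*wbar(t) - b*w_i(t)*wbar(t).
# Parameter values and the per-component lambda draws are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
N, T = 1000, 20_000
a, b = 0.0001, 0.0001
w = np.ones(N)

for _ in range(T):
    lam = rng.uniform(0.9, 1.1, size=N)   # random multiplicative factor per component
    wbar = w.mean()
    w = lam * w + a * wbar - b * w * wbar

# The component distribution is expected to develop a heavy (Pareto-like) upper tail.
print("mean:", w.mean(), " 99th percentile:", np.quantile(w, 0.99), " max:", w.max())
```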

  11. The Wronskian solution of the constrained discrete Kadomtsev-Petviashvili hierarchy

    NASA Astrophysics Data System (ADS)

    Li, Maohua; He, Jingsong

    2016-05-01

    From the constrained discrete Kadomtsev-Petviashvili (cdKP) hierarchy, the discrete nonlinear Schrödinger (DNLS) equations have been derived. By means of the gauge transformation, the Wronskian solution of the DNLS equations has been given. The $u_1$ of the cdKP hierarchy is a Y-type soliton solution after an odd number of gauge transformations, but it becomes a dark-bright soliton solution after an even number of gauge transformations. The role of the discrete variable $n$ in the profile of $u_1$ is discussed.

  12. Coherent Backscattering in the Cross-Polarized Channel

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Mackowski, Daniel W.

    2011-01-01

    We analyze the asymptotic behavior of the cross-polarized enhancement factor in the framework of the standard low-packing-density theory of coherent backscattering by discrete random media composed of spherically symmetric particles. It is shown that if the particles are strongly absorbing or if the smallest optical dimension of the particulate medium (i.e., the optical thickness of a plane-parallel slab or the optical diameter of a spherically symmetric volume) approaches zero, then the cross-polarized enhancement factor tends to its upper-limit value 2. This theoretical prediction is illustrated using direct computer solutions of the Maxwell equations for spherical volumes of discrete random medium.

  13. Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2011-01-01

    The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.

  14. Cramer-Rao Bound for Gaussian Random Processes and Applications to Radar Processing of Atmospheric Signals

    NASA Technical Reports Server (NTRS)

    Frehlich, Rod

    1993-01-01

    Calculations of the exact Cramer-Rao Bound (CRB) for unbiased estimates of the mean frequency, signal power, and spectral width of Doppler radar/lidar signals (a Gaussian random process) are presented. Approximate CRB's are derived using the Discrete Fourier Transform (DFT). These approximate results are equal to the exact CRB when the DFT coefficients are mutually uncorrelated. Previous high SNR limits for CRB's are shown to be inaccurate because the discrete summations cannot be approximated with integration. The performance of an approximate maximum likelihood estimator for mean frequency approaches the exact CRB for moderate signal to noise ratio and moderate spectral width.
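
    For reference, the generic bound underlying such calculations: for a zero-mean, circularly symmetric complex Gaussian data vector with parameter-dependent covariance (the usual model for Doppler radar/lidar returns), the Fisher information takes the standard Slepian-Bangs form below, and the CRB is its matrix inverse. This is the textbook expression, not the paper's specific closed-form results; for real-valued Gaussian data the trace term carries an extra factor of 1/2.

```latex
% Fisher information and CRB for x ~ CN(0, C(theta)), with theta collecting
% mean frequency, signal power, spectral width, noise power, etc.
\[
  [\mathbf{I}(\boldsymbol{\theta})]_{jk}
    = \operatorname{tr}\!\left[
        \mathbf{C}^{-1}\,\frac{\partial \mathbf{C}}{\partial \theta_j}\,
        \mathbf{C}^{-1}\,\frac{\partial \mathbf{C}}{\partial \theta_k}
      \right],
  \qquad
  \operatorname{var}\!\left(\hat{\theta}_j\right)\;\ge\;
  \left[\mathbf{I}^{-1}(\boldsymbol{\theta})\right]_{jj}.
\]
```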

  15. Discrete disorder models for many-body localization

    NASA Astrophysics Data System (ADS)

    Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub

    2018-04-01

    Using exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Only for the binary distribution significant deviations are observed.

  16. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
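
    A minimal Python sketch of the first estimator mentioned above: a Gaussian kernel density estimate whose scaling factor is set by a simple reference rule. The rule-of-thumb bandwidth and the test data are assumptions for illustration; the report's interactive/automatic choice of the scaling factor and its discrete maximum penalized likelihood estimator are not reproduced here.

```python
# Illustrative Gaussian kernel density estimator with a rule-of-thumb scaling factor.
import numpy as np

def kde(sample, grid, h=None):
    """Gaussian kernel density estimate of `sample` evaluated on `grid`."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    if h is None:
        h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)   # Silverman-style reference rule
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 400)
grid = np.linspace(-4.0, 4.0, 201)
f_hat = kde(x, grid)
print((f_hat * (grid[1] - grid[0])).sum())   # Riemann sum of the estimate, close to 1
```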

  17. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.

  18. Discrete optimal control approach to a four-dimensional guidance problem near terminal areas

    NASA Technical Reports Server (NTRS)

    Nagarajan, N.

    1974-01-01

    Description of a computer-oriented technique to generate the necessary control inputs to guide an aircraft in a given time from a given initial state to a prescribed final state subject to the constraints on airspeed, acceleration, and pitch and bank angles of the aircraft. A discrete-time mathematical model requiring five state variables and three control variables is obtained, assuming steady wind and zero sideslip. The guidance problem is posed as a discrete nonlinear optimal control problem with a cost functional of Bolza form. A solution technique for the control problem is investigated, and numerical examples are presented. It is believed that this approach should prove to be useful in automated air traffic control schemes near large terminal areas.

  19. Low energy physical activity recognition system on smartphones.

    PubMed

    Soria Morillo, Luis Miguel; Gonzalez-Abril, Luis; Ortega Ramirez, Juan Antonio; de la Concepcion, Miguel Angel Alvarez

    2015-03-03

    An innovative approach to physical activity recognition based on the use of discrete variables obtained from accelerometer sensors is presented. The system first performs a discretization process for each variable, which allows efficient recognition of activities performed by users using as little energy as possible. To this end, an innovative discretization and classification technique is presented based on the χ2 distribution. Furthermore, the entire recognition process is executed on the smartphone, which determines not only the activity performed, but also the frequency at which it is carried out. These techniques and the new classification system presented reduce energy consumption caused by the activity monitoring system. The energy saved increases smartphone usage time to more than 27 h without recharging while maintaining accuracy.

  20. Evaluation of the Navy's Sea/Shore Flow Policy

    DTIC Science & Technology

    2016-06-01

    CNA developed an independent Discrete-Event Simulation model to evaluate and assess the effect of...a more steady manning level, but the variability remains, even if the system is optimized. In building a Discrete-Event Simulation model, we...steady-state model. In FY 2014, CNA developed a Discrete-Event Simulation model to evaluate the impact of sea/shore flow policy (the DES-SSF model

  1. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
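
    In the same spirit, a short Python sketch that draws a discrete random load history with one of the non-Gaussian amplitude distributions listed above (Weibull) and tabulates its peak exceedances; the distribution parameters and load levels are illustrative assumptions, not the values used in the 1971 study.

```python
# Illustrative discrete random load history with a Weibull amplitude distribution.
import numpy as np

rng = np.random.default_rng(3)
loads = 1.5 * rng.weibull(a=2.0, size=10_000)      # shape 2.0, arbitrary scale factor

# A "peak" is a sample that exceeds both of its neighbours in the discrete history.
interior = loads[1:-1]
peaks = interior[(interior > loads[:-2]) & (interior > loads[2:])]

# Cumulative peak-exceedance counts, the statistic compared with flight-load spectra.
for lev in np.linspace(0.0, loads.max(), 10):
    print(f"load >= {lev:5.2f}: {np.sum(peaks >= lev):5d} peaks")
```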

  2. A study of renal blood flow regulation using the discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Pavlova, Olga N.; Mosekilde, Erik; Sosnovtseva, Olga V.

    2010-02-01

    In this paper we provide a way to distinguish features of renal blood flow autoregulation mechanisms in normotensive and hypertensive rats based on the discrete wavelet transform. Using the variability of the wavelet coefficients we show distinctions that occur between the normal and pathological states. A reduction of this variability in hypertension is observed on the microscopic level of the blood flow in efferent arteriole of single nephrons. This reduction is probably associated with higher flexibility of healthy cardiovascular system.

  3. A 24 km fiber-based discretely signaled continuous variable quantum key distribution system.

    PubMed

    Dinh Xuan, Quyen; Zhang, Zheshen; Voss, Paul L

    2009-12-21

    We report a continuous variable key distribution system that achieves a final secure key rate of 3.45 kilobits/s over a distance of 24.2 km of optical fiber. The protocol uses discrete signaling and post-selection to improve reconciliation speed and quantifies security by means of quantum state tomography. Polarization multiplexing and a frequency translation scheme permit transmission of a continuous wave local oscillator and suppression of noise from guided acoustic wave Brillouin scattering by more than 27 dB.

  4. Dependent scattering and absorption by densely packed discrete spherical particles: Effects of complex refractive index

    NASA Astrophysics Data System (ADS)

    Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.

    2017-07-01

    Due to the dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems, and puts emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of the scattering system with different complex refractive indexes are obtained by both the electromagnetic method and the radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to consider the dependent scattering effects. The results show that for densely packed discrete random media composed of particles of intermediate size parameter (6.964 in this study), the demarcation line between independent and dependent scattering has remarkable connections with the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with higher refractive index contrasts between the particles and host medium and higher particle absorption indexes are more likely to show stronger dependent characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has a weak effect on the dependent scattering correction at large phase shift parameters.

  5. Influences of system uncertainties on the numerical transfer path analysis of engine systems

    NASA Astrophysics Data System (ADS)

    Acri, A.; Nijman, E.; Acri, A.; Offner, G.

    2017-10-01

    Practical mechanical systems operate with some degree of uncertainty. In numerical models uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs or from rapidly changing forcing that can be best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular in this paper, Wishart random matrix theory is applied on a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. Examples of the effects of different levels of uncertainties are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
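
    To make the random-matrix ingredient concrete, a small Python sketch (with an invented 3x3 nominal matrix and dispersion level, not the paper's powertrain data) that draws Wishart-distributed realizations of a positive-definite parameter matrix whose ensemble mean equals the nominal matrix.

```python
# Illustrative Wishart random variations of a nominal positive-definite parameter
# matrix (e.g., a stiffness matrix); the matrix and degrees of freedom are assumptions.
import numpy as np
from scipy.stats import wishart

K_nominal = np.array([[4.0, 1.0, 0.0],
                      [1.0, 3.0, 0.5],
                      [0.0, 0.5, 2.0]])
dof = 30                                   # larger dof -> smaller random scatter
samples = wishart(df=dof, scale=K_nominal / dof).rvs(size=500, random_state=0)

print(samples.mean(axis=0))                # close to K_nominal
print(samples.std(axis=0))                 # statistical scatter of each matrix entry
```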

  6. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution--especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.

  7. Discrete-time systems with random switches: From systems stability to networks synchronization.

    PubMed

    Guo, Yao; Lin, Wei; Ho, Daniel W C

    2016-03-01

    In this article, we develop some approaches, which enable us to more accurately and analytically identify the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow for the case that the elements in the switching connection matrix even obey some unbounded and continuous-valued distributions. In addition to the almost sure stability, we further investigate the almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that a chaotic dynamics in the synchronization manifold is preserved when statistical parameters enter some almost sure synchronization region established by the developed approach. Moreover, some delicate configurations are considered on probability space for ensuring synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented by setting only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in the randomly switching complex networks even with very large population sizes, which cannot be easily realized in non-switching but deterministically connected networks.
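
    A stripped-down Python sketch of the kind of system studied above: a two-dimensional discrete-time linear system whose matrix is redrawn at random every step, with almost sure stability judged from the empirical top Lyapunov exponent of the matrix product. The two matrices and the switching probability are illustrative assumptions, not the paper's criteria or examples.

```python
# Illustrative randomly switched discrete-time linear system and its estimated
# top Lyapunov exponent (negative suggests almost sure stability).
import numpy as np

rng = np.random.default_rng(4)
A1 = np.array([[0.5, 1.2], [0.0, 0.5]])    # individually stable (spectral radius 0.5)
A2 = np.array([[1.1, 0.0], [0.3, 0.4]])    # individually unstable (spectral radius 1.1)
p = 0.5                                     # probability of picking A1 at each step

x = np.array([1.0, 1.0])
log_growth = 0.0
steps = 200_000
for _ in range(steps):
    A = A1 if rng.random() < p else A2
    x = A @ x
    norm = np.linalg.norm(x)
    log_growth += np.log(norm)
    x /= norm                               # renormalize to avoid overflow/underflow

lyap = log_growth / steps
print("estimated top Lyapunov exponent:", lyap,
      "-> a.s. stable" if lyap < 0 else "-> unstable")
```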

  9. Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra

    NASA Astrophysics Data System (ADS)

    Rezakhah, Saeid; Maleki, Yasaman

    2016-07-01

    Imposing some flexible sampling scheme, we provide a discretization of continuous time discrete scale invariant (DSI) processes, which is a subsidiary discrete time DSI process. Then, by introducing some simple random measure, we provide a second continuous time DSI process which provides a proper approximation of the first one. This enables us to provide a bilateral relation between covariance functions of the subsidiary process and the new continuous time processes. The time varying spectral representation of such a continuous time DSI process is characterized, and its spectrum is estimated. Also, a new method for estimating the time dependent Hurst parameter of such processes is provided, which gives a more accurate estimation. The performance of this estimation method is studied via simulation. Finally this method is applied to real data of the S&P 500 and Dow Jones indices for some special periods.

  10. Stochastic resetting in backtrack recovery by RNA polymerases

    NASA Astrophysics Data System (ADS)

    Roldán, Édgar; Lisica, Ana; Sánchez-Taltavull, Daniel; Grill, Stephan W.

    2016-06-01

    Transcription is a key process in gene expression, in which RNA polymerases produce a complementary RNA copy from a DNA template. RNA polymerization is frequently interrupted by backtracking, a process in which polymerases perform a random walk along the DNA template. Recovery of polymerases from the transcriptionally inactive backtracked state is determined by a kinetic competition between one-dimensional diffusion and RNA cleavage. Here we describe backtrack recovery as a continuous-time random walk, where the time for a polymerase to recover from a backtrack of a given depth is described as a first-passage time of a random walker to reach an absorbing state. We represent RNA cleavage as a stochastic resetting process and derive exact expressions for the recovery time distributions and mean recovery times from a given initial backtrack depth for both continuous and discrete-lattice descriptions of the random walk. We show that recovery time statistics do not depend on the discreteness of the DNA lattice when the rate of one-dimensional diffusion is large compared to the rate of cleavage.
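
    To illustrate the competition described above, a small continuous-time Monte Carlo sketch in Python: a backtracked polymerase hops one base deeper or shallower on a discrete lattice at rate k in each direction and, in parallel, RNA cleavage fires at rate r and returns it to the active state; the recovery time is the first time it either reaches depth 0 or is cleaved. The rates and initial depth are illustrative assumptions; the paper's exact first-passage expressions are not reproduced.

```python
# Illustrative Gillespie-style simulation of backtrack recovery with stochastic
# resetting (cleavage). Rates and the initial depth are assumptions.
import numpy as np

rng = np.random.default_rng(5)

def recovery_time(depth0, k=1.0, r=0.05):
    depth, t = depth0, 0.0
    while depth > 0:
        total = 2 * k + r                      # two hop directions + cleavage
        t += rng.exponential(1.0 / total)      # waiting time to the next event
        u = rng.random() * total
        if u < r:
            return t                           # cleavage: instant recovery (reset)
        depth += 1 if u < r + k else -1        # otherwise hop deeper or shallower
    return t                                   # diffused back to depth 0

times = np.array([recovery_time(depth0=5) for _ in range(20_000)])
print("mean recovery time from depth 5:", times.mean())
```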

  11. Variability of multilevel switching in scaled hybrid RS/CMOS nanoelectronic circuits: theory

    NASA Astrophysics Data System (ADS)

    Heittmann, Arne; Noll, Tobias G.

    2013-07-01

    A theory is presented which describes the variability of multilevel switching in scaled hybrid resistive-switching/CMOS nanoelectronic circuits. Variability is quantified in terms of conductance variation using the first two moments derived from the probability density function (PDF) of the RS conductance. For RS, which are based on the electrochemical metallization effect (ECM), this variability is - to some extent - caused by discrete events such as electrochemical reactions, which occur on atomic scale and are at random. The theory shows that the conductance variation depends on the joint interaction between the programming circuit and the resistive switch (RS), and explicitly quantifies the impact of RS device parameters and parameters of the programming circuit on the conductance variance. Using a current mirror as an exemplary programming circuit an upper limit of 2-4 bits (dependent on the filament surface area) is estimated as the storage capacity exploiting the multilevel capabilities of an ECM cell. The theoretical results were verified by Monte Carlo circuit simulations on a standard circuit simulation environment using an ECM device model which models the filament growth by a Poisson process. Contribution to the Topical Issue “International Semiconductor Conference Dresden-Grenoble - ISCDG 2012”, Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.

  12. Control approach development for variable recruitment artificial muscles

    NASA Astrophysics Data System (ADS)

    Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew

    2016-04-01

    This study characterizes hybrid control approaches for the variable recruitment of fluidic artificial muscles with double acting (antagonistic) actuation. Fluidic artificial muscle actuators have been explored by researchers due to their natural compliance, high force-to-weight ratio, and low cost of fabrication. Previous studies have attempted to improve system efficiency of the actuators through variable recruitment, i.e. using discrete changes in the number of active actuators. While current variable recruitment research utilizes manual valve switching, this paper details the current development of an online variable recruitment control scheme. By continuously controlling applied pressure and discretely controlling the number of active actuators, operation in the lowest possible recruitment state is ensured and working fluid consumption is minimized. Results provide insight into switching control scheme effects on working fluids, fabrication material choices, actuator modeling, and controller development decisions.

  13. Stability with large step sizes for multistep discretizations of stiff ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Majda, George

    1986-01-01

    One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.

  14. Discrete cosine and sine transforms generalized to honeycomb lattice

    NASA Astrophysics Data System (ADS)

    Hrivnák, Jiří; Motlochová, Lenka

    2018-06-01

    The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.

  15. The Propagation of Movement Variability in Time: A Methodological Approach for Discrete Movements with Multiple Degrees of Freedom.

    PubMed

    Krüger, Melanie; Straube, Andreas; Eggert, Thomas

    2017-01-01

    In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system has significantly profited from the emergence of a range of different mathematical approaches to analyze the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method allow to detect and interpret changes in movement coordination due to e.g., learning, external task constraints or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas, for cyclical movements (e.g., locomotion), mathematical approaches exist to investigate the propagation of movement variability in time (e.g., time series analysis), similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied for discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis allows to evaluate the coupling of inter-trial variability across different time points along the movement trajectory for multiple DoF-effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, as well as altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information. Further, it is shown that, by using canonical correlation analysis, alterations in movement coordination that indicate changes in the control strategy concerning the use of motor redundancy can be detected, which represents an important methodical advance in the context of neuromechanics.
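
    A bare-bones Python sketch of the proposed analysis idea, using synthetic data: inter-trial variability of a hypothetical 3-DoF joint-angle vector at two time points of a reaching movement, with a shared latent factor, and the first canonical correlation between the two time points computed with scikit-learn's CCA. The data dimensions and the generative model are assumptions for illustration only.

```python
# Illustrative canonical correlation between inter-trial variability at two time
# points of a multi-DoF discrete movement (synthetic data).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(6)
n_trials = 120
latent = rng.normal(size=(n_trials, 1))                  # variability shared across time
angles_t1 = latent @ np.array([[1.0, -0.5, 0.3]]) + 0.2 * rng.normal(size=(n_trials, 3))
angles_t2 = latent @ np.array([[0.8, 0.4, -0.6]]) + 0.2 * rng.normal(size=(n_trials, 3))

cca = CCA(n_components=1)
u, v = cca.fit_transform(angles_t1, angles_t2)
r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
print("first canonical correlation between time points:", r)   # high if variability propagates
```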

  16. Interpretation of the Lempel-Ziv complexity measure in the context of biomedical signal analysis.

    PubMed

    Aboy, Mateo; Hornero, Roberto; Abásolo, Daniel; Alvarez, Daniel

    2006-11-01

    Lempel-Ziv complexity (LZ) and derived LZ algorithms have been extensively used to solve information theoretic problems such as coding and lossless data compression. In recent years, LZ has been widely used in biomedical applications to estimate the complexity of discrete-time signals. Despite its popularity as a complexity measure for biosignal analysis, the question of LZ interpretability and its relationship to other signal parameters and to other metrics has not been previously addressed. We have carried out an investigation aimed at gaining a better understanding of the LZ complexity itself, especially regarding its interpretability as a biomedical signal analysis technique. Our results indicate that LZ is particularly useful as a scalar metric to estimate the bandwidth of random processes and the harmonic variability in quasi-periodic signals.
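
    For concreteness, a small Python sketch of the measure itself: the Lempel-Ziv (1976) phrase count of a median-binarized signal, computed with a Kaspar-Schuster-style parsing, and compared between a broadband (white-noise) and a narrowband (noisy sinusoid) process, consistent with the bandwidth interpretation above. Signal lengths and parameters are illustrative assumptions.

```python
# Illustrative LZ76 complexity of binarized discrete-time signals.
import numpy as np

def lz76(s):
    """Number of phrases in the exhaustive LZ76 parsing of the string s."""
    n = len(s)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                # no copy found in the prefix: start a new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def binarize(x):
    med = np.median(x)
    return "".join("1" if v > med else "0" for v in x)

rng = np.random.default_rng(7)
broadband = rng.normal(size=2000)                                   # white noise
t = np.arange(2000)
narrowband = np.sin(2 * np.pi * 0.01 * t) + 0.1 * rng.normal(size=2000)
print(lz76(binarize(broadband)), ">", lz76(binarize(narrowband)))   # higher LZ for wider bandwidth
```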

  17. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.

  18. Characterization of cancer and normal tissue fluorescence through wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2008-02-01

    The statistical and characteristic features of the polarized fluorescence spectra from cancer, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high and low frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630nm domain, for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. Continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuation is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.

  19. A latent class multiple constraint multiple discrete-continuous extreme value model of time use and goods consumption.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...

  20. Electromagnetic Scattering by Spheroidal Volumes of Discrete Random Medium

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2017-01-01

    We use the superposition T-matrix method to compare the far-field scattering matrices generated by spheroidal and spherical volumes of discrete random medium having the same volume and populated by identical spherical particles. Our results fully confirm the robustness of the previously identified coherent and diffuse scattering regimes and associated optical phenomena exhibited by spherical particulate volumes and support their explanation in terms of the interference phenomenon coupled with the order-of-scattering expansion of the far-field Foldy equations. We also show that increasing non-sphericity of particulate volumes causes discernible (albeit less pronounced) optical effects in forward and backscattering directions and explain them in terms of the same interference/multiple-scattering phenomenon.

  1. Numerical simulation of freshwater/seawater interaction in a dual-permeability karst system with conduits: the development of discrete-continuum VDFST-CFP model

    NASA Astrophysics Data System (ADS)

    Xu, Zexuan; Hu, Bill

    2016-04-01

    Dual-permeability karst aquifers of porous media and conduit networks with significantly different hydrological characteristics are widely distributed in the world. Discrete-continuum numerical models, such as MODFLOW-CFP and CFPv2, have been verified as appropriate approaches to simulate groundwater flow and solute transport in numerical modeling of karst hydrogeology. On the other hand, seawater intrusion associated with fresh groundwater resources contamination has been observed and investigated in a number of coastal aquifers, especially under conditions of sea level rise. Density-dependent numerical models including SEAWAT are able to quantitatively evaluate the seawater/freshwater interaction processes. A numerical model of variable-density flow and solute transport - conduit flow process (VDFST-CFP) is developed to provide a better description of seawater intrusion and submarine groundwater discharge in a coastal karst aquifer with conduits. The coupled discrete-continuum VDFST-CFP model applies the Darcy-Weisbach equation to simulate non-laminar groundwater flow in the conduit system, which is conceptualized and discretized as pipes, while the Darcy equation is still used in the continuum porous media. Density-dependent groundwater flow and solute transport equations with appropriate density terms in both the conduit and porous media systems are derived and numerically solved using a standard finite difference method with an implicit iteration procedure. Synthetic horizontal and vertical benchmarks are created to validate the newly developed VDFST-CFP model by comparing it with other numerical models such as the variable density SEAWAT, coupled constant density groundwater flow and solute transport MODFLOW/MT3DMS, and discrete-continuum CFPv2/UMT3D models. The VDFST-CFP model improves the simulation of density dependent seawater/freshwater mixing processes and exchanges between conduit and matrix. Continuum numerical models greatly overestimated the flow rate under turbulent flow conditions, but discrete-continuum models provide more accurate results. Parameter sensitivity analysis indicates that conduit diameter and friction factor, matrix hydraulic conductivity and porosity are important parameters that significantly affect variable-density flow and solute transport simulation. The pros and cons of model assumptions, conceptual simplifications and numerical techniques in VDFST-CFP are discussed. In general, the development of the VDFST-CFP model is an innovation in numerical modeling methodology and could be applied to quantitatively evaluate the seawater/freshwater interaction in coastal karst aquifers. Keywords: Discrete-continuum numerical model; Variable density flow and transport; Coastal karst aquifer; Non-laminar flow
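
    As a side note on the two flow laws contrasted above, a minimal Python sketch of the Darcy-Weisbach head-loss relation used for pipe-like conduits versus Darcy's law for the porous matrix; the parameter values are illustrative assumptions and this is not code from the VDFST-CFP model.

```python
# Illustrative comparison of the conduit (Darcy-Weisbach) and matrix (Darcy) flow laws.
import numpy as np

g = 9.81                     # gravitational acceleration, m/s^2

def darcy_weisbach_headloss(q, diameter, length, friction_factor):
    """Head loss (m) for volumetric flow q (m^3/s) through a circular conduit."""
    area = np.pi * diameter**2 / 4
    velocity = q / area
    return friction_factor * (length / diameter) * velocity**2 / (2 * g)

def darcy_flux(hydraulic_conductivity, head_gradient):
    """Specific discharge (m/s) in the porous matrix, q = -K dh/dl."""
    return -hydraulic_conductivity * head_gradient

print(darcy_weisbach_headloss(q=0.5, diameter=1.0, length=100.0, friction_factor=0.05))
print(darcy_flux(hydraulic_conductivity=1e-4, head_gradient=-0.01))
```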

  2. Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2012-01-01

    The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
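
    A minimal sketch of the kind of unweighted least-squares gradient reconstruction with a quadratic fit discussed above, written for a single 2D node with an irregular stencil; the stencil, field, and coordinates are assumed test data, not the paper's meshes.

```python
import numpy as np

def lsq_gradient_quadratic(xc, uc, xn, un):
    """Unweighted least-squares gradient at node xc (value uc) from
    neighbor coordinates xn (m x 2) and values un (m,). A quadratic
    polynomial u(xc + dx) ~ uc + g.dx + 0.5 dx.H.dx is fitted; only the
    gradient g is returned (the Hessian terms absorb curvature, so the
    gradient is nominally exact for quadratic fields)."""
    dx = xn - xc
    A = np.column_stack([
        dx[:, 0], dx[:, 1],                                           # linear terms
        0.5 * dx[:, 0]**2, dx[:, 0] * dx[:, 1], 0.5 * dx[:, 1]**2,    # quadratic terms
    ])
    coeff, *_ = np.linalg.lstsq(A, un - uc, rcond=None)
    return coeff[:2]    # [du/dx, du/dy]

# Tiny check on a quadratic field u = x^2 + 3y (assumed test data):
rng = np.random.default_rng(0)
xc = np.array([0.3, 0.2])
xn = xc + rng.normal(scale=0.1, size=(8, 2))    # irregular stencil
u = lambda p: p[..., 0]**2 + 3.0 * p[..., 1]
print(lsq_gradient_quadratic(xc, u(xc), xn, u(xn)))    # approximately [0.6, 3.0]
```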

  3. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
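
    The package itself is in R and provides the exact distribution; the following Python sketch only illustrates the quantity being modeled, i.e., gaps between successes of Bernoulli trials within a fixed-length series (trial length, success probability, and replicate counts are assumed values).

```python
import numpy as np

def success_distances(n, p, rng):
    """One series of n Bernoulli(p) trials; return the gaps (in trials)
    between consecutive successes within the fixed-length series."""
    hits = np.flatnonzero(rng.random(n) < p)
    return np.diff(hits)

rng = np.random.default_rng(42)
gaps = np.concatenate([success_distances(1000, 0.02, rng) for _ in range(500)])
print("mean gap:", gaps.mean())            # roughly 1/p = 50 for a long window
print("P(gap <= 5):", np.mean(gaps <= 5))  # grouping would inflate this fraction
```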

  4. Automation of Random Conical Tilt and Orthogonal Tilt Data Collection using Feature Based Correlation

    PubMed Central

    Yoshioka, Craig; Pulokas, James; Fellmann, Denis; Potter, Clinton S.; Milligan, Ronald A.; Carragher, Bridget

    2007-01-01

    Visualization by electron microscopy has provided many insights into the composition, quaternary structure, and mechanism of macromolecular assemblies. By preserving samples in stain or vitreous ice it is possible to image them as discrete particles, and from these images generate three-dimensional structures. This ‘single-particle’ approach suffers from two major shortcomings: it requires an initial model to reconstitute 2D data into a 3D volume, and it often fails when faced with conformational variability. Random conical tilt (RCT) and orthogonal tilt (OTR) are methods developed to overcome these problems, but the data collection required, particularly for vitreous ice specimens, is difficult and tedious. In this paper we present an automated approach to RCT/OTR data collection that removes the burden of manual collection and offers higher quality and throughput than is otherwise possible. We show example datasets collected under stain and cryo conditions and provide statistics related to the efficiency and robustness of the process. Furthermore, we describe the new algorithms that make this method possible, which include new calibrations, improved targeting and feature-based tracking. PMID:17524663

  5. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

    PubMed

    Probst, Dimitri; Petrovici, Mihai A; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
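
    For orientation, a plain Gibbs sampler over binary random variables with a Boltzmann-type target distribution is sketched below. It is not the authors' LIF network; it only illustrates the kind of sampling task (distributions over binary variables) that the spiking network is shown to perform. Weights and biases are assumed toy values.

```python
import numpy as np

def gibbs_sample_binary(W, b, n_steps, rng):
    """Gibbs sampling from p(z) proportional to exp(b.z + 0.5 z.W.z) over
    z in {0,1}^n. W must be symmetric with zero diagonal."""
    n = len(b)
    z = rng.integers(0, 2, size=n)
    samples = np.empty((n_steps, n), dtype=int)
    for t in range(n_steps):
        for k in range(n):
            # conditional probability that z_k = 1 given all other units
            u = b[k] + W[k] @ z - W[k, k] * z[k]
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples

rng = np.random.default_rng(1)
W = np.array([[0.0, 1.5], [1.5, 0.0]])   # mutual excitation (assumed toy values)
b = np.array([-0.5, -0.5])
s = gibbs_sample_binary(W, b, 20000, rng)
print(s.mean(axis=0))   # empirical marginals of the two binary variables
```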

  6. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2014-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion-reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled. PMID:25481130

  7. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons

    PubMed Central

    Probst, Dimitri; Petrovici, Mihai A.; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems. PMID:25729361

  8. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    NASA Astrophysics Data System (ADS)

    Chevalier, Michael W.; El-Samad, Hana

    2014-12-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion-reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.

  9. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    NASA Astrophysics Data System (ADS)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  10. Generalized chaos synchronization theorems for bidirectional differential equations and discrete systems with applications

    NASA Astrophysics Data System (ADS)

    Ji, Ye; Liu, Ting; Min, Lequan

    2008-05-01

    Two constructive generalized chaos synchronization (GCS) theorems for bidirectional differential equations and discrete systems are introduced. Using the two theorems, one can construct new chaos systems to make the system variables be in GCS. Five examples are presented to illustrate the effectiveness of the theoretical results.

  11. A coherent discrete variable representation method on a sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua -Gen

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  12. A coherent discrete variable representation method on a sphere

    DOE PAGES

    Yu, Hua -Gen

    2017-09-05

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  13. A discrete random walk on the hypercube

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyuan; Xiang, Yonghong; Sun, Weigang

    2018-03-01

    In this paper, we study the scaling for mean first-passage time (MFPT) of random walks on the hypercube and obtain a closed-form formula for the MFPT over all node pairs. We also determine the exponent of scaling efficiency characterizing the random walks and compare it with those of the existing networks. Finally we study the random walks on the hypercube with a located trap and provide a solution of the Kirchhoff index of the hypercube.
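
    A brute-force sketch, under assumed parameters, of the quantity studied above: the first-passage time of a discrete random walk on the n-dimensional hypercube, estimated by Monte Carlo rather than by the closed-form formula derived in the paper.

```python
import numpy as np

def hypercube_fpt(n, source, target, rng):
    """One first-passage time of a random walk on the n-dimensional
    hypercube: at each step a uniformly random bit of the current
    vertex label is flipped."""
    x, t = source, 0
    while x != target:
        x ^= 1 << rng.integers(n)   # flip one of the n coordinates
        t += 1
    return t

rng = np.random.default_rng(0)
n = 6
fpts = [hypercube_fpt(n, 0, (1 << n) - 1, rng) for _ in range(2000)]
print("estimated MFPT from 0...0 to 1...1 on Q_%d: %.1f" % (n, np.mean(fpts)))
```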

  14. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
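
    A minimal sketch of the common random number idea on an assumed birth-death toy model: the same random stream is reused for the nominal and perturbed simulations, so the finite-difference sensitivity estimator has a much smaller variance than with independent streams. This is not the paper's CRP coupling or its example systems.

```python
import numpy as np

def ssa_birth_death(k_birth, k_death, x0, t_end, rng):
    """Gillespie SSA for a birth-death process X -> X+1 (rate k_birth)
    and X -> X-1 (rate k_death * X); returns the state at t_end."""
    x, t = x0, 0.0
    while True:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        x += 1 if rng.random() * a0 < a1 else -1

def sensitivity_fd(k, dk, n, seed_pairing):
    """Finite-difference estimate of d E[X(t_end)] / d k_birth.
    seed_pairing=True reuses the same random stream for the nominal and
    perturbed runs (common random numbers), which couples the paths."""
    diffs = []
    for i in range(n):
        seeds = (i, i) if seed_pairing else (2 * i, 2 * i + 1)
        x_nom = ssa_birth_death(k, 1.0, 0, 10.0, np.random.default_rng(seeds[0]))
        x_per = ssa_birth_death(k + dk, 1.0, 0, 10.0, np.random.default_rng(seeds[1]))
        diffs.append((x_per - x_nom) / dk)
    return np.mean(diffs), np.std(diffs) / np.sqrt(n)

for crn in (False, True):
    est, err = sensitivity_fd(k=5.0, dk=0.5, n=2000, seed_pairing=crn)
    print("CRN" if crn else "IRN", "estimate %.2f +/- %.2f" % (est, err))
```

    With seed pairing enabled, the reported standard error of the estimator should drop noticeably, which is the effect the CRN and CRP methods exploit.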

  15. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers when they are deciding how they are going to get to their destination, discrete choice models, based on the random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficients distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models whose specifications are inconsistent with real behavior have been studied with simulation experiments.
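
    A hedged sketch of the simulation step mentioned above, for a plain mixed logit with independent normal random coefficients and a utility that is linear in the parameters; the Box-Cox exponents of the authors' model are not included, and the attribute values are assumed.

```python
import numpy as np

def mixed_logit_probability(x, beta_mean, beta_sd, n_draws, rng):
    """Simulated choice probabilities of a mixed logit model with
    independent normal random coefficients: P_i = E_beta[softmax(x @ beta)_i],
    approximated by averaging over n_draws draws of beta."""
    probs = np.zeros(x.shape[0])
    for _ in range(n_draws):
        beta = rng.normal(beta_mean, beta_sd)   # one draw of the random coefficients
        v = x @ beta                            # systematic utilities of the alternatives
        e = np.exp(v - v.max())                 # numerically stable softmax
        probs += e / e.sum()
    return probs / n_draws

# Illustrative 3-alternative example (cost and time attributes are assumed values):
x = np.array([[2.0, 30.0],
              [3.5, 20.0],
              [5.0, 10.0]])
rng = np.random.default_rng(7)
print(mixed_logit_probability(x, beta_mean=[-0.4, -0.05], beta_sd=[0.1, 0.02],
                              n_draws=5000, rng=rng))
```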

  16. Relation between random walks and quantum walks

    NASA Astrophysics Data System (ADS)

    Boettcher, Stefan; Falkner, Stefan; Portugal, Renato

    2015-05-01

    Based on studies of four specific networks, we conjecture a general relation between the walk dimensions d_w of discrete-time random walks and quantum walks with the (self-inverse) Grover coin. In each case, we find that d_w of the quantum walk takes on exactly half the value found for the classical random walk on the same geometry. Since walks on homogeneous lattices satisfy this relation trivially, our results for heterogeneous networks suggest that such a relation holds irrespective of whether translational invariance is maintained or not. To develop our results, we extend the renormalization-group analysis (RG) of the stochastic master equation to one with a unitary propagator. As in the classical case, the solution ρ(x, t) in space and time of this quantum-walk equation exhibits a scaling collapse for the variable x^{d_w}/t in the weak limit, which defines d_w and illuminates fundamental aspects of the walk dynamics, e.g., its mean-square displacement. We confirm the collapse for ρ(x, t) in each case with extensive numerical simulation. The exact values for d_w themselves demonstrate that RG is a powerful complementary approach to study the asymptotics of quantum walks that weak-limit theorems have not been able to access, such as for systems lacking translational symmetries beyond simple trees.

  17. Biomechanical contributions of the trunk and upper extremity in discrete versus cyclic reaching in survivors of stroke.

    PubMed

    Massie, Crystal L; Malcolm, Matthew P; Greene, David P; Browning, Raymond C

    2014-01-01

    Stroke rehabilitation interventions and assessments incorporate discrete and/or cyclic reaching tasks, yet no biomechanical comparison exists between these 2 movements in survivors of stroke. To characterize the differences between discrete (movements bounded by stationary periods) and cyclic (continuous repetitive movements) reaching in survivors of stroke. Seventeen survivors of stroke underwent kinematic motion analysis of discrete and cyclic reaching movements. Outcomes collected for each side included shoulder, elbow, and trunk range of motion (ROM); peak velocity; movement time; and spatial variability at target contact. Participants used significantly less shoulder and elbow ROM and significantly more trunk flexion ROM when reaching with the stroke-affected side compared with the less-affected side (P < .001). Participants used significantly more trunk rotation during cyclic reaching than discrete reaching with the stroke-affected side (P = .01). No post hoc differences were observed between tasks within the stroke-affected side for elbow, shoulder, and trunk flexion ROM. Peak velocity, movement time, and spatial variability were not different between discrete and cyclic reaching in the stroke-affected side. Survivors of stroke reached with altered kinematics when the stroke-affected side was compared with the less-affected side, yet there were few differences between discrete and cyclic reaching within the stroke-affected side. The greater trunk rotation during cyclic reaching represents a unique segmental strategy when using the stroke-affected side without consequences to end-point kinematics. These findings suggest that clinicians should consider the type of reaching required in therapeutic activities because of the continuous movement demands required with cyclic reaching.

  18. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.

  19. Quadratic Finite Element Method for 1D Deterministic Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolar, Jr., D R; Ferguson, J M

    2004-01-06

    In the discrete ordinates, or S_N, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving its angular discretization. Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry S_N equations. We develop an algorithm that shows faster convergence with angular resolution than conventional S_N algorithms.

  20. Exactly and quasi-exactly solvable 'discrete' quantum mechanics.

    PubMed

    Sasaki, Ryu

    2011-03-28

    A brief introduction to discrete quantum mechanics is given together with the main results on various exactly solvable systems. Namely, the intertwining relations, shape invariance, Heisenberg operator solutions, annihilation/creation operators and dynamical symmetry algebras, including the q-oscillator algebra and the Askey-Wilson algebra. A simple recipe to construct exactly and quasi-exactly solvable (QES) Hamiltonians in one-dimensional 'discrete' quantum mechanics is presented. It reproduces all the known Hamiltonians whose eigenfunctions consist of the Askey scheme of hypergeometric orthogonal polynomials of a continuous or a discrete variable. Several new exactly and QES Hamiltonians are constructed. The sinusoidal coordinate plays an essential role.

  1. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

    Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite.

  2. Using structural equation modeling to detect response shifts and true change in discrete variables: an application to the items of the SF-36.

    PubMed

    Verdam, Mathilde G E; Oort, Frans J; Sprangers, Mirjam A G

    2016-06-01

    The structural equation modeling (SEM) approach for detection of response shift (Oort in Qual Life Res 14:587-598, 2005. doi: 10.1007/s11136-004-0830-y ) is especially suited for continuous data, e.g., questionnaire scales. The present objective is to explain how the SEM approach can be applied to discrete data and to illustrate response shift detection in items measuring health-related quality of life (HRQL) of cancer patients. The SEM approach for discrete data includes two stages: (1) establishing a model of underlying continuous variables that represent the observed discrete variables, (2) using these underlying continuous variables to establish a common factor model for the detection of response shift and to assess true change. The proposed SEM approach was illustrated with data of 485 cancer patients whose HRQL was measured with the SF-36, before and after start of antineoplastic treatment. Response shift effects were detected in items of the subscales mental health, physical functioning, role limitations due to physical health, and bodily pain. Recalibration response shifts indicated that patients experienced relatively fewer limitations with "bathing or dressing yourself" (effect size d = 0.51) and less "nervousness" (d = 0.30), but more "pain" (d = -0.23) and less "happiness" (d = -0.16) after antineoplastic treatment as compared to the other symptoms of the same subscale. Overall, patients' mental health improved, while their physical health, vitality, and social functioning deteriorated. No change was found for the other subscales of the SF-36. The proposed SEM approach to discrete data enables response shift detection at the item level. This will lead to a better understanding of the response shift phenomena at the item level and therefore enhances interpretation of change in the area of HRQL.

  3. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

    A variable-gain, optimal, discrete, output feedback design approach that is applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete-control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  4. Influence of the random walk finite step on the first-passage probability

    NASA Astrophysics Data System (ADS)

    Klimenkova, Olga; Menshutin, Anton; Shchur, Lev

    2018-01-01

    A well-known connection between the first-passage probability of a random walk and the distribution of the electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length. We measure the first-passage probability of touching the absorbing sphere of radius R in 2D. We find a regular deviation of the first-passage probability from the exact function, which we attribute to the finite length of the random walk step.
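
    A minimal sketch of the experiment described above, under assumed parameters: a fixed-step walk started inside an absorbing circle, with the empirical first-passage (exit-angle) probability compared against the continuum harmonic-measure (Poisson kernel) value.

```python
import numpy as np

def exit_angle(r0, R, step, rng):
    """Walk with fixed step length from (r0, 0) inside a disk of radius R;
    return the polar angle at which the absorbing circle is first touched."""
    x, y = r0, 0.0
    while x * x + y * y < R * R:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        x += step * np.cos(phi)
        y += step * np.sin(phi)
    return np.arctan2(y, x)

def poisson_kernel(theta, r0, R):
    """Exact first-passage (harmonic-measure) density on the circle for a
    continuous Brownian walker started at distance r0 from the centre."""
    return (R * R - r0 * r0) / (2.0 * np.pi * (R * R - 2.0 * R * r0 * np.cos(theta) + r0 * r0))

rng = np.random.default_rng(3)
angles = np.array([exit_angle(r0=0.5, R=1.0, step=0.05, rng=rng) for _ in range(5000)])
# Compare the empirical fraction exiting within |theta| < pi/4 with the exact value
emp = np.mean(np.abs(angles) < np.pi / 4)
grid = np.linspace(-np.pi / 4, np.pi / 4, 2001)
exact = np.trapz(poisson_kernel(grid, 0.5, 1.0), grid)
print("finite-step walk: %.4f   continuum limit: %.4f" % (emp, exact))
```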

  5. Discrete Pathophysiology is Uncommon in Patients with Nonspecific Arm Pain.

    PubMed

    Kortlever, Joost T P; Janssen, Stein J; Molleman, Jeroen; Hageman, Michiel G J S; Ring, David

    2016-06-01

    Nonspecific symptoms are common in all areas of medicine. Patients and caregivers can be frustrated when an illness cannot be reduced to a discrete pathophysiological process that corresponds with the symptoms. We therefore asked the following questions: 1) Which demographic factors and psychological comorbidities are associated with change from an initial diagnosis of nonspecific arm pain to eventual identification of discrete pathophysiology that corresponds with symptoms? 2) What is the percentage of patients eventually diagnosed with discrete pathophysiology, what are those pathologies, and do they account for the symptoms? We evaluated 634 patients with an isolated diagnosis of nonspecific upper extremity pain to see if discrete pathophysiology was diagnosed on subsequent visits to the same hand surgeon, a different hand surgeon, or any physician within our health system for the same pain. There were too few patients with discrete pathophysiology at follow-up to address the primary study question. Definite discrete pathophysiology that corresponded with the symptoms was identified in subsequent evaluations by the index surgeon in one patient (0.16% of all patients) and cured with surgery (nodular fasciitis). Subsequent doctors identified possible discrete pathophysiology in one patient and speculative pathophysiology in four patients and the index surgeon identified possible discrete pathophysiology in four patients, but the five discrete diagnoses accounted for only a fraction of the symptoms. Nonspecific diagnoses are not harmful. Prospective randomized research is merited to determine if nonspecific, descriptive diagnoses are better for patients than specific diagnoses that imply pathophysiology in the absence of discrete verifiable pathophysiology.

  6. Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.

    2010-01-01

    NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next-generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process was developed and used to reuse models and model elements in other, less-detailed models. The DES team continues to innovate and expand DES capabilities to address KSC's planning needs.
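
    As a hedged illustration of the discrete event simulation concept described above (state changes only at discrete event times, inputs may include random variables), the sketch below runs a toy queue of vehicles competing for a limited number of pads; the entities, rates, and durations are assumed and unrelated to KSC models.

```python
import heapq
import random

def run_des(n_vehicles, mean_process_hours, n_pads, seed=0):
    """Minimal discrete-event simulation: vehicles arrive at random times
    and must be processed on one of n_pads; processing time is random
    (exponential). The state changes only at the discrete event times
    popped from the event queue."""
    rng = random.Random(seed)
    events = [(rng.expovariate(1.0 / 24.0), "arrival", i) for i in range(n_vehicles)]
    heapq.heapify(events)
    free_pads, queue, done = n_pads, [], []
    while events:
        t, kind, vid = heapq.heappop(events)
        if kind == "arrival":
            queue.append((t, vid))
        else:                      # "finish": a pad is released
            free_pads += 1
            done.append(t)
        while free_pads and queue:
            t_arr, v = queue.pop(0)
            free_pads -= 1
            heapq.heappush(events, (t + rng.expovariate(1.0 / mean_process_hours), "finish", v))
    return max(done)

print("campaign completed after %.1f hours" % run_des(10, 48.0, n_pads=2))
```

    Rerunning with different resource counts (n_pads) or random seeds mimics, in miniature, the kind of throughput and resource trade studies described above.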

  7. Leaping from Discrete to Continuous Independent Variables: Sixth Graders' Science Line Graph Interpretations

    ERIC Educational Resources Information Center

    Boote, Stacy K.; Boote, David N.

    2017-01-01

    Students often struggle to interpret graphs correctly, despite emphasis on graphic literacy in U.S. education standards documents. The purpose of this study was to describe challenges sixth graders with varying levels of science and mathematics achievement encounter when transitioning from interpreting graphs having discrete independent variables…

  8. Discrete-Trial Functional Analysis and Functional Communication Training with Three Individuals with Autism and Severe Problem Behavior

    ERIC Educational Resources Information Center

    Schmidt, Jonathan D.; Drasgow, Erik; Halle, James W.; Martin, Christian A.; Bliss, Sacha A.

    2014-01-01

    Discrete-trial functional analysis (DTFA) is an experimental method for determining the variables maintaining problem behavior in the context of natural routines. Functional communication training (FCT) is an effective method for replacing problem behavior, once identified, with a functionally equivalent response. We implemented these procedures…

  9. Lens elliptic gamma function solution of the Yang-Baxter equation at roots of unity

    NASA Astrophysics Data System (ADS)

    Kels, Andrew P.; Yamazaki, Masahito

    2018-02-01

    We study the root-of-unity limit of the lens elliptic gamma function solution of the star-triangle relation, for an integrable model with continuous and discrete spin variables. This limit involves taking an elliptic nome to a primitive rN-th root of unity, where r is an existing integer parameter of the lens elliptic gamma function, and N is an additional integer parameter. This is a singular limit of the star-triangle relation, and at subleading order of an asymptotic expansion, another star-triangle relation is obtained for a model with discrete spin variables in Z_{rN}. Some special choices of solutions of the equations of motion are shown to result in well-known discrete spin solutions of the star-triangle relation. The saddle point equations themselves are identified with three-leg forms of ‘3D-consistent’ classical discrete integrable equations, known as Q4 and Q3(δ=0). We also comment on the implications for supersymmetric gauge theories, and in particular comment on a close parallel with the works of Nekrasov and Shatashvili.

  10. Synchronization Control for a Class of Discrete-Time Dynamical Networks With Packet Dropouts: A Coding-Decoding-Based Approach.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2017-09-06

    The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.
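
    A scalar toy sketch of the Bernoulli packet-dropout model mentioned above: the corrective signal reaches the node only when an independent Bernoulli variable indicates delivery. It is an assumed illustration of error behavior under dropouts, not the coding-decoding protocol of the paper.

```python
import numpy as np

def simulate_sync_error(a, k, p_drop, n_steps, rng):
    """Scalar illustration of synchronization under Bernoulli packet loss:
    the error evolves as e[t+1] = (a - gamma[t]*k) * e[t], where
    gamma[t] ~ Bernoulli(1 - p_drop) indicates that the coded control
    signal reached the node at step t."""
    e = 1.0
    trace = [e]
    for _ in range(n_steps):
        gamma = rng.random() > p_drop          # True = packet delivered
        e = (a - (k if gamma else 0.0)) * e
        trace.append(e)
    return np.array(trace)

rng = np.random.default_rng(5)
for p in (0.0, 0.3, 0.7):
    err = simulate_sync_error(a=1.1, k=0.6, p_drop=p, n_steps=100, rng=rng)
    print("drop prob %.1f -> |e(100)| = %.3e" % (p, abs(err[-1])))
```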

  11. Relationship of attenuation in a vegetation canopy to physical parameters of the canopy

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Levine, D. M.

    1993-01-01

    A discrete scatter model is employed to compute the radiometric response (i.e. emissivity) of a layer of vegetation over a homogeneous ground. This was done to gain insight into empirical formulas for the emissivity which have recently appeared in the literature and which indicate that the attenuation through the canopy is proportional to the water content of the vegetation and inversely proportional to wavelength raised to a power around unity. The analytical result assumes that the vegetation can be modeled by a sparse layer of discrete, randomly oriented particles (leaves, stalks, etc.). The attenuation is given by the effective wave number of the layer obtained from the solution for the mean wave using the effective field approximation. By using the Ulaby-El Rayes formula to relate the dielectric constant of the vegetation to its water content, it can be shown that the attenuation is proportional to water content. The analytical form offers insight into the dependence of the empirical parameters on other variables of the canopy, including plant geometry (i.e. shape and orientation of the leaves and stalks of which the vegetation is comprised), frequency of the measurement and even the physical temperature of the vegetation. Solutions are presented for some special cases including layers consisting of cylinders (stalks) and disks (leaves).

  12. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build a shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in the hydraulic head field with better accuracy compared to data assimilation with no constraints on the spatial continuity of facies.

  13. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin-based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE), and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a micro-channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.

  14. A stochastic approach to uncertainty in the equations of MHD kinematics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Edward G., E-mail: egphillips@math.umd.edu; Elman, Howard C., E-mail: elman@cs.umd.edu

    2015-03-01

    The magnetohydrodynamic (MHD) kinematics model describes the electromagnetic behavior of an electrically conducting fluid when its hydrodynamic properties are assumed to be known. In particular, the MHD kinematics equations can be used to simulate the magnetic field induced by a given velocity field. While prescribing the velocity field leads to a simpler model than the fully coupled MHD system, this may introduce some epistemic uncertainty into the model. If the velocity of a physical system is not known with certainty, the magnetic field obtained from the model may not be reflective of the magnetic field seen in experiments. Additionally, uncertainty in physical parameters such as the magnetic resistivity may affect the reliability of predictions obtained from this model. By modeling the velocity and the resistivity as random variables in the MHD kinematics model, we seek to quantify the effects of uncertainty in these fields on the induced magnetic field. We develop stochastic expressions for these quantities and investigate their impact within a finite element discretization of the kinematics equations. We obtain mean and variance data through Monte Carlo simulation for several test problems. Toward this end, we develop and test an efficient block preconditioner for the linear systems arising from the discretized equations.

  15. Metastability of Reversible Random Walks in Potential Fields

    NASA Astrophysics Data System (ADS)

    Landim, C.; Misturini, R.; Tsunoda, K.

    2015-09-01

    Let Ξ be an open and bounded subset of R^d, and let F : Ξ → R be a twice continuously differentiable function. Denote by Ξ_N the discretization of Ξ, and denote by X_N(t) the continuous-time, nearest-neighbor random walk on Ξ_N which jumps from x to y at a rate determined by the potential F at x and y. We examine in this article the metastable behavior of X_N(t) among the wells of the potential F.

  16. A two-level stochastic collocation method for semilinear elliptic equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Luoping; Zheng, Bin; Lin, Guang

    In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using a high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximate solution obtained from this method achieves the same order of accuracy as that obtained by solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.

  17. Statistical self-similarity of width function maxima with implications to floods

    USGS Publications Warehouse

    Veitzer, S.A.; Gupta, V.K.

    2001-01-01

    Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity, which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures.

  18. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    NASA Astrophysics Data System (ADS)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  19. Population coding in sparsely connected networks of noisy neurons.

    PubMed

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  20. Multilayer shallow water models with locally variable number of layers and semi-implicit time discretization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys

    2018-07-01

    We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious feature and again allows the computational cost to be reduced substantially in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.

  1. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a continuous stirred tank reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed that has been fabricated for performance evaluation of over-wireless-network estimation techniques under realistic radio channel conditions.

  2. Structure of random discrete spacetime

    NASA Technical Reports Server (NTRS)

    Brightwell, Graham; Gregory, Ruth

    1991-01-01

    The usual picture of spacetime consists of a continuous manifold, together with a metric of Lorentzian signature which imposes a causal structure on the spacetime. A model, first suggested by Bombelli et al., is considered in which spacetime consists of a discrete set of points taken at random from a manifold, with only the causal structure on this set remaining. This structure constitutes a partially ordered set (or poset). Working from the poset alone, it is shown how to construct a metric on the space which closely approximates the metric on the original spacetime manifold, how to define the effective dimension of the spacetime, and how such quantities may depend on the scale of measurement. Possible desirable features of the model are discussed.

  3. The structure of random discrete spacetime

    NASA Technical Reports Server (NTRS)

    Brightwell, Graham; Gregory, Ruth

    1990-01-01

    The usual picture of spacetime consists of a continuous manifold, together with a metric of Lorentzian signature which imposes a causal structure on the spacetime. A model, first suggested by Bombelli et al., is considered in which spacetime consists of a discrete set of points taken at random from a manifold, with only the causal structure on this set remaining. This structure constitutes a partially ordered set (or poset). Working from the poset alone, it is shown how to construct a metric on the space which closely approximates the metric on the original spacetime manifold, how to define the effective dimension of the spacetime, and how such quantities may depend on the scale of measurement. Possible desirable features of the model are discussed.

  4. Exactly solvable random graph ensemble with extensively many short cycles

    NASA Astrophysics Data System (ADS)

    Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.

    2018-02-01

    We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles’ control parameters relative to the number of nodes. A phase diagram is presented, showing a second order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamical limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues, representing short cycles, and a continuous part, representing cycles of diverging size.

  5. Exact Lyapunov exponent of the harmonic magnon modes of one-dimensional Heisenberg-Mattis spin glasses

    NASA Astrophysics Data System (ADS)

    Sepehrinia, Reza; Niry, M. D.; Bozorg, B.; Tabar, M. Reza Rahimi; Sahimi, Muhammad

    2008-03-01

    A mapping is developed between the linearized equation of motion for the dynamics of the transverse modes at T=0 of the Heisenberg-Mattis model of one-dimensional (1D) spin glasses and the (discretized) random wave equation. The mapping is used to derive an exact expression for the Lyapunov exponent (LE) of the magnon modes of spin glasses and to show that it follows anomalous scaling at low magnon frequencies. In addition, through numerical simulations, the differences between the LE and the density of states of the wave equation in a discrete 1D model of randomly disordered media (those with a finite correlation length) and that of continuous media (with a zero correlation length) are demonstrated and emphasized.

  6. Mean-Potential Law in Evolutionary Games

    NASA Astrophysics Data System (ADS)

    Nałęcz-Jawecki, Paweł; Miękisz, Jacek

    2018-01-01

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
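
    A minimal numerical illustration of fixation probabilities in a discrete stochastic system with two absorbing states is sketched below, assuming a standard frequency-dependent Moran process; the helper names and payoff values are illustrative and are not taken from the Letter.

```python
import numpy as np

def fixation_probability(birth, death, N):
    """Fixation probability of a single mutant in a birth-death chain on states
    k = 0..N with absorbing states 0 and N; birth[k], death[k] are the
    transition probabilities T+(k), T-(k) for k = 1..N-1."""
    gammas = np.array([death[k] / birth[k] for k in range(1, N)])
    # rho_1 = 1 / (1 + sum_{k=1}^{N-1} prod_{j=1}^{k} gamma_j)
    partial_products = np.cumprod(gammas)
    return 1.0 / (1.0 + partial_products.sum())

def moran_rates(N, a, b, c, d, w=0.1):
    """Frequency-dependent Moran process for a 2x2 game [[a, b], [c, d]]
    with selection intensity w (illustrative choice)."""
    birth, death = {}, {}
    for k in range(1, N):
        fA = 1 - w + w * (a * (k - 1) + b * (N - k)) / (N - 1)   # payoff of mutant A
        fB = 1 - w + w * (c * k + d * (N - k - 1)) / (N - 1)     # payoff of resident B
        avg = (k * fA + (N - k) * fB) / N
        birth[k] = (k * fA / (N * avg)) * ((N - k) / N)
        death[k] = ((N - k) * fB / (N * avg)) * (k / N)
    return birth, death

N = 100
birth, death = moran_rates(N, a=3.0, b=1.0, c=2.0, d=2.0)
print(fixation_probability(birth, death, N))
```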

  7. CGBayesNets: Conditional Gaussian Bayesian Network Learning and Inference with Mixed Discrete and Continuous Data

    PubMed Central

    Weiss, Scott T.

    2014-01-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com. PMID:24922310

  8. CGBayesNets: conditional Gaussian Bayesian network learning and inference with mixed discrete and continuous data.

    PubMed

    McGeachie, Michael J; Chang, Hsun-Hsien; Weiss, Scott T

    2014-06-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com.

  9. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is close to the number of object pixels, the rank of the discrete Fourier transform matrix equals that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and thus achieves reconstruction denoising with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
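
    A toy version of the linear-measurement picture described above is sketched below: a 1-D object is measured with rows of a discrete Fourier transform matrix and reconstructed with the pseudo-inverse. This is only an illustrative stand-in, not the FGI/PGI/CGI implementations compared in the study.

```python
import numpy as np

n = 64                      # number of object pixels (1-D object for simplicity)
m = 48                      # number of sampling measurements (with m = n recovery is exact)

# Object: a simple piecewise-constant "image"
x = np.zeros(n)
x[10:20] = 1.0
x[35:50] = 0.5

# Measurement matrix: the first m rows of the n x n discrete Fourier transform matrix.
# Real illumination uses, e.g., cosine light fields; this is only a toy stand-in.
F = np.fft.fft(np.eye(n))            # full DFT matrix
A = F[:m, :]

y = A @ x                            # bucket-detector measurements (noise-free here)

# Pseudo-inverse reconstruction
x_hat = np.real(np.linalg.pinv(A) @ y)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```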

  10. A Bayesian hierarchical model for discrete choice data in health care.

    PubMed

    Antonio, Anna Liza M; Weiss, Robert E; Saigal, Christopher S; Dahan, Ely; Crespi, Catherine M

    2017-01-01

    In discrete choice experiments, patients are presented with sets of health states described by various attributes and asked to make choices from among them. Discrete choice experiments allow health care researchers to study the preferences of individual patients by eliciting trade-offs between different aspects of health-related quality of life. However, many discrete choice experiments yield data with incomplete ranking information and sparsity due to the limited number of choice sets presented to each patient, making it challenging to estimate patient preferences. Moreover, methods to identify outliers in discrete choice data are lacking. We develop a Bayesian hierarchical random effects rank-ordered multinomial logit model for discrete choice data. Missing ranks are accounted for by marginalizing over all possible permutations of unranked alternatives to estimate individual patient preferences, which are modeled as a function of patient covariates. We provide a Bayesian version of relative attribute importance, and adapt the use of the conditional predictive ordinate to identify outlying choice sets and outlying individuals with unusual preferences compared to the population. The model is applied to data from a study using a discrete choice experiment to estimate individual patient preferences for health states related to prostate cancer treatment.

  11. Rational Ruijsenaars Schneider hierarchy and bispectral difference operators

    NASA Astrophysics Data System (ADS)

    Iliev, Plamen

    2007-05-01

    We show that a monic polynomial in a discrete variable n, with coefficients depending on time variables t1,t2,…, is a τ-function for the discrete Kadomtsev-Petviashvili hierarchy if and only if the motion of its zeros is governed by a hierarchy of Ruijsenaars-Schneider systems. These τ-functions were considered in [L. Haine, P. Iliev, Commutative rings of difference operators and an adelic flag manifold, Int. Math. Res. Not. 2000 (6) (2000) 281-323], where it was proved that they parametrize rank one solutions to a difference-differential version of the bispectral problem.

  12. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

    The authors analyze two simple deterministic flow models for multiple buffer servers which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of the closed loop behavior: one is eventually periodic, the other is chaotic. The first example exhibits chaotic behavior that could be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete time systems where the controller can choose from a set of transition maps to implement.

  13. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution

    DOE PAGES

    Cai, Hong; Long, Christopher M.; DeRose, Christopher T.; ...

    2017-01-01

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  14. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution.

    PubMed

    Cai, Hong; Long, Christopher M; DeRose, Christopher T; Boynton, Nicholas; Urayama, Junji; Camacho, Ryan; Pomerene, Andrew; Starbuck, Andrew L; Trotter, Douglas C; Davids, Paul S; Lentine, Anthony L

    2017-05-29

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  15. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Hong; Long, Christopher M.; DeRose, Christopher T.

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  16. Multifactor valuation models of energy futures and options on futures

    NASA Astrophysics Data System (ADS)

    Bertus, Mark J.

    The intent of this dissertation is to investigate continuous-time pricing models for commodity derivative contracts that consider mean reversion. The motivation for pricing commodity futures and options on futures contracts is improved practical risk management in markets where uncertainty is increasing. In the dissertation, closed-form solutions for futures contracts are developed for mean-reverting one-factor, two-factor, and three-factor Brownian motions. These solutions are obtained through risk-neutral pricing methods that yield tractable expressions for futures prices, which are linear in the state variables, hence making them attractive for estimation. These functions, however, are expressed in terms of latent variables (i.e., spot prices, convenience yield), which complicates the estimation of the futures pricing equation. To address this complication, a discussion of dynamic factor analysis is given. This procedure documents latent variables using a Kalman filter, and illustrations show how this technique may be used for the analysis. In addition to the futures contracts, closed-form solutions for two option models are obtained. Solutions to the one- and two-factor models are tailored solutions of the Black-Scholes pricing model. Furthermore, since these contracts are written on the futures contracts, they too are influenced by the same underlying parameters of the state variables used to price the futures contracts. The analysis concludes with an investigation of commodity futures options that incorporate random discrete jumps.
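
    A hedged sketch of the simplest ingredient discussed above, a one-factor mean-reverting (Ornstein-Uhlenbeck) log-price model, is given below; the futures price is taken as the risk-neutral expectation of the spot and is computed both in closed form and by Monte Carlo. Parameter values are illustrative, and this is not the dissertation's multi-factor or jump model.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-factor model: X = ln(spot) follows a risk-neutral OU process
#   dX = kappa * (alpha - X) dt + sigma dW
kappa, alpha, sigma = 1.5, np.log(50.0), 0.3
x0 = np.log(60.0)
T = 1.0                                   # futures maturity in years

# Exact Gaussian transition of the OU process at time T
mean_T = np.exp(-kappa * T) * x0 + alpha * (1.0 - np.exp(-kappa * T))
var_T = sigma**2 * (1.0 - np.exp(-2.0 * kappa * T)) / (2.0 * kappa)

# Futures price = risk-neutral expectation of the spot, E[exp(X_T)], for a Gaussian X_T
futures_closed_form = np.exp(mean_T + 0.5 * var_T)

# Monte Carlo check with an Euler discretization of the OU process
n_paths, n_steps = 100_000, 250
dt = T / n_steps
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += kappa * (alpha - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
futures_mc = np.exp(x).mean()

print(futures_closed_form, futures_mc)
```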

  17. Entanglement transfer from two-mode continuous variable SU(2) cat states to discrete qubits systems in Jaynes-Cummings Dimers

    PubMed Central

    Ran, Du; Hu, Chang-Sheng; Yang, Zhen-Biao

    2016-01-01

    We study the entanglement transfer from a two-mode continuous variable system (initially in the two-mode SU(2) cat states) to a couple of discrete two-state systems (initially in an arbitrary mixed state), by use of the resonant Jaynes-Cummings (JC) interaction. We first quantitatively connect the entanglement transfer to non-Gaussianity of the two-mode SU(2) cat states and find a positive correlation between them. We then investigate the behaviors of the entanglement transfer and find that it is dependent on the initial state of the discrete systems. We also find that the largest possible value of the transferred entanglement exhibits a variety of behaviors for different photon number as well as for the phase angle of the two-mode SU(2) cat states. We finally consider the influences of the noise on the transferred entanglement. PMID:27553881

  18. Predicting temperate forest stand types using only structural profiles from discrete return airborne lidar

    NASA Astrophysics Data System (ADS)

    Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.

    2018-02-01

    Light detection and ranging (lidar) data have been increasingly used for forest classification due to its ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that have not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
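
    A minimal sketch of the classification step, assuming scikit-learn's RandomForestClassifier and synthetic vertical-profile features standing in for the lidar-derived structural metrics (not the study's data or feature set), is shown below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: each plot is a vertical profile of lidar return density in 10 height bins
n_plots = 400
rainforest = rng.normal(loc=np.linspace(0.8, 0.2, 10), scale=0.1, size=(n_plots // 2, 10))
eucalypt = rng.normal(loc=np.linspace(0.3, 0.7, 10), scale=0.1, size=(n_plots // 2, 10))
X = np.vstack([rainforest, eucalypt])
y = np.array([0] * (n_plots // 2) + [1] * (n_plots // 2))   # 0 = rainforest, 1 = eucalypt

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

pred = model.predict(X_test)
print("overall accuracy:", accuracy_score(y_test, pred))
print("Cohen's kappa:", cohen_kappa_score(y_test, pred))
# model.predict_proba gives per-class probabilities, analogous to stand-type probability maps
```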

  19. On Studying Common Factor Dominance and Approximate Unidimensionality in Multicomponent Measuring Instruments with Discrete Items

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2018-01-01

    This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…

  20. Modular architecture for robotics and teleoperation

    DOEpatents

    Anderson, Robert J.

    1996-12-03

    Systems and methods for modularization and discretization of real-time robot, telerobot and teleoperation systems using passive, network based control laws. Modules consist of network one-ports and two-ports. Wave variables and position information are passed between modules. The behavior of each module is decomposed into uncoupled linear-time-invariant, and coupled, nonlinear memoryless elements and then are separately discretized.

  1. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE PAGES

    Pandey, Sachin; Rajaram, Harihar

    2016-12-05

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time-dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time-dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow a math formula dependence, characteristic of diffusion, due to network scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.

  2. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, Sachin; Rajaram, Harihar

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time-dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time-dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow a math formula dependence, characteristic of diffusion, due to network scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.

  3. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  4. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
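
    The article's macros are written in Excel VBA; the sketch below is a Python analogue, assuming the common conventions that random-ratio values are geometrically distributed and random-interval values are exponentially distributed.

```python
import numpy as np

rng = np.random.default_rng(42)

def variable_ratio(mean_ratio, n_values):
    """Variable-ratio values: response requirements that average to mean_ratio.
    Here they are drawn uniformly from 1 .. 2*mean_ratio - 1 (one common convention)."""
    return rng.integers(1, 2 * mean_ratio, size=n_values)

def random_ratio(p, n_values):
    """Random-ratio values: each response is reinforced with constant probability p,
    so the number of responses per reinforcer is geometrically distributed."""
    return rng.geometric(p, size=n_values)

def random_interval(mean_interval_s, n_values):
    """Random-interval values: constant probability of reinforcement per unit time,
    so intervals are exponentially distributed with the given mean (in seconds)."""
    return rng.exponential(mean_interval_s, size=n_values)

print(variable_ratio(10, 5))                       # VR 10
print(random_ratio(1 / 10, 5))                     # RR 10
print(np.round(random_interval(30.0, 5), 1))       # RI 30 s
```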

  5. Localization on Quantum Graphs with Random Vertex Couplings

    NASA Astrophysics Data System (ADS)

    Klopp, Frédéric; Pankrashkin, Konstantin

    2008-05-01

    We consider Schrödinger operators on a class of periodic quantum graphs with randomly distributed Kirchhoff coupling constants at all vertices. We obtain necessary conditions for localization on quantum graphs in terms of finite volume criteria for some energy-dependent discrete Hamiltonians. These conditions hold in the strong disorder limit and at the spectral edges.

  6. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
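
    A small simulation of the latent-variable idea, assuming an illustrative logistic link between the latent equilibrium score and the fall probability (not the paper's fitted model or quasi-maximum-likelihood estimator), is sketched below.

```python
import numpy as np
from scipy.special import expit   # logistic function

rng = np.random.default_rng(7)

n_trials = 5000

# Latent equilibrium score (bounded above by 100); a skewed draw for illustration
latent_es = np.clip(100.0 - rng.gamma(shape=2.0, scale=8.0, size=n_trials), 0.0, 100.0)

# Probability of a fall decreases with the latent score (illustrative logistic link)
p_fall = expit(-(latent_es - 40.0) / 8.0)
fell = rng.random(n_trials) < p_fall

# Observed score: the latent value if no fall occurs, otherwise the assigned floor of zero
observed_es = np.where(fell, 0.0, latent_es)

print("fall rate:", fell.mean())
print("mean observed ES:", observed_es.mean(), "mean latent ES:", latent_es.mean())
```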

  7. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  8. Variational approach to probabilistic finite elements

    NASA Astrophysics Data System (ADS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-08-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  9. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1987-01-01

    Probabilistic finite element method (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  10. Macroscopic damping model for structural dynamics with random polycrystalline configurations

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Cui, Junzhi; Yu, Yifan; Xiang, Meizhen

    2018-06-01

    In this paper the macroscopic damping model for dynamical behavior of the structures with random polycrystalline configurations at micro-nano scales is established. First, the global motion equation of a crystal is decomposed into a set of motion equations with independent single degree of freedom (SDOF) along normal discrete modes, and then damping behavior is introduced into each SDOF motion. Through the interpolation of discrete modes, the continuous representation of damping effects for the crystal is obtained. Second, from energy conservation law the expression of the damping coefficient is derived, and the approximate formula of damping coefficient is given. Next, the continuous damping coefficient for polycrystalline cluster is expressed, the continuous dynamical equation with damping term is obtained, and then the concrete damping coefficients for a polycrystalline Cu sample are shown. Finally, by using statistical two-scale homogenization method, the macroscopic homogenized dynamical equation containing damping term for the structures with random polycrystalline configurations at micro-nano scales is set up.

  11. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2013-05-21

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
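
    The core DVR idea, diagonalizing a finite-basis representation of a coordinate operator to obtain quadrature points, can be illustrated in one dimension: the eigenvalues of the truncated position matrix in the harmonic-oscillator basis reproduce the Gauss-Hermite nodes. The sketch below shows only this textbook special case, not the molecular R-DVR or F-DVR grids of the paper.

```python
import numpy as np

N = 12   # number of basis functions

# Position operator x = (a + a^dagger)/sqrt(2) in the first N harmonic-oscillator states:
# tridiagonal with <n|x|n+1> = sqrt((n+1)/2)
off_diag = np.sqrt(np.arange(1, N) / 2.0)
X = np.diag(off_diag, k=1) + np.diag(off_diag, k=-1)

dvr_points = np.sort(np.linalg.eigvalsh(X))

# The same points from the classical Gauss-Hermite quadrature rule
gh_points, _ = np.polynomial.hermite.hermgauss(N)

print(np.allclose(dvr_points, np.sort(gh_points)))   # True
```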

  12. Adaptive feedback synchronisation of complex dynamical network with discrete-time communications and delayed nodes

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Ding, Yongsheng; Zhang, Lei; Hao, Kuangrong

    2016-08-01

    This paper considered the synchronisation of continuous complex dynamical networks with discrete-time communications and delayed nodes. The nodes in the dynamical networks act in the continuous manner, while the communications between nodes are discrete-time; that is, they communicate with others only at discrete time instants. The communication intervals in communication period can be uncertain and variable. By using a piecewise Lyapunov-Krasovskii function to govern the characteristics of the discrete communication instants, we investigate the adaptive feedback synchronisation and a criterion is derived to guarantee the existence of the desired controllers. The globally exponential synchronisation can be achieved by the controllers under the updating laws. Finally, two numerical examples including globally coupled network and nearest-neighbour coupled networks are presented to demonstrate the validity and effectiveness of the proposed control scheme.

  13. Effect of Single-Electron Interface Trapping in Decanano MOSFETs: A 3D Atomistic Simulation Study

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Balasubramaniam, R.; Brown, A. R.; Davies, J. H.

    2000-01-01

    We study the effect of trapping/detrapping of a single-electron in interface states in the channel of n-type MOSFETs with decanano dimensions using 3D atomistic simulation techniques. In order to highlight the basic dependencies, the simulations are carried out initially assuming continuous doping charge, and discrete localized charge only for the trapped electron. The dependence of the random telegraph signal (RTS) amplitudes on the device dimensions and on the position of the trapped charge in the channel are studied in detail. Later, in full-scale, atomistic simulations assuming discrete charge for both randomly placed dopants and the trapped electron, we highlight the importance of current percolation and of traps with strategic position where the trapped electron blocks a dominant current path.

  14. Scattering of electromagnetic waves from a half-space of randomly distributed discrete scatterers and polarized backscattering ratio law

    NASA Technical Reports Server (NTRS)

    Zhu, P. Y.

    1991-01-01

    The effective-medium approximation is applied to investigate scattering from a half-space of randomly and densely distributed discrete scatterers. Starting from vector wave equations, an approximation, called effective-medium Born approximation, a particular way, treating Green's functions, and special coordinates, of which the origin is set at the field point, are used to calculate the bistatic- and back-scatterings. An analytic solution of backscattering with closed form is obtained and it shows a depolarization effect. The theoretical results are in good agreement with the experimental measurements in the cases of snow, multi- and first-year sea-ice. The root product ratio of polarization to depolarization in backscattering is equal to 8; this result constitutes a law about polarized scattering phenomena in nature.

  15. Mean-Potential Law in Evolutionary Games.

    PubMed

    Nałęcz-Jawecki, Paweł; Miękisz, Jacek

    2018-01-12

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.

  16. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  17. Exploration properties of biased evanescent random walkers on a one-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Esguerra, Jose Perico; Reyes, Jelian

    2017-08-01

    We investigate the combined effects of bias and evanescence on the characteristics of random walks on a one-dimensional lattice. We calculate the time-dependent return probability, eventual return probability, conditional mean return time, and the time-dependent mean number of visited sites of biased immortal and evanescent discrete-time random walkers on a one-dimensional lattice. We then extend the calculations to the case of a continuous-time step-coupled biased evanescent random walk on a one-dimensional lattice with an exponential waiting time distribution.
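
    A hedged Monte Carlo sketch of a biased, evanescent discrete-time walker on a one-dimensional lattice is given below; the bias and survival probabilities are illustrative, and the return probability and number of distinct visited sites are estimated by simulation rather than by the paper's analytic expressions.

```python
import numpy as np

rng = np.random.default_rng(3)

p = 0.6        # probability of stepping right (bias)
q = 0.98       # per-step survival probability (evanescence)
n_walkers = 20000
max_steps = 5000

returned = 0
visited_counts = np.empty(n_walkers)

for w in range(n_walkers):
    pos = 0
    visited = {0}
    has_returned = False
    for _ in range(max_steps):
        if rng.random() > q:          # walker dies
            break
        pos += 1 if rng.random() < p else -1
        visited.add(pos)
        if pos == 0:
            has_returned = True
    returned += has_returned
    visited_counts[w] = len(visited)

print("eventual return probability ~", returned / n_walkers)
print("mean number of distinct visited sites ~", visited_counts.mean())
```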

  18. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
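
    One of the listed topics, random phasor sums, can be illustrated with a few lines of simulation: summing many unit phasors with independent uniform phases and checking that the normalized resultant amplitude is approximately Rayleigh distributed (a standard result, not specific to this book).

```python
import numpy as np

rng = np.random.default_rng(11)

n_phasors = 200       # phasors per sum
n_sums = 50000        # independent realizations

phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_sums, n_phasors))
resultant = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_phasors)
amplitude = np.abs(resultant)

# For many phasors the amplitude should be Rayleigh with sigma^2 = 1/2:
# mean = sigma * sqrt(pi/2) ~ 0.886, second moment = 2 * sigma^2 = 1
print(amplitude.mean(), (amplitude**2).mean())
```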

  19. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    PubMed

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
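
    The "standard" resetting approach criticized above is easy to reproduce; the sketch below applies the Euler method to a square-root (CIR-type) diffusion, whose exact dynamics keep the state non-negative, and resets any discretized step that crosses into the forbidden region back to the boundary. The process and parameters are illustrative, and the paper's corrected scheme is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(5)

# dX = a*(b - X) dt + sigma*sqrt(X) dW, with a naturally occurring boundary at X = 0
a, b, sigma = 1.0, 0.5, 0.8
x0, T, n_steps, n_paths = 0.05, 2.0, 400, 20000
dt = T / n_steps

x = np.full(n_paths, x0)
resets = 0
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)
    x = x + a * (b - x) * dt + sigma * np.sqrt(np.maximum(x, 0.0)) * dw
    crossed = x < 0.0
    resets += crossed.sum()
    x[crossed] = 0.0          # the "standard" approach: reset the trajectory to the boundary

print("fraction of steps reset to the boundary:", resets / (n_steps * n_paths))
print("sample mean at T:", x.mean())
```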

  20. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries

    NASA Astrophysics Data System (ADS)

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Z.; Department of Applied Mathematics and Mechanics, University of Science and Technology Beijing, Beijing 100083; Lin, P.

    In this paper, we investigate numerically a diffuse interface model for the Navier–Stokes equation with fluid–fluid interface when the fluids have different densities [48]. Under minor reformulation of the system, we show that there is a continuous energy law underlying the system, assuming that all variables have reasonable regularities. It is shown in the literature that an energy law preserving method will perform better for multiphase problems. Thus for the reformulated system, we design a C^0 finite element method and a special temporal scheme where the energy law is preserved at the discrete level. Such a discrete energy law (almost the same as the continuous energy law) for this variable density two-phase flow model has never been established before with C^0 finite elements. A Newton method is introduced to linearise the highly non-linear system of our discretization scheme. Some numerical experiments are carried out using the adaptive mesh to investigate the scenario of coalescing and rising drops with differing density ratio. The snapshots for the evolution of the interface together with the adaptive mesh at different times are presented to show that the evolution, including the break-up/pinch-off of the drop, can be handled smoothly by our numerical scheme. The discrete energy functional for the system is examined to show that the energy law at the discrete level is preserved by our scheme.

  2. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286

  3. The discrete and localized nature of the variable emission from active regions

    NASA Technical Reports Server (NTRS)

    Arndt, Martina Belz; Habbal, Shadia Rifai; Karovska, Margarita

    1994-01-01

    Using data from the Extreme Ultraviolet (EUV) Spectroheliometer on Skylab, we study the empirical characteristics of the variable emission in active regions. These simultaneous multi-wavelength observations clearly confirm that active regions consist of a complex of loops at different temperatures. The variable emission from this complex has very well-defined properties that can be quantitatively summarized as follows: (1) It is localized predominantly around the footpoints where it occurs at discrete locations. (2) The strongest variability does not necessarily coincide with the most intense emission. (3) The fraction of the area of the footpoints, Δn/N, that exhibits variable emission, varies by ±15% as a function of time, at any of the wavelengths measured. It also varies very little from footpoint to footpoint. (4) This fractional variation is temperature dependent with a maximum around 10^5 K. (5) The ratio of the intensity of the variable to the average background emission, ΔI/Ī, also changes with temperature. In addition, we find that these distinctive characteristics persist even when flares occur within the active region.

  4. Continuous-variable quantum network coding for coherent states

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Li, Ke; Liu, Jian-wei

    2017-04-01

    As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be looked on as the discrete-variable quantum network coding schemes. Considering the practical advantage of continuous variables, in this paper, we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first scheme and the second scheme can transmit 4 log₂ N and 2 log₂ N bits of information by a single network use, respectively.

  5. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: the Euler method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
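
    A stripped-down version of the two-step idea, smoothing the data and then regressing a trapezoidal discretization formula on the smoothed states, is sketched below for a one-parameter linear ODE; the spline settings and noise level are illustrative, not the paper's penalized-spline estimator.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)

# True model: dx/dt = -theta * x, with theta = 0.7
theta_true = 0.7
t = np.linspace(0.0, 5.0, 60)
x_true = np.exp(-theta_true * t)
y = x_true + 0.02 * rng.standard_normal(t.size)   # noisy observations

# Step 1: smooth the observations to estimate the state
xhat = UnivariateSpline(t, y, s=t.size * 0.02**2)(t)

# Step 2: trapezoidal discretization  x_{i+1} - x_i = -theta * dt_i * (x_i + x_{i+1}) / 2
dt = np.diff(t)
lhs = np.diff(xhat)                              # "response"
rhs = -dt * (xhat[:-1] + xhat[1:]) / 2.0         # "regressor"
theta_est = (rhs @ lhs) / (rhs @ rhs)            # one-parameter least squares

print("estimated theta:", theta_est)
```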

  6. On the dynamic rounding-off in analogue and RF optimal circuit sizing

    NASA Astrophysics Data System (ADS)

    Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena

    2014-04-01

    Frequently used approaches to solve discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey some technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions located too close to the feasible-solution frontier) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome this situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via some analogue and RF examples, the necessity of implementing such a routine within continuous optimisation algorithms.

  7. Gaussian-modulated coherent-state measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Ma, Xiang-Chun; Sun, Shi-Hai; Jiang, Mu-Sheng; Gui, Ming; Liang, Lin-Mei

    2014-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to the third partner and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks.

  8. A boundary value approach for solving three-dimensional elliptic and hyperbolic partial differential equations.

    PubMed

    Biala, T A; Jator, S N

    2015-01-01

    In this article, the boundary value method is applied to solve three dimensional elliptic and hyperbolic partial differential equations. The partial derivatives with respect to two of the spatial variables (y, z) are discretized using finite difference approximations to obtain a large system of ordinary differential equations (ODEs) in the third spatial variable (x). Using interpolation and collocation techniques, a continuous scheme is developed and used to obtain discrete methods which are applied via the Block unification approach to obtain approximations to the resulting large system of ODEs. Several test problems are investigated to elucidate the solution process.

  9. The discrete adjoint method for parameter identification in multibody system dynamics.

    PubMed

    Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin

    2018-01-01

    The adjoint method is an elegant approach for the computation of the gradient of a cost function to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are further used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. Therefore, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
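
    A hedged toy example of the discrete adjoint idea (far simpler than the paper's multibody setting): the adjoint recursion is built directly from an explicit Euler time-stepping formula for dx/dt = -p·x with a terminal cost, so the computed gradient matches the discretized cost exactly, as a finite-difference check confirms. All numbers are illustrative.

```python
# Discrete adjoint of explicit Euler for dx/dt = -p*x, J = 0.5*(x_N - x_ref)^2.
import numpy as np

def forward(p, x0=1.0, h=0.01, N=200):
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + h * (-p * x[k])      # explicit Euler step Phi(x_k, p)
    return x

def gradient_discrete_adjoint(p, x_ref=0.2, h=0.01, N=200):
    x = forward(p, h=h, N=N)
    lam = x[N] - x_ref                          # dJ/dx_N
    dJdp = 0.0
    for k in range(N - 1, -1, -1):
        dJdp += lam * h * (-x[k])               # lam_{k+1} * dPhi/dp at step k
        lam = lam * (1.0 - h * p)               # lam_k = (dPhi/dx)^T lam_{k+1}
    return dJdp

p0, x_ref, eps = 0.8, 0.2, 1e-6
J = lambda p: 0.5 * (forward(p)[-1] - x_ref)**2
g_adj = gradient_discrete_adjoint(p0)
g_fd = (J(p0 + eps) - J(p0 - eps)) / (2 * eps)  # check against finite differences
print(f"adjoint gradient {g_adj:.6e} vs finite difference {g_fd:.6e}")
```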

  10. The modelling of carbon-based supercapacitors: Distributions of time constants and Pascal Equivalent Circuits

    NASA Astrophysics Data System (ADS)

    Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon

    2017-03-01

    Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
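
    A hedged numerical sketch of the qualitative effect discussed above, not the paper's Pascal Equivalent Circuit itself: a set of parallel R-C branches whose resistances form a geometric series (a discrete stand-in for a spread of log R), charged for a finite time and then open-circuited. Fast branches charge fully and then share charge with slow branches, reproducing a voltage-decay-like relaxation. Component values and the simplified branch topology are assumptions for illustration.

```python
# Parallel R-C branches with geometrically spaced resistances: charge, then open circuit.
import numpy as np

n_branches = 8
R = 1.0 * 2.0 ** np.arange(n_branches)      # geometric series of resistances
C = np.full(n_branches, 1.0)                # identical capacitances
V0, dt = 1.0, 1e-3

def simulate(t_charge=5.0, t_open=50.0):
    v = np.zeros(n_branches)                # capacitor voltages
    # Charging phase: terminal held at V0 by the source.
    for _ in range(int(t_charge / dt)):
        v += dt * (V0 - v) / (R * C)
    # Open-circuit phase: terminal voltage fixed by zero net branch current.
    trace = []
    for _ in range(int(t_open / dt)):
        V_term = np.sum(v / R) / np.sum(1.0 / R)
        v += dt * (V_term - v) / (R * C)
        trace.append(V_term)
    return np.array(trace)

V_term = simulate()
print(f"terminal voltage right after charging : {V_term[0]:.3f} V")
print(f"terminal voltage after redistribution : {V_term[-1]:.3f} V")
```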

  11. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevalier, Michael W., E-mail: Michael.Chevalier@ucsf.edu; El-Samad, Hana, E-mail: Hana.El-Samad@ucsf.edu

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.

  12. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
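
    A hedged sketch of the coupled Riccati-like backward recursion described above (indexing conventions vary across references, and the matrices below are small arbitrary examples): each Markov mode i carries its own cost-to-go matrix K_i, coupled through the mode transition probabilities, and the mode-dependent feedback gains can be precomputed offline.

```python
# Coupled Riccati-like difference equations for a two-mode Markov jump LQ problem.
import numpy as np

A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.1], [0.0, 0.8]])]
B = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.2]])]
Q = [np.eye(2), 2.0 * np.eye(2)]
R = [np.array([[1.0]]), np.array([[1.0]])]
P = np.array([[0.9, 0.1], [0.2, 0.8]])          # p_ij = Prob(next mode j | mode i)

def jlq_gains(horizon=50):
    n_modes = len(A)
    K = [Q[i].copy() for i in range(n_modes)]   # terminal cost-to-go matrices
    gains = None
    for _ in range(horizon):                    # backward recursion in time
        M = [sum(P[i, j] * K[j] for j in range(n_modes)) for i in range(n_modes)]
        K_new, gains = [], []
        for i in range(n_modes):
            S = R[i] + B[i].T @ M[i] @ B[i]
            L = np.linalg.solve(S, B[i].T @ M[i] @ A[i])   # u = -L x in mode i
            K_new.append(Q[i] + A[i].T @ M[i] @ (A[i] - B[i] @ L))
            gains.append(L)
        K = K_new
    return gains, K

gains, K = jlq_gains()
for i, L in enumerate(gains):
    print(f"mode {i}: feedback gain {np.round(L, 3)}")
```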

  13. The multinomial simulation algorithm for discrete stochastic simulation of reaction-diffusion systems.

    PubMed

    Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R

    2009-03-07

    The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
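
    A hedged, simplified illustration of the binomial-transfer idea (the full MSA conditions these binomial draws more carefully; rates, geometry, and boundary handling below are placeholders): instead of simulating each diffusive jump individually, the number of molecules hopping to each neighbour within a time step is drawn from conditioned binomial random variables.

```python
# Binomially sampled diffusive transfers on a 1-D chain of subvolumes.
import numpy as np

rng = np.random.default_rng(42)
n_sub, d, tau, steps = 20, 0.4, 0.1, 200
pop = np.zeros(n_sub, dtype=int)
pop[n_sub // 2] = 1000                 # all molecules start in the central subvolume

p = d * tau                            # per-molecule hop probability, each direction
for _ in range(steps):
    # Draw (right, left) jointly as a multinomial split of each subvolume's
    # population: right first, then left conditioned on the remainder.
    right = rng.binomial(pop, p)
    left = rng.binomial(pop - right, p / (1.0 - p))
    new = pop - right - left
    new[1:] += right[:-1]              # arrivals from the left neighbour
    new[:-1] += left[1:]               # arrivals from the right neighbour
    new[-1] += right[-1]               # hops off the ends are reflected back
    new[0] += left[0]
    pop = new

occupied = np.flatnonzero(pop)
print("molecules conserved:", pop.sum() == 1000)
print("occupied subvolumes:", occupied.min(), "to", occupied.max())
```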

  14. Repressing the effects of variable speed harmonic orders in operational modal analysis

    NASA Astrophysics Data System (ADS)

    Randall, R. B.; Coats, M. D.; Smith, W. A.

    2016-10-01

    Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying speed machines. In one method, signals are transformed into the order domain, and after the removal of shaft speed related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach an exponential shortpass lifter is applied directly in the time domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using a stationary random excitation.

  15. Spatial and temporal variability of microgeographic genetic structure in white-tailed deer

    USGS Publications Warehouse

    Scribner, Kim T.; Smith, Michael H.; Chesser, Ronald K.

    1997-01-01

    Techniques are described that define contiguous genetic subpopulations of white-tailed deer (Odocoileus virginianus) based on the spatial dispersion of 4,749 individuals that possessed discrete character values (alleles or genotypes) during each of 6 years (1974-1979). White-tailed deer were not uniformly distributed in space, but exhibited considerable spatial genetic structuring. Significant non-random clusters of individuals were documented during each year based on specific alleles and genotypes at the Sdh locus. Considerable temporal variation was observed in the position and genetic composition of specific clusters, which reflected changes in allele frequency in small geographic areas. The position of clusters did not consistently correspond with traditional management boundaries based on major discontinuities in habitat (swamp versus upland) and hunt compartments that were defined by roads and streams. Spatio-temporal stability of observed genetic contiguous clusters was interpreted relative to method and intensity of harvest, movements, and breeding ecology.

  16. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  17. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  18. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.

  19. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.

  20. Ring-Opening Copolymerization of Epoxides and Cyclic Anhydrides with Discrete Metal Complexes: Structure-Property Relationships.

    PubMed

    Longo, Julie M; Sanford, Maria J; Coates, Geoffrey W

    2016-12-28

    Polyesters synthesized through the alternating copolymerization of epoxides and cyclic anhydrides compose a growing class of polymers that exhibit an impressive array of chemical and physical properties. Because they are synthesized through the chain-growth polymerization of two variable monomers, their syntheses can be controlled by discrete metal complexes, and the resulting materials vary widely in their functionality and physical properties. This polymer-focused review gives a perspective on the current state of the field of epoxide/anhydride copolymerization mediated by discrete catalysts and the relationships between the structures and properties of these polyesters.

  1. Dynamic characteristics of a two-stage variable-mass flexible missile with internal flow

    NASA Technical Reports Server (NTRS)

    Meirovitch, L.; Bankovskis, J.

    1972-01-01

    A general formulation of the dynamical problems associated with powered flight of a two stage flexible, variable-mass missile with internal flow, discrete masses, and aerodynamic forces is presented. The formulation comprises six ordinary differential equations for the rigid body motion, 3n ordinary differential equations for the n discrete masses and three partial differential equations with the appropriate boundary conditions for the elastic motion. This set of equations is modified to represent a single stage flexible, variable-mass missile with internal flow and aerodynamic forces. The rigid-body motion consists then of three translations and three rotations, whereas the elastic motion is defined by one longitudinal and two flexural displacements, the latter about two orthogonal transverse axes. The differential equations are nonlinear and, in addition, they possess time-dependent coefficients due to the mass variation.

  2. Discrete approach to stochastic parametrization and dimension reduction in nonlinear dynamics.

    PubMed

    Chorin, Alexandre J; Lu, Fei

    2015-08-11

    Many physical systems are described by nonlinear differential equations that are too complicated to solve in full. A natural way to proceed is to divide the variables into those that are of direct interest and those that are not, formulate solvable approximate equations for the variables of greater interest, and use data and statistical methods to account for the impact of the other variables. In the present paper we consider time-dependent problems and introduce a fully discrete solution method, which simplifies both the analysis of the data and the numerical algorithms. The resulting time series are identified by a NARMAX (nonlinear autoregression moving average with exogenous input) representation familiar from engineering practice. The connections with the Mori-Zwanzig formalism of statistical physics are discussed, as well as an application to the Lorenz 96 system.
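
    A hedged toy illustration of identifying a discrete reduced model from time-series data: a bare-bones NARX least-squares fit, far simpler than the NARMAX representations used in the paper. The "true" dynamics, model terms, and noise level below are invented for the example.

```python
# Identify x_{n+1} = a*x_n + b*x_n^2 + c*x_n^3 from simulated data by least squares.
import numpy as np

rng = np.random.default_rng(3)
N = 2000
x = np.zeros(N)
for n in range(N - 1):                          # unknown "true" discrete dynamics
    x[n + 1] = 0.9 * x[n] - 0.3 * x[n]**3 + 0.05 * rng.standard_normal()

Phi = np.column_stack([x[:-1], x[:-1]**2, x[:-1]**3])   # candidate regressors
coef, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
print("identified coefficients (a, b, c):", np.round(coef, 3))
```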

  3. Elementary exact calculations of degree growth and entropy for discrete equations.

    PubMed

    Halburd, R G

    2017-05-01

    Second-order discrete equations are studied over the field of rational functions in z, where z is a variable not appearing in the equation. The exact degree of each iterate as a function of z can be calculated easily using the standard calculations that arise in singularity confinement analysis, even when the singularities are not confined. This produces elementary yet rigorous entropy calculations.

  4. Regression modeling and mapping of coniferous forest basal area and tree density from discrete-return lidar and multispectral data

    Treesearch

    Andrew T. Hudak; Nicholas L. Crookston; Jeffrey S. Evans; Michael K. Falkowski; Alistair M. S. Smith; Paul E. Gessler; Penelope Morgan

    2006-01-01

    We compared the utility of discrete-return light detection and ranging (lidar) data and multispectral satellite imagery, and their integration, for modeling and mapping basal area and tree density across two diverse coniferous forest landscapes in north-central Idaho. We applied multiple linear regression models subset from a suite of 26 predictor variables derived...

  5. Discrete Time McKean–Vlasov Control Problem: A Dynamic Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, Huyên, E-mail: pham@math.univ-paris-diderot.fr; Wei, Xiaoli, E-mail: tyswxl@gmail.com

    We consider the stochastic optimal control problem of nonlinear mean-field systems in discrete time. We reformulate the problem into a deterministic control problem with marginal distribution as controlled state variable, and prove that dynamic programming principle holds in its general form. We apply our method for solving explicitly the mean-variance portfolio selection and the multivariate linear-quadratic McKean–Vlasov control problem.

  6. Diurnal variations in metal concentrations in the Alamosa River and Wightman Fork, southwestern Colorado, 1995-97

    USGS Publications Warehouse

    Ortiz, Roderick F.; Stogner, Sr., Robert W.

    2001-01-01

    A comprehensive sampling network was implemented in the Alamosa River Basin from 1995 to 1997 to address data gaps identified as part of the ecological risk assessment of the Summitville Superfund site. Aluminum, copper, iron, and zinc were identified as the constituents of concern for the risk assessment. Water-quality samples were collected at six sites on the Alamosa River and Wightman Fork by automatic samplers. Several discrete (instantaneous) samples were collected over 24 hours at each site during periods of high diurnal variations in streamflow (May through September). The discrete samples were analyzed individually and duplicate samples were composited to produce a single sample that represented the daily-mean concentration. The diurnal variations in concentration with respect to the theoretical daily-mean concentration (maximum minus minimum divided by daily mean) are presented. Diurnal metal concentrations were highly variable in the Alamosa River and Wightman Fork. The concentration of a metal at a single site could change by several hundred percent during one diurnal cycle. The largest percent change in metal concentrations was observed for aluminum and iron. Zinc concentrations varied the least of the four metals. No discernible or predictable pattern was indicated in the timing of the daily mean, maximum, or minimum concentrations. The percentage of discrete sample concentrations that varied from the daily-mean concentration by thresholds of plus or minus 10, 25, and 50 percent was evaluated. Between 50 and 75 percent of discrete-sample concentrations varied from the daily-mean concentration by more than plus or minus 10 percent. The percentage of samples exceeding given thresholds generally was smaller during the summer period than the snowmelt period. Sampling strategies are critical to accurately define variability in constituent concentration, and conversely, understanding constituent variability is important in determining appropriate sampling strategies. During nonsteady-state periods, considerable errors in estimates of daily-mean concentration are possible if based on one discrete sample. Flow-weighting multiple discrete samples collected over a diurnal cycle provides a better estimate of daily-mean concentrations during nonsteady-state periods.
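
    A hedged sketch of the flow-weighting idea mentioned at the end of the abstract (all values are invented): each discrete sample is weighted by the streamflow at the time it was collected, so high-flow samples count proportionally more toward the daily-mean concentration than low-flow samples.

```python
# Flow-weighted daily-mean concentration from discrete diurnal samples.
import numpy as np

conc = np.array([120.0, 310.0, 450.0, 260.0, 150.0, 130.0])  # concentration, ug/L
flow = np.array([1.2, 2.8, 4.1, 3.0, 1.6, 1.3])              # streamflow, m^3/s

arithmetic_mean = conc.mean()
flow_weighted_mean = np.sum(conc * flow) / np.sum(flow)

print(f"simple mean of discrete samples  : {arithmetic_mean:6.1f} ug/L")
print(f"flow-weighted daily-mean estimate: {flow_weighted_mean:6.1f} ug/L")
```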

  7. Modeling and control of operator functional state in a unified framework of fuzzy inference petri nets.

    PubMed

    Zhang, Jian-Hua; Xia, Jia-Jun; Garibaldi, Jonathan M; Groumpos, Petros P; Wang, Ru-Bin

    2017-06-01

    In human-machine (HM) hybrid control systems, the human operator and the machine cooperate to achieve the control objectives. To enhance the overall HM system performance, the discrete manual control task-load borne by the operator must be dynamically allocated in accordance with the continuous-time fluctuation of the psychophysiological functional status of the operator, the so-called operator functional state (OFS). The behavior of the HM system is hybrid in nature due to the co-existence of a discrete task-load (control) variable and a continuous operator performance (system output) variable. The Petri net is an effective tool for modeling discrete event systems, but for a hybrid system that also involves continuous dynamics, the Petri net model generally has to be extended. Instead of using different tools to represent the continuous and discrete components of a hybrid system, this paper proposes a fuzzy inference Petri net (FIPN) method to represent the HM hybrid system, comprising a Mamdani-type fuzzy model of OFS and a logical switching controller in a unified framework, in which the task-load level is dynamically reallocated between the operator and the machine based on the model-predicted OFS. Furthermore, the paper uses a multi-model approach to predict operator performance based on three electroencephalographic (EEG) input variables (features) via the Wang-Mendel (WM) fuzzy modeling method. The membership function parameters of the fuzzy OFS model for each experimental participant were optimized using the artificial bee colony (ABC) evolutionary algorithm. Three performance indices, RMSE, MRE, and EPR, were computed to evaluate the overall modeling accuracy. Experimental data from six participants were analyzed. The results show that the proposed method (FIPN with adaptive task allocation) yields a lower breakdown rate (from 14.8% to 3.27%) and higher human performance (from 90.30% to 91.99%). The simulation results of the FIPN-based adaptive HM (AHM) system on six experimental participants demonstrate that the FIPN framework provides an effective way to model and regulate/optimize the OFS in HM hybrid systems composed of a continuous-time OFS model and a discrete-event switching controller. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A well-posed and stable stochastic Galerkin formulation of the incompressible Navier–Stokes equations with random data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2016-02-01

    We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables and the velocity divergence converges to zero as the mesh is refined.

  9. Consumer Behavior in the Choice of Mode of Transport: A Case Study in the Toledo-Madrid Corridor

    PubMed Central

    Muro-Rodríguez, Ana I.; Perez-Jiménez, Israel R.; Gutiérrez-Broncano, Santiago

    2017-01-01

    Within the context of the consumption of goods or services, the decisions made by individuals involve choosing among a set of discrete alternatives, such as the mode of transport. The methodology for analyzing this consumer behavior is discrete choice modeling based on the Theory of Random Utility. These models are based on the definition of preferences through a utility function that is maximized. These models, also termed disaggregated demand models, derive from the decisions of a set of individuals and are formalized by the application of probabilistic models. The objective of this study is to determine consumer behavior in the choice of a service, namely transport services, in a short-distance corridor such as Toledo-Madrid. The Toledo-Madrid corridor is characterized by its short distance, with the high-speed train available among the choice options to reach the airport, along with the bus and the car, and where offers of HST and aircraft services can be proposed as complementary modes. By applying disaggregated transport models to revealed-preference and stated-preference survey data, one can determine the most important variables involved in the choice and individuals' willingness to pay. These payment dispositions may condition the use of certain transport policies to promote the use of efficient transportation. PMID:28676776

  10. Consumer Behavior in the Choice of Mode of Transport: A Case Study in the Toledo-Madrid Corridor.

    PubMed

    Muro-Rodríguez, Ana I; Perez-Jiménez, Israel R; Gutiérrez-Broncano, Santiago

    2017-01-01

    Within the context of the consumption of goods or services, the decisions made by individuals involve choosing among a set of discrete alternatives, such as the mode of transport. The methodology for analyzing this consumer behavior is discrete choice modeling based on the Theory of Random Utility. These models are based on the definition of preferences through a utility function that is maximized. These models, also termed disaggregated demand models, derive from the decisions of a set of individuals and are formalized by the application of probabilistic models. The objective of this study is to determine consumer behavior in the choice of a service, namely transport services, in a short-distance corridor such as Toledo-Madrid. The Toledo-Madrid corridor is characterized by its short distance, with the high-speed train available among the choice options to reach the airport, along with the bus and the car, and where offers of HST and aircraft services can be proposed as complementary modes. By applying disaggregated transport models to revealed-preference and stated-preference survey data, one can determine the most important variables involved in the choice and individuals' willingness to pay. These payment dispositions may condition the use of certain transport policies to promote the use of efficient transportation.

  11. Elegant anti-disturbance control for discrete-time stochastic systems with nonlinearity and multiple disturbances

    NASA Astrophysics Data System (ADS)

    Wei, Xinjiang; Sun, Shixiang

    2018-03-01

    An elegant anti-disturbance control (EADC) strategy for a class of discrete-time stochastic systems with both nonlinearity and multiple disturbances, which include the disturbance with partially known information and a sequence of random vectors, is proposed in this paper. A stochastic disturbance observer is constructed to estimate the disturbance with partially known information, based on which, an EADC scheme is proposed by combining pole placement and linear matrix inequality methods. It is proved that the two different disturbances can be rejected and attenuated, and the corresponding desired performances can be guaranteed for discrete-time stochastic systems with known and unknown nonlinear dynamics, respectively. Simulation examples are given to demonstrate the effectiveness of the proposed schemes compared with some existing results.

  12. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is so large that synoptic large-scale circulations are accommodated, they can be used for the simulation of the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors in thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Alternatively speaking, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  13. Mapping Evidence-Based Treatments for Children and Adolescents: Application of the Distillation and Matching Model to 615 Treatments from 322 Randomized Trials

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Daleiden, Eric L.

    2009-01-01

    This study applied the distillation and matching model to 322 randomized clinical trials for child mental health treatments. The model involved initial data reduction of 615 treatment protocol descriptions by means of a set of codes describing discrete clinical strategies, referred to as practice elements. Practice elements were then summarized in…

  14. General phase spaces: from discrete variables to rotor and continuum limits

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.

    2017-12-01

    We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.

  15. Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Sudarno

    2018-05-01

    The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a dependent variable Y in the form of discrete data and independent variables X is the Poisson regression model. Regression models recently used for discrete dependent variables include, among others, Poisson regression, negative binomial regression, and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the most influence on the infant mortality rate is average breastfeeding (X9).
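
    A hedged sketch of the AIC comparison described above, using simulated data rather than the Central Java data. It assumes statsmodels provides the Poisson and GeneralizedPoisson count models (available in recent versions); the covariate names X1 and X9 are stand-ins for the variables mentioned in the abstract.

```python
# Compare Poisson and generalized Poisson fits of an overdispersed count outcome by AIC.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson

rng = np.random.default_rng(0)
n = 300
x1 = rng.normal(size=n)                    # stand-in for number of health facilities
x9 = rng.normal(size=n)                    # stand-in for average breastfeeding
mu = np.exp(0.5 + 0.3 * x1 - 0.6 * x9)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts, mean mu

X = sm.add_constant(np.column_stack([x1, x9]))
poisson_fit = sm.Poisson(y, X).fit(disp=False)
genpoisson_fit = GeneralizedPoisson(y, X).fit(disp=False)

print(f"Poisson AIC            : {poisson_fit.aic:.1f}")
print(f"Generalized Poisson AIC: {genpoisson_fit.aic:.1f}")  # expected to be lower here
```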

  16. Dynamics of nonautonomous discrete rogue wave solutions for an Ablowitz-Musslimani equation with PT-symmetric potential.

    PubMed

    Yu, Fajun

    2017-02-01

    Starting from a discrete spectral problem, we derive a hierarchy of nonlinear discrete equations which include the Ablowitz-Ladik (AL) equation. We analytically study the discrete rogue-wave (DRW) solutions of the AL equation with three free parameters. The trajectories of peaks and depressions of the profiles for the first- and second-order DRWs are produced by means of analytical and numerical methods. In particular, we study the solutions with dispersion in a parity-time (PT)-symmetric potential for the Ablowitz-Musslimani equation. We also consider the non-autonomous DRW solutions, their parameter control and their interactions with variable coefficients, and predict long-lived rogue wave solutions. Our results might provide useful information for potential applications of synthetic PT-symmetric systems in nonlinear optics and condensed matter physics.

  17. Careful accounting of extrinsic noise in protein expression reveals correlations among its sources

    NASA Astrophysics Data System (ADS)

    Cole, John A.; Luthey-Schulten, Zaida

    2017-06-01

    In order to grow and replicate, living cells must express a diverse array of proteins, but the process by which proteins are made includes a great deal of inherent randomness. Understanding this randomness—whether it arises from the discrete stochastic nature of chemical reactivity ("intrinsic" noise), or from cell-to-cell variability in the concentrations of molecules involved in gene expression, or from the timings of important cell-cycle events like DNA replication and cell division ("extrinsic" noise)—remains a challenge. In this article we analyze a model of gene expression that accounts for several extrinsic sources of noise, including those associated with chromosomal replication, cell division, and variability in the numbers of RNA polymerase, ribonuclease E, and ribosomes. We then attempt to fit our model to a large proteomics and transcriptomics data set and find that only through the introduction of a few key correlations among the extrinsic noise sources can we accurately recapitulate the experimental data. These include significant correlations between the rate of mRNA degradation (mediated by ribonuclease E) and the rates of both transcription (RNA polymerase) and translation (ribosomes) and, strikingly, an anticorrelation between the transcription and the translation rates themselves.

  18. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α -mixing (for local statistics) and exponential α -mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  19. A Noachian/Hesperian Hiatus and Erosive Reactivation of Martian Valley Networks

    NASA Technical Reports Server (NTRS)

    Irwin, R. P., III.; Maxwell, T. A.; Howard, A. D.; Craddock, R. A.; Moore, J. M.

    2005-01-01

    Despite new evidence for persistent flow and sedimentation on early Mars, it remains unclear whether valley networks were active over long geologic timescales (10(exp 5)-10(exp 8) yr), or if flows were persistent only during multiple discrete episodes of moderate (approx. 10(exp 4) yr) to short (<10 yr) duration. Understanding the long-term stability/variability of valley network hydrology would provide an important control on paleoclimate and groundwater models. Here we describe geologic evidence for a hiatus in highland valley network activity while the fretted terrain formed, followed by a discrete reactivation of persistent (but possibly variable) erosive flows. Additional information is included in the original extended abstract.

  20. Violation of continuous-variable Einstein-Podolsky-Rosen steering with discrete measurements.

    PubMed

    Schneeloch, James; Dixon, P Ben; Howland, Gregory A; Broadbent, Curtis J; Howell, John C

    2013-03-29

    In this Letter, we derive an entropic Einstein-Podolsky-Rosen (EPR) steering inequality for continuous-variable systems using only experimentally measured discrete probability distributions and details of the measurement apparatus. We use this inequality to witness EPR steering between the positions and momenta of photon pairs generated in spontaneous parametric down-conversion. We examine the asymmetry between parties in this inequality, and show that this asymmetry can be used to reduce the technical requirements of experimental setups intended to demonstrate the EPR paradox. Furthermore, we develop a more stringent steering inequality that is symmetric between parties, and use it to show that the down-converted photon pairs also exhibit symmetric EPR steering.

  1. Violation of Continuous-Variable Einstein-Podolsky-Rosen Steering with Discrete Measurements

    NASA Astrophysics Data System (ADS)

    Schneeloch, James; Dixon, P. Ben; Howland, Gregory A.; Broadbent, Curtis J.; Howell, John C.

    2013-03-01

    In this Letter, we derive an entropic Einstein-Podolsky-Rosen (EPR) steering inequality for continuous-variable systems using only experimentally measured discrete probability distributions and details of the measurement apparatus. We use this inequality to witness EPR steering between the positions and momenta of photon pairs generated in spontaneous parametric down-conversion. We examine the asymmetry between parties in this inequality, and show that this asymmetry can be used to reduce the technical requirements of experimental setups intended to demonstrate the EPR paradox. Furthermore, we develop a more stringent steering inequality that is symmetric between parties, and use it to show that the down-converted photon pairs also exhibit symmetric EPR steering.

  2. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.

  3. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
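
    A hedged toy illustration (not the authors' experimental code) of the two decision functions contrasted in the preceding abstracts: given a discrete posterior over possible counts, "posterior sampling" draws a response from the posterior, while "posterior maximum" always reports its mode. The bimodal prior, noise model, and observation are invented.

```python
# Posterior sampling vs. posterior maximum for a discrete numerical estimate.
import numpy as np

rng = np.random.default_rng(7)
values = np.arange(1, 11)                                    # possible counts 1..10
prior = np.where((values == 3) | (values == 8), 4.0, 1.0)    # discrete bimodal prior
prior = prior / prior.sum()

def posterior(observation, noise_sd=1.5):
    like = np.exp(-0.5 * ((observation - values) / noise_sd) ** 2)
    post = prior * like
    return post / post.sum()

post = posterior(observation=6.2)
sampled_estimate = rng.choice(values, p=post)   # decision rule 1: sample the posterior
map_estimate = values[np.argmax(post)]          # decision rule 2: take the maximum

print("posterior:", np.round(post, 3))
print("sampling response:", sampled_estimate, "| maximum response:", map_estimate)
```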

  4. On discrete control of nonlinear systems with applications to robotics

    NASA Technical Reports Server (NTRS)

    Eslami, Mansour

    1989-01-01

    Much progress has been reported in the areas of modeling and control of nonlinear dynamic systems in a continuous-time framework. From an implementation point of view, however, it is essential to study these nonlinear systems directly in a discrete setting that is amenable to interfacing with digital computers. But developing discrete models and discrete controllers for a nonlinear system such as a robot is a nontrivial task. A robot is also inherently a variable-inertia dynamic system involving additional complications. Not only must the computer-oriented models of these systems satisfy the usual requirements for such models, but they must also be compatible with the inherent capabilities of computers and must preserve the fundamental physical characteristics of continuous-time systems such as the conservation of energy and/or momentum. Preliminary issues regarding discrete systems in general and discrete models of a typical industrial robot that is developed with full consideration of the principle of conservation of energy are presented. Some research on the pertinent tactile information processing is reviewed. Finally, system control methods and how to integrate these issues in order to complete the task of discrete control of a robot manipulator are also reviewed.

  5. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence of only limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

  6. Direct Simulation of Multiple Scattering by Discrete Random Media Illuminated by Gaussian Beams

    NASA Technical Reports Server (NTRS)

    Mackowski, Daniel W.; Mishchenko, Michael I.

    2011-01-01

    The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.

  7. Dynamical Localization for Unitary Anderson Models

    NASA Astrophysics Data System (ADS)

    Hamza, Eman; Joye, Alain; Stolz, Günter

    2009-11-01

    This paper establishes dynamical localization properties of certain families of unitary random operators on the d-dimensional lattice in various regimes. These operators are generalizations of one-dimensional physical models of quantum transport and draw their name from the analogy with the discrete Anderson model of solid state physics. They consist in a product of a deterministic unitary operator and a random unitary operator. The deterministic operator has a band structure, is absolutely continuous and plays the role of the discrete Laplacian. The random operator is diagonal with elements given by i.i.d. random phases distributed according to some absolutely continuous measure and plays the role of the random potential. In dimension one, these operators belong to the family of CMV-matrices in the theory of orthogonal polynomials on the unit circle. We implement the method of Aizenman-Molchanov to prove exponential decay of the fractional moments of the Green function for the unitary Anderson model in the following three regimes: In any dimension, throughout the spectrum at large disorder and near the band edges at arbitrary disorder and, in dimension one, throughout the spectrum at arbitrary disorder. We also prove that exponential decay of fractional moments of the Green function implies dynamical localization, which in turn implies spectral localization. These results complete the analogy with the self-adjoint case where dynamical localization is known to be true in the same three regimes.

  8. Distributed Relaxation for Conservative Discretizations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2001-01-01

    A multigrid method is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work that is a small (less than 10) multiple of the operation count in one target-grid residual evaluation. The way to achieve this efficiency is the distributed relaxation approach. TME solvers employing distributed relaxation have already been demonstrated for nonconservative formulations of high-Reynolds-number viscous incompressible and subsonic compressible flow regimes. The purpose of this paper is to provide foundations for applications of distributed relaxation to conservative discretizations. A direct correspondence between the primitive variable interpolations for calculating fluxes in conservative finite-volume discretizations and stencils of the discretized derivatives in the nonconservative formulation has been established. Based on this correspondence, one can arrive at a conservative discretization which is very efficiently solved with a nonconservative relaxation scheme and this is demonstrated for conservative discretization of the quasi one-dimensional Euler equations. Formulations for both staggered and collocated grid arrangements are considered and extensions of the general procedure to multiple dimensions are discussed.

  9. Spectral collocation method with a flexible angular discretization scheme for radiative transfer in multi-layer graded index medium

    NASA Astrophysics Data System (ADS)

    Wei, Linyang; Qi, Hong; Sun, Jianping; Ren, Yatao; Ruan, Liming

    2017-05-01

    The spectral collocation method (SCM) is employed to solve the radiative transfer in multi-layer semitransparent medium with graded index. A new flexible angular discretization scheme is employed to discretize the solid angle domain freely, overcoming the limit on the number of discrete radiative directions imposed when adopting the traditional SN discrete ordinate scheme. Three radial basis function interpolation approaches, named multi-quadric (MQ), inverse multi-quadric (IMQ) and inverse quadratic (IQ) interpolation, are employed to couple the radiative intensity at the interface between two adjacent layers, and numerical experiments show that MQ interpolation has the highest accuracy and best stability. Various radiative transfer problems in double-layer semitransparent media with different thermophysical properties are investigated and the influence of these thermophysical properties on the radiative transfer procedure in double-layer semitransparent media is also analyzed. All the simulated results show that the present SCM with the new angular discretization scheme can predict the radiative transfer in multi-layer semitransparent medium with graded index efficiently and accurately.
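
    A hedged, generic radial-basis-function interpolation sketch, independent of the radiative-transfer solver itself, showing the three kernels compared in the abstract: multi-quadric (MQ), inverse multi-quadric (IMQ) and inverse quadratic (IQ). The shape parameter, node count, and test function are arbitrary assumptions.

```python
# One-dimensional RBF interpolation with MQ, IMQ and IQ kernels.
import numpy as np

def kernel(r, c=1.0, kind="MQ"):
    if kind == "MQ":
        return np.sqrt(r**2 + c**2)
    if kind == "IMQ":
        return 1.0 / np.sqrt(r**2 + c**2)
    if kind == "IQ":
        return 1.0 / (r**2 + c**2)
    raise ValueError(kind)

def rbf_interpolate(x_nodes, f_nodes, x_eval, kind):
    # Solve for weights from the interpolation conditions, then evaluate.
    A = kernel(np.abs(x_nodes[:, None] - x_nodes[None, :]), kind=kind)
    w = np.linalg.solve(A, f_nodes)
    B = kernel(np.abs(x_eval[:, None] - x_nodes[None, :]), kind=kind)
    return B @ w

x_nodes = np.linspace(0.0, 1.0, 15)
f_nodes = np.sin(2 * np.pi * x_nodes)           # stand-in for interface intensities
x_eval = np.linspace(0.0, 1.0, 200)

for kind in ("MQ", "IMQ", "IQ"):
    err = np.max(np.abs(rbf_interpolate(x_nodes, f_nodes, x_eval, kind)
                        - np.sin(2 * np.pi * x_eval)))
    print(f"{kind}: max interpolation error = {err:.2e}")
```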

  10. The Livingstone Model of a Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Bajwa, Anupa; Sweet, Adam; Korsmeyer, David (Technical Monitor)

    2003-01-01

    Livingstone is a discrete, propositional logic-based inference engine that has been used for diagnosis of physical systems. We present a component-based model of a Main Propulsion System (MPS) and describe how it is used with Livingstone (L2) to implement a diagnostic system for integrated vehicle health management (IVHM) in the Propulsion IVHM Technology Experiment (PITEX). We start by discussing the process of conceptualizing such a model. We describe graphical tools that facilitated the generation of the model. The model is composed of components (which map onto physical components), connections between components, and constraints. A component is specified by variables, with a set of discrete, qualitative values for each variable in its local nominal and failure modes. For each mode, the model specifies the component's behavior and transitions. We describe the MPS components' nominal and fault modes and the associated Livingstone variables and data structures. Given this model, together with the external commands and observations from the system, Livingstone tracks the state of the MPS over discrete time-steps by choosing trajectories that are consistent with the observations. We briefly discuss how the compiled model fits into the overall PITEX architecture. Finally, we summarize our modeling experience, discuss advantages and disadvantages of our approach, and suggest enhancements to the modeling process.

  11. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, and also with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.
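
    One common way to make PSO discrete for permutation problems such as job-shop sequencing is to encode a velocity as a list of swaps that moves a particle toward its personal and global bests. The sketch below illustrates only that generic idea, not the paper's schedule builder; the permutation encoding and the learning probabilities are hypothetical.

      import random

      def swaps_between(perm_from, perm_to):
          """List of swaps transforming perm_from into perm_to (a 'velocity')."""
          p = list(perm_from)
          swaps = []
          for i in range(len(p)):
              if p[i] != perm_to[i]:
                  j = p.index(perm_to[i])
                  p[i], p[j] = p[j], p[i]
                  swaps.append((i, j))
          return swaps

      def apply_swaps(perm, swaps, prob):
          p = list(perm)
          for i, j in swaps:
              if random.random() < prob:      # each swap applied with a learning probability
                  p[i], p[j] = p[j], p[i]
          return p

      # One particle update: move toward the personal best, then toward the global best
      particle = [2, 0, 3, 1, 4]
      pbest    = [0, 2, 3, 1, 4]
      gbest    = [0, 1, 2, 3, 4]
      particle = apply_swaps(particle, swaps_between(particle, pbest), prob=0.5)
      particle = apply_swaps(particle, swaps_between(particle, gbest), prob=0.5)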

  12. Transparent lattices and their solitary waves.

    PubMed

    Sadurní, E

    2014-09-01

    We provide a family of transparent tight-binding models with nontrivial potentials and site-dependent hopping parameters. Their feasibility is discussed in electromagnetic resonators, dielectric slabs, and quantum-mechanical traps. In the second part of the paper, the arrays are obtained through a generalization of supersymmetric quantum mechanics in discrete variables. The formalism includes a finite-difference Darboux transformation applied to the scattering matrix of a periodic array. A procedure for constructing a hierarchy of discrete Hamiltonians is indicated and a particular biparametric family is given. The corresponding potentials and hopping functions are identified as solitary waves, pointing to a discrete spinorial generalization of the Korteweg-de Vries family.

  13. Quantum information processing in phase space: A modular variables approach

    NASA Astrophysics Data System (ADS)

    Ketterer, A.; Keller, A.; Walborn, S. P.; Coudreau, T.; Milman, P.

    2016-08-01

    Binary quantum information can be fault-tolerantly encoded in states defined in infinite-dimensional Hilbert spaces. Such states define a computational basis, and permit a perfect equivalence between continuous and discrete universal operations. The drawback of this encoding is that the corresponding logical states are unphysical, meaning infinitely localized in phase space. We use the modular variables formalism to show that, in a number of protocols relevant for quantum information and for the realization of fundamental tests of quantum mechanics, it is possible to loosen the requirements on the logical subspace without jeopardizing their usefulness or their successful implementation. Such protocols involve measurements of appropriately chosen modular variables that permit the readout of the encoded discrete quantum information from the corresponding logical states. Finally, we demonstrate the experimental feasibility of our approach by applying it to the transverse degrees of freedom of single photons.

  14. The arbitrary order mixed mimetic finite difference method for the diffusion equation

    DOE PAGES

    Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco

    2016-05-01

    Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determines the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and is acting on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent and optimal error estimates for flux and scalar variable are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.

  15. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    PubMed

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  16. Remote creation of hybrid entanglement between particle-like and wave-like optical qubits

    NASA Astrophysics Data System (ADS)

    Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien

    2014-07-01

    The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous networks, where discrete- and continuous-variable operations and techniques can be efficiently combined.

  17. Self-dual form of Ruijsenaars-Schneider models and ILW equation with discrete Laplacian

    NASA Astrophysics Data System (ADS)

    Zabrodin, A.; Zotov, A.

    2018-02-01

    We discuss a self-dual form of the Bäcklund transformations for the continuous (in the time variable) glN Ruijsenaars-Schneider model. It is based on first-order equations in N + M complex variables, which include N positions of particles and M dual variables. The latter satisfy the equations of motion of the glM Ruijsenaars-Schneider model. In the elliptic case M = N holds, while for the rational and trigonometric models M is not necessarily equal to N. Our consideration is similar to the previously obtained results for the Calogero-Moser models, which are recovered in the non-relativistic limit. We also show that the self-dual description of the Ruijsenaars-Schneider models can be derived from the complexified intermediate long wave equation with discrete Laplacian by means of a simple pole ansatz, just as the Calogero-Moser models arise from the ordinary intermediate long wave and Benjamin-Ono equations.

  18. What influences participation in genetic carrier testing? Results from a discrete choice experiment.

    PubMed

    Hall, Jane; Fiebig, Denzil G; King, Madeleine T; Hossain, Ishrat; Louviere, Jordan J

    2006-05-01

    This study explores factors that influence participation in genetic testing programs and the acceptance of multiple tests. Tay Sachs and cystic fibrosis are both genetically determined recessive disorders with differing severity, treatment availability, and prevalence in different population groups. We used a discrete choice experiment with a general community and an Ashkenazi Jewish sample; data were analysed using multinomial logit with random coefficients. Although Jewish respondents were more likely to be tested, both groups seem to be making very similar tradeoffs across attributes when they make genetic testing choices.

  19. Discrete Dynamics Lab

    NASA Astrophysics Data System (ADS)

    Wuensche, Andrew

    DDLab is interactive graphics software for creating, visualizing, and analyzing many aspects of Cellular Automata, Random Boolean Networks, and Discrete Dynamical Networks in general and studying their behavior, both from the time-series perspective — space-time patterns, and from the state-space perspective — attractor basins. DDLab is relevant to research, applications, and education in the fields of complexity, self-organization, emergent phenomena, chaos, collision-based computing, neural networks, content addressable memory, genetic regulatory networks, dynamical encryption, generative art and music, and the study of the abstract mathematical/physical/dynamical phenomena in their own right.

  20. A discrete model on Sierpinski gasket substrate for a conserved current equation with a conservative noise

    NASA Astrophysics Data System (ADS)

    Kim, Dae Ho; Kim, Jin Min

    2012-09-01

    A conserved discrete model on the Sierpinski gasket substrate is studied. The interface width W in the model follows the Family-Vicsek dynamic scaling form with growth exponent β ≈ 0.0542, roughness exponent α ≈ 0.240 and dynamic exponent z ≈ 4.42. They satisfy the scaling relation α + z = 2z_rw, where z_rw is the random walk exponent of the fractal substrate. They are also in good agreement with the values predicted by a fractional Langevin equation for conserved growth on the fractal substrate.

  1. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA are attractive since they use only objective function values in the search process, so gradient calculations are avoided; hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GA is underway for a cost optimization study of a launch-vehicle fuel tank and the structural design of a wing. The strengths and limitations of GA for launch vehicle design optimization are studied.
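
    As a minimal illustration of how a GA handles discrete design variables such as engine count or material choice, the toy sketch below evolves integer-coded designs; the objective function and the variable ranges are hypothetical stand-ins, not the launch-vehicle models discussed above.

      import random

      # Hypothetical discrete design space: number of engines and a material index
      ENGINES = [1, 2, 3, 4, 5]
      MATERIALS = [0, 1, 2]                    # e.g., indices into a material property table

      def fitness(design):
          """Toy objective standing in for a launch-vehicle performance metric."""
          n_eng, mat = design
          return -(abs(n_eng - 3) + 0.5 * mat)  # higher is better (purely illustrative)

      def crossover(a, b):
          return [random.choice(pair) for pair in zip(a, b)]

      def mutate(design, rate=0.2):
          d = list(design)
          if random.random() < rate:
              d[0] = random.choice(ENGINES)
          if random.random() < rate:
              d[1] = random.choice(MATERIALS)
          return d

      pop = [[random.choice(ENGINES), random.choice(MATERIALS)] for _ in range(20)]
      for _ in range(30):
          pop.sort(key=fitness, reverse=True)   # rank designs by fitness
          parents = pop[:10]
          pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
      best = max(pop, key=fitness)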

  2. Price percolation model

    NASA Astrophysics Data System (ADS)

    Kanai, Yasuhiro; Abe, Keiji; Seki, Yoichi

    2015-06-01

    We propose a price percolation model to reproduce the price distribution of components used in industrial finished goods. The intent is to show, using the price percolation model and a component category as an example, that percolation behaviors, which exist in the matter system, the ecosystem, and human society, also exist in abstract, random phenomena satisfying the power law. First, we discretize the total potential demand for a component category, considering it a random field. Second, we assume that the discretized potential demand corresponding to a function of a finished good turns into actual demand if the difficulty of function realization is less than the maximum difficulty of the realization. The simulations using this model suggest that changes in a component category's price distribution are due to changes in the total potential demand corresponding to the lattice size and the maximum difficulty of realization, which is an occupation probability. The results are verified using electronic components' sales data.

  3. Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1994-01-01

    The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.

  4. A Local-Realistic Model of Quantum Mechanics Based on a Discrete Spacetime

    NASA Astrophysics Data System (ADS)

    Sciarretta, Antonio

    2018-01-01

    This paper presents a realistic, stochastic, and local model that reproduces nonrelativistic quantum mechanics (QM) results without using its mathematical formulation. The proposed model only uses integer-valued quantities and operations on probabilities, in particular assuming a discrete spacetime under the form of a Euclidean lattice. Individual (spinless) particle trajectories are described as random walks. Transition probabilities are simple functions of a few quantities that are either randomly associated to the particles during their preparation, or stored in the lattice nodes they visit during the walk. QM predictions are retrieved as probability distributions of similarly-prepared ensembles of particles. The scenarios considered to assess the model comprise the free particle, constant external force, harmonic oscillator, particle in a box, the Delta potential, particle on a ring, and particle on a sphere, and include quantization of energy levels and angular momentum, as well as momentum entanglement.

  5. Detecting dynamical changes in time series by using the Jensen Shannon divergence

    NASA Astrophysics Data System (ADS)

    Mateos, D. M.; Riveaud, L. E.; Lamberti, P. W.

    2017-08-01

    Most of the time series in nature are a mixture of signals with deterministic and random dynamics. Thus the distinction between these two characteristics becomes important. Distinguishing between chaotic and aleatory signals is difficult because they share a common wide-band power spectrum, a delta-like autocorrelation function, and other features as well. In general, signals are presented as continuous records and must be discretized before being analyzed. In this work, we introduce different schemes for discretizing time series and for detecting dynamical changes in them. One of the main motivations is to detect transitions between the chaotic and random regimes. The tools used here originate from information theory. The proposed schemes are applied to simulated and real-life signals, showing in all cases a high proficiency for detecting changes in the dynamics of the associated time series.
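
    A minimal sketch of the title idea, assuming a simple histogram discretization: compute the Jensen-Shannon divergence between two windows of a signal and use a jump in its value as a change indicator. The window sizes, bin count and synthetic signal are illustrative choices, not those of the paper.

      import numpy as np

      def jensen_shannon(p, q, eps=1e-12):
          """JS divergence between two discrete distributions (base-2 logarithm)."""
          p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
          p, q = p / p.sum(), q / q.sum()
          m = 0.5 * (p + q)
          kl = lambda a, b: np.sum(a * np.log2(a / b))
          return 0.5 * kl(p, m) + 0.5 * kl(q, m)

      def discretize(x, bins=8):
          """Histogram-based discretization of a continuous record."""
          hist, _ = np.histogram(x, bins=bins)
          return hist / hist.sum()

      # Compare two sliding windows of a signal; a jump in the divergence flags a change
      rng = np.random.default_rng(0)
      signal = np.concatenate([np.sin(np.linspace(0, 40, 500)),   # deterministic part
                               rng.normal(size=500)])             # random part
      w1, w2 = signal[250:500], signal[500:750]
      change_score = jensen_shannon(discretize(w1), discretize(w2))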

  6. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    EPA Science Inventory



    In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...

  7. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    NASA Astrophysics Data System (ADS)

    Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe

    2017-04-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed-interval (1-32 days) or a rule-based (decision tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimation increased along with the fixed interval length. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha^-1 yr^-1 in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of the estimations of cumulative N2O fluxes using the discrete chamber-based method.

  8. Stochastic Stability of Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.

  9. Discrete Emotion Effects on Lexical Decision Response Times

    PubMed Central

    Briesemeister, Benny B.; Kuchinke, Lars; Jacobs, Arthur M.

    2011-01-01

    Our knowledge about affective processes, especially concerning effects on cognitive demands like word processing, is increasing steadily. Several studies consistently document valence and arousal effects, and although there is some debate on possible interactions and different notions of valence, broad agreement on a two dimensional model of affective space has been achieved. Alternative models like the discrete emotion theory have received little interest in word recognition research so far. Using backward elimination and multiple regression analyses, we show that five discrete emotions (i.e., happiness, disgust, fear, anger and sadness) explain as much variance as two published dimensional models assuming continuous or categorical valence, with the variables happiness, disgust and fear significantly contributing to this account. Moreover, these effects even persist in an experiment with discrete emotion conditions when the stimuli are controlled for emotional valence and arousal levels. We interpret this result as evidence for discrete emotion effects in visual word recognition that cannot be explained by the two dimensional affective space account. PMID:21887307

  10. Discrete emotion effects on lexical decision response times.

    PubMed

    Briesemeister, Benny B; Kuchinke, Lars; Jacobs, Arthur M

    2011-01-01

    Our knowledge about affective processes, especially concerning effects on cognitive demands like word processing, is increasing steadily. Several studies consistently document valence and arousal effects, and although there is some debate on possible interactions and different notions of valence, broad agreement on a two dimensional model of affective space has been achieved. Alternative models like the discrete emotion theory have received little interest in word recognition research so far. Using backward elimination and multiple regression analyses, we show that five discrete emotions (i.e., happiness, disgust, fear, anger and sadness) explain as much variance as two published dimensional models assuming continuous or categorical valence, with the variables happiness, disgust and fear significantly contributing to this account. Moreover, these effects even persist in an experiment with discrete emotion conditions when the stimuli are controlled for emotional valence and arousal levels. We interpret this result as evidence for discrete emotion effects in visual word recognition that cannot be explained by the two dimensional affective space account.

  11. Discrete post-processing of total cloud cover ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian

    2017-04-01

    This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference: Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review, 144, 2565-2577.
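
    A minimal sketch of the first post-processing variant (multinomial logistic regression) using scikit-learn; the predictors (ensemble mean and spread) and the synthetic training data are assumptions for illustration, and the proportional odds model would require an ordinal-regression implementation instead.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical training data: ensemble mean and spread as predictors,
      # observed total cloud cover in oktas (0..8) as the discrete target.
      rng = np.random.default_rng(1)
      ens_mean = rng.uniform(0, 8, size=500)
      ens_spread = rng.uniform(0, 2, size=500)
      obs_okta = np.clip(np.rint(ens_mean + rng.normal(0, 1, 500)), 0, 8).astype(int)

      X = np.column_stack([ens_mean, ens_spread])
      model = LogisticRegression(max_iter=1000)   # default solver handles the multiclass case
      model.fit(X, obs_okta)

      # Post-processed forecast: a probability distribution over the observed categories
      probs = model.predict_proba(np.array([[5.2, 0.8]]))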

  12. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  13. Improved result on stability analysis of discrete stochastic neural networks with time delay

    NASA Astrophysics Data System (ADS)

    Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng

    2009-04-01

    This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of a linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.

  14. New Discrete Fibonacci Charge Pump Design, Evaluation and Measurement

    NASA Astrophysics Data System (ADS)

    Matoušek, David; Hospodka, Jiří; Šubrt, Ondřej

    2017-06-01

    This paper focuses on the practical aspects of the realisation of Dickson and Fibonacci charge pumps. The standard Dickson charge pump circuit and a new Fibonacci charge pump implementation are compared. Both charge pumps were designed, evaluated by LTspice XVII simulations, and realised in discrete form on a printed circuit board (PCB). Finally, the key parameters, such as the output voltage, efficiency, rise time, and the effects of variable power supply and clock frequency, were measured.

  15. Randomized clinical trial of extended use of a hydrophobic condenser humidifier: 1 vs. 7 days.

    PubMed

    Thomachot, Laurent; Leone, Marc; Razzouk, Karim; Antonini, François; Vialet, Renaud; Martin, Claude

    2002-01-01

    To determine whether extended use (7 days) would affect the efficiency of heat and water preservation of a hydrophobic condenser humidifier as well as the rate of ventilation-acquired pneumonia, compared with 1 day of use. Prospective, controlled, randomized, not blinded, clinical study. Twelve-bed intensive care unit of a university hospital. One hundred and fifty-five consecutive patients undergoing mechanical ventilation for ≥ 48 hrs. After randomization, patients were allocated to one of the two following groups: a) heat and moisture exchangers (HMEs) changed every 24 hrs; b) HMEs changed only once a week. Devices in both groups could be changed at the discretion of the staff when signs of occlusion or increased resistance were identified. Efficient airway humidification and heating were assessed by clinical variables (numbers of tracheal suctionings and instillations required, peak and mean airway pressures). The frequency rates of bronchial colonization and ventilation-acquired pneumonia were evaluated by using clinical and microbiological criteria. Endotracheal tube occlusion, ventilatory support variables, duration of mechanical ventilation, length of intensive care, acquired multiorgan dysfunction, and mortality rates also were recorded. The two groups were similar at the time of randomization. Endotracheal tube occlusion never occurred. In the targeted population (patients ventilated for ≥ 7 days), the frequency rate of ventilation-acquired pneumonia was 24% in the HME 1-day group and 17% in the HME 7-day group (p > .05, not significant). Ventilation-acquired pneumonia rates per 1000 ventilatory support days were 16.4/1000 in the HME 1-day group and 12.4/1000 in the HME 7-day group (p > .05, not significant). No statistically significant differences were found between the two groups for duration of mechanical ventilation, intensive care unit length of stay, acquired organ system derangements, and mortality rate. There was indirect evidence of very little, if any, change in HME resistance. Changing the studied hydrophobic HME after 7 days did not affect efficiency, increase resistance, or alter bacterial colonization. The frequency rate of ventilation-acquired pneumonia was also unchanged. Use of HMEs for > 24 hrs and up to 7 days is safe.

  16. Identification of Novel Growth Regulators in Plant Populations Expressing Random Peptides

    PubMed Central

    Bao, Zhilong; Clancy, Maureen A.

    2017-01-01

    The use of chemical genomics approaches allows the identification of small molecules that integrate into biological systems, thereby changing discrete processes that influence growth, development, or metabolism. Libraries of chemicals are applied to living systems, and changes in phenotype are observed, potentially leading to the identification of new growth regulators. This work describes an approach that is the nexus of chemical genomics and synthetic biology. Here, each plant in an extensive population synthesizes a unique small peptide arising from a transgene composed of a randomized nucleic acid sequence core flanked by translational start, stop, and cysteine-encoding (for disulfide cyclization) sequences. Ten and 16 amino acid sequences, bearing a core of six and 12 random amino acids, have been synthesized in Arabidopsis (Arabidopsis thaliana) plants. Populations were screened for phenotypes from the seedling stage through senescence. Dozens of phenotypes were observed in over 2,000 plants analyzed. Ten conspicuous phenotypes were verified through separate transformation and analysis of multiple independent lines. The results indicate that these populations contain sequences that often influence discrete aspects of plant biology. Novel peptides that affect photosynthesis, flowering, and red light response are described. The challenge now is to identify the mechanistic integrations of these peptides into biochemical processes. These populations serve as a new tool to identify small molecules that modulate discrete plant functions that could be produced later in transgenic plants or potentially applied exogenously to impart their effects. These findings could usher in a new generation of agricultural growth regulators, herbicides, or defense compounds. PMID:28807931

  17. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…

  18. Quantum cryptography with finite resources: unconditional security bound for discrete-variable protocols with one-way postprocessing.

    PubMed

    Scarani, Valerio; Renner, Renato

    2008-05-23

    We derive a bound for the security of quantum key distribution with finite resources under one-way postprocessing, based on a definition of security that is composable and has an operational meaning. While our proof relies on the assumption of collective attacks, unconditional security follows immediately for standard protocols such as Bennett-Brassard 1984 and the six-state protocol. For single-qubit implementations of such protocols, we find that the secret key rate becomes positive when at least N ≈ 10^5 signals are exchanged and processed. For any other discrete-variable protocol, unconditional security can be obtained using the exponential de Finetti theorem, but the additional overhead leads to very pessimistic estimates.

  19. Topology and layout optimization of discrete and continuum structures

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Kikuchi, Noboru

    1993-01-01

    The basic features of the ground structure method for truss structure and continuum problems are described. Problems with a large number of potential structural elements are considered, using the compliance of the structure as the objective function. The design problem is the minimization of compliance for a given structural weight; the design variables for truss problems are the cross-sectional areas of the individual truss members, while for continuum problems they are the variable densities of material in each of the elements of the FEM discretization. It is shown how homogenization theory can be applied to provide a relation between material density and the effective material properties of a periodic medium with a known microstructure of material and voids.

  20. On the convergence of a fully discrete scheme of LES type to physically relevant solutions of the incompressible Navier-Stokes

    NASA Astrophysics Data System (ADS)

    Berselli, Luigi C.; Spirito, Stefano

    2018-06-01

    Obtaining reliable numerical simulations of turbulent fluids is a challenging problem in computational fluid mechanics. The large eddy simulation (LES) models are efficient tools to approximate turbulent fluids, and an important step in the validation of these models is the ability to reproduce relevant properties of the flow. In this paper, we consider a fully discrete approximation of the Navier-Stokes-Voigt model by an implicit Euler algorithm (with respect to the time variable) and a Fourier-Galerkin method (in the space variables). We prove the convergence to weak solutions of the incompressible Navier-Stokes equations satisfying the natural local entropy condition, hence selecting the so-called physically relevant solutions.

  1. Experimental study on discretely modulated continuous-variable quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yong; Zou Hongxin; Chen Pingxing

    2010-08-15

    We present a discretely modulated continuous-variable quantum key distribution system in free space by using strong coherent states. The amplitude noise in the laser source is suppressed to the shot-noise limit by using a mode cleaner combined with a frequency shift technique. Also, it is proven that the phase noise in the source has no impact on the final secret key rate. In order to increase the encoding rate, we use broadband homodyne detectors and the no-switching protocol. In a realistic model, we establish a secret key rate of 46.8 kbits/s against collective attacks at an encoding rate of 10 MHz for a 90% channel loss when the modulation variance is optimal.

  2. Real-time measurement of quality during the compaction of subgrade soils.

    DOT National Transportation Integrated Search

    2012-12-01

    Conventional quality control of subgrade soils during their compaction is usually performed by monitoring moisture content and dry density at a few discrete locations. However, randomly selected points do not adequately represent the entire compacted...

  3. DiscML: an R package for estimating evolutionary rates of discrete characters using maximum likelihood.

    PubMed

    Kim, Tane; Hao, Weilong

    2014-09-27

    The study of discrete characters is crucial for the understanding of evolutionary processes. Even though great advances have been made in the analysis of nucleotide sequences, computer programs for non-DNA discrete characters are often dedicated to specific analyses and lack flexibility. Discrete characters often have different transition rate matrices, variable rates among sites and sometimes contain unobservable states. To obtain the ability to accurately estimate a variety of discrete characters, programs with sophisticated methodologies and flexible settings are desired. DiscML performs maximum likelihood estimation for evolutionary rates of discrete characters on a provided phylogeny with the options that correct for unobservable data, rate variations, and unknown prior root probabilities from the empirical data. It gives users options to customize the instantaneous transition rate matrices, or to choose pre-determined matrices from models such as birth-and-death (BD), birth-death-and-innovation (BDI), equal rates (ER), symmetric (SYM), general time-reversible (GTR) and all rates different (ARD). Moreover, we show application examples of DiscML on gene family data and on intron presence/absence data. DiscML was developed as a unified R program for estimating evolutionary rates of discrete characters with no restriction on the number of character states, and with flexibility to use different transition models. DiscML is ideal for the analyses of binary (1s/0s) patterns, multi-gene families, and multistate discrete morphological characteristics.

  4. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  5. Is processing of symbols and words influenced by writing system? Evidence from Chinese, Korean, English, and Greek.

    PubMed

    Altani, Angeliki; Georgiou, George K; Deng, Ciping; Cho, Jeung-Ryeul; Katopodi, Katerina; Wei, Wei; Protopapas, Athanassios

    2017-12-01

    We examined cross-linguistic effects in the relationship between serial and discrete versions of digit naming and word reading. In total, 113 Mandarin-speaking Chinese children, 100 Korean children, 112 English-speaking Canadian children, and 108 Greek children in Grade 3 were administered tasks of serial and discrete naming of words and digits. Interrelations among tasks indicated that the link between rapid naming and reading is largely determined by the format of the tasks across orthographies. Multigroup path analyses with discrete and serial word reading as dependent variables revealed commonalities as well as significant differences between writing systems. The path coefficient from discrete digits to discrete words was greater for the more transparent orthographies, consistent with more efficient sight-word processing. The effect of discrete word reading on serial word reading was stronger in alphabetic languages, where there was also a suppressive effect of discrete digit naming. However, the effect of serial digit naming on serial word reading did not differ among the four language groups. This pattern of relationships challenges a universal account of reading fluency acquisition while upholding a universal role of rapid serial naming, further distinguishing between multi-element interword and intraword processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Diagnosis of delay-deadline failures in real time discrete event models.

    PubMed

    Biswas, Santosh; Sarkar, Dipankar; Bhowal, Prodip; Mukhopadhyay, Siddhartha

    2007-10-01

    In this paper a method for fault detection and diagnosis (FDD) of real-time systems has been developed. A modeling framework termed real time discrete event system (RTDES) is presented and a mechanism for FDD of such models has been developed. The use of the RTDES framework for FDD is an extension of works reported in the discrete event system (DES) literature, which are based on finite state machines (FSM). FDD of RTDES models is suited to real-time systems because of their capability of representing timing faults that lead to failures in terms of erroneous delays and deadlines, which FSM-based models cannot address. The concept of measurement restriction of variables is introduced for RTDES, and the consequent equivalence of states and indistinguishability of transitions have been characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and a procedure for constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.

  7. Synchronous Parallel Emulation and Discrete Event Simulation System with Self-Contained Simulation Objects and Active Event Objects

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    1998-01-01

    The present invention is embodied in a method of performing object-oriented simulation and a system having inter-connected processor nodes operating in parallel to simulate mutual interactions of a set of discrete simulation objects distributed among the nodes as a sequence of discrete events changing state variables of respective simulation objects so as to generate new event-defining messages addressed to respective ones of the nodes. The object-oriented simulation is performed at each one of the nodes by assigning passive self-contained simulation objects to each one of the nodes, responding to messages received at one node by generating corresponding active event objects having user-defined inherent capabilities and individual time stamps and corresponding to respective events affecting one of the passive self-contained simulation objects of the one node, restricting the respective passive self-contained simulation objects to only providing and receiving information from the respective active event objects, requesting information and changing variables within a passive self-contained simulation object by the active event object, and producing corresponding messages specifying events resulting therefrom by the active event objects.

  8. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.
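
    The authors' estimator is not reproduced here; as a minimal sketch of the problem setting, the snippet below forms a sliding-window maximum-likelihood estimate of a slowly drifting Poisson mean from a small-count series. The window length and the synthetic rate are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic nonstationary count series: a Poisson rate that drifts over time
      T = 365
      true_rate = 1.5 + 1.0 * np.sin(2 * np.pi * np.arange(T) / T)
      counts = rng.poisson(true_rate)                # small integer counts per day

      def sliding_window_rate(counts, window=30):
          """Windowed maximum-likelihood estimate of a slowly varying Poisson mean."""
          est = np.full(len(counts), np.nan)
          half = window // 2
          for t in range(len(counts)):
              lo, hi = max(0, t - half), min(len(counts), t + half + 1)
              est[t] = counts[lo:hi].mean()          # the MLE of a Poisson mean is the sample mean
          return est

      rate_hat = sliding_window_rate(counts)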

  9. Retention capacity of correlated surfaces.

    PubMed

    Schrenk, K J; Araújo, N A M; Ziff, R M; Herrmann, H J

    2014-06-01

    We extend the water retention model [C. L. Knecht et al., Phys. Rev. Lett. 108, 045703 (2012)] to correlated random surfaces. We find that the retention capacity of discrete random landscapes is strongly affected by spatial correlations among the heights. This phenomenon is related to the emergence of power-law scaling in the lake volume distribution. We also solve the uncorrelated case exactly for a small lattice and present bounds on the retention of uncorrelated landscapes.
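
    Retention of a discrete landscape with open boundaries can be computed with the standard priority-queue flooding algorithm, sketched below for an uncorrelated random surface; this reproduces only the basic retention measure, not the correlated-surface generation studied in the paper.

      import heapq
      import numpy as np

      def retained_water(h):
          """Water retained on a discrete landscape that drains over its open edges."""
          n, m = h.shape
          visited = np.zeros((n, m), dtype=bool)
          pq = []
          for i in range(n):                       # seed the frontier with boundary cells
              for j in range(m):
                  if i in (0, n - 1) or j in (0, m - 1):
                      heapq.heappush(pq, (int(h[i, j]), i, j))
                      visited[i, j] = True
          water = 0
          while pq:
              level, i, j = heapq.heappop(pq)      # lowest barrier on the current frontier
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  a, b = i + di, j + dj
                  if 0 <= a < n and 0 <= b < m and not visited[a, b]:
                      visited[a, b] = True
                      water += max(0, level - int(h[a, b]))
                      heapq.heappush(pq, (max(level, int(h[a, b])), a, b))
          return water

      rng = np.random.default_rng(0)
      uncorrelated = rng.integers(1, 10, size=(64, 64))   # uncorrelated random heights
      print(retained_water(uncorrelated))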

  10. The Role of Emotion in Global Warming Policy Support and Opposition

    PubMed Central

    Smith, Nicholas; Leiserowitz, Anthony

    2014-01-01

    Prior research has found that affect and affective imagery strongly influence public support for global warming. This article extends this literature by exploring the separate influence of discrete emotions. Utilizing a nationally representative survey in the United States, this study found that discrete emotions were stronger predictors of global warming policy support than cultural worldviews, negative affect, image associations, or sociodemographic variables. In particular, worry, interest, and hope were strongly associated with increased policy support. The results contribute to experiential theories of risk information processing and suggest that discrete emotions play a significant role in public support for climate change policy. Implications for climate change communication are also discussed. PMID:24219420

  11. The role of emotion in global warming policy support and opposition.

    PubMed

    Smith, Nicholas; Leiserowitz, Anthony

    2014-05-01

    Prior research has found that affect and affective imagery strongly influence public support for global warming. This article extends this literature by exploring the separate influence of discrete emotions. Utilizing a nationally representative survey in the United States, this study found that discrete emotions were stronger predictors of global warming policy support than cultural worldviews, negative affect, image associations, or sociodemographic variables. In particular, worry, interest, and hope were strongly associated with increased policy support. The results contribute to experiential theories of risk information processing and suggest that discrete emotions play a significant role in public support for climate change policy. Implications for climate change communication are also discussed. © 2013 Society for Risk Analysis.

  12. Event-Based Variance-Constrained H∞ Filtering for Stochastic Parameter Systems Over Sensor Networks With Successive Missing Measurements.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2018-03-01

    This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the H∞ performance requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By recurring to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.
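
    As a minimal sketch of an event-triggered transmission rule of the kind described (a sensor broadcasts a sample only when it deviates sufficiently from the last transmitted one), consider the following; the threshold, the norm and the synthetic measurements are illustrative assumptions, not the paper's triggering condition.

      import numpy as np

      def event_triggered_stream(measurements, threshold=0.5):
          """Send a sample only when it differs enough from the last transmitted one."""
          last_sent = None
          sent = []
          for k, y in enumerate(measurements):
              if last_sent is None or np.linalg.norm(y - last_sent) > threshold:
                  sent.append((k, y))          # this sample is broadcast to neighboring nodes
                  last_sent = y
          return sent

      rng = np.random.default_rng(2)
      y = np.cumsum(rng.normal(0, 0.2, size=(100, 2)), axis=0)   # hypothetical sensor output
      transmissions = event_triggered_stream(y)
      print(f"{len(transmissions)} of {len(y)} samples transmitted")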

  13. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternative least-square regression with Tikhonov regularization using a roughening matrix computing the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  14. Salivary flow rate and pH in patients with oral pathologies.

    PubMed

    Foglio-Bonda, P L; Brilli, K; Pattarino, F; Foglio-Bonda, A

    2017-01-01

    To determine salivary pH and flow rate (FR) in a sample of 164 patients who came to an oral pathology clinic, 84 suffering from oral lesions and 80 without oral lesions. Another aim was to evaluate factors that influence salivary flow rate. Subjects underwent clinical examination and completed an anamnestic questionnaire in order to obtain useful information that was used to classify participants into different groups. Unstimulated whole saliva (UWS) was collected using the spitting method at 11:00 am. The FR was evaluated with the weighing technique, and a portable pH meter equipped with a microelectrode was used to measure pH. Both univariate and classification (single and Random Forest) analyses were performed. The data analysis showed that FR and pH differed significantly (p < 0.001) between patients with oral lesions (FR = 0.336 mL/min, pH = 6.69) and those without oral lesions (FR = 0.492 mL/min, pH = 6.96). By Random Forest, oral lesions and antihypertensive drugs were ranked in the top two among the evaluated variables to discretize subjects with FR = 0.16 mL/min. Our study shows that there is a relationship between oral lesions, antihypertensive drugs and alteration of pH and FR.

  15. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives of each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
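
    The stated BLUP-ridge equivalence can be illustrated numerically: the primal ridge solution and the dual (mixed-model) form coincide. The sketch below uses simulated data and a fixed penalty; in the GLMM the penalty would be tied to the estimated variance components rather than chosen by hand.

      import numpy as np

      rng = np.random.default_rng(3)
      n, p = 200, 10
      X = rng.normal(size=(n, p))                    # stand-in for centered SNP genotypes
      y = X @ rng.normal(size=p) + rng.normal(size=n)

      lam = 5.0                                      # penalty; in the mixed model it plays the
                                                     # role of the residual-to-random-effect variance ratio
      # Ridge / primal form of the estimator
      beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
      # Mixed-model / dual form, as used for BLUPs of the random SNP effects
      beta_blup = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

      print(np.allclose(beta_ridge, beta_blup))      # True: the two estimators coincide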

  16. Image encryption using random sequence generated from generalized information domain

    NASA Astrophysics Data System (ADS)

    Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu

    2016-05-01

    A novel image encryption method based on a random sequence generated from the generalized information domain and a permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image, while the random sequences are treated as keystreams. A new factor, called the drift factor, is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method approximate a one-time pad. Experimental results show that the random sequences pass the NIST statistical tests at a high rate, and extensive analysis demonstrates that the new encryption scheme has superior security.
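
    A minimal sketch of a generic permutation-diffusion cipher (P-box pixel shuffle followed by keystream XOR); the keystream below comes from an ordinary pseudorandom generator rather than the paper's generalized-information-domain sequence, so it illustrates only the architecture.

      import numpy as np

      def encrypt(img, key_seed=12345):
          """Toy permutation-diffusion cipher: P-box shuffle, then keystream XOR."""
          rng = np.random.default_rng(key_seed)    # stands in for the paper's sequence generator
          flat = img.flatten()
          perm = rng.permutation(flat.size)        # P-box: pixel position scrambling
          shuffled = flat[perm]
          keystream = rng.integers(0, 256, flat.size, dtype=np.uint8)
          cipher = shuffled ^ keystream            # diffusion by XOR with the keystream
          return cipher.reshape(img.shape), perm, keystream

      def decrypt(cipher, perm, keystream):
          shuffled = cipher.flatten() ^ keystream
          flat = np.empty_like(shuffled)
          flat[perm] = shuffled                    # invert the P-box
          return flat.reshape(cipher.shape)

      img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
      c, perm, ks = encrypt(img)
      assert np.array_equal(decrypt(c, perm, ks), img)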

  17. Bayesian Analysis of Structural Equation Models with Nonlinear Covariates and Latent Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2006-01-01

    In this article, we formulate a nonlinear structural equation model (SEM) that can accommodate covariates in the measurement equation and nonlinear terms of covariates and exogenous latent variables in the structural equation. The covariates can come from continuous or discrete distributions. A Bayesian approach is developed to analyze the…

  18. Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir

    2010-01-01

    A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…

  19. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.
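
    A minimal example of the kind of stochastic hybrid system described above is a single continuous variable relaxing toward a discrete state that switches as a two-state Markov process. The sketch below simulates such a piecewise-deterministic process; the rates and relaxation dynamics are illustrative assumptions, not a model taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Piecewise-deterministic toy model: dx/dt = s - x between jumps, while the
    # discrete state s in {0, 1} switches as a continuous-time Markov chain
    # with rates k01 (0 -> 1) and k10 (1 -> 0).
    k01, k10 = 2.0, 1.0
    T = 10.0
    t, x, s = 0.0, 0.0, 0
    trajectory = [(t, x, s)]

    while t < T:
        rate = k01 if s == 0 else k10
        tau = rng.exponential(1.0 / rate)        # waiting time to the next jump
        dt = min(tau, T - t)
        x = s + (x - s) * np.exp(-dt)            # exact solution of the linear ODE
        t += dt
        if dt == tau:                            # a jump occurred in the discrete variable
            s = 1 - s
        trajectory.append((t, x, s))

    print(len(trajectory) - 1, "segments, final x =", round(trajectory[-1][1], 3))
    ```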

  20. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    NASA Astrophysics Data System (ADS)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms that are integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms possess some noteworthy shortcomings, such as the trapping of solutions at local extrema, a limited number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computational tool for optimal design of MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of MRBs are considered as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
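
    The sketch below shows one generic way of handling mixed discrete-continuous design variables in differential evolution: discrete components are rounded back to their grid after mutation and crossover. The objective function is a simple quadratic stand-in for the finite element model, and the variable names, bounds and control parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):
        """Stand-in for the FEA braking-torque model (illustrative only)."""
        return np.sum((x - np.array([2.0, 7.0, 0.3])) ** 2)

    # Decision vector: [coil turns (integer), wire gauge index (discrete), gap (continuous)].
    lo = np.array([1.0, 0.0, 0.1])
    hi = np.array([10.0, 15.0, 1.0])
    discrete = np.array([True, True, False])

    def repair(x):
        x = np.clip(x, lo, hi)
        x[discrete] = np.round(x[discrete])      # snap discrete variables to the grid
        return x

    NP, F, CR, GEN = 20, 0.7, 0.9, 200
    pop = np.array([repair(lo + rng.random(3) * (hi - lo)) for _ in range(NP)])
    fit = np.array([objective(p) for p in pop])

    for _ in range(GEN):
        for i in range(NP):
            a, b, c = pop[rng.choice(NP, 3, replace=False)]
            mutant = repair(a + F * (b - c))                 # DE/rand/1 mutation
            trial = np.where(rng.random(3) < CR, mutant, pop[i])
            trial = repair(trial)
            f = objective(trial)
            if f < fit[i]:                                   # greedy selection
                pop[i], fit[i] = trial, f

    print("best design:", pop[np.argmin(fit)], "objective:", fit.min())
    ```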

  1. Norms for 10,491 Spanish words for five discrete emotions: Happiness, disgust, anger, fear, and sadness.

    PubMed

    Stadthagen-González, Hans; Ferré, Pilar; Pérez-Sánchez, Miguel A; Imbault, Constance; Hinojosa, José Antonio

    2017-09-18

    The discrete emotion theory proposes that affective experiences can be reduced to a limited set of universal "basic" emotions, most commonly identified as happiness, sadness, anger, fear, and disgust. Here we present norms for 10,491 Spanish words for those five discrete emotions, collected from a total of 2,010 native speakers, making this the largest set of norms for discrete emotions in any language to date. When used in conjunction with the norms from Hinojosa, Martínez-García et al. (Behavior Research Methods, 48, 272-284, 2016) and Ferré, Guasch, Martínez-García, Fraga, & Hinojosa (Behavior Research Methods, 49, 1082-1094, 2017), researchers now have access to ratings of discrete emotions for 13,633 Spanish words. Our norms show a high degree of inter-rater reliability and correlate highly with those from Ferré et al. (2017). Our exploration of the relationship between the five discrete emotions and relevant lexical and emotional variables confirmed findings of previous studies conducted with smaller datasets. The availability of such a large set of norms will greatly facilitate the study of emotion, language and related fields. The norms are available as supplementary materials to this article.

  2. A BASIC Program for Use in Teaching Population Dynamics.

    ERIC Educational Resources Information Center

    Kidd, N. A. C.

    1984-01-01

    Describes an interactive simulation model which can be used to demonstrate population growth with discrete or overlapping populations and the effects of random, constant, or density-dependent mortality. The program listing (for Commodore PET 4032 microcomputer) is included. (Author/DH)

  3. A Discrete Fracture Network Model with Stress-Driven Nucleation and Growth

    NASA Astrophysics Data System (ADS)

    Lavoine, E.; Darcel, C.; Munier, R.; Davy, P.

    2017-12-01

    The realism of Discrete Fracture Network (DFN) models, beyond their bulk statistical properties, relies on the spatial organization of fractures, which purely stochastic DFN models do not reproduce. The realism can be improved by injecting prior information into the DFN from a better knowledge of the geological fracturing processes. We first develop a model using simple kinematic rules for mimicking the growth of fractures from nucleation to arrest, in order to evaluate the consequences of the DFN structure on network connectivity and flow properties. The model generates fracture networks with power-law scaling distributions and a percentage of T-intersections that are consistent with field observations. Nevertheless, a larger complexity relying on the spatial variability of natural fracture positions cannot be explained by the random nucleation process. We propose to introduce stress-driven nucleation into the timewise process of this kinematic model to study the correlations between nucleation, growth and existing fracture patterns. The method uses the stress field generated by existing fractures and the remote stress as an input for a Monte-Carlo sampling of nuclei centers at each time step. Networks so generated are found to have correlations over a large range of scales, with a correlation dimension that varies with time and with the function that relates the nucleation probability to stress. A sensitivity analysis of the input parameters has been performed in 3D to quantify the influence of fracture and remote stress field orientations.
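
    A hedged sketch of the stress-driven nucleation step described above: nucleation sites are drawn by Monte-Carlo sampling with a probability that increases with a local stress field. The synthetic stress field and the power-law weighting below are assumptions for illustration only, not the model's actual stress computation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Monte-Carlo choice of nucleation sites on a grid, with the nucleation
    # probability an increasing function of the local stress.  The stress field
    # here is synthetic; in the model it would come from existing fractures
    # plus the remote stress.
    nx = ny = 100
    x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
    stress = 1.0 + 0.5 * np.sin(4 * np.pi * x) * np.cos(2 * np.pi * y)

    alpha = 3.0                               # sensitivity of nucleation to stress (assumed)
    weights = stress.ravel() ** alpha
    prob = weights / weights.sum()

    n_nuclei = 50                             # nuclei drawn at this time step
    cells = rng.choice(prob.size, size=n_nuclei, replace=False, p=prob)
    nuclei = np.column_stack(np.unravel_index(cells, (ny, nx)))
    print(nuclei[:5])                         # (row, col) indices of the first few nuclei
    ```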

  4. Reduction of display artifacts by random sampling

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.

    1983-01-01

    The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.

  5. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    NASA Astrophysics Data System (ADS)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is to correctly estimate the parameters of those methods based on time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples with Gaussian Mixture Models and Hidden Markov Models (HMM) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.
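
    To illustrate the idea of biasing the E-step with uncertain prior knowledge on latent variables, the sketch below runs EM on a two-component Gaussian mixture whose responsibilities are weighted by soft, possibly noisy prior labels. This is a minimal stand-in for the weighted-distribution formulation, not the paper's HMM-based degradation model; the data and prior weights are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two-regime 1-D data; 'prior' carries uncertain soft knowledge about which
    # latent state produced each sample and multiplies the E-step responsibilities.
    x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)])
    prior = np.vstack([np.where(np.arange(300) < 150, 0.8, 0.2),
                       np.where(np.arange(300) < 150, 0.2, 0.8)]).T   # shape (300, 2)

    mu, sig, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
    for _ in range(100):
        # E-step: likelihood x mixing weight x uncertain prior on the latent state.
        lik = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        resp = lik * pi * prior
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the mixture parameters.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / nk.sum()

    print("means:", mu, "std devs:", sig, "weights:", pi)
    ```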

  6. Classes and continua of hippocampal CA1 inhibitory neurons revealed by single-cell transcriptomics.

    PubMed

    Harris, Kenneth D; Hochgerner, Hannah; Skene, Nathan G; Magno, Lorenza; Katona, Linda; Bengtsson Gonzales, Carolina; Somogyi, Peter; Kessaris, Nicoletta; Linnarsson, Sten; Hjerling-Leffler, Jens

    2018-06-18

    Understanding any brain circuit will require a categorization of its constituent neurons. In hippocampal area CA1, at least 23 classes of GABAergic neuron have been proposed to date. However, this list may be incomplete; additionally, it is unclear whether discrete classes are sufficient to describe the diversity of cortical inhibitory neurons or whether continuous modes of variability are also required. We studied the transcriptomes of 3,663 CA1 inhibitory cells, revealing 10 major GABAergic groups that divided into 49 fine-scale clusters. All previously described and several novel cell classes were identified, with three previously described classes unexpectedly found to be identical. A division into discrete classes, however, was not sufficient to describe the diversity of these cells, as continuous variation also occurred between and within classes. Latent factor analysis revealed that a single continuous variable could predict the expression levels of several genes, which correlated similarly with it across multiple cell types. Analysis of the genes correlating with this variable suggested it reflects a range from metabolically highly active faster-spiking cells that proximally target pyramidal cells to slower-spiking cells targeting distal dendrites or interneurons. These results elucidate the complexity of inhibitory neurons in one of the simplest cortical structures and show that characterizing these cells requires continuous modes of variation as well as discrete cell classes.

  7. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
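
    For two binary random variables the maximal coupling mentioned above has a simple closed form: the largest achievable probability that they take the same value is the sum of the minima of their point masses. A small sketch, with made-up marginals:

    ```python
    def max_coupling_agreement(p, q):
        """Maximal probability that two {0,1}-valued random variables with
        marginals p = P(A=1) and q = P(B=1) can be made to coincide."""
        return min(p, q) + min(1 - p, 1 - q)

    # Example: two dichotomizations of the same content recorded in two contexts.
    print(max_coupling_agreement(0.7, 0.4))   # min(0.7, 0.4) + min(0.3, 0.6) = 0.7
    ```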

  8. An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points. Successful implementation of these algorithms was achieved on various test problems.

  9. Violation of Bell's Inequality Using Continuous Variable Measurements

    NASA Astrophysics Data System (ADS)

    Thearle, Oliver; Janousek, Jiri; Armstrong, Seiji; Hosseini, Sara; Schünemann Mraz, Melanie; Assad, Syed; Symul, Thomas; James, Matthew R.; Huntington, Elanor; Ralph, Timothy C.; Lam, Ping Koy

    2018-01-01

    A Bell inequality is a fundamental test to rule out local hidden variable model descriptions of correlations between two physically separated systems. There have been a number of experiments in which a Bell inequality has been violated using discrete-variable systems. We demonstrate a violation of Bell's inequality using continuous variable quadrature measurements. By creating a four-mode entangled state with homodyne detection, we recorded a clear violation with a Bell value of B = 2.31 ± 0.02. This opens new possibilities for using continuous variable states for device independent quantum protocols.

  10. Maximally random discrete-spin systems with symmetric and asymmetric interactions and maximally degenerate ordering

    NASA Astrophysics Data System (ADS)

    Atalay, Bora; Berker, A. Nihat

    2018-05-01

    Discrete-spin systems with maximally random nearest-neighbor interactions that can be symmetric or asymmetric, ferromagnetic or antiferromagnetic, including off-diagonal disorder, are studied for the number of states q = 3, 4 in d dimensions. We use renormalization-group theory that is exact for hierarchical lattices and approximate (Migdal-Kadanoff) for hypercubic lattices. For all d > 1 and all noninfinite temperatures, the system eventually renormalizes to a random single state, thus signaling q × q degenerate ordering. Note that this is the maximally degenerate ordering. For high-temperature initial conditions, the system crosses over to this highly degenerate ordering only after spending many renormalization-group iterations near the disordered (infinite-temperature) fixed point. Thus, a temperature range of short-range disorder in the presence of long-range order is identified, as previously seen in underfrustrated Ising spin-glass systems. The entropy is calculated for all temperatures, behaves similarly for ferromagnetic and antiferromagnetic interactions, and shows a derivative maximum at the short-range disordering temperature. In sharp contrast with the infinitesimally higher dimension 1 + ε, the system is, as expected, disordered at all temperatures for d = 1.

  11. Derivation and computation of discrete-delay and continuous-delay SDEs in mathematical biology.

    PubMed

    Allen, Edward J

    2014-06-01

    Stochastic versions of several discrete-delay and continuous-delay differential equations, useful in mathematical biology, are derived from basic principles carefully taking into account the demographic, environmental, or physiological randomness in the dynamic processes. In particular, stochastic delay differential equation (SDDE) models are derived and studied for Nicholson's blowflies equation, Hutchinson's equation, an SIS epidemic model with delay, bacteria/phage dynamics, and glucose/insulin levels. Computational methods for approximating the SDDE models are described. Comparisons between computational solutions of the SDDEs and independently formulated Monte Carlo calculations support the accuracy of the derivations and of the computational methods.
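
    As a rough illustration of how such SDDE models can be integrated numerically, the sketch below applies an Euler-Maruyama step to a stochastic Hutchinson (delayed logistic) equation with multiplicative environmental noise; the parameter values and noise form are assumptions, not those derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Euler-Maruyama for dN = r N(t) (1 - N(t - tau)/K) dt + sigma N(t) dW,
    # a stochastic Hutchinson equation with a constant pre-history before t = 0.
    r, K, tau, sigma = 1.0, 10.0, 1.0, 0.1
    dt, T = 0.01, 20.0
    steps = int(T / dt)
    lag = int(tau / dt)

    N = np.empty(steps + 1)
    N[0] = 2.0
    history = lambda k: N[0] if k < 0 else N[k]   # delayed value, constant before t = 0

    for k in range(steps):
        delayed = history(k - lag)
        drift = r * N[k] * (1.0 - delayed / K)
        N[k + 1] = N[k] + drift * dt + sigma * N[k] * np.sqrt(dt) * rng.standard_normal()

    print("final population:", N[-1])
    ```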

  12. Power-law Exponent in Multiplicative Langevin Equation with Temporally Correlated Noise

    NASA Astrophysics Data System (ADS)

    Morita, Satoru

    2018-05-01

    Power-law distributions are ubiquitous in nature. Random multiplicative processes are a basic model for the generation of power-law distributions. For discrete-time systems, the power-law exponent is known to decrease as the autocorrelation time of the multiplier increases. However, for continuous-time systems, it is not yet clear how the temporal correlation affects the power-law behavior. Herein, we analytically investigated a multiplicative Langevin equation with colored noise. We show that the power-law exponent depends on the details of the multiplicative noise, in contrast to the case of discrete-time systems.

  13. A stochastic event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with a constant time step, such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions; secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data from four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
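
    The mass-conserving (microcanonical) branching at the heart of the cascade model can be sketched as follows: each box's rainfall volume is split into two sub-boxes with weights (w, 1 - w), so the event total is preserved exactly at every level. The weight model below (a chance of an all-or-nothing split, otherwise a Beta-distributed split) is a placeholder, not the sigmoid parameterization developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def disaggregate(volume, levels, p01=0.3):
        """Microcanonical random cascade: split a rainfall volume into 2**levels
        boxes, conserving the event total exactly at every branching."""
        boxes = np.array([volume])
        for _ in range(levels):
            w = rng.beta(2.0, 2.0, size=boxes.size)          # interior splits
            onesided = rng.random(boxes.size) < p01
            w[onesided] = rng.integers(0, 2, onesided.sum()) # all-or-nothing splits
            boxes = np.column_stack([boxes * w, boxes * (1 - w)]).ravel()
        return boxes

    event = disaggregate(volume=12.0, levels=4)    # e.g. a 160-min event in 10-min boxes
    print(event.round(2), "sum =", event.sum())    # the sum equals the original 12.0
    ```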

  14. Heralded processes on continuous-variable spaces as quantum maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi

    2014-12-04

    Heralding processes, which only work when a measurement on a part of the system gives the desired result, are particularly interesting for continuous variables. They permit non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Also, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.

  15. A plane wave model for direct simulation of reflection and transmission by discretely inhomogeneous plane parallel media

    NASA Astrophysics Data System (ADS)

    Mackowski, Daniel; Ramezanpour, Bahareh

    2018-07-01

    A formulation is developed for numerically solving the frequency domain Maxwell's equations in plane parallel layers of inhomogeneous media. As was done in a recent work [1], the plane parallel layer is modeled as an infinite square lattice of W × W × H unit cells, with W being a sample width of the layer and H the layer thickness. As opposed to the 3D volume integral/discrete dipole formulation, the derivation begins with a Fourier expansion of the electric field amplitude in the lateral plane, and leads to a coupled system of 1D ordinary differential equations in the depth direction of the layer. A 1D dyadic Green's function is derived for this system and used to construct a set of coupled 1D integral equations for the field expansion coefficients. The resulting mathematical formulation is considerably simpler and more compact than that derived, for the same system, using the discrete dipole approximation applied to the periodic plane lattice. Furthermore, the fundamental property variable appearing in the formulation is the Fourier transformed complex permittivity distribution in the unit cell, and the method obviates any need to define or calculate a dipole polarizability. Although designed primarily for random media calculations, the method is also capable of predicting the single scattering properties of individual particles; comparisons are presented to demonstrate that the method can accurately reproduce, at scattering angles not too close to 90°, the polarimetric scattering properties of single and multiple spheres. The derivation of the dyadic Green's function allows for an analytical preconditioning of the equations, and it is shown that this can result in significantly accelerated solution times when applied to densely-packed systems of particles. Calculation results demonstrate that the method, when applied to inhomogeneous media, can predict coherent backscattering and polarization opposition effects.

  16. Hybrid ICA-Bayesian network approach reveals distinct effective connectivity differences in schizophrenia.

    PubMed

    Kim, D; Burge, J; Lane, T; Pearlson, G D; Kiehl, K A; Calhoun, V D

    2008-10-01

    We utilized a discrete dynamic Bayesian network (dDBN) approach (Burge, J., Lane, T., Link, H., Qiu, S., Clark, V.P., 2007. Discrete dynamic Bayesian network analysis of fMRI data. Hum Brain Mapp.) to determine differences in brain regions between patients with schizophrenia and healthy controls on a measure of effective connectivity, termed the approximate conditional likelihood score (ACL) (Burge, J., Lane, T., 2005. Learning Class-Discriminative Dynamic Bayesian Networks. Proceedings of the International Conference on Machine Learning, Bonn, Germany, pp. 97-104.). The ACL score represents a class-discriminative measure of effective connectivity by measuring the relative likelihood of the correlation between brain regions in one group versus another. The algorithm is capable of finding non-linear relationships between brain regions because it uses discrete rather than continuous values and attempts to model temporal relationships with a first-order Markov and stationary assumption constraint (Papoulis, A., 1991. Probability, random variables, and stochastic processes. McGraw-Hill, New York.). Since Bayesian networks are overly sensitive to noisy data, we introduced an independent component analysis (ICA) filtering approach that attempted to reduce the noise found in fMRI data by unmixing the raw datasets into a set of independent spatial component maps. Components that represented noise were removed and the remaining components reconstructed into the dimensions of the original fMRI datasets. We applied the dDBN algorithm to a group of 35 patients with schizophrenia and 35 matched healthy controls using an ICA filtered and unfiltered approach. We determined that filtering the data significantly improved the magnitude of the ACL score. Patients showed the greatest ACL scores in several regions, most markedly the cerebellar vermis and hemispheres. Our findings suggest that schizophrenia patients exhibit weaker connectivity than healthy controls in multiple regions, including bilateral temporal, frontal, and cerebellar regions during an auditory paradigm.

  17. Discrete emotions predict changes in cognition, judgment, experience, behavior, and physiology: a meta-analysis of experimental emotion elicitations.

    PubMed

    Lench, Heather C; Flores, Sarah A; Bench, Shane W

    2011-09-01

    Our purpose in the present meta-analysis was to examine the extent to which discrete emotions elicit changes in cognition, judgment, experience, behavior, and physiology; whether these changes are correlated as would be expected if emotions organize responses across these systems; and which factors moderate the magnitude of these effects. Studies (687; 4,946 effects, 49,473 participants) were included that elicited the discrete emotions of happiness, sadness, anger, and anxiety as independent variables with adults. Consistent with discrete emotion theory, there were (a) moderate differences among discrete emotions; (b) differences among discrete negative emotions; and (c) correlated changes in behavior, experience, and physiology (cognition and judgment were mostly not correlated with other changes). Valence, valence-arousal, and approach-avoidance models of emotion were not as clearly supported. There was evidence that these factors are likely important components of emotion but that they could not fully account for the pattern of results. Most emotion elicitations were effective, although the efficacy varied with the emotions being compared. Picture presentations were overall the most effective elicitor of discrete emotions. Stronger effects of emotion elicitations were associated with happiness versus negative emotions, self-reported experience, a greater proportion of women (for elicitations of happiness and sadness), omission of a cover story, and participants alone versus in groups. Conclusions are limited by the inclusion of only some discrete emotions, exclusion of studies that did not elicit discrete emotions, few available effect sizes for some contrasts and moderators, and the methodological rigor of included studies. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  18. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
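
    The paper treats unifilar hidden semi-Markov processes; as a much simpler discrete-time analogue, the entropy rate of an ordinary Markov chain can be computed directly from its transition matrix and stationary distribution, as sketched below (the chain itself is an arbitrary example).

    ```python
    import numpy as np

    # Entropy rate of a discrete-time Markov chain:
    # h = -sum_i pi_i sum_j P_ij log2 P_ij, with pi the stationary distribution.
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])

    # Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()

    h = -np.sum(pi[:, None] * P * np.log2(P, where=P > 0, out=np.zeros_like(P)))
    print("entropy rate:", h, "bits per step")
    ```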

  19. Numerically Exact Computer Simulations of Light Scattering by Densely Packed, Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2011-01-01

    Direct computer simulations of electromagnetic scattering by discrete random media have become an active area of research. In this progress review, we summarize and analyze our main results obtained by means of numerically exact computer solutions of the macroscopic Maxwell equations. We consider finite scattering volumes with size parameters in the range, composed of varying numbers of randomly distributed particles with different refractive indices. The main objective of our analysis is to examine whether all backscattering effects predicted by the low-density theory of coherent backscattering (CB) also take place in the case of densely packed media. Based on our extensive numerical data we arrive at the following conclusions: (i) all backscattering effects predicted by the asymptotic theory of CB can also take place in the case of densely packed media; (ii) in the case of very large particle packing density, scattering characteristics of discrete random media can exhibit behavior not predicted by the low-density theories of CB and radiative transfer; (iii) increasing the absorptivity of the constituent particles can either enhance or suppress typical manifestations of CB depending on the particle packing density and the real part of the refractive index. Our numerical data strongly suggest that spectacular backscattering effects identified in laboratory experiments and observed for a class of high-albedo Solar System objects are caused by CB.

  20. Analysis of a mesoscale infiltration and water seepage test in unsaturated fractured rock: Spatial variabilities and discrete fracture patterns

    USGS Publications Warehouse

    Zhou, Q.; Salve, R.; Liu, H.-H.; Wang, J.S.Y.; Hudson, D.

    2006-01-01

    A mesoscale (21 m in flow distance) infiltration and seepage test was recently conducted in a deep, unsaturated fractured rock system at the crossover point of two underground tunnels. Water was released from a 3 m × 4 m infiltration plot on the floor of an alcove in the upper tunnel, and seepage was collected from the ceiling of a niche in the lower tunnel. Significant temporal and (particularly) spatial variabilities were observed in both measured infiltration and seepage rates. To analyze the test results, a three-dimensional unsaturated flow model was used. A column-based scheme was developed to capture the heterogeneous hydraulic properties reflected by these observed spatial variabilities. Fracture permeability and the van Genuchten α parameter [van Genuchten, M.T., 1980. A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J. 44, 892-898] were calibrated for each rock column in the upper and lower hydrogeologic units in the test bed. The calibrated fracture properties for the infiltration and seepage zone enabled a good match between simulated and measured (spatially varying) seepage rates. The numerical model was also able to capture the general trend of the highly transient seepage processes through a discrete fracture network. The calibrated properties and measured infiltration/seepage rates were further compared with mapped discrete fracture patterns at the top and bottom boundaries. The measured infiltration rates and calibrated fracture permeability of the upper unit were found to be partially controlled by the fracture patterns on the infiltration plot (as indicated by their positive correlations with fracture density). However, no correlation could be established between measured seepage rates and the density of fractures mapped on the niche ceiling. This lack of correlation indicates the complexity of (preferential) unsaturated flow within the discrete fracture network. This also indicates that continuum-based modeling of unsaturated flow in fractured rock at the mesoscale or a larger scale is not necessarily conditional explicitly on discrete fracture patterns. © 2006 Elsevier B.V. All rights reserved.

  1. Dual methods and approximation concepts in structural synthesis

    NASA Technical Reports Server (NTRS)

    Fleury, C.; Schmit, L. A., Jr.

    1980-01-01

    Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.

  2. General properties of solutions to inhomogeneous Black-Scholes equations with discontinuous maturity payoffs

    NASA Astrophysics Data System (ADS)

    O, Hyong-Chol; Jo, Jong-Jun; Kim, Ji-Sok

    2016-02-01

    We provide representations of solutions to terminal value problems of inhomogeneous Black-Scholes equations and study such general properties as min-max estimates, gradient estimates, monotonicity and convexity of the solutions with respect to the stock price variable, which are important for financial security pricing. In particular, we focus on finding representation of the gradient (with respect to the stock price variable) of solutions to the terminal value problems with discontinuous terminal payoffs or inhomogeneous terms. Such terminal value problems are often encountered in pricing problems of compound-like options such as Bermudan options or defaultable bonds with discrete default barrier, default intensity and endogenous default recovery. Our results can be used in pricing real defaultable bonds under consideration of existence of discrete coupons or taxes on coupons.

  3. Continuous-time discrete-space models for animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.

    2015-01-01

    The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.

  4. A discrete decentralized variable structure robotic controller

    NASA Technical Reports Server (NTRS)

    Tumeh, Zuheir S.

    1989-01-01

    A decentralized trajectory controller for robotic manipulators is designed and tested using a multiprocessor architecture and a PUMA 560 robot arm. The controller is made up of a nominal model-based component and a correction component based on a variable structure suction control approach. The second control component is designed using bounds on the difference between the used and actual values of the model parameters. Since the continuous manipulator system is digitally controlled along a trajectory, a discretized equivalent model of the manipulator is used to derive the controller. The motivation for decentralized control is that the derived algorithms can be executed in parallel using a distributed, relatively inexpensive, architecture where each joint is assigned a microprocessor. Nonlinear interaction and coupling between joints is treated as a disturbance torque that is estimated and compensated for.

  5. Further Results on Sufficient LMI Conditions for H∞ Static Output Feedback Control of Discrete-Time Systems

    NASA Astrophysics Data System (ADS)

    Feng, Zhi-Yong; Xu, Li; Matsushita, Shin-Ya; Wu, Min

    Further results on sufficient LMI conditions for H∞ static output feedback (SOF) control of discrete-time systems are presented in this paper, which provide some new insights into this issue. First, by introducing a slack variable with block-triangular structure and choosing the coordinate transformation matrix properly, the conservativeness of one kind of existing sufficient LMI condition is further reduced. Then, by introducing a slack variable with linear matrix equality constraint, another kind of sufficient LMI condition is proposed. Furthermore, the relation of these two kinds of LMI conditions are revealed for the first time through analyzing the effect of different choices of coordinate transformation matrices. Finally, a numerical example is provided to demonstrate the effectiveness and merits of the proposed methods.

  6. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristics of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, so that the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions not only have less conservatism but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  7. Extended Plefka expansion for stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Bravi, B.; Sollich, P.; Opper, M.

    2016-05-01

    We propose an extension of the Plefka expansion, which is well known for the dynamics of discrete spins, to stochastic differential equations with continuous degrees of freedom and exhibiting generic nonlinearities. The scenario is sufficiently general to allow application to e.g. biochemical networks involved in metabolism and regulation. The main feature of our approach is to constrain in the Plefka expansion not just first moments akin to magnetizations, but also second moments, specifically two-time correlations and responses for each degree of freedom. The end result is an effective equation of motion for each single degree of freedom, where couplings to other variables appear as a self-coupling to the past (i.e. memory term) and a coloured noise. This constitutes a new mean field approximation that should become exact in the thermodynamic limit of a large network, for suitably long-ranged couplings. For the analytically tractable case of linear dynamics we establish this exactness explicitly by appeal to spectral methods of random matrix theory, for Gaussian couplings with arbitrary degree of symmetry.

  8. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
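
    The sampling procedure underlying these coalescents can be sketched very simply: draw N i.i.d. Pareto(α) variables and normalize by their sum to obtain offspring frequencies (the size-biasing step is omitted here, and all numbers are illustrative).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def pareto_frequencies(N, alpha):
        """N i.i.d. Pareto(alpha) random variables normalized by their sum."""
        x = 1.0 + rng.pareto(alpha, N)          # classical Pareto with support [1, inf)
        return x / x.sum()

    freq = pareto_frequencies(N=1000, alpha=0.8)   # alpha < 1: a few families dominate
    print("largest share:", freq.max(), "  sum:", freq.sum())
    ```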

  9. Base stock system for patient vs impatient customers with varying demand distribution

    NASA Astrophysics Data System (ADS)

    Fathima, Dowlath; Uduman, P. Sheik

    2013-09-01

    An optimal base-stock inventory policy for patient and impatient customers using finite-horizon models is examined. The base-stock system for patient and impatient customers is a distinct type of inventory policy. In Model I, the base stock for the patient-customer case is evaluated using the truncated exponential distribution. Model II involves the study of base-stock inventory policies for impatient customers. A study of these systems reveals that customers either wait until the arrival of the next order or leave the system, which leads to lost sales. In both models, demand during the period [0, t] is taken to be a random variable. In this paper, the truncated exponential distribution satisfies the base-stock policy for patient customers as a continuous model. Previously, the base stock for impatient customers led to a discrete case, but here we have modeled this condition as a continuous case. We justify this approach mathematically and also numerically.

  10. Stochastic optimization model for order acceptance with multiple demand classes and uncertain demand/supply

    NASA Astrophysics Data System (ADS)

    Yang, Wen; Fung, Richard Y. K.

    2014-06-01

    This article considers an order acceptance problem in a make-to-stock manufacturing system with multiple demand classes in a finite time horizon. Demands in different periods are random variables and are independent of one another, and replenishments of inventory deviate from the scheduled quantities. The objective of this work is to maximize the expected net profit over the planning horizon by deciding the fraction of the demand that is going to be fulfilled. This article presents a stochastic order acceptance optimization model and analyses the existence of the optimal promising policies. An example of a discrete problem is used to illustrate the policies by applying the dynamic programming method. In order to solve the continuous problems, a heuristic algorithm based on stochastic approximation (HASA) is developed. Finally, the computational results of a case example illustrate the effectiveness and efficiency of the HASA approach, and make the application of the proposed model readily acceptable.

  11. Staging workers' use of hearing protection devices: application of the transtheoretical model.

    PubMed

    Raymond, Delbert M; Lusk, Sally L

    2006-04-01

    The threat of noise-induced hearing loss is a serious concern for many workers. This study explores use of the transtheoretical model as a framework for defining stages of workers' acceptance of hearing protection devices. A secondary analysis was performed using a cross-section of data from a randomized, controlled clinical trial of an intervention to increase use of hearing protection. Use of hearing protection devices was well distributed across the theorized stages of change. Chi-square analysis and analysis of variance revealed significant differences between stages for the variables studied. Discrete stages of hearing protection device use can be identified, laying the foundation for further work investigating use of the transtheoretical model for promoting hearing protection device use. The model can provide a framework for tailoring interventions and evaluating their effects. With further development of the transtheoretical model, nurses may be able to easily identify workers' readiness to use hearing protection devices and tailor training toward that goal.

  12. Synchronous parallel system for emulation and discrete event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    1992-01-01

    A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to state variables of the simulation object attributable to the event object, and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring the events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
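
    The event-horizon bookkeeping described above can be sketched as follows: each node reports the earliest time stamp among its newly generated event objects (its local event horizon), and the global event horizon is the minimum of these across all nodes. The data structures and names below are illustrative, not the patented implementation.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Event:
        timestamp: float
        payload: str = ""

    @dataclass
    class Node:
        new_events: List[Event] = field(default_factory=list)

        def local_event_horizon(self) -> float:
            """Earliest time stamp among this node's newly generated event objects."""
            return min((e.timestamp for e in self.new_events), default=float("inf"))

    def global_event_horizon(nodes: List[Node]) -> float:
        """Earliest local event horizon across all nodes."""
        return min(n.local_event_horizon() for n in nodes)

    nodes = [Node([Event(3.0), Event(7.5)]), Node([Event(5.0)]), Node([Event(4.2)])]
    print("global event horizon:", global_event_horizon(nodes))   # 3.0 for this example
    ```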

  13. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the same cost as solving the corresponding analysis problems just a few times.

  14. Synchronous Parallel System for Emulation and Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    2001-01-01

    A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to the state variables of the simulation object attributable to the event object and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.

  15. Discrete and continuous variables for measurement-device-independent quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Feihu; Curty, Marcos; Qi, Bing

    In a recent Article in Nature Photonics, Pirandola et al. [1] claim that the achievable secret key rates of discrete-variable (DV) measurement-device-independent (MDI) quantum key distribution (QKD) (refs 2,3) are “typically very low, unsuitable for the demands of a metropolitan network” and introduce a continuous-variable (CV) MDI QKD protocol capable of providing key rates which, they claim, are “three orders of magnitude higher” than those of DV MDI QKD. We believe, however, that the claims regarding low key rates of DV MDI QKD made by Pirandola et al. [1] are too pessimistic. Here in this paper, we show that the secret key rate of DV MDI QKD with commercially available high-efficiency single-photon detectors (SPDs) (for example, see http://www.photonspot.com/detectors and http://www.singlequantum.com) and good system alignment is typically rather high and thus highly suitable for not only long-distance communication but also metropolitan networks.

  16. Discrete and continuous variables for measurement-device-independent quantum cryptography

    DOE PAGES

    Xu, Feihu; Curty, Marcos; Qi, Bing; ...

    2015-11-16

    In a recent Article in Nature Photonics, Pirandola et al. [1] claim that the achievable secret key rates of discrete-variable (DV) measurement-device-independent (MDI) quantum key distribution (QKD) (refs 2,3) are “typically very low, unsuitable for the demands of a metropolitan network” and introduce a continuous-variable (CV) MDI QKD protocol capable of providing key rates which, they claim, are “three orders of magnitude higher” than those of DV MDI QKD. We believe, however, that the claims regarding low key rates of DV MDI QKD made by Pirandola et al. [1] are too pessimistic. Here in this paper, we show that the secret key rate of DV MDI QKD with commercially available high-efficiency single-photon detectors (SPDs) (for example, see http://www.photonspot.com/detectors and http://www.singlequantum.com) and good system alignment is typically rather high and thus highly suitable for not only long-distance communication but also metropolitan networks.

  17. A mass-conserving mixed Fourier-Galerkin B-Spline-collocation method for Direct Numerical Simulation of the variable-density Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Reuter, Bryan; Oliver, Todd; Lee, M. K.; Moser, Robert

    2017-11-01

    We present an algorithm for a Direct Numerical Simulation of the variable-density Navier-Stokes equations based on the velocity-vorticity approach introduced by Kim, Moin, and Moser (1987). In the current work, a Helmholtz decomposition of the momentum is performed. Evolution equations for the curl and the Laplacian of the divergence-free portion are formulated by manipulation of the momentum equations and the curl-free portion is reconstructed by enforcing continuity. The solution is expanded in Fourier bases in the homogeneous directions and B-Spline bases in the inhomogeneous directions. Discrete equations are obtained through a mixed Fourier-Galerkin and collocation weighted residual method. The scheme is designed such that the numerical solution conserves mass locally and globally by ensuring the discrete divergence projection is exact through the use of higher order splines in the inhomogeneous directions. The formulation is tested on multiple variable-density flow problems.

  18. Numerical solution of the two-dimensional time-dependent incompressible Euler equations

    NASA Technical Reports Server (NTRS)

    Whitfield, David L.; Taylor, Lafayette K.

    1994-01-01

    A numerical method is presented for solving the artificial compressibility form of the 2D time-dependent incompressible Euler equations. The approach is based on using an approximate Riemann solver for the cell face numerical flux of a finite volume discretization. Characteristic variable boundary conditions are developed and presented for all boundaries and in-flow out-flow situations. The system of algebraic equations is solved using the discretized Newton-relaxation (DNR) implicit method. Numerical results are presented for both steady and unsteady flow.

  19. Iterative spectral methods and spectral solutions to compressible flows

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1982-01-01

    A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one dimensional problems and two dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.

  20. Automatic Methods and Tools for the Verification of Real Time Systems

    DTIC Science & Technology

    1997-07-31

    real-time systems. This was accomplished by extending techniques, based on automata theory and temporal logic, that have been successful for the verification of time-independent reactive systems. As a system specification language for embedded real-time systems, we introduced hybrid automata, which equip traditional discrete automata with real-numbered clock variables and continuous environment variables. As requirements specification languages, we introduced temporal logics with clock variables for expressing timing constraints.

  1. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  2. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.

    2011-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.

  3. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Thomas, James L.; Nishikawa, Hiroaki; Diskin, Boris

    2009-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and highly stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Actual cycle results are verified using quantitative analysis methods in which parts of the cycle are replaced by their idealized counterparts.

  4. A phase screen model for simulating numerically the propagation of a laser beam in rain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukin, I P; Rychkov, D S; Falits, A V

    2009-09-30

    The method based on the generalisation of the phase screen method for a continuous random medium is proposed for simulating numerically the propagation of laser radiation in a turbulent atmosphere with precipitation. In the phase screen model for a discrete component of a heterogeneous 'air-rain droplet' medium, the amplitude screen describing the scattering of an optical field by discrete particles of the medium is replaced by an equivalent phase screen with a spectrum of the correlation function of the effective dielectric constant fluctuations that is similar to the spectrum of a discrete scattering component - water droplets in air. The 'turbulent' phase screen is constructed on the basis of the Kolmogorov model, while the 'rain' screen model utilises the exponential distribution of the number of rain drops with respect to their radii as a function of the rain intensity. The results of the numerical simulation are compared with the known theoretical estimates for a large-scale discrete scattering medium.

  5. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode were performed by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of measurement data taken by a conventional technique is affected by the conditions under which the data are collected.

  6. On Reductions of the Hirota-Miwa Equation

    NASA Astrophysics Data System (ADS)

    Hone, Andrew N. W.; Kouloukas, Theodoros E.; Ward, Chloe

    2017-07-01

    The Hirota-Miwa equation (also known as the discrete KP equation, or the octahedron recurrence) is a bilinear partial difference equation in three independent variables. It is integrable in the sense that it arises as the compatibility condition of a linear system (Lax pair). The Hirota-Miwa equation has infinitely many reductions of plane wave type (including a quadratic exponential gauge transformation), defined by a triple of integers or half-integers, which produce bilinear ordinary difference equations of Somos/Gale-Robinson type. Here it is explained how to obtain Lax pairs and presymplectic structures for these reductions, in order to demonstrate Liouville integrability of some associated maps, certain of which are related to reductions of discrete Toda and discrete KdV equations.

  7. Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network

    NASA Technical Reports Server (NTRS)

    Kuhn, D. Richard; Kacker, Raghu; Lei, Yu

    2010-01-01

    This study compared random and t-way combinatorial inputs of a network simulator, to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or to determine modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
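
    To make the notion of t-way coverage concrete, the following small Python sketch (not the simulator or covering-array tools used in the study; parameter counts and values are assumed for illustration) draws random test inputs until every 2-way combination of parameter values has been covered at least once, which is the goal that combinatorial generation meets by construction with far fewer tests.

      # Hypothetical illustration: how many random tests does it take to cover all
      # pairwise (2-way) combinations of parameter values?
      import itertools
      import random

      random.seed(0)

      n_params = 6   # assumed number of configuration parameters
      n_values = 3   # assumed number of values per parameter
      t = 2          # interaction strength (pairwise coverage)

      # Every t-way combination of (parameter positions, value assignment) to cover.
      required = set()
      for cols in itertools.combinations(range(n_params), t):
          for vals in itertools.product(range(n_values), repeat=t):
              required.add((cols, vals))

      covered = set()
      n_tests = 0
      while covered != required:
          test = tuple(random.randrange(n_values) for _ in range(n_params))
          n_tests += 1
          for cols in itertools.combinations(range(n_params), t):
              covered.add((cols, tuple(test[c] for c in cols)))

      print(f"{n_tests} random tests were needed to cover all "
            f"{len(required)} {t}-way combinations")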

  8. OCT Amplitude and Speckle Statistics of Discrete Random Media.

    PubMed

    Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J

    2017-11-01

    Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and in good agreement.

  9. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X(sub 1)), E(Z(sub 1)), and E(Y(sub 1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimum.
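
    The relationship stated above lends itself to a quick Monte Carlo check. The following Python sketch (the values of n and p are arbitrary choices, not taken from the paper) estimates the expected minimum of n independent geometric and exponential random variables with matching means, their ratio, and the expected number of ties at the minimum of the geometric sample.

      # Monte Carlo sketch: minimum of geometric vs. exponential variables with equal
      # means, and the expected number of ties at the geometric minimum.
      import math
      import random

      random.seed(1)

      n, p, trials = 5, 0.3, 100_000
      lam = p                      # exponential rate giving the same mean, 1/p

      sum_min_geom = sum_min_exp = sum_ties = 0.0
      for _ in range(trials):
          geo = [max(1, math.ceil(math.log1p(-random.random()) / math.log1p(-p)))
                 for _ in range(n)]                      # geometric on {1, 2, ...}
          expo = [random.expovariate(lam) for _ in range(n)]
          m = min(geo)
          sum_min_geom += m
          sum_min_exp += min(expo)
          sum_ties += geo.count(m)

      print("E[min geometric]   ~", sum_min_geom / trials)
      print("E[min exponential] ~", sum_min_exp / trials)
      print("ratio              ~", sum_min_geom / sum_min_exp)
      print("E[ties at minimum] ~", sum_ties / trials)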

  10. Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing

    NASA Technical Reports Server (NTRS)

    Jones, Robert L.; Goode, Plesent W. (Technical Monitor)

    2000-01-01

    The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states for some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and that is capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has an added benefit in modeling fidelity, stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.

  11. Secure Hashing of Dynamic Hand Signatures Using Wavelet-Fourier Compression with BioPhasor Mixing and [InlineEquation not available: see fulltext.] Discretization

    NASA Astrophysics Data System (ADS)

    Wai Kuan, Yip; Teoh, Andrew B. J.; Ngo, David C. L.

    2006-12-01

    We introduce a novel method for secure computation of biometric hash on dynamic hand signatures using BioPhasor mixing and [InlineEquation not available: see fulltext.] discretization. The use of BioPhasor as the mixing process provides a one-way transformation that precludes exact recovery of the biometric vector from compromised hashes and stolen tokens. In addition, our user-specific [InlineEquation not available: see fulltext.] discretization acts both as an error correction step as well as a real-to-binary space converter. We also propose a new method of extracting compressed representation of dynamic hand signatures using the discrete wavelet transform (DWT) and discrete Fourier transform (DFT). Without the conventional use of dynamic time warping, the proposed method avoids storage of the user's hand signature template. This is an important consideration for protecting the privacy of the biometric owner. Our results show that the proposed method could produce stable and distinguishable bit strings with equal error rates (EERs) of [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.] for random and skilled forgeries in the stolen token (worst case) scenario, and [InlineEquation not available: see fulltext.] for both forgeries in the genuine token (optimal) scenario.

  12. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  13. Bell's theorem, the measurement problem, Newton's self-gravitation and its connections to violations of the discrete symmetries C, P, T

    NASA Astrophysics Data System (ADS)

    Hiesmayr, Beatrix C.

    2015-07-01

    About 50 years ago John St. Bell published his famous Bell theorem that initiated a new field in physics. This contribution discusses how discrete symmetries relate to the big open questions of quantum mechanics, in particular: (i) how correlations stronger than those predicted by theories sharing randomness (Bell's theorem) relate to the violation of the CP symmetry and the P symmetry, and how this relates to the security of quantum cryptography; (ii) how the measurement problem (“why do we observe no tables in superposition?”) can be probed in weakly decaying systems; (iii) how strongly and weakly interacting quantum systems are affected by Newton's self-gravitation. These preliminary results show that the meson-antimeson systems and the hyperon-antihyperon systems are a unique laboratory to tackle deep fundamental questions and to contribute to understanding what impact the violation of discrete symmetries has.

  14. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.

  15. Covalent heterogenization of discrete bis(8-quinolinolato)dioxomolybdenum(VI) and dioxotungsten(VI) complexes by a metal-template/metal-exchange method: Cyclooctene epoxidation catalysts with enhanced performances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ying; Chattopadhyay, Soma; Shibata, Tomohiro

    A metal-template/metal-exchange method was used to imprint covalently attached bis(8-quinolinolato)dioxomolybdenum(VI) and dioxotungsten(VI) complexes onto large surface-area, mesoporous SBA-15 silica to obtain discrete MoO2 VIT and WO2 VIT catalysts bearing different metal loadings, respectively. Homogeneous counterparts, MoO2 VIN and WO2 VIN, as well as randomly ligand-grafted heterogeneous analogues, MoO2 VIG and WO2 VIG, were also prepared for comparison. X-ray absorption fine structure (XAFS), pair distribution function (PDF) and UV–vis data demonstrate that MoO2 VIT and WO2 VIT adopt a more solution-like bis(8-quinolinol) coordination environment than MoO2 VIG and WO2 VIG, respectively. Correspondingly, the templated MoVI and WVI catalysts show superior performances to their randomly grafted counterparts and neat analogues in the epoxidation of cyclooctene. It is found that the representative MoO2 VIT-10% catalyst can be recycled up to five times without significant loss of reactivity, and a heterogeneity test confirms the high stability of the MoO2 VIT-10% catalyst against leaching of active species into solution. The homogeneity of the discrete bis(8-quinolinol) metal spheres templated on SBA-15 should be responsible for the superior performances.

  16. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to ||Ax - b|| = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
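
    As a hedged illustration of the randomized building block referred to above, the Python sketch below computes a rank-(k+q) randomized SVD of a matrix A and truncates it to rank k (a TRSVD-style approximation). It is a generic construction, not the authors' MTRSVD algorithm, which additionally incorporates the regularization matrix L and inner LSQR solves.

      # Generic rank-k truncation of a rank-(k+q) randomized SVD.
      import numpy as np

      def truncated_randomized_svd(A, k, q=10, n_power_iter=2, seed=0):
          """Rank-k truncation of a rank-(k+q) randomized SVD of A."""
          rng = np.random.default_rng(seed)
          m, n = A.shape
          Omega = rng.standard_normal((n, k + q))   # random test matrix
          Y = A @ Omega                             # sketch of the range of A
          for _ in range(n_power_iter):             # power iterations sharpen the subspace
              Y = A @ (A.T @ Y)
          Q, _ = np.linalg.qr(Y)
          B = Q.T @ A                               # project A onto the subspace
          Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
          U = Q @ Ub
          return U[:, :k], s[:k], Vt[:k, :]         # truncate to rank k

      # Tiny usage example on a synthetic matrix with rapidly decaying singular values.
      rng = np.random.default_rng(1)
      A = rng.standard_normal((200, 100)) @ np.diag(1.0 / np.arange(1, 101) ** 2) \
          @ rng.standard_normal((100, 100))
      U, s, Vt = truncated_randomized_svd(A, k=5)
      print("relative error of the rank-5 approximation:",
            np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))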

  17. Variable-Domain Displacement Transfer Functions for Converting Surface Strains into Deflections for Structural Deformed Shape Predictions

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran

    2015-01-01

    Variable-Domain Displacement Transfer Functions were formulated for shape predictions of complex wing structures, for which surface strain-sensing stations must be properly distributed to avoid jointed junctures, and must be increased in the high strain gradient region. Each embedded beam (depth-wise cross section of structure along a surface strain-sensing line) was discretized into small variable domains. Thus, the surface strain distribution can be described with a piecewise linear or a piecewise nonlinear function. Through discretization, the embedded beam curvature equation can be piece-wisely integrated to obtain the Variable-Domain Displacement Transfer Functions (for each embedded beam), which are expressed in terms of geometrical parameters of the embedded beam and the surface strains along the strain-sensing line. By inputting the surface strain data into the Displacement Transfer Functions, slopes and deflections along each embedded beam can be calculated for mapping out overall structural deformed shapes. A long tapered cantilever tubular beam was chosen for shape prediction analysis. The input surface strains were analytically generated from finite-element analysis. The shape prediction accuracies of the Variable-Domain Displacement Transfer Functions were then determined in light of the finite-element generated slopes and deflections, and were found to be comparable to the accuracies of the constant-domain Displacement Transfer Functions.

  18. WavePacket: A Matlab package for numerical quantum dynamics. I: Closed quantum systems and discrete variable representations

    NASA Astrophysics Data System (ADS)

    Schmidt, Burkhard; Lorenz, Ulf

    2017-04-01

    WavePacket is an open-source program package for the numerical simulation of quantum-mechanical dynamics. It can be used to solve time-independent or time-dependent linear Schrödinger and Liouville-von Neumann equations in one or more dimensions. Coupled equations can also be treated, which makes it possible to simulate molecular quantum dynamics beyond the Born-Oppenheimer approximation. Optionally accounting for the interaction with external electric fields within the semiclassical dipole approximation, WavePacket can be used to simulate experiments involving tailored light pulses in photo-induced physics or chemistry. The graphical capabilities allow visualization of quantum dynamics 'on the fly', including Wigner phase space representations. Being easy to use and highly versatile, WavePacket is well suited for the teaching of quantum mechanics as well as for research projects in atomic, molecular and optical physics or in physical or theoretical chemistry. The present Part I deals with the description of closed quantum systems in terms of Schrödinger equations. The emphasis is on discrete variable representations for spatial discretization as well as various techniques for temporal discretization. The upcoming Part II will focus on open quantum systems and dimension reduction; it also describes the codes for optimal control of quantum dynamics. The present work introduces the MATLAB version of WavePacket 5.2.1, which is hosted at the SourceForge platform, where extensive Wiki documentation as well as worked-out demonstration examples can be found.
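
    As a minimal illustration of the discrete variable representations the package is built on, the sketch below (written in Python rather than MATLAB and not using the WavePacket API) assembles a sinc-DVR kinetic-energy matrix on a uniform grid and diagonalizes the resulting Hamiltonian for a one-dimensional harmonic oscillator, whose exact levels are n + 1/2 in the chosen units.

      # Sinc-DVR (Colbert-Miller, uniform infinite grid) for a 1D harmonic oscillator.
      import numpy as np

      hbar = m = omega = 1.0                 # assumed units
      n_pts, x_max = 101, 10.0
      x = np.linspace(-x_max, x_max, n_pts)
      dx = x[1] - x[0]

      # Kinetic-energy matrix in the sinc-DVR basis.
      i = np.arange(n_pts)
      diff = i[:, None] - i[None, :]
      T = np.where(diff == 0, np.pi**2 / 3.0, 2.0 / np.where(diff == 0, 1, diff)**2)
      T = hbar**2 / (2.0 * m * dx**2) * ((-1.0) ** diff) * T

      # The potential is diagonal in the grid (DVR) representation.
      V = np.diag(0.5 * m * omega**2 * x**2)

      E = np.linalg.eigvalsh(T + V)
      print("lowest DVR eigenvalues:", np.round(E[:5], 4))   # ~ 0.5, 1.5, 2.5, ...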

  19. 3D ductile crack propagation within a polycrystalline microstructure using XFEM

    NASA Astrophysics Data System (ADS)

    Beese, Steffen; Loehnert, Stefan; Wriggers, Peter

    2018-02-01

    In this contribution we present a gradient enhanced damage based method to simulate discrete crack propagation in 3D polycrystalline microstructures. Discrete cracks are represented using the eXtended finite element method. The crack propagation criterion and the crack propagation direction for each point along the crack front line is based on the gradient enhanced damage variable. This approach requires the solution of a coupled problem for the balance of momentum and the additional global equation for the gradient enhanced damage field. To capture the discontinuity of the displacements as well as the gradient enhanced damage along the discrete crack, both fields are enriched using the XFEM in combination with level sets. Knowing the crack front velocity, level set methods are used to compute the updated crack geometry after each crack propagation step. The applied material model is a crystal plasticity model often used for polycrystalline microstructures of metals in combination with the gradient enhanced damage model. Due to the inelastic material behaviour after each discrete crack propagation step a projection of the internal variables from the old to the new crack configuration is required. Since for arbitrary crack geometries ill-conditioning of the equation system may occur due to (near) linear dependencies between standard and enriched degrees of freedom, an XFEM stabilisation technique based on a singular value decomposition of the element stiffness matrix is proposed. The performance of the presented methodology to capture crack propagation in polycrystalline microstructures is demonstrated with a number of numerical examples.

  20. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
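
    A minimal sketch of this procedure in Python follows (the data are simulated for illustration and k = 4 is an assumed number of variables): the largest eigenvalue of the correlation matrix is computed for the observed data and compared with a permutation null distribution obtained by independently re-ordering k-1 of the variables.

      # Randomization test of association using the largest correlation-matrix eigenvalue.
      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative data: n observations of k correlated "test scores".
      n, k = 60, 4
      latent = rng.standard_normal(n)
      data = np.column_stack([latent + 0.8 * rng.standard_normal(n) for _ in range(k)])

      def largest_eig_of_corr(X):
          return np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[-1]

      observed = largest_eig_of_corr(data)

      # Null distribution: independently re-order (permute) k-1 of the variables.
      n_perm = 2000
      null = np.empty(n_perm)
      for b in range(n_perm):
          shuffled = data.copy()
          for j in range(1, k):                      # leave variable 0 fixed
              shuffled[:, j] = rng.permutation(shuffled[:, j])
          null[b] = largest_eig_of_corr(shuffled)

      p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
      print(f"largest eigenvalue = {observed:.3f}, permutation p-value = {p_value:.4f}")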

  1. A Descriptive Study Comparing GPA, Retention and Graduation of First-Time, Full-Time, Provisionally Admitted First-Generation College Students and Their Peers

    ERIC Educational Resources Information Center

    Lodhavia, Rajalakshmi

    2009-01-01

    This quantitative research study used ex post facto data to analyze possible relationships between a discrete set of independent variables and academic achievement among provisionally admitted students at a public, four-year historically black university located in the mid-Atlantic United States. The independent variables were first-generation…

  2. Building Coherent Validation Arguments for the Measurement of Latent Constructs with Unified Statistical Frameworks

    ERIC Educational Resources Information Center

    Rupp, Andre A.

    2012-01-01

    In the focus article of this issue, von Davier, Naemi, and Roberts essentially coupled: (1) a short methodological review of structural similarities of latent variable models with discrete and continuous latent variables; and (2) 2 short empirical case studies that show how these models can be applied to real, rather than simulated, large-scale…

  3. Assessing Fit of Models with Discrete Proficiency Variable in Educational Assessment. Research Report. RR-04-07

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Almond, Russell; Yan, Duanli

    2004-01-01

    Model checking is a crucial part of any statistical analysis. As educators tie models for testing to cognitive theory of the domains, there is a natural tendency to represent participant proficiencies with latent variables representing the presence or absence of the knowledge, skills, and proficiencies to be tested (Mislevy, Almond, Yan, &…

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    The authors describe TRIM, an MHD code which uses finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  5. The Effect of Varying Teacher Presentation Rates on Responding during Discrete Trial Training for Two Children with Autism

    ERIC Educational Resources Information Center

    Roxburgh, Carole A.; Carbone, Vincent J.

    2013-01-01

    Recent research has emphasized the importance of manipulating antecedent variables to reduce interfering behaviors when teaching persons with autism. Few studies have focused on the effects of the rate of teacher-presented instructional demands as an independent variable. In this study, an alternating treatment design was used to evaluate the…

  6. When to be discrete: the importance of time formulation in understanding animal movement.

    PubMed

    McClintock, Brett T; Johnson, Devin S; Hooten, Mevin B; Ver Hoef, Jay M; Morales, Juan M

    2014-01-01

    Animal movement is essential to our understanding of population dynamics, animal behavior, and the impacts of global change. Coupled with high-resolution biotelemetry data, exciting new inferences about animal movement have been facilitated by various specifications of contemporary models. These approaches differ, but most share common themes. One key distinction is whether the underlying movement process is conceptualized in discrete or continuous time. This is perhaps the greatest source of confusion among practitioners, both in terms of implementation and biological interpretation. In general, animal movement occurs in continuous time but we observe it at fixed discrete-time intervals. Thus, continuous time is conceptually and theoretically appealing, but in practice it is perhaps more intuitive to interpret movement in discrete intervals. With an emphasis on state-space models, we explore the differences and similarities between continuous and discrete versions of mechanistic movement models, establish some common terminology, and indicate under which circumstances one form might be preferred over another. Counter to the overly simplistic view that discrete- and continuous-time conceptualizations are merely different means to the same end, we present novel mathematical results revealing hitherto unappreciated consequences of model formulation on inferences about animal movement. Notably, the speed and direction of movement are intrinsically linked in current continuous-time random walk formulations, and this can have important implications when interpreting animal behavior. We illustrate these concepts in the context of state-space models with multiple movement behavior states using northern fur seal (Callorhinus ursinus) biotelemetry data.

  7. When to be discrete: The importance of time formulation in understanding animal movement

    USGS Publications Warehouse

    McClintock, Brett T.; Johnson, Devin S.; Hooten, Mevin B.; Ver Hoef, Jay M.; Morales, Juan M.

    2014-01-01

    Animal movement is essential to our understanding of population dynamics, animal behavior, and the impacts of global change. Coupled with high-resolution biotelemetry data, exciting new inferences about animal movement have been facilitated by various specifications of contemporary models. These approaches differ, but most share common themes. One key distinction is whether the underlying movement process is conceptualized in discrete or continuous time. This is perhaps the greatest source of confusion among practitioners, both in terms of implementation and biological interpretation. In general, animal movement occurs in continuous time but we observe it at fixed discrete-time intervals. Thus, continuous time is conceptually and theoretically appealing, but in practice it is perhaps more intuitive to interpret movement in discrete intervals. With an emphasis on state-space models, we explore the differences and similarities between continuous and discrete versions of mechanistic movement models, establish some common terminology, and indicate under which circumstances one form might be preferred over another. Counter to the overly simplistic view that discrete- and continuous-time conceptualizations are merely different means to the same end, we present novel mathematical results revealing hitherto unappreciated consequences of model formulation on inferences about animal movement. Notably, the speed and direction of movement are intrinsically linked in current continuous-time random walk formulations, and this can have important implications when interpreting animal behavior. We illustrate these concepts in the context of state-space models with multiple movement behavior states using northern fur seal (Callorhinus ursinus) biotelemetry data.

  8. Convergence Time towards Periodic Orbits in Discrete Dynamical Systems

    PubMed Central

    San Martín, Jesús; Porter, Mason A.

    2014-01-01

    We investigate the convergence towards periodic orbits in discrete dynamical systems. We examine the probability that a randomly chosen point converges to a particular neighborhood of a periodic orbit in a fixed number of iterations, and we use linearized equations to examine the evolution near that neighborhood. The underlying idea is that the points of a stable periodic orbit are associated with intervals. We state and prove a theorem that details what regions of phase space are mapped into these intervals (once they are known) and how many iterations are required to get there. We also construct algorithms that allow our theoretical results to be implemented successfully in practice. PMID:24736594
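
    An experiment in this spirit can be sketched as follows (the map, parameter value, and neighborhood size are assumed for illustration, not taken from the paper): random initial points are iterated under the logistic map, and the number of iterations needed to enter a small neighborhood of the stable period-2 orbit is recorded.

      # Iterations needed for random points to reach a neighborhood of a stable 2-cycle.
      import random

      random.seed(0)
      r, eps, trials = 3.2, 1e-3, 10_000

      # Locate the attracting 2-cycle by iterating a reference point past the transient.
      x = 0.5
      for _ in range(10_000):
          x = r * x * (1 - x)
      cycle = (x, r * x * (1 - x))

      def hitting_time(x0, max_iter=5000):
          x = x0
          for n in range(max_iter):
              if any(abs(x - c) < eps for c in cycle):
                  return n
              x = r * x * (1 - x)
          return max_iter

      times = [hitting_time(random.random()) for _ in range(trials)]
      print("mean iterations to reach the periodic neighborhood:", sum(times) / trials)
      print("fraction converged within 100 iterations:",
            sum(t <= 100 for t in times) / trials)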

  9. Five Misunderstandings About Cultural Evolution.

    PubMed

    Henrich, Joseph; Boyd, Robert; Richerson, Peter J

    2008-06-01

    Recent debates about memetics have revealed some widespread misunderstandings about Darwinian approaches to cultural evolution. Drawing from these debates, this paper disputes five common claims: (1) mental representations are rarely discrete, and therefore models that assume discrete, gene-like particles (i.e., replicators) are useless; (2) replicators are necessary for cumulative, adaptive evolution; (3) content-dependent psychological biases are the only important processes that affect the spread of cultural representations; (4) the "cultural fitness" of a mental representation can be inferred from its successful transmission; and (5) selective forces only matter if the sources of variation are random. We close by sketching the outlines of a unified evolutionary science of culture.

  10. Random walks and diffusion on networks

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud

    2017-11-01

    Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
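
    The simplest of the walk types surveyed above can be sketched in a few lines of Python (the graph and step count are arbitrary): a discrete-time random walk on an undirected network, whose stationary distribution is proportional to node degree, so empirical visit frequencies should approach degree/2m.

      # Discrete-time random walk on a small undirected graph.
      import random
      from collections import Counter

      random.seed(0)

      adj = {                       # adjacency list of an assumed toy network
          0: [1, 2],
          1: [0, 2, 3],
          2: [0, 1, 3, 4],
          3: [1, 2],
          4: [2],
      }

      steps = 200_000
      node = 0
      visits = Counter()
      for _ in range(steps):
          node = random.choice(adj[node])      # jump to a uniformly chosen neighbour
          visits[node] += 1

      total_degree = sum(len(nbrs) for nbrs in adj.values())   # equals 2m
      for v in sorted(adj):
          print(f"node {v}: empirical {visits[v] / steps:.3f}  "
                f"stationary {len(adj[v]) / total_degree:.3f}")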

  11. Implementation of continuous-variable quantum key distribution with discrete modulation

    NASA Astrophysics Data System (ADS)

    Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2017-06-01

    We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.

  12. FDTD modelling of induced polarization phenomena in transient electromagnetics

    NASA Astrophysics Data System (ADS)

    Commer, Michael; Petrov, Peter V.; Newman, Gregory A.

    2017-04-01

    The finite-difference time-domain scheme is augmented in order to treat the modelling of transient electromagnetic signals containing induced polarization effects from 3-D distributions of polarizable media. Compared to the non-dispersive problem, the discrete dispersive Maxwell system contains costly convolution operators. Key components to our solution for highly digitized model meshes are Debye decomposition and composite memory variables. We revert to the popular Cole-Cole model of dispersion to describe the frequency-dependent behaviour of electrical conductivity. Its inversely Laplace-transformed Debye decomposition results in a series of time convolutions between electric field and exponential decay functions, with the latter reflecting each Debye constituents' individual relaxation time. These function types in the discrete-time convolution allow for their substitution by memory variables, annihilating the otherwise prohibitive computing demands. Numerical examples demonstrate the efficiency and practicality of our algorithm.
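
    The role of the memory variables can be shown with a generic recursion (a schematic sketch, not the paper's discretized Maxwell update): for an exponential kernel, the running convolution of a field with exp(-t/tau) can be advanced one time step at O(1) cost instead of re-summing the full history at every step.

      # Exponential-kernel convolution as an O(1)-per-step memory-variable recursion:
      #   M(t) = integral_0^t exp(-(t-s)/tau) E(s) ds,
      #   M(t+dt) = exp(-dt/tau) * M(t) + dt * E(t+dt)   (rectangle-rule quadrature).
      import math

      tau, dt, n_steps = 0.5, 1e-3, 5000
      E = [math.sin(5.0 * k * dt) for k in range(n_steps)]   # assumed driving field

      # Recursive memory-variable update.
      M = 0.0
      for k in range(1, n_steps):
          M = math.exp(-dt / tau) * M + dt * E[k]

      # Direct discrete convolution evaluated at the final time, for comparison.
      t_end = (n_steps - 1) * dt
      direct = sum(math.exp(-(t_end - k * dt) / tau) * E[k] * dt
                   for k in range(1, n_steps))

      print(f"memory variable: {M:.6f}   direct convolution: {direct:.6f}")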

  13. Variable speed wind turbine control by discrete-time sliding mode approach.

    PubMed

    Torchani, Borhen; Sellami, Anis; Garcia, Germain

    2016-05-01

    The aim of this paper is to propose a new variable speed wind turbine control design based on a discrete-time sliding mode approach. The methodology is designed for linear saturated systems, with the saturation constraint imposed on the input vector. To this end, the backstepping design procedure is followed to construct a suitable sliding manifold that guarantees the attainment of a stabilization control objective. The mechanism is investigated under commonly proposed assumptions to deal with the damping, shaft stiffness and inertia effect of the gear. The objectives are to synthesize robust controllers that maximize the energy extracted from wind, while reducing mechanical loads and rotor speed tracking combined with an electromagnetic torque. Simulation results of the proposed scheme are presented. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Six-component semi-discrete integrable nonlinear Schrödinger system

    NASA Astrophysics Data System (ADS)

    Vakhnenko, Oleksiy O.

    2018-01-01

    We suggest a six-component integrable nonlinear system on a quasi-one-dimensional lattice. Due to its symmetrical form, the general system permits a number of reductions; one of these, treated as the semi-discrete integrable nonlinear Schrödinger system on a lattice with three structural elements in the unit cell, is considered in considerable detail. Besides six truly independent basic field variables, the system is characterized by four concomitant fields whose background values produce three additional types of inter-site resonant interactions between the basic fields. As a result, the system dynamics becomes associated with a highly nonstandard form of Poisson structure. The elementary Poisson brackets between all field variables are calculated and presented explicitly. The richness of the system dynamics is demonstrated on a multi-component soliton solution written in terms of properly parameterized soliton characteristics.

  15. Environmental Noise Could Promote Stochastic Local Stability of Behavioral Diversity Evolution

    NASA Astrophysics Data System (ADS)

    Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi

    2018-05-01

    In this Letter, we investigate stochastic stability in a two-phenotype evolutionary game model for an infinite, well-mixed population undergoing discrete, nonoverlapping generations. We assume that the fitness of a phenotype is an exponential function of its expected payoff following random pairwise interactions whose outcomes randomly fluctuate with time. We show that the stochastic local stability of a constant interior equilibrium can be promoted by the random environmental noise even if the system may display a complicated nonlinear dynamics. This result provides a new perspective for a better understanding of how environmental fluctuations may contribute to the evolution of behavioral diversity.
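
    A toy simulation consistent with this setup is sketched below (the payoff values, noise strength, and generation count are assumed, not taken from the Letter): two phenotypes evolve over discrete nonoverlapping generations, the fitness of each phenotype is the exponential of its expected payoff, and the payoff entries fluctuate randomly from generation to generation.

      # Two-phenotype discrete-generation dynamics with exponential fitness and
      # randomly fluctuating payoffs (illustrative parameter values).
      import math
      import random

      random.seed(0)

      a, b, c, d = 1.0, 3.0, 2.0, 1.5   # baseline payoffs (A vs A, A vs B, B vs A, B vs B)
      sigma = 0.4                       # strength of environmental fluctuations
      x = 0.2                           # initial frequency of phenotype A

      traj = [x]
      for _ in range(2000):
          e = [random.gauss(0.0, sigma) for _ in range(4)]   # payoff fluctuations
          pay_A = (a + e[0]) * x + (b + e[1]) * (1 - x)      # expected payoff of A
          pay_B = (c + e[2]) * x + (d + e[3]) * (1 - x)      # expected payoff of B
          fA, fB = math.exp(pay_A), math.exp(pay_B)          # exponential fitness
          x = x * fA / (x * fA + (1 - x) * fB)               # next-generation frequency
          traj.append(x)

      print("final frequency of A:", round(x, 4))
      print("mean frequency over the last 500 generations:",
            round(sum(traj[-500:]) / 500, 4))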

  16. Cavity master equation for the continuous time dynamics of discrete-spin models.

    PubMed

    Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  17. Cavity master equation for the continuous time dynamics of discrete-spin models

    NASA Astrophysics Data System (ADS)

    Aurell, E.; Del Ferraro, G.; Domínguez, E.; Mulet, R.

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  18. Chemical Distances for Percolation of Planar Gaussian Free Fields and Critical Random Walk Loop Soups

    NASA Astrophysics Data System (ADS)

    Ding, Jian; Li, Li

    2018-05-01

    We initiate the study on chemical distances of percolation clusters for level sets of two-dimensional discrete Gaussian free fields as well as loop clusters generated by two-dimensional random walk loop soups. One of our results states that the chemical distance between two macroscopic annuli away from the boundary for the random walk loop soup at the critical intensity is of dimension 1 with positive probability. Our proof method is based on an interesting combination of a theorem of Makarov, isomorphism theory, and an entropic repulsion estimate for Gaussian free fields in the presence of a hard wall.

  19. Chemical Distances for Percolation of Planar Gaussian Free Fields and Critical Random Walk Loop Soups

    NASA Astrophysics Data System (ADS)

    Ding, Jian; Li, Li

    2018-06-01

    We initiate the study on chemical distances of percolation clusters for level sets of two-dimensional discrete Gaussian free fields as well as loop clusters generated by two-dimensional random walk loop soups. One of our results states that the chemical distance between two macroscopic annuli away from the boundary for the random walk loop soup at the critical intensity is of dimension 1 with positive probability. Our proof method is based on an interesting combination of a theorem of Makarov, isomorphism theory, and an entropic repulsion estimate for Gaussian free fields in the presence of a hard wall.

  20. A study of MRI gradient echo signals from discrete magnetic particles with considerations of several parameters in simulations.

    PubMed

    Kokeny, Paul; Cheng, Yu-Chung N; Xie, He

    2018-05-01

    Modeling MRI signal behaviors in the presence of discrete magnetic particles is important, as magnetic particles appear in nanoparticle-labeled cells, contrast agents, and other biological forms of iron. Currently, many models that take into account the discrete particle nature in a system have been used to predict magnitude signal decays in the form of R2* or R2' from one single voxel. Little work has been done for predicting phase signals. In addition, most calculations of phase signals rely on the assumption that a system containing discrete particles behaves as a continuous medium. In this work, numerical simulations are used to investigate MRI magnitude and phase signals from discrete particles, without diffusion effects. Factors such as particle size, number density, susceptibility, volume fraction, particle arrangements for their randomness, and field of view have been considered in simulations. The results are compared to either a ground truth model, theoretical work based on continuous media, or previous literature. Suitable parameters used to model particles in several voxels that lead to acceptable magnetic field distributions around particle surfaces and accurate MR signals are identified. The phase values as a function of echo time from a central voxel filled by particles can be significantly different from those of a continuous cubic medium. However, a completely random distribution of particles can lead to an R2' value which agrees with the prediction from the static dephasing theory. A sphere with a radius of at least 4 grid points used in simulations is found to be acceptable to generate MR signals equivalent to those from a larger sphere. Increasing the number of particles with a fixed volume fraction in simulations reduces the resulting variance in the phase behavior, which converges to almost the same phase value for different particle numbers at each echo time. The variance of phase values is also reduced when increasing the number of particles in a fixed voxel. These results indicate that MRI signals from voxels containing discrete particles, even with a sufficient number of particles per voxel, cannot be properly modeled by a continuous medium with an equivalent susceptibility value in the voxel. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Efficient numerical method for investigating diatomic molecules with single active electron subjected to intense and ultrashort laser fields

    NASA Astrophysics Data System (ADS)

    Kiss, Gellért Zsolt; Borbély, Sándor; Nagy, Ladislau

    2017-12-01

    We present here an efficient numerical approach for the ab initio numerical solution of the time-dependent Schrödinger equation describing diatomic molecules which interact with ultrafast laser pulses. During the construction of the model we assumed a frozen nuclear configuration and a single active electron. In order to increase efficiency, our system was described using prolate spheroidal coordinates, where the wave function was discretized using the finite-element discrete variable representation (FE-DVR) method. The discretized wave functions were efficiently propagated in time using the short-iterative Lanczos algorithm. As a first test, we studied how the laser-induced bound-state dynamics in H2+ is influenced by the strength of the driving laser field.

  2. Gradient modeling of conifer species using random forests

    Treesearch

    Jeffrey S. Evans; Samuel A. Cushman

    2009-01-01

    Landscape ecology often adopts a patch mosaic model of ecological patterns. However, many ecological attributes are inherently continuous, and classification of species composition into vegetation communities and discrete patches provides an overly simplistic view of the landscape. If one adopts a niche-based, individualistic concept of biotic communities then it may...

  3. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  4. Performance on perceptual word identification is mediated by discrete states.

    PubMed

    Swagman, April R; Province, Jordan M; Rouder, Jeffrey N

    2015-02-01

    We contrast predictions from discrete-state models of all-or-none information loss with signal-detection models of graded strength for the identification of briefly flashed English words. Previous assessments have focused on whether ROC curves are straight or not, which is a test of a discrete-state model where detection leads to the highest confidence response with certainty. We, along with many others, argue that this certainty assumption is too constraining and, consequently, that the straight-line ROC test is too stringent. Instead, we assess a core property of discrete-state models, conditional independence, where the pattern of responses depends only on which state is entered. The conditional independence property implies that confidence ratings are a mixture of detect and guess state responses, and that stimulus strength factors, the duration of the flashed word in this report, affect only the probability of entering a state and not responses conditional on a state. To assess this mixture property, 50 participants saw words presented briefly on a computer screen at three variable flash durations followed by either a two-alternative confidence ratings task or a yes-no confidence ratings task. Comparable discrete-state and signal-detection models were fit to the data for each participant and task. The discrete-state models outperformed the signal-detection models for 90% of participants in the two-alternative task and for 68% of participants in the yes-no task. We conclude that discrete-state models are viable for predicting performance across stimulus conditions in a perceptual word identification task.

  5. Periodicity and chaos from switched flow systems - Contrasting examples of discretely controlled continuous systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joseph; Ramadge, Peter J.

    1993-01-01

    We analyze two examples of the discrete control of a continuous variable system. These examples exhibit what may be regarded as the two extremes of complexity of the closed-loop behavior: one is eventually periodic, the other is chaotic. Our examples are derived from sampled deterministic flow models. These are of interest in their own right but have also been used as models for certain aspects of manufacturing systems. In each case, we give a precise characterization of the closed-loop behavior.

  6. Angular Distributions of Discrete Mesoscale Mapping Functions

    NASA Astrophysics Data System (ADS)

    Kroszczyński, Krzysztof

    2015-08-01

    The paper presents the results of analyses of numerical experiments concerning GPS signal propagation delays in the atmosphere and the discrete mapping functions defined on their basis. The delays were determined using data from the mesoscale non-hydrostatic weather model operated in the Centre of Applied Geomatics, Military University of Technology. A special attention was paid to investigating angular characteristics of GPS slant delays for low angles of elevation. The investigation proved that the temporal and spatial variability of the slant delays depends to a large extent on current weather conditions.

  7. Prediction of road accidents: A Bayesian hierarchical approach.

    PubMed

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H

    2013-03-01

    In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any road network provided that the required data are available. Copyright © 2012 Elsevier Ltd. All rights reserved.
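
    The first of the three methodological steps, gamma-updating of occurrence rates, amounts to a standard gamma-Poisson conjugate update. The sketch below uses illustrative prior parameters and accident counts, not values from the Austrian data set.

      # Gamma-Poisson conjugate updating of an accident occurrence rate.
      # Prior: rate ~ Gamma(alpha, beta), i.e. mean alpha/beta accidents per year.
      alpha_prior, beta_prior = 2.0, 4.0

      # Observations: injury-accident counts on one road segment in consecutive years.
      yearly_counts = [0, 1, 0, 2, 1]
      exposure_years = len(yearly_counts)

      # Posterior after observing Poisson counts over the exposure period.
      alpha_post = alpha_prior + sum(yearly_counts)
      beta_post = beta_prior + exposure_years

      print(f"prior mean rate     : {alpha_prior / beta_prior:.3f} accidents/year")
      print(f"posterior mean rate : {alpha_post / beta_post:.3f} accidents/year")
      print(f"posterior variance  : {alpha_post / beta_post**2:.4f}")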

  8. Discrete-element modeling of nacre-like materials: Effects of random microstructures on strain localization and mechanical performance

    NASA Astrophysics Data System (ADS)

    Abid, Najmul; Mirkhalaf, Mohammad; Barthelat, Francois

    2018-03-01

    Natural materials such as nacre, collagen, and spider silk are composed of staggered stiff and strong inclusions in a softer matrix. This type of hybrid microstructure results in remarkable combinations of stiffness, strength, and toughness and it now inspires novel classes of high-performance composites. However, the analytical and numerical approaches used to predict and optimize the mechanics of staggered composites often neglect statistical variations and inhomogeneities, which may have significant impacts on modulus, strength, and toughness. Here we present an analysis of localization using small representative volume elements (RVEs) and large scale statistical volume elements (SVEs) based on the discrete element method (DEM). DEM is an efficient numerical method which enabled the evaluation of more than 10,000 microstructures in this study, each including about 5,000 inclusions. The models explore the combined effects of statistics, inclusion arrangement, and interface properties. We find that statistical variations have a negative effect on all properties, in particular on the ductility and energy absorption because randomness precipitates the localization of deformations. However, the results also show that the negative effects of random microstructures can be offset by interfaces with large strain at failure accompanied by strain hardening. More specifically, this quantitative study reveals an optimal range of interface properties where the interfaces are the most effective at delaying localization. These findings show how carefully designed interfaces in bioinspired staggered composites can offset the negative effects of microstructural randomness, which is inherent to most current fabrication methods.

  9. Hybrid Methods in Quantum Information

    NASA Astrophysics Data System (ADS)

    Marshall, Kevin

    Today, the potential power of quantum information processing comes as no surprise to physicists and science-fiction writers alike. However, the grand promises of this field remain unrealized, despite significant strides forward, due to the inherent difficulties of manipulating quantum systems. Simply put, it turns out to be incredibly difficult to interact, in a controllable way, with the quantum realm when we seem to live our day-to-day lives in a classical world. In an effort to meet this challenge, researchers are exploring a variety of physical platforms, each with its strengths and weaknesses, in the hope of developing new experimental methods that might one day allow us to control a quantum system. One path forward rests in combining different quantum systems in novel ways to exploit the benefits of each while circumventing their respective weaknesses. In particular, quantum systems come in two flavours: discrete-variable systems and continuous-variable ones. The field of hybrid quantum information seeks to combine these systems, in clever ways, to help overcome the challenges blocking the path between what is theoretically possible and what is achievable in a laboratory. In this thesis we explore four topics in the context of hybrid methods in quantum information, in an effort to contribute to the resolution of existing challenges and to stimulate new avenues of research. First, we explore the manipulation of a continuous-variable quantum system consisting of phonons in a linear chain of trapped ions, where we use the discretized internal levels to mediate interactions. Using our proposed interaction we are able to implement, for example, the acoustic equivalent of a beam splitter with modest experimental resources. Next, we propose an experimentally feasible implementation of the cubic phase gate, a primitive non-Gaussian gate required for universal continuous-variable quantum computation, based on sequential photon subtraction. We then discuss the notion of embedding a finite-dimensional state into a continuous-variable system, and propose a method of performing quantum computations on encrypted continuous-variable states. This protocol allows a client of limited quantum ability to outsource a computation while hiding their information. Next, we discuss the possibility of performing universal quantum computation on discrete-variable logical states encoded in mixed continuous-variable quantum states. Finally, we present an account of open problems related to our results, and possible future avenues of research.
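
    For reference, the cubic phase gate mentioned above has a standard definition in the continuous-variable literature (general background, not a statement about this thesis's particular construction): it is a non-Gaussian unitary generated by the cube of the position quadrature, and together with Gaussian operations it suffices for universal continuous-variable computation.

```latex
% Standard form of the cubic phase gate acting on a single mode (\hbar = 1):
V(\gamma) = \exp\!\left(i\gamma\,\hat{x}^{3}\right),
\qquad
V^{\dagger}(\gamma)\,\hat{x}\,V(\gamma) = \hat{x},
\qquad
V^{\dagger}(\gamma)\,\hat{p}\,V(\gamma) = \hat{p} + 3\gamma\,\hat{x}^{2}.
```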

  10. Discrete Angle Radiative Transfer in Uniform and Extremely Variable Clouds.

    NASA Astrophysics Data System (ADS)

    Gabriel, Philip Mitri

    The transfer of radiant energy in highly inhomogeneous media is a difficult problem that is encountered in many geophysical applications. The purpose of this thesis is to study some problems connected with the scattering of solar radiation in natural clouds, in which extreme variability of the optical density is believed to be common. In order to facilitate the study of very inhomogeneous optical media such as clouds, the difficult angular part of radiative transfer calculations is simplified by considering a series of models in which conservative scattering occurs only in discrete directions. Analytic and numerical results for the radiative properties of these Discrete Angle Radiative Transfer (DART) systems are obtained in the limits of both optically thin and thick media. Specific results include: (a) In thick homogeneous media, the albedo (reflection coefficient), unlike the transmission, cannot be obtained from a diffusion equation. (b) With the aid of an exact analogy with an early model of conductor/superconductor mixtures, it is argued that in inhomogeneous media with embedded holes, neither the transmission nor the albedo can be described by diffusive random walks. (c) Using renormalization methods, it is shown that thin-cloud behaviour is sensitive to the scattering phase function because it is associated with a repelling fixed point, whereas the thick-cloud limit is universal in that it is phase-function independent and associated with an attracting fixed point. (d) In fractal media, the optical thickness required for a given albedo or transmission can differ by large factors from that required in the corresponding plane-parallel geometry; the relevant scaling exponents have been calculated in a very simple example. (e) Important global meteorological and climatological implications of the above are discussed when applied to the scattering of visible light in clouds. In the remote-sensing context, an analysis of satellite data reveals that increasing a satellite's resolution reveals increasingly detailed structures that occupy a decreasing fraction of the image while simultaneously brightening to compensate. By systematically degrading the resolution of visible and infrared satellite cloud and surface data as well as radar rain data, resolution-independent co-dimension functions were defined which were useful in describing the spatial distribution of image features as well as the resolution dependence of the intensities themselves. The scale-invariant functions so obtained fit the theoretically predicted functional forms. These multifractal techniques have implications for our ability to meaningfully estimate cloud brightness fraction, total cloud amount, and other remotely sensed quantities.
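
    The simplest discrete-angle system, a two-flux (up/down) non-absorbing slab, already shows the characteristic thick-medium behaviour in which transmission falls off algebraically with optical thickness. The sketch below builds such a slab by layer doubling with the standard adding equations; it is a generic toy calculation under that two-flux assumption, not a reproduction of the DART models of the thesis, and the backscatter fraction is an arbitrary illustrative choice.

```python
import numpy as np

def add_slabs(r1, t1, r2, t2):
    """Adding equations for two non-absorbing, symmetric slabs stacked together."""
    denom = 1.0 - r1 * r2                     # multiple reflections between the slabs
    return r1 + t1 * t1 * r2 / denom, t1 * t2 / denom

def slab_albedo_transmission(tau, backscatter=0.5, n_doublings=30):
    """Albedo and transmission of a homogeneous conservative slab via layer doubling."""
    dtau = tau / 2 ** n_doublings
    r, t = backscatter * dtau, 1.0 - backscatter * dtau   # optically thin starting layer
    for _ in range(n_doublings):
        r, t = add_slabs(r, t, r, t)
    return r, t

for tau in [1.0, 10.0, 100.0]:
    r, t = slab_albedo_transmission(tau)
    print(f"tau={tau:6.1f}  albedo={r:.4f}  transmission={t:.4f}  "
          f"two-flux analytic 1/(1+0.5*tau)={1 / (1 + 0.5 * tau):.4f}")
```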

  11. The discrete regime of flame propagation

    NASA Astrophysics Data System (ADS)

    Tang, Francois-David; Goroshin, Samuel; Higgins, Andrew

    The propagation of laminar dust flames in iron dust clouds was studied in a low-gravity environment on-board a parabolic flight aircraft. The elimination of buoyancy-induced convection and particle settling permitted measurements of fundamental combustion parameters such as the burning velocity and the flame quenching distance over a wide range of particle sizes and in different gaseous mixtures. The discrete regime of flame propagation was observed by substituting nitrogen present in air with xenon, an inert gas with a significantly lower heat conductivity. Flame propagation in the discrete regime is controlled by the heat transfer between neighboring particles, rather than by the particle burning rate used by traditional continuum models of heterogeneous flames. The propagation mechanism of discrete flames depends on the spatial distribution of particles, and thus such flames are strongly influenced by local fluctuations in the fuel concentration. Constant pressure laminar dust flames were observed inside 70 cm long, 5 cm diameter Pyrex tubes. Equally-spaced plate assemblies forming rectangular channels were placed inside each tube to determine the quenching distance, defined as the minimum channel width through which a flame can successfully propagate. High-speed video cameras were used to measure the flame speed and a fiber optic spectrometer was used to measure the flame temperature. Experimental results were compared with predictions obtained from a numerical model of a three-dimensional flame developed to capture both the discrete nature and the random distribution of particles in the flame. Though good qualitative agreement was obtained between model predictions and experimental observations, residual g-jitters and the short reduced-gravity periods prevented further investigations of propagation limits in the discrete regime. The full exploration of the discrete flame phenomenon would require the high-quality, long-duration reduced-gravity environment available only on orbital platforms.
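
    The statement that discrete-regime propagation is governed by inter-particle heat transfer, and hence by particle spacing, can be caricatured with a one-dimensional toy model in which the ignition delay between neighbouring particles is taken proportional to the diffusive time spacing²/α. This is a deliberately crude sketch showing how spacing fluctuations lower the average propagation speed; it is not the three-dimensional numerical model described in the abstract, and the spacing distribution and parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def discrete_flame_speed(n_particles=10_000, mean_spacing=1.0, cv=0.5, alpha=1.0):
    """Toy estimate of discrete-flame speed in a 1-D chain of point heat sources.

    Assumption (illustrative only): the ignition delay between neighbouring
    particles scales as the diffusive heat-transfer time spacing**2 / alpha,
    so the propagation speed is total distance / total delay.
    """
    # Random inter-particle spacings with a chosen coefficient of variation (cv).
    spacings = rng.gamma(shape=1 / cv**2, scale=mean_spacing * cv**2, size=n_particles)
    delays = spacings**2 / alpha
    return spacings.sum() / delays.sum()

for cv in [0.01, 0.3, 0.6]:
    print(f"spacing cv={cv:4.2f}  toy flame speed={discrete_flame_speed(cv=cv):.3f}")
```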

  12. Modeling marbled murrelet (Brachyramphus marmoratus) habitat using LiDAR-derived canopy data

    USGS Publications Warehouse

    Hagar, Joan C.; Eskelson, Bianca N.I.; Haggerty, Patricia K.; Nelson, S. Kim; Vesely, David G.

    2014-01-01

    LiDAR (Light Detection And Ranging) is an emerging remote-sensing tool that can provide fine-scale data describing vertical complexity of vegetation relevant to species that are responsive to forest structure. We used LiDAR data to estimate occupancy probability for the federally threatened marbled murrelet (Brachyramphus marmoratus) in the Oregon Coast Range of the United States. Our goal was to address the need identified in the Recovery Plan for a more accurate estimate of the availability of nesting habitat by developing occupancy maps based on refined measures of nest-strand structure. We used murrelet occupancy data collected by the Bureau of Land Management Coos Bay District, and canopy metrics calculated from discrete return airborne LiDAR data, to fit a logistic regression model predicting the probability of occupancy. Our final model for stand-level occupancy included distance to coast, and 5 LiDAR-derived variables describing canopy structure. With an area under the curve value (AUC) of 0.74, this model had acceptable discrimination and fair agreement (Cohen's κ = 0.24), especially considering that all sites in our sample were regarded by managers as potential habitat. The LiDAR model provided better discrimination between occupied and unoccupied sites than did a model using variables derived from Gradient Nearest Neighbor maps that were previously reported as important predictors of murrelet occupancy (AUC = 0.64, κ = 0.12). We also evaluated LiDAR metrics at 11 known murrelet nest sites. Two LiDAR-derived variables accurately discriminated nest sites from random sites (average AUC = 0.91). LiDAR provided a means of quantifying 3-dimensional canopy structure with variables that are ecologically relevant to murrelet nesting habitat, and have not been as accurately quantified by other mensuration methods.
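
    A model of the kind described (logistic regression of stand-level occupancy on distance to coast plus LiDAR canopy metrics, evaluated with AUC and Cohen's κ) can be sketched as follows. The predictor names, synthetic data, and cross-validation scheme are hypothetical placeholders, not the study's actual variables or workflow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, cohen_kappa_score
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in data: one row per surveyed stand. The predictors echo the
# kinds of variables used in the study but are invented for illustration.
rng = np.random.default_rng(0)
n = 300
dist_to_coast_km = rng.uniform(0, 60, n)
canopy_height_p95 = rng.normal(45, 10, n)   # 95th-percentile canopy height (m)
height_sd = rng.normal(8, 2, n)             # vertical structural complexity
logit = -1.5 - 0.05 * dist_to_coast_km + 0.04 * canopy_height_p95 + 0.1 * height_sd
occupied = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([dist_to_coast_km, canopy_height_p95, height_sd])
p_hat = cross_val_predict(LogisticRegression(max_iter=1000), X, occupied,
                          cv=5, method="predict_proba")[:, 1]
print("AUC  :", round(roc_auc_score(occupied, p_hat), 2))
print("kappa:", round(cohen_kappa_score(occupied, p_hat >= 0.5), 2))
```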

  13. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    PubMed

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

    In this paper, a synthesized design of a fault-detection filter and a fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomenon of randomly occurring deception attacks. To achieve a fault-detection residual that is sensitive only to faults while robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems such that the unknown disturbances are removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound of the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee the fault estimator performance. Furthermore, a corresponding event-triggered sensor data transmission scheme is also presented for improving the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capability of fault detection is guaranteed when the event condition is triggered.
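
    To make the Bernoulli attack model concrete, the sketch below simulates a scalar discrete-time system whose transmitted measurement is corrupted with probability p by an injected false-data term. Only the use of a Bernoulli indicator variable follows the paper; the system dynamics, noise levels, and attack signal are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_measurements(n_steps=200, attack_prob=0.1):
    """Toy discrete-time scalar system whose measurements may be corrupted by
    randomly occurring deception attacks, modelled with a Bernoulli indicator."""
    x, a_state, c = 0.0, 0.95, 1.0
    records = []
    for _ in range(n_steps):
        x = a_state * x + rng.normal(scale=0.1)          # process noise
        y = c * x + rng.normal(scale=0.05)               # nominal measurement
        gamma = rng.binomial(1, attack_prob)             # Bernoulli attack indicator
        y_received = y + gamma * rng.uniform(1.0, 2.0)   # injected false data when gamma == 1
        records.append((y, y_received, gamma))
    return np.array(records)

data = simulate_measurements()
print("fraction of attacked samples:", data[:, 2].mean())
```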

  14. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of the weight w was based upon the gradient term ∇ of the steepest-descent equation [see text], where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel at each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimal values. Using a sharply pulsatile cardiac electrogram signal with added noise such that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
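
    A minimal sketch of the Parallel Comparison idea, evaluating the squared error at w ± Δw in parallel and stepping down the resulting finite-difference gradient estimate with a fixed Δw, is given below. It illustrates the general scheme described in the abstract, not the authors' implementation; the step sizes, signal model, and 0.8 noise-coupling weight are arbitrary assumptions.

```python
import numpy as np

def anc_parallel_comparison(primary, reference, w0=0.0, delta_w=0.01, mu=0.02):
    """Sketch of a Parallel-Comparison-style adaptive noise canceller with a
    single weighted reference channel (illustrative only)."""
    w = w0
    cleaned = np.empty_like(primary)
    for k in range(len(primary)):
        e_plus = (primary[k] - (w + delta_w) * reference[k]) ** 2
        e_minus = (primary[k] - (w - delta_w) * reference[k]) ** 2
        grad = (e_plus - e_minus) / (2.0 * delta_w)   # finite-difference gradient, fixed delta_w
        w -= mu * grad                                # steepest-descent weight update
        cleaned[k] = primary[k] - w * reference[k]
    return cleaned, w

# Synthetic test: a 5 Hz "cardiac" signal plus common-mode noise; the reference
# channel carries the noise alone, coupled into the primary with weight 0.8.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
noise = rng.normal(size=t.size)
primary = np.sin(2 * np.pi * 5 * t) + 0.8 * noise
cleaned, w_final = anc_parallel_comparison(primary, noise)
print("signal power before/after cancelling:", round(np.var(primary), 3), round(np.var(cleaned), 3))
```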

  15. Software For Integer Programming

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1992-01-01

    Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) optimizes an objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. It enables rapid solution of problems of up to 10 variables. Integer programming is required for accuracy in modeling systems containing a small number of components, the distribution of goods, the scheduling of operations on machine tools, and the scheduling of production in general. Written in Borland's TURBO Pascal.
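
    For problems of the small size this program targets, exhaustive enumeration already conveys what a pure integer linear program is. The sketch below is a generic illustrative solver, not IESIP's exploratory search; the toy objective, constraints, and bounds are made up.

```python
from itertools import product

def solve_small_integer_lp(c, A, b, bounds):
    """Brute-force solver for a tiny pure integer linear program:
        maximize    c . x
        subject to  A x <= b,  x integer within the given bounds.
    Practical only for a handful of variables."""
    best_x, best_val = None, float("-inf")
    for x in product(*(range(lo, hi + 1) for lo, hi in bounds)):
        feasible = all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
                       for row, b_i in zip(A, b))
        if feasible:
            val = sum(c_j * x_j for c_j, x_j in zip(c, x))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Toy scheduling-style example: two products, machine-hour and material limits.
x, value = solve_small_integer_lp(c=[3, 5], A=[[2, 4], [3, 2]], b=[14, 12],
                                  bounds=[(0, 6), (0, 6)])
print("optimal plan:", x, "objective:", value)
```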

  16. Force-Time Entropy of Isometric Impulse.

    PubMed

    Hsieh, Tsung-Yu; Newell, Karl M

    2016-01-01

    The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993). Two experiments in an isometric single-finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that peak force variability increased either with the increment of force level or with a shorter time to peak force, which also reduced timing error variability. The peak force entropy and the entropy of time to peak force increased on their respective dimensions as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework, with the joint force-time entropy at a minimum in the middle of the parameter range of discrete impulse production.
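
    The joint force-time entropy can be estimated in the usual way from a binned two-dimensional histogram of peak force and time to peak force across trials. The sketch below shows that generic estimator on synthetic trial data; the bin count and the simulated force and time distributions are illustrative and are not taken from the experiments.

```python
import numpy as np

def joint_entropy(x, y, bins=10):
    """Joint Shannon entropy (bits) of two variables from a 2-D histogram;
    a generic estimator, not the specific measure used in the cited paper."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical trial data: peak force (N) and time to peak force (ms) over 200 trials.
rng = np.random.default_rng(0)
peak_force = rng.normal(20.0, 2.0, size=200)
time_to_peak = rng.normal(150.0, 12.0, size=200)
print("joint force-time entropy (bits):", round(joint_entropy(peak_force, time_to_peak), 2))
```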

  17. Discrete wavelet transform and energy eigen value for rotor bars fault detection in variable speed field-oriented control of induction motor drive.

    PubMed

    Ameid, Tarek; Menacer, Arezki; Talhaoui, Hicham; Azzoug, Youness

    2018-05-03

    This paper presents a methodology for broken rotor bar fault detection when the rotor speed varies continuously and the induction machine is controlled by Field-Oriented Control (FOC). The rotor fault detection is obtained by analyzing several mechanical and electrical quantities (i.e., rotor speed, stator phase current and the output signal of the speed regulator) with the Discrete Wavelet Transform (DWT) in variable-speed drives. The severity of the fault is obtained from a stored-energy calculation on the active power signal; hence, it can serve as a useful fault indicator. The FOC is implemented in order to preserve good speed-control performance, to compensate for the effect of the broken rotor bars on the mechanical speed, to ensure continuity of operation, and to investigate the effect of the fault under variable speed. The effectiveness of the technique is evaluated in simulation and in a real-time implementation using Matlab/Simulink with the real-time interface (RTI) based on a dSpace 1104 board. Copyright © 2018. Published by Elsevier Ltd.
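
    A minimal sketch of the kind of DWT band-energy indicator described above, using the PyWavelets package, is given below. The wavelet family, decomposition level, and the synthetic stator-current signal (a 50 Hz fundamental plus small (1 ± 2s)·50 Hz sidebands of the sort broken bars typically introduce) are illustrative assumptions, not the settings of the cited study.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_band_energy(signal, wavelet="db8", level=6):
    """Energy of each DWT detail band plus the final approximation; the energy
    distribution across bands can serve as a fault-severity indicator."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_level, cD_level, ..., cD_1]
    return [float(np.sum(c ** 2)) for c in coeffs]

# Hypothetical stator-current sample: 50 Hz fundamental plus a small lower
# sideband at (1 - 2s) * 50 Hz, with slip s = 0.03.
fs, s = 10_000, 0.03
t = np.arange(0.0, 2.0, 1.0 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 50 * (1 - 2 * s) * t)

energies = dwt_band_energy(current)
print("band energies (approximation first):", [round(e, 1) for e in energies])
```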

  18. Stronger steerability criterion for more uncertain continuous-variable systems

    NASA Astrophysics Data System (ADS)

    Chowdhury, Priyanka; Pramanik, Tanumoy; Majumdar, A. S.

    2015-10-01

    We derive a fine-grained uncertainty relation for the measurement of two incompatible observables on a single quantum system of continuous variables, and show that continuous-variable systems are more uncertain than discrete-variable systems. Using the derived fine-grained uncertainty relation, we formulate a stronger steering criterion that is able to reveal the steerability of NOON states, which has hitherto not been possible using other criteria. We further obtain a monogamy relation for our steering inequality which leads, in principle, to an improved lower bound on the secret key rate of a one-sided device-independent quantum key distribution protocol for continuous variables.

  19. Central Tropical Pacific Variability And ENSO Response To Changing Climate Boundary Conditions: Evidence From Individual Line Island Foraminifera

    NASA Astrophysics Data System (ADS)

    Rustic, G. T.; Polissar, P. J.; Ravelo, A. C.; White, S. M.

    2017-12-01

    The El Niño Southern Oscillation (ENSO) plays a dominant role in Earth's climate variability. Paleoceanographic evidence suggests that ENSO has changed in the past, and these changes have been linked to large-scale climatic shifts. While a close relationship between ENSO evolution and climate boundary conditions has been predicted, testing these predictions remains challenging. These climate boundary conditions, including insolation, the mean surface temperature gradient of the tropical Pacific, global ice volume, and tropical thermocline depth, often co-vary and may work together to suppress or enhance the ocean-atmosphere feedbacks that drive ENSO variability. Furthermore, suitable paleo-archives spanning multiple climate states are sparse. We aimed to test the ENSO response to changing climate boundary conditions by generating new reconstructions of mixed-layer variability from sedimentary archives spanning the last three glacial-interglacial cycles from the Central Tropical Pacific Line Islands, where El Niño is strongly expressed. We analyzed Mg/Ca ratios from individual foraminifera to reconstruct mixed-layer variability at discrete time intervals representing combinations of climatic boundary conditions from the middle Holocene to Marine Isotope Stage (MIS) 8. We observe changes in mixed-layer temperature variability during MIS 5 and during the previous interglacial (MIS 7), showing significant reductions in ENSO amplitude. Differences in variability during glacial and interglacial intervals are also observed. Additionally, we reconstructed mixed-layer and thermocline conditions using multi-species Mg/Ca and stable isotope measurements to more fully characterize the state of the Central Tropical Pacific during these intervals. These reconstructions provide a unique view of Central Tropical Pacific variability and water-column structure at discrete intervals under varying climate boundary conditions, with which to assess the factors that shape ENSO variability.
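
    Individual-foraminifera Mg/Ca studies of this kind typically convert each shell's Mg/Ca to a calcification temperature with an exponential calibration and then treat the spread of temperatures within a sample as a measure of mixed-layer (e.g. ENSO-driven) variability. The sketch below uses that generic approach; the calibration constants and the synthetic Mg/Ca values are illustrative assumptions and are not the ones adopted in this study.

```python
import numpy as np

def mgca_to_temperature(mg_ca, a=0.09, b=0.38):
    """Convert foraminiferal Mg/Ca (mmol/mol) to calcification temperature (degC)
    using an exponential calibration of the form Mg/Ca = b * exp(a * T).
    The constants are illustrative of commonly used planktic calibrations."""
    return np.log(np.asarray(mg_ca) / b) / a

# Hypothetical individual-foraminifera Mg/Ca measurements from one sediment interval.
rng = np.random.default_rng(0)
mg_ca = rng.normal(4.2, 0.35, size=60).clip(min=2.0)
temps = mgca_to_temperature(mg_ca)

# The spread of individual temperatures reflects mixed-layer variability within
# the interval, which is the quantity compared across climate states.
print("mean T: %.1f degC   std: %.2f degC" % (temps.mean(), temps.std(ddof=1)))
```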

  20. 78 FR 75942 - Certain Mobile Phones and Tablet Computers, and Components Thereof; Commission Determination To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-13

    ... implemented using a variable voltage source'' is an appropriate modifier of the corresponding structure for... the Nokia-Qualcomm agreement. The parties have been invited to brief only the discrete issues...
