Sample records for distributed discrete event

  1. Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification

    DTIC Science & Technology

    2014-09-18

and full-scale experimental verifications towards ground-satellite quantum key distribution. Nat Photonics...Yin J. Cao Y. Liu S...Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification DISSERTATION Jeffrey D. Morris... QUANTUM KEY DISTRIBUTION SIMULATION FRAMEWORK USING THE DISCRETE EVENT SYSTEM SPECIFICATION DISSERTATION Presented to the Faculty Department of Systems

  2. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each said process; and programming each said agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
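The clock-tick / resources-received / request-for-output dispatch described in this record can be sketched in a few lines. This is an illustrative Python sketch: the `Agent` class, event names, and message-loop shape are assumptions for exposition, not the patented implementation.

```python
class Agent:
    """A process agent that reacts to the three discrete event types."""
    def __init__(self, name):
        self.name = name
        self.output = 0

    def handle(self, event):
        if event == "clock_tick":
            pass  # advance internal state one step
        elif event == "resources_received":
            self.output += 1  # consume resources into work-in-progress
        elif event == "request_output":
            produced, self.output = self.output, 0
            return produced  # ship accumulated output

def message_loop(agents, events):
    """Broadcast each discrete event to every agent, collecting responses."""
    responses = []
    for event in events:
        for agent in agents:
            r = agent.handle(event)
            if r is not None:
                responses.append((agent.name, r))
    return responses

agents = [Agent("mill"), Agent("lathe")]
log = message_loop(agents, ["resources_received", "clock_tick", "request_output"])
print(log)  # each agent ships 1 unit: [('mill', 1), ('lathe', 1)]
```

Each discrete event triggers a programmed response, and the loop plays the role of the claimed message loop on a single processor.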

  3. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation

    DTIC Science & Technology

    2016-09-01

release. Distribution is unlimited. USMC INVENTORY CONTROL USING OPTIMIZATION MODELING AND DISCRETE EVENT SIMULATION by Timothy A. Curling...optimization and discrete-event simulation. This construct can potentially provide an effective means of improving order management decisions. However

  4. Requirements analysis for a hardware, discrete-event, simulation engine accelerator

    NASA Astrophysics Data System (ADS)

    Taylor, Paul J., Jr.

    1991-12-01

An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description Language (VHDL), for a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.

  5. Discrete Event Simulation of Distributed Team Communication

    DTIC Science & Technology

    2012-03-22

performs, and auditory information that is provided through multiple audio devices with speech response. This paper extends previous discrete event workload...2008, pg. 1) notes that “Architecture modeling furnishes abstractions for use in managing complexities, allowing engineers to visualise the proposed

  6. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards model and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
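As background to the abstract above, the discrete hazard at an assessment time is the fraction of subjects still at risk who fail at that time, and survival is the product of one minus the hazards. The sketch below is the standard life-table construction with illustrative numbers, not the authors' robust marginal estimator.

```python
def discrete_hazards(at_risk, failures):
    """Hazard at each discrete assessment time: failures / number at risk."""
    return [d / n for d, n in zip(failures, at_risk)]

def survival(hazards):
    """Survival curve: running product of (1 - hazard)."""
    s, out = 1.0, []
    for h in hazards:
        s *= 1.0 - h
        out.append(s)
    return out

h = discrete_hazards(at_risk=[100, 80, 60], failures=[10, 8, 6])
print([round(x, 3) for x in h])             # [0.1, 0.1, 0.1]
print([round(s, 3) for s in survival(h)])   # [0.9, 0.81, 0.729]
```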

  7. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  8. Modeling and simulation of count data.

    PubMed

    Plan, E L

    2014-08-13

Count data, or number of events per time interval, are discrete data arising from repeated time-to-event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
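The overdispersion diagnostic mentioned above can be illustrated by simulation: a Poisson count has variance-to-mean ratio 1, while a Poisson count whose rate is mixed over a Gamma distribution (i.e., a negative binomial) is overdispersed. The parameter values below are illustrative assumptions, not values from the tutorial.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's inversion sampler for a Poisson(lam) count."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def dispersion(xs):
    """Variance-to-mean ratio: 1 for Poisson, > 1 indicates overdispersion."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return v / m

rng = random.Random(1)
pois = [poisson(4.0, rng) for _ in range(20000)]
# Mixing the Poisson rate over a Gamma(2, 2) (same mean 4) gives a
# negative binomial, whose variance exceeds its mean.
nb = [poisson(rng.gammavariate(2.0, 2.0), rng) for _ in range(20000)]
print(round(dispersion(pois), 2), round(dispersion(nb), 2))
```

The Poisson sample's ratio stays near 1, while the Gamma-mixed sample's ratio is near 3 (mean + mixing variance over mean = (4 + 8) / 4).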

  9. An integrated logit model for contamination event detection in water distribution systems.

    PubMed

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented model for discrete choice prediction, estimated using the maximum likelihood method, to integrate the single alarms. The discrete choice model is jointly calibrated with the other components of the event detection framework on a training data set using genetic algorithms. The process of fusing the individual indicators' probabilities, which is left out of focus in many existing event detection system models, is confirmed to be a crucial part of the system, and modelling it with a discrete choice model improves its performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
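The sequential event-list baseline that these records benchmark against can be sketched with a central future-event list kept in a priority queue. This is a minimal illustration assuming a FIFO single-server queue with deterministic service times; it is the sequential technique being compared against, not the Chandy-Misra algorithm itself.

```python
import heapq

def simulate_queue(arrivals, service_time):
    """Return departure times for a FIFO single-server queue,
    driven by a central future-event list (a heap of timestamped events)."""
    events = [(t, "arrive") for t in arrivals]
    heapq.heapify(events)
    server_free_at = 0.0
    departures = []
    while events:
        t, kind = heapq.heappop(events)  # always process the earliest event
        if kind == "arrive":
            start = max(t, server_free_at)        # wait if the server is busy
            server_free_at = start + service_time
            heapq.heappush(events, (server_free_at, "depart"))
        else:
            departures.append(t)
    return departures

print(simulate_queue([0.0, 1.0, 1.5], 2.0))  # [2.0, 4.0, 6.0]
```

Every event passes through the single heap, which is exactly the serial bottleneck that eliminating the event list (while preserving causality) is meant to remove.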

  11. Swarming Reconnaissance Using Unmanned Aerial Vehicles in a Parallel Discrete Event Simulation

    DTIC Science & Technology

    2004-03-01

Data Distribution Management...Breathing Time Warp Algorithm/Rolling Back...events based on the process algorithm. Data proxies/distribution management is the vital portion of the SPEEDES implementation that allows objects

  12. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic networks events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and particular case of the previous model). For that, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods of their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: Chi-square test and discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspots datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
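One common construction of the discrete distributions referenced above discretizes a continuous survival function: P(X = k) = S(k) - S(k + 1) for k = 0, 1, 2, ... The sketch below does this for the two-parameter (discrete Lomax) case; the parameter values are illustrative assumptions, not the estimates fitted to the DGT blackspot data.

```python
def lomax_survival(x, a, s):
    """Continuous Lomax survival function S(x) = (1 + x/s)^(-a)."""
    return (1.0 + x / s) ** (-a)

def discrete_lomax_pmf(k, a, s):
    """Probability mass at integer k obtained by discretising S."""
    return lomax_survival(k, a, s) - lomax_survival(k + 1, a, s)

a, s = 2.0, 3.0  # illustrative shape and scale
pmf = [discrete_lomax_pmf(k, a, s) for k in range(10000)]
print(round(sum(pmf), 4))        # mass captured by the first 10,000 terms
print(pmf[0] > pmf[1] > pmf[2])  # monotone decreasing, heavy right tail
```

The heavy polynomial tail is what distinguishes this family from the negative binomial comparator mentioned in the abstract.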

  13. Improving Aircraft Refueling Procedures at Naval Air Station Oceana

    DTIC Science & Technology

    2012-06-01

Station (NAS) Oceana, VA, using aircraft waiting time for fuel as a measure of performance. We develop a computer-assisted discrete-event simulation to...server queue, with general interarrival and service time distributions

  14. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.

  15. Synchronous Parallel Emulation and Discrete Event Simulation System with Self-Contained Simulation Objects and Active Event Objects

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    1998-01-01

The present invention is embodied in a method of performing object-oriented simulation and a system having interconnected processor nodes operating in parallel to simulate mutual interactions of a set of discrete simulation objects distributed among the nodes as a sequence of discrete events changing state variables of respective simulation objects so as to generate new event-defining messages addressed to respective ones of the nodes. The object-oriented simulation is performed at each one of the nodes by assigning passive self-contained simulation objects to each one of the nodes, responding to messages received at one node by generating corresponding active event objects having user-defined inherent capabilities and individual time stamps and corresponding to respective events affecting one of the passive self-contained simulation objects of the one node, restricting the respective passive self-contained simulation objects to only providing and receiving information from the respective active event objects, requesting information and changing variables within a passive self-contained simulation object by the active event object, and producing corresponding messages specifying events resulting therefrom by the active event objects.

  16. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
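The question the package addresses, whether successes in a fixed discrete window group together, can be mimicked by direct simulation of the null model: place the successes uniformly at random and tabulate the distances between consecutive ones. This is a sketch of that null distribution, not the package's exact probability functions.

```python
import random
from collections import Counter

def gap_counts(n, g, draws, rng):
    """Empirical distribution of distances between consecutive successes
    when g successes are placed uniformly among n discrete time points."""
    counts = Counter()
    for _ in range(draws):
        pos = sorted(rng.sample(range(n), g))  # success positions, no overlap
        for a, b in zip(pos, pos[1:]):
            counts[b - a] += 1
    return counts

rng = random.Random(7)
counts = gap_counts(n=100, g=5, draws=5000, rng=rng)
# Under complete randomness, short distances are the most likely individual
# gaps; a test for over-representation compares observed short gaps to this.
print(counts[1] > counts[40])
```

Comparing observed short-distance counts against this simulated (or exact) null is the shape of the over-representation test the article proposes.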

  17. Hybrid Markov-mass action law model for cell activation by rare binding events: Application to calcium induced vesicular release at neuronal synapses.

    PubMed

    Guerrier, Claire; Holcman, David

    2016-10-18

    Binding of molecules, ions or proteins to small target sites is a generic step of cell activation. This process relies on rare stochastic events where a particle located in a large bulk has to find small and often hidden targets. We present here a hybrid discrete-continuum model that takes into account a stochastic regime governed by rare events and a continuous regime in the bulk. The rare discrete binding events are modeled by a Markov chain for the encounter of small targets by few Brownian particles, for which the arrival time is Poissonian. The large ensemble of particles is described by mass action laws. We use this novel model to predict the time distribution of vesicular release at neuronal synapses. Vesicular release is triggered by the binding of few calcium ions that can originate either from the synaptic bulk or from the entry through calcium channels. We report here that the distribution of release time is bimodal although it is triggered by a single fast action potential. While the first peak follows a stimulation, the second corresponds to the random arrival over much longer time of ions located in the synaptic terminal to small binding vesicular targets. To conclude, the present multiscale stochastic modeling approach allows studying cellular events based on integrating discrete molecular events over several time scales.

  18. Time Warp Operating System (TWOS)

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.

    1993-01-01

Designed to support parallel discrete-event simulation, TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual-time synchronization based on process rollback and message annihilation.

  19. The use of discrete-event simulation modeling to compare handwritten and electronic prescribing systems.

    PubMed

    Ghany, Ahmad; Vassanji, Karim; Kuziemsky, Craig; Keshavjee, Karim

    2013-01-01

    Electronic prescribing (e-prescribing) is expected to bring many benefits to Canadian healthcare, such as a reduction in errors and adverse drug reactions. As there currently is no functioning e-prescribing system in Canada that is completely electronic, we are unable to evaluate the performance of a live system. An alternative approach is to use simulation modeling for evaluation. We developed two discrete-event simulation models, one of the current handwritten prescribing system and one of a proposed e-prescribing system, to compare the performance of these two systems. We were able to compare the number of processes in each model, workflow efficiency, and the distribution of patients or prescriptions. Although we were able to compare these models to each other, using discrete-event simulation software was challenging. We were limited in the number of variables we could measure. We discovered non-linear processes and feedback loops in both models that could not be adequately represented using discrete-event simulation software. Finally, interactions between entities in both models could not be modeled using this type of software. We have come to the conclusion that a more appropriate approach to modeling both the handwritten and electronic prescribing systems would be to use a complex adaptive systems approach using agent-based modeling or systems-based modeling.

  20. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
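The rescaling step itself is easy to illustrate in the continuous-time setting the theorem assumes: for a homogeneous Poisson spike train with rate r, the rescaled ISIs r * isi should be Exponential(1), which a KS statistic can check. The rate and sample size below are illustrative assumptions; the paper's point is precisely that this clean picture breaks down once time is discretized into bins.

```python
import math
import random

rng = random.Random(3)
rate = 5.0
isis = [rng.expovariate(rate) for _ in range(5000)]  # Poisson-process ISIs
rescaled = sorted(rate * t for t in isis)            # time rescaling

# One-sample KS statistic against the Exponential(1) CDF, 1 - exp(-x).
n = len(rescaled)
ks = max(max(abs((i + 1) / n - (1 - math.exp(-x))),
             abs(i / n - (1 - math.exp(-x))))
         for i, x in enumerate(rescaled))
print(round(ks, 4))  # small for a correctly specified rate
```

With binned (discrete-time) spike trains the rescaled ISIs are no longer exponential, which is what the article's two corrections address.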

  1. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  2. Implementing ARFORGEN: Installation Capability and Feasibility Study of Meeting ARFORGEN Guidelines

    DTIC Science & Technology

    2007-07-26

aligning troop requirements with the Army’s new strategic mission, the force stabilization element of ARFORGEN was developed to raise the morale of...a discrete event simulation model developed for the project to mirror the reset process. The Unit Reset model is implemented in Java as a discrete...and transportation. Further, the typical installation support staff is manned by a Table of Distribution and Allowance (TDA) designed to

  3. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs.) Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.

  4. A SAS-based solution to evaluate study design efficiency of phase I pediatric oncology trials via discrete event simulation.

    PubMed

    Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M

    2008-06-01

    Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.
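For reference, the standard 3+3 decision rule that serves as the comparator design can be stated in a few lines. This is the common formulation of the rule, assumed here for illustration; the "rolling 6" enrollment logic and the SAS macros themselves are not reproduced.

```python
def three_plus_three(dlt_first3, dlt_next3=None):
    """Dose-escalation action after a cohort of 3 (optionally expanded to 6).

    dlt_first3: dose-limiting toxicities among the first 3 patients.
    dlt_next3:  DLTs among the 3 expansion patients, if enrolled.
    """
    if dlt_first3 == 0:
        return "escalate"
    if dlt_first3 == 1:
        if dlt_next3 is None:
            return "expand to 6"          # 1/3 DLT: enroll 3 more at this dose
        return "escalate" if dlt_next3 == 0 else "stop"
    return "stop"                          # >= 2 DLTs: dose exceeds the MTD

print(three_plus_three(0))      # escalate
print(three_plus_three(1))      # expand to 6
print(three_plus_three(1, 1))   # stop
```

In the DES framing of the abstract, each call to this rule is one decision event, and accrual, evaluability, and review delays are the stochastic times between such events.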

  5. Real-time detection of organic contamination events in water distribution systems by principal components analysis of ultraviolet spectral data.

    PubMed

    Zhang, Jian; Hou, Dibo; Wang, Ke; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo

    2017-05-01

The detection of organic contaminants in water distribution systems is essential to protect public health from potential harmful compounds resulting from accidental spills or intentional releases. Existing methods for detecting organic contaminants are based on quantitative analyses such as chemical testing and gas/liquid chromatography, which are time- and reagent-consuming and involve costly maintenance. This study proposes a novel procedure based on discrete wavelet transform and principal component analysis for detecting organic contamination events from ultraviolet spectral data. Firstly, the spectrum of each observation is transformed using a discrete wavelet transform with a coiflet mother wavelet to capture abrupt changes along the wavelength. Principal component analysis is then employed to approximate the spectra based on captured and fused features. The value of Hotelling's T² statistic is calculated and used to detect outliers. An alarm of a contamination event is triggered by sequential Bayesian analysis when the outliers appear continuously in several observations. The effectiveness of the proposed procedure is tested on-line using a pilot-scale setup and experimental data.

  6. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication, in which communication and control updates occur only at discrete instants when a predefined condition is satisfied. Compared with time-driven distributed optimisation algorithms, the proposed algorithm therefore consumes less energy and incurs less communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge to the solution of the problem exponentially fast and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
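The event-triggered idea, communicate only when a local state has drifted past a trigger threshold since the last broadcast, can be sketched as follows. The dynamics, gain, and threshold are illustrative assumptions; this is not the paper's zero-gradient-sum algorithm.

```python
def run(states, target, gain=0.2, threshold=0.05, steps=100):
    """Local updates every step, but a broadcast only on trigger events."""
    broadcasts = 0
    last_sent = list(states)
    for _ in range(steps):
        for i in range(len(states)):
            states[i] += gain * (target - states[i])   # local update each step
            if abs(states[i] - last_sent[i]) > threshold:
                last_sent[i] = states[i]               # trigger: communicate
                broadcasts += 1
    return states, broadcasts

states, msgs = run([0.0, 2.0], target=1.0)
print(round(states[0], 3), round(states[1], 3), msgs)
```

Both states converge to the target, yet the number of broadcasts is far below the 200 a time-driven scheme (one per agent per step) would use, which is the energy/communication saving claimed in the abstract.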

  7. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approach the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  8. Particle Filter Based Tracking in a Detection Sparse Discrete Event Simulation Environment

    DTIC Science & Technology

    2007-03-01

obtained by disqualifying a large number of particles. Figure 31: Particle Disqualification via Sanitization...Research Approach...Thesis Organization...Detection Distribution Sampling...Estimated Position Calculation

  9. Simulating Mission Command for Planning and Analysis

    DTIC Science & Technology

    2015-06-01

mission plan. SUBJECT TERMS: Mission Planning, CPM, PERT, Simulation, DES, Simkit, Triangle Distribution, Critical Path...management tools that can be utilized to find the critical path in military projects. These are the Critical Path Method (CPM) and the Program Evaluation and

  10. An Object Description Language for Distributed Discrete Event Simulations

    DTIC Science & Technology

    2001-05-24

    …some tremendous improvements in simulation speed and fidelity. This dissertation describes a new programming language that is useful in creating distributed discrete event simulations.

  11. Spatial effects in discrete generation population models.

    PubMed

    Carrillo, C; Fife, P

    2005-02-01

    A framework is developed for constructing a large class of discrete generation, continuous space models of evolving single species populations and finding their bifurcating patterned spatial distributions. Our models involve, in separate stages, the spatial redistribution (through movement laws) and local regulation of the population; and the fundamental properties of these events in a homogeneous environment are found. Emphasis is placed on the interaction of migrating individuals with the existing population through conspecific attraction (or repulsion), as well as on random dispersion. The nature of the competition of these two effects in a linearized scenario is clarified. The bifurcation of stationary spatially patterned population distributions is studied, with special attention given to the role played by that competition.

  12. Distribution of breakage events in random packings of rodlike particles.

    PubMed

    Grof, Zdeněk; Štěpánek, František

    2013-07-01

    Uniaxial compaction and breakage of rodlike particle packing has been studied using a discrete element method simulation. A scaling relationship between the applied stress, the number of breakage events, and the number-mean particle length has been derived and compared with computational experiments. Based on results for a wide range of intrinsic particle strengths and initial particle lengths, it seems that a single universal relation can be used to describe the incidence of breakage events during compaction of rodlike particle layers.

  13. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
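    The quantization scheme described above can be sketched concretely. This is a minimal illustration under stated assumptions (uniform quantum bands, a pre-sampled trajectory), not the DEVS/HLA interface: an update is emitted only when the state crosses a quantum-level boundary, rather than at every sample.

```python
def quantized_updates(samples, quantum):
    """Emit (time, quantized_value) only at quantum-level crossings."""
    updates = []
    last_level = None
    for t, x in samples:
        level = int(x // quantum)        # which quantum band the state is in
        if level != last_level:          # boundary crossing -> emit an update
            updates.append((t, level * quantum))
            last_level = level
    return updates

# A slowly rising trajectory sampled at 6 points, with quantum size 1.0:
traj = [(0, 0.1), (1, 0.4), (2, 0.9), (3, 1.1), (4, 1.6), (5, 2.2)]
msgs = quantized_updates(traj, quantum=1.0)
```

Six samples collapse to three transmitted updates (at t = 0, 3, and 5), which is the message-traffic reduction the abstract refers to.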

  14. Network Science Research Laboratory (NSRL) Discrete Event Toolkit

    DTIC Science & Technology

    2016-01-01

    ARL-TR-7579, January 2016. US Army Research Laboratory, Network Science Research Laboratory (NSRL) Discrete Event Toolkit, by Theron Trout and Andrew J Toth, Computational and Information Sciences Directorate, ARL.

  15. ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION

    DTIC Science & Technology

    2016-03-24

    AFIT-ENV-MS-16-M-166: Analysis of Inpatient Hospital Staff Mental Workload by Means of Discrete-Event Simulation, by Erich W…

  16. Diagnosability of Stochastic Chemical Kinetic Systems: A Discrete Event Systems Approach (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    USA. E-mail: thorsley@u.washington.edu. This research is partially supported by the 2006 AFOSR MURI award "High Confidence Design for Distributed…" The probabilities of occurrence of finite sample paths ω are defined recursively: π_ε(x) := π_0(x), and π_{ωσ}(x′) := Σ_{x∈X} π_ω(x) r(x′, σ | x) e^{−r(x′, σ | x)…} The quantity e^{−r_x τ} (2) is the probability that the arrival time of the first event is greater than τ. For finite sample paths with strings…

  17. Supervisory Control of Discrete Event Systems Modeled by Mealy Automata with Nondeterministic Output Functions

    NASA Astrophysics Data System (ADS)

    Ushio, Toshimitsu; Takai, Shigemasa

    Supervisory control is a general framework for the logical control of discrete event systems. A supervisor assigns a set of disabled controllable events based on observed events so that the controlled discrete event system generates specified languages. In conventional supervisory control, it is assumed that observed events are determined deterministically by internal events. However, this assumption does not hold in discrete event systems with sensor errors or in mobile systems, where each observed event depends not only on an internal event but also on the state just before the occurrence of the internal event. In this paper, we model such a discrete event system by a Mealy automaton with a nondeterministic output function. We introduce two kinds of supervisors: one assigns each control action based on a permissive policy, and the other based on an anti-permissive one. We show necessary and sufficient conditions for the existence of each supervisor. Moreover, we discuss the relationship between the supervisors in the case that the output function is deterministic.

  18. Modeling Anti-Air Warfare With Discrete Event Simulation and Analyzing Naval Convoy Operations

    DTIC Science & Technology

    2016-06-01

    Modeling Anti-Air Warfare with Discrete Event Simulation and Analyzing Naval Convoy Operations, Master's thesis by Ali E. Opcin, June 2016; thesis advisor: Arnold H. Buss. In this study, a discrete event simulation (DES) was built by modeling ships and their sensors and weapons to simulate convoy operations under…

  19. Generating Discrete Power-Law Distributions from a Death- Multiple Immigration Population Process

    NASA Astrophysics Data System (ADS)

    Matthews, J. O.; Jakeman, E.; Hopcraft, K. I.

    2003-04-01

    We consider the evolution of a simple population process governed by deaths and multiple immigrations that arrive with rates particular to their order. For a particular choice of rates, the equilibrium solution has a discrete power-law form. The model is a generalization of a process investigated previously where immigrants arrived in pairs [1]. The general properties of this model are discussed in a companion paper. The population is initiated with precisely M individuals present and evolves to an equilibrium distribution with a power-law tail. However, the power-law tails of the equilibrium distribution are established immediately, so that moments and correlation properties of the population are undefined for any non-zero time. The technique we develop to characterize this process utilizes external monitoring that counts the emigrants leaving the population in specified time intervals. This counting distribution also possesses a power-law tail for all sampling times, and the resulting time series exhibits two features worthy of note: a large variation in the strength of the signal, reflecting the power-law PDF, and intermittency of the emissions. We show that counting with a detector of finite dynamic range naturally regularizes the fluctuations, in effect 'clipping' the events. All previously undefined characteristics, such as the mean, the autocorrelation, and the distributions of the time to the first event and the time between events, are then well defined and derived. These properties, although obtained by discarding much data, nevertheless possess embedded power-law regimes that characterize the population in a way analogous to box-averaging determination of fractal dimension.
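    A death / multiple-immigration process of this kind is straightforward to simulate with a standard Gillespie-style loop. The sketch below is illustrative only: the immigration rates `alpha` are arbitrary placeholders, not the paper's particular power-law-generating choice.

```python
import random

def simulate(n0, mu, alpha, t_end, rng):
    """Gillespie simulation: per-capita death rate mu; groups of r
    immigrants arrive at rate alpha[r]. Returns population at t_end."""
    n, t = n0, 0.0
    while True:
        rates = {("death", 0): mu * n}
        for r, a in alpha.items():
            rates[("imm", r)] = a
        total = sum(rates.values())
        t += rng.expovariate(total)          # time to next event
        if t > t_end:
            return n
        u, acc = rng.random() * total, 0.0   # pick an event by its rate
        for (kind, r), rate in rates.items():
            acc += rate
            if u <= acc:
                n = n - 1 if kind == "death" else n + r
                break

rng = random.Random(1)
final = simulate(n0=10, mu=1.0, alpha={1: 0.5, 2: 0.25, 4: 0.125},
                 t_end=5.0, rng=rng)
```

Since the death rate vanishes at n = 0 while immigration persists, the population can never go negative.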

  20. In Defense of the Chi-Square Continuity Correction.

    ERIC Educational Resources Information Center

    Veldman, Donald J.; McNemar, Quinn

    Published studies of the sampling distribution of chi-square with and without Yates' correction for continuity have been interpreted as discrediting the correction. Yates' correction actually produces a biased chi-square value which in turn yields a better estimate of the exact probability of the discrete event concerned when used in conjunction…
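    The comparison at issue is easy to reproduce numerically. The following is a minimal, pure-stdlib sketch of the Pearson chi-square statistic on a 2x2 table with and without Yates' correction (the example counts are invented for illustration); the correction subtracts 0.5 from each |observed − expected| before squaring, biasing the statistic downward.

```python
def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]],
    optionally with Yates' continuity correction."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    chi2 = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n                  # expected count under independence
        diff = abs(obs - exp)
        if yates:
            diff = max(diff - 0.5, 0.0)    # continuity correction
        chi2 += diff * diff / exp
    return chi2

plain = chi2_2x2(10, 2, 3, 9)
corrected = chi2_2x2(10, 2, 3, 9, yates=True)
```

For this table the corrected statistic (about 6.04) is smaller than the uncorrected one (about 8.22), illustrating the deliberate bias the abstract defends.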

  1. Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions

    NASA Technical Reports Server (NTRS)

    Mcguire, Stephen C.

    1987-01-01

    The use is described of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, Fe approaches 0 at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.

  2. DynamO: a free O(N) general event-driven molecular dynamics simulator.

    PubMed

    Bannerman, M N; Sargant, R; Lue, L

    2011-11-30

    Molecular dynamics algorithms for systems of particles interacting through discrete or "hard" potentials are fundamentally different from the methods for continuous or "soft" potential systems. Although many software packages have been developed for continuous potential systems, software for discrete potential systems based on event-driven algorithms is relatively scarce and specialized. We present DynamO, a general event-driven simulation package, which displays the optimal O(N) asymptotic scaling of the computational cost with the number of particles N, rather than the O(N log N) scaling found in most standard algorithms. DynamO provides reference implementations of the best available event-driven algorithms. These techniques allow the rapid simulation of both complex and large (>10^6 particles) systems for long times. The performance of the program is benchmarked for elastic hard sphere systems, homogeneous cooling and sheared inelastic hard spheres, and equilibrium Lennard-Jones fluids. This software and its documentation are distributed under the GNU General Public License and can be freely downloaded from http://marcusbannerman.co.uk/dynamo. Copyright © 2011 Wiley Periodicals, Inc.
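    The core calculation in event-driven ("hard" potential) molecular dynamics is the exact collision time between a pair of spheres, found by solving |r + vt| = σ for the earliest positive root. The sketch below is a generic textbook version of that calculation, not DynamO's implementation.

```python
import math

def collision_time(r, v, sigma):
    """Earliest time two hard spheres (relative position r, relative
    velocity v, contact distance sigma) collide; math.inf if never."""
    b = sum(ri * vi for ri, vi in zip(r, v))   # r . v
    if b >= 0:                                 # moving apart: no collision
        return math.inf
    v2 = sum(vi * vi for vi in v)
    r2 = sum(ri * ri for ri in r)
    disc = b * b - v2 * (r2 - sigma * sigma)
    if disc < 0:                               # grazing miss
        return math.inf
    return (-b - math.sqrt(disc)) / v2         # earliest positive root

# Head-on approach: separation 3, closing speed 1, contact at distance 1.
t = collision_time(r=(3.0, 0.0), v=(-1.0, 0.0), sigma=1.0)
```

In the head-on case the spheres touch after the gap shrinks from 3 to 1, i.e. at t = 2.0; a receding pair returns infinity and generates no event.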

  3. A Simulation of Alternatives for Wholesale Inventory Replenishment

    DTIC Science & Technology

    2016-03-01

    algorithmic details. The last method is a mixed-integer, linear optimization model. Keywords: simulation; event graphs; reorder point; fill rate; backorder; discrete event simulation; wholesale inventory optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find the fill rates achieved for each National Item…

  4. VME rollback hardware for time warp multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Robb, Michael J.; Buzzell, Calvin A.

    1992-01-01

    The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.

  5. An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation

    DOE PAGES

    Nutaro, James

    2014-11-03

    In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system built within the adevs simulation package supports models with state-events and time-events, including differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.

  6. Cascadia Slow Earthquakes: Strategies for Time Independent Inversion of Displacement Fields

    NASA Astrophysics Data System (ADS)

    Szeliga, W. M.; Melbourne, T. I.; Miller, M. M.; Santillan, V. M.

    2004-12-01

    Continuous observations using Global Positioning System geodesy (CGPS) have revealed periodic slow or silent earthquakes along the Cascadia subduction zone with a spectrum of timing and periodicity. These creep events perturb time series of GPS observations and yield coherent displacement fields that relate to the extent and magnitude of fault displacement. In this study, time independent inversions of the surface displacement fields that accompany eight slow earthquakes characterize slip distributions along the plate interface for each event. The inversions employed in this study utilize Okada's elastic dislocation model and a non-negative least squares approach. Methodologies for optimizing the slip distribution smoothing parameter for a particular station distribution have also been investigated, significantly reducing the number of possible slip distributions and the range of estimates for total moment release for each event. The discretized slip distribution calculated for multiple creep events identifies areas of the Cascadia plate interface where slip persistently recurs. The current hypothesis, that slow earthquakes are modulated by forced fluid flow, leads to the possibility that some regions of the Cascadia plate interface may display fault patches preferentially exploited by fluid flow. Thus, the identification of regions of the plate interface that repeatedly slip during slow events may yield important information regarding the identification of these fluid pathways.

  7. Rheology of U-Shaped Granular Particles

    NASA Astrophysics Data System (ADS)

    Hill, Matthew; Franklin, Scott

    We study the response of cylindrical samples of U-shaped granular particles (staples) to extensional loads. Samples elongate in discrete bursts (events) corresponding to particles rearranging and re-entangling. Previous research on samples of constant cross-sectional area found a Weibullian weakest-link theory could explain the distribution of yield points. We now vary the cross-sectional area, and find that the maximum yield pressure (force/area) is a function of particle number density and independent of area. The probability distribution functions of important event characteristics -- the stress increase before an event and the stress released during an event -- both fall off inversely with magnitude, reminiscent of avalanche dynamics. Fourier transforms of the fluctuating force (or stress) scale inversely with frequency, suggesting dry friction plays a role in the rearrangements. Finally, there is some evidence that dynamics are sensitive to the stiffness of the tensile testing machine, although an explanation for this behavior is unknown.

  8. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
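    In the exponential special case mentioned above (the classic two-state "telegraph" model), the steady-state mRNA distribution is easy to sample by direct Monte Carlo. The sketch below is a generic illustration with invented parameters, not the authors' population density method: the gene toggles between OFF and ON with mean dwell times tau0 and tau1, transcribes at rate k while ON, and each mRNA degrades at rate gamma.

```python
import random

def sample_mrna(tau0, tau1, k, gamma, t_end, rng):
    """One Gillespie realization of the two-state gene expression model;
    returns the mRNA copy number at time t_end."""
    t, on, m = 0.0, False, 0
    while t < t_end:
        rates = [gamma * m,                    # degradation of one mRNA
                 k if on else 0.0,             # transcription (ON state only)
                 1.0 / (tau1 if on else tau0)] # switch out of current state
        total = sum(rates)
        t += rng.expovariate(total)
        if t >= t_end:
            break
        u = rng.random() * total
        if u < rates[0]:
            m -= 1
        elif u < rates[0] + rates[1]:
            m += 1
        else:
            on = not on
    return m

rng = random.Random(0)
samples = [sample_mrna(2.0, 1.0, 5.0, 1.0, 50.0, rng) for _ in range(200)]
mean_m = sum(samples) / len(samples)
```

With these parameters the stationary mean is k * tau1 / (gamma * (tau0 + tau1)) = 5/3, and the empirical mean of the 200 samples lands near that value.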

  9. Context Fear Learning Specifically Activates Distinct Populations of Neurons in Amygdala and Hypothalamus

    ERIC Educational Resources Information Center

    Trogrlic, Lidia; Wilson, Yvette M.; Newman, Andrew G.; Murphy, Mark

    2011-01-01

    The identity and distribution of neurons that are involved in any learning or memory event is not known. In previous studies, we identified a discrete population of neurons in the lateral amygdala that show learning-specific activation of a c-"fos"-regulated transgene following context fear conditioning. Here, we have extended these studies to…

  10. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the method by which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  11. A Process Improvement Study on a Military System of Clinics to Manage Patient Demand and Resource Utilization Using Discrete-Event Simulation, Sensitivity Analysis, and Cost-Benefit Analysis

    DTIC Science & Technology

    2015-03-12

    …Clinic, the Audiology Clinic, and the Optometry Clinic. The overarching research goal is to identify feasible solutions to…

  12. Extreme climatic events constrain space use and survival of a ground-nesting bird.

    PubMed

    Tanner, Evan P; Elmore, R Dwayne; Fuhlendorf, Samuel D; Davis, Craig A; Dahlgren, David K; Orange, Jeremy P

    2017-05-01

    Two fundamental issues in ecology are understanding what influences the distribution and abundance of organisms through space and time. While it is well established that broad-scale patterns of abiotic and biotic conditions affect organisms' distributions and population fluctuations, discrete events may be important drivers of space use, survival, and persistence. These discrete extreme climatic events can constrain populations and space use at fine scales beyond that which is typically measured in ecological studies. Recently, a growing body of literature has identified thermal stress as a potential mechanism in determining space use and survival. We sought to determine how ambient temperature at fine temporal scales affected survival and space use for a ground-nesting quail species (Colinus virginianus; northern bobwhite). We modeled space use across an ambient temperature gradient (ranging from -20 to 38 °C) through a maxent algorithm. We also used Andersen-Gill proportional hazard models to assess the influence of ambient temperature-related variables on survival through time. Estimated available useable space ranged from 18.6% to 57.1% of the landscape depending on ambient temperature. The lowest and highest ambient temperature categories (<-15 °C and >35 °C, respectively) were associated with the least amount of estimated useable space (18.6% and 24.6%, respectively). Range overlap analysis indicated dissimilarity in areas where Colinus virginianus were restricted during times of thermal extremes (range overlap = 0.38). This suggests that habitat under a given condition is not necessarily a habitat under alternative conditions. Further, we found survival was most influenced by weekly minimum ambient temperatures. 
Our results demonstrate that ecological constraints can occur along a thermal gradient and that understanding the effects of these discrete events and how they change over time may be more important to conservation of organisms than are average and broad-scale conditions as typically measured in ecological studies. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  13. Changes in the Martian atmosphere induced by auroral electron precipitation

    NASA Astrophysics Data System (ADS)

    Shematovich, V. I.; Bisikalo, D. V.; Gérard, J.-C.; Hubert, B.

    2017-09-01

    Typical auroral events in the Martian atmosphere, such as discrete and diffuse auroral emissions detected by UV spectrometers onboard ESA Mars Express and NASA MAVEN, are investigated. Auroral electron kinetic energy distribution functions and energy spectra of the upward and downward electron fluxes are obtained by electron transport calculations using the kinetic Monte Carlo model. These characteristics of auroral electron fluxes make it possible to calculate both the precipitation-induced changes in the atmosphere and the observed manifestations of auroral events on Mars, in particular the intensities of discrete and diffuse auroral emissions in the UV and visible wavelength ranges (Soret et al., 2016; Bisikalo et al., 2017; Gérard et al., 2017). For these conditions of auroral events, the analysis is carried out, and the contribution of the fluxes of precipitating electrons to the heating and ionization of the Martian atmosphere is estimated. Numerical calculations show that in the case of discrete auroral events the effect of the residual crustal magnetic field leads to a significant increase in the upward fluxes of electrons, which causes a decrease in the rates of heating and ionization of the atmospheric gas in comparison with calculations that do not take the residual magnetic field into account. It is shown that all the above-mentioned impact factors of auroral electron precipitation processes should be taken into account both in photochemical models of the Martian atmosphere and in the interpretation of observations of the chemical composition and its variations using the ACS instrument onboard ExoMars.

  14. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use is considered of massively parallel architectures to execute discrete-event simulations of what is termed self-initiating models. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance is considered of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  15. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
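    The linearity result in the beta/binomial case follows from conjugacy: with a Beta(a, b) prior on the jump probability and binomially distributed jump counts, the MMSE estimate (the posterior mean) is an affine function of the observed counts. The sketch below illustrates only this conjugacy fact with invented numbers, not the paper's full DTJP machinery.

```python
def mmse_estimate(a, b, counts, n_per_step):
    """Posterior mean of the jump probability p under a Beta(a, b) prior,
    after observing jump counts (each from n_per_step trials): the
    posterior is Beta(a + sum(k), b + n - sum(k)), so the MMSE estimate
    is (a + sum(k)) / (a + b + n), linear in the counts."""
    k = sum(counts)
    n = n_per_step * len(counts)
    return (a + k) / (a + b + n)

est = mmse_estimate(a=2.0, b=3.0, counts=[1, 0, 2, 1], n_per_step=4)
```

Each additional observed jump raises the estimate by exactly 1 / (a + b + n), which is the linearity in the observations that the abstract highlights.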

  16. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.

  17. Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems.

    PubMed

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2017-07-01

    This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.
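    The update-only-on-violation idea can be sketched with a generic threshold rule. This is a hedged illustration, not the paper's HDP-learned trigger: the plant, gain, and threshold below are invented, and the control is recomputed only when the gap between the current state and the state at the last trigger exceeds the threshold.

```python
def simulate(x0, gain, threshold, steps):
    """Scalar linear plant x <- 0.9 x + u under event-triggered
    state feedback u = -gain * x_trig; returns final state and the
    number of control updates actually performed."""
    x, x_trig = x0, x0
    u = -gain * x_trig
    triggers = 0
    for _ in range(steps):
        if abs(x - x_trig) > threshold:   # event condition violated
            x_trig = x
            u = -gain * x                 # recompute the control law
            triggers += 1
        x = 0.9 * x + u                   # otherwise hold the old control
    return x, triggers

x_final, n_updates = simulate(x0=1.0, gain=0.5, threshold=0.05, steps=50)
```

The state settles near the origin while the control is updated on only a fraction of the 50 steps, which is the computation/transmission saving over periodic updating.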

  18. Improving the Teaching of Discrete-Event Control Systems Using a LEGO Manufacturing Prototype

    ERIC Educational Resources Information Center

    Sanchez, A.; Bucio, J.

    2012-01-01

    This paper discusses the usefulness of employing LEGO as a teaching-learning aid in a post-graduate-level first course on the control of discrete-event systems (DESs). The final assignment of the course is presented, which asks students to design and implement a modular hierarchical discrete-event supervisor for the coordination layer of a…

  19. DEVS representation of dynamical systems - Event-based intelligent control. [Discrete Event System Specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.

    1989-01-01

    It is shown how systems can be advantageously represented as discrete-event models by using DEVS (discrete-event system specification), a set-theoretic formalism. Such DEVS models provide a basis for the design of event-based logic control. In this control paradigm, the controller expects to receive confirming sensor responses to its control commands within definite time windows determined by its DEVS model of the system under control. The event-based control paradigm is applied in advanced robotics and intelligent automation, showing how classical process control can be readily interfaced with rule-based symbolic reasoning systems.

  20. The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations

    DTIC Science & Technology

    2011-12-01

    modeling packages that illustrate the differences between discrete-time simulation (DTS), commonly referred to as "time-step," and discrete-event simulation (DES), often referred to as "next-event" (Law and Kelton 2000). Many combat models use DTS as their simulation time advance mechanism.

  1. A Summary of Some Discrete-Event System Control Problems

    NASA Astrophysics Data System (ADS)

    Rudie, Karen

A summary of the area of control of discrete-event systems is given. In this research area, automata and formal language theory are used as tools to model physical problems that arise in technological and industrial systems. The key ingredients to discrete-event control problems are a process that can be modeled by an automaton, events in that process that cannot be disabled or prevented from occurring, and a controlling agent that manipulates the events that can be disabled to guarantee that the process under control either generates all the strings in some prescribed language or as many strings as possible in some prescribed language. When multiple controlling agents act on a process, decentralized control problems arise. In decentralized discrete-event systems, it is presumed that the agents effecting control cannot each see all event occurrences. Partial observation leads to some problems that cannot be solved in polynomial time and some others that are not even decidable.
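    The controllable/uncontrollable-event distinction described above can be sketched as a small state-based fixpoint (a simplification of supervisory control; the automaton, event names, and bad-state set below are hypothetical examples, not from the survey):

```python
def supervise(transitions, controllable, bad_states):
    """State-based supervisory-control sketch.  transitions maps
    (state, event) -> next_state.  Uncontrollable events cannot be
    disabled, so any state from which an uncontrollable event leads
    into the bad region is itself bad; the supervisor then disables
    the controllable events that enter the (grown) bad region."""
    bad = set(bad_states)
    changed = True
    while changed:  # grow the bad region backward over uncontrollable events
        changed = False
        for (s, e), t in transitions.items():
            if t in bad and e not in controllable and s not in bad:
                bad.add(s)
                changed = True
    disabled = {}
    for (s, e), t in transitions.items():
        if s not in bad and t in bad and e in controllable:
            disabled.setdefault(s, set()).add(e)
    return disabled
```

    For example, if a machine's uncontrollable "break" event leads from "busy" to a forbidden "down" state, the supervisor must already disable the controllable "start" event at "idle", since once "busy" is entered the breakdown cannot be prevented.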

  2. A networks-based discrete dynamic systems approach to volcanic seismicity

    NASA Astrophysics Data System (ADS)

    Suteanu, Mirela

    2013-04-01

    The detection and relevant description of pattern change concerning earthquake events is an important, but challenging task. In this paper, earthquake events related to volcanic activity are considered manifestations of a dynamic system evolving over time. The system dynamics is seen as a succession of events with point-like appearance both in time and in space. Each event is characterized by a position in three-dimensional space, a moment of occurrence, and an event size (magnitude). A weighted directed network is constructed to capture the effects of earthquakes on subsequent events. Each seismic event represents a node. Relations among events represent edges. Edge directions are given by the temporal succession of the events. Edges are also characterized by weights reflecting the strengths of the relation between the nodes. Weights are calculated as a function of (i) the time interval separating the two events, (ii) the spatial distance between the events, (iii) the magnitude of the earliest event among the two. Different ways of addressing weight components are explored, and their implications for the properties of the produced networks are analyzed. The resulting networks are then characterized in terms of degree- and weight distributions. Subsequently, the distribution of system transitions is determined for all the edges connecting related events in the network. Two- and three-dimensional diagrams are constructed to reflect transition distributions for each set of events. Networks are thus generated for successive temporal windows of different size, and the evolution of (a) network properties and (b) system transition distributions are followed over time and compared to the timeline of documented geologic processes. Applications concerning volcanic seismicity on the Big Island of Hawaii show that this approach is capable of revealing novel aspects of change occurring in the volcanic system on different scales in time and in space.
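    The network construction described above can be sketched as follows; the particular weight function (exponential decay in elapsed time and spatial separation, scaled by the earlier event's magnitude) is an illustrative assumption, not the one used in the study:

```python
import math

def build_event_network(events, tau=1.0, length=1.0):
    """Build a weighted directed network from point events.
    events: list of (t, x, y, z, magnitude).  Edge direction follows
    temporal succession; the weight decays with the time interval and
    spatial distance between the two events and grows with the
    magnitude of the earlier event (illustrative weight choice)."""
    edges = {}
    for i, (ti, xi, yi, zi, mi) in enumerate(events):
        for j, (tj, xj, yj, zj, mj) in enumerate(events):
            if tj <= ti:
                continue  # edges point forward in time only
            dt = tj - ti
            d = math.dist((xi, yi, zi), (xj, yj, zj))
            edges[(i, j)] = mi * math.exp(-dt / tau) * math.exp(-d / length)
    return edges
```

    Degree and weight distributions of the resulting network can then be computed per temporal window, as in the analysis the abstract outlines.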

  3. Application of Bayesian Reliability Concepts to Cruise Missile Electronic Components

    DTIC Science & Technology

    1989-09-01

and contrast them with the more prevalent classical inference view. This literature review will consider current...events on the basis of whatever evidence is currently available. Then, if additional evidence is subsequently obtained, the initial probabilities are...Chay contends there is no longer any need to approximate continuous prior distributions through discretization, because current computer calculations

  4. Distributed solid state programmable thermostat/power controller

    NASA Technical Reports Server (NTRS)

    Smith, Dennis A. (Inventor); Alexander, Jane C. (Inventor); Howard, David E. (Inventor)

    2008-01-01

    A self-contained power controller having a power driver switch, programmable controller, communication port, and environmental parameter measuring device coupled to a controllable device. The self-contained power controller needs only a single voltage source to power discrete devices, analog devices, and the controlled device. The programmable controller has a run mode which, when selected, upon the occurrence of a trigger event changes the state of a power driver switch and wherein the power driver switch is maintained by the programmable controller at the same state until the occurrence of a second event.

  5. Enhancement of the Logistics Battle Command Model: Architecture Upgrades and Attrition Module Development

    DTIC Science & Technology

    2017-01-05

module. The LBC model is a stochastic, discrete event model programmed in Java, building largely on the Simkit library. The primary purpose of the LBC model is to support...equations makes them incompatible with the discrete event construct of LBC. Bullard further advances this methodology by developing a stochastic

  6. A Simulation of Readiness-Based Sparing Policies

    DTIC Science & Technology

    2017-06-01

variant of a greedy heuristic algorithm to set stock levels and estimate overall WS availability. Our discrete event simulation is then used to test the...available in the optimization tools.

7. Evaluation of the Navy's Sea/Shore Flow Policy

    DTIC Science & Technology

    2016-06-01

CNA developed an independent Discrete-Event Simulation model to evaluate and assess the effect of...a more steady manning level, but the variability remains, even if the system is optimized. In building a Discrete-Event Simulation model, we...steady-state model. In FY 2014, CNA developed a Discrete-Event Simulation model to evaluate the impact of sea/shore flow policy (the DES-SSF model

  8. Discrete event simulation and the resultant data storage system response in the operational mission environment of Jupiter-Saturn /Voyager/ spacecraft

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1978-01-01

    The Data Storage Subsystem Simulator (DSSSIM) simulating (by ground software) occurrence of discrete events in the Voyager mission is described. Functional requirements for Data Storage Subsystems (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of outputs associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.

  9. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
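    A sequential sketch of the event-horizon idea behind Breathing Time Buckets (omitting rollback, parallelism, and the rest of the SPEEDES machinery) might look like this; the event representation and handler interface are assumptions for illustration:

```python
import heapq

def breathing_time_buckets(pending, handler):
    """In each cycle, events are processed in timestamp order up to
    the 'event horizon' -- the earliest timestamp of any new event
    generated during the cycle -- so only causally safe events are
    committed.  pending: list of (time, data) tuples;
    handler(time, data) returns a list of newly generated events."""
    heapq.heapify(pending)
    committed = []
    while pending:
        horizon = float("inf")
        generated = []
        cycle = []
        # process until the next event would cross the horizon
        while pending and pending[0][0] < horizon:
            t, data = heapq.heappop(pending)
            cycle.append((t, data))
            for ev in handler(t, data):
                horizon = min(horizon, ev[0])  # horizon shrinks as events appear
                generated.append(ev)
        committed.extend(cycle)
        for ev in generated:
            heapq.heappush(pending, ev)
    return committed
```

    Because new events always carry timestamps later than the event that generated them, everything popped before the horizon closes a cycle is safe to commit; in the parallel algorithm this is what lets time cycles "breathe" to a naturally adaptive size.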

  10. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horiike, S.; Okazaki, Y.

This paper describes a performance estimation tool developed for modeling and simulation of open distributed energy management systems to support their design. The approach of discrete event simulation with detailed models is considered for efficient performance estimation. The tool includes basic models constituting a platform, e.g., Ethernet, communication protocol, operating system, etc. Application software is modeled by specifying CPU time, disk access size, communication data size, etc. Different types of system configurations for various system activities can be easily studied. Simulation examples show how the tool is utilized for the efficient design of open distributed energy management systems.

  12. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

The paper deals with the application of modeling and simulation tools to the optimization of business processes, especially the optimization of signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete event simulation and enables the creation of a visual model of production and distribution processes.

13. Evaluation of the Navy's Sea/Shore Flow Policy

    DTIC Science & Technology

    2016-06-01

CNA developed an independent Discrete-Event Simulation model to evaluate and assess the effect of alternative sea/shore flow policies. In this study...remains, even if the system is optimized. In building a Discrete-Event Simulation model, we discovered key factors that should be included in the...Discrete-Event Simulation model to evaluate the impact of sea/shore flow policy (the DES-SSF model) and compared the results with the SSFM for one

  14. ReSTART: A Novel Framework for Resource-Based Triage in Mass-Casualty Events.

    PubMed

    Mills, Alex F; Argon, Nilay T; Ziya, Serhan; Hiestand, Brian; Winslow, James

    2014-01-01

Current guidelines for mass-casualty triage do not explicitly use information about resource availability. Even though this limitation has been widely recognized, how it should be addressed remains largely unexplored. The authors present a novel framework developed using operations research methods to account for resource limitations when determining priorities for transportation of critically injured patients. To illustrate how this framework can be used, they also develop two specific example methods, named ReSTART and Simple-ReSTART, both of which extend the widely adopted triage protocol Simple Triage and Rapid Treatment (START) by using a simple calculation to determine priorities based on the relative scarcity of transportation resources. The framework is supported by three techniques from operations research: mathematical analysis, optimization, and discrete-event simulation. The authors' algorithms were developed using mathematical analysis and optimization and then extensively tested using 9,000 discrete-event simulations on three distributions of patient severity (representing low, random, and high acuity). For each incident, the expected number of survivors was calculated under START, ReSTART, and Simple-ReSTART. A web-based decision support tool was constructed to help providers make prioritization decisions in the aftermath of mass-casualty incidents based on ReSTART. In simulations, ReSTART resulted in significantly lower mortality than START regardless of which severity distribution was used (paired t test, p<.01). Mean decrease in critical mortality, the percentage of immediate and delayed patients who die, was 8.5% for the low-acuity distribution (range −2.2% to 21.1%), 9.3% for the random distribution (range −0.2% to 21.2%), and 9.1% for the high-acuity distribution (range −0.7% to 21.1%). 
Although the critical mortality improvement due to ReSTART was different for each of the three severity distributions, the variation was less than 1 percentage point, indicating that the ReSTART policy is relatively robust to different severity distributions. Taking resource limitations into account in mass-casualty triage has the potential to increase the expected number of survivors. Further validation is required before field implementation; however, the framework proposed here can serve as the foundation for future work in this area. © 2014.

  15. Pore invasion dynamics during fluid front displacement in porous media determine functional pore size distribution and phase entrapment

    NASA Astrophysics Data System (ADS)

    Moebius, F.; Or, D.

    2012-12-01

Dynamics of fluid fronts in porous media shape transport properties of the unsaturated zone and affect management of petroleum reservoirs and their storage properties. What appears macroscopically as smooth and continuous motion of a displacement fluid front may involve numerous rapid interfacial jumps often resembling avalanches of invasion events. Direct observations using a high-speed camera and pressure sensors in sintered glass micro-models provide new insights into the influence of flow rates, pore size, and gravity on invasion events and on burst size distribution. Fundamental differences emerge between geometrically-defined pores and "functional" pores invaded during a single burst (invasion event). The waiting times distribution of individual invasion events and decay times of inertial oscillations (following a rapid interfacial jump) are characteristics of different displacement regimes. An invasion percolation model with gradients, including the role of inertia, provides a framework for linking flow regimes with invasion sequences and phase entrapment. Model results were compared with measurements and with early studies on invasion burst sizes and waiting times distribution during slow drainage processes by Måløy et al. [1992]. The study provides new insights into the discrete invasion events and their weak links with geometrically-deduced pore geometry. Results highlight factors controlling pore invasion events that exert strong influence on macroscopic phenomena such as front morphology and residual phase entrapment shaping hydraulic properties after the passage of a fluid front.

  16. Evolution of Particle Size Distributions in Fragmentation Over Time

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs with discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as the equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. 
The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model to the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.

  17. An algebra of discrete event processes

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Meyer, George

    1991-01-01

    This report deals with an algebraic framework for modeling and control of discrete event processes. The report consists of two parts. The first part is introductory, and consists of a tutorial survey of the theory of concurrency in the spirit of Hoare's CSP, and an examination of the suitability of such an algebraic framework for dealing with various aspects of discrete event control. To this end a new concurrency operator is introduced and it is shown how the resulting framework can be applied. It is further shown that a suitable theory that deals with the new concurrency operator must be developed. In the second part of the report the formal algebra of discrete event control is developed. At the present time the second part of the report is still an incomplete and occasionally tentative working paper.

  18. Parallel discrete-event simulation schemes with heterogeneous processing elements.

    PubMed

    Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung

    2014-07-01

    To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model is the model for the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme). The Family model is the model for the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered. In the first kind computing capacities of PEs are not much different, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on the complex networks shows the synchronizability and scalability regardless of the kinds of PEs. The EW scheme never shows the synchronizability for the random configuration of PEs of the first kind. However, by regularizing the arrangement of PEs of the first kind, the EW scheme is made to show the synchronizability. In contrast, EW scheme never shows the synchronizability for any configuration of PEs of the second kind.

  19. Distributed Event-Based Set-Membership Filtering for a Class of Nonlinear Systems With Sensor Saturations Over Sensor Networks.

    PubMed

    Ma, Lifeng; Wang, Zidong; Lam, Hak-Keung; Kyriakoulis, Nikos

    2017-11-01

In this paper, the distributed set-membership filtering problem is investigated for a class of discrete time-varying systems with an event-based communication mechanism over sensor networks. The system under consideration is subject to sector-bounded nonlinearity, unknown but bounded noises, and sensor saturations. Each intelligent sensing node transmits the data to its neighbors only when a certain triggering condition is violated. By means of a set of recursive matrix inequalities, sufficient conditions are derived for the existence of the desired distributed event-based filter which is capable of confining the system state in certain ellipsoidal regions centered at the estimates. Within the established theoretical framework, two additional optimization problems are formulated: one is to seek the minimal ellipsoids (in the sense of matrix trace) for the best filtering performance, and the other is to maximize the triggering threshold so as to reduce the triggering frequency with satisfactory filtering performance. A numerically attractive chaos algorithm is employed to solve the optimization problems. Finally, an illustrative example is presented to demonstrate the effectiveness and applicability of the proposed algorithm.

  20. Microflares and the Statistics of X-Ray Flares

    NASA Technical Reports Server (NTRS)

    Hannah, I. G.; Hudson, H. S.; Battaglia, M.; Christe, S.; Kasparova, J.; Krucker, S.; Kundu, M. R.; Veronig, A.

    2011-01-01

    This review surveys the statistics of solar X-ray flares, emphasising the new views that Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) has given us of the weaker events (the microflares). The new data reveal that these microflares strongly resemble more energetic events in most respects; they occur solely within active regions and exhibit high-temperature/nonthermal emissions in approximately the same proportion as major events. We discuss the distributions of flare parameters (e.g., peak flux) and how these parameters correlate, for instance via the Neupert effect. We also highlight the systematic biases involved in intercomparing data representing many decades of event magnitude. The intermittency of the flare/microflare occurrence, both in space and in time, argues that these discrete events do not explain general coronal heating, either in active regions or in the quiet Sun.

  1. Time Warp Operating System, Version 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.; hide

    1993-01-01

    Time Warp Operating System, TWOS, is special purpose computer program designed to support parallel simulation of discrete events. Complete implementation of Time Warp software mechanism, which implements distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. Supports simulations and other computations in which both virtual time and dynamic load balancing used. Program utilizes underlying resources of operating system. Written in C programming language.

  2. [Correlative analysis of the diversity patterns of regional surface water, NDVI and thermal environment].

    PubMed

    Duan, Jin-Long; Zhang, Xue-Lei

    2012-10-01

Taking Zhengzhou City, the capital of Henan Province in Central China, as the study area, and using the theories and methodologies of diversity, a discreteness evaluation of the regional surface water, normalized difference vegetation index (NDVI), and land surface temperature (LST) distributions was conducted on a 2 km x 2 km grid scale. Both the NDVI and the LST were divided into 4 levels, their spatial distribution diversity indices were calculated, and their connections were explored. The results showed that the theories and methodologies of diversity are operable and of practical significance for evaluating the discreteness of the spatial distribution of the regional thermal environment. There was a high overlap in location between the distributions of surface water and the lowest-temperature region, and high vegetation coverage was often accompanied by low land surface temperature. In 1988-2009, the discreteness of the surface water distribution in the City showed an obvious decreasing trend. The discreteness of the surface water distribution had a close correlation with the discreteness of the temperature region distribution, while the discreteness of the NDVI classification distribution had a more complicated correlation with the discreteness of the temperature region distribution. Therefore, more environmental factors need to be included for a better evaluation.

  3. Non-Lipschitz Dynamics Approach to Discrete Event Systems

    NASA Technical Reports Server (NTRS)

    Zak, M.; Meyers, R.

    1995-01-01

This paper presents and discusses a mathematical formalism for simulation of discrete event dynamics (DED), a special type of 'man-made' system designed to aid specific areas of information processing. A main objective is to demonstrate that the mathematical formalism for DED can be based upon the terminal model of Newtonian dynamics, which allows one to relax Lipschitz conditions at some discrete points.

  4. Running Parallel Discrete Event Simulators on Sierra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, P. D.; Jefferson, D. R.

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  5. A network of discrete events for the representation and analysis of diffusion dynamics.

    PubMed

    Pintus, Alberto M; Pazzona, Federico G; Demontis, Pierfranco; Suffritti, Giuseppe B

    2015-11-14

    We developed a coarse-grained description of the phenomenology of diffusive processes, in terms of a space of discrete events and its representation as a network. Once a proper classification of the discrete events underlying the diffusive process is carried out, their transition matrix is calculated on the basis of molecular dynamics data. This matrix can be represented as a directed, weighted network where nodes represent discrete events, and the weight of edges is given by the probability that one follows the other. The structure of this network reflects dynamical properties of the process of interest in such features as its modularity and the entropy rate of nodes. As an example of the applicability of this conceptual framework, we discuss here the physics of diffusion of small non-polar molecules in a microporous material, in terms of the structure of the corresponding network of events, and explain on this basis the diffusivity trends observed. A quantitative account of these trends is obtained by considering the contribution of the various events to the displacement autocorrelation function.
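    The transition-matrix-to-network step described above can be sketched as follows (a generic empirical estimator over a labeled event sequence; the abstract's classification of events from molecular dynamics data is not reproduced here):

```python
from collections import defaultdict

def transition_network(event_sequence):
    """Estimate the empirical transition matrix of a sequence of
    discrete events and represent it as a weighted directed network:
    nodes are event types, and edge (a, b) carries the probability
    that event b immediately follows event a."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(event_sequence, event_sequence[1:]):
        counts[a][b] += 1
    network = {}
    for a, row in counts.items():
        total = sum(row.values())  # normalize each row to probabilities
        for b, c in row.items():
            network[(a, b)] = c / total
    return network
```

    Properties such as modularity and per-node entropy rates, mentioned in the abstract, would then be computed on this weighted directed network.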

  6. Pinning time statistics for vortex lines in disordered environments.

    PubMed

    Dobramysl, Ulrich; Pleimling, Michel; Täuber, Uwe C

    2014-12-01

    We study the pinning dynamics of magnetic flux (vortex) lines in a disordered type-II superconductor. Using numerical simulations of a directed elastic line model, we extract the pinning time distributions of vortex line segments. We compare different model implementations for the disorder in the surrounding medium: discrete, localized pinning potential wells that are either attractive and repulsive or purely attractive, and whose strengths are drawn from a Gaussian distribution; as well as continuous Gaussian random potential landscapes. We find that both schemes yield power-law distributions in the pinned phase as predicted by extreme-event statistics, yet they differ significantly in their effective scaling exponents and their short-time behavior.

  7. A continuous analog of run length distributions reflecting accumulated fractionation events.

    PubMed

    Yu, Zhe; Sankoff, David

    2016-11-11

    We propose a new, continuous model of the fractionation process (duplicate gene deletion after polyploidization) on the real line. The aim is to infer how much DNA is deleted at a time, based on segment lengths for alternating deleted (invisible) and undeleted (visible) regions. After deriving a number of analytical results for "one-sided" fractionation, we undertake a series of simulations that help us identify the distribution of segment lengths as a gamma with shape and rate parameters evolving over time. This leads to an inference procedure based on observed length distributions for visible and invisible segments. We suggest extensions of this mathematical and simulation work to biologically realistic discrete models, including two-sided fractionation.

  8. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange a finite-bit binary symbolic data sequence with its neighborhood agents at each time step, due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized, and is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain, and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchanged between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
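    A stripped-down sketch of the triggering idea (drift-based broadcasts on an undirected graph, omitting the paper's quantized encoder/decoder design and directed-topology analysis) might look like this; the step size, threshold, and graph are assumptions for illustration:

```python
def event_triggered_consensus(x0, neighbors, eps=0.3, delta=0.01, steps=200):
    """Event-triggered average consensus sketch: each agent
    rebroadcasts its state only when it has drifted more than delta
    from its last broadcast value; all updates use the last
    broadcast values rather than the true neighbor states."""
    x = list(x0)
    x_hat = list(x0)  # last broadcast value of each agent
    for _ in range(steps):
        for i in range(len(x)):           # per-agent trigger check
            if abs(x[i] - x_hat[i]) > delta:
                x_hat[i] = x[i]           # broadcast event
        x = [x[i] + eps * sum(x_hat[j] - x_hat[i] for j in neighbors[i])
             for i in range(len(x))]
    return x
```

    On a symmetric graph the state sum is preserved exactly, so the agents settle within a delta-sized band around the initial average while broadcasting far less often than every step.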

  9. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series, we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events whose counts are too small for the normal-distribution approximation to apply. The method is demonstrated on numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and the sales volume of home electronics.

  10. Simulating Sustainment for an Unmanned Logistics System Concept of Operation in Support of Distributed Operations

    DTIC Science & Technology

    2017-06-01

    designed experiment to model and explore a ship-to-shore logistics process supporting dispersed units via three types of ULSs, which vary primarily in... Keywords: simulation, discrete event simulation, design of experiments, data analysis, simplekit, nearly orthogonal and balanced designs

  11. Terminal Dynamics Approach to Discrete Event Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Meyers, Ronald

    1995-01-01

    This paper presents and discusses a mathematical formalism for simulation of discrete event dynamics (DED), a special type of 'man-made' system that serves specific purposes of information processing. The main objective of this work is to demonstrate that the mathematical formalism for DED can be based upon a terminal model of Newtonian dynamics, which allows one to relax Lipschitz conditions at some discrete points.

  12. Discrete Analysis of Damage and Shear Banding in Argillaceous Rocks

    NASA Astrophysics Data System (ADS)

    Dinç, Özge; Scholtès, Luc

    2018-05-01

    A discrete approach is proposed to study damage and failure processes taking place in argillaceous rocks which present a transversely isotropic behavior. More precisely, a dedicated discrete element method is utilized to provide a micromechanical description of the mechanisms involved. The purpose of the study is twofold: (1) presenting a three-dimensional discrete element model able to simulate the anisotropic macro-mechanical behavior of the Callovo-Oxfordian claystone as a particular case of argillaceous rocks; (2) studying how progressive failure develops in such material. Material anisotropy is explicitly taken into account in the numerical model through the introduction of weakness planes distributed at the interparticle scale following predefined orientation and intensity. Simulations of compression tests under plane-strain and triaxial conditions are performed to clarify the development of damage and the appearance of shear bands through micromechanical analyses. The overall mechanical behavior and shear banding patterns predicted by the numerical model are in good agreement with experimental observations. Both tensile and shear microcracks emerging from the modeling also present characteristics compatible with microstructural observations. The numerical results confirm that the global failure of argillaceous rocks is well correlated with the mechanisms taking place at the local scale. Specifically, strain localization is shown to directly result from shear microcracking developing with a preferential orientation distribution related to the orientation of the shear band. In addition, localization events presenting characteristics similar to shear bands are observed from the early stages of the loading and might thus be considered as precursors of strain localization.

  13. Combined discrete particle and continuum model predicting solid-state fermentation in a drum fermentor.

    PubMed

    Schutyser, M A I; Briels, W J; Boom, R M; Rinzema, A

    2004-05-20

    The development of mathematical models facilitates industrial (large-scale) application of solid-state fermentation (SSF). In this study, a two-phase model of a drum fermentor is developed that consists of a discrete particle model (solid phase) and a continuum model (gas phase). The continuum model describes the distribution of air in the bed injected via an aeration pipe. The discrete particle model describes the solid phase. In previous work, mixing during SSF was predicted with the discrete particle model, although mixing simulations were not carried out in the current work. Heat and mass transfer between the two phases and biomass growth were implemented in the two-phase model. Validation experiments were conducted in a 28-dm3 drum fermentor. In this fermentor, sufficient aeration was provided to control the temperatures near the optimum value for growth during the first 45-50 hours. Several simulations were also conducted for different fermentor scales. Forced aeration via a single pipe in the drum fermentors did not provide homogeneous cooling in the substrate bed. Due to large temperature gradients, biomass yield decreased severely with increasing size of the fermentor. Improvement of air distribution would be required to avoid the need for frequent mixing events, during which growth is hampered. From these results, it was concluded that the two-phase model developed is a powerful tool to investigate design and scale-up of aerated (mixed) SSF fermentors. Copyright 2004 Wiley Periodicals, Inc.

  14. A discrete event simulation tool to support and predict hospital and clinic staffing.

    PubMed

    DeRienzo, Christopher M; Shaw, Ryan J; Meanor, Phillip; Lada, Emily; Ferranti, Jeffrey; Tanaka, David

    2017-06-01

    We demonstrate how to develop a simulation tool to help healthcare managers and administrators predict and plan for staffing needs in a hospital neonatal intensive care unit using administrative data. We developed a discrete event simulation model of nursing staff needed in a neonatal intensive care unit and then validated the model against historical data. The process flow was translated into a discrete event simulation model. Results demonstrated that the model can be used to give a respectable estimate of annual admissions, transfers, and deaths based upon two different staffing levels. The discrete event simulation tool model can provide healthcare managers and administrators with (1) a valid method of modeling patient mix, patient acuity, staffing needs, and costs in the present state and (2) a forecast of how changes in a unit's staffing, referral patterns, or patient mix would affect a unit in a future state.
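
    The skeleton of such a staffing model is a small event loop over admissions and discharges; the admission rate, length of stay, and nurse-to-patient ratio below are hypothetical, not values from the Duke NICU study.

```python
import heapq
import random

def simulate_unit(days=365, admit_rate=2.0, mean_los_days=14.0,
                  patients_per_nurse=3, seed=7):
    """Toy discrete event simulation of unit census: Poisson admissions,
    exponential length of stay, and a census-driven nurse requirement.
    All rates and the staffing ratio are hypothetical."""
    rng = random.Random(seed)
    events = []  # (time, kind) pairs, processed in time order
    t = 0.0
    while t < days:            # pre-schedule Poisson admission times
        t += rng.expovariate(admit_rate)
        heapq.heappush(events, (t, "admit"))
    census, admissions, peak_nurses = 0, 0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > days:
            break
        if kind == "admit":
            admissions += 1
            census += 1
            heapq.heappush(
                events, (t + rng.expovariate(1 / mean_los_days), "discharge"))
        else:
            census -= 1
        nurses = -(-census // patients_per_nurse)   # ceiling division
        peak_nurses = max(peak_nurses, nurses)
    return admissions, peak_nurses
```

    Validating such a model would mean comparing simulated admissions, transfers, and peak staffing against historical administrative data, as the paper does.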

  15. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.

    PubMed

    Caro, J Jaime

    2016-07-01

    Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation, and run deterministically or stochastically.
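
    The condition/event bookkeeping described above can be sketched as a tiny event loop in which each fired event updates condition levels, accrues a valuation, and may schedule further events. The disease-progression example and all its numbers are invented for illustration.

```python
import heapq

def run_dice(horizon=10.0):
    """Minimal discretely-integrated condition-event loop: conditions
    persist with levels; events occur at scheduled times and modify
    conditions and/or future event times. Illustrative example only."""
    conditions = {"alive": 1, "severity": 0, "cost": 0.0}
    agenda = [(2.0, "progress"), (9.0, "death"), (horizon, "end")]
    heapq.heapify(agenda)
    now = 0.0
    while agenda:
        now, name = heapq.heappop(agenda)
        if now > horizon or not conditions["alive"]:
            break
        if name == "progress":
            conditions["severity"] += 1
            conditions["cost"] += 1000.0       # valuation of the event
            if conditions["severity"] < 3:     # reschedule next progression
                heapq.heappush(agenda, (now + 2.0, "progress"))
        elif name == "death":
            conditions["alive"] = 0
        elif name == "end":
            break
    return now, conditions
```

    In a full DICE model the conditions, events, and valuations would be specified in tables (e.g., in MS Excel) rather than hard-coded, which is what makes the approach easy to review.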

  16. Event-Based Variance-Constrained $\mathcal{H}_{\infty}$ Filtering for Stochastic Parameter Systems Over Sensor Networks With Successive Missing Measurements.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2018-03-01

    This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the $\mathcal{H}_{\infty}$ performance requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.

  17. Discrete diffusion models to study the effects of Mg2+ concentration on the PhoPQ signal transduction system

    PubMed Central

    2010-01-01

    Background: The challenge today is to develop a modeling and simulation paradigm that integrates structural, molecular and genetic data for a quantitative understanding of physiology and behavior of biological processes at multiple scales. This modeling method requires techniques that maintain a reasonable accuracy of the biological process and also reduces the computational overhead. This objective motivates the use of new methods that can transform the problem from energy and affinity based modeling to information theory based modeling. To achieve this, we transform all dynamics within the cell into a random event time, which is specified through an information domain measure like probability distribution. This allows us to use the "in silico" stochastic event based modeling approach to find the molecular dynamics of the system. Results: In this paper, we present the discrete event simulation concept using the example of the signal transduction cascade triggered by extra-cellular Mg2+ concentration in the two component PhoPQ regulatory system of Salmonella Typhimurium. We also present a model to compute the information domain measure of the molecular transport process by estimating the statistical parameters of inter-arrival time between molecules/ions coming to a cell receptor as external signal. This model transforms the diffusion process into the information theory measure of stochastic event completion time to get the distribution of the Mg2+ departure events. Using these molecular transport models, we next study the in-silico effects of this external trigger on the PhoPQ system. Conclusions: Our results illustrate the accuracy of the proposed diffusion models in explaining the molecular/ionic transport processes inside the cell. Also, the proposed simulation framework can incorporate the stochasticity in cellular environments to a certain degree of accuracy. We expect that this scalable simulation platform will be able to model more complex biological systems with reasonable accuracy to understand their temporal dynamics. PMID:21143785

  18. Discrete diffusion models to study the effects of Mg2+ concentration on the PhoPQ signal transduction system.

    PubMed

    Ghosh, Preetam; Ghosh, Samik; Basu, Kalyan; Das, Sajal K; Zhang, Chaoyang

    2010-12-01

    The challenge today is to develop a modeling and simulation paradigm that integrates structural, molecular and genetic data for a quantitative understanding of physiology and behavior of biological processes at multiple scales. This modeling method requires techniques that maintain a reasonable accuracy of the biological process and also reduces the computational overhead. This objective motivates the use of new methods that can transform the problem from energy and affinity based modeling to information theory based modeling. To achieve this, we transform all dynamics within the cell into a random event time, which is specified through an information domain measure like probability distribution. This allows us to use the "in silico" stochastic event based modeling approach to find the molecular dynamics of the system. In this paper, we present the discrete event simulation concept using the example of the signal transduction cascade triggered by extra-cellular Mg2+ concentration in the two component PhoPQ regulatory system of Salmonella Typhimurium. We also present a model to compute the information domain measure of the molecular transport process by estimating the statistical parameters of inter-arrival time between molecules/ions coming to a cell receptor as external signal. This model transforms the diffusion process into the information theory measure of stochastic event completion time to get the distribution of the Mg2+ departure events. Using these molecular transport models, we next study the in-silico effects of this external trigger on the PhoPQ system. Our results illustrate the accuracy of the proposed diffusion models in explaining the molecular/ionic transport processes inside the cell. Also, the proposed simulation framework can incorporate the stochasticity in cellular environments to a certain degree of accuracy. We expect that this scalable simulation platform will be able to model more complex biological systems with reasonable accuracy to understand their temporal dynamics.

  19. First X-ray Statistical Tests for Clumpy-Torus Models: Constraints from RXTE Monitoring of Seyfert AGN

    NASA Astrophysics Data System (ADS)

    Markowitz, Alex; Krumpe, Mirko; Nikutta, R.

    2016-06-01

    In two papers (Markowitz, Krumpe, & Nikutta 2014, and Nikutta et al., in prep.), we derive the first X-ray statistical constraints for clumpy-torus models in Seyfert AGN by quantifying multi-timescale variability in line-of-sight X-ray absorbing gas as a function of optical classification. We systematically search for discrete absorption events in the vast archive of RXTE monitoring of 55 nearby type Is and Compton-thin type IIs. We are sensitive to discrete absorption events due to clouds of full-covering, neutral/mildly ionized gas transiting the line of sight. Our results apply to both dusty and non-dusty clumpy media, and probe model parameter space complementary to that for eclipses observed with XMM-Newton, Suzaku, and Chandra. We detect twelve eclipse events in eight Seyferts, roughly tripling the number previously published from this archive. Event durations span hours to years. Most of our detected clouds are Compton-thin, and most clouds' distances from the black hole are inferred to be commensurate with the outer portions of the BLR or the inner regions of infrared-emitting dusty tori. We present the density profiles of the highest-quality eclipse events; the column density profile for an eclipsing cloud in NGC 3783 is doubly spiked, possibly indicating a cloud that is being tidally sheared. We discuss implications for cloud distributions in the context of clumpy-torus models. We calculate eclipse probabilities for orientation-dependent Type I/II unification schemes. We present constraints on cloud sizes, stability, and radial distribution. We infer that clouds' small angular sizes as seen from the SMBH imply that ~10^7 clouds are required across the BLR + torus. Cloud size is roughly proportional to distance from the black hole, hinting at the formation processes (e.g., disk fragmentation). All observed clouds are sub-critical with respect to tidal disruption; self-gravity alone cannot contain them. External forces, such as magnetic fields or ambient pressure, are needed to contain them; otherwise, clouds must be short-lived.

  20. Complex discrete dynamics from simple continuous population models.

    PubMed

    Gamarra, Javier G P; Solé, Ricard V

    2002-05-01

    Nonoverlapping generations have been classically modelled as difference equations in order to account for the discrete nature of reproductive events. However, other events such as resource consumption or mortality are continuous and take place in within-generation time. We therefore assume a realistic hybrid model: a two-dimensional ODE system for resources and consumers, with discrete events for reproduction. Numerical and analytical approaches showed that the resulting dynamics resemble a Ricker map, including the period-doubling route to chaos. Stochastic simulations show that a handling-time parameter for indirect competition among juveniles may affect the qualitative behaviour of the model.
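
    A sketch of such a hybrid model: within each generation, resources and consumers follow an ODE (integrated here with a crude Euler scheme), and a discrete reproduction event resets the state at each generation boundary. All parameter values and functional forms below are illustrative, not those fitted in the paper.

```python
def hybrid_generations(n0=0.5, generations=50, T=1.0, dt=0.001,
                       r0=1.0, a=5.0, e=0.8, m=3.0, b=10.0):
    """Hybrid continuous/discrete population model (illustrative).
    Within-generation ODE:  dR/dt = -a*R*C,  dC/dt = e*a*R*C - m*C.
    Discrete event at each generation boundary: N_{t+1} = b * C(T),
    with resources replenished to r0."""
    densities = [n0]
    n = n0
    for _ in range(generations):
        R, C = r0, n
        for _ in range(int(T / dt)):      # forward-Euler integration
            dR = -a * R * C
            dC = e * a * R * C - m * C
            R += dR * dt
            C += dC * dt
        n = b * C                          # discrete reproduction event
        densities.append(n)
    return densities
```

    Plotting N_{t+1} against N_t for such a model is how the Ricker-like shape of the emergent map becomes visible.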

  1. Methods for assessing long-term mean pathogen count in drinking water and risk management implications.

    PubMed

    Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y

    2012-06-01

    Recently, pathogen counts in drinking and source waters were shown theoretically to follow the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated against nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed, such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work, the methods and data record lengths required to assess the long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first-order model was shown to produce count series with a 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely monitored water quality indicators.
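
    The sample-size question can be explored directly by drawing from a discrete Weibull distribution, whose survival function is S(k) = q^(k^β) for k = 0, 1, 2, ...; inverse-CDF sampling takes a few lines. The q and β values below are illustrative, not fitted to any dataset.

```python
import math
import random

def rdweibull(n, q=0.9, beta=0.5, seed=3):
    """Draw n values from the discrete Weibull distribution with
    P(X >= k) = q**(k**beta), k = 0, 1, 2, ..., by inverting the CDF.
    q and beta here are illustrative, not fitted values."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()                  # uniform on (0, 1)
        k = math.ceil((math.log(u) / math.log(q)) ** (1.0 / beta)) - 1
        out.append(max(k, 0))
    return out
```

    Comparing the running mean of, say, 100 versus 1,000 such draws illustrates why highly skewed count distributions demand long records for a stable mean estimate.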

  2. Synchronization of autonomous objects in discrete event simulation

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1990-01-01

    Autonomous objects in event-driven discrete event simulation offer the potential to combine the freedom of unrestricted movement and positional accuracy through Euclidean space of time-driven models with the computational efficiency of event-driven simulation. The principal challenge to autonomous object implementation is object synchronization. The concept of a spatial blackboard is offered as a potential methodology for synchronization. The issues facing implementation of a spatial blackboard are outlined and discussed.

  3. Discrete event command and control for networked teams with multiple missions

    NASA Astrophysics Data System (ADS)

    Lewis, Frank L.; Hudas, Greg R.; Pang, Chee Khiang; Middleton, Matthew B.; McMurrough, Christopher

    2009-05-01

    During mission execution in military applications, the TRADOC Pamphlet 525-66 Battle Command and Battle Space Awareness capabilities prescribe expectations that networked teams will perform in a reliable manner under changing mission requirements, varying resource availability and reliability, and resource faults. In this paper, a Command and Control (C2) structure is presented that allows for computer-aided execution of the networked team decision-making process, control of force resources, shared resource dispatching, and adaptability to change based on battlefield conditions. A mathematically justified networked computing environment is provided called the Discrete Event Control (DEC) Framework. DEC has the ability to provide the logical connectivity among all team participants including mission planners, field commanders, war-fighters, and robotic platforms. The proposed data management tools are developed and demonstrated on a simulation study and an implementation on a distributed wireless sensor network. The results show that the tasks of multiple missions are correctly sequenced in real-time, and that shared resources are suitably assigned to competing tasks under dynamically changing conditions without conflicts and bottlenecks.

  4. Quantifying short-lived events in multistate ionic current measurements.

    PubMed

    Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute

    2014-02-25

    We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.

  5. Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.

    PubMed

    Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H

    2006-11-15

    The stochastic kinetics of a well-mixed chemical system, governed by the chemical Master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODE) and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach to partitioning is improved such that we dynamically partition the system of reactions, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
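
    The exact baseline that the hybrid method accelerates, Gillespie's direct method, together with a propensity-threshold partition step in the spirit of the dynamic partitioning described above, can be sketched as follows; the birth-death example and its rates are illustrative.

```python
import random

def partition(propensities, threshold):
    """Split reaction indices into continuous (fast) and discrete (slow)
    subsets by a propensity threshold. The paper's criterion is relative
    to the distribution of propensities in the discrete subset, so this
    fixed threshold is a simplification."""
    fast = [i for i, a in enumerate(propensities) if a > threshold]
    slow = [i for i, a in enumerate(propensities) if a <= threshold]
    return fast, slow

def gillespie(x0, t_max, k_birth=10.0, k_death=0.1, seed=11):
    """Exact SSA (direct method) for a birth-death process:
    0 -> X at rate k_birth;  X -> 0 at rate k_death * X."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        a = [k_birth, k_death * x]        # reaction propensities
        a0 = a[0] + a[1]
        t += rng.expovariate(a0)          # time to next reaction event
        x += 1 if rng.random() < a[0] / a0 else -1
        times.append(t)
        states.append(x)
    return times, states
```

    A hybrid scheme would integrate the `fast` subset as ODEs between firings of the `slow` subset, rather than simulating every event exactly as above.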

  6. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
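
    The two decision functions the participants fell between are easy to state over a discrete posterior; the bimodal posterior below is a made-up example, not data from the experiments.

```python
import random

def posterior_sample(values, probs, rng):
    """Decision rule 1: draw a response from the posterior distribution."""
    return rng.choices(values, weights=probs, k=1)[0]

def posterior_max(values, probs):
    """Decision rule 2: respond with the mode of the posterior."""
    return max(zip(values, probs), key=lambda vp: vp[1])[0]

# made-up discrete bimodal posterior over counts
values = [3, 4, 5, 8, 9, 10]
probs = [0.10, 0.25, 0.10, 0.10, 0.30, 0.15]
rng = random.Random(0)
samples = [posterior_sample(values, probs, rng) for _ in range(1000)]
```

    Observed response distributions falling between the spread of `samples` and the point mass at the mode is the signature the experiments measured.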

  7. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  8. Churn-Resilient Replication Strategy for Peer-to-Peer Distributed Hash-Tables

    NASA Astrophysics Data System (ADS)

    Legtchenko, Sergey; Monnet, Sébastien; Sens, Pierre; Muller, Gilles

    DHT-based P2P systems provide a fault-tolerant and scalable means to store data blocks in a fully distributed way. Unfortunately, recent studies have shown that if the connection/disconnection frequency is too high, data blocks may be lost. This is true for most current DHT-based system implementations. To avoid this problem, it is necessary to build highly efficient replication and maintenance mechanisms. In this paper, we study the effect of churn on an existing DHT-based P2P system such as DHash or PAST. We then propose solutions to enhance churn tolerance and evaluate them through discrete event simulations.

  9. Generalized Detectability for Discrete Event Systems

    PubMed Central

    Shu, Shaolong; Lin, Feng

    2011-01-01

    In our previous work, we investigated detectability of discrete event systems, which is defined as the ability to determine the current and subsequent states of a system based on observation. For different applications, we defined four types of detectabilities: (weak) detectability, strong detectability, (weak) periodic detectability, and strong periodic detectability. In this paper, we extend our results in three aspects. (1) We extend detectability from deterministic systems to nondeterministic systems. Such a generalization is necessary because there are many systems that need to be modeled as nondeterministic discrete event systems. (2) We develop polynomial algorithms to check strong detectability. The previous algorithms are based on observer whose construction is of exponential complexity, while the new algorithms are based on a new automaton called detector. (3) We extend detectability to D-detectability. While detectability requires determining the exact state of a system, D-detectability relaxes this requirement by asking only to distinguish certain pairs of states. With these extensions, the theory on detectability of discrete event systems becomes more applicable in solving many practical problems. PMID:21691432

  10. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment

    PubMed Central

    Khan, Md. Ashfaquzzaman; Herbordt, Martin C.

    2011-01-01

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327

  11. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.

    PubMed

    Khan, Md Ashfaquzzaman; Herbordt, Martin C

    2011-07-20

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.

  12. Combining point and distributed snowpack data with landscape-based discretization for hydrologic modeling of the snow-dominated Maipo River basin, in the semi-arid Andes of Central Chile.

    NASA Astrophysics Data System (ADS)

    McPhee, James; Videla, Yohann

    2014-05-01

    The 5000-km2 upper Maipo River Basin, in central Chile's Andes, has an adequate streamgage network but almost no meteorological or snow accumulation data. Therefore, hydrologic model parameterization is strongly subject to model errors stemming from input and model-state uncertainty. In this research, we apply the Cold Regions Hydrologic Model (CRHM) to the basin, force it with reanalysis data downscaled to an appropriate resolution, and inform a parsimonious basin discretization, based on the hydrologic response unit concept, with distributed data on snowpack properties obtained through snow surveys for two seasons. With minimal calibration the model is able to reproduce the seasonal accumulation and melt cycle as recorded in the one snow pillow available for the basin, and although a bias in maximum accumulation persists, snowpack persistence in time is appropriately simulated based on snow water equivalent and snow covered area observations. Blowing snow events were simulated by the model whenever daily wind speed surpassed 8 m/s, although the use of daily instead of hourly data to force the model suggests that this phenomenon could be underestimated. We investigate the representation of snow redistribution by the model, and compare it with small-scale observations of wintertime snow accumulation on glaciers, in a first step towards characterizing ice distribution within a HRU spatial discretization. Although built at a different spatial scale, we present a comparison of simulated results with distributed snow depth data obtained within a 40 km2 sub-basin of the main Maipo watershed in two snow surveys carried out at the end of winter seasons 2011 and 2012, and compare basin-wide SWE estimates with a regression tree extrapolation of the observed data.

  13. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also derived.
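    The key identity behind generating functions of independent discrete random variables — the generating function of a sum is the product of the individual generating functions, i.e. a convolution of the pmfs — can be checked with a short sketch (the fair-die example is ours):

```python
def pgf_product(pmf_a, pmf_b):
    """Convolve two discrete pmfs (lists indexed by outcome value).
    Convolving pmfs corresponds to multiplying their probability
    generating functions, giving the distribution of the sum of the
    two independent variables."""
    out = [0.0] * (len(pmf_a) + len(pmf_b) - 1)
    for i, pa in enumerate(pmf_a):
        for j, pb in enumerate(pmf_b):
            out[i + j] += pa * pb
    return out

die = [0.0] + [1.0 / 6] * 6        # fair die: outcomes 1..6
two_dice = pgf_product(die, die)   # distribution of the sum of two dice
```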

  14. Discrete disorder models for many-body localization

    NASA Astrophysics Data System (ADS)

    Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub

    2018-04-01

    Using exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Only for the binary distribution significant deviations are observed.

  15. SIGMA--A Graphical Approach to Teaching Simulation.

    ERIC Educational Resources Information Center

    Schruben, Lee W.

    1992-01-01

    SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…

  16. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  17. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.

  18. Variable Weight Fractional Collisions for Multiple Species Mixtures

    DTIC Science & Technology

    2017-08-28

    DISTRIBUTION A: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED; PA #17517. [Briefing-slide excerpt] Variable weights for dynamic range: discretizing the velocity distribution function (VDF) yields the Vlasov equation, but the collision integral remains a problem. Particle methods represent the VDF as a set of delta functions, with collisions between discrete velocities, but poorly resolve the tail (which is critical to inelastic collisions). Variable weights permit extra degrees of freedom.

  19. Simulation modeling for the health care manager.

    PubMed

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
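    The kind of Monte Carlo discrete event model the article describes can be sketched in a few lines (a hypothetical single-server clinic; rates and the Lindley-recursion formulation are ours, not from any particular package):

```python
import random

def simulate_clinic(n_patients, arrival_rate, service_rate, seed=1):
    """Minimal single-server discrete event model of patient flow:
    exponential interarrival and service times drawn by Monte Carlo,
    with the Lindley recursion giving each patient's wait before
    service begins. Returns the mean wait."""
    rng = random.Random(seed)
    t, free_at, waits = 0.0, 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # next patient arrives
        start = max(t, free_at)              # wait if the server is busy
        waits.append(start - t)
        free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

mean_wait = simulate_clinic(n_patients=2000, arrival_rate=1.0, service_rate=1.25)
```

Commercial spreadsheet add-ins and process simulators wrap the same mechanics — sampled probability distributions driving queueing dynamics — behind richer interfaces.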

  20. Regional Evaluation of the Severity-Based Stroke Triage Algorithm for Emergency Medical Services Using Discrete Event Simulation.

    PubMed

    Bogle, Brittany M; Asimos, Andrew W; Rosamond, Wayne D

    2017-10-01

    The Severity-Based Stroke Triage Algorithm for Emergency Medical Services endorses routing patients with suspected large vessel occlusion acute ischemic strokes directly to endovascular stroke centers (ESCs). We sought to evaluate different specifications of this algorithm within a region. We developed a discrete event simulation environment to model patients with suspected stroke transported according to algorithm specifications, which varied by stroke severity screen and permissible additional transport time for routing patients to ESCs. We simulated King County, Washington, and Mecklenburg County, North Carolina, distributing patients geographically into census tracts. Transport time to the nearest hospital and ESC was estimated using traffic-based travel times. We assessed undertriage, overtriage, transport time, and the number-needed-to-route, defined as the number of patients enduring additional transport to route one large vessel occlusion patient to an ESC. Undertriage was higher and overtriage was lower in King County compared with Mecklenburg County for each specification. Overtriage variation was primarily driven by screen (eg, 13%-55% in Mecklenburg County and 10%-40% in King County). Transportation time specifications beyond 20 minutes increased overtriage and decreased undertriage in King County but not Mecklenburg County. A low- versus high-specificity screen routed 3.7× more patients to ESCs. Emergency medical services spent nearly twice the time routing patients to ESCs in King County compared with Mecklenburg County. Our results demonstrate how discrete event simulation can facilitate informed decision making to optimize emergency medical services stroke severity-based triage algorithms. This is the first step toward developing a mature simulation to predict patient outcomes. © 2017 American Heart Association, Inc.
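    The abstract's triage quantities can be expressed in a back-of-envelope confusion-matrix form (the definitions below are our illustrative reading, not the paper's simulation): a screen with given sensitivity and specificity applied to a mix of large vessel occlusion (LVO) and non-LVO patients.

```python
def triage_metrics(n_lvo, n_other, sensitivity, specificity):
    """Illustrative triage quantities for a severity screen:
    undertriage = fraction of LVO patients not routed to an ESC,
    overtriage  = fraction of ESC-routed patients without LVO,
    NNR         = patients routed per LVO patient routed
                  (number-needed-to-route)."""
    routed_lvo = n_lvo * sensitivity
    routed_other = n_other * (1.0 - specificity)
    undertriage = 1.0 - sensitivity
    overtriage = routed_other / (routed_lvo + routed_other)
    nnr = (routed_lvo + routed_other) / routed_lvo
    return undertriage, overtriage, nnr
```

This makes the paper's trade-off explicit: a low-specificity screen inflates both the overtriage fraction and the NNR even when undertriage improves.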

  1. Discrete event simulation: the preferred technique for health economic evaluations?

    PubMed

    Caro, Jaime J; Möller, Jörgen; Getsios, Denis

    2010-12-01

    To argue that discrete event simulation should be preferred to cohort Markov models for economic evaluations in health care. The basis for the modeling techniques is reviewed. For many health-care decisions, existing data are insufficient to fully inform them, necessitating the use of modeling to estimate the consequences that are relevant to decision-makers. These models must reflect what is known about the problem at a level of detail sufficient to inform the questions. Oversimplification will result in estimates that are not only inaccurate, but potentially misleading. Markov cohort models, though currently popular, have so many limitations and inherent assumptions that they are inadequate to inform most health-care decisions. An event-based individual simulation offers an alternative much better suited to the problem. A properly designed discrete event simulation provides more accurate, relevant estimates without being computationally prohibitive. It does require more data and may be a challenge to convey transparently, but these are necessary trade-offs to provide meaningful and valid results. In our opinion, discrete event simulation should be the preferred technique for health economic evaluations today. © 2010, International Society for Pharmacoeconomics and Outcomes Research (ISPOR).

  2. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
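    A toy version of the genetic algorithm approach the abstract describes (the operators, parameters, and quadratic stand-in objective are ours; a real application would call the discrete event simulation inside `fitness`):

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, generations=60, seed=7):
    """Toy genetic algorithm over integer decision variables, in the
    spirit of simulation optimization: no continuity or differentiability
    of the objective is required. `bounds` lists (lo, hi) integer ranges."""
    rng = random.Random(seed)
    pop = [[rng.randint(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(bounds))             # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(bounds))                  # +/-1 point mutation
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.choice([-1, 1])))
            children.append(child)
        pop = parents + children                            # elitist replacement
    return min(pop, key=fitness)

# Hypothetical stand-in for a simulation objective (e.g. resource levels).
best = genetic_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] - 5) ** 2,
                        bounds=[(0, 10), (0, 10)])
```

With a stochastic simulation in the loop, each fitness evaluation would typically average several replications, which is where the abstract's caveats about stochastic variables and constraints enter.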

  3. Simulator of Non-homogenous Alumina and Current Distribution in an Aluminum Electrolysis Cell to Predict Low-Voltage Anode Effects

    NASA Astrophysics Data System (ADS)

    Dion, Lukas; Kiss, László I.; Poncsák, Sándor; Lagacé, Charles-Luc

    2018-04-01

    Perfluorocarbons are important contributors to aluminum production greenhouse gas inventories. Tetrafluoromethane and hexafluoroethane are produced in the electrolysis process when a harmful event called anode effect occurs in the cell. This incident is strongly related to the lack of alumina and the current distribution in the cell and can be classified into two categories: high-voltage and low-voltage anode effects. The latter is hard to detect during the normal electrolysis process and, therefore, new tools are necessary to predict this event and minimize its occurrence. This paper discusses a new approach to model the alumina distribution behavior in an electrolysis cell by dividing the electrolytic bath into non-homogenous concentration zones using discrete elements. The different mechanisms related to the alumina distribution are discussed in detail. Moreover, with a detailed electrical model, it is possible to calculate the current distribution among the different anodic assemblies. With this information, the model can evaluate if low-voltage emissions are likely to be present under the simulated conditions. Using the simulator will help the understanding of the role of the alumina distribution which, in turn, will improve the cell energy consumption and stability while reducing the occurrence of high- and low-voltage anode effects.

  4. Hierarchical Discrete Event Supervisory Control of Aircraft Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Yasar, Murat; Tolani, Devendra; Ray, Asok; Shah, Neerav; Litt, Jonathan S.

    2004-01-01

    This paper presents a hierarchical application of Discrete Event Supervisory (DES) control theory for intelligent decision and control of a twin-engine aircraft propulsion system. A dual layer hierarchical DES controller is designed to supervise and coordinate the operation of two engines of the propulsion system. The two engines are individually controlled to achieve enhanced performance and reliability, necessary for fulfilling the mission objectives. Each engine is operated under a continuously varying control system that maintains the specified performance and a local discrete-event supervisor for condition monitoring and life extending control. A global upper level DES controller is designed for load balancing and overall health management of the propulsion system.

  5. A priori discretization quality metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns firstly depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justifications and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad-hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition even though its significance has been noted by numerous studies. 
The primary goals of this study are to (1) introduce new a priori discretization quality metrics that consider the spatial pattern changes of model input data; and (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold modification. The metrics provide, for the first time, a quantification of the routing-relevant information loss due to discretization, based on the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes upon the input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scales are greater than the input data resolutions. The discretization metrics and decision-making approach are applied to the Grand River watershed in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss monotonically increases as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest, rather than just the watershed outlet, should be considered. Moreover, subbasin and HRU discretization decisions should not be made independently, since the subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity for the same model complexity (number of computational units).
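    One way to picture an a priori pattern-change metric of this kind (an illustrative sketch, not the paper's exact formulation): overlay a candidate zonation on classified input cells and accumulate, area-weighted, the cells whose class differs from the dominant class of the zone they fall in.

```python
from collections import Counter

def pattern_change_fraction(cell_class, cell_area, cell_zone):
    """Area-weighted fraction of input cells whose class differs from
    the dominant class of their discretization zone. 0 means the
    zonation loses none of the input heterogeneity; larger values
    mean rougher discretization."""
    area_by_zone = {}
    for c, a, z in zip(cell_class, cell_area, cell_zone):
        area_by_zone.setdefault(z, Counter())[c] += a
    dominant = {z: cnt.most_common(1)[0][0] for z, cnt in area_by_zone.items()}
    changed = sum(a for c, a, z in zip(cell_class, cell_area, cell_zone)
                  if c != dominant[z])
    return changed / sum(cell_area)
```

Because no hydrologic model is run, candidate discretizations can be screened cheaply before any calibration effort, which is the efficiency argument the abstract makes for a priori approaches.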

  6. Discrete Events as Units of Perceived Time

    ERIC Educational Resources Information Center

    Liverence, Brandon M.; Scholl, Brian J.

    2012-01-01

    In visual images, we perceive both space (as a continuous visual medium) and objects (that inhabit space). Similarly, in dynamic visual experience, we perceive both continuous time and discrete events. What is the relationship between these units of experience? The most intuitive answer may be similar to the spatial case: time is perceived as an…

  7. Modeling Repeatable Events Using Discrete-Time Data: Predicting Marital Dissolution

    ERIC Educational Resources Information Center

    Teachman, Jay

    2011-01-01

    I join two methodologies by illustrating the application of multilevel modeling principles to hazard-rate models with an emphasis on procedures for discrete-time data that contain repeatable events. I demonstrate this application using data taken from the 1995 National Survey of Family Growth (NSFG) to ascertain the relationship between multiple…

  8. An Advanced Simulation Framework for Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Li, P. P.; Tyrrell, R. Yeung D.; Adhami, N.; Li, T.; Henry, H.

    1994-01-01

    Discrete-event simulation (DEVS) users have long been faced with a three-way trade-off of balancing execution time, model fidelity, and number of objects simulated. Because of the limits of computer processing power the analyst is often forced to settle for less than desired performances in one or more of these areas.

  9. Continuous and discrete extreme climatic events affecting the dynamics of a high-arctic reindeer population.

    PubMed

    Chan, Kung-Sik; Mysterud, Atle; Øritsland, Nils Are; Severinsen, Torbjørn; Stenseth, Nils Chr

    2005-10-01

    Climate at northern latitudes is currently changing with regard to both the mean and the temporal variability at any given site, increasing the frequency of extreme events such as cold and warm spells. Here we use a conceptually new modelling approach with two different dynamic terms for the climatic effects on a Svalbard reindeer population (the Brøggerhalvøya population), which underwent an extreme icing event ("locked pastures") with an 80% reduction in population size during one winter (1993/94). One term captures the continuous and linear effect depending upon the Arctic Oscillation, and another the discrete (rare) "event" process. The introduction of an "event" parameter describing the discrete extreme winter resulted in a more parsimonious model. Such an approach may be useful in strongly age-structured ungulate populations, with young and very old individuals being particularly prone to mortality during adverse conditions (resulting in a population structure that differs before and after extreme climatic events). A simulation study demonstrates that our approach is able to properly detect the ecological effects of such extreme climate events.
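    The two-term structure — a continuous climatic covariate plus a discrete event indicator — can be sketched as a log-scale population recursion (all coefficients below are hypothetical, not the paper's estimates):

```python
import math

def simulate_log_abundance(x0, ao_series, event_flags,
                           a=0.5, b=0.9, c=-0.1, d=math.log(0.2)):
    """Sketch of a two-term climate-effect model on the log scale:
    X_t = a + b*X_{t-1} + c*AO_t + d*I_t, where AO_t is the continuous
    Arctic Oscillation covariate and I_t flags a rare icing event.
    On its own, d = log(0.2) corresponds to the reported 80%
    one-winter crash."""
    xs = [x0]
    for ao_t, i_t in zip(ao_series, event_flags):
        xs.append(a + b * xs[-1] + c * ao_t + d * i_t)
    return xs
```

Separating the rare-event term from the continuous term is what makes the model parsimonious: the icing winter no longer inflates the estimated variance of the ordinary climate response.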

  10. Regularity of a renewal process estimated from binary data.

    PubMed

    Rice, John D; Strawderman, Robert L; Johnson, Brent A

    2017-10-09

    Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.
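    As the abstract notes, the gamma shape parameter maps monotonically onto the CV of the interevent-time distribution; the relationship is simply CV = 1/sqrt(shape). A quick numerical check (the shape/scale values are illustrative):

```python
import math
import random

def gamma_cv(shape):
    """Analytic coefficient of variation of a gamma(shape, scale)
    interevent-time distribution: sd/mean = 1/sqrt(shape), so events
    become more regular (lower CV) as the shape parameter grows."""
    return 1.0 / math.sqrt(shape)

def empirical_cv(samples):
    """Sample CV: sd/mean of observed interevent times."""
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
    return math.sqrt(var) / m

# Monte Carlo check on simulated gamma interevent times.
rng = random.Random(42)
times = [rng.gammavariate(4.0, 2.5) for _ in range(20000)]
```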

  11. Analysis hierarchical model for discrete event systems

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, and complex robotic systems are modelled and controlled using extended Petri nets. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple enough to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets. Discrete event systems are a pragmatic tool for modelling industrial systems, and our robotic system is modelled as a discrete event system using Petri nets. To capture auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulating the proposed robotic system with timed Petri nets offers the opportunity to examine its timing behaviour: from transport and transmission times obtained by spot measurement, graphics are produced showing the average time for the transport activity for individual sets of finished products.
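    The basic Petri net mechanics underlying such models fit in a few lines (the robot work-cycle net below is a hypothetical illustration, not the paper's model): a transition is enabled when its input places hold enough tokens, and firing it moves tokens from inputs to outputs.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical robot work cycle: pick up a part, finish it, return to idle.
t_start = {"in": {"idle": 1, "part": 1}, "out": {"working": 1}}
t_finish = {"in": {"working": 1}, "out": {"idle": 1, "done": 1}}
marking = {"idle": 1, "part": 2}
marking = fire(marking, t_start)
marking = fire(marking, t_finish)
```

Timed and hierarchical extensions attach delays to transitions and nest sub-nets inside places, but the enable/fire semantics stay the same.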

  12. A Discrete Element Method Approach to Progressive Localization of Damage in Granular Rocks and Associated Seismicity

    NASA Astrophysics Data System (ADS)

    Vora, H.; Morgan, J.

    2017-12-01

    Brittle failure in rock under confined biaxial conditions is accompanied by release of seismic energy, known as acoustic emissions (AE). The objective of our study is to understand the influence of the elastic properties of rock and its stress state on deformation patterns and associated seismicity in granular rocks. Discrete element modeling is used to simulate biaxial tests on granular rocks of defined grain size distribution. Acoustic energy and seismic moments are calculated from microfracture events as the rock is taken to failure under different confining pressures. Dimensionless parameters, such as the seismic b-value and the fractal parameter for deformation, the D-value, are used to quantify the seismic character and the distribution of damage in the rock. Initial results suggest that confining pressure has the largest control on the distribution of induced microfracturing, while fracture energy and seismic magnitudes are highly sensitive to the elastic properties of the rock. At low confining pressures, localized deformation (low D-values) and high seismic b-values are observed. Deformation at high confining pressures is distributed in nature (high D-values) and exhibits low seismic b-values as shearing becomes the dominant mode of microfracturing. Seismic b-values and fractal D-values obtained from microfracturing exhibit a linear inverse relationship, similar to trends observed in earthquakes. The modes of microfracturing in our simulations of biaxial compression tests show mechanistic similarities to the propagation of fractures and faults in nature.
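    The seismic b-value used here is conventionally estimated from an event-magnitude catalog with Aki's maximum likelihood formula; a minimal version (the binning correction is omitted for brevity):

```python
import math

def b_value(magnitudes, m_c):
    """Aki's maximum likelihood b-value estimate for events at or above
    the completeness magnitude m_c:
        b = log10(e) / (mean(M >= m_c) - m_c).
    The magnitude-binning correction is omitted in this sketch."""
    above = [m for m in magnitudes if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)
```

High b-values indicate a catalog dominated by small events (the distributed-damage regime), while low b-values indicate relatively more large events (localized shear), matching the trends the abstract reports.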

  13. Nanogeochronology of discordant zircon measured by atom probe microscopy of Pb-enriched dislocation loops

    PubMed Central

    Peterman, Emily M.; Reddy, Steven M.; Saxey, David W.; Snoeyenbos, David R.; Rickard, William D. A.; Fougerouse, Denis; Kylander-Clark, Andrew R. C.

    2016-01-01

    Isotopic discordance is a common feature in zircon that can lead to an erroneous age determination, and it is attributed to the mobilization and escape of radiogenic Pb during its post-crystallization geological evolution. The degree of isotopic discordance measured at analytical scales of ~10 μm often differs among adjacent analysis locations, indicating heterogeneous distributions of Pb at shorter length scales. We use atom probe microscopy to establish the nature of these sites and the mechanisms by which they form. We show that the nanoscale distribution of Pb in a ~2.1 billion year old discordant zircon that was metamorphosed c. 150 million years ago is defined by two distinct Pb reservoirs. Despite overall Pb loss during peak metamorphic conditions, the atom probe data indicate that a component of radiogenic Pb was trapped in 10-nm dislocation loops that formed during the annealing of radiation damage associated with the metamorphic event. A second Pb component, found outside the dislocation loops, represents homogeneous accumulation of radiogenic Pb in the zircon matrix after metamorphism. The 207Pb/206Pb ratios measured from eight dislocation loops are equivalent within uncertainty and yield an age consistent with the original crystallization age of the zircon, as determined by laser ablation spot analysis. Our results provide a specific mechanism for the trapping and retention of radiogenic Pb during metamorphism and confirm that isotopic discordance in this zircon is characterized by discrete nanoscale reservoirs of Pb that record different isotopic compositions and yield age data consistent with distinct geological events. These data may provide a framework for interpreting discordance in zircon as the heterogeneous distribution of discrete radiogenic Pb populations, each yielding geologically meaningful ages. PMID:27617295

  14. Cost-effective solutions to maintaining smart grid reliability

    NASA Astrophysics Data System (ADS)

    Qin, Qiu

    As aging power systems increasingly operate close to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous state simulation and discrete event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with the criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small signal combined controllability and observability of a power system, with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework is presented, including a high-level recloser allocation scheme and a low-level recloser placement scheme. The impacts of recloser placement on the reliability indices are analyzed.
Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.

  15. Cell responses to single pheromone molecules may reflect the activation kinetics of olfactory receptor molecules.

    PubMed

    Minor, A V; Kaissling, K-E

    2003-03-01

    Olfactory receptor cells of the silkmoth Bombyx mori respond to single pheromone molecules with "elementary" electrical events that appear as discrete "bumps" a few milliseconds in duration, or bursts of bumps. As revealed by simulation, one bump may result from a series of random openings of one or several ion channels, producing an average inward membrane current of 1.5 pA. The distributions of durations of bumps and of gaps between bumps in a burst can be fitted by single exponentials with time constants of 10.2 ms and 40.5 ms, respectively. The distribution of burst durations is a sum of two exponentials; the number of bumps per burst obeyed a geometric distribution (mean 3.2 bumps per burst). Accordingly the elementary events could reflect transitions among three states of the pheromone receptor molecule: the vacant receptor (state 1), the pheromone-receptor complex (state 2), and the activated complex (state 3). The calculated rate constants of the transitions between states are k(21)=7.7 s(-1), k(23)=16.8 s(-1), and k(32)=98 s(-1).
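    The three-state scheme can be checked with a small stochastic simulation (a sketch using the published rate constants; the burst-counting convention is an assumption): starting from the pheromone-receptor complex (state 2), the number of visits to the activated state (state 3) before dissociation is geometric with mean (k21 + k23)/k21 ≈ 3.2.

    ```python
    import random

    # Published rate constants (1/s) for the three-state receptor model.
    k21, k23, k32 = 7.7, 16.8, 98.0

    def bumps_in_burst():
        """Count activations (visits to state 3) before the complex
        dissociates, starting from the pheromone-receptor complex (state 2)."""
        bumps = 0
        while True:
            # From state 2 the complex either activates (2 -> 3) or
            # dissociates (2 -> 1); branch probability k23 / (k21 + k23).
            if random.random() < k23 / (k21 + k23):
                bumps += 1      # one more bump; state 3 always relaxes to 2
            else:
                return bumps

    random.seed(0)
    counts = [bumps_in_burst() for _ in range(200_000)]
    bursts = [c for c in counts if c >= 1]  # a burst needs at least one bump
    mean_bumps = sum(bursts) / len(bursts)
    print(round(mean_bumps, 2))  # close to (k21 + k23) / k21 ~ 3.2
    ```

    The fitted time constants are consistent with this reading: 1/k32 ≈ 10.2 ms matches the bump duration, and 1/(k21 + k23) ≈ 40.8 ms approximately matches the 40.5 ms gap time constant.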

  16. Discrete Latent Markov Models for Normally Distributed Response Data

    ERIC Educational Resources Information Center

    Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.

    2005-01-01

    Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogous single-indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogous continuous-response models have hardly been considered. These…

  17. StratBAM: A Discrete-Event Simulation Model to Support Strategic Hospital Bed Capacity Decisions.

    PubMed

    Devapriya, Priyantha; Strömblad, Christopher T B; Bailey, Matthew D; Frazier, Seth; Bulger, John; Kemberling, Sharon T; Wood, Kenneth E

    2015-10-01

    The ability to accurately measure and assess current and potential health care system capacities is an issue of local and national significance. Recent joint statements by the Institute of Medicine and the Agency for Healthcare Research and Quality have emphasized the need to apply industrial and systems engineering principles to improving health care quality and patient safety outcomes. To address this need, a decision support tool was developed for planning and budgeting of current and future bed capacity, and evaluating potential process improvement efforts. The Strategic Bed Analysis Model (StratBAM) is a discrete-event simulation model created after a thorough analysis of patient flow and data from Geisinger Health System's (GHS) electronic health records. Key inputs include: timing, quantity and category of patient arrivals and discharges; unit-level length of care; patient paths; and projected patient volume and length of stay. Key outputs include: admission wait time by arrival source and receiving unit, and occupancy rates. Electronic health records were used to estimate parameters for probability distributions and to build empirical distributions for unit-level length of care and for patient paths. Validation of the simulation model against GHS operational data confirmed its ability to model real-world data consistently and accurately. StratBAM was successfully used to evaluate the system impact of forecasted patient volumes and length of stay in terms of patient wait times, occupancy rates, and cost. The model is generalizable and can be appropriately scaled for larger and smaller health care settings.
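    A bed-capacity model of this kind can be sketched with a few lines of event-driven code. The sketch below is not StratBAM (which uses empirical, unit-level distributions and patient paths from electronic health records); it assumes Poisson arrivals, exponential length of stay, and a single bed pool, purely to illustrate how wait times and occupancy fall out of a discrete-event simulation:

    ```python
    import heapq
    import random

    def simulate_beds(n_beds, arrival_rate, mean_los, horizon):
        """Minimal discrete-event sketch: arrivals compete for a fixed bed
        pool in FIFO order; each bed is represented by the time it next
        becomes free (a min-heap)."""
        free = [0.0] * n_beds            # earliest time each bed is free
        heapq.heapify(free)
        t, waits, busy_time = 0.0, [], 0.0
        while True:
            t += random.expovariate(arrival_rate)   # next patient arrival
            if t > horizon:
                break
            bed_free = heapq.heappop(free)
            start = max(t, bed_free)                # wait if no bed is free
            waits.append(start - t)
            los = random.expovariate(1.0 / mean_los)
            busy_time += los
            heapq.heappush(free, start + los)
        occupancy = busy_time / (n_beds * horizon)
        return sum(waits) / len(waits), occupancy

    random.seed(7)
    mean_wait, occ = simulate_beds(n_beds=20, arrival_rate=1.5,
                                   mean_los=12.0, horizon=10_000.0)
    print(round(mean_wait, 2), round(occ, 2))
    ```

    Swapping the exponential draws for empirical distributions estimated from health records, and splitting the bed pool by unit, moves this toy in the direction of the model described above.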

  18. Long-Term Time Variability of Thermal Emission in Jupiter

    NASA Astrophysics Data System (ADS)

    Orton, Glenn; Fletcher, Leigh; Fisher, Brendan; Yanamandra-Fisher, Padma; Greathouse, Thomas; Sinclair, James; Greco, Jennifer; Boydstun, Kimberly; Wakefield, Laura; Kim, Sonia; Fujiyoshi, Takuya

    2015-04-01

    Mid-infrared images of Jupiter's thermal emission in discrete filters between 4.8 and 24.5 μm from 1996 to the present day, spanning over a Jovian year, enable time-domain studies of its temperature field, minor-constituent distribution and cloud properties. The behavior of stratospheric (~10-mbar) and upper-tropospheric (~100-400 mbar) temperatures is generally consistent with predictions of seasonal variability. There also appear to be long-term periodicities of tropospheric temperatures, with meridionally dependent amplitudes, phases and periods. Temperatures near and south of the equator vary the least. During the 'global upheaval' or the corresponding 'revival' events that have produced dramatic changes in Jupiter's visible appearance and cloud cover, there were few large-scale variations of zonal mean temperatures in the stratosphere or troposphere, although there are colder discrete regions associated with the updraft events that marked the early stages of revivals. Changes in visible albedo during the upheavals are accompanied by increases in cloudiness at 700 mbar and higher pressures, along with increases in the ammonia-gas mixing ratio. In contrast to all these changes, the meridional distribution of the 240-mbar para-hydrogen fraction appears to be time-invariant. Jupiter also exhibits prominent temperature waves in both the upper troposphere and stratosphere that move slowly westward in System III. J. Sinclair is supported by a NASA Postdoctoral Program fellowship; J. Greco, K. Boydstun, L. Wakefield and S. Kim were supported by Caltech Summer Undergraduate Research Fellowships while resident at JPL.

  19. Discrete-event simulation of a wide-area health care network.

    PubMed Central

    McDaniel, J G

    1995-01-01

    OBJECTIVE: Predict the behavior and estimate the telecommunication cost of a wide-area message store-and-forward network for health care providers that uses the telephone system. DESIGN: A tool with which to perform large-scale discrete-event simulations was developed. Network models for star and mesh topologies were constructed to analyze the differences in performances and telecommunication costs. The distribution of nodes in the network models approximates the distribution of physicians, hospitals, medical labs, and insurers in the Province of Saskatchewan, Canada. Modeling parameters were based on measurements taken from a prototype telephone network and a survey conducted at two medical clinics. Simulation studies were conducted for both topologies. RESULTS: For either topology, the telecommunication cost of a network in Saskatchewan is projected to be less than $100 (Canadian) per month per node. The estimated telecommunication cost of the star topology is approximately half that of the mesh. Simulations predict that a mean end-to-end message delivery time of two hours or less is achievable at this cost. A doubling of the data volume results in an increase of less than 50% in the mean end-to-end message transfer time. CONCLUSION: The simulation models provided an estimate of network performance and telecommunication cost in a specific Canadian province. At the expected operating point, network performance appeared to be relatively insensitive to increases in data volume. Similar results might be anticipated in other rural states and provinces in North America where a telephone-based network is desired. PMID:7583646

  20. Relation of Parallel Discrete Event Simulation algorithms with physical models

    NASA Astrophysics Data System (ADS)

    Shchur, L. N.; Shchur, L. V.

    2015-09-01

    We extend the concept of local simulation times in parallel discrete event simulation (PDES) in order to take into account the architecture of current hardware and software in high-performance computing. We briefly review previous research on the mapping of PDES onto physical problems, and emphasise how physical results may help to predict the behaviour of parallel algorithms.

  1. Distributed decision making in action: diagnostic imaging investigations within the bigger picture.

    PubMed

    Makanjee, Chandra R; Bergh, Anne-Marie; Hoffmann, Willem A

    2018-03-01

    Decision making in the health care system - specifically with regard to diagnostic imaging investigations - occurs at multiple levels. Professional role players from various backgrounds are involved in making these decisions, from the point of referral to the outcomes of the imaging investigation. The aim of this study was to map the decision-making processes and pathways involved when patients are referred for diagnostic imaging investigations and to explore distributed decision-making events at the points of contact with patients within a health care system. A two-phased qualitative study was conducted in an academic public health complex with the district hospital as entry point. The first phase included case studies of 24 conveniently selected patients, and the second phase involved 12 focus group interviews with health care providers. Data analysis was based on Rapley's interpretation of decision making as being distributed across time, situations and actions, and including different role players and technologies. Clinical decisions incorporating imaging investigations are distributed across the three vital points of contact or decision-making events, namely the initial patient consultation, the diagnostic imaging investigation and the post-investigation consultation. Each of these decision-making events is made up of a sequence of discrete decision-making moments based on the transfer of retrospective, current and prospective information and its transformation into knowledge. This paper contributes to the understanding of the microstructural processes (the 'when' and 'where') involved in the distribution of decisions related to imaging investigations. It also highlights the interdependency in decision-making events of medical and non-medical providers within a single medical encounter. © 2017 The Authors. 
Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.

  2. The development of a simulation model of primary prevention strategies for coronary heart disease.

    PubMed

    Babad, Hannah; Sanderson, Colin; Naidoo, Bhash; White, Ian; Wang, Duolao

    2002-11-01

    This paper describes the present state of development of a discrete-event micro-simulation model for coronary heart disease prevention. The model is intended to support health policy makers in assessing the impacts on health care resources of different primary prevention strategies. For each person, a set of times to disease events, conditional on the individual's risk factor profile, is sampled from a set of probability distributions that are derived from a new analysis of the Framingham cohort study on coronary heart disease. Methods used to model changes in behavioural and physiological risk factors are discussed and a description of the simulation logic is given. The model incorporates POST (Patient Oriented Simulation Technique) simulation routines.
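    The sampling step described above can be sketched as follows. For each simulated person, a time to each disease event is drawn from a Weibull whose scale depends on the risk-factor profile, and the earliest event drives the simulation forward. The shapes, scales, and coefficients below are hypothetical placeholders, not the Framingham-derived estimates:

    ```python
    import math
    import random

    def sample_event_times(risk_factors, models):
        """Draw a time to each event from a Weibull whose scale is shifted
        by a proportional-hazards-style linear predictor (hazard factor
        exp(lp) corresponds to scale factor exp(-lp / shape))."""
        times = {}
        for event, (shape, base_scale, coefs) in models.items():
            lp = sum(coefs[k] * v for k, v in risk_factors.items())
            scale = base_scale * math.exp(-lp / shape)  # higher risk, earlier
            u = random.random()
            times[event] = scale * (-math.log(u)) ** (1.0 / shape)
        return times

    random.seed(3)
    models = {  # (shape, baseline scale in years, coefficients) -- all made up
        "mi":     (1.4, 60.0, {"age": 0.03, "smoker": 0.6}),
        "stroke": (1.2, 80.0, {"age": 0.04, "smoker": 0.3}),
    }
    person = {"age": 55, "smoker": 1}
    t = sample_event_times(person, models)
    first_event = min(t, key=t.get)  # earliest event drives the simulation
    print(first_event, round(t[first_event], 1))
    ```

    In a micro-simulation these sampled times would be re-drawn whenever a behavioural or physiological risk factor changes.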

  3. Evaluating sample allocation and effort in detecting population differentiation for discrete and continuously distributed individuals

    Treesearch

    Erin L. Landguth; Michael K. Schwartz

    2014-01-01

    One of the most pressing issues in spatial genetics concerns sampling. Traditionally, substructure and gene flow are estimated for individuals sampled within discrete populations. Because many species may be continuously distributed across a landscape without discrete boundaries, understanding sampling issues becomes paramount. Given large-scale, geographically broad...

  4. Distributed Relaxation for Conservative Discretizations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2001-01-01

    A multigrid method is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work that is a small (less than 10) multiple of the operation count in one target-grid residual evaluation. The way to achieve this efficiency is the distributed relaxation approach. TME solvers employing distributed relaxation have already been demonstrated for nonconservative formulations of high-Reynolds-number viscous incompressible and subsonic compressible flow regimes. The purpose of this paper is to provide foundations for applications of distributed relaxation to conservative discretizations. A direct correspondence between the primitive variable interpolations for calculating fluxes in conservative finite-volume discretizations and stencils of the discretized derivatives in the nonconservative formulation has been established. Based on this correspondence, one can arrive at a conservative discretization which is very efficiently solved with a nonconservative relaxation scheme and this is demonstrated for conservative discretization of the quasi one-dimensional Euler equations. Formulations for both staggered and collocated grid arrangements are considered and extensions of the general procedure to multiple dimensions are discussed.

  5. Comparative Effectiveness of Tacrolimus-Based Steroid Sparing versus Steroid Maintenance Regimens in Kidney Transplantation: Results from Discrete Event Simulation.

    PubMed

    Desai, Vibha C A; Ferrand, Yann; Cavanaugh, Teresa M; Kelton, Christina M L; Caro, J Jaime; Goebel, Jens; Heaton, Pamela C

    2017-10-01

    Corticosteroids used as immunosuppressants to prevent acute rejection (AR) and graft loss (GL) following kidney transplantation are associated with serious cardiovascular and other adverse events. Evidence from short-term randomized controlled trials suggests that many patients on a tacrolimus-based immunosuppressant regimen can withdraw from steroids without increased AR or GL risk. The objective of this study was to measure the long-term tradeoff between GL and adverse events for a heterogeneous-risk population and to determine the optimal timing of steroid withdrawal. A discrete event simulation was developed including, as events, AR, GL, myocardial infarction (MI), stroke, cytomegalovirus, and new-onset diabetes mellitus (NODM), among others. Data from the United States Renal Data System were used to estimate event-specific parametric regressions, which accounted for steroid-sparing regimen (avoidance, early 7-d withdrawal, 6-mo withdrawal, 12-mo withdrawal, and maintenance) as well as patients' demographics, immunologic risks, and comorbidities. Regression-equation results were used to derive individual time-to-event Weibull distributions, used, in turn, to simulate the course of patients over 20 y. Patients on steroid avoidance or an early-withdrawal regimen were more likely to experience AR (45.9% to 55.0% v. 33.6%, P < 0.05) and GL (51.5% to 68.8% v. 37.8%, P < 0.05) compared to patients on steroid maintenance. Patients in 6-mo and 12-mo steroid withdrawal groups were less likely to experience MI (11.1% v. 13.3%, P < 0.05), NODM (30.7% to 34.4% v. 37.7%, P < 0.05), and cardiac death (29.9% to 30.5% v. 32.4%, P < 0.05), compared to steroid maintenance. Strategies of 6- and 12-mo steroid withdrawal post-kidney transplantation are expected to reduce the rates of adverse cardiovascular events and other outcomes with no worsening of AR or GL rates compared with steroid maintenance.

  6. Improving Our Ability to Evaluate Underlying Mechanisms of Behavioral Onset and Other Event Occurrence Outcomes: A Discrete-Time Survival Mediation Model

    PubMed Central

    Fairchild, Amanda J.; Abara, Winston E.; Gottschall, Amanda C.; Tein, Jenn-Yun; Prinz, Ronald J.

    2015-01-01

    The purpose of this article is to introduce and describe a statistical model that researchers can use to evaluate underlying mechanisms of behavioral onset and other event occurrence outcomes. Specifically, the article develops a framework for estimating mediation effects with outcomes measured in discrete-time epochs by integrating the statistical mediation model with discrete-time survival analysis. The methodology has the potential to help strengthen health research by targeting prevention and intervention work more effectively as well as by improving our understanding of discretized periods of risk. The model is applied to an existing longitudinal data set to demonstrate its use, and programming code is provided to facilitate its implementation. PMID:24296470

  7. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    PubMed

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
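    The equal-probability discrete gamma can be sketched numerically: draw many rates from a Gamma(α, scale 1/α) (mean 1), split the sorted sample into K equal-probability bins, and take each bin's mean as the category rate. This Monte Carlo version is a sketch (production implementations use incomplete-gamma quantiles, not sampling), but it reproduces the effect noted above: with few categories the largest rates are underestimated.

    ```python
    import random

    def discrete_gamma_rates(alpha, k, n=200_000, seed=0):
        """Monte Carlo equal-probability discrete gamma: sort draws from
        Gamma(alpha, scale=1/alpha), which has mean 1, and take the mean of
        each of k equal-probability bins as the category rate."""
        rng = random.Random(seed)
        draws = sorted(rng.gammavariate(alpha, 1.0 / alpha) for _ in range(n))
        size = n // k
        return [sum(draws[i*size:(i+1)*size]) / size for i in range(k)]

    r4 = discrete_gamma_rates(alpha=0.5, k=4)
    r16 = discrete_gamma_rates(alpha=0.5, k=16)
    # With few categories the top rate averages over a wide tail, so the
    # fastest sites are underestimated relative to a finer grid.
    print(round(r4[-1], 2), round(r16[-1], 2))
    ```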

  8. Role of weakest links and system-size scaling in multiscale modeling of stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Ispánovity, Péter Dusán; Tüzes, Dániel; Szabó, Péter; Zaiser, Michael; Groma, István

    2017-02-01

    Plastic deformation of crystalline and amorphous matter often involves intermittent local strain burst events. To understand the physical background of the phenomenon a minimal stochastic mesoscopic model was introduced, where details of the microstructure evolution are statistically represented in terms of a fluctuating local yield threshold. In the present paper we propose a method for determining the corresponding yield stress distribution for the case of crystal plasticity from lower scale discrete dislocation dynamics simulations which we combine with weakest link arguments. The success of scale linking is demonstrated by comparing stress-strain curves obtained from the resulting mesoscopic and the underlying discrete dislocation models in the microplastic regime. As shown by various scaling relations they are statistically equivalent and behave identically in the thermodynamic limit. The proposed technique is expected to be applicable to different microstructures and also to amorphous materials.

  9. Hierarchical Discrete Event Supervisory Control of Aircraft Propulsion Systems

    DTIC Science & Technology

    2004-11-01

    Murat Yasar, Devendra Tolani, and Asok Ray, The Pennsylvania State University, University Park, Pennsylvania 16802; Neerav Shah, Glenn Research Center

  10. The identification of solar wind waves at discrete frequencies and the role of the spectral analysis techniques

    NASA Astrophysics Data System (ADS)

    Di Matteo, S.; Villante, U.

    2017-05-01

    The occurrence of waves at discrete frequencies in the solar wind (SW) parameters has been reported in the scientific literature with some controversial results, mostly concerning the existence (and stability) of favored sets of frequencies. On the other hand, the experimental results might be influenced by the analytical methods adopted for the spectral analysis. We focused attention on the fluctuations of the SW dynamic pressure (PSW) occurring in the leading edges of streams following interplanetary shocks and compared the results of the Welch method (WM) with those of the multitaper method (MTM). The results of a simulation analysis demonstrate that the identification of the wave occurrence and the frequency estimate might be strongly influenced by the signal characteristics and analytical methods, especially in the presence of multicomponent signals. In SW streams, PSW oscillations are routinely detected in the entire range f ≈ 1.2-5.0 mHz; nevertheless, the WM/MTM agreement in the identification and frequency estimate occurs in ≈50% of events and different sets of favored frequencies would be proposed for the same set of events by the WM and MTM analysis. The histogram of the frequency distribution of the events identified by both methods suggests more relevant percentages between f ≈ 1.7-1.9, f ≈ 2.7-3.4, and f ≈ 3.9-4.4 (with a most relevant peak at f ≈ 4.2 mHz). Extremely severe thresholds select a small number (14) of remarkable events, with a one-to-one correspondence between WM and MTM: interestingly, these events reveal a tendency for a favored occurrence in bins centered at f ≈ 2.9 and at f ≈ 4.2 mHz.

  11. Geomorphic and hydrologic assessment of erosion hazards at the Norman municipal landfill, Canadian River floodplain, central Oklahoma

    USGS Publications Warehouse

    Curtis, Jennifer A.; Whitney, John W.

    2003-01-01

    The Norman, Oklahoma, municipal landfill closed in 1985 after 63 years of operation, because it was identified as a point source of hazardous leachate composed of organic and inorganic compounds. The landfill is located on the floodplain of the Canadian River, a sand-bed river characterized by erodible channel boundaries and by large variation in mean monthly discharges. In 1986, floodwaters eroded riprap protection at the southern end of the landfill and penetrated the landfill's clay cap, thereby exposing the landfill contents. The impact of this moderate-magnitude flood event (Q12) was the catalyst to investigate erosion hazards at the Norman landfill. This geomorphic investigation analyzed floodplain geomorphology and historical channel changes, flood-frequency distributions, an erosion threshold, the geomorphic effectiveness of discharge events, and other factors that influence erosion hazards at the landfill site. The erosion hazard at the Norman landfill is a function of the location of the landfill with respect to the channel thalweg, erosional resistance of the channel margins, magnitude and duration of discrete discharge events, channel form and hydraulic geometry, and cumulative effects related to a series of discharge events. Based on current climatic conditions and historical channel changes, a minimum erosion threshold is set at bankfull discharge (Q = 572 m³/s). The annual probability of exceeding this threshold is 0.53. In addition, this analysis indicates that peak stream power is less informative than total energy expenditures when estimating the erosion potential or geomorphic effectiveness of discrete discharge events. On the Canadian River, long-duration, moderate-magnitude floods can have larger total energy expenditures than shorter-duration, high-magnitude floods and therefore represent the most serious erosion hazard to floodplain structures.

  12. SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between the dynamic subtask level and the static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first distribution was a uniform, or uninformed, distribution that assumed the frequency of each PSF level was equally likely. The second, non-continuous distribution took the frequency of each PSF level as identified from an assessment of the HERA database. These two approaches were created to identify the resulting distribution of the HEP. The resulting HEP that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average, and maximum HFE calculations applied. To calculate these three values, three events, A, B, and C, are generated from the PSF level frequencies comprised of subtasks. The median HFE selects the median PSF level from each PSF and calculates the HEP; the average HFE takes the mean PSF level, and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. Next, the observed frequencies of PSF levels were applied, with the resulting HEP behaving log-normally with a majority of values under 2.5% HEP. The median, average, and maximum HFE calculations yielded different answers for the HFE. The HFE maximum grossly overestimates the HFE, while the HFE distribution lies below the median HFE and above the average HFE. Conclusions: Dynamic task modeling can be pursued through the framework of SPAR-H. The distribution associated with each PSF needs to be identified, and may change depending upon the scenario. However, it is very unlikely that each PSF level is equally frequent, as the resulting HEP distribution is strongly centered at 100%, which is unrealistic. Other distributions may need to be identified for PSFs to facilitate the transition to dynamic task modeling. Additionally, discrete distributions need to be exchanged for continuous ones so that simulations of the HFE can advance further. This paper provides a method to explore dynamic subtask-to-task translation and provides examples of the process using the SPAR-H method.
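    The effect of the PSF-level frequency assumption can be sketched numerically. In SPAR-H the HEP is a nominal error probability multiplied by PSF multipliers; the multipliers and level frequencies below are hypothetical placeholders (not the SPAR-H tables or HERA frequencies), chosen only to show that assuming uniformly likely PSF levels inflates the aggregated HEP relative to empirically weighted levels:

    ```python
    import random

    NOMINAL_HEP = 1e-3
    # Hypothetical multipliers for four PSF levels (illustrative only):
    LEVELS = [0.5, 1.0, 2.0, 10.0]

    def sample_hep(level_weights, n_psf=3, rng=random):
        """SPAR-H-style HEP: nominal HEP times one sampled multiplier per
        PSF, capped at 1.0."""
        hep = NOMINAL_HEP
        for _ in range(n_psf):
            hep *= rng.choices(LEVELS, weights=level_weights)[0]
        return min(hep, 1.0)

    random.seed(5)
    uniform = [sample_hep([1, 1, 1, 1]) for _ in range(50_000)]
    observed = [sample_hep([1, 6, 2, 1]) for _ in range(50_000)]  # mostly nominal
    mean_u = sum(uniform) / len(uniform)
    mean_o = sum(observed) / len(observed)
    print(f"{mean_u:.2e} {mean_o:.2e}")
    ```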

  13. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies of the two types of devices. In this paper, we first apply convex relaxation to the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposing any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.

  14. Nonlinear Control and Discrete Event Systems

    NASA Technical Reports Server (NTRS)

    Meyer, George; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    As the operation of large systems becomes ever more dependent on extensive automation, the need for an effective solution to the problem of design and validation of the underlying software becomes more critical. Large systems possess much detailed structure, typically hierarchical, and they are hybrid. Information processing at the top of the hierarchy is by means of formal logic and sentences; at the bottom it is by means of simple scalar differential equations and functions of time; and in the middle it is by an interacting mix of nonlinear multi-axis differential equations and automata, and functions of time and discrete events. The lecture will address the overall problem as it relates to flight vehicle management, describe the middle level, and offer a design approach that is based on Differential Geometry and Discrete Event Dynamic Systems Theory.

  15. An accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems using gradient-based diffusion and tau-leaping.

    PubMed

    Koh, Wonryull; Blackwell, Kim T

    2011-04-21

    Stochastic simulation of reaction-diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies.
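    As a minimal illustration of tau-leaping (a sketch of the basic non-spatial method, not the paper's gradient-based reaction-diffusion algorithm), the number of firings of a reaction during a leap of length τ is drawn from a Poisson with mean a·τ, where a is the propensity; the crude `max(..., 0)` clamp below stands in for the non-negative Poisson safeguards the paper builds on:

    ```python
    import math
    import random

    def poisson(lam, rng):
        """Knuth's algorithm; adequate for the small means used here."""
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p < L:
                return k
            k += 1

    def tau_leap_decay(n0, c, tau, t_end, seed=0):
        """Tau-leaping for the single reaction A -> 0 with rate constant c.
        Propensity a = c * n; firings per leap ~ Poisson(a * tau)."""
        rng = random.Random(seed)
        n, t = n0, 0.0
        while t < t_end and n > 0:
            firings = poisson(c * n * tau, rng)
            n = max(n - firings, 0)   # never let the population go negative
            t += tau
        return n

    # Decay A -> 0: the exact process has mean n0 * exp(-c * t_end).
    runs = [tau_leap_decay(n0=1000, c=0.1, tau=0.05, t_end=10.0, seed=s)
            for s in range(200)]
    avg = sum(runs) / len(runs)
    print(round(avg, 1))   # near 1000 * exp(-1) ~ 368
    ```

    The leap condition is respected here because c·n·τ changes little within one leap; the paper's contribution is to choose such leaps jointly for reaction and diffusion channels, with diffusion sampled only along net concentration gradients.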

  16. A non-orthogonal decomposition of flows into discrete events

    NASA Astrophysics Data System (ADS)

    Boxx, Isaac; Lewalle, Jacques

    1998-11-01

    This work is based on the formula for the inverse Hermitian wavelet transform. A signal can be interpreted as a (non-unique) superposition of near-singular, partially overlapping events arising from Dirac functions and/or their derivatives combined with diffusion. (No dynamics is implied: the dimensionless diffusion is related to the definition of the analyzing wavelets.) These events correspond to local maxima of spectral energy density. We successfully fitted model events of various orders on a succession of fields, ranging from elementary signals to one-dimensional hot-wire traces. We document edge effects, event overlap, and their implications for the algorithm. The interpretation of the discrete singularities as flow events (such as coherent structures) and the fundamental non-uniqueness of the decomposition are discussed. The dynamics of these events will be examined in the companion paper.

  17. Estimating the proportion of true null hypotheses when the statistics are discrete.

    PubMed

    Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S

    2015-07-15

    In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic, such as a table margin, that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved.
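
    For contrast with the discrete-statistic estimators this record introduces, a minimal sketch of a standard continuous-case estimator (Storey's lambda-threshold estimator, not the paper's regression or 'T' methods) makes the uniform-null assumption explicit; it is exactly this assumption that fails for discrete P values:

```python
def storey_pi0(pvals, lam=0.5):
    """Storey's estimator: pi0_hat = #{p > lam} / ((1 - lam) * m).

    Assumes null P values are Uniform(0, 1), so that roughly a fraction
    (1 - lam) of them land above lam. For discrete test statistics the
    null P values are neither uniform nor identically distributed, and
    this estimator can be badly biased.
    """
    m = len(pvals)
    tail = sum(1 for p in pvals if p > lam)
    return min(1.0, tail / ((1.0 - lam) * m))
```

    With mostly-small alternative P values and a uniform-ish null block, the estimate tracks the null fraction; with the clumped P values of discrete tests, it need not.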

  18. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in, e.g., forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory, such that inference is easily made using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. We show how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This is done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions, using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations can be performed on a normal computer. The method is implemented in the freely available open source software R, which is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
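
    The discrete Laplace distribution has the simple pmf P(X = x) = (1 - p)/(1 + p) * p^|x| for integer x and 0 < p < 1, and the mixture described above multiplies marginal pmfs across loci within each subpopulation. A minimal sketch follows; the parameter names are illustrative, and in practice the centers, dispersions and weights would be fitted with the EM algorithm rather than supplied by hand.

```python
def discrete_laplace_pmf(x, p):
    """P(X = x) = (1 - p) / (1 + p) * p**abs(x) for integer x, 0 < p < 1."""
    return (1.0 - p) / (1.0 + p) * p ** abs(x)

def haplotype_prob(haplotype, centers, dispersions, weights):
    """Mixture of multivariate, marginally independent discrete Laplace
    distributions: sum_k w_k * prod_j pmf(x_j - y_kj; p_kj).

    haplotype   - tuple of repeat counts, one per locus
    centers     - per-subpopulation central haplotypes y_k
    dispersions - per-subpopulation, per-locus parameters p_kj
    weights     - subpopulation mixture weights w_k (sum to 1)
    """
    total = 0.0
    for center, disp, w in zip(centers, dispersions, weights):
        term = w
        for x, y, p in zip(haplotype, center, disp):
            term *= discrete_laplace_pmf(x - y, p)
        total += term
    return total
```

    The pmf sums to one over the integers, since sum over x of p^|x| equals (1 + p)/(1 - p).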

  19. Cross-Paradigm Simulation Modeling: Challenges and Successes

    DTIC Science & Technology

    2011-12-01

    is also highlighted. 2.1 Discrete-Event Simulation Discrete-event simulation (DES) is a modeling method for stochastic, dynamic models where...which almost anything can be coded; models can be incredibly detailed. Most commercial DES software has a graphical interface which allows the user to...results. Although the above definition is the commonly accepted definition of DES, there are two different worldviews that dominate DES modeling today: a

  20. Discretising the velocity distribution for directional dark matter experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavanagh, Bradley J., E-mail: bradley.kavanagh@cea.fr

    2015-07-01

    Dark matter (DM) direct detection experiments which are directionally-sensitive may be the only method of probing the full velocity distribution function (VDF) of the Galactic DM halo. We present an angular basis for the DM VDF which can be used to parametrise the distribution in order to mitigate astrophysical uncertainties in future directional experiments and extract information about the DM halo. This basis consists of discretising the VDF in a series of angular bins, with the VDF being only a function of the DM speed v within each bin. In contrast to other methods, such as spherical harmonic expansions, the use of this basis allows us to guarantee that the resulting VDF is everywhere positive and therefore physical. We present a recipe for calculating the event rates corresponding to the discrete VDF for an arbitrary number of angular bins N and investigate the discretisation error which is introduced in this way. For smooth, Standard Halo Model-like distribution functions, only N=3 angular bins are required to achieve an accuracy of around 10–30% in the number of events in each bin. Shortly after confirmation of the DM origin of the signal with around 50 events, this accuracy should be sufficient to allow the discretised velocity distribution to be employed reliably. For more extreme VDFs (such as streams), the discretisation error is typically much larger, but can be improved with increasing N. This method paves the way towards an astrophysics-independent analysis framework for the directional detection of dark matter.
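
    The angular discretisation described in this record can be sketched with a simple binning function. Taking N bins of equal width in theta, the angle between a DM velocity and a reference direction (such as the Earth's motion through the halo), is one possible choice made here for illustration; the function name and layout are likewise assumptions, not the paper's code.

```python
import math

def angular_bin(v, e_ref, n_bins):
    """Assign velocity vector v to one of n_bins bins of equal width in
    theta, the angle between v and the reference direction e_ref.
    Within each bin the discretised VDF depends only on the speed |v|."""
    dot = sum(a * b for a, b in zip(v, e_ref))
    norm_v = math.sqrt(sum(a * a for a in v))
    norm_e = math.sqrt(sum(a * a for a in e_ref))
    cos_theta = max(-1.0, min(1.0, dot / (norm_v * norm_e)))
    theta = math.acos(cos_theta)
    return min(int(n_bins * theta / math.pi), n_bins - 1)
```

    Because each bin simply collects probability mass, the discretised VDF is non-negative by construction, which is the key advantage over spherical harmonic expansions noted in the abstract.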

  1. Discretising the velocity distribution for directional dark matter experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavanagh, Bradley J.; School of Physics & Astronomy, University of Nottingham, University Park, Nottingham, NG7 2RD

    2015-07-13

    Dark matter (DM) direct detection experiments which are directionally-sensitive may be the only method of probing the full velocity distribution function (VDF) of the Galactic DM halo. We present an angular basis for the DM VDF which can be used to parametrise the distribution in order to mitigate astrophysical uncertainties in future directional experiments and extract information about the DM halo. This basis consists of discretising the VDF in a series of angular bins, with the VDF being only a function of the DM speed v within each bin. In contrast to other methods, such as spherical harmonic expansions, the use of this basis allows us to guarantee that the resulting VDF is everywhere positive and therefore physical. We present a recipe for calculating the event rates corresponding to the discrete VDF for an arbitrary number of angular bins N and investigate the discretisation error which is introduced in this way. For smooth, Standard Halo Model-like distribution functions, only N=3 angular bins are required to achieve an accuracy of around 10–30% in the number of events in each bin. Shortly after confirmation of the DM origin of the signal with around 50 events, this accuracy should be sufficient to allow the discretised velocity distribution to be employed reliably. For more extreme VDFs (such as streams), the discretisation error is typically much larger, but can be improved with increasing N. This method paves the way towards an astrophysics-independent analysis framework for the directional detection of dark matter.

  2. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  3. PREFACE: 4th Symposium on Prospects in the Physics of Discrete Symmetries (DISCRETE2014)

    NASA Astrophysics Data System (ADS)

    Di Domenico, Antonio; Mavromatos, Nick E.; Mitsou, Vasiliki A.; Skliros, Dimitri P.

    2015-07-01

    DISCRETE 2014, the Fourth Symposium on Prospects in the Physics of Discrete Symmetries, took place at King's College London, Strand Campus, London WC2R 2LS, from Tuesday, December 2, 2014 until Saturday, December 6, 2014. This is the fourth edition of the DISCRETE conference series, a biennial event previously held in Valencia (Discrete'08), Rome (Discrete2010) and Lisbon (Discrete2012). The topics covered at the DISCRETE series of conferences are: T, C, P, CP symmetries; accidental symmetries (B, L conservation); CPT symmetry, decoherence and entangled states; Lorentz symmetry breaking (phenomenology and current bounds); neutrino mass and mixing; implications for cosmology and astroparticle physics; dark matter searches; and experimental prospects at the LHC and new facilities. In DISCRETE 2014 we also introduced two new topics: cosmological aspects of non-commutative space-times, and PT-symmetric Hamiltonians (non-Hermitian but with real eigenvalues), a topic that has wide applications in particle physics and beyond. The conference was opened by the King's College London Vice Principal for Research and Innovation, Mr Chris Mottershead, followed by a welcome address by the Chair of DISCRETE 2014 (Professor Nick E. Mavromatos). After these introductory talks, the scientific programme of the DISCRETE 2014 symposium started. Following the tradition of the DISCRETE series of conferences, the talks (138 in total) were divided into plenary review talks (25), invited research talks (50) and shorter presentations (63), selected by the conveners of each session, in consultation with the organisers, from the submitted abstracts. We were fortunate to have very high-quality, thought-stimulating and interesting talks at all levels, which, together with the discussions among the participants, made the conference quite enjoyable. There were 152 registered participants for the event.

  4. A full-angle Monte-Carlo scattering technique including cumulative and single-event Rutherford scattering in plasmas

    NASA Astrophysics Data System (ADS)

    Higginson, Drew P.

    2017-11-01

    We describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3-0.7; the upper limit corresponds to a Coulomb logarithm of 20-2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.

  5. Interpreting Significant Discrete-Time Periods in Survival Analysis.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Denson, Kathleen B.

    Discrete-time survival analysis is a new method for educational researchers to employ when looking at the timing of certain educational events. Previous continuous-time methods do not allow for the flexibility inherent in a discrete-time method. Because both time-invariant and time-varying predictor variables can now be used, the interaction of…
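
    A common concrete setup for discrete-time survival analysis (a generic person-period expansion, not necessarily the specific procedure of the work described above) turns each subject into one row per time period, so that both time-invariant and time-varying predictors can enter a hazard model such as a logistic regression. The field names below are illustrative.

```python
def person_period(subjects):
    """Expand (id, event_time, censored, covariates) records into
    person-period rows for discrete-time survival analysis.

    Each subject contributes one row per period up to its event or
    censoring time; the event indicator is 1 only in the period the
    event actually occurred (never for censored subjects).
    """
    rows = []
    for sid, t_event, censored, covariates in subjects:
        for t in range(1, t_event + 1):
            event = int(t == t_event and not censored)
            rows.append({"id": sid, "period": t, "event": event, **covariates})
    return rows
```

    Fitting a logistic model of `event` on `period` dummies plus covariates over these rows then estimates the discrete-time hazard, and time-varying predictors are handled simply by letting the covariate values differ across a subject's rows.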

  6. Modeling of Stick-Slip Behavior in Sheared Granular Fault Gouge Using the Combined Finite-Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Ke; Euser, Bryan J.; Rougier, Esteban

    Sheared granular layers undergoing stick-slip behavior are broadly employed to study the physics and dynamics of earthquakes. In this paper, a two-dimensional implementation of the combined finite-discrete element method (FDEM), which merges the finite element method (FEM) and the discrete element method (DEM), is used to explicitly simulate a sheared granular fault system including both gouge and plate, and to investigate the influence of different normal loads on seismic moment, macroscopic friction coefficient, kinetic energy, gouge layer thickness, and recurrence time between slips. In the FDEM model, the deformation of plates and particles is simulated using the FEM formulation while particle-particle and particle-plate interactions are modeled using DEM-derived techniques. The simulated seismic moment distributions are generally consistent with those obtained from the laboratory experiments. In addition, the simulation results demonstrate that with increasing normal load, (i) the kinetic energy of the granular fault system increases; (ii) the gouge layer thickness shows a decreasing trend; and (iii) the macroscopic friction coefficient does not experience much change. Analyses of the slip events reveal that, as the normal load increases, more slip events with large kinetic energy release and longer recurrence time occur, and the magnitude of gouge layer thickness decrease also tends to be larger, while the macroscopic friction coefficient drop decreases. Finally, the simulations not only reveal the influence of normal loads on the dynamics of sheared granular fault gouge, but also demonstrate the capabilities of FDEM for studying stick-slip dynamic behavior of granular fault systems.

  7. Modeling of Stick-Slip Behavior in Sheared Granular Fault Gouge Using the Combined Finite-Discrete Element Method

    DOE PAGES

    Gao, Ke; Euser, Bryan J.; Rougier, Esteban; ...

    2018-06-20

    Sheared granular layers undergoing stick-slip behavior are broadly employed to study the physics and dynamics of earthquakes. In this paper, a two-dimensional implementation of the combined finite-discrete element method (FDEM), which merges the finite element method (FEM) and the discrete element method (DEM), is used to explicitly simulate a sheared granular fault system including both gouge and plate, and to investigate the influence of different normal loads on seismic moment, macroscopic friction coefficient, kinetic energy, gouge layer thickness, and recurrence time between slips. In the FDEM model, the deformation of plates and particles is simulated using the FEM formulation while particle-particle and particle-plate interactions are modeled using DEM-derived techniques. The simulated seismic moment distributions are generally consistent with those obtained from the laboratory experiments. In addition, the simulation results demonstrate that with increasing normal load, (i) the kinetic energy of the granular fault system increases; (ii) the gouge layer thickness shows a decreasing trend; and (iii) the macroscopic friction coefficient does not experience much change. Analyses of the slip events reveal that, as the normal load increases, more slip events with large kinetic energy release and longer recurrence time occur, and the magnitude of gouge layer thickness decrease also tends to be larger, while the macroscopic friction coefficient drop decreases. Finally, the simulations not only reveal the influence of normal loads on the dynamics of sheared granular fault gouge, but also demonstrate the capabilities of FDEM for studying stick-slip dynamic behavior of granular fault systems.

  8. Harmonic Fourier beads method for studying rare events on rugged energy surfaces.

    PubMed

    Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L

    2006-11-07

    We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points (beads) to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in gas phase.

  9. Disaster Response Modeling Through Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Wang, Jeffrey; Gilmer, Graham

    2012-01-01

    Organizations today are required to plan against a rapidly changing, high-cost environment. This is especially true for first responders to disasters and other incidents, where critical decisions must be made in a timely manner to save lives and resources. Discrete-event simulations enable organizations to make better decisions by visualizing complex processes and the impact of proposed changes before they are implemented. A discrete-event simulation using Simio software has been developed to effectively analyze and quantify the imagery capabilities of domestic aviation resources conducting relief missions. This approach has helped synthesize large amounts of data to better visualize process flows, manage resources, and pinpoint capability gaps and shortfalls in disaster response scenarios. Simulation outputs and results have supported decision makers in the understanding of high risk locations, key resource placement, and the effectiveness of proposed improvements.

  10. Small-scale plasticity critically needs a new mechanics description

    NASA Astrophysics Data System (ADS)

    Ngan, Alfonso H. W.

    2013-06-01

    Continuum constitutive laws describe the plastic deformation of materials as a smooth, continuously differentiable process. However, provided that the measurement is done with a fine enough resolution, the plastic deformation of real materials is often found to comprise discrete events, usually nanometric in size. For bulk-sized specimens, such nanoscale events are minute compared with the specimen size, and so their associated strain changes are negligibly small; this is why the continuum laws work well. However, when the specimen size is in the micrometer scale or smaller, the strain changes due to the discrete events can be significant, and the continuum description becomes highly unsatisfactory. Yet, because of the advent of microtechnology and nanotechnology, small-sized materials will be increasingly used, and so there is a strong need to develop suitable replacement descriptions for the plasticity of small materials. As the occurrence of the discrete plastic events is also strongly stochastic, a satisfactory description should be probabilistic, rather than deterministic, in nature.

  11. Synchronous parallel system for emulation and discrete event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    1992-01-01

    A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to state variables of the simulation object attributable to the event object, and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring the events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
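
    The local and global event horizon computation described in this record can be sketched in miniature. The data layout (a list of pending event time stamps per node) and the function names are illustrative assumptions; a full Time Warp kernel additionally manages rollback, anti-messages, and state restoration.

```python
def global_event_horizon(pending):
    """pending: one list of newly generated event time stamps per node.

    A node's local event horizon is the earliest time stamp among its
    new events; the global event horizon is the earliest local horizon
    across all nodes."""
    local = [min(events) for events in pending if events]
    return min(local) if local else float("inf")

def safe_events(pending):
    """Events at or before the global horizon are safe to commit: no
    message still in flight anywhere can carry an earlier time stamp."""
    horizon = global_event_horizon(pending)
    return [[t for t in events if t <= horizon] for events in pending]
```

    Computing the horizon before transmitting messages or committing state changes is what lets the system discard superseded changes without rollback, as the abstract describes.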

  12. Synchronous Parallel System for Emulation and Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    2001-01-01

    A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to the state variables of the simulation object attributable to the event object and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.

  13. Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing

    NASA Technical Reports Server (NTRS)

    Jones, Robert L.; Goode, Plesent W. (Technical Monitor)

    2000-01-01

    The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient, stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has the added benefit in modeling fidelity stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.

  14. A Discrete Events Delay Differential System Model for Transmission of Vancomycin-Resistant Enterococcus (VRE) in Hospitals

    DTIC Science & Technology

    2010-09-19

    estimated directly from the surveillance data. Infection control measures were implemented in the form of health care worker hand hygiene before and after...hospital infections, is used to motivate possibilities of modeling nosocomial infection dynamics. This is done in the context of hospital monitoring and...model development. Key Words: Delay equations, discrete events, nosocomial infection dynamics, surveillance data, inverse problems, parameter

  15. Program For Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.

  16. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    PubMed

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

    The intra-cellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small, and calcium concentrations so low, that one extra molecule diffusing in by chance can make a nontrivial difference in concentration (percentage-wise). These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture behavior at a molecular level. Our research focuses on the development of a high performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed in order to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.

  17. THE RELATIONSHIP BETWEEN THE SEPTEMBER 2017 MARS GLOBAL AURORA EVENT AND CRUSTAL MAGNETIC FIELDS

    NASA Astrophysics Data System (ADS)

    Nasr, Camella-Rosa; Schneider, Nick; Connour, Kyle; Jain, Sonal; Deighan, Justin; Jakosky, Bruce; MAVEN/IUVS Team

    2018-01-01

    In September 2017, the Imaging UltraViolet Spectrograph (IUVS) on the MAVEN spacecraft observed global aurora on Mars caused by a surprisingly strong solar energetic particle event. Widespread “diffuse aurora” have previously been detected on Mars through more limited observations (Schneider et al., Science 350, 2015; DOI: 10.1126/science.aad0313), but recent observations established complete coverage of the observable portion of Mars’ nightside. The aurora was global due to Mars’ lack of a global magnetic field, which allowed energetic electrons from the Sun to directly precipitate into the atmosphere. On September 11th, IUVS detected aurora more than 25 times brighter than any prior IUVS observation, with high-SNR detections of aurora at the limb and against the disk of the planet. Fainter auroral emission was seen around the nightside limb over 13 orbits spanning nearly 3 days. On September 14th, during the declining phase of the event, the spacecraft detected faint linear features and patches above the noise floor, with a spatial distribution similar to the “discrete aurora” patches observed on Mars by the SPICAM instrument on the Mars Express spacecraft (Bertaux et al., Nature 435, doi:10.1038/nature03603). Discrete aurora occur near areas of the crust affected by the magnetism left over from Mars’ once-strong dipole field. Emission is limited to regions of the crustal magnetic field where the field lines are likely to be open to solar wind interactions. Those regions are concentrated in Mars’ southern hemisphere, centered on 180 degrees east longitude. We studied the localized emissions on 14 September to determine whether there might be a connection between the observed diffuse aurora event and discrete auroral processes. First, we investigated the localized emissions to confirm that the observed signal was consistent with expected auroral spectra.
Second, their locations were projected on a map of the crustal magnetic fields to determine if they occurred near open magnetic field lines. We will report on the results of these two studies, and the ramifications for Mars auroral processes.

  18. Quantifying Discrete Fracture Network Connectivity in Hydraulic Fracturing Stimulation

    NASA Astrophysics Data System (ADS)

    Urbancic, T.; Ardakani, E. P.; Baig, A.

    2017-12-01

    Hydraulic fracture stimulations generally result in microseismicity that is associated with the activation or extension of pre-existing microfractures and discontinuities. Microseismic events acquired under 3D downhole sensor coverage provide accurate event locations outlining hydraulic fracture growth. Combined with source characteristics, these events provide a high quality input for seismic moment tensor inversion and, eventually, for constructing the representative discrete fracture network (DFN). In this study, we investigate the strain and stress state, identified fracture orientations, and DFN connectivity and performance for example stages in a multistage perf-and-plug completion in a North American shale play. We use topology, a familiar concept in many areas of structural geology, to further describe the relationships between the activated fractures and their effectiveness in enhancing permeability. We explore how local perturbations of the stress state lead to the activation of different fracture sets and how that affects DFN interaction and complexity. In particular, we observe that a more heterogeneous stress state shows a higher percentage of sub-horizontal fractures or bedding plane slips. Based on topology, the fractures are evenly distributed from the injection point, with decreasing numbers of connections with distance. The dimensionless measures of connections per branch and connections per line are used for quantifying the DFN connectivity. In order to connect the concept of connectivity back to productive volume and stimulation efficiency, the connectivity is compared with the character of deformation in the reservoir as deduced from the collective behavior of microseismicity using robustly determined source parameters.

  19. Unconditional security proof of long-distance continuous-variable quantum key distribution with discrete modulation.

    PubMed

    Leverrier, Anthony; Grangier, Philippe

    2009-05-08

    We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.

  20. Nonlinear dynamic failure process of tunnel-fault system in response to strong seismic event

    NASA Astrophysics Data System (ADS)

    Yang, Zhihua; Lan, Hengxing; Zhang, Yongshuang; Gao, Xing; Li, Langping

    2013-03-01

    Strong earthquakes and faults have a significant effect on the stability of underground tunnel structures. This study used a 3-dimensional discrete element model and real ground-motion records from the Wenchuan earthquake to investigate the dynamic response of a tunnel-fault system. The typical tunnel-fault system was composed of one planned railway tunnel and one seismically active fault. The discrete numerical model was carefully calibrated by comparing field survey and numerical results for ground motion. It was then used to examine detailed quantitative information on the dynamic response characteristics of the tunnel-fault system, including stress distribution, strain, vibration velocity and the tunnel failure process. The intensive tunnel-fault interaction during seismic loading induces dramatic stress redistribution and stress concentration at the intersection of the tunnel and fault. The tunnel-fault system behavior is characterized by a complicated nonlinear dynamic failure process in response to a real strong seismic event. It can be qualitatively divided into 5 main stages in terms of its stress, strain and rupturing behaviors: (1) strain localization, (2) rupture initiation, (3) rupture acceleration, (4) spontaneous rupture growth and (5) stabilization. This study provides insight into further stability estimation of underground tunnel structures under the combined effect of strong earthquakes and faults.

  1. Discrete Event Supervisory Control Applied to Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Shah, Neerav

    2005-01-01

    The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory control, which is implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower level controllers. This structure can be expanded to control huge, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers--each representing a module of the overall propulsion system--to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of the engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
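
    A supervisory controller of the kind described above can be pictured as a finite-state automaton that enables or disables controllable events depending on its current state. The minimal Python sketch below illustrates that idea only; the states, events, and engine-life scenario are invented for illustration and are not taken from the NASA testbed.

```python
# Minimal sketch of a discrete event supervisor modeled as a finite-state
# automaton. The states, events, and engine-life scenario are invented for
# illustration; they are not taken from the NASA testbed described above.

class Supervisor:
    def __init__(self, transitions, initial, disabled):
        self.transitions = transitions   # {(state, event): next_state}
        self.state = initial
        self.disabled = disabled         # {state: {disabled controllable events}}

    def permits(self, event):
        # An event is allowed unless the current state disables it.
        return event not in self.disabled.get(self.state, set())

    def observe(self, event):
        # Track the plant's move; undefined transitions keep the state.
        if not self.permits(event):
            return False
        self.state = self.transitions.get((self.state, event), self.state)
        return True

# Toy life-extending policy: after 'high_load' the supervisor disables
# 'increase_thrust' until a 'cooldown' event is observed.
sup = Supervisor(
    transitions={("normal", "high_load"): "stressed",
                 ("stressed", "cooldown"): "normal"},
    initial="normal",
    disabled={"stressed": {"increase_thrust"}},
)
assert sup.permits("increase_thrust")
sup.observe("high_load")
assert not sup.permits("increase_thrust")
sup.observe("cooldown")
assert sup.permits("increase_thrust")
```

    Nesting, as described in the abstract, would amount to one such automaton supervising the observable events of several lower-level ones.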

  2. Suboptimal distributed control and estimation: application to a four coupled tanks system

    NASA Astrophysics Data System (ADS)

    Orihuela, Luis; Millán, Pablo; Vivas, Carlos; Rubio, Francisco R.

    2016-06-01

    The paper proposes an innovative estimation and control scheme that enables the distributed monitoring and control of large-scale processes. The proposed approach considers a discrete linear time-invariant process controlled by a network of agents that may both collect information about the evolution of the plant and apply control actions to drive its behaviour. The problem makes full sense when local observability/controllability is not assumed and the communication between agents can be exploited to reach system-wide goals. Additionally, to reduce agents' bandwidth requirements and power consumption, an event-based communication policy is studied. The design procedure guarantees system stability, allowing the designer to trade-off performance, control effort and communication requirements. The obtained controllers and observers are implemented in a fully distributed fashion. To illustrate the performance of the proposed technique, experimental results on a quadruple-tank process are provided.
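
    The event-based communication policy mentioned above can be illustrated with a generic send-on-delta rule: an agent transmits its local value only when it has drifted from the last transmitted value by more than a threshold. The sketch below is an assumption for illustration, not the paper's actual trigger condition.

```python
# Generic send-on-delta illustration of event-based communication: an agent
# transmits only when its value drifts from the last transmitted value by
# more than a threshold (an illustrative rule, not the paper's design).

def event_triggered_trace(signal, threshold):
    """Return the sample indices at which a transmission event fires."""
    sent, last = [], None
    for k, x in enumerate(signal):
        if last is None or abs(x - last) > threshold:
            sent.append(k)   # broadcast and remember what was sent
            last = x
    return sent

# A slowly drifting measurement: only 3 of 7 samples are transmitted.
signal = [0.0, 0.05, 0.12, 0.31, 0.33, 0.34, 0.70]
assert event_triggered_trace(signal, threshold=0.25) == [0, 3, 6]
```

    The threshold is exactly the performance/bandwidth trade-off knob the abstract refers to: a larger threshold means fewer messages but a staler view of the plant at the other agents.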

  3. Profiling the metabolic signals involved in chemical communication between microbes using imaging mass spectrometry.

    PubMed

    Stasulli, Nikolas M; Shank, Elizabeth A

    2016-11-01

    The ability of microbes to secrete bioactive chemical signals into their environment has been known for over a century. However, it is only in the last decade that imaging mass spectrometry has provided us with the ability to directly visualize the spatial distributions of these microbial metabolites. This technology involves collecting mass spectra from multiple discrete locations across a biological sample, yielding chemical ‘maps’ that simultaneously reveal the distributions of hundreds of metabolites in two dimensions. Advances in microbial imaging mass spectrometry summarized here have included the identification of novel strain- or coculture-specific compounds, the visualization of biotransformation events (where one metabolite is converted into another by a neighboring microbe), and the implementation of a method to reconstruct the 3D subsurface distributions of metabolites, among others. Here we review the recent literature and discuss how imaging mass spectrometry has spurred novel insights regarding the chemical consequences of microbial interactions.

  4. The detection and analysis of point processes in biological signals

    NASA Technical Reports Server (NTRS)

    Anderson, D. J.; Correia, M. J.

    1977-01-01

    A pragmatic approach to the detection and analysis of discrete events in biomedical signals is taken. Examples from both clinical and basic research are provided. Introductory sections discuss not only discrete events which are easily extracted from recordings by conventional threshold detectors but also events embedded in other information carrying signals. The primary considerations are factors governing event-time resolution and the effects limits to this resolution have on the subsequent analysis of the underlying process. The analysis portion describes tests for qualifying the records as stationary point processes and procedures for providing meaningful information about the biological signals under investigation. All of these procedures are designed to be implemented on laboratory computers of modest computational capacity.

  5. A New ’Availability-Payment’ Model for Pricing Performance-Based Logistics Contracts

    DTIC Science & Technology

    2014-04-30

    maintenance network connected to the inventory and Original Equipment Manufacturer (OEM) used in this paper. The input to the Petri net in Figure 2 is the...contract structures. The model developed in this paper uses an affine controller to drive a discrete event simulator (Petri net) that produces availability and cost measures. The model is used to explore the optimum availability assessment

  6. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    NASA Technical Reports Server (NTRS)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year of an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.

  7. Multiple Autonomous Discrete Event Controllers for Constellations

    NASA Technical Reports Server (NTRS)

    Esposito, Timothy C.

    2003-01-01

    The Multiple Autonomous Discrete Event Controllers for Constellations (MADECC) project is an effort within the National Aeronautics and Space Administration Goddard Space Flight Center's (NASA/GSFC) Information Systems Division to develop autonomous positioning and attitude control for constellation satellites. It will be accomplished using traditional control theory and advanced coordination algorithms developed by the Johns Hopkins University Applied Physics Laboratory (JHU/APL). This capability will be demonstrated in the discrete event control test-bed located at JHU/APL. This project will be modeled for the Leonardo constellation mission, but is intended to be adaptable to any constellation mission. To develop a common software architecture, the controllers will only model very high-level responses. For instance, after determining that a maneuver must be made, the MADECC system will output a ΔV (velocity change) value. Lower-level systems must then decide which thrusters to fire and for how long to achieve that ΔV.

  8. Lyapunov Stability of Fuzzy Discrete Event Systems

    NASA Astrophysics Data System (ADS)

    Liu, Fuchun; Qiu, Daowen

    Fuzzy discrete event systems (FDESs), as a generalization of (crisp) discrete event systems (DESs), may better deal with problems of fuzziness, impreciseness, and subjectivity. The theory of FDESs was developed by Qiu, by Cao and Ying, and by Liu and Qiu. As a continuation of Qiu's work, this paper deals with the Lyapunov stability of FDESs, generalizing some main results for crisp DESs. We formalize the notion of reachability of fuzzy states defined on a metric space. A linear algorithm for computing the r-reachable fuzzy state set is presented. We then introduce definitions of stability and asymptotic stability in the sense of Lyapunov, which guarantee convergence of the behaviors of a fuzzy automaton to the desired fuzzy states when the system engages in some illegal behaviors that can be tolerated. In particular, we present a necessary and sufficient condition for stability and another for asymptotic stability of FDESs.
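
    In the FDES formalism, a fuzzy state is a vector of membership grades and an event is a fuzzy matrix acting on it by max-min composition, so reachable fuzzy states can be enumerated by iterating that composition. The sketch below shows one-step evolution and a breadth-first computation of the states reachable in at most r steps; the event matrix and initial state are invented for illustration.

```python
# Sketch of fuzzy state evolution in an FDES via max-min composition, the
# standard operation in this formalism. The event matrix and initial fuzzy
# state below are invented for illustration.

def maxmin(state, matrix):
    """(s o M)[j] = max_i min(s[i], M[i][j])."""
    n = len(matrix[0])
    return [max(min(state[i], row[j]) for i, row in enumerate(matrix))
            for j in range(n)]

def reachable(state, matrices, r):
    """Fuzzy states reachable in at most r steps under any event matrix."""
    frontier = [tuple(state)]
    seen = set(frontier)
    for _ in range(r):
        nxt = []
        for s in frontier:
            for m in matrices:
                t = tuple(maxmin(list(s), m))
                if t not in seen:
                    seen.add(t)
                    nxt.append(t)
        frontier = nxt
    return seen

M = [[0.2, 0.9],
     [0.4, 0.3]]
s0 = [1.0, 0.0]
assert maxmin(s0, M) == [0.2, 0.9]       # one event application
assert len(reachable(s0, [M], 1)) == 2   # initial state plus one successor
```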

  9. Distribution of Practice and Metacognition in Learning and Long-Term Retention of a Discrete Motor Task

    ERIC Educational Resources Information Center

    Dail, Teresa K.; Christina, Robert W.

    2004-01-01

    This study examined judgments of learning and the long-term retention of a discrete motor task (golf putting) as a function of practice distribution. The results indicated that participants in the distributed practice group performed more proficiently than those in the massed practice group during both acquisition and retention phases. No…

  10. Conditions for extinction events in chemical reaction networks with discrete state spaces.

    PubMed

    Johnston, Matthew D; Anderson, David F; Craciun, Gheorghe; Brijder, Robert

    2018-05-01

    We study chemical reaction networks with discrete state spaces and present sufficient conditions on the structure of the network that guarantee the system exhibits an extinction event. The conditions we derive involve creating a modified chemical reaction network called a domination-expanded reaction network and then checking properties of this network. Unlike previous results, our analysis allows algorithmic implementation via systems of equalities and inequalities and suggests sequences of reactions which may lead to extinction events. We apply the results to several networks including an EnvZ-OmpR signaling pathway in Escherichia coli.
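
    An extinction event in a discrete-state network can be observed directly with a Gillespie-style stochastic simulation. The toy network below, a pure death process A → ∅, is only a hedged illustration of a discrete state space with an absorbing extinction state; it is not the paper's domination-expanded network construction.

```python
import random

# Hedged illustration of an extinction event in a discrete-state network:
# a Gillespie-style simulation of the pure death process A -> 0 (rate c per
# molecule). This toy absorbing-state example is not the paper's
# domination-expanded construction.

def simulate_decay(n0, rate, seed=0):
    """Simulate A -> 0 and return the (random) time of extinction."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        propensity = rate * n
        t += rng.expovariate(propensity)  # exponential waiting time
        n -= 1                            # one molecule of A is consumed
    return t

t_ext = simulate_decay(n0=50, rate=1.0)
assert t_ext > 0.0  # state 0 is absorbing, so extinction always occurs
```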

  11. Temporal and Rate Coding for Discrete Event Sequences in the Hippocampus.

    PubMed

    Terada, Satoshi; Sakurai, Yoshio; Nakahara, Hiroyuki; Fujisawa, Shigeyoshi

    2017-06-21

    Although the hippocampus is critical to episodic memory, neuronal representations supporting this role, especially relating to nonspatial information, remain elusive. Here, we investigated rate and temporal coding of hippocampal CA1 neurons in rats performing a cue-combination task that requires the integration of sequentially provided sound and odor cues. The majority of CA1 neurons displayed sensory cue-, combination-, or choice-specific (simply, "event"-specific) elevated discharge activities, which were sustained throughout the event period. These event cells underwent transient theta phase precession at event onset, followed by sustained phase locking to the early theta phases. As a result of this unique single neuron behavior, the theta sequences of CA1 cell assemblies of the event sequences had discrete representations. These results help to update the conceptual framework for space encoding toward a more general model of episodic event representations in the hippocampus. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Among-character rate variation distributions in phylogenetic analysis of discrete morphological characters.

    PubMed

    Harrison, Luke B; Larsson, Hans C E

    2015-03-01

    Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants, invertebrate, and vertebrate animals, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four rate category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. 
As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Effect of the surface charge discretization on electric double layers: a Monte Carlo simulation study.

    PubMed

    Madurga, Sergio; Martín-Molina, Alberto; Vilaseca, Eudald; Mas, Francesc; Quesada-Pérez, Manuel

    2007-06-21

    The structure of the electric double layer in contact with discrete and continuously charged planar surfaces is studied within the framework of the primitive model through Monte Carlo simulations. Three different discretization models are considered together with the case of uniform distribution. The effect of discreteness is analyzed in terms of charge density profiles. For point surface groups, a complete equivalence with the situation of uniformly distributed charge is found if profiles are exclusively analyzed as a function of the distance to the charged surface. However, some differences are observed moving parallel to the surface. Significant discrepancies with approaches that do not account for discreteness are reported if charge sites of finite size placed on the surface are considered.

  14. Delay-time distribution in the scattering of time-narrow wave packets (II)—quantum graphs

    NASA Astrophysics Data System (ADS)

    Smilansky, Uzy; Schanz, Holger

    2018-02-01

    We apply the framework developed in the preceding paper in this series (Smilansky 2017 J. Phys. A: Math. Theor. 50 215301) to compute the time-delay distribution in the scattering of ultra short radio frequency pulses on complex networks of transmission lines which are modeled by metric (quantum) graphs. We consider wave packets which are centered at high wave number and comprise many energy levels. In the limit of pulses of very short duration we compute upper and lower bounds to the actual time-delay distribution of the radiation emerging from the network using a simplified problem where time is replaced by the discrete count of vertex-scattering events. The classical limit of the time-delay distribution is also discussed and we show that for finite networks it decays exponentially, with a decay constant which depends on the graph connectivity and the distribution of its edge lengths. We illustrate and apply our theory to a simple model graph where an algebraic decay of the quantum time-delay distribution is established.

  15. Discrete event simulation tool for analysis of qualitative models of continuous processing systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)

    1990-01-01

    An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics, and includes the ability to compare log files.
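
    The simulation module described above executes events from a queue until it is emptied. A minimal discrete-event kernel of that kind can be sketched in a few lines of Python using a time-ordered heap; the event names below are illustrative, not from the patented tool.

```python
import heapq

# Minimal discrete-event kernel in the spirit of the simulation module
# described above: events are drawn from a time-ordered queue until it is
# emptied, and an executing event may schedule further events. The event
# names are illustrative.

class Simulator:
    def __init__(self):
        self.queue, self.now, self.log = [], 0.0, []

    def schedule(self, delay, name, action=None):
        heapq.heappush(self.queue, (self.now + delay, name, action))

    def run(self):
        while self.queue:                      # run until the queue is empty
            self.now, name, action = heapq.heappop(self.queue)
            self.log.append((self.now, name))
            if action:
                action(self)                   # events may schedule events

sim = Simulator()
sim.schedule(2.0, "valve_open", lambda s: s.schedule(1.5, "pressure_stable"))
sim.schedule(1.0, "pump_on")
sim.run()
assert [n for _, n in sim.log] == ["pump_on", "valve_open", "pressure_stable"]
assert sim.now == 3.5
```

    The time delays attached to invocation and effect statements in the tool play the role of the `delay` argument here: continuous behavior is approximated by discrete events separated by those delays.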

  16. The Physical Mechanism for Retinal Discrete Dark Noise: Thermal Activation or Cellular Ultraweak Photon Emission?

    PubMed

    Salari, Vahid; Scholkmann, Felix; Bokkon, Istvan; Shahbazi, Farhad; Tuszynski, Jack

    2016-01-01

    For several decades the physical mechanism underlying the discrete dark noise of photoreceptors in the eye has remained highly controversial and poorly understood. It is known that the Arrhenius equation, which is based on the Boltzmann distribution for thermal activation, can model only a part (e.g. half of the activation energy) of the retinal dark noise experimentally observed for vertebrate rod and cone pigments. Using the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation has been proposed as a solution to the problem. Here, we show that using the Hinshelwood distribution does not solve the problem completely. As the discrete components of noise are indistinguishable in shape and duration from those produced by real photon-induced photo-isomerization, the retinal discrete dark noise is most likely due to 'internal photons' inside cells and not due to thermal activation of visual pigments. Indeed, all living cells exhibit spontaneous ultraweak photon emission (UPE), mainly in the optical wavelength range, i.e., 350-700 nm. We show here that the retinal discrete dark noise occurs at a rate similar to that of UPE, and therefore dark noise is most likely due to spontaneous cellular UPE and not due to thermal activation.

  17. X-33 Hypersonic Boundary Layer Transition

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Horvath, Thomas J.; Hollis, Brian R.; Thompson, Richard A.; Hamilton, H. Harris, II

    1999-01-01

    Boundary layer and aeroheating characteristics of several X-33 configurations have been experimentally examined in the Langley 20-Inch Mach 6 Air Tunnel. Global surface heat transfer distributions, surface streamline patterns, and shock shapes were measured on 0.013-scale models at Mach 6 in air. Parametric variations include angles-of-attack of 20-deg, 30-deg, and 40-deg; Reynolds numbers based on model length of 0.9 to 6.6 million; and body-flap deflections of 0, 10 and 20-deg. The effects of discrete and distributed roughness elements on boundary layer transition, which included trip height, size, location, and distribution, both on and off the windward centerline, were investigated. The discrete roughness results on centerline were used to provide a transition correlation for the X-33 flight vehicle that was applicable across the range of reentry angles of attack. The attachment line discrete roughness results were shown to be consistent with the centerline results, as no increased sensitivity to roughness along the attachment line was identified. The effect of bowed panels was qualitatively shown to be less effective than the discrete trips; however, the distributed nature of the bowed panels affected a larger percent of the aft-body windward surface than a single discrete trip.

  18. Properties of Spectral Shapes of Whistler-Mode Emissions

    NASA Astrophysics Data System (ADS)

    Macusova, E.; Santolik, O.; Pickett, J. S.; Gurnett, D. A.; Cornilleau-Wehrlin, N.

    2014-12-01

    Whistler-mode emissions play an important role in wave-particle interactions occurring in the radiation belt region. Whistler mode chorus emissions consist of discrete wave packets which exhibit different spectral shapes. Rising tones (events with positive value of the frequency sweep rate) are frequently observed. Other categories of chorus spectral shapes, such as falling tones, hooks, broadband patterns, are also known. Whistler-mode emissions can additionally occur as hiss or combinations of hiss with discrete patterns. In this study, we have analyzed more than 11 years of high-time resolution measurements provided by the Wideband Data (WBD) instrument onboard four Cluster spacecraft to identify different spectral shapes of whistler mode emissions. We determine the distribution of individual groups of chorus spectral shapes in the Earth's magnetosphere and the effect of the different geomagnetic conditions on their occurrence. We focus on average polarization and propagation properties of the different types of spectral shapes, obtained during visually identified time intervals from multicomponent measurements of the STAFF-SA instrument recorded with a time resolution of 4 seconds.

  19. Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan (Technical Monitor); Ray, Asok

    2004-01-01

    This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.

  20. Discrete-Event Simulation Unmasks the Quantum Cheshire Cat

    NASA Astrophysics Data System (ADS)

    Michielsen, Kristel; Lippert, Thomas; Raedt, Hans De

    2017-05-01

    It is shown that discrete-event simulation accurately reproduces the experimental data of a single-neutron interferometry experiment [T. Denkmayr et al., Nat. Commun. 5, 4492 (2014)] and provides a logically consistent, paradox-free, cause-and-effect explanation of the quantum Cheshire cat effect without invoking the notion that the neutron and its magnetic moment separate. Describing the experimental neutron data using weak-measurement theory is shown to be useless for unravelling the quantum Cheshire cat effect.

  1. 40 CFR 86.1370-2007 - Not-To-Exceed test procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... that include discrete regeneration events and that send a recordable electronic signal indicating the start and end of the regeneration event, determine the minimum averaging period for each NTE event that... averaging period is used to determine whether the individual NTE event is a valid NTE event. For engines...

  2. Event Segmentation Improves Event Memory up to One Month Later

    ERIC Educational Resources Information Center

    Flores, Shaney; Bailey, Heather R.; Eisenberg, Michelle L.; Zacks, Jeffrey M.

    2017-01-01

    When people observe everyday activity, they spontaneously parse it into discrete meaningful events. Individuals who segment activity in a more normative fashion show better subsequent memory for the events. If segmenting events effectively leads to better memory, does asking people to attend to segmentation improve subsequent memory? To answer…

  3. Effects of tree roots on shallow landslides distribution and frequency in the European Alps using a new physically-based discrete element model

    NASA Astrophysics Data System (ADS)

    Cohen, Denis; Schwarz, Massimiliano

    2017-04-01

    Shallow landslides are hillslope processes that play a key role in shaping landscapes in forested catchments. Shallow landslides are, in some regions, the dominant regulating mechanisms by which soil is delivered from the hillslopes to steep channels and fluvial systems. Several studies have highlighted the importance of roots for understanding mechanisms of root reinforcement and their contribution to the stabilization of hillslopes. In this context, the spatio-temporal distribution of root reinforcement has major repercussions on the dynamics of sediment transport at the catchment scale and on the availability of productive soils. Here we present a new model for shallow slope stability calculations, SOSlope, that specifically considers the effects of root reinforcement on shallow landslide initiation. The model is a strain-step discrete element model that reproduces the self-organized redistribution of forces on a slope during rainfall-triggered shallow landslides. Tree roots govern tensile and compressive force redistribution and determine the stability of the slope, and the timing, location, and dimension of the failure mass. We use SOSlope to quantify the role of protection forests in several localities in the European Alps, making use of detailed field measurements of root densities and root-size distribution, and root tensile and compressive strength for three species common in the Alps (spruce, fir, and beech) to compute landslide distributions and frequency during landslide-triggering rainfall events. We show the mechanisms by which tree roots impart reinforcement to slopes and offer protection against shallow landslides.

  4. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of its application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.

  5. Multi-Aperture Digital Coherent Combining for Free-Space Optical Communication Receivers

    DTIC Science & Technology

    2016-04-21

    Distribution A: Public Release; unlimited distribution. 2016 Optical Society of America. OCIS codes: (060.1660) Coherent communications; (070.2025) Discrete ...Coherent combining algorithm: Multi-aperture coherent combining enables using many discrete apertures together to create a large effective aperture. A

  6. A fuzzy discrete harmony search algorithm applied to annual cost reduction in radial distribution systems

    NASA Astrophysics Data System (ADS)

    Ameli, Kazem; Alfi, Alireza; Aghaebrahimi, Mohammadreza

    2016-09-01

    Like other optimization algorithms, harmony search (HS) is quite sensitive to its tuning parameters. Several variants of the HS algorithm have been developed to reduce the parameter dependency of HS. This article proposes a novel version of the discrete harmony search (DHS) algorithm, namely fuzzy discrete harmony search (FDHS), for optimizing capacitor placement in distribution systems. In the FDHS, a fuzzy system is employed to dynamically adjust two parameter values, i.e. the harmony memory considering rate and the pitch adjusting rate, with respect to the normalized mean fitness of the harmony memory. The key aspect of FDHS is that it needs substantially fewer iterations to reach convergence in comparison with classical discrete harmony search (CDHS). To the authors' knowledge, this is the first application of DHS to specify appropriate capacitor locations and their best sizes in distribution systems. Simulations are provided for 10-, 34-, 85- and 141-bus distribution systems using CDHS and FDHS. The results show the effectiveness of FDHS over previous related studies.

  7. Reducing ambulance response times using discrete event simulation.

    PubMed

    Wei Lam, Sean Shao; Zhang, Zhong Cheng; Oh, Hong Choon; Ng, Yih Ying; Wah, Win; Hock Ong, Marcus Eng

    2014-01-01

    The objectives of this study are to develop a discrete-event simulation (DES) model for the Singapore Emergency Medical Services (EMS), and to demonstrate the utility of this DES model for the evaluation of different policy alternatives to improve ambulance response times. A DES model was developed based on retrospective emergency call data over a continuous 6-month period in Singapore. The main outcome measure is the distribution of response times. The secondary outcome measure is ambulance utilization levels based on unit hour utilization (UHU) ratios. The DES model was used to evaluate different policy options in order to improve the response times, while maintaining reasonable fleet utilization. Three policy alternatives looking at the reallocation of ambulances, the addition of new ambulances, and alternative dispatch policies were evaluated. Modifications of dispatch policy combined with the reallocation of existing ambulances were able to achieve response time performance equivalent to that of adding 10 ambulances. The median (90th percentile) response time was 7.08 minutes (12.69 minutes). Overall, this combined strategy managed to narrow the gap between the ideal and existing response time distribution by 11-13%. Furthermore, the median UHU under this combined strategy was 0.324 with an interquartile range (IQR) of 0.047 versus a median utilization of 0.285 (IQR of 0.051) resulting from the introduction of additional ambulances. Response times were shown to be improved via a more effective reallocation of ambulances and dispatch policy. More importantly, the response time improvements were achieved without a reduction in the utilization levels and additional costs associated with the addition of ambulances. We demonstrated the effective use of DES as a versatile platform to model the dynamic system complexities of Singapore's national EMS systems for the evaluation of operational strategies to improve ambulance response times.
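
    The event-list mechanics behind a DES of this kind can be sketched in a few lines. The toy model below (all rates and fleet sizes are assumed for illustration, not taken from the Singapore EMS study) dispatches the next-free ambulance to each incoming call and reports the median response time:

```python
import heapq
import random

def simulate(n_calls=10_000, n_ambulances=3, seed=1):
    """Toy discrete-event ambulance model: calls arrive at random, the
    next-free unit is dispatched, response time = queueing delay + travel."""
    rng = random.Random(seed)
    free_at = [0.0] * n_ambulances         # min-heap: time each unit is next free
    heapq.heapify(free_at)
    t = 0.0
    response_times = []
    for _ in range(n_calls):
        t += rng.expovariate(1 / 15.0)     # mean 15 min between calls (assumed)
        unit_free = heapq.heappop(free_at)
        dispatch = max(t, unit_free)       # call waits if all units are busy
        travel = rng.expovariate(1 / 7.0)  # mean 7 min travel (assumed)
        on_scene = rng.expovariate(1 / 30.0)  # mean 30 min on scene/return
        response_times.append(dispatch - t + travel)
        heapq.heappush(free_at, dispatch + travel + on_scene)
    response_times.sort()
    return response_times[len(response_times) // 2]   # median response (min)

print(round(simulate(), 2))
```

Policy alternatives such as reallocation or added units correspond to changing `n_ambulances` or the dispatch rule; a full EMS model would add spatial station locations and empirical travel-time distributions.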

  8. A full-angle Monte-Carlo scattering technique including cumulative and single-event Rutherford scattering in plasmas [Theory of cumulative large-angle collisions in plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higginson, Drew P.

    Here, we describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3–0.7; the upper limit corresponds to a Coulomb logarithm of 20–2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.

  9. A full-angle Monte-Carlo scattering technique including cumulative and single-event Rutherford scattering in plasmas [Theory of cumulative large-angle collisions in plasmas

    DOE PAGES

    Higginson, Drew P.

    2017-08-12

    Here, we describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3–0.7; the upper limit corresponds to a Coulomb logarithm of 20–2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.

  10. Attention and working memory: two basic mechanisms for constructing temporal experiences

    PubMed Central

    Marchetti, Giorgio

    2014-01-01

    Various kinds of observations show that the ability of human beings to both consciously relive past events – episodic memory – and conceive future events, entails an active process of construction. This construction process also underpins many other important aspects of conscious human life, such as perceptions, language, and conscious thinking. This article provides an explanation of what makes the constructive process possible and how it works. The process mainly relies on attentional activity, which has a discrete and periodic nature, and working memory, which allows for the combination of discrete attentional operations. An explanation is also provided of how past and future events are constructed. PMID:25177305

  11. Donders revisited: Discrete or continuous temporal processing underlying reaction time distributions?

    PubMed

    Bao, Yan; Yang, Taoxi; Lin, Xiaoxiong; Pöppel, Ernst

    2016-09-01

    Differences of reaction times to specific stimulus configurations are used as indicators of cognitive processing stages. In this classical experimental paradigm, continuous temporal processing is implicitly assumed. Multimodal response distributions indicate, however, discrete time sampling, which is often masked by experimental conditions. Differences in reaction times reflect discrete temporal mechanisms that are pre-semantically implemented and suggested to be based on entrained neural oscillations. © 2016 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  12. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks was studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
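
    The MTGV algorithm itself is not reproduced here, but the Tikhonov baseline it is compared against can be sketched as an augmented nonnegative least-squares problem on synthetic single-peak T2 data (the grid, noise level and regularization weight below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic 1D T2 experiment: S(t) = sum_j f_j * exp(-t / T2_j) + noise
t = np.linspace(0.001, 3.0, 200)            # acquisition times (s)
T2 = np.logspace(-2.5, 0.5, 80)             # candidate T2 grid (s)
K = np.exp(-t[:, None] / T2[None, :])       # Laplace-type kernel matrix

rng = np.random.default_rng(3)
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.1) ** 2)  # lognormal-like peak at 0.1 s
f_true /= f_true.sum()
signal = K @ f_true + rng.normal(0, 5e-4, t.size)

# Tikhonov regularization: min ||K f - s||^2 + lam * ||f||^2 subject to f >= 0,
# solved by stacking sqrt(lam)*I under K and zeros under the signal.
lam = 1e-3
K_aug = np.vstack([K, np.sqrt(lam) * np.eye(T2.size)])
s_aug = np.concatenate([signal, np.zeros(T2.size)])
f_hat, _ = nnls(K_aug, s_aug)

# The reconstructed peak should sit near the true T2 of ~0.1 s.
print(T2[np.argmax(f_hat)], T2[np.argmax(f_true)])
```

L1 or MTGV penalties replace the `||f||^2` term; the kernel, data, and nonnegativity constraint stay the same.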

  13. Broadband Time-Frequency Analysis Using a Multicomputer

    DTIC Science & Technology

    2004-09-30

    FFT 512 pt Waterfall WVD display © 2004 Mercury Computer Systems, Inc. Smoothed pseudo Wigner-Ville distribution: one of many interference reduction... The Wigner-Ville distribution, the scalogram, and the discrete Gabor transform are among the most well-known of these methods. Due to specific... based upon FFT Accumulation Method • Continuous Wavelet Transform (Scalogram) • Discrete Wigner-Ville Distribution with a selected set of interference

  14. Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Nyst, M.

    2013-12-01

    We present a Risk Management perspective on earthquake recurrence on mature faults, and the ways that it can be modeled. The specificities of Risk Management relative to Probabilistic Seismic Hazard Assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence, the fact that losses at all return periods are needed (and not at discrete values of the return period), and the set-up of financial models which sometimes require the modeling of realizations of the order in which events may occur (i.e., simulated event dates are important, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process. And we propose that with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with their weights given by the posterior distribution. Finally we propose to the community: (1) a general debate on how best to incorporate our knowledge (e.g., from geology, geomorphology) on plausible models and model parameters, but also preserve the information on what we do not know; and (2) the creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristic (pH, compressibility, pressure), fault geometry, and other relevant properties so that we can monitor whether some trends emerge in terms of which model dominates in which conditions.

  15. Hybrid Architectural Framework for C4ISR and Discrete-Event Simulation (DES) to Support Sensor-Driven Model Synthesis in Real-World Scenarios

    DTIC Science & Technology

    2013-09-01

    which utilizes FTA and then loads it into a DES engine to generate simulation results (Figure 21). This simulation architecture is... While Discrete Event Simulation (DES) can provide accurate time estimation and fast simulation speed, models utilizing it often suffer... C4ISR progress in MDW is developed in this research to demonstrate the feasibility of AEMF-DES and explore its potential. The simulation (MDSIM

  16. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    PubMed

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.

  17. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. 
Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  18. Gaussian quadrature and lattice discretization of the Fermi-Dirac distribution for graphene.

    PubMed

    Oettinger, D; Mendoza, M; Herrmann, H J

    2013-07-01

    We construct a lattice kinetic scheme to study electronic flow in graphene. For this purpose, we first derive a basis of orthogonal polynomials, using as the weight function the ultrarelativistic Fermi-Dirac distribution at rest. Later, we use these polynomials to expand the respective distribution in a moving frame, for both cases, undoped and doped graphene. In order to discretize the Boltzmann equation and make feasible the numerical implementation, we reduce the number of discrete points in momentum space to 18 by applying a Gaussian quadrature, finding that the family of representative wave (2+1)-vectors, which satisfies the quadrature, reconstructs a honeycomb lattice. The procedure and discrete model are validated by solving the Riemann problem, finding excellent agreement with other numerical models. In addition, we have extended the Riemann problem to the case of different dopings, finding that by increasing the chemical potential the electronic fluid behaves as if it increases its effective viscosity.

  19. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

    Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way, and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to the birth of the first child.

  20. Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition.

    PubMed

    Smolensky, Paul; Goldrick, Matthew; Mathis, Donald

    2014-08-01

    Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.

  1. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Poole, Stephen W

    2013-01-01

    In this paper, we present an application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight in device performance and aids in topology and system optimization.

  2. Discrete photon statistics from continuous microwave measurements

    NASA Astrophysics Data System (ADS)

    Virally, Stéphane; Simoneau, Jean Olivier; Lupien, Christian; Reulet, Bertrand

    2016-04-01

    Photocount statistics are an important tool for the characterization of electromagnetic fields, especially for fields with an irrelevant phase. In the microwave domain, continuous rather than discrete measurements are the norm. Using a different approach, we recover discrete photon statistics from the cumulants of a continuous distribution of field quadrature measurements. The use of cumulants allows the separation between the signal of interest and experimental noise. Using a parametric amplifier as the first stage of the amplification chain, we extract useful data from up to the sixth cumulant of the continuous distribution of a coherent field, hence recovering up to the third moment of the discrete statistics associated with a signal with much less than one average photon.
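
    The key property exploited here, that cumulants of independent contributions add, so amplifier noise calibrated with the signal off can be subtracted order by order, can be illustrated on synthetic Gaussian "quadrature" data (all distributions and values below are assumptions for illustration, not the experiment's data):

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(7)
n = 500_000
signal = rng.normal(0.5, 1.0, n)       # coherent-like quadrature samples
noise = rng.normal(0.0, 2.0, n)        # amplifier noise added in the chain
measured = signal + noise
noise_cal = rng.normal(0.0, 2.0, n)    # separate calibration run, signal off

# Cumulants of independent variables add, so subtracting the calibrated
# noise cumulants recovers the signal's cumulants (~0.5, 1.0, ~0 here).
for order in (1, 2, 3):
    print(order, round(kstat(measured, order) - kstat(noise_cal, order), 3))
```

`scipy.stats.kstat` computes the unbiased k-statistic estimate of the nth cumulant; moments, by contrast, are not additive, which is why the cumulant domain is the natural place to separate signal from noise.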

  3. The Physical Mechanism for Retinal Discrete Dark Noise: Thermal Activation or Cellular Ultraweak Photon Emission?

    PubMed Central

    Salari, Vahid; Scholkmann, Felix; Bokkon, Istvan; Shahbazi, Farhad; Tuszynski, Jack

    2016-01-01

    For several decades the physical mechanism underlying discrete dark noise of photoreceptors in the eye has remained highly controversial and poorly understood. It is known that the Arrhenius equation, which is based on the Boltzmann distribution for thermal activation, can model only a part (e.g. half of the activation energy) of the retinal dark noise experimentally observed for vertebrate rod and cone pigments. Using the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation has been proposed as a solution to the problem. Here, we show that using the Hinshelwood distribution does not solve the problem completely. As the discrete components of noise are indistinguishable in shape and duration from those produced by real photon induced photo-isomerization, the retinal discrete dark noise is most likely due to ‘internal photons’ inside cells and not due to thermal activation of visual pigments. Indeed, all living cells exhibit spontaneous ultraweak photon emission (UPE), mainly in the optical wavelength range, i.e., 350–700 nm. We show here that the retinal discrete dark noise has a rate similar to that of UPE, and therefore dark noise is most likely due to spontaneous cellular UPE and not due to thermal activation. PMID:26950936

  4. Long fiber Bragg grating sensor interrogation using discrete-time microwave photonic filtering techniques.

    PubMed

    Ricchiuti, Amelia Lavinia; Barrera, David; Sales, Salvador; Thevenaz, Luc; Capmany, José

    2013-11-18

    A novel technique for interrogating photonic sensors based on long fiber Bragg gratings (FBGs) is presented and experimentally demonstrated, dedicated to detect the presence and the precise location of several spot events. The principle of operation is based on a technique used to analyze microwave photonics (MWP) filters. The long FBGs are used as quasi-distributed sensors. Several hot-spots can be detected along the FBG with a spatial accuracy under 0.5 mm using a modulator and a photo-detector (PD) with a modest bandwidth of less than 1 GHz. The proposed interrogation system is intrinsically robust against environmental changes.

  5. Seismic Velocity Structure of the San Jacinto Fault Zone from Double-Difference Tomography and Expected Distribution of Head Waves

    NASA Astrophysics Data System (ADS)

    Allam, A. A.; Ben-Zion, Y.

    2010-12-01

    We present initial results of double-difference tomographic images for the velocity structure of the San Jacinto Fault Zone (SJFZ), and related 3D forward calculations of waves in the immediate vicinity of the SJFZ. We begin by discretizing the SJFZ region with a uniform grid spacing of 500 m, extending 140 km by 80 km and down to 25 km depth. We adopt the layered 1D model of Dreger & Helmberger (1993) as a starting model for this region, and invert for 3D distributions of VP and VS with the double-difference tomography of Zhang & Thurber (2003), which makes use of absolute event-station travel times as well as relative travel times for phases from nearby event pairs. Absolute arrival times of over 78,000 P- and S-wave phase picks generated by 1127 earthquakes and recorded at 70 stations near the SJFZ are used. Only data from events with Mw greater than 2.2 are used. Though ray coverage is limited at shallow depths, we obtain relatively high-resolution images from 4 to 13 km which show a clear contrast in velocity across the NW section of the SJFZ. To the SE, in the so-called trifurcation area, the structure is more complicated, though station coverage is poorest in this region. Using the obtained image, the current event locations, and the 3D finite-difference code of Olsen (1994), we estimate the likely distributions of fault zone head waves as a tool for future deployment of instruments. We plan to conduct further studies by including more travel time picks, including those from newly-deployed stations in the SJFZ area, in order to gain a more accurate image of the velocity structure.

  6. Budget impact analysis of thrombolysis for stroke in Spain: a discrete event simulation model.

    PubMed

    Mar, Javier; Arrospide, Arantzazu; Comas, Mercè

    2010-01-01

    Thrombolysis within the first 3 hours after the onset of symptoms of a stroke has been shown to be a cost-effective treatment because treated patients are 30% more likely than nontreated patients to have no residual disability. The objective of this study was to calculate by means of a discrete event simulation model the budget impact of thrombolysis in Spain. The budget impact analysis was based on stroke incidence rates and the estimation of the prevalence of stroke-related disability in Spain and its translation to hospital and social costs. A discrete event simulation model was constructed to represent the flow of patients with stroke in Spain. If 10% of patients with stroke from 2000 to 2015 would receive thrombolytic treatment, the prevalence of dependent patients in 2015 would decrease from 149,953 to 145,922. For the first 6 years, the cost of intervention would surpass the savings. Nevertheless, the number of cases in which patient dependency was avoided would steadily increase, and after 2006 the cost savings would be greater, with a widening difference between the cost of intervention and the cost of nonintervention, until 2015. The impact of thrombolysis on society's health and social budget indicates a net benefit after 6 years, and the improvement in health grows continuously. The validation of the model demonstrates the adequacy of the discrete event simulation approach in representing the epidemiology of stroke to calculate the budget impact.

  7. Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.

    PubMed

    Morrison, Shane A; Luttbeg, Barney; Belden, Jason B

    2016-11-01

    Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies and a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are more accurate for monitoring contaminants with short water half-lives due to the reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events.
However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
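
    The comparison can be reproduced in miniature: for a first-order decay pulse with a 0.5-d water half-life, an integrative sampler corresponds to the exact 96-h TWA integral, while evenly spaced grab samples can be far off. The pulse amplitude, sample counts and timing offset below are illustrative assumptions:

```python
import math

c0 = 100.0                      # peak concentration (arbitrary units)
half_life_h = 12.0              # 0.5-d water half-life, as in the scenario
k = math.log(2) / half_life_h   # first-order decay rate (1/h)
T = 96.0                        # 96-h averaging window

# Integrative sampler: continuous accumulation gives the exact TWA,
# (1/T) * integral of c0*exp(-k*t) from 0 to T.
twa_true = c0 * (1 - math.exp(-k * T)) / (k * T)

def discrete_twa(n_samples, offset_h=0.0):
    """Average of n evenly spaced grab samples, all shifted by a timing offset."""
    times = [offset_h + i * T / n_samples for i in range(n_samples)]
    return sum(c0 * math.exp(-k * t) for t in times) / n_samples

for n in (2, 7):
    est = discrete_twa(n, offset_h=6.0)   # sampling starts 6 h after the pulse
    print(n, round(100 * est / twa_true, 1), "% of true TWA")
```

With two grab samples the estimate is dominated by whichever points happen to land on the decaying pulse, while seven samples track the integral closely, mirroring the sampling-frequency effect reported in the abstract.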

  8. Finite element probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacvarov, D.C.

    1981-01-01

    A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The utilized approach of applying the finite element method for probabilistic risk assessment is demonstrated to be very powerful. There are two reasons for this. First, the finite element method is inherently suitable for analysis of three dimensional spaces where the parameters, such as three-variate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three dimensional probability spaces, thus yielding high accuracy in critical regions, such as the area of the low probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem to a manageable low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electromagnetic transients program. Compared to other available methods the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude computationally more efficient. The method is especially suited for accurate assessment of rare, very low probability events.

  9. Estuarine abandoned channel sedimentation rates record peak fluvial discharge magnitudes

    NASA Astrophysics Data System (ADS)

    Gray, A. B.; Pasternack, G. B.; Watson, E. B.

    2018-04-01

    Fluvial sediment deposits can provide useful records of integrated watershed expressions, including flood event magnitudes. However, floodplain and estuarine sediment deposits evolve through the interaction of watershed/marine sediment supply and transport characteristics with the local depositional environment. Thus extraction of watershed-scale signals depends upon accounting for local-scale effects on sediment deposition rates and character. This study presents an examination of the balance of fluvial sediment dynamics and local-scale hydro-geomorphic controls on alluviation of an abandoned channel in the Salinas River Lagoon, CA. A set of three sediment cores contained discrete flood deposits that corresponded to the largest flood events over the period of accretion from 1969 to 2007. Sedimentation rates scaled with peak flood discharge and event-scale sediment flux, but were not influenced by longer-scale hydro-meteorological activities such as annual precipitation and water yield. Furthermore, the particle size distributions of flood deposits showed no relationship to event magnitudes. Both the responsiveness of sedimentation and the unresponsiveness of particle size distributions to hydro-sedimentological event magnitudes appear to be controlled by aspects of local geomorphology that influence the connectivity of the abandoned channel to the Salinas River mainstem. Well-developed upstream plug bar formation precluded the entrainment of coarser bedload into the abandoned channel, while Salinas River mouth conditions (open/closed) in conjunction with tidal and storm surge conditions may play a role in influencing the delivery of coarser suspended load fractions. Channel-adjacent sediment deposits can be valuable records of hydro-meteorological and sedimentological regimes, but local depositional settings may dominate the character of short-term (interdecadal) signatures.

  10. Analysis of a Statistical Relationship Between Dose and Error Tallies in Semiconductor Digital Integrated Circuits for Application to Radiation Monitoring Over a Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Colins, Karen; Li, Liqian; Liu, Yu

    2017-05-01

    Mass production of widely used semiconductor digital integrated circuits (ICs) has lowered unit costs to the level of ordinary daily consumables of a few dollars. It is therefore reasonable to contemplate the idea of an engineered system that consumes unshielded low-cost ICs for the purpose of measuring gamma radiation dose. Underlying the idea is the premise of a measurable correlation between an observable property of ICs and radiation dose. Accumulation of radiation-damage-induced state changes or error events is such a property. If correct, the premise could make possible low-cost wide-area radiation dose measurement systems, instantiated as wireless sensor networks (WSNs) with unshielded consumable ICs as nodes, communicating error events to a remote base station. The premise has been investigated quantitatively for the first time in laboratory experiments and related analyses performed at the Canadian Nuclear Laboratories. State changes or error events were recorded in real time during irradiation of samples of ICs of different types in a 60Co gamma cell. From the error-event sequences, empirical distribution functions of dose were generated. The distribution functions were inverted and probabilities scaled by total error events, to yield plots of the relationship between dose and error tallies. Positive correlation was observed, and discrete functional dependence of dose quantiles on error tallies was measured, demonstrating the correctness of the premise. The idea of an engineered system that consumes unshielded low-cost ICs in a WSN, for the purpose of measuring gamma radiation dose over wide areas, is therefore tenable.
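
    The distribution-inversion step described above can be sketched with simulated data. Everything here is hypothetical (exponentially distributed dose increments between error events, a 5-Gy mean); it only illustrates how empirical distribution functions over many IC histories yield dose quantiles at a given error tally.

```python
import random

random.seed(1)

MEAN_DOSE_PER_ERROR = 5.0   # Gy between error events (hypothetical value)

def error_doses(n_errors=50):
    """Cumulative dose at each successive error event, with exponentially
    distributed dose increments between errors (a simple damage model)."""
    doses, total = [], 0.0
    for _ in range(n_errors):
        total += random.expovariate(1.0 / MEAN_DOSE_PER_ERROR)
        doses.append(total)
    return doses

# Invert the empirical distribution: across many simulated IC histories,
# estimate the dose quantile associated with observing k error events.
histories = [error_doses() for _ in range(200)]
k = 10
doses_at_k = sorted(h[k - 1] for h in histories)
median_dose = doses_at_k[len(doses_at_k) // 2]
print(round(median_dose, 1))  # should sit near k * MEAN_DOSE_PER_ERROR
```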

  11. Discrete stochastic analogs of Erlang epidemic models.

    PubMed

    Getz, Wayne M; Dougherty, Eric R

    2018-12-01

    Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE^mI^nR models with m concatenated E compartments and n concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE^mI^nR models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges but sacrifices best-fit likelihood scores by at least 7%.
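
    A minimal discrete-time stochastic SE^mI^nR sketch (my own notation and parameters, not the paper's): each day an individual leaves its current sub-stage with a fixed probability, so sub-stage durations are geometric and the total E and I durations are discrete-Erlang (negative binomial).

```python
import random

random.seed(0)

def seir_erlang(m=3, n=2, beta=0.4, leave=0.5, days=100, pop=1000, i0=5):
    """Discrete-time stochastic SE^mI^nR with m exposed and n infectious
    sub-stages. Daily flows between sub-stages are binomial draws."""
    S, R = pop - i0, 0
    E = [0] * m
    I = [i0] + [0] * (n - 1)
    history = []

    def binom(count, p):
        return sum(random.random() < p for _ in range(count))

    for _ in range(days):
        p_inf = 1 - (1 - beta / pop) ** sum(I)   # per-susceptible daily risk
        new_E = binom(S, p_inf)
        out_E = [binom(c, leave) for c in E]     # daily sub-stage departures
        out_I = [binom(c, leave) for c in I]
        S -= new_E
        for j in range(m):
            E[j] += (new_E if j == 0 else out_E[j - 1]) - out_E[j]
        for j in range(n):
            I[j] += (out_E[-1] if j == 0 else out_I[j - 1]) - out_I[j]
        R += out_I[-1]
        history.append((S, sum(E), sum(I), R))
    return history

hist = seir_erlang()
assert all(sum(state) == 1000 for state in hist)   # individuals are conserved
print(hist[-1])
```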

  12. Exploration Supply Chain Simulation

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Exploration Supply Chain Simulation project was chartered by the NASA Exploration Systems Mission Directorate to develop a software tool, with proper data, to quantitatively analyze supply chains for future program planning. This tool is a discrete-event simulation that uses the basic supply chain concepts of planning, sourcing, making, delivering, and returning. This supply chain perspective is combined with other discrete or continuous simulation factors. Discrete resource events (such as launch or delivery reviews) are represented as organizational functional units. Continuous resources (such as civil service or contractor program functions) are defined as enabling functional units. Concepts of fixed and variable costs are included in the model to allow the discrete events to interact with cost calculations. The definition file is intrinsic to the model, but a blank start can be initiated at any time. The current definition file is an Orion Ares I crew launch vehicle. Parameters stretch from Kennedy Space Center across and into other program entities (Michoud Assembly Facility, Alliant Techsystems, Stennis Space Center, Johnson Space Center, etc.), though these will only gain detail as the file continues to evolve. The Orion Ares I file definition in the tool continues to evolve, and analysis from this tool is expected in 2008. This is the first application of such business-driven modeling to a NASA/government/aerospace-contractor endeavor.

  13. Uncertainties in stormwater runoff data collection from a small urban catchment, Southeast China.

    PubMed

    Huang, Jinliang; Tu, Zhenshun; Du, Pengfei; Lin, Jie; Li, Qingsheng

    2010-01-01

    Monitoring data are often used to identify stormwater runoff characteristics and to support stormwater runoff modelling, without consideration of their inherent uncertainties. Integrated with discrete sample analysis and error propagation analysis, this study attempted to quantify the uncertainties of discrete chemical oxygen demand (COD), total suspended solids (TSS) concentration, stormwater flowrate, stormwater event volumes, COD event mean concentration (EMC), and COD event loads in terms of flow measurement, sample collection, storage and laboratory analysis. The results showed that the uncertainties due to sample collection, storage and laboratory analysis of COD from stormwater runoff were 13.99%, 19.48% and 12.28%, respectively. Meanwhile, flow measurement uncertainty was 12.82%, and the sample collection uncertainty of TSS from stormwater runoff was 31.63%. Based on the law of propagation of uncertainties, the uncertainties regarding event flow volume, COD EMC and COD event loads were quantified as 7.03%, 10.26% and 18.47%, respectively.
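
    For independent relative uncertainties combined through a product or quotient, the law of propagation of uncertainty reduces to a root-sum-square. The combined value below is illustrative only, not a figure from the study (the study's EMC and load figures also involve averaging over many samples):

```python
import math

def combined_rel_uncertainty(*components):
    """Root-sum-square combination of independent relative uncertainties (%),
    per the law of propagation of uncertainty for products/quotients."""
    return math.sqrt(sum(u ** 2 for u in components))

# Illustrative: combining the three reported COD uncertainty sources for a
# single sample (collection 13.99%, storage 19.48%, lab analysis 12.28%).
u_cod_sample = combined_rel_uncertainty(13.99, 19.48, 12.28)
print(round(u_cod_sample, 2))
```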

  14. Women's bleeding patterns: ability to recall and predict menstrual events. World Health Organization Task Force on Psychosocial Research in Family, Planning, Special Programme of Research, Development and Research Training in Human Reproduction.

    PubMed

    1981-01-01

    Objective records of the occurrence of menstrual bleeding were compared with women's subjective assessments of the timing and duration of these events. The number of days a woman experienced bleeding during each episode was relatively constant; however, the length of the bleeding episode varied greatly among the 13 cultures studied. A greater understanding of menstrual patterns is possible if the pattern is seen as a succession of discrete events rather than as a whole. A more careful use of terminology relating to these discrete events would provide greater understanding of menstruation for the woman concerned and those advising her. The methodology employed in the collection of data about menstrual events among illiterate women is described, and suggestions are given as to how such information can be most efficiently obtained.

  15. Statistics of primordial density perturbations from discrete seed masses

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Bertschinger, Edmund

    1991-01-01

    The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.
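
    The uncorrelated-seed case admits a compact closed form via standard shot-noise arguments; the sketch below uses generic symbols (mean number density, accretion profile) that may differ from the paper's notation.

```latex
% Seeds at Poisson-distributed positions x_i with mean number density \bar{n},
% each accreting matter with profile f whose Fourier transform is \tilde{f}(k):
\delta(\mathbf{x}) = \sum_i f(|\mathbf{x}-\mathbf{x}_i|) - \langle\rho\rangle,
\qquad
P(k) = \bar{n}\,\bigl|\tilde{f}(k)\bigr|^2 .
% For a spectrum of seed masses m, the independent contributions add:
P(k) = \int \frac{d\bar{n}}{dm}\,\bigl|\tilde{f}(k;m)\bigr|^2\,dm .
```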

  16. Multiscale Path Metrics for the Analysis of Discrete Geometric Structures

    DTIC Science & Technology

    2017-11-30

    Report: Multiscale Path Metrics for the Analysis of Discrete Geometric Structures. Report Term: 0-Other. Email: tomasi@cs.duke.edu. Distribution Statement: 1-Approved for public release.

  17. It's Deja Vu All over Again: Using Multiple-Spell Discrete-Time Survival Analysis.

    ERIC Educational Resources Information Center

    Willett, John B.; Singer, Judith D.

    1995-01-01

    The multiple-spell discrete-time survival analysis method is introduced and illustrated using longitudinal data on exit from and reentry into the teaching profession. The method is applicable to many educational problems involving the sequential occurrence of disparate events or episodes. (SLD)

  18. Extreme events and event size fluctuations in biased random walks on networks.

    PubMed

    Kishore, Vimal; Santhanam, M S; Amritkar, R E

    2012-05-01

    Random walks on discrete lattices are important for understanding various types of transport processes. Extreme events, defined as exceedances of the flux of walkers above a prescribed threshold, have been studied recently in the context of complex networks, motivated by rare events such as traffic jams, floods, and power blackouts that take place on networks. In this work, we study extreme events in a generalized random walk model in which the walk is preferentially biased by the network topology: walkers preferentially hop toward either the hubs or the small-degree nodes. In this setting, we show that extremely large fluctuations in event sizes are possible on small-degree nodes when the walkers are biased toward the hubs. In particular, we obtain the distribution of event sizes on the network. Further, the probability of occurrence of extreme events on any node depends on its "generalized strength," a measure of the ability of a node to attract walkers; the generalized strength is a function of the degree of the node and of its nearest neighbors. We obtain analytical and simulation results for the probability of occurrence of extreme events on the nodes of a network using the generalized random walk model. The results reveal that nodes with a larger generalized strength, on average, display a lower probability of extreme-event occurrence than nodes with a lower generalized strength.
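
    A toy version of degree-biased walking, with hop probabilities proportional to (neighbour degree)^alpha; the graph and parameters are my own, chosen only to show that the sign of the bias shifts how much time walkers spend on the hub.

```python
import random

random.seed(2)

# Small graph with heterogeneous degrees: node 0 is the hub.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}

def step(node, alpha):
    """Hop to a neighbour with probability proportional to (its degree)**alpha:
    alpha > 0 biases walkers toward hubs, alpha < 0 toward small-degree nodes."""
    nbrs = adj[node]
    weights = [len(adj[v]) ** alpha for v in nbrs]
    return random.choices(nbrs, weights=weights)[0]

def hub_occupancy(alpha, walkers=200, steps=500):
    """Fraction of walker-steps spent on the hub (node 0)."""
    pos = [random.randrange(len(adj)) for _ in range(walkers)]
    visits = 0
    for _ in range(steps):
        pos = [step(p, alpha) for p in pos]
        visits += sum(p == 0 for p in pos)
    return visits / (walkers * steps)

print(hub_occupancy(1.0), hub_occupancy(-1.0))  # hub-biased vs anti-hub-biased
```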

  19. Eye Movements Reveal the Influence of Event Structure on Reading Behavior

    ERIC Educational Resources Information Center

    Swets, Benjamin; Kurby, Christopher A.

    2016-01-01

    When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond…

  20. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  1. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  2. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  3. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  4. Using Discrete Event Simulation to predict KPI's at a Projected Emergency Room.

    PubMed

    Concha, Pablo; Neriz, Liliana; Parada, Danilo; Ramis, Francisco

    2015-01-01

    Discrete Event Simulation (DES) is a powerful tool in the design of clinical facilities. DES enables facilities to be built or adapted to achieve expected Key Performance Indicators (KPIs), such as average waiting times according to acuity, average stay times, and others. Our computational model was built and validated using expert judgment and supporting statistical data. One scenario studied resulted in a 50% decrease in the average cycle time of patients compared to the original model, mainly by modifying the patient attention model.
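
    The event-list mechanics behind such a DES can be sketched as a single-server FIFO queue: timestamped events on a heap, processed in time order. All rates here are arbitrary, not the clinical model's, and service completions are folded into a single variable rather than pushed as events.

```python
import heapq
import random

random.seed(3)

def simulate_queue(arrival_rate=1.0, service_rate=1.5, n_patients=5000):
    """Minimal discrete-event core: a heap of (time, kind) events popped in
    time order. A fuller DES would also push completion events and model
    multiple servers and acuity classes."""
    t, events = 0.0, []
    for _ in range(n_patients):                  # Poisson arrival process
        t += random.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival"))
    server_free_at, waits = 0.0, []
    while events:
        arrival_time, _kind = heapq.heappop(events)
        start = max(arrival_time, server_free_at)    # FIFO single server
        waits.append(start - arrival_time)
        server_free_at = start + random.expovariate(service_rate)
    return sum(waits) / len(waits)

mean_wait = simulate_queue()
print(round(mean_wait, 2))  # M/M/1 theory: Wq = rho/(mu - lambda) = 4/3
```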

  5. On the Biogeography of Centipeda: A Species-Tree Diffusion Approach

    PubMed Central

    Nylinder, Stephan; Lemey, Philippe; De Bruyn, Mark; Suchard, Marc A.; Pfeil, Bernard E.; Walsh, Neville; Anderberg, Arne A.

    2014-01-01

    Reconstructing the biogeographic history of groups present in continuous arid landscapes is challenging due to the difficulties in defining discrete areas for analyses, and even more so when species largely overlap both in terms of geography and habitat preference. In this study, we use a novel approach to estimate ancestral areas for the small plant genus Centipeda. We apply continuous diffusion of geography by a relaxed random walk where each species is sampled from its extant distribution on an empirical distribution of time-calibrated species-trees. Using a distribution of previously published substitution rates of the internal transcribed spacer (ITS) for Asteraceae, we show how the evolution of Centipeda correlates with the temporal increase of aridity in the arid zone since the Pliocene. Geographic estimates of ancestral species show a consistent pattern of speciation of early lineages in the Lake Eyre region, with a division in more northerly and southerly groups since ∼840 ka. Summarizing the geographic slices of species-trees at the time of the latest speciation event (∼20 ka) indicates no presence of the genus in Australia west of the combined desert belt of the Nullarbor Plain, the Great Victoria Desert, the Gibson Desert, and the Great Sandy Desert, or beyond the main continental shelf of Australia. The result indicates all western occurrences of the genus to be a result of recent dispersal rather than ancient vicariance. This study contributes to our understanding of the spatiotemporal processes shaping the flora of the arid zone, and offers a significant improvement in inference of ancestral areas for any organismal group distributed where it remains difficult to describe geography in terms of discrete areas. PMID:24335493

  6. Approximation of discrete-time LQG compensators for distributed systems with boundary input and unbounded measurement

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1987-01-01

    The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
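
    For intuition only, the finite-dimensional LQG machinery the approximations converge to reduces, in the scalar case, to a one-line Riccati iteration. This is the textbook discrete-time recursion with made-up coefficients, not the paper's infinite-dimensional scheme.

```python
def dare_scalar(a, b, q, r, iters=200):
    """Value iteration for the scalar discrete algebraic Riccati equation
        P = q + a^2*P - (a*b*P)^2 / (r + b^2*P),
    returning the cost-to-go P and the optimal LQR feedback gain K."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    K = a * b * P / (r + b * b * P)
    return P, K

P, K = dare_scalar(a=1.1, b=1.0, q=1.0, r=1.0)
# The open-loop pole a = 1.1 is unstable; the closed-loop pole a - b*K
# must lie strictly inside the unit circle.
print(round(P, 4), round(1.1 - K, 4))
```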

  7. Hybrid stochastic simplifications for multiscale gene networks.

    PubMed

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-09-07

    Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
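
    The exact-simulation baseline that hybrid schemes accelerate is Gillespie's algorithm. A minimal birth-death sketch (hypothetical rates, not from the paper) makes the cost visible: every discrete jump is simulated individually.

```python
import random

random.seed(4)

def gillespie_birth_death(k_on=10.0, k_off=1.0, t_max=50.0):
    """Exact stochastic simulation (Gillespie) of a birth-death process:
    production at rate k_on, degradation at rate k_off * n."""
    t, n = 0.0, 0
    trajectory = []
    while t < t_max:
        birth, death = k_on, k_off * n
        total = birth + death
        t += random.expovariate(total)       # exponential waiting time
        if random.uniform(0.0, total) < birth:
            n += 1                           # production event
        else:
            n -= 1                           # degradation event
        trajectory.append(n)
    return trajectory

traj = gillespie_birth_death()
mean_n = sum(traj) / len(traj)
print(len(traj), round(mean_n, 2))  # stationary mean of n is k_on/k_off = 10
```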

  8. Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2014-07-01

    The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure using a new method based on clustering within the discrete Laplace exponential family that models the probability distribution of the Y-STR haplotypes. Creating a consistent statistical model of the haplotypes enables us to perform a wide range of analyses. Previously, haplotype frequency estimation using the discrete Laplace method has been validated. In this paper we investigate how the discrete Laplace method can be used for cluster analysis to further validate the discrete Laplace method. A very important practical fact is that the calculations can be performed on a normal computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes similar to results of previous studies. We also compared pairwise distances (between geographically separated samples) with those obtained using the AMOVA method and found good agreement. Further analyses that are impossible with AMOVA were made using the discrete Laplace method: analysis of the homogeneity in two different ways and calculating marginal STR distributions. We found that the Y-STR haplotypes from e.g. Finland were relatively homogeneous as opposed to the relatively heterogeneous Y-STR haplotypes from e.g. Lublin, Eastern Poland and Berlin, Germany. We demonstrated that the observed distributions of alleles at each locus were similar to the expected ones. We also compared pairwise distances between geographically separated samples from Africa with those obtained using the AMOVA method and found good agreement.
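
    The distributional building block can be written down directly. A short check (parameter value arbitrary) that the discrete Laplace pmf, with its two geometric tails, is a proper distribution:

```python
def discrete_laplace_pmf(x, p, mu=0):
    """P(X = x) for the discrete Laplace distribution centred at integer mu:
    P(X = mu + k) = (1 - p)/(1 + p) * p**|k|, with 0 < p < 1."""
    return (1 - p) / (1 + p) * p ** abs(x - mu)

p = 0.4
total = sum(discrete_laplace_pmf(k, p) for k in range(-50, 51))
print(total)  # the geometric tails sum to 1 over all integers
```

    In the Y-STR setting, the repeat count at each locus is modelled as discrete-Laplace around a cluster's central haplotype, which is what makes marginal STR distributions and homogeneity analyses tractable.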

  9. Robustness of quantum key distribution with discrete and continuous variables to channel noise

    NASA Astrophysics Data System (ADS)

    Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.

    2017-06-01

    We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce a model of such noise, based on coupling of the signal to a thermal reservoir (typical for continuous-variable quantum key distribution), to the discrete-variable case. We then compare the bounds on tolerable channel noise between the two kinds of protocols using the same noise parametrization, assuming an implementation that is otherwise perfect. The results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, for strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The single-photon production probability that a practical photon source would have to achieve in order to demonstrate such superiority is feasible thanks to recent rapid development in this field.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barr, G.E.; Borns, D.J.; Fridrich, C.

    A comprehensive collection of scenarios is presented that connects initiating tectonic events with radionuclide releases through logical and physically possible combinations or sequences of features, events, and processes. The initiating tectonic events include both discrete faulting and distributed rock deformation developed through the repository and adjacent to it, as well as earthquake-induced ground motion and changes in tectonic stress at the site. The effects of these tectonic events include impacts on the engineered-barrier system, such as container rupture and failure of repository tunnels. They also include a wide range of hydrologic effects, such as changes in pathways and flow rates in the unsaturated and saturated zones, changes in the water-table configuration, and development of perched-water systems. These scenarios are intended to guide performance-assessment analyses and to assist principal investigators in determining how essential field, laboratory, and calculational studies are used. This suite of scenarios will help ensure that all important aspects of system disturbance related to a tectonic scenario are captured in numerical analyses. It also provides a record of all options considered by project analysts, supplying documentation required for licensing. The final portion of this report discusses issues remaining to be addressed with respect to tectonic activity. 105 refs.

  11. Self-organized criticality in a two-dimensional cellular automaton model of a magnetic flux tube with background flow

    NASA Astrophysics Data System (ADS)

    Dănilă, B.; Harko, T.; Mocanu, G.

    2015-11-01

    We investigate the transition to self-organized criticality in a two-dimensional model of a flux tube with a background flow. The magnetic induction equation, represented by a partial differential equation with a stochastic source term, is discretized and implemented on a two-dimensional cellular automaton. The energy released by the automaton during one relaxation event is the magnetic energy. As a result of the simulations, we obtain the time evolution of the energy release, of the system control parameter, of the event lifetime distribution and of the event size distribution, respectively, and we establish that a self-organized critical state is indeed reached by the system. Moreover, energetic initial impulses in the magnetohydrodynamic flow can lead to one-dimensional signatures in the magnetic two-dimensional system, once the self-organized critical regime is established. The applications of the model for the study of gamma-ray bursts (GRBs) is briefly considered, and it is shown that some astrophysical parameters of the bursts, like the light curves, the maximum released energy and the number of peaks in the light curve can be reproduced and explained, at least on a qualitative level, by working in a framework in which the systems settles in a self-organized critical state via magnetic reconnection processes in the magnetized GRB fireball.
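
    The cellular-automaton route to self-organized criticality can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile, a much simpler automaton than the flux-tube model above (no background flow, no magnetic induction equation), but exhibiting the same driving/relaxation cycle and avalanche statistics.

```python
import random

random.seed(5)

def sandpile(size=20, grains=5000, threshold=4):
    """2-D Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; a site
    holding >= threshold grains topples, sending one grain to each of its 4
    neighbours (grains fall off the open boundary). Returns the avalanche
    size (number of topplings) triggered by each grain."""
    z = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        i, j = random.randrange(size), random.randrange(size)
        z[i][j] += 1
        stack, avalanche = [(i, j)], 0
        while stack:
            x, y = stack.pop()
            if z[x][y] < threshold:
                continue
            z[x][y] -= threshold
            avalanche += 1
            stack.append((x, y))               # may still be unstable
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    z[nx][ny] += 1
                    stack.append((nx, ny))
        sizes.append(avalanche)
    return sizes

sizes = sandpile()
print(max(sizes), sum(s == 0 for s in sizes))  # large avalanches emerge at criticality
```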

  12. Modeling effectiveness of management practices for flood mitigation using GIS spatial analysis functions in Upper Ciliwung watershed

    NASA Astrophysics Data System (ADS)

    Darma Tarigan, Suria

    2016-01-01

    Flooding is caused by excessive rainfall flowing downstream as cumulative surface runoff. A flooding event is the result of complex interactions among natural system components such as rainfall events, land use, soil, topography and channel characteristics. Modeling flooding as a result of the interaction of these components is a central theme in watershed management, and such models are typically used to test the performance of various management practices in flood mitigation, including vegetative and structural management practices. Existing hydrological models such as SWAT and HEC-HMS have limited ability to accommodate discrete management practices such as infiltration wells, small farm reservoirs and silt pits, due to their lumped structure. The aim of this research is to use the raster spatial analysis functions of a Geo-Information System (RGIS-HM) to model flooding in the Ciliwung watershed and to simulate the impact of discrete management practices on surface runoff reduction. The model was validated using data from the Ciliwung watershed flood event of 29 January 2004; hourly hydrograph and rainfall data were available for the validation period. Validation gave good results, with a Nash-Sutcliffe efficiency of 0.8. We also compared the RGIS-HM with the Netlogo Hydrological Model (NL-HM); the RGIS-HM has capability similar to the NL-HM in simulating discrete management practices at the watershed scale.

  13. Polystyrene microspheres enable 10‐color compensation for immunophenotyping of primary human leukocytes

    PubMed Central

    Carr, Karen D.; Norman, John C.; Huye, Leslie; Hegde, Meenakshi

    2015-01-01

    Compensation is a critical process for the unbiased analysis of flow cytometry data. Numerous compensation strategies exist, including the use of bead-based products. The purpose of this study was to determine whether beads, specifically polystyrene microspheres (PSMS), compare to primary leukocytes for single-color-based compensation when conducting polychromatic flow cytometry. To do so, we stained individual tubes of both PSMS and leukocytes with panel-specific antibodies conjugated to fluorochromes corresponding to fluorescent channels FL1-FL10. We compared the matrix generated by PSMS to that generated using peripheral blood mononuclear cells (PBMC). Ideal for compensation is a sample with both a discrete negative population and a bright positive population. We demonstrate that PSMS display autofluorescence properties similar to PBMC. When comparing PSMS to PBMC for compensation, PSMS yielded more evenly distributed and discrete negative and positive populations to use for compensation. We analyzed three donors' PBMC stained with our 10-color T cell subpopulation panel using compensation generated by PSMS vs. PBMC and detected no significant differences in the population distribution. Panel-specific antibodies bound to PSMS represent an invaluable, valid tool for generating suitable compensation matrices, especially when sample material is limited and/or the sample requires analysis of dynamically modulated or rare events. PMID:26202733

  14. Stability and bifurcation analysis for the Kaldor-Kalecki model with a discrete delay and a distributed delay

    NASA Astrophysics Data System (ADS)

    Yu, Jinchen; Peng, Mingshu

    2016-10-01

    In this paper, a Kaldor-Kalecki model of the business cycle with both discrete and distributed delays is considered. By analyzing the corresponding characteristic equation, the local stability of the positive equilibrium is investigated. It is found that Hopf bifurcations occur when the discrete time delay passes a sequence of critical values. By applying the method of multiple scales, explicit formulae are derived that determine the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions. Finally, numerical simulations are carried out to illustrate the main results.
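
    Discrete-delay dynamics of this kind can be integrated numerically with a history buffer. The sketch below uses a toy scalar delay equation, not the Kaldor-Kalecki system, but it shows the same Hopf mechanism: the equilibrium loses stability once the delay crosses a critical value.

```python
import math
from collections import deque

def delayed_feedback(tau, dt=0.01, t_max=200.0, y0=0.1):
    """Euler integration of the toy delay equation y'(t) = -tanh(y(t - tau)).
    Its linearisation y' = -y(t - tau) undergoes a Hopf-type instability at
    tau = pi/2, so small delays decay to 0 while large delays oscillate."""
    lag = int(round(tau / dt))
    buf = deque([y0] * (lag + 1), maxlen=lag + 1)   # buf[0] == y(t - tau)
    ys = []
    for _ in range(int(t_max / dt)):
        y_new = buf[-1] + dt * (-math.tanh(buf[0]))
        buf.append(y_new)                           # evicts the oldest entry
        ys.append(y_new)
    return ys

short = delayed_feedback(tau=0.5)   # below pi/2: equilibrium is stable
long_ = delayed_feedback(tau=2.0)   # above pi/2: sustained oscillation
print(abs(short[-1]), round(max(abs(v) for v in long_[-2000:]), 2))
```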

  15. Modeling the rate of HIV testing from repeated binary data amidst potential never-testers.

    PubMed

    Rice, John D; Johnson, Brent A; Strawderman, Robert L

    2018-01-04

    Many longitudinal studies with a binary outcome measure involve a fraction of subjects with a homogeneous response profile. In our motivating data set, a study on the rate of human immunodeficiency virus (HIV) self-testing in a population of men who have sex with men (MSM), a substantial proportion of the subjects did not self-test during the follow-up study. The observed data in this context consist of a binary sequence for each subject indicating whether or not that subject experienced any events between consecutive observation time points, so subjects who never self-tested were observed to have a response vector consisting entirely of zeros. Conventional longitudinal analysis is not equipped to handle questions regarding the rate of events (as opposed to the odds, as in the classical logistic regression model). With the exception of discrete mixture models, such methods are also not equipped to handle settings in which there may exist a group of subjects for whom no events will ever occur, i.e., a so-called "never-responder" group. In this article, we model the observed data assuming that events occur according to some unobserved continuous-time stochastic process. In particular, we consider the underlying subject-specific processes to be Poisson conditional on some unobserved frailty, leading to a natural focus on modeling event rates. Specifically, we propose to use the power variance function (PVF) family of frailty distributions, which contains both the gamma and inverse Gaussian distributions as special cases and allows for the existence of a class of subjects having zero frailty. We generalize a computational algorithm developed for a log-gamma random intercept model (Conaway, 1990. A random effects model for binary data. Biometrics 46, 317-328) to compute the exact marginal likelihood, which is then maximized to obtain estimates of model parameters. 
We conduct simulation studies, exploring the performance of the proposed method in comparison with competitors. Applying the PVF as well as a Gaussian random intercept model and a corresponding discrete mixture model to our motivating data set, we conclude that the group assigned to receive follow-up messages via SMS was self-testing at a significantly lower rate than the control group, but that there is no evidence to support the existence of a group of never-testers. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
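
    As an illustration of the modeling idea above: in the gamma special case of the PVF frailty, a subject's marginal event count is negative binomial, and a point mass of zero-frailty "never-responders" inflates the zeros. The sketch below is a hypothetical simplification (the function names, and the reduction of each subject's data to a total event count, are mine, not the authors' exact-marginal-likelihood algorithm):

```python
import math

def nb_logpmf(y, r, p):
    # Negative binomial log-pmf: Poisson count with a gamma(r) frailty
    # mixed over the rate; p = r / (r + mu) is the "success" probability.
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(p) + y * math.log(1 - p))

def zero_frailty_mixture_loglik(counts, pi0, r, mu):
    """Log-likelihood of total event counts under a gamma-frailty Poisson
    model with a point mass pi0 of zero-frailty (never-responder) subjects.
    mu is the marginal event rate for the susceptible class."""
    p = r / (r + mu)
    ll = 0.0
    for y in counts:
        if y == 0:
            # a zero can come from a never-responder or a susceptible subject
            ll += math.log(pi0 + (1 - pi0) * math.exp(nb_logpmf(0, r, p)))
        else:
            ll += math.log(1 - pi0) + nb_logpmf(y, r, p)
    return ll
```

    Maximizing this log-likelihood over (pi0, r, mu) would mimic, in caricature, the estimation step described in the abstract.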

  16. Adaptive Neural Network-Based Event-Triggered Control of Single-Input Single-Output Nonlinear Discrete-Time Systems.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2016-01-01

    This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.

  17. A methodology for risk analysis based on hybrid Bayesian networks: application to the regasification system of liquefied natural gas onboard a floating storage and regasification unit.

    PubMed

    Martins, Marcelo Ramos; Schleder, Adriana Miralles; Droguett, Enrique López

    2014-12-01

    This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distribution. Thus, this approach makes the methodology particularly useful in cases where the available data for quantifying the probabilities of hazardous events are scarce or nonexistent, where there is dependence among events, or where nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping, and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty. © 2014 Society for Risk Analysis.

  18. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.

  19. Near Optimal Event-Triggered Control of Nonlinear Discrete-Time Systems Using Neurodynamic Programming.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2016-09-01

    This paper presents an event-triggered near optimal control of uncertain nonlinear discrete-time systems. Event-driven neurodynamic programming (NDP) is utilized to design the control policy. A neural network (NN)-based identifier, with event-based state and input vectors, is utilized to learn the system dynamics. An actor-critic framework is used to learn the cost function and the optimal control input. The NN weights of the identifier, the critic, and the actor NNs are tuned aperiodically once every triggered instant. An adaptive event-trigger condition to decide the trigger instants is derived. Thus, a suitable number of events are generated to ensure a desired accuracy of approximation. A near optimal performance is achieved without using value and/or policy iterations. A detailed analysis of nontrivial inter-event times with an explicit formula to show the reduction in computation is also derived. The Lyapunov technique is used in conjunction with the event-trigger condition to guarantee the ultimate boundedness of the closed-loop system. The simulation results are included to verify the performance of the controller. The net result is the development of event-driven NDP.

  20. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    PubMed

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
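
    The predictive values quoted above follow directly from the reported counts (56/75 SM cases and 17/124 non-SM cases with a FLOCK-identified mast cell population). A minimal sketch of that arithmetic (the function name is illustrative):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard screening metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # detected SM / all SM
        "specificity": tn / (tn + fp),   # clean non-SM / all non-SM
        "ppv": tp / (tp + fp),           # true positives among positives
        "npv": tn / (tn + fn),           # true negatives among negatives
    }

# Counts implied by the abstract: 56 of 75 SM cases positive,
# 17 of 124 non-SM cases positive.
m = diagnostic_metrics(tp=56, fn=19, fp=17, tn=107)
# m["sensitivity"] is about 0.75 and m["specificity"] about 0.86,
# matching the reported 75% and 86%.
```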

  1. Data Flow Analysis and Visualization for Spatiotemporal Statistical Data without Trajectory Information.

    PubMed

    Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S

    2018-03-01

    Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance.
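
    A deliberately simplified 1-D caricature of the density-plus-gravity idea above: estimate event density at two time snapshots with a Gaussian kernel, then point the flow along the spatial density gradient, scaled by the temporal density change. All names and the exact weighting here are assumptions for illustration, not the authors' model:

```python
import numpy as np

def density_grid(points, grid, bandwidth=1.0):
    """Gaussian kernel density of event locations on a 1-D grid."""
    d = grid[:, None] - np.asarray(points)[None, :]
    return np.exp(-0.5 * (d / bandwidth) ** 2).sum(axis=1)

def flow_field(points_t0, points_t1, grid, bandwidth=1.0):
    """Toy gravity-style flow: the direction of the spatial density
    gradient at the later snapshot, scaled by the temporal change in
    density between the two snapshots."""
    rho0 = density_grid(points_t0, grid, bandwidth)
    rho1 = density_grid(points_t1, grid, bandwidth)
    drho_dx = np.gradient(rho1, grid)         # spatial change
    return (rho1 - rho0) * np.sign(drho_dx)   # temporal change x direction
```

    With events drifting from x = 0 to x = 2 between snapshots, the resulting field points toward the new concentration, which is the qualitative behaviour a flow map of discrete events should show.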

  2. Tutorial in medical decision modeling incorporating waiting lines and queues using discrete event simulation.

    PubMed

    Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter

    2010-01-01

    In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources, and their impact on treatment effects and costs, often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques such as discrete event simulation are used to evaluate systems that include queuing or waiting. Including queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers get an understanding of queue characteristics, modeling features, and their strengths. Conceptual issues are covered, but the emphasis is on practical issues like modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities and to incorporate waiting lines and queues in the decision-analytic modeling example.
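
    The queuing idea above can be made concrete with a minimal discrete event simulation of an M/M/1 queue (Poisson arrivals, exponential service, one FIFO server), here advanced event-by-event over customer arrivals and departures. This is a generic sketch, not the tutorial's own model:

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed=0):
    """Discrete event simulation of an M/M/1 queue: Poisson arrivals
    (rate lam), exponential service times (rate mu), one FIFO server.
    Returns the average time customers wait before service begins."""
    rng = random.Random(seed)
    t_arrival = 0.0       # arrival time of the current customer
    server_free_at = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)          # next arrival event
        start = max(t_arrival, server_free_at)     # wait if server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(mu)  # departure event
    return total_wait / n_customers
```

    For lam < mu, the long-run mean wait approaches the M/M/1 closed form lam / (mu * (mu - lam)); the simulation becomes useful precisely when such closed formulas no longer apply.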

  3. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilke, Jeremiah J; Kenny, Joseph P.

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.

  4. Self-Organisation and Intermittent Coherent Oscillations in the EXTRAP T2 Reversed Field Pinch

    NASA Astrophysics Data System (ADS)

    Cecconello, M.; Malmberg, J.-A.; Sallander, E.; Drake, J. R.

    Many reversed-field pinch (RFP) experiments exhibit a coherent oscillatory behaviour that is characteristic of discrete dynamo events and is associated with intermittent current profile self-organisation phenomena. However, in the vast majority of the discharges in the resistive shell RFP experiment EXTRAP T2, the dynamo activity does not show global, coherent oscillatory behaviour. The internally resonant tearing modes are phase-aligned and wall-locked resulting in a large localised magnetic perturbation. Equilibrium and plasma parameters have a level of high frequency fluctuations but the average values are quasi-steady. For some discharges, however, the equilibrium parameters exhibit the oscillatory behaviour characteristic of the discrete dynamo events. For these discharges, the trend observed in the tearing mode spectra, associated with the onset of the discrete relaxation event behaviour, is a relative higher amplitude of m = 0 mode activity and relative lower amplitude of the m = 1 mode activity compared with their average values. Global plasma parameters and model profile calculations for sample discharges representing the two types of relaxation dynamics are presented.

  5. A Decision Tool that Combines Discrete Event Software Process Models with System Dynamics Pieces for Software Development Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn Barrett; Malone, Linda

    2007-01-01

    The development process for a large software development project is very complex and dependent on many variables that are dynamic and interrelated. Factors such as size, productivity and defect injection rates will have substantial impact on the project in terms of cost and schedule. These factors can be affected by the intricacies of the process itself as well as human behavior because the process is very labor intensive. The complex nature of the development process can be investigated with software development process models that utilize discrete event simulation to analyze the effects of process changes. The organizational environment and its effects on the workforce can be analyzed with system dynamics that utilizes continuous simulation. Each has unique strengths and the benefits of both types can be exploited by combining a system dynamics model and a discrete event process model. This paper will demonstrate how the two types of models can be combined to investigate the impacts of human resource interactions on productivity and ultimately on cost and schedule.

  6. Diagnosis of delay-deadline failures in real time discrete event models.

    PubMed

    Biswas, Santosh; Sarkar, Dipankar; Bhowal, Prodip; Mukhopadhyay, Siddhartha

    2007-10-01

    In this paper a method for fault detection and diagnosis (FDD) of real time systems has been developed. A modeling framework termed as real time discrete event system (RTDES) model is presented and a mechanism for FDD of the same has been developed. The use of RTDES framework for FDD is an extension of the works reported in the discrete event system (DES) literature, which are based on finite state machines (FSM). FDD of RTDES models are suited for real time systems because of their capability of representing timing faults leading to failures in terms of erroneous delays and deadlines, which FSM-based ones cannot address. The concept of measurement restriction of variables is introduced for RTDES and the consequent equivalence of states and indistinguishability of transitions have been characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and the procedure of constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.

  7. A computational framework for prime implicants identification in noncoherent dynamic systems.

    PubMed

    Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico

    2015-01-01

    Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is here described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. Then, PIs are originally identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. For testing the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.

  8. A Computational Model of Event Segmentation from Perceptual Prediction

    ERIC Educational Resources Information Center

    Reynolds, Jeremy R.; Zacks, Jeffrey M.; Braver, Todd S.

    2007-01-01

    People tend to perceive ongoing continuous activity as series of discrete events. This partitioning of continuous activity may occur, in part, because events correspond to dynamic patterns that have recurred across different contexts. Recurring patterns may lead to reliable sequential dependencies in observers' experiences, which then can be used…

  9. 49 CFR Appendix B to Part 242 - Procedures for Submission and Approval of Conductor Certification Programs

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... are voluntary or mandatory. Time and circumstances have the capacity to diminish both abstract knowledge and the proper application of that knowledge to discrete events. Time and circumstances also have.... In formulating how it will use the discretion being afforded, each railroad must design its program...

  10. 49 CFR Appendix B to Part 242 - Procedures for Submission and Approval of Conductor Certification Programs

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... are voluntary or mandatory. Time and circumstances have the capacity to diminish both abstract knowledge and the proper application of that knowledge to discrete events. Time and circumstances also have.... In formulating how it will use the discretion being afforded, each railroad must design its program...

  11. 49 CFR Appendix B to Part 242 - Procedures for Submission and Approval of Conductor Certification Programs

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... are voluntary or mandatory. Time and circumstances have the capacity to diminish both abstract knowledge and the proper application of that knowledge to discrete events. Time and circumstances also have.... In formulating how it will use the discretion being afforded, each railroad must design its program...

  12. Taxometric Investigation of PTSD: Data from Two Nationally Representative Samples

    ERIC Educational Resources Information Center

    Broman-Fulks, Joshua J.; Ruggiero, Kenneth J.; Green, Bradley A.; Kilpatrick, Dean G.; Danielson, Carla Kmett; Resnick, Heidi S.; Saunders, Benjamin E.

    2006-01-01

    Current psychiatric nosology depicts posttraumatic stress disorder (PTSD) as a discrete diagnostic category. However, only one study has examined the latent structure of PTSD, and this study suggested that PTSD may be more accurately conceptualized as an extreme reaction to traumatic life events rather than a discrete clinical syndrome. To build…

  13. CONFIG: Integrated engineering of systems and their operation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    This article discusses CONFIG 3, a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operations of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. CONFIG supports integration among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. CONFIG is designed to support integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems.

  14. Recent Advances in Composite Damage Mechanics

    NASA Technical Reports Server (NTRS)

    Reifsnider, Ken; Case, Scott; Iyengar, Nirmal

    1996-01-01

    The state of the art and recent developments in the field of composite material damage mechanics are reviewed, with emphasis on damage accumulation. The kinetics of damage accumulation are considered with emphasis on the general accumulation of discrete local damage events such as single or multiple fiber fractures or microcrack formation. The issues addressed include: how to define strength in the presence of widely distributed damage, and how to combine mechanical representations in order to predict the damage tolerance and life of engineering components. It is shown that a damage mechanics approach can be related to the thermodynamics of the damage accumulation processes in composite laminates subjected to mechanical loading and environmental conditions over long periods of time.

  15. Focal mechanisms and inter-event times of low-frequency earthquakes reveal quasi-continuous deformation and triggered slow slip on the deep Alpine Fault

    NASA Astrophysics Data System (ADS)

    Baratin, Laura-May; Chamberlain, Calum J.; Townend, John; Savage, Martha K.

    2018-02-01

    Characterising the seismicity associated with slow deformation in the vicinity of the Alpine Fault may provide constraints on the stresses acting on a major transpressive margin prior to an anticipated great (≥M8) earthquake. Here, we use recently detected tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault late in its typical ∼300-yr seismic cycle. We analyse a continuous seismic dataset recorded between 2009 and 2016 using a network of 10-13 short-period seismometers, the Southern Alps Microearthquake Borehole Array. Fourteen primary LFE templates are used in an iterative matched-filter and stacking routine, allowing the detection of similar signals corresponding to LFE families sharing common locations. This yields an 8-yr catalogue containing 10,000 LFEs that are combined for each of the 14 LFE families using phase-weighted stacking to produce signals with the highest possible signal-to-noise ratios. We show that LFEs occur almost continuously during the 8-yr study period and highlight two types of LFE distributions: (1) discrete behaviour with an inter-event time exceeding 2 min; (2) burst-like behaviour with an inter-event time below 2 min. We interpret the discrete events as small-scale frequent deformation on the deep extent of the Alpine Fault and LFE bursts (corresponding in most cases to known episodes of tremor or large regional earthquakes) as brief periods of increased slip activity indicative of slow slip. We compute improved non-linear earthquake locations using a 3-D velocity model. LFEs occur below the seismogenic zone at depths of 17-42 km, on or near the hypothesised deep extent of the Alpine Fault. The first estimates of LFE focal mechanisms associated with continental faulting, in conjunction with recurrence intervals, are consistent with quasi-continuous shear faulting on the deep extent of the Alpine Fault.
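
    The two-regime classification described above (burst-like vs. discrete behaviour, split at a 2-minute inter-event time) reduces to a simple pass over the detection times; a hypothetical sketch:

```python
def classify_interevent(times_minutes, burst_threshold=2.0):
    """Label each consecutive pair of detections as 'burst' if the
    inter-event time falls below the threshold (in minutes), else
    'discrete' -- the split used for the LFE catalogue above."""
    gaps = [b - a for a, b in zip(times_minutes, times_minutes[1:])]
    return ["burst" if g < burst_threshold else "discrete" for g in gaps]
```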

  16. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a standard technique in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical $(r,s,k)$ transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems. Examples include discrete time, discrete strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, $(l_1, l_2, k)$, just as the Box-Jenkins transfer function models. Here $l_1$ is the maximal input history length to consider, $l_2$ is the maximal output history length to consider, and $k$ is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
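
    The Mealy-machine transfer-function idea above can be approximated by frequency counting: condition the output k steps ahead on the last l1 inputs and the last l2 outputs. The sketch below estimates those conditional distributions by maximum likelihood; it is an assumption-laden stand-in for the paper's algorithm, not a reproduction of it:

```python
from collections import Counter, defaultdict

def fit_mealy(inputs, outputs, l1=1, l2=1, k=1):
    """Estimate a probabilistic Mealy transfer model: the distribution of
    the output k steps ahead, conditioned on the last l1 input symbols and
    the last l2 output symbols (an (l1, l2, k) discrete transfer function)."""
    model = defaultdict(Counter)
    for t in range(max(l1, l2) - 1, len(inputs) - k):
        state = (tuple(inputs[t - l1 + 1:t + 1]),
                 tuple(outputs[t - l2 + 1:t + 1]))
        model[state][outputs[t + k]] += 1
    # normalise counts to conditional probabilities
    return {s: {y: c / sum(cnt.values()) for y, c in cnt.items()}
            for s, cnt in model.items()}
```

    On a deterministic alternating stream the fitted model collapses to point masses, as it should; on noisy streams the counts estimate the hidden input/output relation.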

  17. Discrete-Event Simulation Models of Plasmodium falciparum Malaria

    PubMed Central

    McKenzie, F. Ellis; Wong, Roger C.; Bossert, William H.

    2008-01-01

    We develop discrete-event simulation models using a single “timeline” variable to represent the Plasmodium falciparum lifecycle in individual hosts and vectors within interacting host and vector populations. Where they are comparable, our conclusions regarding the relative importance of vector mortality and the durations of host immunity and parasite development are congruent with those of classic differential-equation models of malaria epidemiology. However, our results also imply that in regions with intense perennial transmission, the influence of mosquito mortality on malaria prevalence in humans may be rivaled by that of the duration of host infectivity. PMID:18668185

  18. Graph-theoretic analysis of discrete-phase-space states for condition change detection and quantification of information

    DOEpatents

    Hively, Lee M.

    2014-09-16

    Data collected from devices and from human physiological monitoring may be used to forewarn of critical events, such as machine/structural failure or stroke forewarned by brain/heart wave data. By monitoring the data, and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (unstructured data) into discrete-phase-space states, and hence into a graph (structured data) for extraction of condition change.

  19. Control of discrete event systems modeled as hierarchical state machines

    NASA Technical Reports Server (NTRS)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
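
    For contrast with the hierarchical method above: on an already-flattened state machine, reachability testing is plain breadth-first search over the transition relation, which is exactly the step the AHSM representation is designed to speed up. A minimal sketch (the dictionary representation is chosen for illustration):

```python
from collections import deque

def reachable(transitions, start):
    """Breadth-first reachability over a flat state machine given as
    {state: [successor states]}; returns the set of reachable states."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        for nxt in transitions.get(s, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

    On the flat machine this costs time proportional to the number of states and transitions, which explodes combinatorially when hierarchy is flattened; exploiting the AHSM structure avoids that blow-up.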

  20. Networked event-triggered control: an introduction and research trends

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Sabih, Muhammad

    2014-11-01

    A physical system can be studied as either a continuous-time or a discrete-time system, depending upon the control objectives. Discrete-time control systems can be further classified into two categories based on the sampling: (1) time-triggered control systems and (2) event-triggered control systems. Time-triggered systems sample states and calculate controls at every sampling instant in a periodic fashion, even when the states and the calculated control do not change much. This entails unnecessary data transmission and computation in a time-triggered system, and thus inefficiency. For networked systems, the transmission of measurement and control signals therefore causes unnecessary network traffic. Event-triggered systems, on the other hand, have the potential to reduce the communication burden in addition to reducing the computation of control signals. This paper provides an up-to-date survey of event-triggered methods for control systems and highlights potential research directions.
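
    The send-on-delta rule at the heart of event-triggered sampling, as contrasted with periodic time-triggered sampling above, can be sketched in a few lines (the threshold and the scalar state trace are illustrative choices, not a specific scheme from the survey):

```python
def event_triggered_trace(states, threshold):
    """Transmit a sample only when the state has drifted more than
    `threshold` from the last transmitted value -- the basic send-on-delta
    rule behind event-triggered control. Returns the transmitted samples."""
    sent = [states[0]]          # always transmit the initial state
    last = states[0]
    for x in states[1:]:
        if abs(x - last) > threshold:
            sent.append(x)      # trigger: state moved enough to matter
            last = x
    return sent
```

    A time-triggered scheme would transmit all five samples of a slowly varying trace; the event-triggered rule transmits only the ones that carry new information, which is the communication saving the survey discusses.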

  1. Non-fragile ℓ2-ℓ∞ control for discrete-time stochastic nonlinear systems under event-triggered protocols

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Ding, Derui; Zhang, Sunjie; Wei, Guoliang; Liu, Hongjian

    2018-07-01

    In this paper, the non-fragile ℓ2-ℓ∞ control problem is investigated for a class of discrete-time stochastic nonlinear systems under event-triggered communication protocols, which determine whether the measurement output should be transmitted to the controller or not. The main purpose of the addressed problem is to design an event-based output feedback controller subject to gain variations guaranteeing the prescribed disturbance attenuation level described by the ℓ2-ℓ∞ performance index. By utilizing the Lyapunov stability theory combined with the S-procedure, a sufficient condition is established to guarantee both the exponential mean-square stability and the ℓ2-ℓ∞ performance for the closed-loop system. In addition, with the help of the orthogonal decomposition, the desired controller parameter is obtained in terms of the solution to certain linear matrix inequalities. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed event-based controller design scheme.

  2. An Empirical Study of Combining Communicating Processes in a Parallel Discrete Event Simulation

    DTIC Science & Technology

    1990-12-01

    dynamics of the cost/performance criteria which typically made up computer resource acquisition decisions. offering a broad range of tradeoffs in the way... processes has a significant impact on simulation performance. It is the hypothesis of this 3-4 SYSTEM DECOMPOSITION PHYSICAL SYSTEM 1: N PHYSICAL PROCESS 1...EMPTY)) next-event = pop(next-event-queue); lp-clock = next-event.time; simulate next event; departure: consume event, enqueue new event; end while; If no
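The event-loop fragment visible in this excerpt (pop the next event, advance the logical-process clock, simulate it) can be reconstructed as a minimal runnable sketch; this is a hedged reconstruction for illustration, not the report's actual code:

```python
import heapq

def run_lp(events):
    """Sequential event loop of one logical process (LP): repeatedly
    pop the lowest-timestamped event, advance the LP clock to that
    timestamp, and handle the event."""
    queue = list(events)           # (timestamp, name) pairs
    heapq.heapify(queue)
    clock, log = 0.0, []
    while queue:
        clock, name = heapq.heappop(queue)
        log.append((clock, name))  # "simulate" the event
    return clock, log

clock, log = run_lp([(2.0, "depart"), (1.0, "arrive")])
print(clock, log)  # 2.0 [(1.0, 'arrive'), (2.0, 'depart')]
```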

  3. Discrete ellipsoidal statistical BGK model and Burnett equations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei

    2018-06-01

    A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK model has a flexible Prandtl number. For the discrete ES-BGK model at the Burnett level, two kinds of discrete velocity models are introduced and the relations between the nonequilibrium quantities and the viscous stress and heat flux at the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function from the macroscopic quantities and their spatial derivatives. The recovery scheme works not only for discrete Boltzmann simulations but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.

  4. TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, S. F.

    1994-01-01

    The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. 
It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
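The rollback behaviour at the heart of the Time Warp mechanism can be sketched roughly as follows. This toy omits anti-message annihilation, GVT-based commitment, and re-execution of rolled-back events, all of which TWOS implements; it only shows state saving and restoration when a straggler message arrives:

```python
class TimeWarpLP:
    """Toy Time Warp logical process: optimistically applies messages,
    checkpoints its state, and rolls back to the last checkpoint at or
    before a straggler's timestamp. Illustrative sketch only."""

    def __init__(self):
        self.lvt = 0.0                    # local virtual time
        self.state = 0
        self.saved = [(0.0, 0)]           # (lvt, state) checkpoints

    def receive(self, ts, delta):
        if ts < self.lvt:                 # straggler: roll back
            while self.saved[-1][0] > ts:
                self.saved.pop()
            self.lvt, self.state = self.saved[-1]
        self.lvt = ts
        self.state += delta
        self.saved.append((self.lvt, self.state))

lp = TimeWarpLP()
lp.receive(1.0, 5)
lp.receive(3.0, 7)
lp.receive(2.0, 1)   # straggler: undoes the optimistic ts=3.0 update
print(lp.lvt, lp.state)  # 2.0 6
```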

  5. Event-driven contrastive divergence for spiking neuromorphic systems.

    PubMed

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  7. Input-output identification of controlled discrete manufacturing systems

    NASA Astrophysics Data System (ADS)

    Estrada-Vargas, Ana Paula; López-Mellado, Ernesto; Lesage, Jean-Jacques

    2014-03-01

    The automated construction of discrete event models from observations of a system's external behaviour is addressed. This problem, often referred to as system identification, allows obtaining models of ill-known (or even unknown) systems. In this article, an identification method for discrete event systems (DESs) controlled by a programmable logic controller is presented. The method can process a large quantity of observed long sequences of input/output signals generated by the controller and yields an interpreted Petri net model describing the closed-loop behaviour of the automated DES. The proposed technique allows the identification of actual complex systems because it is sufficiently efficient and well adapted to cope with both the technological characteristics of industrial controllers and data collection requirements. Based on polynomial-time algorithms, the method is implemented as an efficient software tool which constructs and draws the model automatically; an overview of this tool is given through a case study dealing with an automated manufacturing system.

  8. Testing for Independence between Evolutionary Processes.

    PubMed

    Behdenna, Abdelkader; Pothier, Joël; Abby, Sophie S; Lambert, Amaury; Achaz, Guillaume

    2016-09-01

    Evolutionary events co-occurring along phylogenetic trees usually point to complex adaptive phenomena, sometimes implicating epistasis. While a number of methods have been developed to account for co-occurrence of events on the same internal or external branch of an evolutionary tree, there is a need to account for the larger diversity of possible relative positions of events in a tree. Here we propose a method to quantify to what extent two or more evolutionary events are associated on a phylogenetic tree. The method is applicable to any discrete character, like substitutions within a coding sequence or gains/losses of a biological function. Our method uses a general approach to statistically test for significant associations between events along the tree, which encompasses both events inseparable on the same branch, and events genealogically ordered on different branches. It assumes that the phylogeny and the mapping of branches are known without errors. We address this problem from the statistical viewpoint by a linear algebra representation of the localization of the evolutionary events on the tree. We compute the full probability distribution of the number of paired events occurring in the same branch or in different branches of the tree, under a null model of independence where each type of event occurs at a constant rate uniformly in the phylogenetic tree. The strengths and weaknesses of the method are assessed via simulations; we then apply the method to explore the loss of cell motility in intracellular pathogens. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Stability and Hopf bifurcation for a regulated logistic growth model with discrete and distributed delays

    NASA Astrophysics Data System (ADS)

    Fang, Shengle; Jiang, Minghui

    2009-12-01

    In this paper, we investigate the stability and Hopf bifurcation of a new regulated logistic growth model with discrete and distributed delays. By choosing the discrete delay τ as a bifurcation parameter, we prove that the system is locally asymptotically stable over a range of the delay and that a Hopf bifurcation occurs as τ crosses a critical value. Furthermore, an explicit algorithm for determining the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions is derived by the normal form theorem and a center manifold argument. Finally, an illustrative example is given to support the theoretical results.
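The discrete-delay part of such a model can be explored numerically with a simple Euler scheme. The sketch below integrates the classic Hutchinson delayed logistic equation dN/dt = r·N(t)·(1 − N(t − τ)/K), not the paper's exact regulated model, and all parameter values are illustrative:

```python
def delayed_logistic(r, K, tau, n0, dt=0.01, t_end=100.0):
    """Euler integration of the Hutchinson delayed logistic equation
    dN/dt = r*N(t)*(1 - N(t - tau)/K), with a constant history n0 on
    [-tau, 0]. Discrete-delay case only (no distributed delay)."""
    lag = max(1, int(round(tau / dt)))
    traj = [n0] * (lag + 1)          # constant history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        n, n_lag = traj[-1], traj[-1 - lag]
        traj.append(n + dt * r * n * (1.0 - n_lag / K))
    return traj

# For small r*tau the equilibrium N = K is stable and the trajectory
# settles there; larger delays produce the Hopf-type oscillations.
traj = delayed_logistic(r=0.1, K=1.0, tau=1.0, n0=0.1)
print(abs(traj[-1] - 1.0) < 0.05)  # True: settles near K
```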

  10. Discrete shaped strain sensors for intelligent structures

    NASA Technical Reports Server (NTRS)

    Andersson, Mark S.; Crawley, Edward F.

    1992-01-01

    Design of discrete, highly distributed sensor systems for intelligent structures has been studied. Data obtained indicate that discrete strain-averaging sensors satisfy the functional requirements for distributed sensing of intelligent structures. Bartlett and Gauss-Hanning sensors, in particular, provide good wavenumber characteristics while meeting the functional requirements. They are characterized by good rolloff rates and positive Fourier transforms for all wavenumbers. For the numerical integration schemes, Simpson's rule is considered to be very simple to implement and consistently provides accurate results for five sensors or more. It is shown that a sensor system that satisfies the functional requirements can be applied to a structure that supports mode shapes with purely sinusoidal curvature.

  11. Distributed-observer-based cooperative control for synchronization of linear discrete-time multi-agent systems.

    PubMed

    Liang, Hongjing; Zhang, Huaguang; Wang, Zhanshan

    2015-11-01

    This paper considers output synchronization of discrete-time multi-agent systems with directed communication topologies. The directed communication graph contains a spanning tree with the exosystem as its root. Distributed observer-based consensus protocols are proposed, based on the relative outputs of neighboring agents. A multi-step algorithm is presented to construct the observer-based protocols. In light of the discrete-time algebraic Riccati equation and the internal model principle, the synchronization problem is solved. Finally, a numerical simulation is provided to verify the effectiveness of the theoretical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  12. 40 CFR 1033.535 - Adjusting emission levels to account for infrequently regenerating aftertreatment devices.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... aftertreatment technology with infrequent regeneration events that occur during testing. See paragraph (e) of... adjust discrete-mode testing. For this section, “regeneration” means an intended event during which... section, “infrequent” refers to regeneration events that are expected to occur on average less than once...

  13. 40 CFR 1033.535 - Adjusting emission levels to account for infrequently regenerating aftertreatment devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... aftertreatment technology with infrequent regeneration events that occur during testing. See paragraph (e) of... adjust discrete-mode testing. For this section, “regeneration” means an intended event during which... section, “infrequent” refers to regeneration events that are expected to occur on average less than once...

  14. 40 CFR 1033.535 - Adjusting emission levels to account for infrequently regenerating aftertreatment devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... aftertreatment technology with infrequent regeneration events that occur during testing. See paragraph (e) of... adjust discrete-mode testing. For this section, “regeneration” means an intended event during which... section, “infrequent” refers to regeneration events that are expected to occur on average less than once...

  15. Timing Processes Are Correlated when Tasks Share a Salient Event

    ERIC Educational Resources Information Center

    Zelaznik, Howard N.; Rosenbaum, David A.

    2010-01-01

    Event timing is manifested when participants make discrete movements such as repeatedly tapping a key. Emergent timing is manifested when participants make continuous movements such as repeatedly drawing a circle. Here we pursued the possibility that providing salient perceptual events to mark the completion of time intervals could allow circle…

  16. SimEngine v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Hai D.

    2017-03-02

    SimEngine provides the core functionalities and components that are key to the development of discrete event simulation tools. These include events, activities, event queues, random number generators, and basic result tracking classes. SimEngine was designed for high performance, integrates seamlessly into any Microsoft .Net development environment, and provides a flexible API for simulation developers.
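The core components listed here (events, event queues, a clock, random number generators) can be sketched generically as follows. This is not SimEngine's actual .NET API, just a minimal illustration of a discrete event simulation core:

```python
import heapq
import random

class Simulator:
    """Minimal discrete-event simulation core: a clock, a priority
    queue of pending events, and an RNG. Generic sketch in the spirit
    of the components described, not SimEngine's API."""

    def __init__(self, seed=0):
        self.clock = 0.0
        self.queue = []                  # heap of (time, seq, callback)
        self.rng = random.Random(seed)
        self._seq = 0                    # tie-breaker for equal times

    def schedule(self, delay, callback):
        heapq.heappush(self.queue, (self.clock + delay, self._seq, callback))
        self._seq += 1

    def run(self, until=float("inf")):
        while self.queue and self.queue[0][0] <= until:
            self.clock, _, callback = heapq.heappop(self.queue)
            callback(self)

fired = []
sim = Simulator()
sim.schedule(2.0, lambda s: fired.append(("b", s.clock)))
sim.schedule(1.0, lambda s: fired.append(("a", s.clock)))
sim.run()
print(fired)  # [('a', 1.0), ('b', 2.0)]
```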

  17. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
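The geometric approximation of exponentially distributed firing times mentioned above can be illustrated with the standard discretization p = 1 − e^(−λΔt), a formula assumed here for illustration; its mean firing time converges to the exponential mean 1/λ as the step shrinks:

```python
import math

def geometric_firing_prob(rate, dt):
    """Per-step firing probability of the geometric distribution that
    approximates an exponential firing time with parameter `rate`
    over time steps of length `dt` (illustrative discretization)."""
    return 1.0 - math.exp(-rate * dt)

rate, dt = 2.0, 0.01
p = geometric_firing_prob(rate, dt)
mean_steps = 1.0 / p                 # mean of the geometric distribution
print(abs(mean_steps * dt - 1.0 / rate) < dt)  # True: close to 1/rate
```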

  18. Discrete-Slots Models of Visual Working-Memory Response Times

    PubMed Central

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956

  19. Growing degree day calculator

    USDA-ARS?s Scientific Manuscript database

    Degree-day benchmarks indicate discrete biological events in the development of insect pests. For the Sparganothis fruitworm, we have isolated all key development events and linked them to degree-day accumulations. These degree-day accumulations can greatly improve treatment timings for cranberry IP...
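Degree-day accumulation itself is commonly computed with the averaging method, max(0, (Tmin + Tmax)/2 − Tbase). The sketch below uses that standard formula; the base temperature and daily values are illustrative, not the Sparganothis thresholds referenced in the entry:

```python
def growing_degree_days(daily_min_max, base_temp):
    """Accumulate growing degree days with the averaging method:
    each day contributes max(0, (Tmin + Tmax)/2 - base_temp).
    Base temperatures are crop/pest specific; 10.0 is illustrative."""
    total = 0.0
    for tmin, tmax in daily_min_max:
        total += max(0.0, (tmin + tmax) / 2.0 - base_temp)
    return total

days = [(10.0, 20.0), (5.0, 9.0), (12.0, 24.0)]
print(growing_degree_days(days, base_temp=10.0))  # 5.0 + 0.0 + 8.0 = 13.0
```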

  20. Event-driven management algorithm of an Engineering documents circulation system

    NASA Astrophysics Data System (ADS)

    Kuzenkov, V.; Zebzeev, A.; Gromakov, E.

    2015-04-01

    A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automata models for describing project management algorithms are proposed, and the use of Petri nets for the dynamic design of projects is suggested.

  1. Event-Based Control Strategy for Mobile Robots in Wireless Environments.

    PubMed

    Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto

    2015-12-02

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to exchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked with classical navigation algorithms, such as wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution uses communication resources more efficiently than the classical discrete-time strategy while achieving the same accuracy.

  3. Hybrid stochastic simplifications for multiscale gene networks

    PubMed Central

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-01-01

    Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
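The exact jump-process simulation that these hybrid simplifications start from can be illustrated by a minimal Gillespie algorithm for a birth-death process; the reaction rates below are illustrative, not taken from the paper:

```python
import math
import random

def gillespie_birth_death(birth, death, x0, t_end, seed=1):
    """Exact stochastic simulation (Gillespie) of a birth-death
    process with constant birth propensity `birth` and death
    propensity `death * x`. Each iteration is one discrete jump."""
    rng = random.Random(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = birth, death * x
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)              # exponential waiting time
        if rng.random() * a0 < a1:
            x += 1                            # birth jump
        else:
            x -= 1                            # death jump
        path.append((t, x))
    return path

path = gillespie_birth_death(birth=5.0, death=0.5, x0=0, t_end=200.0)
states = [x for _, x in path]
print(0 < sum(states) / len(states) < 30)  # True: mean near birth/death = 10
```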

  4. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution gains importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model, in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
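Sampling from the Lindley distribution can be illustrated via its well-known two-component mixture representation: Exp(θ) with probability θ/(θ+1), otherwise a Gamma(2, θ) (a sum of two exponentials). The parameter value below is illustrative:

```python
import random

def lindley_sample(theta, rng):
    """Draw from the Lindley(theta) distribution, whose density is
    f(x) = theta^2/(theta+1) * (1+x) * exp(-theta*x), via its mixture
    form: Exp(theta) w.p. theta/(theta+1), else Gamma(2, theta)."""
    if rng.random() < theta / (theta + 1.0):
        return rng.expovariate(theta)
    return rng.expovariate(theta) + rng.expovariate(theta)

rng = random.Random(42)
theta = 2.0
xs = [lindley_sample(theta, rng) for _ in range(200000)]
mean = sum(xs) / len(xs)
exact = (theta + 2.0) / (theta * (theta + 1.0))   # Lindley mean = 2/3
print(abs(mean - exact) < 0.02)  # True: sample mean matches theory
```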

  5. Stochastic Evolution Equations Driven by Fractional Noises

    DTIC Science & Technology

    2016-11-28

    rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time-discrete numerical schemes...error fluctuations. We have studied time-discrete numerical schemes based on Taylor expansions for rough differential equations and for stochastic...variations of the time-discrete Taylor schemes for rough differential equations and for stochastic differential equations driven by fractional Brownian

  6. Correlation between discrete probability and reaction front propagation rate in heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Naine, Tarun Bharath; Gundawar, Manoj Kumar

    2017-09-01

    We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. In the absence of an analytical solution, and because the differences in ignition times of adjacent reaction cells follow non-Markovian statistics, the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulations. However, such simulations, which are based on Monte-Carlo methods, require several iterations of calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems, differing in the value of the shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from a discrete probability between two limits show excellent agreement with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations; the thermal wave propagation rates can now be calculated based only on the macroscopic entity of discrete probability.

  7. Self-narrowing of size distributions of nanostructures by nucleation antibunching

    NASA Astrophysics Data System (ADS)

    Glas, Frank; Dubrovskii, Vladimir G.

    2017-08-01

    We study theoretically the size distributions of ensembles of nanostructures fed from a nanosize mother phase or a nanocatalyst that contains a limited number of the growth species that form each nanostructure. In such systems, the nucleation probability decreases exponentially after each nucleation event, leading to the so-called nucleation antibunching. Specifically, this effect has been observed in individual nanowires grown in the vapor-liquid-solid mode and greatly affects their properties. By performing numerical simulations over large ensembles of nanostructures as well as developing two different analytical schemes (a discrete and a continuum approach), we show that nucleation antibunching completely suppresses fluctuation-induced broadening of the size distribution. As a result, the variance of the distribution saturates to a time-independent value instead of growing infinitely with time. The size distribution widths and shapes primarily depend on the two parameters describing the degree of antibunching and the nucleation delay required to initiate the growth. The resulting sub-Poissonian distributions are highly desirable for improving size homogeneity of nanowires. On a more general level, this unique self-narrowing effect is expected whenever the growth rate is regulated by a nanophase which is able to nucleate an island much faster than it is refilled from a surrounding macroscopic phase.

  8. Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.

    PubMed

    Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G

    2011-10-01

    In order to incorporate the uncertainty and impreciseness present in real-world event-driven asynchronous systems, fuzzy discrete event systems (DESs) (FDESs) have been proposed as an extension to crisp DESs. In this paper, first, we propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and the navigations have been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent the sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real time. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.

  9. Category representations in the brain are both discretely localized and widely distributed.

    PubMed

    Shehzad, Zarrar; McCarthy, Gregory

    2018-06-01

    Whether category information is discretely localized or represented widely in the brain remains a contentious issue. Initial functional MRI studies supported the localizationist perspective that category information is represented in discrete brain regions. More recent fMRI studies using machine learning pattern classification techniques provide evidence for widespread distributed representations. However, these latter studies have not typically accounted for shared information. Here, we find strong support for distributed representations when brain regions are considered separately. However, localized representations are revealed by using analytical methods that separate unique from shared information among brain regions. The distributed nature of shared information and the localized nature of unique information suggest that brain connectivity may encourage spreading of information but category-specific computations are carried out in distinct domain-specific regions. NEW & NOTEWORTHY Whether visual category information is localized in unique domain-specific brain regions or distributed in many domain-general brain regions is hotly contested. We resolve this debate by using multivariate analyses to parse functional MRI signals from different brain regions into unique and shared variance. Our findings support elements of both models and show information is initially localized and then shared among other regions leading to distributed representations being observed.

  10. Implementing system simulation of C3 systems using autonomous objects

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1987-01-01

    The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on fixed points in time as its basic frame of reference; asynchronous discrete-event simulation relies on fixed points in the model space. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach offered. The use of this approach to analyze the integration of sensor data associated with Command, Control, and Communication systems is discussed.

  11. Program For Simulation Of Trajectories And Events

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1992-01-01

    Universal Simulation Executive (USE) program accelerates and eases generation of application programs for numerical simulation of continuous trajectories interrupted by or containing discrete events. Developed for simulation of multiple spacecraft trajectories with such events as one spacecraft crossing the equator, two spacecraft meeting or parting, or the firing of a rocket engine. USE also simulates operation of a chemical batch-processing factory. Written in Ada.

  12. Estimating Multi-Level Discrete-Time Hazard Models Using Cross-Sectional Data: Neighborhood Effects on the Onset of Adolescent Cigarette Use.

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Brennan, Robert T.; Buka, Stephen L.

    2002-01-01

    Developed procedures for constructing a retrospective person-period data set from cross-sectional data and discusses modeling strategies for estimating multilevel discrete-time event history models. Applied the methods to the analysis of cigarette use by 1,979 urban adolescents. Results show the influence of the racial composition of the…
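The person-period construction itself is mechanical and can be sketched as follows; the field names and censoring convention are illustrative, not taken from the paper.

```python
def person_period(subjects, max_period):
    """Expand cross-sectional retrospective reports into person-period rows.

    `subjects` maps a subject id to the period of event onset, or None if
    the subject never experienced the event (censored at `max_period`).
    Each subject contributes one row per period at risk; `event` is 1 only
    in the onset period.
    """
    rows = []
    for sid, onset in subjects.items():
        last = onset if onset is not None else max_period
        for t in range(1, last + 1):
            rows.append({"id": sid, "period": t,
                         "event": int(onset == t)})
    return rows

# Subject "a" starts smoking in period 3; "b" is censored after period 5.
data = person_period({"a": 3, "b": None}, max_period=5)
```

A multilevel discrete-time hazard model is then a mixed-effects logistic regression of `event` on period indicators and covariates (such as neighborhood composition) over these rows.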

  13. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and Lamm equation modeling.

    PubMed

    Schuck, P

    2000-03-01

    A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.

  14. Improving Project Management with Simulation and Completion Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cates, Grant R.

    2004-01-01

    Despite the critical importance of project completion timeliness, management practices in place today remain inadequate for addressing the persistent problem of project completion tardiness. A major culprit in late projects is uncertainty, which most, if not all, projects are inherently subject to. This uncertainty resides in the estimates for activity durations, the occurrence of unplanned and unforeseen events, and the availability of critical resources. In response to this problem, this research developed a comprehensive simulation-based methodology for conducting quantitative project completion time risk analysis. It is called the Project Assessment by Simulation Technique (PAST). This new tool enables project stakeholders to visualize uncertainty or risk, i.e., the likelihood of their project completing late and the magnitude of the lateness, by providing them with a completion time distribution function of their projects. Discrete event simulation is used within PAST to determine the completion distribution function for the project of interest. The simulation is populated with both deterministic and stochastic elements. The deterministic inputs include planned project activities, precedence requirements, and resource requirements. The stochastic inputs include activity duration growth distributions, probabilities for events that can impact the project, and other dynamic constraints that may be placed upon project activities and milestones. These stochastic inputs are based upon past data from similar projects. The time for an entity to complete the simulation network, subject to both the deterministic and stochastic factors, represents the time to complete the project. Repeating the simulation hundreds or thousands of times allows one to create the project completion distribution function. The Project Assessment by Simulation Technique was demonstrated to be effective for the on-going NASA project to assemble the International Space Station.
Approximately $500 million per month is being spent on this project, which is scheduled to complete by 2010. NASA project stakeholders participated in determining and managing completion distribution functions produced from PAST. The first result was that project stakeholders improved project completion risk awareness. Secondly, using PAST, mitigation options were analyzed to improve project completion performance and reduce total project cost.
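The core of a PAST-style analysis, a stochastic activity network replicated many times to build an empirical completion distribution, can be sketched in a few lines. The network and the triangular duration-growth distributions below are hypothetical, not the ISS assembly model.

```python
import random

random.seed(1)

# Hypothetical activity network: predecessors plus a stochastic duration;
# triangular spread stands in for activity duration growth.
ACTIVITIES = {
    "design":    ([], lambda: random.triangular(8, 16, 10)),
    "fabricate": (["design"], lambda: random.triangular(20, 40, 25)),
    "software":  (["design"], lambda: random.triangular(15, 45, 20)),
    "integrate": (["fabricate", "software"], lambda: random.triangular(10, 20, 12)),
}
ORDER = ("design", "fabricate", "software", "integrate")  # topological order

def one_trial():
    """One pass through the network: finish = latest predecessor + duration."""
    finish = {}
    for name in ORDER:
        preds, sample = ACTIVITIES[name]
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + sample()
    return finish["integrate"]

# Replicate to build the empirical completion-time distribution function.
completions = sorted(one_trial() for _ in range(5000))
p50 = completions[len(completions) // 2]
p90 = completions[int(0.9 * len(completions))]
```

Reading off percentiles of `completions` gives stakeholders the likelihood and magnitude of lateness that the methodology above is built around.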

  15. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  16. A PLUG-AND-PLAY ARCHITECTURE FOR PROBABILISTIC PROGRAMMING

    DTIC Science & Technology

    2017-04-01

    programs that use discrete numerical distributions, but even then, the space of possible outcomes may be uncountable (as a solution can be infinite...also identify conditions guaranteeing that all possible outcomes are finite (and then the probability space is discrete ). 2.2.2 The PlogiQL...and not determined at runtime. Nevertheless, the PRAiSE team plans to extend their solution to support numerical (continuous or discrete

  17. Reconstruction of improvised explosive device blast loading to personnel in the open

    NASA Astrophysics Data System (ADS)

    Wiri, Suthee; Needham, Charles

    2016-05-01

    Significant advances in reconstructing attacks by improvised explosive devices (IEDs) and other blast events are reported. A high-fidelity three-dimensional computational fluid dynamics tool, called Second-order Hydrodynamic Automatic Mesh Refinement Code, was used for the analysis. Computer-aided design models for subjects or vehicles in the scene accurately represent geometries of objects in the blast field. A wide range of scenario types and blast exposure levels were reconstructed including free field blast, enclosed space of vehicle cabin, IED attack on a vehicle, buried charges, recoilless rifle operation, rocket-propelled grenade attack and missile attack with single subject or multiple subject exposure to pressure levels from ~27.6 kPa (~4 psi) to greater than 690 kPa (>100 psi). To create a full 3D pressure time-resolved reconstruction of a blast event for injury and blast exposure analysis, a combination of intelligence data and Blast Gauge data can be used to reconstruct an actual in-theatre blast event. The methodology to reconstruct an event and the "lessons learned" from multiple reconstructions in open space are presented. The analysis uses records of blast pressure at discrete points, and the output is a spatial and temporal blast load distribution for all personnel involved.

  18. Security of a discretely signaled continuous variable quantum key distribution protocol for high rate systems.

    PubMed

    Zhang, Zheshen; Voss, Paul L

    2009-07-06

    We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.

  19. Efficiency of synaptic transmission of single-photon events from rod photoreceptor to rod bipolar dendrite.

    PubMed

    Schein, Stan; Ahmad, Kareem M

    2006-11-01

    A rod transmits absorption of a single photon by what appears to be a small reduction in the small number of quanta of neurotransmitter (Q(count)) that it releases within the integration period (approximately 0.1 s) of a rod bipolar dendrite. Due to the quantal and stochastic nature of release, discrete distributions of Q(count) for darkness versus one isomerization of rhodopsin (R*) overlap. We suggested that release must be regular to narrow these distributions, reduce overlap, reduce the rate of false positives, and increase transmission efficiency (the fraction of R* events that are identified as light). Unsurprisingly, higher quantal release rates (Q(rates)) yield higher efficiencies. Focusing here on the effect of small changes in Q(rate), we find that a slightly higher Q(rate) yields greatly reduced efficiency, due to a necessarily fixed quantal-count threshold. To stabilize efficiency in the face of drift in Q(rate), the dendrite needs to regulate the biochemical realization of its quantal-count threshold with respect to its Q(count). These considerations reveal the mathematical role of calcium-based negative feedback and suggest a helpful role for spontaneous R*. In addition, to stabilize efficiency in the face of drift in degree of regularity, efficiency should be approximately 50%, similar to measurements.
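The overlap argument can be made concrete by comparing quantal-count distributions under a fixed count threshold. The numbers below are illustrative, not measured rod parameters, and "regular" release is stood in for by a binomial (sub-Poissonian) distribution with the same means.

```python
import math

def poisson_cdf(mu, k):
    """P(X <= k) for X ~ Poisson(mu), summing the pmf iteratively."""
    p, cdf = math.exp(-mu), 0.0
    for i in range(k + 1):
        cdf += p
        p *= mu / (i + 1)
    return cdf

def binom_cdf(n, p, k):
    """P(X <= k) for X ~ Binomial(n, p); a stand-in for regular release."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

MU_DARK, MU_LIGHT, THRESH = 40, 32, 36   # illustrative quantal counts

# Fully stochastic (Poisson) release: broad distributions overlap heavily.
fp_pois  = poisson_cdf(MU_DARK, THRESH)    # false positive: dark count <= threshold
eff_pois = poisson_cdf(MU_LIGHT, THRESH)   # efficiency: one-R* count <= threshold

# Regular (sub-Poissonian) release, modeled as Binomial with the same means.
N = 44
fp_reg  = binom_cdf(N, MU_DARK / N, THRESH)
eff_reg = binom_cdf(N, MU_LIGHT / N, THRESH)
```

Narrowing the dark distribution simultaneously lowers the false-positive rate and raises the detection efficiency at the same threshold, which is the effect of release regularity argued for above.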

  20. A hierarchical model of the evolution of cooperation in cultural systems.

    PubMed

    Savatsky, K; Reynolds, R G

    1989-01-01

    In this paper the following problem is addressed: "Under what conditions can a collection of individual organisms learn to cooperate when cooperation appears to outwardly degrade individual performance at the outset?" In order to attempt a theoretical solution to this problem, data from a real-world problem in anthropology are used. A distributed simulation model of this system was developed to assess its long-term behavior using an approach suggested by Zeigler (Zeigler, B.P., 1984, Multifaceted Modelling and Discrete Event Simulation (Academic Press, London)). The results of the simulation are used to show that although cooperation degrades the performance potential of each individual, it enhances the persistence of the individual's partial solution to the problem in certain situations.

  1. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.

  2. Event-triggered fault detection for a class of discrete-time linear systems using interval observers.

    PubMed

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-05-01

    This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the fault sensitivity are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegative conditions for the estimation error variables are presented with the aid of the slack matrix variables. This technique allows considering a more general Lyapunov function. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method.
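A scalar caricature of the interval-observer decision rule, omitting the event-triggering mechanism and the l1/H∞ design and using hypothetical parameters, might look like the following sketch.

```python
import random

def run(a, W, V, x0, fault, steps):
    """Scalar interval-observer FD sketch:
    x(k+1) = a*x(k) + w(k) + fault(k), |w| <= W;  y = x + v, |v| <= V.
    With a >= 0 the fault-free bounds propagate monotonically (the
    nonnegativity condition handled via slack variables in the paper)."""
    random.seed(3)
    x, lo, hi = x0, x0, x0
    alarms = []
    for k in range(steps):
        x = a * x + random.uniform(-W, W) + fault(k)
        lo = a * lo - W                  # fault-free lower bound
        hi = a * hi + W                  # fault-free upper bound
        y = x + random.uniform(-V, V)
        # FD decision: alarm when the measurement leaves the residual interval
        alarms.append(not (lo - V <= y <= hi + V))
    return alarms

no_fault   = run(0.8, 0.1, 0.05, 1.0, lambda k: 0.0, 50)
with_fault = run(0.8, 0.1, 0.05, 1.0, lambda k: 2.0 if k >= 25 else 0.0, 50)
```

By construction the fault-free trajectory never leaves the interval, so any alarm is attributable to the injected actuator fault.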

  3. Gauging events that influence students' perceptions of the medical school learning environment: findings from one institution.

    PubMed

    Shochet, Robert B; Colbert-Getz, Jorie M; Levine, Rachel B; Wright, Scott M

    2013-02-01

    The medical school learning environment (LE), encompassing the physical, social, and psychological context for learning, holds significant influence on students' professional development. Among these myriad experiences, the authors sought to gauge what students judge as influencing their perceptions of the LE. Fourth-year medical students at Johns Hopkins University participated in this cohort survey study before their 2010 graduation. A list of 55 events was iteratively revised and pilot-tested before being administered online. Responses assessed whether students experienced each event and, if so, the degree of impact on perceptions of the LE. A calculated mean impact score (MIS) provided a means to compare the relative impact of events. Of 119 students, 84 (71%) completed the survey. Students rated the overall LE as exceptional (29/84; 35%), good (36/84; 43%), fair (17/84; 20%), or poor (2/84; 2%). Eighty percent of students experienced at least 41 of the 55 events. MIS values ranged from 2.00 to 3.76 (highest possible: 4.00). Students rated positive events as having the highest impact. Few significant differences were found across gender, age, or surgical/nonsurgical specialty choice. MIS distributions differed between those perceiving the LE as exceptional or fair to poor for 22 (40%) of 55 events. This study attempted to identify the discrete events that medical students perceive as most affecting their sense of the LE. Knowing the phenomena that most strongly influence student perceptions can inform how settings, relationships, and interactions can be shaped for meaningful learning and professional formation.

  4. Modeling Temporal Processes in Early Spacecraft Design: Application of Discrete-Event Simulations for Darpa's F6 Program

    NASA Technical Reports Server (NTRS)

    Dubos, Gregory F.; Cornford, Steven

    2012-01-01

    While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can however become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedule as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".

  5. Characteristics of dayside auroral displays in relation to magnetospheric processes

    NASA Astrophysics Data System (ADS)

    Minow, Joseph I.

    1997-09-01

    The use of dayside aurorae as a ground-based monitor of magnetopause activity is explored in this thesis. The origin of diffuse (OI) 630.0 nm emissions in the midday auroral oval is considered first. Analysis of low-altitude satellite records of precipitating charged particles within the cusp shows an unstructured electron component that will produce a 0.5-1 kR 630.0 nm emission throughout the cusp. Distribution of the electrons is controlled by the requirement of charge neutrality in the cusp, predicting a diffuse 630.0 nm background even if the magnetosheath plasma is introduced into the magnetosphere in discrete merging events. Cusp electron fluxes also contain a structured component characterized by enhancements in the electron energy and energy flux over background values in narrow regions a few tens of kilometers in width. These structured features are identified as the source of the transient midday arcs. An auroral model is developed to study the morphology of (OI) 630.0 nm auroral emissions produced by the transient arcs. The model demonstrates that a diffuse 630.0 nm background emission is produced by transient arcs due to the long lifetime of the O(1D) state. Two sources of diffuse 630.0 nm background emissions exist in the cusp which may originate in discrete merging events. The conclusion is that persistent 630.0 nm emissions cannot be interpreted as prima facie evidence for continuous particle transport from the magnetosheath across the magnetopause boundary and into the polar cusp. The second subject that is considered is the analysis of temporal and spatial variations of the diffuse 557.7 nm pulsating aurora in relation to the 630.0 nm dominated transient aurora. Temporal variations at the poleward boundary of the diffuse 557.7 nm aurora correlate with the formation of the 630.0 nm transient aurorae suggesting that the two events are related. 
The character of the auroral variations is consistent with the behavior of particle populations reported during satellite observations of flux transfer events near the dayside magnetopause. An interpretation of the events in terms of impulsive magnetic reconnection yields a new observation that relates the poleward moving transient auroral arcs in the midday sector to the flux transfer events.

  6. Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models

    NASA Astrophysics Data System (ADS)

    Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido

    2016-06-01

    We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit of kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem in a set of two-level subsystems, we find a natural variable step size, that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
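For contrast, the standard event-driven kinetic Monte Carlo baseline that the paper builds on (Gillespie-style, with exponentially distributed waiting times) can be sketched for a single two-level system; the rates and the observable below are illustrative.

```python
import math
import random

random.seed(42)

def kmc_two_level(k_up, k_down, t_end):
    """Event-driven (Gillespie-style) simulation of one two-level system:
    time advances by exponentially distributed waiting times drawn from the
    current transition rate, with no fixed time step."""
    t, state, time_up = 0.0, 0, 0.0      # state 0 = lower, 1 = upper level
    while t < t_end:
        rate = k_up if state == 0 else k_down
        dt = -math.log(1.0 - random.random()) / rate  # exponential waiting time
        dt = min(dt, t_end - t)
        if state == 1:
            time_up += dt
        t += dt
        state ^= 1                        # execute the transition
    return time_up / t_end

# Detailed balance predicts an upper-level occupancy of k_up/(k_up + k_down).
frac = kmc_two_level(2.0, 1.0, 50000.0)
```

The proposed algorithm departs from this baseline precisely in not assuming a particular waiting-time distribution, which is what makes it applicable to fast and ultra-fast regimes.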

  7. Predicting System Accidents with Model Analysis During Hybrid Simulation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land D.; Throop, David R.

    2002-01-01

    Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability to system accidents of a system's design, operating procedures, and control software. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in systems of flow. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.

  8. Temporal variations in supraglacial debris distribution on Baltoro Glacier, Karakoram between 2001 and 2012

    NASA Astrophysics Data System (ADS)

    Gibson, Morgan J.; Glasser, Neil F.; Quincey, Duncan J.; Mayer, Christoph; Rowan, Ann V.; Irvine-Fynn, Tristram D. L.

    2017-10-01

    Distribution of supraglacial debris in a glacier system varies spatially and temporally due to differing rates of debris input, transport and deposition. Supraglacial debris distribution governs the thickness of a supraglacial debris layer, an important control on the amount of ablation that occurs under such a debris layer. Characterising supraglacial debris layer thickness on a glacier is therefore key to calculating ablation across a glacier surface. The spatial pattern of debris thickness on Baltoro Glacier has previously been calculated for one discrete point in time (2004) using satellite thermal data and an empirically based relationship between supraglacial debris layer thickness and debris surface temperature identified in the field. Here, the same empirically based relationship was applied to two further datasets (2001, 2012) to calculate debris layer thickness across Baltoro Glacier for three discrete points over an 11-year period (2001, 2004, 2012). Surface velocity and sediment flux were also calculated, as well as debris thickness change between periods. Using these outputs, alongside geomorphological maps of Baltoro Glacier produced for 2001, 2004 and 2012, spatiotemporal changes in debris distribution for a sub-decadal timescale were investigated. Sediment flux remained constant throughout the 11-year period. The greatest changes in debris thickness occurred along medial moraines, the locations of mass movement deposition and areas of interaction between tributary glaciers and the main glacier tongue. The study confirms the occurrence of spatiotemporal changes in supraglacial debris layer thickness on sub-decadal timescales, independent of variation in surface velocity. Instead, variations in rates of debris distribution are primarily attributed to the frequency and magnitude of mass movement events over decadal timescales, with climate, regional uplift and erosion rates expected to control debris inputs over centennial to millennial timescales. 
Inclusion of such spatiotemporal variations in debris thickness in distributed surface energy balance models would increase the accuracy of calculated ablation, leading to a more accurate simulation of glacier mass balance through time, and greater precision in quantification of the response of debris-covered glaciers to climatic change.

  9. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the
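The geometric Poisson (Polya-Aeppli) pmf tabulated in the report above can be computed with the Panjer recursion for compound Poisson sums; the parameter values here are arbitrary, chosen only to illustrate the recursion.

```python
import math

def geometric_poisson_pmf(lam, p, n_max):
    """Pmf of the geometric Poisson (Polya-Aeppli) distribution via the
    Panjer recursion for compound Poisson sums: a Poisson(lam) number of
    clusters, each of geometric size f_j = p*(1-p)**(j-1), j = 1, 2, ..."""
    f = [0.0] + [p * (1 - p) ** (j - 1) for j in range(1, n_max + 1)]
    g = [math.exp(-lam)]                 # P(S = 0) = P(no clusters)
    for n in range(1, n_max + 1):
        g.append(lam / n * sum(j * f[j] * g[n - j] for j in range(1, n + 1)))
    return g

pmf = geometric_poisson_pmf(lam=3.0, p=0.5, n_max=80)
mean = sum(n * q for n, q in enumerate(pmf))   # should approach lam/p = 6
```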

  10. Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing

    DTIC Science & Technology

    2012-12-14

    Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing Matei Zaharia Tathagata Das Haoyuan Li Timothy Hunter Scott Shenker Ion...SUBTITLE Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER...time. However, current programming models for distributed stream processing are relatively low-level often leaving the user to worry about consistency of

  11. Parallel and Distributed Computing Combinatorial Algorithms

    DTIC Science & Technology

    1993-10-01

    Discrete Math, 1991. In press. [55] L. Finkelstein, D. Kleitman, and T. Leighton. Applying the classification theorem for finite simple groups to minimize...Mathematics (in press). [74] L. Heath, T. Leighton, and A. Rosenberg. Comparing queue and stack layouts. SIAM J Discrete Math, 5(3):398-412, August 1992...line can meet only a few. DIMACS Series in Discrete Math and Theoretical Computer Science, 9, 1993. Publications, Presentations and Theses Supported

  12. Stochastic Adaptive Estimation and Control.

    DTIC Science & Technology

    1994-10-26

    Marcus, "Language Stability and Stabilizability of Discrete Event Dynamical Systems," SIAM Journal on Control and Optimization, 31, September 1993...in the hierarchical control of flexible manufacturing systems; in this problem, the model involves a hybrid process in continuous time whose state is...of the average cost control problem for discrete-time Markov processes. Our exposition covers from finite to Borel state and action spaces and

  13. Simulation studies of vestibular macular afferent-discharge patterns using a new, quasi-3-D finite volume method

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Linton, S. W.; Parnas, B. R.

    2000-01-01

    A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The afferent simulated had three calyces with processes. The number of processes and synapses, and direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in the shortest latency, while directional activation (proximal to distal and distal to proximal) yielded the most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and timing of synapse activation can affect discharge properties, as must the distal spread of voltage from the spike initiator. The finite volume method has generality and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.

  14. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs.

  15. Airlift Operation Modeling Using Discrete Event Simulation (DES)

    DTIC Science & Technology

    2009-12-01

    Java 2. Simkit... JRE Java Runtime Environment JVM Java Virtual Machine lbs Pounds LAM Load Allocation Mode LRM Landing Spot Reassignment Mode LEGO Listener Event... SOFTWARE DEVELOPMENT ENVIRONMENT The following are the software tools and development environment used for constructing the models. 1. Java

  16. Chemical Dosing and First-Order Kinetics

    ERIC Educational Resources Information Center

    Hladky, Paul W.

    2011-01-01

    College students encounter a variety of first-order phenomena in their mathematics and science courses. Introductory chemistry textbooks that discuss first-order processes, usually in conjunction with chemical kinetics or radioactive decay, stop at single, discrete dose events. Although single-dose situations are important, multiple-dose events,…

  17. Sparganothis fruitworm degree-day benchmarks provide key treatment timings for cranberry IPM

    USDA-ARS?s Scientific Manuscript database

    Degree-day benchmarks indicate discrete biological events in the development of insect pests. For the Sparganothis fruitworm, we have isolated all key development events and linked them to degree-day accumulations. These degree-day accumulations can greatly improve treatment timings for cranberry ...

  18. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
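
    The chance-constraint idea can be sketched as follows; the replication count, the acceptance rule, and the toy turnaround-time model below are illustrative assumptions, not the paper's formulation:

```python
import random

def chance_constraint_ok(simulate, threshold, alpha=0.95, reps=200, seed=1):
    """Accept a design iff the estimated probability that a terminating
    simulation's output stays within `threshold` is at least alpha,
    estimated from independent replications."""
    rng = random.Random(seed)
    hits = sum(simulate(rng) <= threshold for _ in range(reps))
    return hits / reps >= alpha

def turnaround(rng):
    """Hypothetical terminating model: total time of two sequential tasks."""
    return rng.expovariate(1 / 3.0) + rng.expovariate(1 / 2.0)

loose = chance_constraint_ok(turnaround, threshold=20.0)   # ample time budget
tight = chance_constraint_ok(turnaround, threshold=3.0)    # below the mean of 5
```

    Because each replication is random, a full treatment would also attach a confidence statement to the estimated probability rather than compare the point estimate alone.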

  19. Advances in Discrete-Event Simulation for MSL Command Validation

    NASA Technical Reports Server (NTRS)

    Patrikalakis, Alexander; O'Reilly, Taifun

    2013-01-01

    In the last five years, the discrete event simulator SEQuence GENerator (SEQGEN), developed at the Jet Propulsion Laboratory to plan deep-space missions, has greatly increased uplink operations capacity to deal with increasingly complicated missions. In this paper, we describe how the Mars Science Laboratory (MSL) project makes full use of an interpreted environment to simulate changes in more than fifty thousand flight software parameters and conditional command sequences, to predict the result of executing a conditional branch in a command sequence, and to warn users whenever one or more simulated spacecraft states change in an unexpected manner. Using these new SEQGEN features, operators plan more activities in one sol than ever before.

  20. A computational approach to extinction events in chemical reaction networks with discrete state spaces.

    PubMed

    Johnston, Matthew D

    2017-12-01

    Recent work of Johnston et al. has produced sufficient conditions on the structure of a chemical reaction network which guarantee that the corresponding discrete state space system exhibits an extinction event. The conditions consist of a series of systems of equalities and inequalities on the edges of a modified reaction network called a domination-expanded reaction network. In this paper, we present a computational implementation of these conditions written in Python and apply the program to examples drawn from the biochemical literature. We also run the program on 458 models from the European Bioinformatics Institute's BioModels Database and report our results. Copyright © 2017 Elsevier Inc. All rights reserved.
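
    For context, extinction in such discrete state space models can be observed directly with a standard Gillespie stochastic simulation; the toy network below is an assumption for illustration and is unrelated to the paper's structural conditions:

```python
import random

def ssa_extinction_time(x0, reactions, seed=7):
    """Gillespie stochastic simulation of a discrete state space reaction
    network, run until no reaction can fire (an extinction event for the
    species driving all propensities). A standard SSA sketch, not the
    structural conditions the paper's program checks."""
    rng = random.Random(seed)
    x, t = dict(x0), 0.0
    while True:
        props = [(rate(x), change) for rate, change in reactions]
        total = sum(p for p, _ in props)
        if total == 0.0:
            return t, x                       # extinction: all propensities vanish
        t += rng.expovariate(total)           # time to the next reaction
        r = rng.uniform(0.0, total)           # pick a reaction proportionally
        for p, change in props:
            r -= p
            if r <= 0.0:
                for species, delta in change.items():
                    x[species] += delta
                break

# toy network: X -> 0 (mass-action rate 1.0*X) and X + Y -> Y (rate 0.5*X*Y);
# every firing consumes an X, so X is guaranteed to go extinct
reactions = [
    (lambda s: 1.0 * s["X"], {"X": -1}),
    (lambda s: 0.5 * s["X"] * s["Y"], {"X": -1}),
]
t_ext, final = ssa_extinction_time({"X": 20, "Y": 3}, reactions)
```

    The paper's contribution is precisely that such simulation is unnecessary for its class of networks: the extinction guarantee follows from network structure alone.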

  1. Safety Discrete Event Models for Holonic Cyclic Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Ciufudean, Calin; Filote, Constantin

    In this paper the expression “holonic cyclic manufacturing systems” refers to complex assembly/disassembly systems or fork/join systems, kanban systems, and in general, to any discrete event system that transforms raw material and/or components into products. Such a system is said to be cyclic if it provides the same sequence of products indefinitely. This paper considers the scheduling of holonic cyclic manufacturing systems and describes a new approach using the Petri nets formalism. We propose an approach to frame the optimum schedule of holonic cyclic manufacturing systems in order to maximize throughput while minimizing work in process. We also propose an algorithm to verify the optimum schedule.

  2. High Fidelity Simulations of Large-Scale Wireless Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma; Benz, Zachary

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches, which fail to scale (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
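
    The event-queue core that DES tools share can be sketched in a few lines (a serial toy illustrating the principle only; it shows none of the synchronization machinery that makes the parallel, PDES case hard):

```python
import heapq

class DES:
    """Minimal discrete event simulation core: events are (time, action)
    pairs processed strictly in timestamp order from a priority queue."""
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0
    def schedule(self, delay, action):
        self._seq += 1          # tie-breaker keeps heap tuples comparable
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

log = []
sim = DES()
sim.schedule(2.0, lambda: log.append(("rx", sim.now)))   # scheduled first,
sim.schedule(1.0, lambda: log.append(("tx", sim.now)))   # but fires second
sim.run()
```

    In PDES, this single queue is partitioned across processors, and the hard problem, which the abstract alludes to, is keeping the distributed timestamps causally consistent without constant synchronization.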

  3. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

    In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision-making can benefit from not only known time-series relationships among measured signals but also from known event sequence relationships among generated events. This available knowledge at both time series and discrete event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing and results are discussed.

  4. Parameter Prediction of Hydraulic Fracture for Tight Reservoir Based on Micro-Seismic and History Matching

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Ma, Xiaopeng; Li, Yanlai; Wu, Haiyang; Cui, Chenyu; Zhang, Xiaoming; Zhang, Hao; Yao, Jun

    Hydraulic fracturing is an important measure for the development of tight reservoirs. In order to describe the distribution of hydraulic fractures, micro-seismic diagnostics were introduced into petroleum engineering. Micro-seismic events may reveal important information about the static characteristics of hydraulic fracturing. However, this method only reflects the distribution area of the hydraulic fractures and fails to provide specific parameters. Therefore, in this paper micro-seismic technology is integrated with history matching to predict the hydraulic fracture parameters. Micro-seismic source location is used to describe the basic shape of hydraulic fractures. After that, secondary modeling is used to calibrate the parameters of the hydraulic fractures by using a DFM (discrete fracture model) and history matching. In consideration of the fractal features of hydraulic fractures, a fractal fracture network model is established to evaluate this method in a numerical experiment. The results clearly show the effectiveness of the proposed approach for estimating the parameters of hydraulic fractures.

  5. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source. Therefore, there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and optical system. A design method for compensating for the light distribution of an extended source after the initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy with multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, discrete planar optics were designed for LED flash illumination. In addition, discrete planar optics for LED illumination were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle, and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.

  6. Identification of safety-critical events using kinematic vehicle data and the discrete fourier transform.

    PubMed

    Kluger, Robert; Smith, Brian L; Park, Hyungjun; Dailey, Daniel J

    2016-11-01

    Recent technological advances have made it both feasible and practical to identify unsafe driving behaviors using second-by-second trajectory data. Presented in this paper is a unique approach to detecting safety-critical events using vehicles' longitudinal accelerations. A Discrete Fourier Transform is used in combination with K-means clustering to flag patterns in the vehicles' accelerations in time-series that are likely to be crashes or near-crashes. The algorithm was able to detect roughly 78% of crashes and near-crashes (71 out of 91 validated events in the Naturalistic Driving Study data used), while generating about 1 false positive every 2.7 h. In addition to presenting the promising results, an implementation strategy is discussed and further research topics that can improve this method are suggested in the paper. Copyright © 2016 Elsevier Ltd. All rights reserved.
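
    A toy version of the DFT-plus-clustering pipeline (the window length, feature choice, deterministic initialisation, and synthetic accelerations are all assumptions for illustration; the paper's features and validation data are not reproduced here):

```python
import numpy as np

def dft_features(window, n_bins=5):
    """Magnitudes of the lowest DFT bins of an acceleration window."""
    return np.abs(np.fft.rfft(window))[:n_bins]

def kmeans2(X, iters=20):
    """Tiny 2-means with deterministic first/last-row initialisation."""
    centers = X[[0, -1]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

t = np.arange(64)
calm = [0.05 * np.sin(0.1 * t + p) for p in range(8)]    # routine driving windows
harsh = [3.0 * np.sin(0.3 * t + p) for p in range(4)]    # hypothetical hard events
X = np.array([dft_features(w) for w in calm + harsh])
labels = kmeans2(X)          # the high-energy windows separate cleanly
```

    The windows flagged as the high-energy cluster would then be reviewed as candidate crashes or near-crashes, which is where the false-positive rate quoted above comes from.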

  7. Event-triggered H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization.

    PubMed

    Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H

    2018-05-17

    This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save the limited communication resource. Second, a novel communication framework is employed by the logarithmic quantizer that quantifies and reduces the data transmission rate in the network, which apparently improves the communication efficiency of networks. Third, a stabilization criterion is derived based on a sufficient condition which guarantees a prescribed H∞ performance level in the estimation error system in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
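
    The two communication ingredients can be sketched independently of the paper's LMI analysis (the trigger rule's exact form and all parameter values below are illustrative assumptions):

```python
import math

def log_quantize(x, rho=0.8):
    """Logarithmic quantizer: snap x to the nearest level of the form
    +/- rho**n, so the relative quantization error is bounded."""
    if x == 0:
        return 0.0
    n = round(math.log(abs(x), rho))
    return math.copysign(rho ** n, x)

def event_triggered_transmissions(samples, sigma=0.3):
    """Send a (quantized) sample only when it deviates from the last
    transmitted value by more than sigma * |current sample|."""
    sent, last = [], None
    for k, x in enumerate(samples):
        if last is None or abs(x - last) > sigma * abs(x):
            last = log_quantize(x)
            sent.append((k, last))
    return sent

# only 2 of the 5 sensor samples end up being broadcast
sent = event_triggered_transmissions([1.0, 1.05, 1.1, 2.0, 2.05])
```

    The saving comes from the trigger suppressing near-duplicate samples, while the quantizer bounds how much precision each transmitted value actually carries.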

  8. Asynchronous discrete event schemes for PDEs

    NASA Astrophysics Data System (ADS)

    Stone, D.; Geiger, S.; Lord, G. J.

    2017-08-01

    A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through the faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection-diffusion equation and advection-diffusion-reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
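
    The core idea, quanta of mass crossing faces at flux-dependent event times, can be sketched in 1D (a toy with naive rescheduling of only the fired face; the published schemes' local-time bookkeeping and error control are more elaborate):

```python
import heapq

def async_diffusion(mass, D=1.0, dm=0.1, t_end=2.0):
    """Asynchronous event scheme for 1D diffusion: a quantum dm crosses a
    face when the accumulated flux through it amounts to dm, so each face
    carries its own event time. (Toy sketch: only the fired face is
    rescheduled, so neighbouring faces keep stale times on longer grids.)"""
    m = list(mass)
    def face_dt(i):                 # time for |flux| at face i to move dm
        flux = D * (m[i] - m[i + 1])
        return dm / abs(flux) if flux else float("inf")
    q = [(face_dt(i), i) for i in range(len(m) - 1)]
    heapq.heapify(q)
    while q:
        t, i = heapq.heappop(q)
        if t > t_end:
            break
        src, dst = (i, i + 1) if m[i] > m[i + 1] else (i + 1, i)
        m[src] -= dm
        m[dst] += dm
        heapq.heappush(q, (t + face_dt(i), i))
    return m

m = async_diffusion([1.0, 0.0])     # all mass starts in the left cell
```

    Note how the event rate falls as the cells equilibrate: the scheme is self-adaptive because steep gradients generate frequent events and flat regions generate almost none.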

  9. Order of events matter: comparing discrete models for optimal control of species augmentation.

    PubMed

    Bodine, Erin N; Gross, Louis J; Lenhart, Suzanne

    2012-01-01

    We investigate the optimal timing of augmentation of an endangered/threatened species population in a target region by moving individuals from a reserve or captive population. This is formulated as a discrete-time optimal control problem in which augmentation occurs once per time period over a fixed number of time periods. The population model assumes Allee effect growth functions in both the target and reserve populations, and the control objective is to maximize the target and reserve population sizes over the time horizon while accounting for the costs of augmentation. Two possible orders of events are considered for different life histories of the species relative to augmentation time: individuals are moved either before or after population growth occurs. The control variable is the proportion of the reserve population to be moved to the target population. We develop solutions and illustrate numerical results, which indicate circumstances under which optimal augmentation strategies depend upon the order of events.
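
    The order-of-events point is easy to reproduce with any Allee-type growth map; the map below and its parameters are illustrative, not the paper's model, and no optimization is performed:

```python
def allee_growth(n, r=0.2, K=100.0, a=10.0):
    """Discrete-time growth map with a (strong) Allee threshold a:
    populations below a decline, populations between a and K grow."""
    return max(0.0, n + r * n * (n / a - 1.0) * (1.0 - n / K))

def augment_then_grow(target, reserve, u):
    """Move a proportion u of the reserve, then let both populations grow."""
    moved = u * reserve
    return allee_growth(target + moved), allee_growth(reserve - moved)

def grow_then_augment(target, reserve, u):
    """Let both populations grow first, then move a proportion u."""
    t, s = allee_growth(target), allee_growth(reserve)
    moved = u * s
    return t + moved, s - moved

before = augment_then_grow(8.0, 50.0, 0.2)   # approx (20.36, 54.40)
after = grow_then_augment(8.0, 50.0, 0.2)    # approx (21.71, 56.00)
```

    With the target population starting below the Allee threshold, the two event orders produce different one-step outcomes from identical initial conditions, which is why the optimal control itself can depend on the order.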

  10. Discrete event simulation model of sudden cardiac death predicts high impact of preventive interventions.

    PubMed

    Andreev, Victor P; Head, Trajen; Johnson, Neil; Deo, Sapna K; Daunert, Sylvia; Goldschmidt-Clermont, Pascal J

    2013-01-01

    Sudden Cardiac Death (SCD) is responsible for at least 180,000 deaths a year and incurs an average cost of $286 billion annually in the United States alone. Herein, we present a novel discrete event simulation model of SCD, which quantifies the chains of events associated with the formation, growth, and rupture of atheroma plaques, and the subsequent formation of clots, thrombosis, and onset of arrhythmias within a population. The predictions generated by the model are in good agreement both with results obtained from pathological examinations on the frequencies of three major types of atheroma, and with epidemiological data on the prevalence and risk of SCD. These model predictions allow for the identification of interventions and, importantly, of the optimal time of intervention, leading to a high potential impact on SCD risk reduction (up to an 8-fold reduction in the number of SCDs in the population) as well as an increase in life expectancy.

  11. Photon strength and the low-energy enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiedeking, M.; Bernstein, L. A.; Bleuel, D. L.

    2014-08-14

    Several measurements in medium mass nuclei have reported a low-energy enhancement in the photon strength function. Although much effort has been invested in unraveling the mysteries of this effect, its physical origin is still not conclusively understood. Here, a completely model-independent experimental approach to investigate the existence of this enhancement is presented. The experiment was designed to study statistical feeding from the quasi-continuum (below the neutron separation energy) to individual low-lying discrete levels in {sup 95}Mo produced in the (d, p) reaction. A key aspect of successfully studying gamma decay from the region of high level density is the detection and extraction of correlated particle-gamma-gamma events, which was accomplished using an array of Clover HPGe detectors and large-area annular silicon detectors. The entrance-channel excitation energy of the residual nucleus produced in the reaction was inferred from the proton energies detected in the silicon detectors. Gating on gamma transitions originating from low-lying discrete levels specifies the state fed by statistical gamma rays. Any particle-gamma-gamma event in combination with specific energy sum requirements ensures a clean and unambiguous determination of the initial and final state of the observed gamma rays. With these requirements the statistical feeding to individual discrete levels is extracted on an event-by-event basis. The results are presented and compared to {sup 95}Mo photon strength function data measured at the University of Oslo.

  12. On the role of fluids in stick-slip dynamics of saturated granular fault gouge using a coupled computational fluid dynamics-discrete element approach

    NASA Astrophysics Data System (ADS)

    Dorostkar, Omid; Guyer, Robert A.; Johnson, Paul A.; Marone, Chris; Carmeliet, Jan

    2017-05-01

    The presence of fault gouge has considerable influence on the slip properties of tectonic faults and the physics of earthquake rupture. The presence of fluids within faults also plays a significant role in faulting and earthquake processes. In this paper, we present 3-D discrete element simulations of dry and fluid-saturated granular fault gouge and analyze the effect of fluids on stick-slip behavior. Fluid flow is modeled using computational fluid dynamics based on the Navier-Stokes equations for an incompressible fluid, modified to take into account the presence of particles. Analysis of a long train of slip events shows that (1) the drop in shear stress, (2) the compaction of the granular layer, and (3) the kinetic energy release during slip all increase in magnitude in the presence of an incompressible fluid, compared to dry conditions. We also observe that, on average, the recurrence interval between slip events is longer for fluid-saturated granular fault gouge than for the dry case. This observation is consistent with the occurrence of larger events in the presence of fluid. We find that the increase in kinetic energy during slip events under saturated conditions can be attributed to the increased fluid flow during slip. Our observations emphasize the important role that fluid flow and fluid-particle interactions play in tectonic fault zones and show in particular how discrete element method (DEM) models can help understand the hydromechanical processes that dictate fault slip.

  13. Causal Networks or Causal Islands? The Representation of Mechanisms and the Transitivity of Causal Judgment

    ERIC Educational Resources Information Center

    Johnson, Samuel G. B.; Ahn, Woo-kyoung

    2015-01-01

    Knowledge of mechanisms is critical for causal reasoning. We contrasted two possible organizations of causal knowledge--an interconnected causal "network," where events are causally connected without any boundaries delineating discrete mechanisms; or a set of disparate mechanisms--causal "islands"--such that events in different…

  14. DEVELOPMENT, EVALUATION AND APPLICATION OF AN AUTOMATED EVENT PRECIPITATION SAMPLER FOR NETWORK OPERATION

    EPA Science Inventory

    In 1993, the University of Michigan Air Quality Laboratory (UMAQL) designed a new wet-only precipitation collection system that was utilized in the Lake Michigan Loading Study. The collection system was designed to collect discrete mercury and trace element samples on an event b...

  15. Using Movement and Intentions to Understand Human Activity

    ERIC Educational Resources Information Center

    Zacks, Jeffrey M.; Kumar, Shawn; Abrams, Richard A.; Mehta, Ritesh

    2009-01-01

    During perception, people segment continuous activity into discrete events. They do so in part by monitoring changes in features of an ongoing activity. Characterizing these features is important for theories of event perception and may be helpful for designing information systems. The three experiments reported here asked whether the body…

  16. Sensitivity of diabetic retinopathy associated vision loss to screening interval in an agent-based/discrete event simulation model.

    PubMed

    Day, T Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann

    2014-04-01

    To examine the effect of changes to the screening interval on the incidence of vision loss in a simulated cohort of Veterans with diabetic retinopathy (DR). This simulation allows us to examine potential interventions without putting patients at risk. Simulated randomized controlled trial. We develop a hybrid agent-based/discrete event simulation which incorporates a population of simulated Veterans--using abstracted data from a retrospective cohort of real-world diabetic Veterans--with a discrete event simulation (DES) eye clinic at which they seek treatment for DR. We compare vision loss under varying screening policies in a simulated population of 5000 Veterans over 50 independent ten-year simulation runs for each group. Diabetic retinopathy associated vision loss increased as the screening interval was extended from one to five years (p<0.0001). This increase was concentrated in the third year of the screening interval (p<0.01). There was no increase in vision loss associated with increasing the screening interval from one year to two years (p=0.98). Increasing the screening interval for diabetic patients who have not yet developed diabetic retinopathy from 1 to 2 years appears safe, while increasing the interval to 3 years heightens the risk of vision loss. Published by Elsevier Ltd.

  17. Autonomous control of production networks using a pheromone approach

    NASA Astrophysics Data System (ADS)

    Armbruster, D.; de Beer, C.; Freitag, M.; Jagalski, T.; Ringhofer, C.

    2006-04-01

    The flow of parts through a production network is usually pre-planned by a central control system. Such central control fails in the presence of highly fluctuating demand and/or unforeseen disturbances. To manage such dynamic networks with low work-in-progress and short throughput times, an autonomous control approach is proposed. Autonomous control means a decentralized routing by the autonomous parts themselves. The parts' decisions are based on backward-propagated information about the throughput times of finished parts on different routes, so routes with shorter throughput times attract parts to use them again. This process can be compared to ants leaving pheromones on their way to communicate with following ants. The paper focuses on a mathematical description of such autonomously controlled production networks. A fluid model with limited service rates in a general network topology is derived and compared to a discrete-event simulation model. Whereas the discrete-event simulation of production networks is straightforward, the formulation of the addressed scenario in terms of a fluid model is challenging. Here it is shown how several problems in a fluid model formulation (e.g. discontinuities) can be handled mathematically. Finally, some simulation results for the pheromone-based control with both the discrete-event simulation model and the fluid model are presented for a time-dependent influx.
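
    The pheromone mechanism can be sketched as follows (the evaporation rate, the reinforcement by inverse throughput time, and the two-station example are illustrative assumptions, not the paper's model):

```python
import random

def choose_route(pheromone, rng):
    """Pick a route with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = rng.uniform(0.0, total)
    for route, level in pheromone.items():
        r -= level
        if r <= 0.0:
            return route
    return route

def update(pheromone, route, throughput_time, evaporation=0.2):
    """Evaporate all trails, then reinforce the finished part's route in
    inverse proportion to its measured throughput time."""
    for k in pheromone:
        pheromone[k] *= 1.0 - evaporation
    pheromone[route] += 1.0 / throughput_time

# feeding back measured throughput times: station "a" is consistently
# faster, so its trail ends up stronger and attracts more parts
ph = {"a": 1.0, "b": 1.0}
for _ in range(50):
    update(ph, "a", 2.0)    # a part finished via "a" in 2 time units
    update(ph, "b", 5.0)    # a part finished via "b" in 5 time units
```

    In a closed loop, `choose_route` would steer each arriving part using these trails; evaporation keeps the system responsive when a route's throughput time changes.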

  18. Planning and supervision of reactor defueling using discrete event techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, H.E.; Imel, G.R.; Houshyar, A.

    1995-12-31

    New fuel handling and conditioning activities for the defueling of the Experimental Breeder Reactor II are being performed at Argonne National Laboratory. Research is being conducted to investigate the use of discrete event simulation, analysis, and optimization techniques to plan, supervise, and perform these activities in such a way that productivity can be improved. The central idea is to characterize this defueling operation as a collection of interconnected serving cells, and then apply operational research techniques to identify appropriate planning schedules for given scenarios. In addition, a supervisory system is being developed to provide personnel with on-line information on the progress of fueling tasks and to suggest courses of action to accommodate changing operational conditions. This paper provides an introduction to the research in progress at ANL. In particular, it briefly describes the fuel handling configuration for reactor defueling at ANL, presenting the flow of material from the reactor grid to the interim storage location, and the expected contributions of this work. As an example of the studies being conducted for planning and supervision of fuel handling activities at ANL, an application of discrete event simulation techniques to evaluate different fuel cask transfer strategies is given at the end of the paper.

  19. On the Maximum-Weight Clique Problem.

    DTIC Science & Technology

    1985-06-01

    hypergeometric distribution", Discrete Math. 25, 285-287. CHVATAL, V. (1983), Linear Programming, W.H. Freeman, New York/San Francisco. COOK, S.A. (1971..., Annals Discrete Math. 21, 325-356. GROTSCHEL, M., L. LOVASZ, and A. SCHRIJVER (1984b), "Relaxations of Vertex Packing", Preprint No. 35..., de Grenoble. See also N. Sbihi, "Algorithme de recherche d'un stable de cardinalite maximum dans un graphe sans etoile", Discrete Math. 19 (1980), 53

  20. Effects of intermediate-scale wind disturbance on composition, structure, and succession in Quercus stands: Implications for natural disturbance-based silviculture

    Treesearch

    M.M. Cowden; J.L. Hart; C.J. Schweitzer; D.C. Dey

    2014-01-01

    Forest disturbances are discrete events in space and time that disrupt the biophysical environment and impart lasting legacies on forest composition and structure. Disturbances are often classified along a gradient of spatial extent and magnitude that ranges from catastrophic events where most of the overstory is removed to gap-scale events that modify local...

  1. Technology Development Risk Assessment for Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Godsell, Aga M.; Go, Susie

    2006-01-01

    A new approach for assessing development risk associated with technology development projects is presented. The method represents technology evolution in terms of sector-specific discrete development stages. A Monte Carlo simulation is used to generate development probability distributions based on statistical models of the discrete transitions. Development risk is derived from the resulting probability distributions and specific program requirements. Two sample cases are discussed to illustrate the approach, a single rocket engine development and a three-technology space transportation portfolio.
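
    The Monte Carlo over discrete stages can be sketched like this; the three stages and their per-period success probabilities are invented for illustration, not the paper's sector-specific statistics:

```python
import random

def development_time(stage_probs, rng):
    """Number of periods needed to pass every discrete development stage,
    where a stage is passed in a given period with its own probability."""
    periods = 0
    for p in stage_probs:
        while True:
            periods += 1
            if rng.random() < p:
                break
    return periods

rng = random.Random(42)
stages = [0.5, 0.3, 0.8]    # e.g. component, subsystem, system-level maturation
samples = [development_time(stages, rng) for _ in range(2000)]
# development risk against a hypothetical 12-period program requirement
risk = sum(s > 12 for s in samples) / len(samples)
```

    Comparing the sampled distribution of development times against a program requirement, as in the last line, is what turns the simulation output into a risk figure.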

  2. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  3. Parameter estimation problems for distributed systems using a multigrid method

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Dutt, Pravir

    1990-01-01

    The problem of estimating spatially varying coefficients of partial differential equations is considered from observation of the solution and of the right hand side of the equation. It is assumed that the observations are distributed in the domain and that enough observations are given. A method of discretization and an efficient multigrid method for solving the resulting discrete systems are described. Numerical results are presented for estimation of coefficients in an elliptic and a parabolic partial differential equation.

  4. A 24 km fiber-based discretely signaled continuous variable quantum key distribution system.

    PubMed

    Dinh Xuan, Quyen; Zhang, Zheshen; Voss, Paul L

    2009-12-21

    We report a continuous variable quantum key distribution system that achieves a final secure key rate of 3.45 kilobits/s over a distance of 24.2 km of optical fiber. The protocol uses discrete signaling and post-selection to improve reconciliation speed and quantifies security by means of quantum state tomography. Polarization multiplexing and a frequency translation scheme permit transmission of a continuous wave local oscillator and suppression of noise from guided acoustic wave Brillouin scattering by more than 27 dB.

  5. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
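The gamma-weighting idea can be sketched numerically: average a reflectance function over a gamma distribution of cloud optical depth instead of evaluating it at the mean value. The closed-form reflectance below is a hypothetical stand-in for the full discrete-ordinate two-stream solution, chosen only to show the averaging step.

```python
import math

def gamma_pdf(tau, k, theta):
    # Gamma distribution of cloud optical depth (shape k, scale theta)
    return tau**(k - 1) * math.exp(-tau / theta) / (math.gamma(k) * theta**k)

def reflectance(tau, g=0.85):
    # Toy two-stream-like reflectance; a hypothetical stand-in for the
    # actual discrete-ordinate solution
    return (1 - g) * tau / (1 + (1 - g) * tau)

def gamma_weighted_reflectance(k, theta, n=20000, tau_max=200.0):
    # Rectangle-rule average of R(tau) over the gamma distribution
    dt = tau_max / n
    acc = 0.0
    for i in range(1, n + 1):
        tau = i * dt
        acc += reflectance(tau) * gamma_pdf(tau, k, theta) * dt
    return acc

mean_tau = 10.0
k = 2.0                       # shape; variance = k * theta**2
theta = mean_tau / k
r_avg = gamma_weighted_reflectance(k, theta)
r_pp = reflectance(mean_tau)  # plane-parallel (homogeneous) value
# Because R is concave in tau, the gamma-weighted average is smaller than
# the plane-parallel value: the classic plane-parallel albedo bias.
```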

  6. Fluctuation reduction and enhanced confinement in the MST reversed-field pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Brett Edward

    1997-10-01

Plasmas with a factor of ≥3 improvement in energy confinement have been achieved in the MST reversed-field pinch (RFP). These plasmas occur spontaneously, following sawtooth crashes, subject to constraints on, e.g., toroidal magnetic field reversal and wall conditioning. Possible contributors to the improved confinement include a reduction of core-resonant, global magnetic fluctuations and a reduction of electrostatic fluctuations over the entire plasma edge. One feature of these plasmas is a region of strong E×B flow shear in the edge. Never before observed in conjunction with enhanced confinement in the RFP, such shear is common in enhanced confinement discharges in tokamaks and stellarators. Another feature of these plasmas is a new type of discrete dynamo event. Like sawtooth crashes, a common form of discrete dynamo, these events correspond to bursts of edge parallel current. The reduction of electrostatic fluctuations in these plasmas occurs within and beyond the region of strong E×B flow shear, similar to what is observed in tokamaks and stellarators. However, the reductions in the MST include fluctuations whose correlation lengths are larger than the width of the shear region. The reduction of the global magnetic fluctuations is most likely due to flattening of the μ = μ₀J·B/B² profile. Flattening can occur, e.g., due to the new type of discrete dynamo event and reduced edge resistivity. Enhanced confinement plasmas are also achieved in the MST when auxiliary current is applied to flatten the μ profile and reduce magnetic fluctuations. Unexpectedly, these plasmas also exhibit a region (broader than in the case above) of strong E×B flow shear in the edge, an edge-wide reduction of electrostatic fluctuations, and the new type of discrete dynamo event. Auxiliary current drive has historically been viewed as the principal route to fusion reactor viability for the RFP.

  7. Using discrete event computer simulation to improve patient flow in a Ghanaian acute care hospital.

    PubMed

    Best, Allyson M; Dixon, Cinnamon A; Kelton, W David; Lindsell, Christopher J; Ward, Michael J

    2014-08-01

Crowding and limited resources have increased the strain on acute care facilities and emergency departments worldwide. These problems are particularly prevalent in developing countries. Discrete event simulation is a computer-based tool that can be used to estimate how changes to complex health care delivery systems such as emergency departments will affect operational performance. Using this modality, our objective was to identify operational interventions that could potentially improve patient throughput of one acute care setting in a developing country. We developed a simulation model of acute care at a district level hospital in Ghana to test the effects of resource-neutral (e.g., modified staff start times and roles) and resource-additional (e.g., increased staff) operational interventions on patient throughput. Previously captured deidentified time-and-motion data from 487 acute care patients were used to develop and test the model. The primary outcome was the modeled effect of interventions on patient length of stay (LOS). The base-case (no change) scenario had a mean LOS of 292 minutes (95% confidence interval [CI], 291-293). In isolation, adding staffing, changing staff roles, and varying shift times did not affect overall patient LOS. Specifically, adding 2 registration workers, history takers, and physicians resulted in a 23.8-minute (95% CI, 22.3-25.3) LOS decrease. However, when shift start times were coordinated with patient arrival patterns, potential mean LOS was decreased by 96 minutes (95% CI, 94-98), and with the simultaneous combination of staff roles (registration and history taking), there was an overall mean LOS reduction of 152 minutes (95% CI, 150-154). Resource-neutral interventions identified through discrete event simulation modeling have the potential to improve acute care throughput in this Ghanaian municipal hospital.
Discrete event simulation offers another approach to identifying potentially effective interventions to improve patient flow in emergency and acute care in resource-limited settings. Copyright © 2014 Elsevier Inc. All rights reserved.
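A minimal event-queue sketch shows the mechanics of such a model: arrivals and departures are events on a time-ordered heap, and length of stay is measured per patient. The staffing numbers and exponential distributions here are illustrative, not taken from the Ghanaian study.

```python
import heapq
import random

def simulate_clinic(n_patients=500, n_staff=2, mean_arrival=3.0,
                    mean_service=5.0, seed=1):
    """Minimal discrete event simulation of a single-stage clinic.

    Events are (time, kind, patient_id) tuples kept on a heap; all
    parameters are illustrative, not from the Ghana study.
    """
    random.seed(seed)
    events = []                      # the future-event list
    t = 0.0
    for pid in range(n_patients):    # schedule all arrivals up front
        t += random.expovariate(1.0 / mean_arrival)
        heapq.heappush(events, (t, "arrive", pid))
    free_staff = n_staff
    queue = []                       # patients waiting for a staff member
    arrival_time = {}
    los = []
    while events:
        now, kind, pid = heapq.heappop(events)
        if kind == "arrive":
            arrival_time[pid] = now
            queue.append(pid)
        else:                        # "depart": a staff member frees up
            los.append(now - arrival_time[pid])
            free_staff += 1
        while free_staff and queue:  # start service for waiting patients
            free_staff -= 1
            nxt = queue.pop(0)
            svc = random.expovariate(1.0 / mean_service)
            heapq.heappush(events, (now + svc, "depart", nxt))
    return sum(los) / len(los)       # mean length of stay

mean_los = simulate_clinic()         # illustrative, in arbitrary minutes
```

Rerunning with different `n_staff` or shifted arrival streams is exactly the kind of what-if experiment the study performs at much higher fidelity.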

  8. Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.

    2010-01-01

NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process was developed to reuse models and model elements in other, less-detailed models.
The DES team continues to innovate and expand DES capabilities to address KSC's planning needs.

  9. Inhomogeneity Based Characterization of Distribution Patterns on the Plasma Membrane

    PubMed Central

    Paparelli, Laura; Corthout, Nikky; Wakefield, Devin L.; Sannerud, Ragna; Jovanovic-Talisman, Tijana; Annaert, Wim; Munck, Sebastian

    2016-01-01

    Cell surface protein and lipid molecules are organized in various patterns: randomly, along gradients, or clustered when segregated into discrete micro- and nano-domains. Their distribution is tightly coupled to events such as polarization, endocytosis, and intracellular signaling, but challenging to quantify using traditional techniques. Here we present a novel approach to quantify the distribution of plasma membrane proteins and lipids. This approach describes spatial patterns in degrees of inhomogeneity and incorporates an intensity-based correction to analyze images with a wide range of resolutions; we have termed it Quantitative Analysis of the Spatial distributions in Images using Mosaic segmentation and Dual parameter Optimization in Histograms (QuASIMoDOH). We tested its applicability using simulated microscopy images and images acquired by widefield microscopy, total internal reflection microscopy, structured illumination microscopy, and photoactivated localization microscopy. We validated QuASIMoDOH, successfully quantifying the distribution of protein and lipid molecules detected with several labeling techniques, in different cell model systems. We also used this method to characterize the reorganization of cell surface lipids in response to disrupted endosomal trafficking and to detect dynamic changes in the global and local organization of epidermal growth factor receptors across the cell surface. Our findings demonstrate that QuASIMoDOH can be used to assess protein and lipid patterns, quantifying distribution changes and spatial reorganization at the cell surface. An ImageJ/Fiji plugin of this analysis tool is provided. PMID:27603951

  10. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

In this paper, a technique for detecting faults in underground distribution systems is presented. The Discrete Wavelet Transform (DWT), based on traveling waves, is employed in order to detect the high frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations, and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
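The traveling-wave idea can be sketched with a level-1 Haar detail filter that locates the wavefront arrival sample in records from both line ends. The line length, wave speed, and sampling rate below are illustrative, and the paper's single-ended first-peak method is replaced here by the standard double-ended formula d = (L + v*(t_s - t_r))/2.

```python
def haar_details(signal):
    # Level-1 Haar DWT detail coefficients (high-pass half-band)
    return [(signal[2 * i] - signal[2 * i + 1]) / 2**0.5
            for i in range(len(signal) // 2)]

def arrival_index(signal):
    # The largest-magnitude detail coefficient marks the transient
    d = haar_details(signal)
    k = max(range(len(d)), key=lambda i: abs(d[i]))
    return 2 * k                     # sample index of the detected front

# Synthetic two-end measurement (all values illustrative)
L = 100_000.0                        # line length, m
v = 2.0e8                            # wave propagation speed, m/s
fs = 1.0e6                           # sampling rate, Hz
d_true = 30_000.0                    # fault 30 km from the sending end

def make_record(arrival_s, n=512):
    k = int(round(arrival_s * fs))
    rec = [0.0] * n
    rec[k] = 1.0                     # idealized traveling-wave front
    return rec

send = make_record(d_true / v)       # wave reaches sending end first
recv = make_record((L - d_true) / v)
t_s = arrival_index(send) / fs
t_r = arrival_index(recv) / fs
d_est = (L + v * (t_s - t_r)) / 2    # double-ended location formula
```

The location resolution is set by the sampling interval: each sample of timing error maps to v/(2*fs) metres of distance error.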

  11. Global stabilization analysis of inertial memristive recurrent neural networks with discrete and distributed delays.

    PubMed

    Wang, Leimin; Zeng, Zhigang; Ge, Ming-Feng; Hu, Junhao

    2018-05-02

This paper deals with the stabilization problem of memristive recurrent neural networks with inertial terms, discrete delays, and bounded and unbounded distributed delays. First, for inertial memristive recurrent neural networks (IMRNNs) with second-order derivatives of states, an appropriate variable substitution is invoked to transform IMRNNs into a first-order differential form. Then, based on nonsmooth analysis theory, several algebraic criteria are established for the global stabilizability of IMRNNs under the proposed feedback control, where the cases with both bounded and unbounded distributed delays are successfully addressed. Finally, the theoretical results are illustrated via numerical simulations. Copyright © 2018 Elsevier Ltd. All rights reserved.
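The variable substitution step can be illustrated on a scalar second-order system: introducing y = x' + ξx converts x'' = -a x' - b x + u into a first-order pair, which can then be integrated with explicit Euler. The scalar plant and parameter values are illustrative; the paper applies the same device to vector-valued IMRNNs with delays.

```python
def simulate_inertial(a=2.0, b=1.0, u=0.5, x0=3.0, v0=0.0,
                      dt=1e-3, steps=20000, xi=1.0):
    """Reduce x'' = -a*x' - b*x + u to first order via y = x' + xi*x.

    Then x' = y - xi*x and y' = x'' + xi*x' = (xi - a)*x' - b*x + u.
    A scalar stand-in for the substitution used for IMRNNs.
    """
    x, y = x0, v0 + xi * x0
    for _ in range(steps):
        xp = y - xi * x                  # recovered velocity x'
        yp = (xi - a) * xp - b * x + u   # from the original dynamics
        x += dt * xp
        y += dt * yp
    return x                             # approaches the equilibrium u / b
```

For the stable default parameters the state settles at u/b, which is what a stabilizability criterion certifies in the general vector case.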

  12. Design, implementation and application of distributed order PI control.

    PubMed

    Zhou, Fengyu; Zhao, Yang; Li, Yan; Chen, YangQuan

    2013-05-01

In this paper, a series of distributed order PI controller design methods are derived and applied to the robust control of wheeled service robots, which can tolerate more structural and parametric uncertainties than the corresponding fractional order PI control. A practical discrete incremental distributed order PI control strategy is proposed based on the discretization method and frequency criteria, which can be commonly used in many fields of fractional order systems, control, and signal processing. Besides, an auto-tuning strategy and the genetic algorithm are applied to the distributed order PI control as well. A number of experimental results are provided to show the advantages and distinguishing features of the discussed methods. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Evolution of damage during deformation in porous granular materials (Louis Néel Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Main, Ian

    2014-05-01

    'Crackling noise' occurs in a wide variety of systems that respond to external forcing in an intermittent way, leading to sudden bursts of energy release similar to those heard when crunching up a piece of paper or listening to a fire. In mineral magnetism ('Barkhausen') crackling noise occurs due to sudden changes in the size and orientation of microscopic ferromagnetic domains when the external magnetic field is changed. In rock physics sudden changes in internal stress associated with microscopically brittle failure events lead to acoustic emissions that can be recorded on the sample boundary, and used to infer the state of internal damage. Crackling noise is inherently stochastic, but the population of events often exhibits remarkably robust scaling properties, in terms of the source area, duration, energy, and in the waiting time between events. Here I describe how these scaling properties emerge and evolve spontaneously in a fully-dynamic discrete element model of sedimentary rocks subject to uniaxial compression at a constant strain rate. The discrete elements have structural disorder similar to that of a real rock, and this is the only source of heterogeneity. Despite the stationary loading and the lack of any time-dependent weakening processes, the results are all characterized by emergent power law distributions over a broad range of scales, in agreement with experimental observation. As deformation evolves, the scaling exponents change systematically in a way that is similar to the evolution of damage in experiments on real sedimentary rocks. The potential for real-time failure forecasting is examined by using synthetic and real data from laboratory tests and prior to volcanic eruptions. 
The combination of non-linearity and an irreducible stochastic component leads to significant variations in the precision and accuracy of the forecast failure time, leading to a significant proportion of 'false alarms' (forecast too early) and 'missed events' (forecast too late), as well as over-optimistic assessments of forecasting power and quality when the failure time is known (the 'benefit of hindsight'). The evolution becomes progressively more complex, and the forecasting power diminishes, in going from ideal synthetics to controlled laboratory tests to open natural systems at larger scales in space and time.
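The power-law scaling of event populations described above is typically quantified with a maximum-likelihood exponent estimate. A sketch on synthetic Pareto-distributed event sizes (the exponent and sample size are arbitrary):

```python
import math
import random

def powerlaw_sample(alpha, x_min, n, seed=0):
    # Inverse-CDF sampling from p(x) proportional to x**(-alpha), x >= x_min
    rng = random.Random(seed)
    return [x_min * (1 - rng.random()) ** (-1.0 / (alpha - 1))
            for _ in range(n)]

def mle_exponent(xs, x_min):
    # Maximum-likelihood exponent for a continuous power law
    # (the Hill estimator): alpha = 1 + n / sum(ln(x_i / x_min))
    tail = [x for x in xs if x >= x_min]
    return 1 + len(tail) / sum(math.log(x / x_min) for x in tail)

events = powerlaw_sample(alpha=1.8, x_min=1.0, n=50000)
alpha_hat = mle_exponent(events, x_min=1.0)   # close to the true 1.8
```

Tracking how such an exponent drifts as deformation proceeds is one way to operationalize the "scaling exponents change systematically" observation.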

  14. Hydrological disposition of flash flood and debris flows events in an Alpine watershed in Austria

    NASA Astrophysics Data System (ADS)

    Prenner, David; Kaitna, Roland; Mostbauer, Karin; Hrachowitz, Markus

    2017-04-01

Debris flows and flash floods with intensive bedload transport represent severe hazards in the Alpine environment of Austria. For neither of these processes are explicit rainfall thresholds available, even for specific regions. This may be due to insufficient data on the temporal and spatial variation of precipitation, but probably also to variations in the geomorphic and hydrological disposition of a watershed to produce such processes in the course of a rainfall event. In this contribution we investigate the importance of the hydrological system state for triggering debris flows and flash floods in the Ill/Suggadin watershed (500 km2), Austria, by analyzing the effects of dynamics in system state variables such as soil moisture, snow pack, or ground water level. The analysis is based on a semi-distributed conceptual rainfall-runoff model that spatially discretizes the watershed according to the available precipitation observations, elevation, topographic considerations, and land cover. Input data are available from six weather stations on a daily basis reaching back to 1947. A Thiessen polygon decomposition results in six individual precipitation zones with a maximum area of about 130 km2. Elevation-specific behavior of temperature and precipitation is captured through an elevation-resolved computation every 200 m. Spatial heterogeneity is considered by distinct hydrological response units for bare rock, forest, grassland, and riparian zone. To reduce numerical smearing in the hydrological results, the implicit Euler scheme was used to discretize the balance equations. For model calibration we utilized runoff hydrographs, snow cover data, as well as prior parameter and process constraints. The obtained hydrological output variables are linked to documented flash flood and debris flow events by means of a multivariate logistic regression.
We present a summary of the daily hydrological disposition toward flash floods or debris flows in each precipitation zone of the Ill/Suggadin region over almost 65 years. Furthermore, we provide an interpretation of the observed hydrological trigger patterns and show a frequency ranking. The outcomes of this study should lead to improved forecasting and differentiation of the trigger conditions leading to debris flows and flash floods.

  15. Geant4-DNA track-structure simulations for gold nanoparticles: The importance of electron discrete models in nanometer volumes.

    PubMed

    Sakata, Dousatsu; Kyriakou, Ioanna; Okada, Shogo; Tran, Hoang N; Lampe, Nathanael; Guatelli, Susanna; Bordage, Marie-Claude; Ivanchenko, Vladimir; Murakami, Koichi; Sasaki, Takashi; Emfietzoglou, Dimitris; Incerti, Sebastien

    2018-05-01

    Gold nanoparticles (GNPs) are known to enhance the absorbed dose in their vicinity following photon-based irradiation. To investigate the therapeutic effectiveness of GNPs, previous Monte Carlo simulation studies have explored GNP dose enhancement using mostly condensed-history models. However, in general, such models are suitable for macroscopic volumes and for electron energies above a few hundred electron volts. We have recently developed, for the Geant4-DNA extension of the Geant4 Monte Carlo simulation toolkit, discrete physics models for electron transport in gold which include the description of the full atomic de-excitation cascade. These models allow event-by-event simulation of electron tracks in gold down to 10 eV. The present work describes how such specialized physics models impact simulation-based studies on GNP-radioenhancement in a context of x-ray radiotherapy. The new discrete physics models are compared to the Geant4 Penelope and Livermore condensed-history models, which are being widely used for simulation-based NP radioenhancement studies. An ad hoc Geant4 simulation application has been developed to calculate the absorbed dose in liquid water around a GNP and its radioenhancement, caused by secondary particles emitted from the GNP itself, when irradiated with a monoenergetic electron beam. The effect of the new physics models is also quantified in the calculation of secondary particle spectra, when originating in the GNP and when exiting from it. The new physics models show similar backscattering coefficients with the existing Geant4 Livermore and Penelope models in large volumes for 100 keV incident electrons. However, in submicron sized volumes, only the discrete models describe the high backscattering that should still be present around GNPs at these length scales. 
Sizeable differences (mostly above a factor of 2) are also found in the radial distribution of absorbed dose and secondary particles between the new and the existing Geant4 models. The degree to which these differences are due to intrinsic limitations of the condensed-history models or to differences in the underlying scattering cross sections requires further investigation. Improved physics models for gold are necessary to better model the impact of GNPs in radiotherapy via Monte Carlo simulations. We implemented discrete electron transport models for gold in Geant4 that are applicable down to 10 eV, including modeling of the full de-excitation cascade. It is demonstrated that the new models have a significant positive impact on particle transport simulations in gold volumes with submicron dimensions compared to the existing Livermore and Penelope condensed-history models of Geant4. © 2018 American Association of Physicists in Medicine.

  16. Core discrete event simulation model for the evaluation of health care technologies in major depressive disorder.

    PubMed

    Vataire, Anne-Lise; Aballéa, Samuel; Antonanzas, Fernando; Roijen, Leona Hakkaart-van; Lam, Raymond W; McCrone, Paul; Persson, Ulf; Toumi, Mondher

    2014-03-01

    A review of existing economic models in major depressive disorder (MDD) highlighted the need for models with longer time horizons that also account for heterogeneity in treatment pathways between patients. A core discrete event simulation model was developed to estimate health and cost outcomes associated with alternative treatment strategies. This model simulated short- and long-term clinical events (partial response, remission, relapse, recovery, and recurrence), adverse events, and treatment changes (titration, switch, addition, and discontinuation) over up to 5 years. Several treatment pathways were defined on the basis of fictitious antidepressants with three levels of efficacy, tolerability, and price (low, medium, and high) from first line to third line. The model was populated with input data from the literature for the UK setting. Model outputs include time in different health states, quality-adjusted life-years (QALYs), and costs from National Health Service and societal perspectives. The codes are open source. Predicted costs and QALYs from this model are within the range of results from previous economic evaluations. The largest cost components from the payer perspective were physician visits and hospitalizations. Key parameters driving the predicted costs and QALYs were utility values, effectiveness, and frequency of physician visits. Differences in QALYs and costs between two strategies with different effectiveness increased approximately twofold when the time horizon increased from 1 to 5 years. The discrete event simulation model can provide a more comprehensive evaluation of different therapeutic options in MDD, compared with existing Markov models, and can be used to compare a wide range of health care technologies in various groups of patients with MDD. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. A low noise discrete velocity method for the Boltzmann equation with quantized rotational and vibrational energy

    NASA Astrophysics Data System (ADS)

    Clarke, Peter; Varghese, Philip; Goldstein, David

    2018-01-01

    A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A full quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method to numerical solutions of the Jeans and Landau-Teller equations, and to direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.

  18. Petri nets as a modeling tool for discrete concurrent tasks of the human operator. [describing sequential and parallel demands on human operators

    NASA Technical Reports Server (NTRS)

    Schumacher, W.; Geiser, G.

    1978-01-01

The basic concepts of Petri nets are reviewed, as well as their application as the fundamental model of technical systems with concurrent discrete events, such as hardware systems and software models of computers. The use of Petri nets is proposed for modeling the human operator dealing with concurrent discrete tasks. Their properties useful in modeling the human operator are discussed and practical examples are given. By means of an experimental investigation of binary concurrent tasks presented in a serial manner, the representation of human behavior by Petri nets is demonstrated.
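A Petri net's token-game semantics fits in a few lines: a transition is enabled when its input places hold enough tokens, and firing moves tokens from inputs to outputs. The two-task/one-operator net below is a hypothetical illustration of the concurrency constraint the paper models, not an example from it.

```python
def enabled(marking, transition):
    # A transition is enabled when every input place holds enough tokens
    return all(marking[p] >= w for p, w in transition["in"].items())

def fire(marking, transition):
    # Consume input tokens and produce output tokens (new marking returned)
    m = dict(marking)
    for p, w in transition["in"].items():
        m[p] -= w
    for p, w in transition["out"].items():
        m[p] = m.get(p, 0) + w
    return m

# Two concurrent operator tasks competing for one operator token
# (place and transition names are illustrative)
marking = {"task_a_ready": 1, "task_b_ready": 1, "operator": 1,
           "a_done": 0, "b_done": 0}
start_a = {"in": {"task_a_ready": 1, "operator": 1}, "out": {"a_done": 1}}
start_b = {"in": {"task_b_ready": 1, "operator": 1}, "out": {"b_done": 1}}

m1 = fire(marking, start_a)   # the operator takes task A first;
                              # start_b is now blocked: single-operator
                              # mutual exclusion falls out of the token game
```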

  19. Hydrological Simulation of Flood Events At Large Basins Using Distributed Modelling

    NASA Astrophysics Data System (ADS)

    Vélez, J.; Vélez, I.; Puricelli, M.; Francés, F.

    Recent advances in technology allows to the scientist community advance in new pro- cedures in order to reduce the risk associated to flood events. A conceptual distributed model has been implemented to simulate the hydrological processes involved during floods. The model has been named TETIS. The basin is divided into rectangular cells, all of them connected according to the network drainage. The rainfall-runoff process is modelled using four linked tanks at each cell with different outflow relationships at each tank, which represent the ET, direct runoff, interflow and base flow, respectively. The routing along the channel network has been proposed using basin geomorpho- logic characteristics coupled to the cinematic wave procedure. The vertical movement along the cell is proposed using simple relationships based on soil properties as field capacity and the saturated hydraulic conductivities, which were previously obtained using land use, litology, edaphology and basin properties maps. The different vertical proccesses along the cell included are: capillar storage, infiltration, percolation and underground losses. Finally, snowmelting and reservoir routing has been included. TETIS has been implemented in the flood warning system of the Tagus River, with a basin of 59 200 km2. The time discretization of the input data is 15 minutes, and the cell size is 500x500 m. The basic parameter maps were estimated for the entire basin, and a calibration and validation processes were performed using some recorded events in the upper part of the basin. Calibration confirmed the initial parameter estimation. Additionally, the validation in time and space showed the robustness of these types of models

  20. Comparative study of lesions created by high-intensity focused ultrasound using sequential discrete and continuous scanning strategies.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing

    2013-03-01

Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally for two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were based on the combination of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and the bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed in the peak temperature distribution and the lesion boundaries as the spacing between adjacent exposure points increased. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries were produced, and the peak temperature values decreased significantly with increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.

  1. The Effect of Haptic Guidance on Learning a Hybrid Rhythmic-Discrete Motor Task.

    PubMed

    Marchal-Crespo, Laura; Bannwart, Mathias; Riener, Robert; Vallery, Heike

    2015-01-01

    Bouncing a ball with a racket is a hybrid rhythmic-discrete motor task, combining continuous rhythmic racket movements with discrete impact events. Rhythmicity is exceptionally important in motor learning, because it underlies fundamental movements such as walking. Studies suggested that rhythmic and discrete movements are governed by different control mechanisms at different levels of the Central Nervous System. The aim of this study is to evaluate the effect of fixed/fading haptic guidance on learning to bounce a ball to a desired apex in virtual reality with varying gravity. Changing gravity changes dominance of rhythmic versus discrete control: The higher the value of gravity, the more rhythmic the task; lower values reduce the bouncing frequency and increase dwell times, eventually leading to a repetitive discrete task that requires initiation and termination, resembling target-oriented reaching. Although motor learning in the ball-bouncing task with varying gravity has been studied, the effect of haptic guidance on learning such a hybrid rhythmic-discrete motor task has not been addressed. We performed an experiment with thirty healthy subjects and found that the most effective training condition depended on the degree of rhythmicity: Haptic guidance seems to hamper learning of continuous rhythmic tasks, but it seems to promote learning for repetitive tasks that resemble discrete movements.

  2. Incorporating discrete event simulation into quality improvement efforts in health care systems.

    PubMed

    Rutberg, Matthew Harris; Wenczel, Sharon; Devaney, John; Goldlust, Eric Jonathan; Day, Theodore Eugene

    2015-01-01

    Quality improvement (QI) efforts are an indispensable aspect of health care delivery, particularly in an environment of increasing financial and regulatory pressures. The ability to test predictions of proposed changes to flow, policy, staffing, and other process-level changes using discrete event simulation (DES) has shown significant promise and is well reported in the literature. This article describes how to incorporate DES into QI departments and programs in order to support QI efforts, develop high-fidelity simulation models, conduct experiments, make recommendations, and support adoption of results. The authors describe how DES-enabled QI teams can partner with clinical services and administration to plan, conduct, and sustain QI investigations. © 2013 by the American College of Medical Quality.

  3. DeMO: An Ontology for Discrete-event Modeling and Simulation.

    PubMed

    Silver, Gregory A; Miller, John A; Hybinette, Maria; Baramidze, Gregory; York, William S

    2011-09-01

    Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community.

  4. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  5. DeMO: An Ontology for Discrete-event Modeling and Simulation

    PubMed Central

    Silver, Gregory A; Miller, John A; Hybinette, Maria; Baramidze, Gregory; York, William S

    2011-01-01

    Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community. PMID:22919114

  6. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
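The neuron-scaling idea described above can be sketched as one circular buffer per neuron of length equal to the maximum delay, so that storage grows with neurons rather than synapses. This is a minimal illustrative sketch, not the authors' algorithm; all names are invented here.

```python
class RingBufferScheduler:
    """Schedule delayed synaptic events with memory that scales with the
    number of neurons (times the maximum delay), not the number of synapses.
    Illustrative sketch only; structure and names are not from the paper."""

    def __init__(self, n_neurons, max_delay):
        self.max_delay = max_delay
        # One circular buffer of accumulated input per neuron.
        self.buffers = [[0.0] * (max_delay + 1) for _ in range(n_neurons)]
        self.t = 0  # current slot index

    def schedule(self, target, delay, weight):
        """Queue a synaptic event for `target` to arrive `delay` steps ahead."""
        assert 1 <= delay <= self.max_delay
        slot = (self.t + delay) % (self.max_delay + 1)
        self.buffers[target][slot] += weight  # coincident events merge in place

    def advance(self):
        """Deliver all events due at the current time step, then move on."""
        due = [buf[self.t] for buf in self.buffers]
        for buf in self.buffers:
            buf[self.t] = 0.0
        self.t = (self.t + 1) % (self.max_delay + 1)
        return due
```

Because events with the same target and arrival time are summed into a single slot, the scheduler's footprint is fixed at O(neurons x max_delay) regardless of how many synapses fire.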

  7. The Foggy EUV Corona and Coronal Heating by MHD Waves from Explosive Reconnection Events

    NASA Technical Reports Server (NTRS)

    Moore, Ron L.; Cirtain, Jonathan W.; Falconer, David A.

    2008-01-01

In 0.5 arcsec/pixel TRACE coronal EUV images, the corona rooted in active regions that are at the limb and are not flaring is seen to consist of (1) a complex array of discrete loops and plumes embedded in (2) a diffuse ambient component that shows no fine structure and gradually fades with height. For each of two not-flaring active regions, we found that the diffuse component (1) is approximately isothermal and hydrostatic and (2) emits well over half of the total EUV luminosity of the active-region corona. Here, from a TRACE Fe XII coronal image of another not-flaring active region, the large sunspot active region AR 10652 when it was at the west limb on 30 July 2004, we separate the diffuse component from the discrete loop component by spatial filtering, and find that the diffuse component has about 60% of the total luminosity. If, under much higher spatial resolution than that of TRACE (e.g., the 0.1 arcsec/pixel resolution of the Hi-C sounding-rocket experiment proposed by J. W. Cirtain et al.), most of the diffuse component remains diffuse rather than being resolved into very narrow loops and plumes, this will raise the possibility that the EUV corona in active regions consists of two basically different but comparably luminous components: one being the set of discrete bright loops and plumes and the other being a truly diffuse component filling the space between the discrete loops and plumes. This dichotomy would imply that there are two different but comparably powerful coronal heating mechanisms operating in active regions, one for the distinct loops and plumes and another for the diffuse component.
We present a scenario in which (1) each discrete bright loop or plume is a flux tube that was recently reconnected in a burst of reconnection, and (2) the diffuse component is heated by MHD waves that are generated by these reconnection events and by other fine-scale explosive reconnection events, most of which occur in and below the base of the corona where they are seen as UV explosive events, EUV blinkers, and type II spicules. These MHD waves propagate across field lines and dissipate, heating the plasma in the field between the bright loops and plumes.

  8. Assessing the Utility of an Event-Step ASMD Model by Analysis of Surface Combatant Shared Self-Defense

    DTIC Science & Technology

    2001-09-01

    Oriented Discrete Event Simulation,” Master’s Thesis in Operations Research, Naval Postgraduate School Monterey, CA, 1996. 12. Arntzen , A., “Software...Dependent Hit Probabilities”, Naval Research Logistics, Vol. 31, pp. 363-371, 1984. 3 Arntzen , A., “Software Components for Air Defense Planning

  9. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    PubMed

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

In this paper, a synthesized design of a fault-detection filter and a fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of randomly occurring deception attacks. To ensure that the fault-detection residual is sensitive only to faults while remaining robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems, and the unknown disturbances are removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound of the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented to improve the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator, and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault detection is guaranteed when the event condition is triggered.
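The basic idea of event-triggered transmission is that a sensor transmits only when its measurement has changed sufficiently since the last transmission. The following is a generic sketch of that idea under a simple absolute-difference trigger; the paper's actual trigger condition and covariance-based filter gains are not reproduced here.

```python
def event_triggered_send(measurements, threshold):
    """Event-triggered transmission sketch: a sensor sends a measurement only
    when it differs enough from the last transmitted value, reducing radio
    use at the cost of aperiodic data at the remote estimator.
    Generic illustration; the trigger rule is an assumption, not the paper's."""
    sent = []
    last = None
    for k, y in enumerate(measurements):
        if last is None or abs(y - last) > threshold:
            sent.append((k, y))   # transmit (time stamp, value)
            last = y
    return sent
```

For example, with the toy sequence [0, 0.1, 0.5, 0.55, 1.2] and threshold 0.3, only samples at steps 0, 2, and 4 are transmitted.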

  10. A Bayesian model for time-to-event data with informative censoring

    PubMed Central

    Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo

    2012-01-01

Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative, and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate an informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where the odds of having an event during the follow-up interval (tk−1,tk], conditional on being at risk at tk−1, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event between subjects from a missing-data pattern and the observed subjects for each interval. The large number of sensitivity parameters is reduced by treating them as random and assuming they follow a log-normal distribution with prespecified mean and variance. We then vary the mean and variance to explore the sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of the sensitivity of inferences to departures from the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746

  11. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta (1 + active days, 1 + inactive days) distribution assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, a + active days , β + inactive days) assuming that p is randomly distributed as Beta(a, β) where the parameters a and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
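The Betabinomial probability mass function used above has the standard closed form P(K = k) = C(n, k) B(k + α, n − k + β) / B(α, β). A minimal standard-library sketch, with illustrative parameter values (the α, β and day counts below are examples, not the survey's estimates):

```python
import math

def log_beta(a, b):
    """log of the Beta function B(a, b) via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(k, n, alpha, beta):
    """P(K = k) for K ~ Betabinomial(n, alpha, beta):
    C(n, k) * B(k + alpha, n - k + beta) / B(alpha, beta)."""
    return math.exp(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        + log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta)
    )
```

For instance, a child observed active 4 days out of 6 under a uniform Beta(1, 1) prior has posterior Beta(5, 3), and the probability of meeting the guideline on all 7 of 7 days is betabinom_pmf(7, 7, 5, 3) ≈ 0.096.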

  12. How Does the Sparse Memory “Engram” Neurons Encode the Memory of a Spatial–Temporal Event?

    PubMed Central

    Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan

    2016-01-01

Episodic memory in human brain is not a fixed 2-D picture but a highly dynamic movie serial, integrating information at both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to the activities in discrete memory engram (trace) neurons within the dentate gyrus region of hippocampus and the layer 2/3 of neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Considering that a particular memory normally represents a natural event, which consists of information at both the temporal and spatial domains, it is unknown how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as the optogenetic-stimulus-induced recall of memory did not depend on the firing pattern of the memory traces, it is most likely that the spatial activation pattern, but not the temporal activation pattern, of the discrete memory trace neurons encodes the memory in the brain. How does the neural circuit convert the activities in the spatial domain into the temporal domain to reconstitute memory of a natural event? By reviewing the literature, here we present how the memory engram (trace) neurons are selected and consolidated in the brain. Then, we will discuss the main challenges in the memory trace theory. In the end, we will provide a plausible model of the memory trace cell network, underlying the conversion of neural activities between the spatial domain and the temporal domain. We will also discuss how the activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns. PMID:27601979

  13. How Does the Sparse Memory "Engram" Neurons Encode the Memory of a Spatial-Temporal Event?

    PubMed

    Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan

    2016-01-01

Episodic memory in human brain is not a fixed 2-D picture but a highly dynamic movie serial, integrating information at both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to the activities in discrete memory engram (trace) neurons within the dentate gyrus region of hippocampus and the layer 2/3 of neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Considering that a particular memory normally represents a natural event, which consists of information at both the temporal and spatial domains, it is unknown how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as the optogenetic-stimulus-induced recall of memory did not depend on the firing pattern of the memory traces, it is most likely that the spatial activation pattern, but not the temporal activation pattern, of the discrete memory trace neurons encodes the memory in the brain. How does the neural circuit convert the activities in the spatial domain into the temporal domain to reconstitute memory of a natural event? By reviewing the literature, here we present how the memory engram (trace) neurons are selected and consolidated in the brain. Then, we will discuss the main challenges in the memory trace theory. In the end, we will provide a plausible model of the memory trace cell network, underlying the conversion of neural activities between the spatial domain and the temporal domain. We will also discuss how the activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns.

  14. Fractional Programming for Communication Systems—Part II: Uplink Scheduling via Matching

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application to continuous optimization problems. In this Part II of the paper, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike the continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multi-cell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Further, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP; but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and optimization results.
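The quadratic transform from Part I replaces a ratio A(x)/B(x) with 2y*sqrt(A(x)) − y²B(x), alternating between the closed-form update y = sqrt(A)/B and a maximization over x. A toy scalar sketch (the objective A(x) = x, B(x) = x² + 1 is an invented example, not from the paper; its maximizer is x = 1 with value 0.5):

```python
import math

def quadratic_transform_max(n_iter=50, x0=3.0):
    """Maximize A(x)/B(x) with A(x) = x, B(x) = x^2 + 1 over x > 0 using the
    quadratic transform: alternate y = sqrt(A)/B with the closed-form
    maximizer of 2*y*sqrt(x) - y^2*(x^2 + 1), which is x = (2y)^(-2/3).
    Toy illustration; the paper applies the transform to multi-ratio
    scheduling problems with discrete variables."""
    x = x0
    for _ in range(n_iter):
        y = math.sqrt(x) / (x * x + 1)    # optimal y for fixed x
        x = (2 * y) ** (-2.0 / 3.0)       # optimal x for fixed y
    return x
```

Starting from x0 = 3, the iterates decrease monotonically toward the true maximizer x = 1, mirroring the monotone-convergence property of the transform.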

  15. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states; an event alphabet that portrays the actions that take a submachine from one state to another; an initial system state; a partial function that maps the current state and event alphabet to the next state; and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or by implementing a feedback DEDS controller (closed-loop control).
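The local-model tuple listed above (states, event alphabet, initial state, partial transition function, event durations) maps directly onto a small timed automaton. A minimal sketch, with an invented pick/place submachine as the example:

```python
class LocalModel:
    """A DEDS local model: a set of states, an event alphabet, an initial
    state, a partial transition function, and a duration per event.
    Minimal sketch; class and field names are illustrative only."""

    def __init__(self, states, events, initial, delta, duration):
        self.states = set(states)     # system states
        self.events = set(events)     # event alphabet
        self.state = initial          # current state, starting at `initial`
        self.delta = delta            # partial map: (state, event) -> state
        self.duration = duration      # map: event -> time required
        self.clock = 0.0

    def step(self, event):
        """Fire one event if enabled; advance state and the local clock."""
        key = (self.state, event)
        if key not in self.delta:     # partial function: event not enabled here
            raise ValueError(f"event {event!r} not enabled in state {self.state!r}")
        self.state = self.delta[key]
        self.clock += self.duration[event]
        return self.state, self.clock

# Hypothetical submachine: a robot arm that picks (2 s) and places (3 s).
arm = LocalModel(
    states={"idle", "holding"},
    events={"pick", "place"},
    initial="idle",
    delta={("idle", "pick"): "holding", ("holding", "place"): "idle"},
    duration={"pick": 2.0, "place": 3.0},
)
```

A global model in the sense of the abstract would run several such automata in parallel, with a supervisor restricting which events are enabled.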

  16. Regression Analysis of Mixed Recurrent-Event and Panel-Count Data with Additive Rate Models

    PubMed Central

    Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L.

    2015-01-01

Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007; Zhao et al., 2011). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013). In this paper, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study. PMID:25345405

  17. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    PubMed

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  18. Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem

    2015-06-01

Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approaches proposed by Mulvey et al. (1995), which utilize probable scenarios to capture the effect of ambiguous data in the case study. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies; some electricity distribution companies may not report clear and accurate statistics to the government, so a robust approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when there are uncertain input/output data.

  19. Hemolytic potential of hydrodynamic cavitation.

    PubMed

    Chambers, S D; Bartlett, R H; Ceccio, S L

    2000-08-01

The purpose of this study was to determine the hemolytic potentials of discrete bubble cavitation and attached cavitation. To generate controlled cavitation events, a venturi-geometry hydrodynamic device, called a Cavitation Susceptibility Meter (CSM), was constructed. A comparison between the hemolytic potential of discrete bubble cavitation and attached cavitation was investigated with a single-pass flow apparatus and a recirculating flow apparatus, both utilizing the CSM. An analytical model, based on spherical bubble dynamics, was developed for predicting the hemolysis caused by discrete bubble cavitation. Experimentally, discrete bubble cavitation did not correlate with a measurable increase in plasma-free hemoglobin (PFHb), as predicted by the analytical model. However, attached cavitation did result in significant PFHb generation. The rate of PFHb generation scaled inversely with the cavitation number at a constant flow rate, suggesting that the size of the attached cavity was the dominant hemolytic factor.

  20. Finding Snowmageddon: Detecting and quantifying northeastern U.S. snowstorms in a multi-decadal global climate ensemble

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.

    2017-12-01

The northeastern coast of the United States is particularly vulnerable to impacts from extratropical cyclones during winter months, which produce heavy precipitation, high winds, and coastal flooding. These impacts are amplified by the proximity of major population centers to common storm tracks and include risks to health and welfare, massive transportation disruption, lost spending productivity, power outages, and structural damage. Historically, understanding regional snowfall in climate models has generally centered around seasonal mean climatologies even though major impacts typically occur at the scales of hours to days. To quantify discrete snowstorms at the event level, we describe a new objective detection algorithm for gridded data based on the Regional Snowfall Index (RSI) produced by NOAA's National Centers for Environmental Information. The algorithm uses 6-hourly precipitation to collocate storm-integrated snowfall with population density to produce a distribution of snowstorms with societally relevant impacts. The algorithm is tested on the Community Earth System Model (CESM) Large Ensemble Project (LENS) data. Present-day distributions of snowfall events are well replicated within the ensemble. We discuss classification sensitivities to assumptions made in determining precipitation phase and snow water equivalent. We also explore projected reductions in mid-century and end-of-century snowstorms due to changes in snowfall rates and precipitation phase, as well as highlight potential improvements in storm representation from refined horizontal resolution in model simulations.
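The core step of collocating storm-integrated snowfall with population density can be sketched as a population-weighted exceedance sum over grid cells. This is a toy illustration of that weighting idea only; the thresholds and formula below are invented and are not NOAA's actual RSI definition.

```python
import numpy as np

def storm_impact_index(snowfall, population, thresholds=(10.0, 25.0)):
    """Toy RSI-style impact metric: for each snowfall threshold (cm, values
    chosen for illustration), add the fraction of the population living in
    grid cells whose storm-total snowfall meets that threshold.
    Sketch under stated assumptions, not the operational RSI algorithm."""
    snowfall = np.asarray(snowfall, dtype=float)
    population = np.asarray(population, dtype=float)
    index = 0.0
    for t in thresholds:
        exceed = snowfall >= t                       # cells over this threshold
        index += (population * exceed).sum() / max(population.sum(), 1.0)
    return index
```

A storm dumping heavy snow on dense cells thus scores higher than an equally large storm over empty terrain, which is the societally relevant distinction the abstract describes.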

  1. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    PubMed

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
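The central quantity above, the expected time spent in each state of a continuous-time Markov model, can be checked numerically by stepping a small matrix exponential over a fine grid. The sketch below is an approximation for illustration (the paper derives an exact closed form); the generator matrix in the example is invented.

```python
import numpy as np

def _expm(A, terms=25):
    """Matrix exponential by truncated Taylor series (fine for small ||A||)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def expected_occupancy(Q, p0, horizon, r=0.0, n_steps=2000):
    """Expected (discounted at rate r) time spent in each state of a
    continuous-time Markov model with generator Q over [0, horizon],
    starting from distribution p0. Numerical sketch, not the paper's
    exact solution; Q, p0, horizon, and r are user-supplied assumptions."""
    dt = horizon / n_steps
    step = _expm(Q * dt)                  # exact one-step transition matrix
    p = np.asarray(p0, dtype=float)
    occ = np.zeros_like(p)
    disc, decay = 1.0, np.exp(-r * dt)
    for _ in range(n_steps):
        occ += disc * p * dt              # time accrued per state this step
        p = p @ step
        disc *= decay
    return occ
```

With r = 0 the occupancies sum to the horizon length, a useful sanity check; a nonzero r reproduces the discounting of costs and health effects mentioned in the abstract.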

  2. Strong Ground Motion Prediction By Composite Source Model

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Irikura, K.; Zahradnik, J.

    2003-12-01

A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^(-2). The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled either as a finite or a point source; differences between these choices are shown. The final slip and duration of each subevent are related to its characteristic dimension, using constant stress-drop scaling. The absolute value of the subevents' stress drop is a free parameter. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model. An estimation of the subevents' stress drop is based on fitting empirical attenuation relations for PGA and PGV, as they represent robust information on strong ground motion caused by earthquakes, including both path and source effects. We use the 2000 M6.6 Western Tottori, Japan, earthquake as a validation event, providing a comparison between predicted and observed waveforms.
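The size distribution above, N(>R) ∝ R^(-2), corresponds to a probability density ∝ R^(-3) on a bounded radius range and can be sampled by inverse-CDF. A schematic sketch of generating subevent radii until their areas fill the target rupture area (the non-overlapping random placement on the fault is not reproduced):

```python
import math
import random

def draw_subevent_radii(target_area, r_min, r_max, seed=1):
    """Sample circular subevent radii so that the count of subevents larger
    than R scales as R^(-2) (density ~ R^(-3) on [r_min, r_max]), adding
    subevents until their summed area reaches the target event area.
    Illustrative sketch only; r_min, r_max, and the stopping rule are
    assumptions, and spatial placement is omitted."""
    rng = random.Random(seed)
    a, b = r_min ** -2, r_max ** -2       # CDF endpoints in 1/R^2 space
    radii, area = [], 0.0
    while area < target_area:
        u = rng.random()
        r = (a - u * (a - b)) ** -0.5     # inverse-CDF sample in [r_min, r_max]
        radii.append(r)
        area += math.pi * r * r
    return radii
```

Because small subevents dominate in number while large ones dominate in area, such catalogs naturally reproduce the mix of subevent sizes that constant stress-drop scaling then converts into slip and duration.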

  3. The calculation of force-free fields from discrete flux distributions. [for chromospheric magnetic fields

    NASA Technical Reports Server (NTRS)

    Sheeley, N. R., Jr.; Harvey, J. W.

    1975-01-01

    This paper presents particularly simple mathematical formulas for the calculation of force-free fields of constant alpha from the distribution of discrete sources on a flat surface. The advantage of these formulas lies in their physical simplicity and the fact that they can be easily used in practice to calculate the fields. The disadvantage is that they are limited to fields of 'sufficiently small alpha'. These formulas may be useful in the study of chromospheric magnetic fields by the comparison of high-resolution H-alpha photographs and photospheric magnetograms.

  4. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
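The SVD stage described above, decomposing a time-frequency energy density matrix into principal components, can be sketched in a few lines. This illustrates only the generic SVD step with an energy-fraction readout; the wavelet front end, the moment-based feature detection, and the TSVD variant from the abstract are not reproduced.

```python
import numpy as np

def svd_energy_features(tf_matrix, k=3):
    """Decompose a time-frequency energy-density matrix with the SVD and
    return its k leading components plus the fraction of total energy
    (squared singular values) each captures. Generic sketch, not the
    paper's processing chain; `k` is an arbitrary choice."""
    U, s, Vt = np.linalg.svd(np.asarray(tf_matrix, float), full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)      # normalized energy per component
    return U[:, :k], s[:k], Vt[:k], energy[:k]
```

For a distribution dominated by a single mode, nearly all the energy concentrates in the first singular component, which is the kind of compact feature concentration the abstract attributes to the TSVD.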

  5. Multilayer shallow water models with locally variable number of layers and semi-implicit time discretization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys

    2018-07-01

We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious features and again substantially reduces the computational cost in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.

  6. On the role of fluids in stick-slip dynamics of saturated granular fault gouge using a coupled computational fluid dynamics-discrete element approach: STICK-SLIP IN SATURATED FAULT GOUGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorostkar, Omid; Guyer, Robert A.; Johnson, Paul A.

    The presence of fault gouge has considerable influence on slip properties of tectonic faults and the physics of earthquake rupture. The presence of fluids within faults also plays a significant role in faulting and earthquake processes. In this study, we present 3-D discrete element simulations of dry and fluid-saturated granular fault gouge and analyze the effect of fluids on stick-slip behavior. Fluid flow is modeled using computational fluid dynamics based on the Navier-Stokes equations for an incompressible fluid and modified to take into account the presence of particles. Analysis of a long train of slip events shows that the (1) drop in shear stress, (2) compaction of the granular layer, and (3) kinetic energy release during slip all increase in magnitude in the presence of an incompressible fluid, compared to dry conditions. We also observe that on average, the recurrence interval between slip events is longer for fluid-saturated granular fault gouge compared to the dry case. This observation is consistent with the occurrence of larger events in the presence of fluid. It is found that the increase in kinetic energy during slip events for saturated conditions can be attributed to the increased fluid flow during slip. Finally, our observations emphasize the important role that fluid flow and fluid-particle interactions play in tectonic fault zones and show in particular how discrete element method (DEM) models can help understand the hydromechanical processes that dictate fault slip.

  7. On the role of fluids in stick-slip dynamics of saturated granular fault gouge using a coupled computational fluid dynamics-discrete element approach: STICK-SLIP IN SATURATED FAULT GOUGE

    DOE PAGES

    Dorostkar, Omid; Guyer, Robert A.; Johnson, Paul A.; ...

    2017-05-01

    The presence of fault gouge has considerable influence on slip properties of tectonic faults and the physics of earthquake rupture. The presence of fluids within faults also plays a significant role in faulting and earthquake processes. In this study, we present 3-D discrete element simulations of dry and fluid-saturated granular fault gouge and analyze the effect of fluids on stick-slip behavior. Fluid flow is modeled using computational fluid dynamics based on the Navier-Stokes equations for an incompressible fluid and modified to take into account the presence of particles. Analysis of a long train of slip events shows that the (1) drop in shear stress, (2) compaction of the granular layer, and (3) kinetic energy release during slip all increase in magnitude in the presence of an incompressible fluid, compared to dry conditions. We also observe that on average, the recurrence interval between slip events is longer for fluid-saturated granular fault gouge compared to the dry case. This observation is consistent with the occurrence of larger events in the presence of fluid. It is found that the increase in kinetic energy during slip events for saturated conditions can be attributed to the increased fluid flow during slip. Finally, our observations emphasize the important role that fluid flow and fluid-particle interactions play in tectonic fault zones and show in particular how discrete element method (DEM) models can help understand the hydromechanical processes that dictate fault slip.

  8. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for the discrete and continuous cases, using both a discrete random variable and a probability density function (PDF). We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
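
    A toy version of the discrete case can be sketched as projected gradient ascent on the Rényi entropy under the mass-conservation constraint (the speed-gradient method of the paper yields continuous-time dynamics; the step size and the choice α = 0.5 below are arbitrary). With only mass conservation, the limit distribution is uniform, which the iteration reproduces.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """H_alpha(p) = log(sum p_i^alpha) / (1 - alpha)."""
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def speed_gradient_step(p, alpha, lr=0.1):
    """Ascend the Renyi entropy, projected onto sum(p) = 1."""
    g = (alpha / (1.0 - alpha)) * p ** (alpha - 1) / np.sum(p ** alpha)
    g -= g.mean()                 # project out the mass-constraint direction
    p = np.clip(p + lr * g, 1e-12, None)
    return p / p.sum()
```

    At the uniform distribution the projected gradient vanishes, so it is the fixed point, and the entropy increases monotonically along the trajectory, mirroring the asymptotic convergence the abstract describes.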

  9. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for the discrete and continuous cases, using both a discrete random variable and a probability density function (PDF). We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  10. A Spectral Analysis of Discrete-Time Quantum Walks Related to the Birth and Death Chains

    NASA Astrophysics Data System (ADS)

    Ho, Choon-Lin; Ide, Yusuke; Konno, Norio; Segawa, Etsuo; Takumi, Kentaro

    2018-04-01

    In this paper, we consider a spectral analysis of discrete-time quantum walks on the path. For isospectral coin cases, we show that the time-averaged distribution and stationary distributions of the quantum walks are described by the pair of eigenvalues of the coins as well as the eigenvalues and eigenvectors of the corresponding random walks, which are usually referred to as birth and death chains. As an example of the results, we derive the time-averaged distribution of so-called Szegedy's walk, which is related to the Ehrenfest model. It is represented by the Krawtchouk polynomials, which are the eigenvectors of the model, and includes the arcsine law.
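
    For readers unfamiliar with such walks, the following is a minimal coined (Hadamard) discrete-time quantum walk on the line with a symmetric initial coin state. It is the standard textbook construction, not Szegedy's walk from the paper, but it shows the state evolution and the resulting position distribution.

```python
import numpy as np

def hadamard_walk(steps, n):
    """Coined quantum walk on positions -n..n; coin 0 moves left, 1 right.
    Returns the position probability distribution after `steps` steps."""
    psi = np.zeros((2 * n + 1, 2), dtype=complex)
    psi[n, 0] = 1 / np.sqrt(2)        # symmetric initial coin state
    psi[n, 1] = 1j / np.sqrt(2)       # (|L> + i|R>) / sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T               # coin flip at every position
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]  # left-movers shift left
        shifted[1:, 1] = psi[:-1, 1]  # right-movers shift right
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)
```

    Unlike the classical random walk, the distribution is bimodal with most probability far from the origin, which is why time-averaged distributions (as studied in the paper) are the natural stationary object.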

  11. Distributed mean curvature on a discrete manifold for Regge calculus

    NASA Astrophysics Data System (ADS)

    Conboye, Rory; Miller, Warner A.; Ray, Shannon

    2015-09-01

    The integrated mean curvature of a simplicial manifold is well understood in both Regge Calculus and Discrete Differential Geometry. However, a well motivated pointwise definition of curvature requires a careful choice of the volume over which to uniformly distribute the local integrated curvature. We show that hybrid cells formed using both the simplicial lattice and its circumcentric dual emerge as a remarkably natural structure for the distribution of this local integrated curvature. These hybrid cells form a complete tessellation of the simplicial manifold, contain a geometric orthonormal basis, and are also shown to give a pointwise mean curvature with a natural interpretation as the fractional rate of change of the normal vector.

  12. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
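
    The recommendation to compute distributional descriptives routinely can be sketched in a few lines. The particular indices below (moment-based skewness and kurtosis, unique-score count as a discreteness indicator, proportion at the maximum as a ceiling indicator) are common choices, not necessarily the authors' exact definitions.

```python
import numpy as np

def score_summary(scores):
    """Descriptive statistics for a test score distribution."""
    x = np.asarray(scores, dtype=float)
    z = (x - x.mean()) / x.std()             # population standardization
    return {
        "skewness": float((z ** 3).mean()),
        "kurtosis": float((z ** 4).mean()),  # normal distribution gives 3
        "n_unique": int(np.unique(x).size),  # discreteness indicator
        "ceiling": float(np.mean(x == x.max())),  # share at the maximum
    }
```

    Flagging distributions with extreme skewness, heavy discreteness, or a large ceiling share is exactly the kind of screening that would inform model selection before assuming normality.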

  13. Scattering in discrete random media with implications to propagation through rain. Ph.D. Thesis George Washington Univ., Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J., Jr.

    1977-01-01

    The multiple scattering effects on wave propagation through a volume of discrete scatterers were investigated. The mean field and intensity for a distribution of scatterers was developed using a discrete random media formulation, and second order series expansions for the mean field and total intensity derived for one-dimensional and three-dimensional configurations. The volume distribution results were shown to proceed directly from the one-dimensional results. The multiple scattering intensity expansion was compared to the classical single scattering intensity and the classical result was found to represent only the first three terms in the total intensity expansion. The Foldy approximation to the mean field was applied to develop the coherent intensity, and was found to exactly represent all coherent terms of the total intensity.

  14. Research on pyrolysis behavior of Camellia sinensis branches via the Discrete Distributed Activation Energy Model.

    PubMed

    Zhou, Bingliang; Zhou, Jianbin; Zhang, Qisheng

    2017-10-01

    This study investigates the pyrolysis behavior of Camellia sinensis branches using the Discrete Distributed Activation Energy Model (DAEM) and thermogravimetric experiments. The Discrete DAEM method is used to describe the pyrolysis process of Camellia sinensis branches as dominated by 12 characterized reactions. The decomposition mechanism of Camellia sinensis branches and the interactions among its components are observed, and the reaction at 350.77°C marks a significant boundary between the first and second reaction ranges. The pyrolysis process of Camellia sinensis branches at a heating rate of 10,000°C/min is predicted, providing valuable references for gasification or combustion. The relationships between four typical indexes and heating rates from 10 to 10,000°C/min are revealed. Copyright © 2017 Elsevier Ltd. All rights reserved.
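
    A discrete DAEM treats the fuel as a weighted set of independent first-order reactions, each with its own activation energy. The sketch below integrates such a model under a linear heating rate; the two-reaction energy set, pre-exponential factor, and heating rate are invented for illustration (the paper characterizes 12 reactions).

```python
import numpy as np

R_GAS = 8.314  # J/(mol*K)

def daem_conversion(energies_kj, weights, A=1e13, beta=10.0,
                    T0=300.0, T1=900.0, dT=0.1):
    """Overall conversion vs temperature for parallel first-order
    reactions under a linear heating rate beta (K/min)."""
    E = np.asarray(energies_kj, dtype=float) * 1e3
    w = np.asarray(weights, dtype=float)
    T = np.arange(T0, T1, dT)
    dt = dT / beta                       # minutes spent per dT step
    alpha = np.zeros_like(E)             # per-reaction conversion
    total = np.empty_like(T)
    for i, temp in enumerate(T):
        k = A * np.exp(-E / (R_GAS * temp))            # Arrhenius rate
        alpha = 1.0 - (1.0 - alpha) * np.exp(-k * dt)  # exact sub-step
        total[i] = w @ alpha
    return T, total
```

    Each activation energy produces its own decomposition stage, so a suitably chosen discrete energy set reproduces the multi-stage thermogravimetric curve the abstract describes.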

  15. The impact of interoperability of electronic health records on ambulatory physician practices: a discrete-event simulation study.

    PubMed

    Zhou, Yuan; Ancker, Jessica S; Upadhye, Mandar; McGeorge, Nicolette M; Guarrera, Theresa K; Hegde, Sudeep; Crane, Peter W; Fairbanks, Rollin J; Bisantz, Ann M; Kaushal, Rainu; Lin, Li

    2013-01-01

    The effect of health information technology (HIT) on efficiency and workload among clinical and nonclinical staff has been debated, with conflicting evidence about whether electronic health records (EHRs) increase or decrease effort. No work to date, however, has examined the effect of interoperability quantitatively using discrete event simulation techniques. The objective was to estimate the impact of EHR systems with various levels of interoperability on day-to-day tasks and operations of ambulatory physician offices. Interviews and observations were used to collect workflow data from 12 adult primary and specialty practices. A discrete event simulation model was constructed to represent patient flows and clinical and administrative tasks of physicians and staff members. High levels of EHR interoperability were associated with reduced time spent by providers on four tasks: preparing lab reports, requesting lab orders, prescribing medications, and writing referrals. The implementation of an EHR was associated with less time spent by administrators but more time spent by physicians, compared with time spent at paper-based practices. In addition, the presence of EHRs and of interoperability did not significantly affect the time usage of registered nurses or the total visit time and waiting time of patients. This paper suggests that the impact of using HIT on clinical and nonclinical staff work efficiency varies; overall, however, it appears to improve time efficiency more for administrators than for physicians and nurses.
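
    The core of any discrete event simulation of patient flow is an event queue ordered by time. A stripped-down, single-provider sketch (nothing like the full clinic model in the paper, whose task set and staffing are much richer) might look like:

```python
import heapq

def simulate_clinic(arrivals, service_times):
    """Single-provider FIFO queue: returns each patient's waiting time."""
    pq = []                          # events: (time, tie-breaker, kind, patient)
    for i, t in enumerate(arrivals):
        heapq.heappush(pq, (t, i, "arrive", i))
    seq = len(arrivals)
    busy_until = 0.0
    waiting, waits = [], {}
    while pq:
        now, _, kind, p = heapq.heappop(pq)
        if kind == "arrive":
            waiting.append((p, now))
        if now >= busy_until and waiting:     # provider free: start next
            q, arrived = waiting.pop(0)
            waits[q] = now - arrived
            busy_until = now + service_times[q]
            heapq.heappush(pq, (busy_until, seq, "depart", q))
            seq += 1
    return [waits[i] for i in range(len(arrivals))]
```

    For arrivals at t = 0, 5, 10 with 7-minute services, the second and third patients wait 2 and 4 minutes; the full model in the paper adds multiple staff roles and the EHR-dependent task durations.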

  16. Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.

    PubMed

    Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime

    2017-10-01

    Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, that enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess whether a DICE simulation provides results equivalent to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management, programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit, was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.

  17. Fractional System Identification: An Approach Using Continuous Order-Distributions

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Lorenzo, Carl F.

    1999-01-01

    This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.
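
    The least-squares idea can be sketched directly: discretize the order axis, build a regression matrix of (jω)^α terms evaluated at the measured frequencies, and solve for the weights. The synthetic data and two-term order set below are illustrative; the paper's formulation over continuous order-distributions is more general.

```python
import numpy as np

def fit_order_weights(omega, G, orders):
    """Least-squares weights c_k in G(jw) ~ sum_k c_k * (jw)**orders[k],
    given frequency-response samples G at frequencies omega."""
    jw = 1j * np.asarray(omega, dtype=float)[:, None]
    Phi = jw ** np.asarray(orders, dtype=float)[None, :]
    A = np.vstack([Phi.real, Phi.imag])      # stack into a real LS problem
    b = np.concatenate([G.real, G.imag])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```

    When the data truly lie in the span of the chosen orders, the fit recovers the weights exactly; with noisy data the same machinery gives the least-squares estimate of the discretized order-distribution.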

  18. Regression analysis of mixed recurrent-event and panel-count data with additive rate models.

    PubMed

    Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L

    2015-03-01

    Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study. © 2014, The International Biometric Society.

  19. Comparison of Timed Automata with Discrete Event Simulation for Modeling of Biomarker-Based Treatment Decisions: An Illustration for Metastatic Castration-Resistant Prostate Cancer.

    PubMed

    Degeling, Koen; Schivo, Stefano; Mehra, Niven; Koffijberg, Hendrik; Langerak, Rom; de Bono, Johann S; IJzerman, Maarten J

    2017-12-01

    With the advent of personalized medicine, the field of health economic modeling is being challenged and the use of patient-level dynamic modeling techniques might be required. To illustrate the usability of two such techniques, timed automata (TA) and discrete event simulation (DES), for modeling personalized treatment decisions. An early health technology assessment on the use of circulating tumor cells, compared with prostate-specific antigen and bone scintigraphy, to inform treatment decisions in metastatic castration-resistant prostate cancer was performed. Both modeling techniques were assessed quantitatively, in terms of intermediate outcomes (e.g., overtreatment) and health economic outcomes (e.g., net monetary benefit). Qualitatively, among others, model structure, agent interactions, data management (i.e., importing and exporting data), and model transparency were assessed. Both models yielded realistic and similar intermediate and health economic outcomes. Overtreatment was reduced by 6.99 and 7.02 weeks by applying circulating tumor cells as a response marker at a net monetary benefit of -€1033 and -€1104 for the TA model and the DES model, respectively. Software-specific differences were observed regarding data management features and the support for statistical distributions, which were considered better for the DES software. Regarding method-specific differences, interactions were modeled more straightforwardly using TA, benefiting from its compositional model structure. Both techniques prove suitable for modeling personalized treatment decisions, although DES would be preferred given the current software-specific limitations of TA. When these limitations are resolved, TA would be an interesting modeling alternative if interactions are key or its compositional structure is useful to manage multi-agent complex problems. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. Is it beneficial to increase the provision of thrombolysis? A discrete-event simulation model.

    PubMed

    Barton, M; McClean, S; Gillespie, J; Garg, L; Wilson, D; Fullerton, K

    2012-07-01

    Although thrombolysis has been licensed in the UK since 2003, it is still administered only to a small percentage of eligible patients. We consider the impact of increased investment in thrombolysis on acute stroke services, and the effect on quality of life. The concept is illustrated using data from the Northern Ireland Stroke Service. Retrospective study. We first present results of survival analysis utilizing length of stay (LOS) for discharge destinations, based on data from the Belfast City Hospital (BCH). None of these patients actually received thrombolysis, but from those who would have been eligible we created two initial groups, the first representing a scenario where they received thrombolysis and the second comprising those who did not. On the basis of the survival analysis, we created several subgroups based on discharge destination. We then developed a discrete event simulation (DES) model, where each group is a patient pathway within the simulation. Coxian phase-type distributions were used to model the group LOS. Various scenarios were explored, focusing on cost-effectiveness across hospital, community and social services had thrombolysis been administered to these patients, and the possible improvement in quality of life should the proportion of patients who are administered thrombolysis be increased. Our aim in simulating various scenarios for this historical group of patients is to assess what the cost-effectiveness of thrombolysis would have been under different scenarios; from this we can infer the likely cost-effectiveness of future policies. The cost of thrombolysis is offset by reductions in hospital, community rehabilitation and institutional care costs, with a corresponding improvement in quality of life. Our model suggests that provision of thrombolysis would produce a moderate overall improvement to the service, assuming current levels of funding.

  1. An experimental assessment of vehicle disturbance effects on migratory shorebirds

    USGS Publications Warehouse

    Tarr, Nathan M.; Simons, T.R.; Pollock, K.H.

    2010-01-01

    Off-road vehicle (ORV) traffic is one of several forms of disturbance thought to affect shorebirds at migration stopover sites. Attempts to measure disturbance effects on shorebird habitat use and behavior at stopover sites are difficult because ORV disturbance is frequently confounded with habitat and environmental factors. We used a before-after-control-impact experimental design to isolate effects of vehicle disturbance from shorebird responses to environmental and habitat factors. We manipulated disturbance levels within beach closures along South Core Banks, North Carolina, USA, and measured changes in shorebird abundance and location, as well as the activity of one focal species, the sanderling (Calidris alba), within paired control and impact plots. We applied a discrete treatment level of one flee-response-inducing event every 10 minutes on impact plots. We found that disturbance reduced total shorebird and black-bellied plover (Pluvialis squatarola) abundance and reduced relative use of microhabitat zones above the swash zone (wet sand and dry sand) by sanderlings, black-bellied plovers, willets (Tringa semipalmata), and total shorebirds. Sanderlings and total shorebirds increased use of the swash zone in response to vehicle disturbance. Disturbance reduced use of study plots by sanderlings for resting and increased sanderling activity, but we did not detect an effect of vehicle disturbance on sanderling foraging activity. We provide the first estimates of how a discrete level of disturbance affects shorebird distributions among ocean beach microhabitats. Our findings provide a standard to which managers can compare frequency and intensity of disturbance events at other shorebird stopover and roosting sites and indicate that limiting disturbance will contribute to use of a site by migratory shorebirds. © 2010 The Wildlife Society.

  2. ENSO Dynamics and Trends, AN Alternate View

    NASA Astrophysics Data System (ADS)

    Rojo Hernandez, J. D.; Lall, U.; Mesa, O. J.

    2017-12-01

    El Niño - Southern Oscillation (ENSO) is the most important inter-annual climate fluctuation on a planetary level, with great effects on the hydrological cycle, agriculture, ecosystems, health and society. This work demonstrates the use of Non-Homogeneous Hidden Markov Models (NHMM) to characterize ENSO using a set of discrete states with a variable transition probability matrix, based on the sea surface temperature anomaly (SSTA) data of the Kaplan Extended SST v2 between 120E-90W, 15N-15S from Jan-1856 to Dec-2016. ENSO spatial patterns, their temporal distribution, the transition probabilities between patterns and their temporal evolution are the main results of the NHMM applied to ENSO. The five "hidden" states found appear to represent the different "flavors" described in the literature: the Canonical El Niño, Central El Niño, a Neutral state, Central La Niña and the Canonical La Niña. Using the whole record length of the SSTA, it was possible to identify trends in the dynamical system, with a decrease in the probability of occurrence of cold events and a significant increase of warm events, in particular Central El Niño events, whose probability of occurrence has increased dramatically since 1960, coupled with increases in global temperature.

  3. Evaluating the Discrete Element Method as a Tool for Predicting the Seasonal Evolution of the MIZ

    DTIC Science & Technology

    2014-09-30

    distribution (Hopkins & Thorndike 2006). The DEM treats sea ice as a collection of discrete pieces of ice, thus affording the method certain...Annals of Glaciology, 33(1), 355-360. Hopkins, M. A., & Thorndike , A. S. (2006) Floe formation in Arctic sea ice. Journal of Geophysical Research

  4. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that an atypical sequence of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.

  5. The elementary events of Ca2+ release elicited by membrane depolarization in mammalian muscle.

    PubMed

    Csernoch, L; Zhou, J; Stern, M D; Brum, G; Ríos, E

    2004-05-15

    Cytosolic [Ca(2+)] transients elicited by voltage clamp depolarization were examined by confocal line scanning of rat skeletal muscle fibres. Ca(2+) sparks were observed in the fibres' membrane-permeabilized ends, but not in responses to voltage in the membrane-intact area. Elementary events of the depolarization-evoked response could be separated either at low voltages (near -50 mV) or at -20 mV in partially inactivated cells. These were of lower amplitude, narrower and of much longer duration than sparks, similar to 'lone embers' observed in the permeabilized segments. Their average amplitude was 0.19 and spatial half-width 1.3 microm. Other parameters depended on voltage. At -50 mV average duration was 111 ms and latency 185 ms. At -20 mV duration was 203 ms and latency 24 ms. Ca(2+) release current, calculated on an average of events, was nearly steady at 0.5-0.6 pA. Accordingly, simulations of the fluorescence event elicited by a subresolution source of 0.5 pA open for 100 ms had morphology similar to the experimental average. Because 0.5 pA is approximately the current measured for single RyR channels in physiological conditions, the elementary fluorescence events in rat muscle probably reflect opening of a single RyR channel. A reconstruction of cell-averaged release flux at -20 mV based on the observed distribution of latencies and calculated elementary release had qualitatively correct but slower kinetics than the release flux in prior whole-cell measurements. The qualitative agreement indicates that global Ca(2+) release flux results from summation of these discrete events. The quantitative discrepancies suggest that the partial inactivation strategy may lead to events of greater duration than those occurring physiologically in fully polarized cells.

  6. Analysis of Spattering Activity at Halema'uma'u in 2015

    NASA Astrophysics Data System (ADS)

    Mintz, Bianca G.

    The classical explosive basaltic eruption spectrum is traditionally defined by the following end member eruption styles: Hawaiian and Strombolian. The field use of high-speed cameras has enabled volcanologists to make improved quantifications and more accurate descriptions of these classical eruption styles and to quantify previously undecipherable activity (including activity on the basaltic eruption spectrum between the two defined end members). Explosive activity in 2015 at the free surface of the Halema'uma'u lava lake at Kilauea exhibited features of both sustained (Hawaiian) fountaining and transient (Strombolian) explosivity. Most of this activity is internally triggered by the rise of decoupled gas bubbles from below the lake's surface, but external triggering, via rock falls, was also observed. Here I identify three styles of bubble bursting and spattering eruptive activity (isolated events, clusters of events, and prolonged episodes) at the lava lake, and distinguish them based on their temporal and spatial distributions. Isolated events are discrete single bubble bursts that persist for a few tenths of seconds to seconds and are separated by repose periods of similar or longer time scales. Clusters of events are closely spaced, repeated events grouped around a narrow point source, which persist for seconds to minutes. Prolonged episodes are groupings of numerous events closely linked in space and time that persist for tens of minutes to hours. Analysis of individual events from high-speed camera images indicates that they are made up of up to three phases: the bubble ascent phase, the bursting and pyroclast ejection phase, and the drain back (and rebound) phase. Based on the numerical parameters established in this study, the 2015 activity was relatively weak (i.e., of low intensity) but still falls in a region between those of continuous Hawaiian fountains and impulsive, short-lived Strombolian explosions, in terms of duration.

  7. Effects of Discrete Emotions on Young Children's Ability to Discern Fantasy and Reality

    ERIC Educational Resources Information Center

    Carrick, Nathalie; Quas, Jodi A.

    2006-01-01

    This study examined 3- to 5-year-olds' (N = 128; 54% girls) ability to discriminate emotional fantasy and reality. Children viewed images depicting fantastic or real events that elicited several emotions, reported whether each event could occur, and rated their emotional reaction to the image. Children were also administered the Play Behavior…

  8. Discrete-storm water-table fluctuation method to estimate episodic recharge.

    USGS Publications Warehouse

    Nimmo, John R.; Horowitz, Charles; Mitchell, Lara

    2015-01-01

    We have developed a method to identify and quantify recharge episodes, along with their associated infiltration-related inputs, by a consistent, systematic procedure. Our algorithm partitions a time series of water levels into discrete recharge episodes and intervals of no episodic recharge. It correlates each recharge episode with a specific interval of rainfall, so storm characteristics such as intensity and duration can be associated with the amount of recharge that results. To be useful in humid climates, the algorithm evaluates the separability of events, so that those whose recharge cannot be associated with a single storm can be appropriately lumped together. The elements of this method that require subjective hydrologic judgment are the values of lag time, fluctuation tolerance, and master recession parameters. Because these are determined once for a given site, they do not contribute subjective influences affecting episode-to-episode comparisons. By centralizing the elements requiring scientific judgment, our method facilitates such comparisons by keeping the most subjective elements openly apparent, making it easy to maintain consistency. If applied to a period of data long enough to include recharge episodes with broadly diverse characteristics, the method has value for predicting how climatic alterations in the distribution of storm intensities and seasonal duration may affect recharge.
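
    In its simplest form, the episodic water-table fluctuation idea reduces to comparing each observed level against a projection of the master recession and attributing any excess rise, scaled by specific yield, to recharge. The constant recession rate, specific yield, and tolerance below are placeholders for the site-calibrated quantities the method actually requires, and the real algorithm additionally groups steps into episodes and ties them to rainfall intervals.

```python
def episodic_recharge(levels, dt=1.0, recession_rate=0.02,
                      specific_yield=0.2, tol=0.005):
    """Per-interval recharge estimates from a water-level series.
    A rise above the projected recession (beyond tol) counts as recharge."""
    out = []
    for prev, cur in zip(levels, levels[1:]):
        projected = prev - recession_rate * dt   # master recession proxy
        rise = cur - projected
        out.append(specific_yield * rise if rise > tol else 0.0)
    return out
```

    The fluctuation tolerance plays the same role as in the paper: it keeps measurement noise and minor fluctuations from being misread as recharge episodes.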

  9. Studies on the latitudinal distribution of ground-based geomagnetic pulsations and fluctuations in the interplanetary medium using discrete mathematical analysis methods

    NASA Astrophysics Data System (ADS)

    Zelinsky, N. R.; Kleimenova, N. G.; Malysheva, L. M.

    2014-07-01

    Ground-based geomagnetic Pc5 (2-7 mHz) pulsations, caused by the passage of dense transients (density disturbances) in the solar wind, were analyzed. It was shown that intense bursts can appear in the solar wind density and its fluctuations, up to Np ≈ 30-50 cm⁻³, even during the most magnetically calm year of the past decades (2009). The analysis, performed using one of the latest methods of discrete mathematical analysis (DMA), is presented. The energy functional of a time-series fragment (called "anomaly rectification" in DMA terms) of two such events was calculated. It was established that fluctuations in the dynamic pressure (density) of the solar wind (SW) cause the global excitation of Pc5 geomagnetic pulsations in the daytime sector of the Earth's magnetosphere, i.e., from polar to equatorial latitudes. Such pulsations started and ended suddenly and simultaneously at all latitudes. Fluctuations in the interplanetary magnetic field (IMF) turned out to be less geoeffective in exciting geomagnetic pulsations than fluctuations in the SW density. The pulsation generation mechanisms in various structural regions of the magnetosphere were probably different. It was therefore concluded that the most probable source of ground-based pulsations is fluctuations of the corresponding periods in the SW density.

  10. Modification of the SAS4A Safety Analysis Code for Integration with the ADAPT Discrete Dynamic Event Tree Framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jankovsky, Zachary Kyle; Denman, Matthew R.

    It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.

  11. Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows

    NASA Astrophysics Data System (ADS)

    Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.

    2016-08-01

    The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resulting from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
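
    The "weighted summation of distribution functions at eight discrete points" can be illustrated generically. The equal weights and cube-corner velocity placement below are illustrative assumptions for a D3Q8-style model, not the paper's exact quadrature:

```python
from itertools import product

# Generic illustration: conservative variables (density, momentum) recovered
# as weighted sums over 8 discrete velocities at the corners of a cube.
def conservative_variables(f, c=1.0):
    """f: dict mapping each discrete velocity (±c, ±c, ±c) to a
    distribution-function value. Returns (density, momentum vector)."""
    velocities = [tuple(c * s for s in signs)
                  for signs in product((-1.0, 1.0), repeat=3)]
    rho = sum(f[v] for v in velocities)                       # zeroth moment
    mom = [sum(f[v] * v[k] for v in velocities) for k in range(3)]  # first moments
    return rho, mom
```

    A uniform distribution over the eight points yields unit density and zero momentum, matching the symmetry of the velocity set.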

  12. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
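
    The combination of reverse computation with a small amount of incremental state saving can be sketched as follows; the class and field names are hypothetical, not the µsik API:

```python
# Illustrative sketch of a reversible event for optimistic simulation:
# arithmetic updates are undone by reverse computation, while the RNG
# state (not reversible by arithmetic) uses incremental state saving.
class InfectionEvent:
    def __init__(self, new_infections):
        self.new_infections = new_infections
        self.saved_rng_state = None            # incremental state saving

    def forward(self, region, rng):
        self.saved_rng_state = rng.getstate()  # save what cannot be reversed
        region["infected"] += self.new_infections     # reversible update
        region["susceptible"] -= self.new_infections  # reversible update

    def reverse(self, region, rng):
        region["infected"] -= self.new_infections     # undo by reverse computation
        region["susceptible"] += self.new_infections
        rng.setstate(self.saved_rng_state)     # restore saved state on rollback
```

    Saving only the RNG state rather than the whole region keeps rollback memory small, which is the point of mixing the two techniques.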

  13. Design of Flight Vehicle Management Systems

    NASA Technical Reports Server (NTRS)

    Meyer, George; Aiken, Edwin W. (Technical Monitor)

    1994-01-01

    As the operation of large systems becomes ever more dependent on extensive automation, the need for an effective solution to the problem of design and validation of the underlying software becomes more critical. Large systems possess much detailed structure, typically hierarchical, and they are hybrid. Information processing at the top of the hierarchy is by means of formal logic and sentences; on the bottom it is by means of simple scalar differential equations and functions of time; and in the middle it is by an interacting mix of nonlinear multi-axis differential equations and automata, and functions of time and discrete events. The lecture will address the overall problem as it relates to flight vehicle management, describe the middle level, and offer a design approach that is based on Differential Geometry and Discrete Event Dynamic Systems Theory.

  14. Safety analysis of discrete event systems using a simplified Petri net controller.

    PubMed

    Zareiee, Meysam; Dideban, Abbas; Asghar Orouji, Ali

    2014-01-01

    This paper deals with the problem of forbidden states in discrete event systems based on Petri net models. A method is presented to prevent the system from entering these states by constructing a small number of generalized mutual exclusion constraints. This goal is achieved by solving three types of Integer Linear Programming problems. The problems are designed to verify the constraints: some of them relate to verifying authorized states, and the others to avoiding forbidden states. The obtained constraints can be enforced on the system using a small number of control places. Moreover, the number of arcs related to these places is small, and the controller obtained after connecting them is maximally permissive. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
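
    A generalized mutual exclusion constraint w·M ≤ k is typically enforced by adding a control (monitor) place whose marking tracks the remaining slack of the constraint; a minimal sketch with illustrative names, not the paper's ILP formulation:

```python
# Marking of the monitor place enforcing sum(w_i * M_i) <= k for a
# Petri net marking M. A negative result means the marking already
# violates the constraint, so no valid monitor place exists for it.
def control_place_marking(weights, k, marking):
    return k - sum(w * m for w, m in zip(weights, marking))
```

    For example, with w = (1, 1) and k = 1 the monitor place holds one token when both constrained places are empty, mirroring classic mutual exclusion.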

  15. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
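
    Because the semi-nonparametric model nests the MNL/MDCEV model, the comparison reduces to a standard likelihood-ratio test; a minimal sketch with hypothetical log-likelihood values and hard-coded 5% chi-square critical values:

```python
# Likelihood-ratio test of H0: the restricted (standard-Gumbel) model is
# adequate, against the nesting semi-nonparametric model. df equals the
# number of extra semi-nonparametric parameters; critical values are the
# 5% chi-square quantiles for df = 1, 2, 3.
def likelihood_ratio_test(loglik_restricted, loglik_general, extra_params,
                          critical_values={1: 3.841, 2: 5.991, 3: 7.815}):
    stat = 2.0 * (loglik_general - loglik_restricted)
    return stat, stat > critical_values[extra_params]
```

    Rejection (a large statistic) is evidence against the standard Gumbel assumption, which is the outcome the paper reports for most utility functions in its three applications.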

  16. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  17. Time-Frequency Domain Analysis of Helicopter Transmission Vibration

    DTIC Science & Technology

    1991-08-01

    Wigner-Ville distribution (WVD) have been reported, including speech... TIME-FREQUENCY DISTRIBUTIONS. THE WIGNER-VILLE DISTRIBUTION: History. Definition. Discrete-Time/Frequency Wigner-Ville Distribution. ...signals are examined to indicate how various forms of modulation are portrayed using the Wigner-Ville distribution. Practical examples... A signal is

  18. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site, in contrast to (single-particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, which we call the "fused" matrix ansatz, to build the stationary distribution explicitly in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.

  19. The discrete regime of flame propagation

    NASA Astrophysics Data System (ADS)

    Tang, Francois-David; Goroshin, Samuel; Higgins, Andrew

    The propagation of laminar dust flames in iron dust clouds was studied in a low-gravity environment on-board a parabolic flight aircraft. The elimination of buoyancy-induced convection and particle settling permitted measurements of fundamental combustion parameters such as the burning velocity and the flame quenching distance over a wide range of particle sizes and in different gaseous mixtures. The discrete regime of flame propagation was observed by substituting nitrogen present in air with xenon, an inert gas with a significantly lower heat conductivity. Flame propagation in the discrete regime is controlled by the heat transfer between neighboring particles, rather than by the particle burning rate used by traditional continuum models of heterogeneous flames. The propagation mechanism of discrete flames depends on the spatial distribution of particles, and thus such flames are strongly influenced by local fluctuations in the fuel concentration. Constant pressure laminar dust flames were observed inside 70 cm long, 5 cm diameter Pyrex tubes. Equally-spaced plate assemblies forming rectangular channels were placed inside each tube to determine the quenching distance defined as the minimum channel width through which a flame can successfully propagate. High-speed video cameras were used to measure the flame speed and a fiber optic spectrometer was used to measure the flame temperature. Experimental results were compared with predictions obtained from a numerical model of a three-dimensional flame developed to capture both the discrete nature and the random distribution of particles in the flame. Though good qualitative agreement was obtained between model predictions and experimental observations, residual g-jitters and the short reduced-gravity periods prevented further investigations of propagation limits in the discrete regime. The full exploration of the discrete flame phenomenon would require high-quality, long duration reduced gravity environment available only on orbital platforms.

  20. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah

    As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model with the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model size up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.

  1. Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.

    PubMed

    Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin

    2017-11-01

    The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly followed a single step with the mechanism function f(α) = (1 − α)³ and the kinetic parameter pair E₀ = 180.5 kJ/mol and A₀ = 1.5 × 10¹³ s⁻¹. The Logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although the reaction order n = 3.3 was in accordance with the iso-conversional analysis, the Logistic DAEM could not capture the weight-loss features in detail, since it presents devolatilization as a single-step reaction. By contrast, the non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight-fraction distribution in the discrete DAEM reflected the weight-loss features, so both models could describe them. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A method for modeling finite-core vortices in wake-flow calculations

    NASA Technical Reports Server (NTRS)

    Stremel, P. M.

    1984-01-01

    A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides for a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented. Computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions are presented; contour plots of the flow-field velocities are included; and comparisons are made of the computed flow-field velocities with experimentally measured velocities.

  3. A survival tree method for the analysis of discrete event times in clinical and epidemiological studies.

    PubMed

    Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard

    2016-02-28

    Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
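
    The key equivalence used above — the likelihood of a discrete survival model equals that of a binary regression model — rests on expanding each subject into one binary record per discrete period at risk; a minimal sketch with illustrative names, not the authors' code:

```python
# Expand (time, event) discrete survival data into person-period format:
# each subject contributes one binary record per period at risk, with
# y = 1 only in the failure period of an uncensored subject.
def to_person_period(subjects):
    """subjects: list of (time, event) pairs with time in {1, 2, ...}
    and event = 1 for failure, 0 for censoring.
    Returns (subject_id, period, y) rows."""
    rows = []
    for sid, (time, event) in enumerate(subjects):
        for t in range(1, time + 1):
            y = 1 if (t == time and event == 1) else 0
            rows.append((sid, t, y))
    return rows
```

    Any tree algorithm for binary outcomes can then be fit to the expanded rows, with the leaf-wise mean of y per period estimating the discrete hazard.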

  4. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes the method of improving performance of a filling line based on simulation. This study concerns a production line that is located in a manufacturing centre of a FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
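
    The financial step reduces to standard discounted-cash-flow arithmetic; a minimal NPV sketch with illustrative cash flows (depreciation, tax, and ROI handling omitted):

```python
# Net present value of a cash-flow series: cash_flows[0] is the initial
# (usually negative) investment, cash_flows[t] the net flow in year t.
def npv(rate, cash_flows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))
```

    A scenario is financially attractive when its NPV at the chosen discount rate is positive; the discount rate is where inflation and the cost of capital enter.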

  5. Digitally controlled distributed phase shifter

    DOEpatents

    Hietala, V.M.; Kravitz, S.H.; Vawter, G.A.

    1993-08-17

    A digitally controlled distributed phase shifter is comprised of N phase shifters. Digital control is achieved by using N binary length-weighted electrodes located on the top surface of a waveguide. A control terminal is attached to each electrode thereby allowing the application of a control signal. The control signal is either one or two discrete bias voltages. The application of the discrete bias voltages changes the modal index of a portion of the waveguide that corresponds to a length of the electrode to which the bias voltage is applied, thereby causing the phase to change through the underlying portion of the waveguide. The digitally controlled distributed phase shift network has a total phase shift comprised of the sum of the individual phase shifters.

  6. Digitally controlled distributed phase shifter

    DOEpatents

    Hietala, Vincent M.; Kravitz, Stanley H.; Vawter, Gregory A.

    1993-01-01

    A digitally controlled distributed phase shifter is comprised of N phase shifters. Digital control is achieved by using N binary length-weighted electrodes located on the top surface of a waveguide. A control terminal is attached to each electrode thereby allowing the application of a control signal. The control signal is either one or two discrete bias voltages. The application of the discrete bias voltages changes the modal index of a portion of the waveguide that corresponds to a length of the electrode to which the bias voltage is applied, thereby causing the phase to change through the underlying portion of the waveguide. The digitally controlled distributed phase shift network has a total phase shift comprised of the sum of the individual phase shifters.
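
    The binary length-weighted electrode scheme in the two patent records above implies that the total phase shift is a binary-weighted sum selected by the digital control word; a toy sketch under that reading (the bit width and per-bit phase step are illustrative):

```python
# Total phase of an N-bit digitally controlled distributed phase shifter:
# electrode i has length (and thus phase step) 2**i times the LSB step,
# and contributes only when bit i of the control word applies the bias.
def total_phase(control_word, bits, lsb_phase_deg):
    return sum(lsb_phase_deg * (2 ** i)
               for i in range(bits) if (control_word >> i) & 1)
```

    With a 22.5° LSB and 3 bits, the control word selects phases from 0° to 157.5° in uniform 22.5° steps, which is why length-weighting the electrodes gives direct digital control.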

  7. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.

    PubMed

    Deng, Lei; Jiao, Peng; Pei, Jing; Wu, Zhenzhi; Li, Guoqi

    2018-04-01

    Although deep neural networks (DNNs) have become a revolutionary force in opening up the AI era, their notoriously large hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, have made the on-chip training of DNNs quite promising. There is therefore a pressing need for an architecture that subsumes these networks under a unified framework achieving both higher performance and lower overhead. To this end, two fundamental issues are yet to be addressed. The first is how to implement back propagation when neuronal activations are discrete. The second is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory/computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable implementing the back-propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes binary and ternary networks as its special cases, under which a heuristic algorithm is provided at https://github.com/AcrossV/Gated-XNOR. More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed gated XNOR networks (GXNOR-Nets), since only the event of a non-zero weight meeting a non-zero activation enables the control gate to start the XNOR logic operations of the original binary networks. This promises event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to make the framework suitable for various hardware platforms. Copyright © 2018 Elsevier Ltd. All rights reserved.
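
    The gated-XNOR idea — a product of ternary operands in {-1, 0, +1} contributes only when both are non-zero, and then reduces to an XNOR of sign bits — can be sketched as a toy dot product (not the paper's implementation):

```python
# Ternary dot product computed without multiplications: the zero check is
# the event-driven gate, and the non-zero case is an XNOR of sign bits
# (+1 when the signs agree, -1 when they differ).
def gated_xnor_dot(weights, activations):
    acc = 0
    for w, a in zip(weights, activations):
        if w != 0 and a != 0:                        # gate: skip zero events
            acc += 1 if (w > 0) == (a > 0) else -1   # XNOR of signs
    return acc
```

    In hardware, the gated zeros translate directly into skipped operations, which is the computational sparsity the abstract highlights.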

  8. THYME: Toolkit for Hybrid Modeling of Electric Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James Joseph; Perumalla, Kalyan S.

    2011-01-01

    THYME is an object-oriented library for building models of wide area control and communications in electric power systems. This software is designed as a module to be used with existing open source simulators for discrete event systems in general and communication systems in particular. THYME consists of a typical model for simulating electro-mechanical transients (e.g., as are used in dynamic stability studies), data handling objects to work with CDF and PTI formatted power flow data, and sample models of discrete sensors and controllers.

  9. Software engineering and simulation

    NASA Technical Reports Server (NTRS)

    Zhang, Shou X.; Schroer, Bernard J.; Messimer, Sherri L.; Tseng, Fan T.

    1990-01-01

    This paper summarizes the development of several automatic programming systems for discrete event simulation. Emphasis is given on the model development, or problem definition, and the model writing phases of the modeling life cycle.

  10. Utilization of Historic Information in an Optimisation Task

    NASA Technical Reports Server (NTRS)

    Boesser, T.

    1984-01-01

    One of the basic components of a discrete model of motor behavior and decision making, which describes tracking and supervisory control in unitary terms, is assumed to be a filtering mechanism which is tied to the representational principles of human memory for time-series information. In a series of experiments subjects used the time-series information with certain significant limitations: there is a range-effect; asymmetric distributions seem to be recognized, but it does not seem to be possible to optimize performance based on skewed distributions. Thus there is a transformation of the displayed data between the perceptual system and representation in memory involving a loss of information. This rules out a number of representational principles for time-series information in memory and fits very well into the framework of a comprehensive discrete model for control of complex systems, modelling continuous control (tracking), discrete responses, supervisory behavior and learning.

  11. The Effect of Scale Dependent Discretization on the Progressive Failure of Composite Materials Using Multiscale Analyses

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

    A multiscale modeling methodology, which incorporates a statistical distribution of fiber strengths into coupled micromechanics/finite element analyses, is applied to unidirectional polymer matrix composites (PMCs) to analyze the effect of mesh discretization both at the micro- and macroscales on the predicted ultimate tensile strength (UTS) and failure behavior. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a PMC tensile specimen that initiates at the repeating unit cell (RUC) level. Three different finite element mesh densities were employed and each coupled with an appropriate RUC. Multiple simulations were performed in order to assess the effect of a statistical distribution of fiber strengths on the bulk composite failure and predicted strength. The coupled effects of both the micro- and macroscale discretizations were found to have a noticeable effect on the predicted UTS and computational efficiency of the simulations.

  12. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  13. Power-law Exponent in Multiplicative Langevin Equation with Temporally Correlated Noise

    NASA Astrophysics Data System (ADS)

    Morita, Satoru

    2018-05-01

    Power-law distributions are ubiquitous in nature. Random multiplicative processes are a basic model for the generation of power-law distributions. For discrete-time systems, the power-law exponent is known to decrease as the autocorrelation time of the multiplier increases. However, for continuous-time systems, it is not yet clear how the temporal correlation affects the power-law behavior. Herein, we analytically investigated a multiplicative Langevin equation with colored noise. We show that the power-law exponent depends on the details of the multiplicative noise, in contrast to the case of discrete-time systems.
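
    As a toy discrete-time illustration of the random multiplicative mechanism (the parameters are arbitrary, not from the paper), a multiplier with negative mean logarithm combined with a lower reflecting bound produces a heavy, power-law-like tail:

```python
import random

# Kesten-type toy process: x is repeatedly multiplied by a random factor
# with E[log m] < 0, and a lower bound keeps it from collapsing to zero.
# The stationary distribution of such processes has a power-law tail.
def multiplicative_process(steps, floor=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 1.0, []
    for _ in range(steps):
        x *= rng.lognormvariate(-0.05, 0.5)   # multiplier with E[log m] < 0
        x = max(x, floor)                     # reflecting lower bound
        samples.append(x)
    return samples
```

    In the discrete-time setting the paper refers to, correlating successive multipliers fattens the tail (lowers the power-law exponent); the continuous-time colored-noise case is the paper's subject.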

  14. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin, independent slices, each with a constant density and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  15. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

  16. LMI-based approach to stability analysis for fractional-order neural networks with discrete and distributed delays

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Ye, Renyu; Liu, Song; Cao, Jinde; Alsaedi, Ahmad; Li, Xiaodi

    2018-02-01

    This paper is concerned with the asymptotic stability of the Riemann-Liouville fractional-order neural networks with discrete and distributed delays. By constructing a suitable Lyapunov functional, two sufficient conditions are derived to ensure that the addressed neural network is asymptotically stable. The presented stability criteria are described in terms of the linear matrix inequalities. The advantage of the proposed method is that one may avoid calculating the fractional-order derivative of the Lyapunov functional. Finally, a numerical example is given to show the validity and feasibility of the theoretical results.

  17. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution

    DOE PAGES

    Cai, Hong; Long, Christopher M.; DeRose, Christopher T.; ...

    2017-01-01

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  18. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution.

    PubMed

    Cai, Hong; Long, Christopher M; DeRose, Christopher T; Boynton, Nicholas; Urayama, Junji; Camacho, Ryan; Pomerene, Andrew; Starbuck, Andrew L; Trotter, Douglas C; Davids, Paul S; Lentine, Anthony L

    2017-05-29

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  19. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Hong; Long, Christopher M.; DeRose, Christopher T.

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  20. Spatial Connectivity and Temporal Response of Variable Source Areas (VSAs): Implications for Catchment Scale Water and Solute Mixing

    NASA Astrophysics Data System (ADS)

    Inamdar, S.; Mitchell, M.; McDonnell, J.; McGlynn, B.; Shanley, J.

    2001-05-01

    The significance of variable source areas (VSAs) in storm runoff generation and as loci for mixing of event and pre-event waters has long been recognized. Recent research suggests that VSAs may also play an important role in regulating the export of C and N solutes from catchments. We hypothesize that the spatial distribution of VSAs in the catchment and their connectedness with the stream network are a first-order control on the temporal dynamics and expression of water and solutes from the catchment. We examined two contrasting scenarios of VSA distribution: (1) VSAs located lower in the catchment and well connected to the stream network, versus (2) discrete VSAs located in the upper portions of the catchment and disconnected from the stream network. We evaluated the potential impact of these scenarios on (a) the timing and peak of event water contributions, and (b) the timing and peak of solute signatures. We hypothesized that if VSAs are well connected to the stream network (Scenario 1), then event water contributions would be distinct and would predominate early on during the rising limb of the hydrograph of stream discharge. In contrast, if VSAs are isolated and disconnected (Scenario 2), then event water contributions would be damped and delayed and possibly continue to be observed through hydrograph recession. We believe solutes such as dissolved organic carbon (DOC), which are primarily flushed from near surface soil horizons, will follow an event water trajectory. We tested these hypotheses for a 135 ha forested headwater catchment in the Adirondack Mountains of New York. Detailed storm runoff and solute data for the catchment have been available since 1994. A two-component separation model using base cations (Na, Mg, Ca, and K) was used to partition stormflow discharge into pre-event and event components.
Event water contributions were small on the rising limb of the hydrograph, reached their maximum just after the discharge peak, and continued through the recession limb, hours after cessation of rainfall. DOC concentrations followed a temporal pattern very similar to the event water contributions, with a peak at or just after peak discharge. In contrast, the timing of the nitrate peak appeared to vary seasonally, indicating availability of nitrate in the soil profile as a controlling mechanism. Nitrate peaks appeared to match DOC and event water peaks for spring events, but occurred much earlier on the rising limb of the discharge hydrograph during fall events. Results from this study appear to confirm our hypothesis for scenario 2, where the disconnected nature of VSAs is displayed by the delayed expression of event water and DOC. These results also confirm our hypothesis that the spatial distribution of VSAs will have a greater impact on the temporal expression of solutes that are available in near surface soil horizons, as opposed to solutes whose availability in the near surface soil varies with seasons. These hypotheses are also being evaluated for a forested subcatchment of the Sleepers River watershed in Vermont.
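    The two-component separation used in this record reduces to a conservative-tracer mass balance. A minimal sketch follows, with illustrative concentrations (the study's actual cation data are not reproduced here):

```python
def event_water_fraction(c_stream, c_pre_event, c_event):
    """Two-component hydrograph separation by tracer mass balance:
    Q_t*C_stream = Q_e*C_event + Q_p*C_pre, with Q_t = Q_e + Q_p,
    so the event-water fraction is (C_stream - C_pre) / (C_event - C_pre)."""
    if c_event == c_pre_event:
        raise ValueError("end members must differ in tracer concentration")
    return (c_stream - c_pre_event) / (c_event - c_pre_event)
```

    For example, a stream tracer concentration midway between the pre-event and event end members implies a 50% event-water contribution at that point on the hydrograph.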

  1. Providing a parallel and distributed capability for JMASS using SPEEDES

    NASA Astrophysics Data System (ADS)

    Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob

    2002-07-01

    The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high-speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations, such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.

  2. On the optimal identification of tag sets in time-constrained RFID configurations.

    PubMed

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time of the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. Besides, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
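    The loss probability described above — the chance that identification outlasts the sojourn time — can also be estimated by simulation rather than by the paper's Markov-chain analysis. A minimal sketch, assuming a framed-slotted-ALOHA reader with a fixed frame size (an illustrative protocol choice, not the paper's exact model):

```python
import random

def identification_time(n_tags, frame_size=16, rng=None):
    """Simulate framed slotted ALOHA: in each frame every unidentified
    tag picks a slot uniformly at random; tags alone in their slot are
    identified.  Returns the total number of slots until all are read."""
    rng = rng or random.Random()
    slots = 0
    while n_tags > 0:
        counts = [0] * frame_size
        for _ in range(n_tags):
            counts[rng.randrange(frame_size)] += 1
        n_tags -= sum(1 for c in counts if c == 1)   # singletons are identified
        slots += frame_size
    return slots

def loss_probability(n_tags, sojourn_slots, frame_size=16, trials=2000, seed=0):
    """Estimate P(some tag leaves unidentified) = P(T_id > sojourn)."""
    rng = random.Random(seed)
    return sum(identification_time(n_tags, frame_size, rng) > sojourn_slots
               for _ in range(trials)) / trials
```

    Sweeping `sojourn_slots` traces out the complement of the identification-time CDF, which is exactly the quantity the record derives analytically.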

  3. Causal Set Phenomenology

    NASA Astrophysics Data System (ADS)

    Philpott, Lydia

    2010-09-01

    Central to the development of any new theory is the investigation of the observable consequences of the theory. In the search for quantum gravity, research in phenomenology has been dominated by models violating Lorentz invariance (LI) -- despite there being, at present, no evidence that LI is violated. Causal set theory is a LI candidate theory of QG that seeks not to quantise gravity as such, but rather to develop a new understanding of the universe from which both GR and QM could arise separately. The key hypothesis is that spacetime is a discrete partial order: a set of events where the partial ordering is the physical causal ordering between the events. This thesis investigates Lorentz invariant QG phenomenology motivated by the causal set approach. Massive particles propagating in a discrete spacetime will experience diffusion in both position and momentum in proper time. This thesis considers this idea in more depth, providing a rigorous derivation of the diffusion equation in terms of observable cosmic time. The diffusion behaviour does not depend on any particular underlying particle model. Simulations of three different models are conducted, revealing behaviour that matches the diffusion equation despite limitations on the size of causal set simulated. The effect of spacetime discreteness on the behaviour of massless particles is also investigated. Diffusion equations in both affine time and cosmic time are derived, and it is found that massless particles undergo diffusion and drift in energy. Constraints are placed on the magnitudes of the drift and diffusion parameters by considering the blackbody nature of the CMB. Spacetime discreteness also has a potentially observable effect on photon polarisation. For linearly polarised photons, underlying discreteness is found to cause a rotation in polarisation angle and a suppression in overall polarisation.

  4. Studies of discrete symmetries in a purely leptonic system using the Jagiellonian Positron Emission Tomograph

    NASA Astrophysics Data System (ADS)

    Moskal, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gupta-Sharma, N.; Gorgol, M.; Hiesmayr, B. C.; Jasińska, B.; Kamińska, D.; Khreptak, O.; Korcyl, G.; Kowalski, P.; Krzemień, W.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Niedźwiecki, Sz.; Pawlik-Niedźwiecka, M.; Raczyński, L.; Rudy, Z.; Silarski, M.; Smyrski, J.; Wieczorek, A.; Wiślicki, W.; Zgardzińska, B.; Zieliński, M.

    2016-11-01

    Discrete symmetries such as parity (P), charge conjugation (C), and time reversal (T) are of fundamental importance in physics and cosmology. Breaking of charge conjugation symmetry (C) and of its combination with parity (CP) constitutes a necessary condition for the existence of the asymmetry between matter and antimatter in the observed Universe. The presently known sources of discrete symmetry violation can account for only a tiny fraction of the excess of matter over antimatter. So far, CP and T violations have been observed only in systems involving quarks and have never been reported for purely leptonic objects. In this article we briefly describe an experimental proposal for testing discrete symmetries in the decays of the positronium atom, which is made exclusively of leptons. The experiments are conducted by means of the Jagiellonian Positron Emission Tomograph (J-PET), which is constructed from strips of plastic scintillators enabling registration of photons from positronium annihilation. The J-PET tomograph together with the positronium target system enables measurement of expectation values for discrete-symmetry-odd operators constructed from (i) the spin vector of the ortho-positronium atom, (ii) the momentum vectors of photons originating from the decay of positronium, and (iii) the linear polarization direction of annihilation photons. Linearly polarized positronium will be produced in highly porous aerogel or polymer targets, exploiting longitudinally polarized positrons emitted by the 22Na isotope. Information about the polarization vector of orthopositronium will be available on an event-by-event basis and will be reconstructed from the known position of the positron source and the reconstructed position of the orthopositronium annihilation. The first tests and calibration runs are planned for 2016, and data collection with high statistics will commence in 2017.

  5. Simulation methods with extended stability for stiff biochemical kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
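    The basic Poisson tau-leap step described above — fire a Poisson-distributed batch of reactions per fixed step instead of one SSA event per waiting time — can be sketched for a single decay channel. This is the plain tau-leap, not the paper's Runge-Kutta extension, and the rate constant and step size are illustrative:

```python
import numpy as np

def tau_leap_decay(x0=1000, k=0.1, tau=0.1, t_end=20.0, seed=0):
    """Poisson tau-leaping for the decay reaction X -> 0 with propensity
    k*X: each leap fires a Poisson(k*X*tau) batch of reactions at once,
    instead of one SSA event per exponentially distributed waiting time."""
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    traj = [(t, x)]
    while t < t_end and x > 0:
        n_fired = rng.poisson(k * x * tau)   # reactions fired in this leap
        x = max(x - n_fired, 0)              # clamp: no negative copy numbers
        t += tau
        traj.append((t, x))
    return traj
```

    The mean trajectory tracks the deterministic decay x0·exp(-k·t); the record's point is that the variance, not the mean, is where naive tau-leaping degrades as tau grows.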

  6. Relationship between Linguistic Units and Motor Commands.

    ERIC Educational Resources Information Center

    Fromkin, Victoria A.

    Assuming that speech is the result of a number of discrete neuromuscular events and that the brain can store only a limited number of motor commands with which to control these events, the research reported in this paper was directed to a determination of the size and nature of the stored items and an explanation of how speakers encode a sequence…

  7. Using Institutional Data to Identify Students at Risk for Leaving Community College: An Event History Approach

    ERIC Educational Resources Information Center

    Bachler, Paul T.

    2013-01-01

    Community colleges have been criticized for having lower graduation rates than four year colleges, but few studies have looked at non-graduation transfer, in which a student leaves the community college for a four-year college without taking an associate degree. The current study utilizes institutional data and a discrete-time event history model…
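    A discrete-time event history model of this kind is typically fit on "person-period" data: one row per student per term at risk, with the hazard estimated conditionally on survival to that term. A minimal sketch with hypothetical records (the term counts and leave flags below are made up for illustration):

```python
from collections import defaultdict

def person_period(records):
    """Expand (last_observed_term, left) records into person-period rows:
    one (term, y) row per student per term at risk, with y = 1 only in
    the term the student left (censored students never get y = 1)."""
    rows = []
    for t_obs, left in records:
        for t in range(1, t_obs + 1):
            rows.append((t, 1 if (left and t == t_obs) else 0))
    return rows

def hazard_by_term(rows):
    """Empirical discrete-time hazard: P(leave in term t | enrolled at t)."""
    at_risk, events = defaultdict(int), defaultdict(int)
    for t, y in rows:
        at_risk[t] += 1
        events[t] += y
    return {t: events[t] / at_risk[t] for t in sorted(at_risk)}
```

    In practice the same person-period layout is what a discrete-time hazard (logistic) regression is fit on, with institutional covariates attached to each row.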

  8. Pathways to the Principalship: An Event History Analysis of the Careers of Teachers with Principal Certification

    ERIC Educational Resources Information Center

    Davis, Bradley W.; Gooden, Mark A.; Bowers, Alex J.

    2017-01-01

    Utilizing rich data on nearly 11,000 educators over 17 academic years in a highly diverse context, we examine the career paths of teachers to determine whether and when they transition into the principalship. We utilize a variety of event history analyses, including discrete-time hazard modeling, to determine how an individual's race, gender, and…

  9. Modelling approaches: the case of schizophrenia.

    PubMed

    Heeg, Bart M S; Damen, Joep; Buskens, Erik; Caleo, Sue; de Charro, Frank; van Hout, Ben A

    2008-01-01

    Schizophrenia is a chronic disease characterized by periods of relative stability interrupted by acute episodes (or relapses). The course of the disease may vary considerably between patients. Patient histories show considerable inter- and even intra-individual variability. We provide a critical assessment of the advantages and disadvantages of three modelling techniques that have been used in schizophrenia: decision trees, (cohort and micro-simulation) Markov models and discrete event simulation models. These modelling techniques are compared in terms of building time, data requirements, medico-scientific experience, simulation time, clinical representation, and their ability to deal with patient heterogeneity, the timing of events, prior events, patient interaction, interaction between co-variates and variability (first-order uncertainty). We note that, depending on the research question, the optimal modelling approach should be selected based on the expected differences between the comparators, the number of co-variates, the number of patient subgroups, the interactions between co-variates, and simulation time. Finally, it is argued that in case micro-simulation is required for the cost-effectiveness analysis of schizophrenia treatments, a discrete event simulation model is best suited to accurately capture all of the relevant interdependencies in this chronic, highly heterogeneous disease with limited long-term follow-up data.
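    The contrast drawn above can be made concrete: a discrete event simulation advances an event queue in continuous time instead of stepping a cohort through fixed Markov cycles. A minimal sketch of one simulated patient history follows; the stable-period and episode durations are illustrative assumptions, not clinical estimates:

```python
import heapq
import random

def simulate_patient(horizon_days=365 * 5, mean_stable=300.0, episode_days=30, seed=0):
    """Minimal discrete event simulation of one illustrative patient
    history: stable periods of exponentially distributed length are
    interrupted by acute episodes of fixed length.  Returns the number
    of relapses within the simulated horizon."""
    rng = random.Random(seed)
    events = [(rng.expovariate(1.0 / mean_stable), "relapse")]  # (time, kind) queue
    relapses = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon_days:
            break
        if kind == "relapse":
            relapses += 1
            heapq.heappush(events, (t + episode_days, "recovery"))
        else:  # recovery: schedule the next relapse after a stable period
            heapq.heappush(events, (t + rng.expovariate(1.0 / mean_stable), "relapse"))
    return relapses
```

    Because events carry their own timestamps, per-patient covariates and prior-event effects can modify the sampled durations directly — the interdependencies the record argues Markov cohort cycles handle poorly.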

  10. Perception of binary acoustic events associated with the first heart sound

    NASA Technical Reports Server (NTRS)

    Spodick, D. H.

    1977-01-01

    The resolving power of the auditory apparatus permits discrete vibrations associated with cardiac activity to be perceived as one or more events. Irrespective of the vibratory combinations recorded by conventional phonocardiography, in normal adults and in most adult patients auscultators tend to discriminate only two discrete events associated with the first heart sound S1. It is stressed that the heart sound S4 may be present when a binary acoustic event associated with S1 occurs in the sequence 'low pitched sound preceding high pitched sound', i.e., its components are perceived by auscultation as 'dull-sharp'. The question of S4 audibility arises in those individuals, normal and diseased, in whom the major components of S1 ought to be, at least clinically, at their customary high pitch and indeed on the PCG appear as high frequency oscillations. It is revealed that the apparent audibility of recorded S4 is not related to P-R interval, P-S4 interval, or relative amplitude of S4. The significant S4-LFC (low frequency component of S1) differences can be related to acoustic modification of the early component of S1.

  11. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Dzuba, Sergei A.

    2016-08-01

    The pulsed double electron-electron resonance (DEER, or PELDOR) technique is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow the determination of their distance distribution function, P(r). P(r) is derived as the solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of enforcing the non-negativity constraint on P(r). For the case of overlapping broad and narrow distributions, regularization by increased discretization length may be applied selectively, with the length differing between distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
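    The Monte Carlo element — random trials of P(r) accepted only when they improve the fit, with non-negativity holding by construction — can be sketched on a toy first-kind Fredholm problem. The Gaussian kernel and grids below are illustrative stand-ins, not the DEER kernel or the paper's algorithm:

```python
import numpy as np

def mc_fit_distribution(signal, t, r_grid, kernel, n_trials=5000, seed=0):
    """Greedy Monte Carlo search for a non-negative P(r) approximately
    solving the first-kind Fredholm equation S(t) = sum_r K(t, r) P(r).
    Single-bin perturbations are kept only if they reduce the misfit;
    non-negativity holds by construction (clamping at zero)."""
    rng = np.random.default_rng(seed)
    K = kernel(np.asarray(t)[:, None], np.asarray(r_grid)[None, :])  # kernel matrix
    P = np.full(len(r_grid), 1.0 / len(r_grid))                      # flat start
    best = np.sum((K @ P - signal) ** 2)
    for _ in range(n_trials):
        i = rng.integers(len(P))
        trial = P.copy()
        trial[i] = max(trial[i] + rng.normal(0.0, 0.05), 0.0)        # clamp at zero
        err = np.sum((K @ trial - signal) ** 2)
        if err < best:
            P, best = trial, err
    dr = r_grid[1] - r_grid[0]
    return P / (P.sum() * dr)                                        # unit area
```

    On this toy problem, coarsening `r_grid` plays the role of the record's regularization-by-discretization-length: fewer, wider bins suppress the oscillatory solutions the ill-posed inversion otherwise admits.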

  12. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
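    The function-space idea can be illustrated with the simplest discretization-invariant sampler, preconditioned Crank-Nicolson (pCN) — the well-defined-on-function-space baseline that likelihood-informed, operator-weighted proposals generalize. This is a hedged sketch of pCN, not the authors' DILI scheme:

```python
import numpy as np

def pcn_sampler(log_likelihood, dim, n_samples=5000, beta=0.5, seed=0):
    """Preconditioned Crank-Nicolson MCMC, the simplest discretization-
    invariant sampler for a target exp(log_likelihood) times an N(0, I)
    Gaussian prior.  The proposal u' = sqrt(1 - beta^2)*u + beta*xi
    preserves the prior, so the acceptance ratio involves only the
    likelihood — independent of the discretization dimension."""
    rng = np.random.default_rng(seed)
    u = np.zeros(dim)
    ll = log_likelihood(u)
    samples = []
    for _ in range(n_samples):
        prop = np.sqrt(1.0 - beta ** 2) * u + beta * rng.standard_normal(dim)
        ll_prop = log_likelihood(prop)
        if np.log(rng.random()) < ll_prop - ll:   # accept/reject on likelihood only
            u, ll = prop, ll_prop
        samples.append(u.copy())
    return np.array(samples)
```

    With a Gaussian log-likelihood -0.5*||u||², the posterior is N(0, I/2), so the sampled variance should settle near 0.5 regardless of `dim` — the dimension-independence the record's samplers build on.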

  13. Spatial Variations of Poloidal and Toroidal Mode Field Line Resonances Observed by MMS

    NASA Astrophysics Data System (ADS)

    Le, G.; Chi, P. J.; Strangeway, R. J.; Russell, C. T.; Slavin, J. A.; Anderson, B. J.; Kepko, L.; Nakamura, R.; Plaschke, F.; Torbert, R. B.

    2017-12-01

    Field line resonances (FLRs) are the magnetosphere's responses to solar wind forcing and internal instabilities generated by solar wind-magnetospheric interactions. They are standing waves along the Earth's magnetic field lines oscillating in either poloidal or toroidal modes. The two types of waves have their unique frequency characteristics. The eigenfrequency of FLRs is determined by the length of the field line and the plasma density, and thus gradually changes with L. For toroidal mode oscillations with magnetic field perturbations in the azimuthal direction, ideal MHD predicts that each field line oscillates independently with its own eigenfrequency. For poloidal mode waves with field lines oscillating radially, their frequency cannot change with L easily, as L shells need to oscillate in sync to avoid efficient damping due to phase mixing. Observations, mainly during quiet times, indeed show that poloidal mode waves often exhibit nearly constant frequency across L shells. Our recent observations, on the other hand, reveal a clear L-dependent frequency trend for a long-lasting storm-time poloidal wave event, indicating the wave can maintain its power with changing frequencies for an extended period [Le et al., 2017]. The spatial variation of the frequency shows discrete spatial structures. The frequency remains constant within each discrete structure that spans about 1 RE along L, and changes discretely. We present a follow-up study to investigate spatial variations of wave frequencies using the Wigner-Ville distribution. We examine both poloidal and toroidal waves under different geomagnetic conditions using multipoint observations from MMS, and compare their frequency and occurrence characteristics for insights into their generation mechanisms. Reference: Le, G., et al. (2017), Global observations of magnetospheric high-m poloidal waves during the 22 June 2015 magnetic storm, Geophys. Res. Lett., 44, 3456-3464, doi:10.1002/2017GL073048.

  14. Differential modulation of changes in hippocampal-septal synaptic excitability by the amygdala as a function of either elemental or contextual fear conditioning in mice.

    PubMed

    Desmedt, A; Garcia, R; Jaffard, R

    1998-01-01

    Recent data obtained using a classic fear conditioning paradigm showed a dissociation between the retention of associations relative to contextual information (dependent on the hippocampal formation) and the retention of elemental associations (dependent on the amygdala). Furthermore, it was reported that conditioned emotional responses (CERs) could be dissociated from the recollection of the learning experience (declarative memory) in humans and from modifications of the hippocampal-septal excitability in animals. Our aim was to determine whether these two systems ("behavioral expression" system and "factual memory" system) interact by examining the consequences of amygdalar lesions (1) on the modifications of hippocampal-septal excitability and (2) on the behavioral expression of fear (freezing) resulting from an aversive conditioning during reexposure to conditional stimuli (CSs). During conditioning, to modulate the predictive nature of the context and of a discrete stimulus (tone) on the unconditional stimulus (US) occurrence, the phasic discrete CS was paired with the US or randomly distributed with regard to the US. After the lesion, the CER was dramatically reduced during reexposure to the CSs, whatever the type of acquisition. However, the changes in hippocampal-septal excitability persisted but were altered. For controls, a decrease in septal excitability was observed during reexposure to the conditioning context only for the "unpaired group" (predictive context case). Conversely, among lesioned subjects this decrease was observed in the "paired group" (predictive discrete CS case), whereas this decrease was significantly reduced in the unpaired group with respect to the matched control group. The amplitude and the direction of these modifications suggest a differential modulation of hippocampal-septal excitability by the amygdala to amplify the contribution of the more predictive association signaling the occurrence of the aversive event.

  15. Serial Founder Effects During Range Expansion: A Spatial Analog of Genetic Drift

    PubMed Central

    Slatkin, Montgomery; Excoffier, Laurent

    2012-01-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1 – 1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population. PMID:22367031
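    The per-founding-event heterozygosity decay stated here is easy to check by simulation. A minimal sketch assuming pure binomial founder sampling with no growth, migration, or selection (so the effective founder number ke is simply the founder count):

```python
import random

def serial_founder_heterozygosity(p0=0.5, n_founders=25, n_demes=30, reps=1500, seed=0):
    """Simulate serial founder events: each new deme is founded by
    2*n_founders gene copies drawn binomially from the previous deme's
    allele frequency.  Returns mean heterozygosity H = 2p(1-p) per deme;
    with no growth or migration, theory predicts decay by a factor of
    (1 - 1/(2*ke)) per founding event, with ke = n_founders."""
    rng = random.Random(seed)
    H = [0.0] * (n_demes + 1)
    for _ in range(reps):
        p = p0
        H[0] += 2 * p * (1 - p)
        for d in range(1, n_demes + 1):
            copies = sum(rng.random() < p for _ in range(2 * n_founders))
            p = copies / (2 * n_founders)   # new deme's allele frequency
            H[d] += 2 * p * (1 - p)
    return [h / reps for h in H]
```

    The geometric decay rate of the simulated H along the deme series should match 1 - 1/(2·ke), here 0.98 for 25 diploid founders.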

  16. Serial founder effects during range expansion: a spatial analog of genetic drift.

    PubMed

    Slatkin, Montgomery; Excoffier, Laurent

    2012-05-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1-1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population.

  17. Template-free synthesis and structural evolution of discrete hydroxycancrinite zeolite nanorods from high-concentration hydrogels.

    PubMed

    Chen, Shaojiang; Sorge, Lukas P; Seo, Dong-Kyun

    2017-12-07

    We report the synthesis and characterization of hydroxycancrinite zeolite nanorods by a simple hydrothermal treatment of aluminosilicate hydrogels at high concentrations of precursors without the use of structure-directing agents. Transmission electron microscopy (TEM) analysis reveals that cancrinite nanorods, with lengths of 200-800 nm and diameters of 30-50 nm, exhibit a hexagonal morphology and are elongated along the crystallographic c direction. The powder X-ray diffraction (PXRD), Fourier transform infrared (FT-IR) and TEM studies revealed sequential events of hydrogel formation, the formation of aggregated sodalite nuclei, the conversion of sodalite to cancrinite and finally the growth of cancrinite nanorods into discrete particles. The aqueous dispersion of the discrete nanorods displays a good stability between pH 6-12 with the zeta potential no greater than -30 mV. The synthesis is unique in that the initial aggregated nanocrystals do not grow into microsized particles (aggregative growth) but into discrete nanorods. Our findings demonstrate an unconventional possibility that discrete zeolite nanocrystals could be produced from a concentrated hydrogel.

  18. Updating older forest inventory data with a growth model and satellite records to improve the responsiveness and currency of national carbon monitoring

    NASA Astrophysics Data System (ADS)

    Healey, S. P.; Zhao, F. R.; McCarter, J. B.; Frescino, T.; Goeking, S.

    2017-12-01

    International reporting of American forest carbon trends depends upon the Forest Service's nationally consistent network of inventory plots. Plots are measured on a rolling basis over a 5- to 10-year cycle, so estimates related to any variable, including carbon storage, reflect conditions over a 5- to 10-year window. This makes it difficult to identify the carbon impact of discrete events (e.g., a bad fire year; extraction rates related to home-building trends), particularly if the events are recent. We report an approach to make inventory estimates more sensitive to discrete and recent events. We use a growth model (the Forest Vegetation Simulator - FVS) that is maintained by the Forest Service to annually update the tree list for every plot, allowing all plots to contribute to a series of single-year estimates. Satellite imagery from the Landsat platform guides the FVS simulations by providing information about which plots have been disturbed, which are recovering from disturbance, and which are undergoing undisturbed growth. The FVS model is only used to "update" plot tree lists until the next field measurement is made (maximum of 9 years). As a result, predicted changes are usually small and error rates are low. We present a pilot study of this system in Idaho, which has experienced several major fire events in the last decade. Empirical estimates of uncertainty, accounting for both plot sampling error and FVS model error, suggest that this approach greatly increases temporal specificity and sensitivity to discrete events without sacrificing much estimate precision at the level of a US state. This approach has the potential to take better advantage of the Forest Service's rolling plot measurement schedule to report carbon storage in the US, and it offers the basis of a system that might allow near-term, forward-looking analysis of the effects of hypothetical forest disturbance patterns.

  19. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
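For the unconstrained continuous case with p = 2 (a fixed zero-mean second moment), the maximizer is the Gaussian, and the closed-form entropies are elementary. A minimal check comparing three zero-mean densities with the same L2 norm:

```python
import math

sigma = 1.0  # common L2 norm (zero-mean second moment sigma**2)

# Closed-form differential entropies (in nats) at equal variance:
h_gaussian = 0.5 * math.log(2 * math.pi * math.e * sigma**2)
h_laplace = 1.0 + math.log(math.sqrt(2.0) * sigma)   # scale b = sigma/sqrt(2)
h_uniform = math.log(math.sqrt(12.0) * sigma)        # width w = sigma*sqrt(12)

print(round(h_gaussian, 4), round(h_laplace, 4), round(h_uniform, 4))
# -> 1.4189 1.3466 1.2425  (Gaussian is the maximizer)
```

The straight-line relation mentioned in the abstract is visible here as well: each entropy is a constant plus log sigma, so varying the norm shifts all three values in parallel.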

  20. A 1000-year sediment record of recurring hypoxia off the Mississippi River: The potential role of terrestrially-derived organic matter inputs

    USGS Publications Warehouse

    Swarzenski, P.W.; Campbell, P.L.; Osterman, L.E.; Poore, R.Z.

    2008-01-01

    A suite of inorganic and organic geochemical tracers and a low-oxygen tolerant benthic faunal index ('PEB') were measured in a 14C-dated, 2+ m long gravity core collected on the Louisiana shelf adjacent to the Mississippi River delta to study potential millennium-scale low-oxygen events. Periodic down-core excursions in the PEB index throughout the core suggest recurring, natural bottom water low-oxygen events that extend back ~1000 14C years. Select trace element and biomarker distributions in these same sediments were examined as potential tracers of past hypoxic events and to help distinguish between marine versus terrestrial processes involved in organic carbon production. In discrete sediment horizons where the PEB index was elevated, redox-sensitive vanadium concentrations were consistently depleted, excursions in sedimentary δ13C suggest periodic, preferential terrestrial inputs, and the concentrations of two sterol biomarkers (sitosterol and β-stigmasterol) also showed concurrent enrichments. If the PEB index successfully records ~1000 14C year-scale low-oxygen events, then the distribution of these geochemical tracers can be interpreted to corroborate the view that naturally occurring low-oxygen bottom water conditions have existed on the inner Louisiana continental shelf, not only in recent times, but also over at least the last 1000 14C years. These data support the general hypothesis that historic, low-oxygen bottom water conditions on the Louisiana shelf are likely tied to periods of increased fluvial discharge and associated wetland export in the absence of modern river levees. Enhanced river discharge and associated material export would both stimulate enhanced in situ organic carbon production and foster water column stratification. Such periodic elevated river flows during the last millennium can be linked to climate fluctuations and tropical storm activity.

  1. Precipitation pulses and carbon fluxes in semiarid and arid ecosystems.

    PubMed

    Huxman, Travis E; Snyder, Keirith A; Tissue, David; Leffler, A Joshua; Ogle, Kiona; Pockman, William T; Sandquist, Darren R; Potts, Daniel L; Schwinning, Susan

    2004-10-01

    In the arid and semiarid regions of North America, discrete precipitation pulses are important triggers for biological activity. The timing and magnitude of these pulses may differentially affect the activity of plants and microbes, combining to influence the C balance of desert ecosystems. Here, we evaluate how a "pulse" of water influences physiological activity in plants, soils and ecosystems, and how characteristics such as precipitation pulse size and frequency act as important controllers of biological and physical processes in arid land ecosystems. We show that pulse size regulates C balance by determining the temporal duration of activity for different components of the biota. Microbial respiration responds to very small events, but the relationship between pulse size and duration of activity likely saturates at moderate event sizes. Photosynthetic activity of vascular plants generally increases following relatively larger pulses or a series of small pulses. In this case, the duration of physiological activity is an increasing function of pulse size up to events that are infrequent in these hydroclimatological regions. This differential responsiveness of photosynthesis and respiration results in arid ecosystems acting as immediate C sources to the atmosphere following rainfall, with subsequent periods of C accumulation should pulse size be sufficient to initiate vascular plant activity. Using the average pulse size distributions in the North American deserts, a simple modeling exercise shows that net ecosystem exchange of CO2 is sensitive to changes in the event size distribution representative of wet and dry years. Important regulators of the pulse response are the initial soil and canopy conditions and the physical structuring of bare soil and beneath canopy patches on the landscape. Initial conditions influence responses to pulses of varying magnitude, while bare soil/beneath canopy patches interact to introduce nonlinearity in the relationship between pulse size and soil water response. Building on this conceptual framework and developing a greater understanding of the complexities of these eco-hydrologic systems may enhance our ability to describe the ecology of desert ecosystems and their sensitivity to global change.
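The differential pulse response described above can be caricatured in a few lines: respiration reacts to every event and saturates quickly, while vascular-plant photosynthesis switches on only above a pulse-size threshold. The functional forms, thresholds, and event-size distributions below are illustrative assumptions, not the parameterization used by the authors.

```python
import numpy as np

rng = np.random.default_rng(7)

def annual_nee(mean_pulse_mm, n_events=200):
    """Toy net ecosystem exchange for one year (positive = C source).

    Respiration responds to every pulse but saturates quickly, while
    photosynthesis switches on only above a pulse-size threshold.
    All functional forms and constants are illustrative assumptions.
    """
    pulses = rng.exponential(mean_pulse_mm, n_events)
    respiration = np.sum(1.0 - np.exp(-pulses / 3.0))    # saturates by ~3 mm
    photosynthesis = np.sum(np.where(
        pulses > 5.0,                                    # 5 mm activation threshold
        2.0 * (1.0 - np.exp(-(pulses - 5.0) / 10.0)),
        0.0))
    return float(respiration - photosynthesis)

dry_year = annual_nee(mean_pulse_mm=3.0)  # mostly small events
wet_year = annual_nee(mean_pulse_mm=9.0)  # shifted toward large events
print(round(dry_year, 1), round(wet_year, 1))
```

Under these assumptions the small-pulse ("dry") year is a markedly larger net C source than the large-pulse ("wet") year, mirroring the qualitative argument in the abstract.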

  2. Contribution of rainfall, snow and ice melt to the hydrological regime of the Arve upper catchment and to severe flood events

    NASA Astrophysics Data System (ADS)

    Lecourt, Grégoire; Revuelto, Jesús; Morin, Samuel; Zin, Isabella; Lafaysse, Matthieu; Condom, Thomas; Six, Delphine; Vionnet, Vincent; Charrois, Luc; Dumont, Marie; Gottardi, Frédéric; Laarman, Olivier; Coulaud, Catherine; Esteves, Michel; Lebel, Thierry; Vincent, Christian

    2016-04-01

    In Alpine catchments, the hydrological response to meteorological events is highly influenced by the precipitation phase (liquid or solid) and by snow and ice melt. It is thus necessary to simulate accurately the snowpack evolution and its spatial distribution to perform relevant hydrological simulations. This work is focused on the upper Arve Valley (Western Alps). This 205 km2 catchment has large glaciated areas (roughly 32% of the study area) and covers a large range of elevations (1000-4500 m a.s.l.). Snow presence is significant year-round. The area is also characterized by steep terrain and strong vegetation heterogeneity. Modelling hydrological processes in such a complex catchment is therefore challenging. The detailed ISBA land surface model (including the Crocus snowpack scheme) has been applied to the study area using a topography based discretization (classifying terrain by aspect, elevation, slope and presence of glacier). The meteorological forcing used to run the simulations is the reanalysis issued from the SAFRAN model which assimilates meteorological observations from the Meteo-France networks. Conceptual reservoirs with calibrated values of emptying parameters are used to represent the underground water storage. This approach has been tested to simulate the discharge on the Arve catchment and three sub-catchments over 1990-2015. The simulations were evaluated with respect to observed water discharges for several headwaters with varying glaciated areas. They allow to quantify the relative contribution of rainfall, snow and ice melt to the hydrological regime of the basin. Additionally, we present a detailed analysis of several particular flood events. For these events, the ability of the model to correctly represent the catchment behaviour is investigated, looking particularly to the relevance of the simulated snowpack. 
Particularly, its spatial distribution is evaluated using MODIS snow cover maps, punctual snowpack observations and summer glacier mass balance estimations.

  3. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory

    PubMed Central

    Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank

    2016-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
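The discrete-capacity account is commonly formalized as a mixture of a circular memory distribution and uniform guessing. A minimal sketch of such an error density follows; this is the standard mixture form with illustrative parameters, not the authors' exact fitting code.

```python
import numpy as np

def mixture_pdf(theta, guess_rate, kappa):
    """Error density under a discrete-capacity (slots + guessing) account:
    a von Mises 'in memory' component mixed with uniform random guesses.
    theta is the report error in radians on [-pi, pi]."""
    vonmises = np.exp(kappa * np.cos(theta)) / (2 * np.pi * np.i0(kappa))
    uniform = 1.0 / (2 * np.pi)
    return (1 - guess_rate) * vonmises + guess_rate * uniform

# Sanity check: the density integrates to 1 over the circle (trapezoid rule).
theta = np.linspace(-np.pi, np.pi, 10001)
f = mixture_pdf(theta, guess_rate=0.3, kappa=8.0)
area = float(np.sum(f[:-1] + f[1:]) * 0.5 * (theta[1] - theta[0]))
print(round(area, 4))
```

In actual fits, the guess rate rises once set size exceeds the putative capacity, while the resource alternative instead lets kappa vary stochastically across items and trials.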

  4. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory.

    PubMed

    Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank

    2017-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit.

  5. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  6. Discrete event simulation for exploring strategies: an urban water management case.

    PubMed

    Huang, Dong-Bin; Scholz, Roland W; Gujer, Willi; Chitwood, Derek E; Loukopoulos, Peter; Schertenleib, Roland; Siegrist, Hansruedi

    2007-02-01

    This paper presents a model structure aimed at offering an overview of the various elements of a strategy and exploring their multidimensional effects through time in an efficient way. It treats a strategy as a set of discrete events planned to achieve a certain strategic goal and develops a new form of causal networks as an interfacing component between decision makers and environmental models, e.g., life cycle inventory and material flow models. The causal network receives a strategic plan as input in a discrete manner and then outputs the updated parameter sets to the subsequent environmental models. Accordingly, the potential dynamic evolution of environmental systems caused by various strategies can be simulated stepwise. It provides a way to incorporate discontinuous change in models for environmental strategy analysis, and enhances the interpretability and extensibility of a complex model through its cellular constructs. It is exemplified using an urban water management case in Kunming, a major city in Southwest China. By utilizing the presented method, the case study modeled the cross-scale interdependencies of the urban drainage system and regional water balance systems, and evaluated the effectiveness of various strategies for improving the situation of Dianchi Lake.
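A "set of discrete events planned to achieve a certain strategic goal" maps naturally onto a time-ordered event queue. A minimal, generic discrete event loop (illustrative only, not the authors' model) looks like:

```python
import heapq

def run_des(events, horizon):
    """Minimal discrete event simulation loop: a time-ordered event queue
    drives stepwise state updates (here, a single counter standing in for
    a parameter set updated by each planned strategy event).
    `events` is a list of (time, delta) pairs; purely illustrative."""
    queue = list(events)
    heapq.heapify(queue)             # order events by their scheduled time
    state, trace = 0, []
    while queue and queue[0][0] <= horizon:
        time, delta = heapq.heappop(queue)
        state += delta               # apply the event's effect on the state
        trace.append((time, state))
    return trace

trace = run_des([(5, 2), (1, 1), (9, -1)], horizon=10)
print(trace)  # -> [(1, 1), (5, 3), (9, 2)]
```

In a causal-network setting as described above, `state` would be the parameter set handed to the downstream environmental models after each event fires.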

  7. Discrete-event system simulation on small and medium enterprises productivity improvement

    NASA Astrophysics Data System (ADS)

    Sulistio, J.; Hidayah, N. A.

    2017-12-01

    Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of their production process in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can solve complex problems due to the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages of two trials, the variances of two trials, and a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor increased by up to 50% by adding capacity to the dyeing and drying machines.

  8. Patient flow improvement for an ophthalmic specialist outpatient clinic with aid of discrete event simulation and design of experiment.

    PubMed

    Pan, Chong; Zhang, Dali; Kon, Audrey Wan Mei; Wai, Charity Sue Lea; Ang, Woo Boon

    2015-06-01

    Continuous improvement in process efficiency for specialist outpatient clinic (SOC) systems is increasingly being demanded due to the growth of the patient population in Singapore. In this paper, we propose a discrete event simulation (DES) model to represent the patient and information flow in an ophthalmic SOC system in the Singapore National Eye Centre (SNEC). Different improvement strategies to reduce the turnaround time for patients in the SOC were proposed and evaluated with the aid of the DES model and the Design of Experiment (DOE). Two strategies for better patient appointment scheduling and one strategy for dilation-free examination are estimated to have a significant impact on turnaround time for patients. One of the improvement strategies has been implemented in the actual SOC system in the SNEC with promising improvement reported.

  9. Discrete event simulation as a tool in optimization of a professional complex adaptive system.

    PubMed

    Nielsen, Anders Lassen; Hilwig, Helmer; Kissoon, Niranjan; Teelucksingh, Surujpal

    2008-01-01

    Similar urgent needs for improvement of health care systems exist in the developed and developing world. The culture and the organization of an emergency department in developing countries can best be described as a professional complex adaptive system, where each agent (employee) is ignorant of the behavior of the system as a whole; no one understands the entire system. Each agent's action is based on the state of the system at the moment (i.e. lack of medicine, unavailable laboratory investigations, lack of beds and lack of staff in certain functions). An important question is how one can improve the emergency service within the given constraints. The use of simulation is one new approach to studying issues amenable to improvement. Discrete event simulation was used to simulate part of the patient flow in an emergency department. A simple model was built using a prototyping approach. The simulation showed that a minor rotation among the nurses could reduce the mean number of visitors that had to be referred to alternative flows within the hospital from 87 to 37 on a daily basis, with a mean staff utilization between 95.8% (the nurses) and 87.4% (the doctors). We conclude that, even when faced with resource constraints and a lack of accessible data, discrete event simulation is a tool that can be used successfully to study the consequences of changes in very complex and self-organizing professional complex adaptive systems.

  10. Influence of macular pigment optical density spatial distribution on intraocular scatter.

    PubMed

    Putnam, Christopher M; Bland, Pauline J; Bassi, Carl J

    This study evaluated the summed measures of macular pigment optical density (MPOD) spatial distribution and their effects on intraocular scatter using a commercially available device (C-Quant, Oculus, USA). A customized heterochromatic flicker photometer (cHFP) device was used to measure MPOD spatial distribution across the central 16° using a 1° stimulus. MPOD was calculated as a discrete measure and as summed measures across the central 1°, 3.3°, 10° and 16° diameters. Intraocular scatter was determined as the mean of 5 trials in which reliability and repeatability measures were met using the C-Quant. MPOD spatial distribution maps were constructed and the effects of both discrete and summed values on intraocular scatter were examined. Spatial mapping identified mean values for discrete MPOD [0.32 (s.d.=0.08)], MPOD summed across the central 1° [0.37 (s.d.=0.11)], MPOD summed across the central 3.3° [0.85 (s.d.=0.20)], MPOD summed across the central 10° [1.60 (s.d.=0.35)] and MPOD summed across the central 16° [1.78 (s.d.=0.39)]. Mean intraocular scatter was 0.83 (s.d.=0.16) log units. While there were consistent trends toward an inverse relationship between MPOD and scatter, these relationships were not statistically significant. Correlations between the highest and lowest quartiles of MPOD within the central 1° were near significance. While there was an overall trend of decreased intraocular forward scatter with increased MPOD, consistent with selective short-wavelength visible light attenuation, neither discrete nor summed values of MPOD significantly influenced intraocular scatter as measured by the C-Quant device.

  11. Distinguishing between Binomial, Hypergeometric and Negative Binomial Distributions

    ERIC Educational Resources Information Center

    Wroughton, Jacqueline; Cole, Tarah

    2013-01-01

    Recognizing the differences between three discrete distributions (Binomial, Hypergeometric and Negative Binomial) can be challenging for students. We present an activity designed to help students differentiate among these distributions. In addition, we present assessment results in the form of pre- and post-tests that were designed to assess the…

  12. Fast Multilevel Solvers for a Class of Discrete Fourth Order Parabolic Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Bin; Chen, Luoping; Hu, Xiaozhe

    2016-03-05

    In this paper, we study fast iterative solvers for the solution of fourth order parabolic equations discretized by mixed finite element methods. We propose to use the consistent mass matrix in the discretization and use the lumped mass matrix to construct efficient preconditioners. We provide eigenvalue analysis for the preconditioned system and estimate the convergence rate of the preconditioned GMRES method. Furthermore, we show that these preconditioners only need to be solved inexactly by optimal multigrid algorithms. Our numerical examples indicate that the proposed preconditioners are very efficient and robust with respect to both discretization parameters and diffusion coefficients. We also investigate the performance of multigrid algorithms with either collective smoothers or distributive smoothers when solving the preconditioner systems.

  13. Discrete-time Markovian stochastic Petri nets

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco

    1995-01-01

    We revisit and extend the original definition of discrete-time stochastic Petri nets, by allowing the firing times to have a 'defective discrete phase distribution'. We show that this formalism still corresponds to an underlying discrete-time Markov chain. The structure of the state for this process describes both the marking of the Petri net and the phase of the firing time for each transition, resulting in a large state space. We then modify the well-known power method to perform a transient analysis even when the state space is infinite, subject to the condition that only a finite number of states can be reached in a finite amount of time. Since the memory requirements might still be excessive, we suggest a bounding technique based on truncation.
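For a finite (or truncated) state space, the transient analysis described here reduces to repeated vector-matrix products. A minimal sketch with an illustrative 3-state chain (not the paper's model):

```python
import numpy as np

# Transient analysis of a discrete-time Markov chain by the power method:
# propagate the state distribution one step at a time, pi_{k+1} = pi_k @ P.
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])   # state 2 is absorbing
pi = np.array([1.0, 0.0, 0.0])    # start in state 0

for _ in range(2):                # distribution after two steps
    pi = pi @ P

print(pi)                         # ~[0.81, 0.17, 0.02]
```

For an infinite state space, the same loop applies after restricting attention to the finitely many states reachable by step k, optionally dropping states whose probability falls below a bound, in the spirit of the truncation technique the abstract suggests.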

  14. Discrete control of linear distributed systems with application to the deformable primary mirror of a large orbiting telescope. Ph.D. Thesis - Rhode Island Univ.

    NASA Technical Reports Server (NTRS)

    Creedon, J. F.

    1970-01-01

    The results are presented of a detailed study of the discrete control of linear distributed systems with specific application to the design of a practical controller for a plant representative of a telescope primary mirror for an orbiting astronomical observatory. The problem of controlling the distributed plant is treated by employing modal techniques to represent variations in the optical figure. Distortion of the mirror surface, which arises primarily from thermal gradients, is countered by actuators working against a backing structure to apply a corrective force distribution to the controlled surface. Each displacement actuator is in series with a spring attached to the mirror by means of a pad intentionally introduced to restrict the excitation of high-order modes. Control is exerted over a finite number of the most significant modes.

  15. A Hermite-based lattice Boltzmann model with artificial viscosity for compressible viscous flows

    NASA Astrophysics Data System (ADS)

    Qiu, Ruofan; Chen, Rongqian; Zhu, Chenxiang; You, Yancheng

    2018-05-01

    A lattice Boltzmann model on Hermite basis for compressible viscous flows is presented in this paper. The model is developed in the framework of double-distribution-function approach, which has adjustable specific-heat ratio and Prandtl number. It contains a density distribution function for the flow field and a total energy distribution function for the temperature field. The equilibrium distribution function is determined by Hermite expansion, and the D3Q27 and D3Q39 three-dimensional (3D) discrete velocity models are used, in which the discrete velocity model can be replaced easily. Moreover, an artificial viscosity is introduced to enhance the model for capturing shock waves. The model is tested through several cases of compressible flows, including 3D supersonic viscous flows with boundary layer. The effect of artificial viscosity is estimated. Besides, D3Q27 and D3Q39 models are further compared in the present platform.

  16. Discrete-to-continuous transition in quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Rządkowski, Wojciech; Demkowicz-Dobrzański, Rafał

    2017-09-01

    We analyze the problem of quantum phase estimation in which the set of allowed phases forms a discrete N-element subset of the whole [0, 2π] interval, φ_n = 2πn/N, n = 0, …, N − 1, and study the discrete-to-continuous transition N → ∞ for various cost functions as well as the mutual information. We also analyze the relation between the problems of phase discrimination and estimation by considering a step cost function of a given width σ around the true estimated value. We show that in general a direct application of the theory of covariant measurements for a discrete subgroup of the U(1) group leads to suboptimal strategies, due to an implicit requirement of estimating only the phases that appear in the prior distribution. We develop the theory of subcovariant measurements to remedy this situation and demonstrate truly optimal estimation strategies when performing a transition from discrete to continuous phase estimation.
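    The discrete-to-continuous transition can be caricatured with a purely classical toy model (not the quantum covariant-measurement analysis of the paper): a uniformly random grid phase φ_n = 2πn/N is read out through Gaussian phase noise and rounded back to the grid, and the mutual information between input and output is estimated from a Monte Carlo histogram. For noise much finer than the grid, the information approaches log₂ N; for fixed noise it saturates as N → ∞. All parameter values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mutual_info_bits(N, sigma, samples=200_000):
        # Uniform discrete phase phi_n = 2*pi*n/N, Gaussian readout noise,
        # rounded back to the nearest grid point (mod 2*pi).
        n = rng.integers(0, N, size=samples)
        phi = 2*np.pi*n/N + rng.normal(0.0, sigma, size=samples)
        m = np.rint(phi * N / (2*np.pi)).astype(int) % N
        joint = np.zeros((N, N))
        np.add.at(joint, (n, m), 1.0)
        p = joint / samples
        pn = p.sum(axis=1, keepdims=True)   # marginal of the true phase
        pm = p.sum(axis=0, keepdims=True)   # marginal of the estimate
        mask = p > 0
        return float(np.sum(p[mask] * np.log2(p[mask] / (pn * pm)[mask])))

    mi_fine = mutual_info_bits(4, 0.01)     # noise far below the grid spacing
    mi_coarse = mutual_info_bits(64, 0.5)   # noise-dominated regime
    ```

    With negligible noise the channel is essentially noiseless and the information equals log₂ N bits; in the noise-dominated regime it stays well below log₂ N no matter how fine the grid becomes.
    
    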

  17. RINGMesh: A programming library for developing mesh-based geomodeling applications

    NASA Astrophysics Data System (ADS)

    Pellerin, Jeanne; Botella, Arnaud; Bonneau, François; Mazuyer, Antoine; Chauvin, Benjamin; Lévy, Bruno; Caumon, Guillaume

    2017-07-01

    RINGMesh is a C++ open-source programming library for manipulating discretized geological models. It is designed to ease the development of applications and workflows that use discretized 3D models; it is neither a geomodeler nor a meshing software. RINGMesh implements functionalities to read discretized surface-based or volumetric structural models and to check their validity. The models can then be exported in various file formats. RINGMesh provides data structures to represent geological structural models, defined by their discretized boundary surfaces and/or by discretized volumes. A programming interface allows new geomodeling methods to be developed and external software to be plugged in. The goal of RINGMesh is to help researchers focus on the implementation of their specific method rather than on tedious tasks common to many applications. The documented code is open-source and distributed under the modified BSD license. It is available at https://www.ring-team.org/index.php/software/ringmesh.

  18. Modeling the spatially dynamic distribution of humans in the Oregon (USA) coast range.

    Treesearch

    Jeffrey D. Kline; David L. Azuma; Alissa Moses

    2003-01-01

    A common approach to land use change analyses in multidisciplinary landscape-level studies is to delineate discrete forest and non-forest or urban and non-urban land use categories to serve as inputs into sets of integrated sub-models describing socioeconomic and ecological processes. Such discrete land use categories, however, may be inappropriate when the...

  19. On the putative essential discreteness of q-generalized entropies

    NASA Astrophysics Data System (ADS)

    Plastino, A.; Rocca, M. C.

    2017-12-01

    It has been argued by Abe (2010), in a paper entitled "Essential discreteness in generalized thermostatistics with non-logarithmic entropy," that "continuous Hamiltonian systems with long-range interactions and the so-called q-Gaussian momentum distributions are seen to be outside the scope of non-extensive statistical mechanics." The arguments are clever and appealing. We show here, however, that some mathematical subtleties render them unconvincing.
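    For context, the q-Gaussian momentum distributions at issue are built from the Tsallis q-exponential, which reduces to the ordinary exponential in the limit q → 1. A minimal numerical sketch, using the standard textbook definition and cut-off convention rather than anything specific to either paper:

    ```python
    import numpy as np

    def q_exp(x, q):
        # Tsallis q-exponential: [1 + (1-q)x]_+^(1/(1-q)); exp(x) as q -> 1
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if np.isclose(q, 1.0):
            return np.exp(x)
        base = 1.0 + (1.0 - q) * x
        out = np.zeros_like(base)        # cut-off: zero where the base <= 0
        pos = base > 0
        out[pos] = base[pos] ** (1.0 / (1.0 - q))
        return out

    def q_gaussian(v, q, beta=1.0):
        # Unnormalized q-Gaussian momentum distribution
        return q_exp(-beta * v**2, q)
    ```

    For q > 1 the q-Gaussian has power-law tails that are much heavier than the Gaussian's, which is the regime relevant to long-range interacting systems.
    
    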

  20. SEGUE 1—A COMPRESSED STAR FORMATION HISTORY BEFORE REIONIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webster, David; Bland-Hawthorn, Joss; Frebel, Anna, E-mail: d.webster@physics.usyd.edu.au

    Segue 1 is the current best candidate for a “first galaxy,” a system that experienced only a single, short burst of star formation and has since remained unchanged. Here we present possible star formation scenarios that can explain Segue 1’s unique metallicity distribution. While the majority of stars in all other ultra-faint dwarfs are within 0.5 dex of the mean [Fe/H] for the galaxy, five of the seven stars in Segue 1 have a spread of Δ[Fe/H] > 0.8 dex. We show that this distribution of metallicities cannot be explained by a gradual buildup of stars, but instead requires clustered star formation. Chemical tagging allows the separate unresolved delta functions in abundance space to be associated with discrete events in space and time. This provides an opportunity to put the enrichment events into a time sequence and unravel the history of the system. We investigate two possible scenarios for the star formation history of Segue 1 using Fyris Alpha simulations of gas in a 10^7 M_⊙ dark matter halo. The lack of stars with intermediate metallicities −3 < [Fe/H] < −2 can be explained either by a pause in star formation caused by supernova feedback or by the spread of metallicities resulting from one or two supernovae in a low-mass dark matter halo. Either possibility can reproduce the metallicity distribution function (MDF) as well as the other observed elemental abundances. The unusual MDF and the low luminosity of Segue 1 can be explained by it being a first galaxy that originated with M_vir ∼ 10^7 M_⊙ at z ∼ 10.
